Fable Legends Review

Four co-op players making their way through a combat/puzzle arena while a Mastermind player spawns monsters to impede player progress? Digging through the infected viscera of Resident Evil Resistance’s confined hallways reveals the familiar face of a forgotten zombified corpse.

Fable Legends, an upcoming four-player co-op hack 'n slash game, might just scratch your itch. In it, you can play either as one of four would-be heroes, or as the villain whose job is to stop them. Fable Legends will take place in an 'online world' and feature integration with SmartGlass.

“Holy sh.t, is that…?” For those who were disappointed by Microsoft’s cancellation of the now-defunct Lionhead Studios’ final project, it’s essentially alive and well here. While the title has obviously shed the skin of fantastical Albion in favor of a drabber, grittier aesthetic of infections and cartilage, there’s no shortage of the cancelled game’s design decisions to be discovered here. Its inclusion as a pack-in multiplayer mode for the underwhelming Resident Evil 3 speaks volumes – but goddamn is it unique.

However, one difference is obvious: Resident Evil Resistance is alive – as ironic as that is – and Fable Legends is way, waaay dead. Nevertheless, the 4v1 asymmetric multiplayer style has been resuscitated and, ya know what? It’s pretty darn fun.

From the decidedly barebones main menu of Resident Evil Resistance, you can choose what you want to play as: Survivors or Masterminds. (You can also choose quick play and let the game decide for you.) Survivors take the role of the traditional Resident Evil player, scouring environments for keys and items to unlock doors, managing their inventory and taking out zombies. Masterminds, on the other hand, spy on players through various cameras, spawn enemies, lay traps and do anything possible to impede player progress. With an energy meter dictating which enemies, traps and other nasties you can spawn, you have to think about what you deploy and when.

Trapping a survivor in a leghold trap and then shooting them with a rifle is a pretty solid way of weakening them.

Of course, many players will find it more fun to revel in the pure chaos you’re able to create as the tyrannical Mastermind. Don’t worry, we did too. You’re forced to always plan ahead: maybe you hide a sneaky landmine around the corner from a key item, or spawn a tough zombie just as a survivor grabs it.

The absolute best part of playing as the Mastermind is the awesome ability to take control of spawned monsters, placing yourself right in the action. While you can’t spawn a surrounding army to help you in your quest to turn surviving humans into mulch, the novel concept of just being a Resident Evil monster almost makes you forget that.

Movesets are limited and you can’t control every monster in your roster – let me be a zombie dog, dammit! – but it’s a lot of fun to create unbridled chaos as the monsters you spawn.

Each of the game’s four Masterminds also has access to a super-powerful Resident Evil boss that they can spawn a few times per match. From William Birkin and Mr. X to the man-eating plant Yateveo, fans of the core series should be excited to finally take control of classic monsters. We just wish there were more of them.

William Birkin is a big, strong, monstrous boy. He has a big pipe and he isn’t afraid to use it.

Playing as the four-person team of Survivors is, while not any less fun, considerably more stressful than the gleeful devilry of the Mastermind. You’re supposed to work together – as if that’ll ever happen in random lobbies – to escape three sections of the Mastermind’s test chamber before a time limit runs out. Dying or getting bitten by an enemy eats into the time limit, while killing enemies or reviving teammates extends it. As rounds get tenser and more dangerous, a sense of camaraderie often arises, even in lobbies where players were complete d.cks just minutes before!

Resident Evil Resistance is a game where you can’t afford to let your teammates die, even though some awfully unbalanced monsters can take them out with a single prolonged, powerful attack.

One of the more exciting features built into Windows 10 is DirectX 12, a new programming interface that promises to modernize the way games talk to graphics chips.

Prior versions of DirectX – and specifically its graphics-focused component, known as Direct3D – are used by the vast majority of today’s PC games, but they’re not necessarily a good fit for how modern GPUs really work. These older APIs tend to impose more overhead than necessary on the graphics driver and CPU, and they’re not always terribly effective at keeping the GPU fed with work. Both of these problems tend to sap performance. Thus, DirectX has often been cited as the culprit when console games make a poor transition to the PC platform in spite of the PC’s massive advantage in raw power.

Although, honestly, you can’t blame an API for something like the Arkham Knight port. Console ports have other sorts of problems, too.

Anyhow, by offering game developers more direct, lower-level access to the graphics processor, DirectX 12 promises to unlock new levels of performance in PC gaming.

This new API also exposes a number of novel hardware features not accessible in older versions of Direct3D, opening up the possibility of new techniques that provide richer visuals than previously feasible in real-time rendering. So yeah, there’s plenty to be excited about.

DirectX 12 is Microsoft’s baby, and it’s not just a PC standard. Developers will also use it on the Xbox One, giving them a unified means of addressing two major gaming platforms at once. That’s why there’s perhaps no better showcase for DX12 than Fable Legends, the upcoming game from Lionhead Studios.

Game genres have gotten wonderfully and joyously scrambled in recent years, but I think I’d describe Legends as a free-to-play online RPG with MOBA and FPS elements. Stick that in yer pipe and smoke it. Legends will be exclusive to the Xbox One and Windows 10, and it will take advantage of DX12 on the PC as long as a DirectX 12-capable graphics card is present.

In order to demonstrate the potential of DX12, Microsoft has cooked up a benchmark based on a pre-release version of Fable Legends. We’ve taken it for a spin on a small armada of the latest graphics cards, and we have some interesting results to share.

This Fable Legends benchmark looks absolutely gorgeous, thanks in part to the DirectX 12 API and the Unreal Engine 4 game engine.

The artwork is stylized in a not-exactly-photorealistic fashion, but the demo features a tremendously complex set of environments. The video above utterly fails to do it justice, thanks both to YouTube’s compression and a dreaded 30-FPS cap on my video capture tool. The animation looks much smoother coming directly from a decent GPU.

To my eye, the Legends benchmark represents a new high-water mark in PC game visuals for this reason: a near-complete absence of the shimmer, crawling, and sparkle caused by high-frequency noise – both on object edges and inside of objects.

(Again, you’d probably have to see it in person to appreciate it.) This sheer solidity makes Legends feel more like an offline-rendered scene than a real-time PC game. As I understand it, much of the credit for this effect belongs to the temporal anti-aliasing built into Unreal Engine 4.

This AA method evidently offers quality similar to full-on supersampling with less of a performance hit. Here’s hoping more games make use of it in the future.

DX12 is a relatively new creation, and Fable Legends has clearly been in development for quite some time. The final game will work with DirectX 11 as well as DX12, and it was almost surely developed with the older API and its requirements in mind.

The question, then, is: how exactly does Legends take advantage of DirectX 12? Here’s Microsoft’s statement on the matter:

“Lionhead Studios has made several additions to the engine to implement advanced visual effects, and has made use of several new DirectX 12 features, such as Async Compute, manual Resource Barrier tracking, and explicit memory management to help the game achieve the best possible performance.”

That’s not a huge number of features to use, given everything DX12 offers. Still, the memory management and resource tracking capabilities get at the heart of what this lower-level API is supposed to offer. The game gets to manage video memory itself, rather than relying on the GPU driver to shuffle resources around.
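To make “manual Resource Barrier tracking” and “explicit memory management” concrete, here’s a minimal, illustrative C++ sketch of what those look like at the D3D12 API level. This is not Lionhead’s code, just the general pattern; names like texture and frameHeap are placeholders:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Under D3D11 the driver tracks resource hazards for you; under D3D12 the
// application records an explicit barrier whenever a resource changes usage.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}

// "Explicit memory management": instead of letting the driver decide where
// resources live, the game can reserve a heap of video memory and place
// resources into it at offsets it chooses.
ComPtr<ID3D12Resource> CreatePlacedTexture(ID3D12Device* device,
                                           ID3D12Heap* frameHeap,
                                           UINT64 heapOffset,
                                           const D3D12_RESOURCE_DESC& desc)
{
    ComPtr<ID3D12Resource> resource;
    device->CreatePlacedResource(frameHeap, heapOffset, &desc,
                                 D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                 IID_PPV_ARGS(&resource));
    return resource;
}
```

The upside is less driver guesswork; the downside, as the 2GB-card results later in this article hint, is that getting placement and residency wrong becomes the game’s problem rather than the driver’s.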

Asynchronous compute shaders, meanwhile, have been getting a lot of play in certain pockets of the ’net since the first DX12 benchmark, built around Oxide Games’ Ashes of the Singularity, was released. This feature allows the GPU to execute multiple kernels (or basic programs) of different types simultaneously, and it could enable more complex effects to be created and included in each frame.

Early tests have shown that the scheduling hardware in AMD’s graphics chips tends to handle async compute much more gracefully than Nvidia’s chips do. That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware – it’s just not enabled yet. We’ll have to see how well async compute works on newer GeForces once Nvidia turns on its hardware support.
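Mechanically, async compute in D3D12 means the application submits work on more than one hardware queue. Here’s a hedged sketch of that setup – not Fable Legends’ actual code; fence, computeList, and drawList stand in for objects the game would have created elsewhere:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a compute queue alongside the usual direct (graphics) queue. Work
// submitted here may overlap with graphics work if the GPU supports it.
ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Per frame: kick off compute work (say, a lighting or particle pass) on the
// compute queue, then have the graphics queue wait on a fence before it
// consumes the results.
void SubmitFrame(ID3D12CommandQueue* computeQueue,
                 ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandList* computeList,
                 ID3D12CommandList* drawList,
                 ID3D12Fence* fence, UINT64 fenceValue)
{
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence, fenceValue);   // mark compute work as done

    graphicsQueue->Wait(fence, fenceValue);    // GPU-side wait, no CPU stall
    graphicsQueue->ExecuteCommandLists(1, &drawList);
}
```

Whether the two queues actually overlap on the GPU is up to the hardware and driver – which is exactly the AMD-versus-Maxwell question above.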

For now, well, I suppose we’re about to see how the latest graphics cards handle Fable Legends. Let’s take a look.

Our testing methods

The graphics cards we used for testing are listed below. Please note that many of them are not stock-clocked reference cards but actual consumer products with faster clock speeds. For example, the GeForce GTX 980 Ti we tested is the Asus Strix model that came out on top in our 980 Ti roundup. Similarly, the Radeon R9 Fury and 390X cards are also Asus Strix cards with tweaked clock frequencies. We prefer to test with consumer products when possible rather than reference parts, since those are what folks are more likely to buy and use.

The Asus Strix Radeon R9 390X

As ever, we did our best to deliver clean benchmark numbers.

Our test systems were configured like so.

[Table: test configurations – driver revision, GPU base and boost clocks, memory clock, and memory size for each card. The Radeons tested were the Sapphire Nitro R7 370, MSI Radeon R9 285, XFX Radeon R9 390, Asus Strix R9 390X, Radeon R9 Nano, Asus Strix R9 Fury, and Radeon R9 Fury X on Catalyst beta drivers; the GeForces were the Gigabyte GTX 950, MSI GeForce GTX 960, MSI GeForce GTX 970, Gigabyte GTX 980, and Asus Strix GTX 980 Ti on GeForce drivers.]

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Fable Legends performance at 1920×1080

The Legends benchmark is simple enough to use.

You can run a test with one of three pre-baked options. The first option uses the game’s “ultra” quality settings at 1080p. The second uses “ultra” at 3840×2160. The third choice is meant for integrated graphics solutions; it drops down to the “low” quality settings at 1280×720.

The demo spits out tons of data in a big CSV file, and blessedly, the time to render each frame is included. Naturally, I’ve run the test on a bunch of cards and have provided the frame time data below. You can click through the buttons to see a plot taken from one of the three test instances we ran for each card. We’ll start with the ultra-quality results at 1920×1080.

Browse through all of the plots above and you’ll notice something unusual: all of the cards produce the same number of frames, regardless of how fast or slow they are.

That’s not what you’d generally get out of a game, but the Legends benchmark works like an old Quake timedemo. It produces the same set of frames on each card, and the run time varies by performance. That means the benchmark is pretty much completely deterministic, which is nice.

The next thing you’ll notice is that some of the cards have quite a few more big frame-time spikes than others. The worst offenders are the GeForce GTX 950 and 960 and the Radeon R9 285. All three of those cards have something in common: only 2GB of video memory onboard.

Although by most measures the Radeon R7 370 has the slowest GPU in this test, its 4GB of memory allows it to avoid some of those spikes.

The GeForce GTX 980 Ti is far and away the fastest card here in terms of FPS averages. The 980 Ti’s lead is a little larger than we’ve seen in the past, probably due to the fact that we’re testing with an Asus Strix card that’s quite a bit faster than the reference design. We reviewed a bunch of 980 Ti cards, and the Strix was our top pick.

The 980 Ti comes back to the pack a little with our 99th-percentile frame time metric, which can be something of an equalizer.

The GTX 980 is fast generally, but it does struggle with a portion of the frames it renders, like all of the cards do.

The frame time curves illustrate what happens with the most difficult frames to render. All of the highest-end Radeons and GeForces look pretty strong here. Each of them struggles slightly with the most demanding one to two percent of frames, but the tail of each curve barely rises above 33 milliseconds – which translates to 30 FPS. Not bad.

These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid – or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate.

Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame, and 8.3 ms is a relatively new addition that equates to 120Hz, for those with fast gaming displays.

As you can see, only the four slowest cards here spend any time beyond the 50-ms threshold, which means the rest of the GPUs are doing a pretty good job at pumping out some prime-quality eye candy without many slowdowns.
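Since the benchmark hands you every frame’s render time in that CSV, the 99th-percentile and “time spent beyond X” metrics above are easy to reproduce yourself. Here’s a minimal C++ sketch, assuming you’ve already parsed the frame times (in milliseconds) into a vector – the real CSV has more columns than this:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// 99th-percentile frame time: 99% of frames rendered at least this quickly.
double Percentile99(std::vector<double> frameTimesMs)
{
    if (frameTimesMs.empty()) return 0.0;
    std::sort(frameTimesMs.begin(), frameTimesMs.end());
    size_t idx = static_cast<size_t>(frameTimesMs.size() * 0.99);
    if (idx >= frameTimesMs.size()) idx = frameTimesMs.size() - 1;
    return frameTimesMs[idx];
}

// "Time spent beyond X": total milliseconds accumulated past a threshold,
// e.g. 50 ms (20 FPS), 33.3 ms (30 FPS), 16.7 ms (60 FPS), 8.3 ms (120 Hz).
double TimeBeyondMs(const std::vector<double>& frameTimesMs, double thresholdMs)
{
    double total = 0.0;
    for (double t : frameTimesMs)
        if (t > thresholdMs)
            total += t - thresholdMs;   // only the excess counts as "badness"
    return total;
}

int main()
{
    // Placeholder data; in practice these come from the benchmark's CSV.
    std::vector<double> frameTimes = { 14.2, 15.1, 52.0, 16.3, 35.7, 13.9 };
    std::printf("99th percentile: %.1f ms\n", Percentile99(frameTimes));
    std::printf("Time beyond 33.3 ms: %.1f ms\n", TimeBeyondMs(frameTimes, 33.3));
}
```

The average FPS numbers fall out of the same data: because every card renders the same fixed set of frames, average FPS is just the frame count divided by the sum of the frame times.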

Click to the 33-ms threshold, and you’ll see a similar picture, too. Unfortunately, a perfect 60 FPS is elusive for even the top GPUs, as the 16.7-ms results illustrate.

Now that we have all of the data before us, I have a couple of impressions to offer. First, although the GeForce cards look solid generally, the Hawaii-based Radeons from AMD perform especially well here. The R9 390X outdoes the pricier GeForce GTX 980, and the Radeon R9 390 beats out the GTX 970.

There is a big caveat to remember, though.

In power consumption tests, our GPU test rig drew considerably more power when equipped with an R9 390X than the 282W it drew with a GTX 980. The delta between the R9 390 and GTX 970 was similar, at 121W.

That said, the R9 390 and 390X look pretty darned good next to the R9 Fury and Fury X, too. The two Fury cards are only marginally quicker than their Hawaii-based siblings. Perhaps the picture will change at a higher resolution?

Fable Legends performance at 3840×2160

I’ve left the Radeon R7 370 and GeForce GTX 950 out of my 4K tests, and I’ve snuck in another contender.

I probably should have left out the GeForce GTX 960 and Radeon R9 285, which have no business attempting this feat in 4K.

We’ve sliced and diced these frame-time distributions in multiple ways, but the story these results tell is the same throughout: the GeForce GTX 980 Ti is easily the best performer here, and only it is fast enough to achieve nearly a steady 30 frames per second. The 980 Ti’s 99th-percentile result is 33.8 ms, just a tick above the 33.3-ms threshold that equates to 30 FPS.

The Fury X isn’t too far behind, and it leads a pack of Radeons that all perform pretty similarly. Once again, there’s barely any daylight between the Fury and the 390X. The Fiji GPU used in the Fury and Fury X is substantially faster than the Hawaii GPU driving the 390 and 390X in terms of texturing and shader processing power, but it’s no quicker in terms of geometry throughput and pixel-pushing power via the ROP units. One or both of those two constraints could be coming into play here.

CPU core and thread scaling

I’m afraid I haven’t had time to pit the various integrated graphics solutions against one another in this Fable Legends test, but I was able to take a quick look at how the two fastest graphics chips scale up when paired with different CPU configs.

Since the new graphics APIs like DirectX 12 are largely about reducing CPU overhead, that seemed like the thing to do. For this little science project, I used the fancy firmware on the Gigabyte X99 boards in my test rigs to enable different numbers of CPU cores on their Core i7-5960X processors. I also selectively disabled Hyper-Threading. The end result was a series of tests ranging from a single-core CPU config with a single thread (1C/1T) through to the full-on 5960X with eight cores and 16 threads (8C/16T).

Interesting. The sweet spot with the Radeon looks to be the four-core, four-thread config, while the GeForce prefers the 6C/6T config. Perhaps Nvidia’s drivers use more threads internally. The performance with both cards suffers a little with eight cores enabled, and it drops even more when Hyper-Threading is turned on.

Why? Part of the answer is probably pretty straightforward: this application doesn’t appear to make very good use of more than four to six threads.

Given that fact, the 5960X probably benefits from the power savings of having additional cores gated off. If turning off those cores saves power, then the CPU can probably spend more time running at higher clock speeds via Turbo Boost as a result.

I’m not sure what to make of the slowdown with Hyper-Threading enabled. Simultaneous multi-threading on a CPU core does require some resource sharing, which can dampen per-thread performance. However, if the operating system scheduler is doing its job well, then multiple threads should only be scheduled on a CPU core when other cores are already occupied – at least, I expect that’s how it should work on a desktop CPU. Hmmm.

The curves flatten out a bit when we raise the resolution and image quality settings because GPU speed constraints come into play, but the trends don’t change much. In this case, the Fury X doesn’t benefit from more than two CPU cores. Perhaps we can examine CPU scaling with a lower-end CPU at some point.

So now what?

We’ve now taken a look at one more piece of the DirectX 12 puzzle, and frankly, the performance results don’t look a ton different than what we’ve seen in current games. The GeForce cards perform well generally, in spite of this game’s apparent use of asynchronous compute shaders. Cards based on AMD’s Hawaii chips look relatively strong here, too, and they kind of embarrass the Fiji-based R9 Fury offerings by getting a little too close for comfort, even in 4K.

One would hope for a stronger showing from the Fury and Fury X in this case. But, you know, it’s just one benchmark based on an unreleased game, so it’s nothing to get too worked up about one way or another. I do wish we could have tested DX12 versus DX11, but the application Microsoft provided only works in DX12. We’ll have to grab a copy of Fable Legends once the game is ready for public consumption and try some side-by-side comparisons.

Enjoy our work? Pay what you want and support us.

Scott, exactly how many “DX12” features are we looking at here? Was async compute turned on or off for the AMD cards? We know it’s off for Maxwell, so that goes without saying. Does this game even use async compute? AnandTech says, “The engine itself draws on DX12 explicit features such as ‘asynchronous compute, manual resource barrier tracking, and explicit memory management’ that either allow the application to better take advantage of available hardware or open up options that allow developers to better manage multi-threaded applications and GPU memory resources respectively.” If that’s true then the nVidia drivers for this bench must turn it off – since nVidia admits to not supporting it. But sadly, not even that description is very informative at all.

Uh, I’m not too convinced here about the DX12 part – it looks more like DX11. This looks suspiciously like nVidia’s behind-the-scenes “revenge” setup for their embarrassing admission that Maxwell doesn’t support async compute! (What a setup. It’s really cutthroat, isn’t it?)

I also thought this was very funny: “That may be an advantage AMD carries over into the DX12 generation of games.

However, Nvidia says its Maxwell chips can support async compute in hardware – it’s just not enabled yet.” Come on, nVidia’s pulled this before ;) I remember when they pulled it with 8-bit palettized texture support on the TNT versus 3dfx years ago – they said it was there but not turned on. The product came and went and finally nVidia said, “OOOOps!

We tried, but couldn’t do it. We feel real bad about that.” Yea ;)

Sure thing. Seriously, you don’t actually believe that at this late date, if Maxwell had async compute, nVidia would have left it turned off in the drivers, do you? They don’t say, of course. The denial does not compute, especially since BBurke has been loudly representing that Maxwell supports 100% of D3D12 (except for async compute, we know now – and what else, I wonder?).

I’ve looked at these supposed “DX12” Fable benchmark results on a variety of sites, and unfortunately none of them seem very informative as to what “DX12” features we’re actually looking at.

Indeed, the whole thing looks like a dog & pony PR frame-rate show for nVidia’s benefit. There’s almost nothing about DX12 apparent. We seem to be approaching new lows in the industry :/

AFAIK, Nvidia’s async problem is possibly driver related, but at the same time it has already been enabled with software emulation, which makes their “incoming support” comment questionable. The Oxide guys have made a couple of statements about their experience with it.

Joking or not, it’s the truth.

Some of his comments are exaggerated, but the part about the ROPs is spot on, which is why someone could be confused by the “humor”.

The only thing I took away from the “joke” was that there really is no length nvidia fans won’t go to diss and dismiss AMD, and it’s rather sad. The Ti has more ROPs, and beats the Fury because of it. That doesn’t apply to the 970/980, and AMD has picked up the slack in that segment. You guys act like the Ti’s win over the Fury somehow invalidates the 390/X’s win over the 970/980, and it really doesn’t.

LET ME BREAK IT DOWN FOR YOU.

1) No one’s dismissing AMD.

It’s the fanboys. It’s the people who created so much noise over AoS and async compute that are causing this reaction.

2) The Ti having a higher ROP rate isn’t “unfair”. It’s a design decision. It’s like saying HBM or async compute is “unfair”. It’s not unfair, it’s fair.

3) Hawaii and the 980 have the same ROPs. So the 390X/390 tie the 980 because of the same number of ROPs, plus the AMD cards (unfairly???) get to use async compute.

4) The 970 only loses because of having an (unfairly!) low ROP rate.

How does that sound? ¬¬

No one’s acting like the GM200 win invalidates Hawaii’s win.

Again, the only reason people are picking on AMD/fans is because of all the premature noise they created because of AoS. And in fact, I think we’ve been saying for quite some time that AMD’s drivers have been holding Hawaii back.

What we are looking at and examining is architectural tradeoffs and how they’ve played out. The 980 Ti wins because Nvidia made the right call. Hawaii wins because AMD made the right calls.

Fury loses because AMD made the wrong ones. GM204 performs quite well if you consider that it’s a smaller, lower-powered, cheaper-to-produce mid-range chip. Of course, it’s more expensive for consumers than Hawaii, which is actually what goes against it.

None of this is really a problem for Nvidia yet because:

#1 No games have been released yet.

#2 Async compute, if it can be enabled on Maxwell, will likely improve performance.

#3 A game could be produced that uses a larger set of DX12 features, including ones that both Nvidia and AMD have. Things could vary considerably.

#4 A lot of existing DX11 and older games are held back on AMD because of bad drivers.

All of these things are obviously still problems for AMD. This will change as the first DX12 generation actually comes out, and if Pascal gets delayed.

Also, none of what I said was exaggerated (by me).

Everything was paraphrased.

1.) Yes they are. The fanboys are the worst, but you’re doing it too. See #3.

2.) Never said it was “unfair”. Nvidia made the right design choice there.

3.) What did you just say about being “unfair”?

4.) The 970 still has 64 ROPs.

It just has less memory and fewer shaders.

Fury was not designed as efficiently as the Ti. That doesn’t mean its compute power can’t be useful, but it definitely was the wrong choice for high resolution, and it’s not going to beat the Ti.

Hawaii, on the other hand, seems to have hit the nail on the head for mid-range.

#1. No games use dx12 yet. This means nothing, because all future games are pretty much guaranteed to support dx12.

The Xbox runs dx12, and it would be a straight port for AMD, including support for async.

#2. The 390’s still going to win though, having more RAM and shader power.

#3. Sure, and I can pretty much guarantee it. All future GameWorks titles will be optimized for nvidia’s extra dx12 features.

The only catch is async, and that GameWorks never increases performance.

#4. Hawaii isn’t so far behind in dx11 that it’s unplayable. It may occasionally lose by a small margin, but it handles dx11 games fine, and AMD consistently provides decent performance improvements with driver updates.

All in all, I don’t see any problem. Things are pretty equal right now, aside from the undeserved AMD bashing. Async is no less “unfair” than the Ti having more ROPs, so that needs to stop.

It’s not cheating, it’s a performance enhancing feature of dx12. To me, it seems like AMD is now doing in hardware what Nvidia had previously been doing in software with their dx11 driver, and it turns out to be more efficient.

It is what it is; don’t make excuses for Nvidia. They still have that $650 Ti which you can rave about.

I did notice that after reading the article here first. It is mainly interesting as AMD GPU performance with slower processors has been uncompetitive for some time with DX11. As it stands, an AMD GPU purchase is a very bad idea for anyone without a fast CPU. It seems from this test at least that this may no longer be the case with DX12; indeed, they actually fared better than Nvidia at lower CPU speeds. I’d love to see more investigation into this once some final DX12 benchmarks are made available.

It has been a while since I’ve seen such a large performance drop with Hyper-Threading enabled. 8C/8T to 8C/16T is only a single data point, and it is also a rather large number of threads. Any data for the more common 2C/4T and 4C/8T configurations? I’m curious whether those show any signs of slowdown.

As for a cause, I’m more inclined to think it is about workloads shifting between cores and causing a bit of L2 cache thrashing. Power management can rotate workloads around to normalize individual core temperatures. Not sure how practical it would be to lock down specific threads to individual cores here, but conceptually it would prevent the thrashing.

The other oddity I’ve spotted is the spiky nature of the GTX 970.

There are a handful of frames, it seems, that simply take a long time for these cards to render. With the other cards, the GTX 950/960 and R9 285/370, it was to be somewhat expected considering their midrange/low-end nature, but I didn’t expect them to be that bad either.

Eh, not quiiiiiiiiiiiiiiiiite. Their peak floating-point performance is the same (well, the Xbone’s might be higher; it got a clockspeed boost not that long before release), but the peak integer performance is much higher. The biggest deal is that it’s much easier to hit those peaks (hell, I don’t think it was possible to get anywhere near the theoretical peak integer performance on the old consoles’ CPUs; floating point, if you were a sneaky optimizing ninja of doom, you could get near), and that alone helps the average game maintain better levels of CPU performance and utilization.

Well, the PS3 Cell’s SPEs are really more akin to the specific vector units (AVX) inside the Jaguar cores, and speaking purely in terms of raw performance, the Cell is actually faster (for SP FP workloads) than the Jaguar CPUs in the newer consoles. Of course, raw floating-point throughput doesn’t really tell you much, but the point is that the newer consoles really do have very slow CPUs. Eight cores does NOT make up for the poor per-thread performance of the low-power cores used, thanks to Amdahl’s law, which I’ve mentioned elsewhere in this post’s comments.

Damage, 1st page, in your chart, you have the XFX R9 390 at 4096 for memory. Should that be 8192?? Also, at 1080p the 390 and the 980 are so close in performance, but Newegg doesn’t seem to reflect that.

Seems to me, if the 390X is spoiling the R9 Fury X’s day, then the 290X would be, too, since they’re mostly the same. And the 290X at its current pricing would REALLY spoil its day.

Then if the 290 and 290X are as close as they typically have been, that would also mean the R9 290 would be spoiling the entire Fiji line and the 390 and 390X. Of course, Fiji isn’t THAT far behind the 980 Ti, either, which in reality means that the 290 cards still floating around right now are also ghosting it. So basically the R9 290/290X are the best deal you can get atm, and what you should get if you’re getting anything, unless you must have the absolute top-end performance – in this specific game benchmark, anyway. I do wish AMD would work on fixing their DX11 drivers to be multithreaded and to implement something along the lines of ShaderCache to improve DX11 performance, which would benefit a great many games a lot of us own already.

Nowhere have I claimed the 290X is faster than the 390X, or the 390 for that matter, just that they’re very close.

Which is demonstrated repeatedly in that article. I find it hilarious you’re referencing the PCars graph that shows a 1.2ms spread between 390X and 290X.

That backs my point up. That is a repeated, but insignificant, difference. And the Wiktionary link – so you’re just going to ignore those 1-4 definitions under adjectives? I would stop posting links to stuff which undermines your own argument. You could have said “Oh well, I was thinking about it more this way” in your first reply to me, but no, you IMMEDIATELY get defensive (over what I don’t even know) and act like a jerk. To answer your earlier (rhetorical) question, that’s why no one listens to you.

Ooh, cool thing I just saw at AT (haven’t read their article yet, just glanced over some graphs): Fable actually uses a GI system they developed internally based on light propagation volumes. It uses neither VXGI nor any feature level 12_1 features like conservative rasterization, else it would not run on the consoles (or a fair amount of PC hardware).

See here. I like the frame rate vs. cumulative percent graphs AnandTech has put up.

They expand the areas that show the largest differences for core count and thread count.

Based on what AMD has said, including that GCN 1.x doesn’t scale beyond four “shader engines”, it is pretty clear now that Hawaii was truly the fully realized version of GCN 1.x. Fiji’s last chance to prove that it was “just drivers” holding it back was DX12/Vulkan titles, and we’re seeing every indication it’s just as unbalanced for DX12 games as DX11. On the bright side, it looks like DX12/Vulkan are really going to ignite the competitive fire between Nvidia and AMD again, since driver performance was really Nvidia’s only consistent competitive advantage. Sadly, what Nvidia can no longer invest in drivers, they will surely invest in GameWorks and such. The GPU software wars will continue, just in a new arena.

Hey loser: Where’s the apology for your flat-out wrong propaganda campaign about Intel “abandoning” socketed CPUs? You might find this article about socketed chips that I guarantee are better than Zen a fascinating read.

Lovely opinion.

I think you need to re-watch the video. Oh, and it’s also real mature of you to abuse your gold status privileges. I was wondering what was going on for a second there. Also, I noted that in between your -3’s, there was a period of time where I got a single -1, and you got a single +1. Looks like you have a second account here too.

One point I should probably emphasize from this mythbusting is that AMD only has higher i.

It looked to me as if folks were trying to help you by freely spending their time to provide you excellent advice at no charge. There was no hypocrisy involved.

It’s very confusing in many ways, even though it was pointed out:

1) If it wasn’t enough for us to keep track of the default specs of almost a dozen cards, now we must also put those overclocked cards in perspective too;

2) The charts don’t point this out, and they are pasted/linked to in other forums as well;

3) It makes power consumption discussions even more confusing, because most partners don’t publish their cards’ TDPs, so people end up using the stock card’s TDP for those too.

Factory-overclocked cards should be relegated to shoot-out style articles; otherwise they add a lot of noise and echo.

So if this game is doing its own memory management in DX12, that suggests the DX12 codepath is radically different from the DX11 codepath. It will be interesting to see comparisons of the two (hard to tease out with the other variables, of course). The results from the 2GB cards suggest a follow-up comparison that holds the GPU constant and varies only the available graphics memory, at various image quality settings. Of course, that may quickly be moot if results like this drive sub-4GB cards out of the enthusiast market, but a low-end comparison (2C/4T CPUs, iGPUs and sub-$200 dGPUs) would still be interesting, since DX12 may make those perfectly viable gaming configurations (with IQ turned down, of course).

Related to this (and I’m far from the first to suggest this): game performance may be about to get a lot more variable, especially as you vary the amount of graphics memory, because memory management is the kind of thing that’s easy to get wrong, especially in the odd corner cases. It may be less of an issue if most of that is deferred to game engines like Unreal, but with great(er) power comes great(er) responsibility, and not all game devs will be up to the challenge (at least initially).

This will be another 200+ comment article, won’t it?

😀 As far as I see it…

Wins for Nvidia:
– The 980 Ti looks i.

The way i see it, they are all close enough to call it a tie. None of them can do 4K well enough. They can all do 1080p pretty well (970 and up). How much better one card is over another is reflected by its price. While the 980 Ti is faster than the Fury X in average FPS, if you look at the frame times the Fury X isn’t too far off for it to matter. IMO, from the 970 and up it’s all equal, because when you spend slightly more you get slightly faster. Edit: in no way do i disagree with anything you have said, ninjitsu, i just think the differences are tiny and both AMD and Nvidia are good choices 🙂

I’m seeing no benefit from more than two cores on a Fury X at settings you’d play the game on, very minor benefits (10%) between two and six cores on a 980 Ti, and a drop in performance when SMT is enabled. Obviously Intel only offers dual cores with low clockspeeds and low cache amounts, both of which will potentially have an additional impact on performance. It would be nice to see a Pentium vs. i3 minus HT vs.

2C/2T 5960X vs. 2C/4T 5960X test to see which factors do make the biggest difference.

It’s the fastest CPU for a two-thread workload. If you absolutely have to have the maximum performance from something that can’t benefit from more than a dual core, your current best choice in the Intel range is the 4790K, hence it’s ‘Intel’s fastest dual core’. There are good reasons to spoil the performance of dual cores and keep clock speeds low. It helps AMD compete, and it also helps the environment – more cores at a lower clockspeed is a more power-efficient way to provide processing ability, which is why phones and tablets tend to be quad cores. I’m just trying to make the point that I’m skeptical that the Orwellian mantra of ‘four cores good, two cores bad’ is the way we should be thinking of these things.

Blast Processing wasn’t strictly marketing; the Genesis’ Yamaha VDP (read: graphics chip, primitive as it was) was significantly faster than the SNES’ Ricoh PPU, particularly with regard to DMA. This is what it referred to.

Of course, the SNES’ PPU had various other features that the older Yamaha VDP didn’t, but it’s unfair to point out “Blast Processing” as being raw marketing or fanboyism. (.’▽’) It’s a real hardware feature, and it’s just as valid as bringing up Mode 7.But really, the Nintendo Super Famicom and SEGA Mega Drive are much more closely matched than you seem to think despite being radically different in design. SEGA’s console excelled at scenes with fast motion because they play to the hardware’s strengths — check out games like Thunder Force IV and of course the Sonic the Hedgehog series. You won’t find anything like that on the SFC because that hardware isn’t great at fast motion, and instead excels in slower titles where it can focus on showing off its much richer color palette and visual special effects, like hardware-based sprite scaling and rotation. Compare the frenetic Gunstar Heroes (SMD) to the much more deliberate (and gorgeous!) Assault Suits Valken (aka Cybernator) (SFC). Both AMAZING, excellent games, and both representative of their respective hardware. There is a reason most of the best shooter games of the generation were on the SEGA console, and most of the best RPGs were on the SFC.It’s very popular to say things like “ah, the Genesis’ music sucked, it was really a generation behind”, but they were closely matched here too; they simply took radically different approaches.

The SFC used a primitive relative of wavetable MIDI for its music generation, which sounded great on titles with more orchestral scores (Final Fantasy titles, Zelda), but really falls down with rock, electronic, or industrial styles. Meanwhile, the Mega Drive’s complex Yamaha FM synthesis + Texas Instruments programmable sound generator were much harder to work with (and easier to make sound awful), but by nature were much more versatile and could produce the likes of https://www.youtube.com/watch?v=qTiEESxMD30. Oh, another great link to demonstrate the difference: https://www.youtube.com/watch?v=0fs05RepMiY

A fine article, Scott. Thanks for all this work.

Makes me want to subscribe. 🙂

I would like to know what’s going on with the Radeons not getting any additional speed with 6 cores enabled over 4. They’re supposed to have superior compute, and we’ve been told that DX12 is supposed to remove the driver as a bottleneck between the CPU and the GPU cores. It somehow seems like there’s still a coding issue with this, as, even if the Radeons were to be slower than the GeForces, they should still see an increase with the same number of CPU cores enabled.

There could be some kind of serial code or data bottleneck anywhere in the software stack (DX12 itself, the drivers, or Unreal Engine) that shows up at thread counts above 6. It’s early days yet for all the pieces involved, and that sort of issue is not unusual with multithreaded code – something you don’t think will be a problem turns out to be one when all the other resource constraints are removed.

It’s also at the high end of hardware, which represents a very small portion of the bell curve; you’d expect most of the testing (and even the design) to target the four-core “sweet spot” that represents the vast middle of the curve. It’s the kind of thing that gets tweaked late, or even not until the next version.

Not exactly – it has known shuddering issues when it’s stressed and forced to use the 512MiB pool. Granted, under such conditions the 970 begins to be limited by its shading and texture power.

It is a minor problem in the grand scheme of things. What ticked people off is that Nvidia’s marketing didn’t read the memo from the engineering team that disclosed the odd memory design of the 970 and what it could do. People who got hold of a 970 discovered it through stress testing. Nvidia did an official addendum afterwards.

I can’t see any nasty spikes in the graph. The 960 is clearly spiking badly, but the 970 isn’t. Also, the 970 is missing a higher percentage of resources from a 980 than a 290 is from a 290X (2048/1664 vs 2816/2560), so there will be a bigger difference between them. On the chart the 980 sits at a little over 40 ms and the 970 at around 50-55 ms most of the time; the 980 spikes to a little over 60 ms and the 970 to around 80 at the same spot (around 2700 seconds into the bench). To me it’s clearly because of fewer resources (almost 1/4 less). Where are these crazy spikes?

No, it is the “970”. Unless you got some form of color blindness. There are spikes going on with the 970 that aren’t found in the 980. They aren’t as dramatic as the 960’s, but they are there. The 970 and 980 both follow a similar curve, but just pay extra attention to the 970.

Notice how there appear to be 5-15 ms spikes going on every minute or so, and they are completely absent from the 980? The 390X doesn’t suffer from it either at the same points, although it does come across a major spike in one portion of the 4K test. The 970 is “shuddering” when it is trying to use the last 512MiB and the core logic has to take the long way to access it. This inflicts a hit on latency/frame-timing. It does appear to happen in 4K benches, but it doesn’t happen that often, since the 970’s drivers try to avoid using that last 512MiB as much as possible.

970 users who were stress testing their units beyond the 3.5GiB range have been noticing the same shuddering/spiking during benching and real-time gaming. The only people who are denying this at this point are die-hard Nvidia fanboys or ignorant 970 owners (who never stress their cards) suffering from buyer’s remorse.

Your original hypothesis:

If it’s not a big deal, then what are you arguing for? Here, I’ll help you out, because you haven’t seen your optometrist in years.

It would be interesting to see a 2C/4T test also, for comparison to the 4C/4T and as a representative of what the low end might have (along with integrated graphics, an interesting subject for another day).

My theory is that you’ll see a drop with HT in general, mostly due to cache contention. Two threads on the same physical CPU effectively halve the available cache (particularly L1) and, depending on the code, that can be a significant hit. We used to see that a lot with early iterations of HT, but Intel did some magic in later chips to reduce its impact; it may be that DX12 has stumbled into it again.

It’s also possible that something in the stack (DX12, the drivers, or the game engine) isn’t well threaded for thread counts above 6. In that case, threading overhead (Amdahl limits on serial code, or locking contention on shared resources – I wonder if Unreal Engine uses TSX?) could be the culprit, and we may be seeing the evidence for that in the results for 6C/6T vs 8C/8T.
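The “Amdahl limits on serial code” point is easy to put rough numbers on. A quick sketch – the 80% parallel fraction below is an invented figure, purely for illustration – shows why throwing 8 or 16 threads at a game loop with any meaningful serial portion flattens out much like the core-scaling results in the article:

```cpp
#include <cstdio>

// Amdahl's law: if a fraction p of the per-frame CPU work can run in
// parallel, N cores give a speedup of at most 1 / ((1 - p) + p / N).
double AmdahlSpeedup(double parallelFraction, int cores)
{
    return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
}

int main()
{
    const double p = 0.8;  // assumed 80% of the work parallelizes (made-up figure)
    for (int n : {1, 2, 4, 6, 8, 16})
        std::printf("%2d threads -> %.2fx\n", n, AmdahlSpeedup(p, n));
    // With p = 0.8 the curve flattens quickly: about 2.5x at 4 threads,
    // about 3.3x at 8, and it can never exceed 5x no matter the thread count.
}
```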