
Specific gaming desktop build

Level 15 · Joined Aug 7, 2013 · Messages 1,337
I'm preparing to purchase and build my (1st) gaming desktop.

The two games that I'd like to play on really good / max settings are:

1. Starcraft 2
2. Fallout 4

With that in mind, what would you recommend? Let's assume I have two budgets: one of $2,000 or under, and one with no limit.

This site seems to suggest that such a computer could be built for Fallout 4 for just over $1,000: http://newbcomputerbuild.com/newb-computer-build/build-a-gaming-pc-build-to-play-fallout-4/

Would that also apply to SC2, or does it scale poorly? Because SC2 can vary depending on how many units / objects are in play...
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
In short: don't.

In long: the GPU market has been stagnant in performance since 2012, with only a roughly 33% gain going from GK104/Tahiti to GK110/Hawaii, and again from GK110/Hawaii to GM200/Fiji. This is because TSMC produced an especially horrific 20nm node that was unsuitable for any performance applications. However, their 14nm-class node is by all accounts alright, and the first cards on it are scheduled to launch in roughly six months. That will very likely result in a straight-up doubling of performance, greater than the jump from GT200B/RV790 to GF100/RV870, which was already an especially large one.
 

Deleted member 212788

I strongly suggest using PCPartPicker and posting on either LinusTechTips or OC3D.
 

Dr Super Good · Spell Reviewer · Level 64 · Joined Jan 18, 2005 · Messages 27,198
SC2 is not that demanding. It only uses 2 cores, so any multi-core CPU at 3 GHz plus a modern gaming graphics card (not even high-end) will max the game effortlessly, even at fairly high resolutions. Just remember to run the 32-bit client, as something is not quite right with the 64-bit client (probably memory bandwidth or cache related).

Fallout 4 is a piece of trash when it comes to running on PCs, because it is aimed at consoles and uses an engine developed a decade ago that has been hacked to keep up with standards. You will probably never be able to max it out. Anything that runs SC2 well should be able to run Fallout 4 pretty well, but in reality you will probably need something much better, and the game will still suffer low frame rates and other performance problems no matter how powerful your system is.

I would recommend a fast (3 GHz+) Intel i5 or i7 that is at least a quad-core processor, and an NVIDIA GTX 960 or 980 depending on how much you want to spend. Memory is not as important, but you would probably want anything from 8 to 16 GB depending on what is available. Any 1+ TB mechanical drive would work; however, if you want fast initial load times (SC2) or seamless chunk transitions (Fallout 4), you may want to consider an SSD for the OS and game storage, as those practically eliminate seek and read delay.

AMD also offers a number of good CPU and GPU solutions; after all, they make the inner workings of both the Xbox One and PlayStation 4. For the same performance they can often be cheaper, so unless you have a brand bias you should take a look. I am, however, unfamiliar with their specific range, so I cannot give any recommendations. The processor will need a slightly higher clock, as AMD processors generally perform worse than Intel processors cycle for cycle. Graphics-wise, AMD and NVIDIA have highly comparable solutions with often similar performance, so I recommend looking at various benchmark sites and judging for yourself which is better.
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
Here's a tutorial on how you make a good gaming pc.
hahhahaha holy fuck I haven't laughed so hard in a while. The $300 RAM with a 50% price premium over equivalent sets. The $130 shitty CLC that's barely better than a $30 chunk of metal. The $700 CPU that performs the same as the $300 model. The $150 case that is known to have terrible airflow. The crappy SSD that leverages brand over performance and is priced above its betters. If I wanted a mediocre PSU I'd just buy EVGA, and a thousand-watt unit is insane overkill for that system anyway. MSI is renowned for making motherboards that fry just after warranty.

The only thing missing is some dipshit putting a soundcard in.
 

Deleted member 212788

All you will ever need for current games at 1440p

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6600K 3.5GHz Quad-Core Processor ($273.98 @ Newegg)
CPU Cooler: Cooler Master Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler ($19.89 @ OutletPC)
Motherboard: Gigabyte GA-Z170-HD3 ATX LGA1151 Motherboard ($104.99 @ SuperBiiz)
Memory: A-Data XPG Z1 16GB (2 x 8GB) DDR4-2400 Memory ($68.99 @ Newegg)
Storage: Samsung 850 EVO-Series 250GB 2.5" Solid State Drive ($77.88 @ OutletPC)
Storage: Western Digital Caviar Blue 1TB 3.5" 7200RPM Internal Hard Drive ($49.98 @ OutletPC)
Video Card: EVGA GeForce GTX 980 Ti 6GB FTW ACX 2.0+ Video Card ($646.98 @ Newegg)
Case: Fractal Design Define S ATX Mid Tower Case ($71.99 @ SuperBiiz)
Power Supply: EVGA SuperNOVA G2 650W 80+ Gold Certified Fully-Modular ATX Power Supply ($59.99 @ Newegg)
Monitor: Acer G257HU smidpx 60Hz 25.0" Monitor ($259.99 @ B&H)
Keyboard: Cooler Master OCTANE Wired Gaming Keyboard w/Optical Mouse ($34.99 @ Newegg)
Total: $1669.65
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-12-11 18:33 EST-0500
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
All you will ever need for current games at 1440p

PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i5-6600K 3.5GHz Quad-Core Processor ($273.98 @ Newegg)
CPU Cooler: Cooler Master Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler ($19.89 @ OutletPC)
Motherboard: Gigabyte GA-Z170-HD3 ATX LGA1151 Motherboard ($104.99 @ SuperBiiz)
Memory: A-Data XPG Z1 16GB (2 x 8GB) DDR4-2400 Memory ($68.99 @ Newegg)
Storage: Samsung 850 EVO-Series 250GB 2.5" Solid State Drive ($77.88 @ OutletPC)
Storage: Western Digital Caviar Blue 1TB 3.5" 7200RPM Internal Hard Drive ($49.98 @ OutletPC)
Video Card: EVGA GeForce GTX 980 Ti 6GB FTW ACX 2.0+ Video Card ($646.98 @ Newegg)
Case: Fractal Design Define S ATX Mid Tower Case ($71.99 @ SuperBiiz)
Power Supply: EVGA SuperNOVA G2 650W 80+ Gold Certified Fully-Modular ATX Power Supply ($59.99 @ Newegg)
Monitor: Acer G257HU smidpx 60Hz 25.0" Monitor ($259.99 @ B&H)
Keyboard: Cooler Master OCTANE Wired Gaming Keyboard w/Optical Mouse ($34.99 @ Newegg)
Total: $1669.65
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2015-12-11 18:33 EST-0500

It's shit.
Skylake manages to be worse than Haswell
A-Data has terrible QC
980 Ti is doing worse than the Fury X in DX12 games
Keyboard choice is terrible; generally it's pointless to buy a keyboard between $15 and $80, since membrane boards don't get any better and mechanical switches usually start above $80.
Acer panels usually have flaws like backlight bleeding.
http://pcpartpicker.com/p/7QmFRB
 

Deleted member 212788

It's shit.
Skylake manages to be worse than Haswell
A-Data has terrible QC
980 Ti is doing worse than the Fury X in DX12 games
Keyboard choice is terrible; generally it's pointless to buy a keyboard between $15 and $80, since membrane boards don't get any better and mechanical switches usually start above $80.
Acer panels usually have flaws like backlight bleeding.
http://pcpartpicker.com/p/7QmFRB

I think you are mighty confused friend

Skylake is around 10% faster than Haswell on average. No idea where you got your info from but it is factually incorrect. - http://cpu.userbenchmark.com/Compare/Intel-Core-i5-6600K-vs-Intel-Core-i5-4690K/3503vs2432

A-Data is more than fine - typically RAM is the least of your concerns as RMAing faulty RAM is fairly easy

GM200's DX12 performance was somewhat addressed, and unless you are going with SLI or CrossFire (where CrossFire scales better, allowing Fury X CF to surpass 980 Ti SLI), the GM200 chip is the better choice

Octane is a good option when you don't want to overspend - things like Razer and other boutique brands are mostly plagued by short lifespans

Acer panels are okay - their laptops are garbage but the screens are quite good. Just look at the Predator they released recently.

Finally, try to at LEAST cite sources when discussing.
 
Level 15 · Joined Dec 24, 2007 · Messages 1,403
Fury X is only doing better in DX12 on a single game, that is currently in Alpha and is sponsored by AMD; hardly a reliable metric for measuring DX12 performance. There is currently no reliable way for anyone to objectively test DX12 performance. The Fury lineup is disappointing and overpriced (this is coming from someone who has owned AMD/ATI cards for the last 8 years). Honestly, I would hold off on a 980 Ti though. Pascal is slated for release within the next year, which brings a huge slew of improvements. We've been stuck on the same 28nm node for a while, and Pascal means a die shrink, plus a rumored 16 GB of HBM2 VRAM. We should finally start seeing some big improvements in the GPU world again. I would get a much cheaper GPU that will hold you over until Pascal is released, then go all out once it's released.

Honestly, that build looks perfectly fine. Sure, there are better keyboards out there (I love my mechanical keyboard), but if you're just looking for a budget keyboard it's perfectly serviceable.
 

Deleted member 212788

Fury X is only doing better in DX12 on a single game, that is currently in Alpha and is sponsored by AMD; hardly a reliable metric for measuring DX12 performance. There is currently no reliable way for anyone to objectively test DX12 performance. The Fury lineup is disappointing and overpriced (this is coming from someone who has owned AMD/ATI cards for the last 8 years). Honestly, I would hold off on a 980 Ti though. Pascal is slated for release within the next year, which brings a huge slew of improvements. We've been stuck on the same 28nm node for a while, and Pascal means a die shrink, plus a rumored 16 GB of HBM2 VRAM. We should finally start seeing some big improvements in the GPU world again. I would get a much cheaper GPU that will hold you over until Pascal is released, then go all out once it's released.

Honestly, that build looks perfectly fine. Sure, there are better keyboards out there (I love my mechanical keyboard), but if you're just looking for a budget keyboard it's perfectly serviceable.

Fury X and Fury prices were cut by $100 each recently, I believe.

Fury is about $30-50 more expensive than a 980 while offering 10% more performance.

Fury X is around $50-70 cheaper than reference 980 Tis (which it outperforms thanks to better thermals and the lack of OC headroom on the NVIDIA stock cooler).

16 GB of HBM2 is for Quadros due Q1 2016 - gaming Pascal is due Q3 2016, with Arctic Islands in Q2 2016.

Most cards will have 8 GB of HBM2 at the high end and GDDR5X for anything below a 1080/R9 490X.
 
Level 15 · Joined Dec 24, 2007 · Messages 1,403
Fury X and Fury prices were cut by $100 each recently, I believe.

Fury is about $30-50 more expensive than a 980 while offering 10% more performance.

Fury X is around $50-70 cheaper than reference 980 Tis (which it outperforms thanks to better thermals and the lack of OC headroom on the NVIDIA stock cooler).

16 GB of HBM2 is for Quadros due Q1 2016 - gaming Pascal is due Q3 2016, with Arctic Islands in Q2 2016.

Most cards will have 8 GB of HBM2 at the high end and GDDR5X for anything below a 1080/R9 490X.

Ah, that's all news to me. The price cut on the Fury lineup is a game changer; they simply weren't worth the price they were charging at launch.
 
Level 15 · Joined Mar 9, 2008 · Messages 2,174
Skylake manages to be worse than Haswell
Totally, especially with the gigantic generation-to-generation gains we've seen in the past few years, it's an absolute deal breaker.

A-Data has terrible QC
Source: My butthole

Keyboard choice is terrible; generally it's pointless to buy a keyboard between $15 and $80, since membrane boards don't get any better and mechanical switches usually start above $80.
Hmm, maybe if they included a mouse in the combo it would be worth it? No?
Sidenote: for a build of this caliber, a cheapass set like the Octane isn't a sensible choice.

980 Ti is doing worse than the Fury X in DX12 games
Oh man, gotta get dem performances for that one alpha game and a fucking benchmark.
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
I think you are mighty confused friend
Skylake is around 10% faster than Haswell on average. No idea where you got your info from but it is factually incorrect. - http://cpu.userbenchmark.com/Compare/Intel-Core-i5-6600K-vs-Intel-Core-i5-4690K/3503vs2432
In synthetics, yes; in real-world usage it just ain't cutting it. In workstation applications it's a 6% increase; in gaming it's a 1% decrease, mainly due to the high latencies DDR4 currently requires. A similar issue was seen when DDR3 first launched.
A-Data is more than fine - typically RAM is the least of your concerns as RMAing faulty RAM is fairly easy
Bull, RAM degrades over time regardless. A-Data has a higher tolerance for dead cells than, say, G.Skill, Crucial, or Corsair, which means less capacity left in the designated reserve.
GM200's DX12 performance was somewhat addressed, and unless you are going with SLI or CrossFire (where CrossFire scales better, allowing Fury X CF to surpass 980 Ti SLI), the GM200 chip is the better choice
Addressed as in "we promise to improve our software-enabled async until we add a hardware solution into Pascal, like Fermi had"? Because that's all I've heard from Nvidia, other than them begging Stardock not to include async support in the engine. Anyway, go check the Battlefront and Blops 3 benchmarks as well. Fury X is only behind in GameWorks games. Not to mention Nvidia seems to be gimping the Kepler series now, with a 780 Ti on par with a 960 in a few games, which is totally absurd given their specs.
Octane is a good option when you don't want to overspend - things like Razer and other boutique brands are mostly plagued by short lifespans
Cooler Master is renowned for cheap shitty 'gamer' products. If you want a keyboard you get a cheap logitech or microsoft; or shell out for a Ducky.
Acer panels are okay - their laptops are garbage but the screens are quite good. Just look at the Predator they released recently.
hahahhahahha, you mean the monitor with massive bleed that regularly ships with dead pixels? Go read up on it, mate.
Finally, try to at LEAST cite sources when discussing.
Like you did?

is sponsored by AMD
Except it isn't. At all.

And did you see the Fable Legends benches? http://www.extremetech.com/gaming/2...o-head-to-head-in-latest-directx-12-benchmark
 
Level 15 · Joined Dec 24, 2007 · Messages 1,403

Dr Super Good · Spell Reviewer · Level 64 · Joined Jan 18, 2005 · Messages 27,198
Fury X is only doing better in DX12 on a single game, that is currently in Alpha and is sponsored by AMD; hardly a reliable metric for measuring DX12 performance.
That is not entirely correct. They have recently discovered that the game performs better when using an NVidia and an AMD card together than with AMD or NVidia cards alone. It is fastest with the AMD card as primary, but using an NVidia card as secondary gives more performance than using a second identical AMD card. As such, NVidia is still very competitive.

Additionally most games are optimized for AMD cards at the moment. You have Microsoft and Sony to blame for their AMD made consoles.

In synthetics, yes; in real-world usage it just ain't cutting it. In workstation applications it's a 6% increase; in gaming it's a 1% decrease, mainly due to the high latencies DDR4 currently requires. A similar issue was seen when DDR3 first launched.
A 1% decrease is what you get for running music in the background. The only reason not to go with it would be cost, as the performance is on par if not slightly better. I will however admit that the gains might be nowhere near as large as they boast, as always.

I had no problem with early DDR3. I still use a first generation I7 processor using the very rare tri-channel RAM configuration and it was still leaps and bounds faster than Core2 Quad which it was designed to supersede.

Addressed as in "we promise to improve our software-enabled async until we add a hardware solution into Pascal, like Fermi had"? Because that's all I've heard from Nvidia, other than them begging Stardock not to include async support in the engine. Anyway, go check the Battlefront and Blops 3 benchmarks as well. Fury X is only behind in GameWorks games. Not to mention Nvidia seems to be gimping the Kepler series now, with a 780 Ti on par with a 960 in a few games, which is totally absurd given their specs.
The async issue is because it is a new feature, which AMD developed. Only AMD's newest cards can actually do it, so do not expect many games to actually take advantage of it since game consoles are still stuck using old-generation AMD cards. Additionally, hardware support for it is not required, as the specification only defines behaviour and not really the implementation.

The main problem they were complaining about is that when they tried to use a feature which NVidia advertised as supported, it did not behave as expected. Bad performance was not the problem; the problem was no performance at all, since the feature did not work as specified (they could not get performance metrics from it).

How come AMD can do it yet NVidia cannot? It is because the specification was written largely by AMD and around their desired hardware features. One can probably thank the Xbox division for that since they have massive deals with AMD. As such AMD probably already had the ideas and implementation worked out long before it became part of the standard. When it did become part of the standard NVidia was left to think what on earth they mean by the feature, and what is the point of it or how does one implement it without having to redesign their cards from scratch or lower performance in other areas.

I believe NVidia thought the feature would hardly ever be used, after all most game companies target Xbox One and PS4 so cannot use the feature. They were also pretty close to their final product design so they could hardly make large changes. As such they tried to hack together a software implementation, which did not work probably because no one but AMD knows how the feature is "meant" to work.

The 980 Ti holds its own against AMD's cards of the same tier. The differences are so small that they can be considered trivial.

Fable Legends is a Microsoft game designed to perform well on the Xbox One. As such it is optimized around AMD cards so one can expect AMD to do well. The fact NVidia is still competitive with it shows that their cards do still work.
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
That is not entirely correct. They have recently discovered that the game performs better when using an NVidia and an AMD card together than with AMD or NVidia cards alone. It is fastest with the AMD card as primary, but using an NVidia card as secondary gives more performance than using a second identical AMD card. As such, NVidia is still very competitive.
Yeah I saw that, it was some voodoo fuckin shit; especially the mixed 7970 and 680 graph where if the 680 was the master it'd actually drop in performance.
I had no problem with early DDR3. I still use a first generation I7 processor using the very rare tri-channel RAM configuration and it was still leaps and bounds faster than Core2 Quad which it was designed to supersede.
That wasn't early DDR3, mate. I'm talking Intel P35 boards with 65nm first-gen Core 2s, back when 2 GB kits of it were $300+. DDR2 outperformed it at first because the frequency boosts were offset by the hilariously higher timings.
The async issue is because it is a new feature, which AMD developed.
This shows to me that you haven't a bloody clue of what it actually entails. It's essentially multithreading shaders using the queue process, breaking things into sectors.
Only AMD's newest cards can actually do it, so do not expect many games to actually take advantage of it since game consoles are still stuck using old-generation AMD cards. Additionally, hardware support for it is not required, as the specification only defines behaviour and not really the implementation.
GCN 1.0 does it fine; VLIW4/5 cannot. You might be confusing the older APUs with what's in the consoles, as those were VLIW4 while the consoles use GCN. Nvidia also theoretically supported asynchronous compute for CUDA via drivers going back to Fermi, but there was no hardware implementation.
How come AMD can do it yet NVidia cannot? It is because the specification was written largely by AMD and around their desired hardware features. One can probably thank the Xbox division for that since they have massive deals with AMD. As such AMD probably already had the ideas and implementation worked out long before it became part of the standard. When it did become part of the standard NVidia was left to think what on earth they mean by the feature, and what is the point of it or how does one implement it without having to redesign their cards from scratch or lower performance in other areas.
It's a basic feature both knew about, Nvidia thought they could cut corners and use a software solution because it was originally just a niche for CUDA programmers. They're going to have to redesign their queueing engine from the ground up, which won't be fun given their thread/warp model.
I believe NVidia thought the feature would hardly ever be used, after all most game companies target Xbox One and PS4 so cannot use the feature.
They're already being used on console games bruv. Battlefield 4 was the first one IIRC
Fable Legends is a Microsoft game designed to perform well on the Xbox One. As such it is optimized around AMD cards so one can expect AMD to do well. The fact NVidia is still competitive with it shows that their cards do still work.
It's funny, if you use GPUview on the game, you can see that Nvidia gets calls to put things in the async queue; but then just queues it normally as if it weren't async. That said the game wasn't a heavy user of the queue, neither was Ashes for that matter.

Right, so that big AMD logo at the bottom of their website and the "gaming evolved" thing is just my imagination? You can bet AMD is working with them to ensure it runs well on their cards, just like nvidia works with gameworks titles.
Stardock is partnered with AMD; Oxide, the engine programmers working with Stardock on it, are not. http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995
 

Dr Super Good · Spell Reviewer · Level 64 · Joined Jan 18, 2005 · Messages 27,198
That wasn't early DDR3, mate. I'm talking Intel P35 boards with 65nm first-gen Core 2s, back when 2 GB kits of it were $300+. DDR2 outperformed it at first because the frequency boosts were offset by the hilariously higher timings.
Was not aware any Core2 supported DDR3, my mistake.

This shows to me that you haven't a bloody clue of what it actually entails. It's essentially multithreading shaders using the queue process, breaking things into sectors.
This shows me you "haven't a bloody clue of what it actually entails". It is actually closer to Hyper-Threading, with no real parallelism occurring. Only 1 command queue will execute at a time, but another command queue can execute with no overhead during cycles when the main command queue's job blocks. This allows hardware manufacturers to lower the overhead of context-switching pipelines to virtually free, allowing pipeline pre-emption as a viable software technique. It also allows the functional hardware to be utilized more heavily in cases where commands result in stalls.

The specification specifies a number of default required command queues and any number of custom queues. Although aimed at hardware acceleration and recommended for implementation, hardware acceleration is not mandatory. Also, no hardware implementation does it perfectly, with many having limits (such as 16 or 32 queues on the Xbox One).
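
To put that in concrete Direct3D 12 terms, here is a minimal, untested C++ sketch. The "device" pointer is an assumed, already-created ID3D12Device* and the helper name is made up; all it shows is that the API exposes separate direct (graphics), compute and copy queue types, which is what "multiple command queues" means at the API level:

```cpp
// Minimal sketch (untested), assuming Windows, <d3d12.h> and an already
// created ID3D12Device* named "device". Whether work on the compute queue
// actually overlaps work on the direct queue is up to the driver/hardware.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

static ComPtr<ID3D12CommandQueue> MakeQueue(ID3D12Device* device,
                                            D3D12_COMMAND_LIST_TYPE type)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};   // normal priority, no flags, node 0
    desc.Type = type;                     // DIRECT (graphics), COMPUTE or COPY
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// One queue per engine type; command lists submitted to the compute queue can
// be interleaved with (or, on capable hardware, run alongside) the direct queue.
// auto graphicsQueue = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_DIRECT);
// auto computeQueue  = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_COMPUTE);
// auto copyQueue     = MakeQueue(device, D3D12_COMMAND_LIST_TYPE_COPY);
```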

GCN 1.0 does it fine; VLIW4/5 cannot. You might be confusing the older APUs with what's in the consoles, as those were VLIW4 while the consoles use GCN. Nvidia also theoretically supported asynchronous compute for CUDA via drivers going back to Fermi, but there was no hardware implementation.
AMD must have messed up their own press releases. They clearly stated that Async was only fully supported by their newest cards. Or possibly that what the game does is only supported on their newest cards. They are probably referring to some limitation aspect of the command queues since the Xbox One and PlayStation 4 likely have worse or fewer optional features compared with their newer cards.

It's a basic feature both knew about, Nvidia thought they could cut corners and use a software solution because it was originally just a niche for CUDA programmers. They're going to have to redesign their queueing engine from the ground up, which won't be fun given their thread/warp model.
You must remember the feature comes at a cost: implementing it might reduce performance in other areas due to the extra logic required. If a non-standard feature is not going to be used, you would be stupid to keep it in.

For the most part a driver solution could work, with at most some performance loss. It only becomes impossible if you start to depend on the feature, specifically for mixing Direct Compute results into the graphics pipeline. This is why Ashes of the Singularity specifically does this: it needs a full hardware implementation to work, otherwise the context switching is impossible. The fact that it just did not work means the feature was not implemented properly; it should still have worked, although probably very badly.

They're already being used on console games bruv. Battlefield 4 was the first one IIRC
Anything using Direct3D 12 will use command queues. Whether they gain anything from it is another question entirely, and taking advantage of some form of asynchronous hardware implementation is not required. The only thing they are guaranteed by using Direct3D 12 is the order in which command queues run (which is where NVidia probably failed). AMD is selling a specific implementation of command queues, which is their "asynchronous compute" or whatever it is called.

It's funny, if you use GPUview on the game, you can see that Nvidia gets calls to put things in the async queue; but then just queues it normally as if it weren't async. That said the game wasn't a heavy user of the queue, neither was Ashes for that matter.
Once again, asynchronous compute is an AMD implementation of Direct3D 12 command queues. How it is implemented is not defined, with the actual documentation hinting that a GPU could in theory run each command queue completely in parallel (not only during pipeline stalls).

As long as the synchronization between command queues remains correct it still complies. Even if priorities do not work that well.
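
To make that ordering guarantee concrete, here is a rough, untested C++ sketch of cross-queue synchronization with a fence, reusing the hypothetical device and queues from the earlier snippet. The "computeWork" and "graphicsWork" command lists are illustrative placeholders, not anything from the games discussed here:

```cpp
// Minimal sketch (untested): the compute queue signals a fence when its work
// is done, and the graphics queue waits (GPU-side) on that fence before
// running work that consumes the compute results. This ordering is the part
// Direct3D 12 guarantees; how the work is scheduled underneath is vendor-specific.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitAsyncCompute(ID3D12Device* device,
                        ID3D12CommandQueue* computeQueue,
                        ID3D12CommandQueue* graphicsQueue,
                        ID3D12CommandList* computeWork,
                        ID3D12CommandList* graphicsWork)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence.Get(), 1);     // fence -> 1 when compute finishes

    graphicsQueue->Wait(fence.Get(), 1);      // stall this queue until fence == 1
    graphicsQueue->ExecuteCommandLists(1, &graphicsWork);
}
```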

Stardock is partnered with AMD; Oxide, the engine programmers working with Stardock on it, are not. http://www.overclock.net/t/1569897/v...#post_24356995
Stardock's involvement will be shaky as always with such projects. Usually they help make it, but then leave the other company to look after it (aka Demigod). They seem more like a team of consultant programmers than an actual software company.
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
Only 1 command queue will execute at a time
That's where you're wrong actually. It does everything else you say, but it also allows multiple queues to execute during the same cycle for different sectors of the GPU if there is incomplete usage. Also don't call it Hyper-Threading, it's simultaneous multithreading, and yes it's a good analogy but is an incomplete one.
AMD must have messed up their own press releases. They clearly stated that Async was only fully supported by their newest cards. Or possibly that what the game does is only supported on their newest cards. They are probably referring to some limitation aspect of the command queues since the Xbox One and PlayStation 4 likely have worse or fewer optional features compared with their newer cards.
Not sure where you got that from. All the whitepapers I've seen have mentioned GCN as a whole. Capability was increased over time (1.1 especially), but even the oldest GCN cards can still do async just fine.
http://developer.amd.com/wordpress/media/2012/10/Asynchronous-Shaders-White-Paper-FINAL.pdf
http://partner.amd.com/Documents/MarketingDownloads/en/AMD_Radeon_DX12_9-15.pdf
You must remember the feature comes at a cost: implementing it might reduce performance in other areas due to the extra logic required. If a non-standard feature is not going to be used, you would be stupid to keep it in.
They took a bet it wouldn't be used outside of a few rare compute scenarios, and it turned tits up for them.
Stardock's involvement will be shaky as always with such projects. Usually they help make it, but then leave the other company to look after it (aka Demigod). They seem more like a team of consultant programmers than an actual software company.
They do make a good number of inhouse projects as well. They're an odd company.
 

Dr Super Good · Spell Reviewer · Level 64 · Joined Jan 18, 2005 · Messages 27,198
That's where you're wrong actually. It does everything else you say, but it also allows multiple queues to execute during the same cycle for different sectors of the GPU if there is incomplete usage. Also don't call it Hyper-Threading, it's simultaneous multithreading, and yes it's a good analogy but is an incomplete one.
As far as I can tell, all documentation of the feature outside of the official Direct3D specification (which is unrelated; being a specification, it only hints at what command queues could possibly do) says that asynchronous compute only reclaims cycles which would otherwise have been lost to pipeline stalls. They constantly use words like "interleave", which is a hyper-threading term, as opposed to "parallel", when describing how command queues are executed. They do use "parallel" to describe the command queues themselves, but that is obvious given it is required by Direct3D 12.

Let us look at an extract...
The ACEs can operate in parallel with the graphics command processor and two DMA engines. The graphics command processor handles graphics queues, the ACEs handle compute queues, and the DMA engines handle copy queues. Each queue can dispatch work items without waiting for other tasks to complete, allowing independent command streams to be interleaved on the GPU’s Shader Engines and execute simultaneously.
We immediately find the feature is aimed at Direct Compute, with the graphics command queue being assigned its own separate unit. Secondly, copy operations were probably executed in parallel under the old system already; they just could not be fed in parallel, because the old model had only a single command queue (so it is only a gain in feeding efficiency). Finally, they state "interleaved", which is a hyper-threading term. This means that they do not really run multiple command queues on the shader engines, but rather one command stream, swapping to another very efficiently if that one blocks.

One can immediately see that their implementation of command queue functionality is very efficient for running Direct Compute tasks alongside graphics tasks. Not only can you schedule a lot of them for logical execution in parallel (in the same sense hyper-threading allows you to schedule two threads for logical execution in parallel although only one may execute at a time), but you can also set them to various priority levels to manage where GPU resources go.

Possibly when running very small or simple Direct Compute commands it may share the shader resources to execute them in parallel. However this will not help the graphic commands at all.

The big question is how much Direct Compute is actually used in gaming?
They took a bet it wouldn't be used outside of a few rare compute scenarios, and it turned tits up for them.
Has it? As far as I know most games perform comparably on their cards except one, and even that one performs better with their GPU working on it alongside an AMD card than with AMD cards alone.
 

Roland

This is why we only get topics from asspberg guy.
People can't ask even simple questions without someone going 110% reddit circlejerk.

Seconded... Although this asspberg kid needs to study about computers more than asking piece of shit confusing questions.
 
Level 34 · Joined Sep 6, 2006 · Messages 8,873
Dear God people.

sethmachine, the max build that was listed in the article you linked is perfectly acceptable in my opinion. I would have to do some research since I'm not up to date on GPUs right now, but based on the other choices for that PC, it's safe to say it's probably a good buy.

don's build was also fine, but I don't like Gigabyte motherboards (personal experience), and the video card is overkill. I personally don't buy cards over $300, but I definitely wouldn't buy one over $600; I really don't think you're getting your money's worth. If I had to take a stab, I'd say you're generally getting a decent deal at ~$300-400, but at ~$600 you're paying that $300 extra for only slightly better output. There may be exceptions, but I think this rule applies most often.
 
Level 15 · Joined Dec 24, 2007 · Messages 1,403
The proper answer was given in the third post. That's why.

The guy clearly isn't an expert on the subject and simply wanted some advice, and you went ahead and started throwing all sorts of numbers and acronyms at the guy.

Perhaps if you actually tried to help him by speaking in terms that he'll understand it would be more effective.

Either way, this thread has gone on for too long and hasn't produced any real, productive and helpful results. I'm sorry that it devolved into this, OP; I hope you were able to find some answers amongst the needless noise.
 