
Specific gaming desktop build

Discussion in 'Computer Tech' started by sethmachine, Dec 5, 2015.

Thread Status:
Not open for further replies.
  1. sethmachine

    Joined:
    Aug 7, 2013
    Messages:
    1,318
    Resources:
    0
    I'm preparing to purchase parts for and build my first gaming desktop.

    The two games that I'd like to play on really good / max settings are:

    1. Starcraft 2
    2. Fallout 4

    With that in mind, what would you recommend? Let's assume I have two budgets: one of $2,000 or under, and another with no limit.

    This site suggests that such a computer could be built for Fallout 4 at just over $1,000: http://newbcomputerbuild.com/newb-computer-build/build-a-gaming-pc-build-to-play-fallout-4/

    Would that also apply to SC2, or does it scale poorly? SC2's demands can vary depending on how many units/objects are in play...
     
  2. TheLordOfChaos201

    Joined:
    Jul 2, 2011
    Messages:
    1,739
    Resources:
    0
    Uhm... Oblivion's graphics were so high that my PC glitched
     
  3. BlargHonk

    Joined:
    Mar 31, 2009
    Messages:
    1,119
    Resources:
    0
    In short: don't.

    In long: the GPU market has been stagnant in performance since 2012, with only a roughly 33% gain going from GK104/Tahiti to GK110/Hawaii, and again from GK110/Hawaii to GM200/Fiji. This is because TSMC produced an especially horrific 20nm node that was unsuitable for any performance applications. However, their 14nm is by all accounts alright, and the first cards are scheduled to launch in roughly six months. That will very likely deliver a straight-up doubling of performance, greater than the jump from GT200B/RV790 to GF100/RV870, which was already an especially large one.
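To make the compounding concrete, here is a quick back-of-the-envelope sketch (the percentages are the poster's claims; the arithmetic is only illustration):

```python
# Compound the claimed ~33% per-generation gains and compare them against
# the claimed straight doubling expected from the upcoming node shrink.
gen_gain = 1.33

two_generations = gen_gain ** 2   # GK104/Tahiti -> GK110/Hawaii -> GM200/Fiji
node_shrink = 2.0                 # claimed jump for the first new-node cards

print(f"two 33% generations: {two_generations:.2f}x")    # ~1.77x
print(f"claimed node-shrink jump: {node_shrink:.1f}x")
```

In other words, by this claim a single node-shrink generation would beat the last two generations combined.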
     
  4. don_svetlio

    Joined:
    Nov 30, 2011
    Messages:
    1,421
    Resources:
    3
    Models:
    1
    Skins:
    1
    Maps:
    1
    I strongly suggest using PCpartpicker and posting on either LinusTechTips or OC3D
     
  5. Dr Super Good

    Spell Reviewer

    Joined:
    Jan 18, 2005
    Messages:
    26,100
    Resources:
    3
    Maps:
    1
    Spells:
    2
    SC2 is not that demanding. It only uses 2 cores, so any multi-core CPU at 3 GHz and a modern gaming graphics card (not even high-end) will max the game effortlessly, even at highish resolutions. Just remember to run the 32-bit client, as something is not quite right with the 64-bit client (probably memory bandwidth or cache related).

    Fallout 4 is a piece of trash when it comes to running on PCs, due to being aimed at consoles and using an engine developed a decade ago that was hacked at to keep up with modern standards. You will probably never be able to max it out. Anything that runs SC2 well should run Fallout 4 reasonably, but in reality you will probably need something much better, and the game will still suffer low frame rates and other performance problems no matter how powerful your system is.

    I would recommend a fast (3 GHz+) Intel i5 or i7 that is at least a quad-core processor, and an NVidia 960 or 980 depending on how much you want to spend. Memory is not so important, but you would probably want anywhere from 8 to 16 GB depending on what is available. Any 1+ TB mechanical drive would work; however, if you want fast initial load times (SC2) or seamless chunk transitions (Fallout 4), consider an SSD for the OS and game storage, as those practically eliminate seek and read delay.

    AMD also offers a number of good CPU and GPU solutions; after all, they make the internals of both the Xbox One and PlayStation 4. For the same performance they can often be cheaper, so unless you have brand bias you should take a look. I am, however, unfamiliar with their specific range, so cannot give any recommendations. The processor will need a slightly higher clock, as AMD processors generally perform worse than Intel processors cycle for cycle. Graphics-wise, AMD and NVidia have highly comparable solutions with often similar performance, so I recommend looking at various benchmark sites and judging which is better for yourself.
     
  6. Here's a tutorial on how to build a good gaming PC.
    https://www.youtube.com/watch?v=DGaID2fSFXg
     
  7. BlargHonk

    Joined:
    Mar 31, 2009
    Messages:
    1,119
    Resources:
    0
    hahhahaha holy fuck, I haven't laughed so hard in a while. The $300 RAM with a 50% price premium over equivalent sets. The $130 shitty CLC barely better than a $30 chunk of metal. The $700 CPU that performs the same as the $300 model. The $150 case that is known to have terrible airflow. The crappy SSD that leverages brand over performance and is priced above its betters. If I wanted a mediocre PSU I'd just buy EVGA; not to mention a thousand-watt unit is insane overkill for that system. MSI is renowned for making motherboards that fry just after warranty.

    The only thing missing is some dipshit putting a sound card in.
     
  8. Dr Super Good

    Spell Reviewer

    Joined:
    Jan 18, 2005
    Messages:
    26,100
    Resources:
    3
    Maps:
    1
    Spells:
    2
    They probably would if newer versions of Windows supported them directly, or if Corsair made one. They would probably even throw in Corsair aerofoils with Corsair stripes to make it go faster, if they existed.
     
  9. don_svetlio

    Joined:
    Nov 30, 2011
    Messages:
    1,421
    Resources:
    3
    Models:
    1
    Skins:
    1
    Maps:
    1
    All you will ever need for current games at 1440p

    PCPartPicker part list / Price breakdown by merchant

    CPU: Intel Core i5-6600K 3.5GHz Quad-Core Processor ($273.98 @ Newegg)
    CPU Cooler: Cooler Master Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler ($19.89 @ OutletPC)
    Motherboard: Gigabyte GA-Z170-HD3 ATX LGA1151 Motherboard ($104.99 @ SuperBiiz)
    Memory: A-Data XPG Z1 16GB (2 x 8GB) DDR4-2400 Memory ($68.99 @ Newegg)
    Storage: Samsung 850 EVO-Series 250GB 2.5" Solid State Drive ($77.88 @ OutletPC)
    Storage: Western Digital Caviar Blue 1TB 3.5" 7200RPM Internal Hard Drive ($49.98 @ OutletPC)
    Video Card: EVGA GeForce GTX 980 Ti 6GB FTW ACX 2.0+ Video Card ($646.98 @ Newegg)
    Case: Fractal Design Define S ATX Mid Tower Case ($71.99 @ SuperBiiz)
    Power Supply: EVGA SuperNOVA G2 650W 80+ Gold Certified Fully-Modular ATX Power Supply ($59.99 @ Newegg)
    Monitor: Acer G257HU smidpx 60Hz 25.0" Monitor ($259.99 @ B&H)
    Keyboard: Cooler Master OCTANE Wired Gaming Keyboard w/Optical Mouse ($34.99 @ Newegg)
    Total: $1669.65
    Prices include shipping, taxes, and discounts when available
    Generated by PCPartPicker 2015-12-11 18:33 EST-0500
     
  10. BlargHonk

    Joined:
    Mar 31, 2009
    Messages:
    1,119
    Resources:
    0
    It's shit.
    Skylake manages to be worse than Haswell.
    A-Data has terrible QC.
    The 980 Ti is doing worse than the Fury X in DX12 games.
    The keyboard choice is terrible; generally it's useless to buy a keyboard between $15 and $80, as membranes don't get any better and decent switches usually start past $80.
    Acer panels usually have flaws like backlight bleeding.
    http://pcpartpicker.com/p/7QmFRB
     
  11. don_svetlio

    Joined:
    Nov 30, 2011
    Messages:
    1,421
    Resources:
    3
    Models:
    1
    Skins:
    1
    Maps:
    1
    I think you are mighty confused, friend.

    Skylake is around 10% faster than Haswell on average. No idea where you got your info from, but it is factually incorrect. - http://cpu.userbenchmark.com/Compare/Intel-Core-i5-6600K-vs-Intel-Core-i5-4690K/3503vs2432

    A-Data is more than fine - typically, RAM is the least of your concerns, as RMAing faulty RAM is fairly easy.

    GM200's DX12 performance was somewhat addressed, and unless you are going with SLI or CF (where CF scales better, allowing Fury X CF to surpass 980 Ti SLI), the GM200 chip is a better choice.

    The Octane is a good option when you don't want to overspend - things like Razer and other boutique brands are mostly plagued by short lifespans.

    Acer panels are okay - their laptops are garbage but the screens are quite good. Just look at the Predator they released recently.

    Finally, try to at LEAST cite sources when discussing.
     
  12. Deathcom3s

    Joined:
    Dec 24, 2007
    Messages:
    1,282
    Resources:
    1
    Models:
    1
    The Fury X is only doing better in DX12 in a single game, one that is currently in alpha and is sponsored by AMD; hardly a reliable metric for measuring DX12 performance. There is currently no reliable way for anyone to objectively test DX12 performance. The Fury lineup is disappointing and overpriced (this is coming from someone who has owned AMD/ATI cards for the last 8 years). Honestly, I would hold off on a 980 Ti though. Pascal is slated for release within the next year, which brings a huge slew of improvements. We've been stuck on the same 28nm node for a while, and Pascal means a die shrink, plus a rumored 16GB of HBM2 VRAM. We should finally start seeing some big improvements in the GPU world again. I would get a much cheaper GPU that will hold you over until Pascal is released, then go all out once it's out.

    Honestly, that build looks perfectly fine. Sure, there are better keyboards out there (I love my mechanical keyboard), but if you're just looking for a budget keyboard it's perfectly serviceable.
     
  13. don_svetlio

    Joined:
    Nov 30, 2011
    Messages:
    1,421
    Resources:
    3
    Models:
    1
    Skins:
    1
    Maps:
    1
    The Fury X and Fury prices were cut by $100 each recently, I believe.

    The Fury is about $30-50 more expensive than a 980 while offering 10% more performance.

    The Fury X is around $50-70 cheaper than reference 980 Tis (which it outperforms, due to better thermals and no OC capability on the NVidia stock cooler).

    16GB of HBM2 is for Quadros due Q1 2016 - gaming Pascal is due Q3 2016, with Arctic Islands in Q2 2016.

    Most cards will be 8GB of HBM2 for the high end and GDDR5X for anything below a 1080/R9 490X.
     
  14. Deathcom3s

    Joined:
    Dec 24, 2007
    Messages:
    1,282
    Resources:
    1
    Models:
    1
    Ah, that's all news to me. The price cut on the Fury lineup is a game changer, they simply weren't worth the price they were charging at launch.
     
  15. Velmarshal

    Joined:
    Mar 9, 2008
    Messages:
    1,531
    Resources:
    0
    Totally, especially with the gigantic gains we've seen generation to generation in the past few years; it's an absolute deal breaker.

    Source: My butthole

    Hmm, maybe if they included a mouse in the combo it would be worth it? No?
    Sidenote: for a build of this caliber, a cheapass set like the Octane isn't a sensible choice.

    Oh man, gotta get dem performances for that one alpha game and a fucking benchmark.
     
  16. BlargHonk

    Joined:
    Mar 31, 2009
    Messages:
    1,119
    Resources:
    0
    In synthetics, maybe; in real-world usage it just ain't cutting it. In workstation applications it's a 6% increase; in gaming it's a 1% decrease, mainly due to the high latencies DDR4 currently requires. A similar issue was seen when DDR3 first launched.
    Bull; RAM degrades over time regardless. A-Data has a higher tolerance for dead cells than, say, G.Skill, Crucial, or Corsair, which means less capacity left in the designated reserve.
    Addressed as in "we promise to improve our software-enabled async until we add a hardware solution into Pascal, like Fermi had"? Because that's all I've heard from NVidia, other than them begging Stardock not to include async support in the engine. Anyway, go check the Battlefront and Blops 3 benchmarks as well. The Fury X is only behind in GameWorks titles. Not to mention NVidia seems to be gimping the Kepler series now, with a 780 Ti on par with a 960 in a few games, which is totally absurd given their specs.
    Cooler Master is renowned for cheap, shitty 'gamer' products. If you want a keyboard, you get a cheap Logitech or Microsoft, or shell out for a Ducky.
    hahahhahahha, you mean the monitor with massive bleed that regularly ships out with dead pixels? Go read up on it, mate.
    Like you did?

    Except it isn't. At all.

    And did you see the Fable Legends benches? http://www.extremetech.com/gaming/2...o-head-to-head-in-latest-directx-12-benchmark
     
  17. Deathcom3s

    Joined:
    Dec 24, 2007
    Messages:
    1,282
    Resources:
    1
    Models:
    1
    Last edited: Dec 14, 2015
  18. Dr Super Good

    Spell Reviewer

    Joined:
    Jan 18, 2005
    Messages:
    26,100
    Resources:
    3
    Maps:
    1
    Spells:
    2
    That is not entirely correct. They recently discovered the game performs better when using an NVidia and an AMD card together than with two cards from either vendor alone. It is fastest with AMD as the main card, but using NVidia as the secondary gives more performance than using the same AMD card again. As such, NVidia is still very competitive.

    Additionally, most games are optimized for AMD cards at the moment. You have Microsoft and Sony to blame for their AMD-based consoles.

    A 1% decrease is what you get for running music in the background. The only reason not to go with it would be cost, as the performance is on par if not slightly better. I will however admit that the gains might be nowhere near as large as they boast, as always.

    I had no problem with early DDR3. I still use a first-generation i7 processor with the very rare tri-channel RAM configuration, and it was still leaps and bounds faster than the Core 2 Quad it was designed to supersede.

    The async issue exists because it is a new feature, which AMD developed. Only AMD's newest cards can actually do it, so do not expect many games to take advantage of it, since game consoles are still stuck using older-generation AMD hardware. Additionally, hardware support for it is not required, as the specification only defines behaviour and not really the implementation.

    The main problem they were complaining about is that when they tried to use a feature which NVidia advertised as supported, it did not behave as expected. Bad performance was not the problem; the complete lack of performance was, since the feature did not work as specified (they could not get performance metrics from it).

    How come AMD can do it yet NVidia cannot? Because the specification was written largely by AMD and around their desired hardware features. One can probably thank the Xbox division for that, since they have massive deals with AMD. As such, AMD probably already had the ideas and implementation worked out long before it became part of the standard. When it did, NVidia was left to wonder what on earth the feature meant, what the point of it was, and how to implement it without redesigning their cards from scratch or lowering performance in other areas.

    I believe NVidia thought the feature would hardly ever be used, after all most game companies target Xbox One and PS4 so cannot use the feature. They were also pretty close to their final product design so they could hardly make large changes. As such they tried to hack together a software implementation, which did not work probably because no one but AMD knows how the feature is "meant" to work.

    The 980 Ti holds its own against AMD's cards of the same tier. The differences are so small that they can be considered trivial.

    Fable Legends is a Microsoft game designed to perform well on the Xbox One. As such it is optimized around AMD cards so one can expect AMD to do well. The fact NVidia is still competitive with it shows that their cards do still work.
     
  19. BlargHonk

    Joined:
    Mar 31, 2009
    Messages:
    1,119
    Resources:
    0
    Yeah, I saw that; it was some voodoo fuckin' shit, especially the mixed 7970 and 680 graph where, if the 680 was the master, it would actually drop in performance.
    That wasn't early DDR3, mate. I'm talking Intel P35 boards with 65nm first-gen Core 2s, back when 2GB kits were $300+. DDR2 outperformed it at first because the frequency boosts were offset by the hilariously higher timings.
    This shows me that you haven't a bloody clue what it actually entails. It's essentially multithreading shaders using the queue process, breaking things into sectors.
    GCN 1.0 does it fine; VLIW4/5 cannot. You might be confusing the older APUs with what's in the consoles: those were VLIW4, while the consoles use GCN. NVidia also theoretically supported asynchronous compute for CUDA via drivers going back to Fermi, but there was no hardware implementation.
    It's a basic feature both knew about; NVidia thought they could cut corners and use a software solution because it was originally just a niche for CUDA programmers. They're going to have to redesign their queueing engine from the ground up, which won't be fun given their thread/warp model.
    They're already being used in console games, bruv. Battlefield 4 was the first one, IIRC.
    It's funny: if you use GPUView on the game, you can see that NVidia gets calls to put things in the async queue, but then just queues them normally as if they weren't async. That said, the game wasn't a heavy user of the queue; neither was Ashes, for that matter.

    Stardock is partnered with AMD; Oxide, the engine programmers working with Stardock on it, are not. http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995
     
  20. Dr Super Good

    Spell Reviewer

    Joined:
    Jan 18, 2005
    Messages:
    26,100
    Resources:
    3
    Maps:
    1
    Spells:
    2
    Was not aware any Core2 supported DDR3, my mistake.

    This shows me you "haven't a bloody clue of what it actually entails". It is actually closer to Hyper-Threading, with no real parallelism occurring. Only one command queue executes at a time, but another command queue can execute with no overhead during cycles when the main command queue's job blocks. This lets hardware manufacturers lower the overhead of context-switching pipelines to virtually free, making pipeline pre-emption a viable software technique. It also allows the functional hardware to be utilized more heavily in cases where commands result in stalls.
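The interleaving described above can be sketched as a toy cycle model (purely illustrative; the queue contents and function names are invented, and this is not the Direct3D 12 API):

```python
def serial_cycles(primary, secondary):
    """No async: run the primary queue to completion (stall cycles are
    wasted), then run the secondary queue afterwards."""
    return len(primary) + len(secondary)

def async_cycles(primary, secondary):
    """Async: a secondary op executes for free in any cycle where the
    primary queue is stalled; leftover secondary work runs afterwards.
    (Assumes the secondary queue is all 'work' ops, for simplicity.)"""
    cycles, done = 0, 0
    for op in primary:
        cycles += 1
        if op == "stall" and done < len(secondary):
            done += 1   # the stalled cycle is absorbed by the other queue
    return cycles + (len(secondary) - done)

primary = ["work", "stall", "work", "stall", "stall", "work"]
secondary = ["work", "work"]
print(serial_cycles(primary, secondary))  # 8 cycles back to back
print(async_cycles(primary, secondary))   # 6: both secondary ops hide in stalls
```

The point of the model is that total throughput improves only to the extent that the primary queue actually stalls; a queue that never blocks leaves nothing for the second queue to fill.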

    The specification defines a number of required default command queues and any number of custom pipelines. Although aimed at hardware acceleration and recommended for implementation in hardware, that is not mandatory. Also, no hardware implementation does it perfectly, with many having limits (such as 16 or 32 queues for the Xbox One).

    AMD must have messed up their own press releases then. They clearly stated that async was only fully supported by their newest cards, or possibly that what the game does is only supported on their newest cards. They are probably referring to some limitation of the command queues, since the Xbox One and PlayStation 4 likely have worse or fewer optional features compared with their newer cards.

    You must remember the feature comes at a cost. It might reduce performance in other areas due to the extra logic required. If a non-standard feature is not going to be used, you would be stupid to keep it in.

    For the most part a driver solution could work, with at most some performance loss. Only if you start to depend on the feature, specifically for mixing Direct Compute results into the graphics pipeline, does it become impossible. This is why Ashes of the Singularity specifically does this: it needs a full hardware implementation, because otherwise the context switching is impossible. The fact that it simply did not work means the feature was not implemented properly. It should still have worked, although probably very badly.

    Anything using Direct3D 12 will use command queues. Whether they gain anything from it is entirely another question. Taking advantage of some form of asynchronous hardware implementation is not required. The only thing they are guaranteed by using Direct3D 12 is the order in which command queues run (which is where NVidia probably failed). AMD is selling a specific implementation of command queues, which is their "asynchronous compute" or whatever it is called.

    Once again, asynchronous compute is an AMD implementation of Direct3D 12 command queues. How it is implemented is not defined, with the actual documentation hinting that a GPU could in theory run each command queue completely in parallel (not only during pipeline stalls).

    As long as the synchronization between command queues remains correct it still complies. Even if priorities do not work that well.

    Stardock's involvement will be shaky as always with such projects. Usually they help make it, but then leave the other company to look after it (aka Demigod). They seem more like a team of consultant programmers than an actual software company.
     