
Fermi based GTX 480/470 to hit Market March 26th

Level 15
Joined
Mar 31, 2009
Messages
1,397
Probably $400+

In other news, ATI just released the HD5830: http://www.tomshardware.com/reviews/radeon-hd-5830,2564.html

In stores in maybe a day or so. (Note: the review is using an HD5870 that ATI repurposed as an HD5830, so the power benchmarks may be off, and the real card is shorter.)

Nvidia needs to get their cards out fast. ATI already has the HD 5450, 5570, 5670, 5750, 5770, 5830, 5850, 5870, and 5970, ranging from $50 to $600.

Also, in an article you may have missed: http://www.tomshardware.com/news/nvidia-fermi-gf100-geforce-gpu,9709.html

Gonna be a long while before they come down to decent prices I think.
 
Level 45
Joined
Jun 3, 2005
Messages
6,982
I have a question..
What's the point of getting the newest video card unless you have a terribly shitty one?
I notice a lot of PC people who are really into it already have some of the best cards, yet they try/buy the new ones that come out every so often... I don't see the point of staying on the latest card when it doesn't really make that much of a difference in today's modern games :/
 
Level 15
Joined
Dec 24, 2007
Messages
1,403
I would say it's mostly because they want to be able to say they have the best video card on the market at the time. xD

I mean, if you do a lot of 3D editing, game design, etc. getting a brand new top of the line video card certainly wouldn't hurt, but you definitely don't need to break the bank getting the biggest and best video card you can.

I personally just upgraded from an aging 7900GS, as it just wasn't cutting it when it came to doing 3D level design for UT3 mods and such, and I needed something better. In my case, the video card I was using was almost 4 years old anyway, so I had a pretty valid excuse for upgrading.

Honestly, if you don't do a lot of 3D-intensive work and aren't a huge PC gamer, you don't really need to constantly upgrade your video card; maybe once every 2 or 3 years if absolutely necessary.
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
I am going to get an HD5850 or equivalent when it hits $200 (current price is $300).


My HD4870 can't max BFBC2 D:

Or run it in DX11 (it only supports 10.1)


Still, it will be a long time before I replace this; it took a year for the HD4870 to hit $200.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
AF has virtually no performance cost as far as I can tell; even on old graphics cards I could set it to max and never noticed it cap the frame rate.

AA, on the other hand, is very demanding, and especially going from 8x to 16x you can notice serious performance decreases of 33%+ less framerate (e.g. 60 FPS dropping to around 40).

Nvidia's AA is slower than ATI's, yes, but that's because it is higher quality. If you change the performance options of AA you can get it to run quite a bit faster but look poorer. Most noticeable is AA on billboards; even my 275 GTX lags in WC3 if you have a couple of them full screen, due to the demanding gamma-correct AA. Turn that option off and I think my frame rate does not even drop.

The main reason for the new graphics cards is that, unlike ATI, Nvidia is targeting professionals and not just home users. Nvidia's cards have whole new support for programming, allowing them to be used more efficiently for certain tasks.

What tasks other than gaming can use dozens of top-range graphics cards? Well, there is the movie industry (they are absolutely perfect for render farms and doing video processing), there is image recognition (all that power is needed to compare vast amounts of video data at real-time rates), and finally simulations (all that power is perfect for simulating the behaviour of physical systems when designing things).
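As a rough illustration of the kind of data-parallel work a render farm or video pipeline would hand to one of these cards, here is a minimal CUDA sketch of a per-pixel brightness pass; the kernel name and values are made up for the example:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Scale every pixel of a greyscale frame by a constant gain.
// Each GPU thread handles exactly one pixel.
__global__ void adjustBrightness(const float* in, float* out, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * gain;
}

int main()
{
    const int n = 1920 * 1080;              // one HD frame worth of pixels
    const size_t bytes = n * sizeof(float);

    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    // (real code would copy a frame into dIn with cudaMemcpy here)

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover every pixel
    adjustBrightness<<<blocks, threads>>>(dIn, dOut, 1.2f, n);
    cudaDeviceSynchronize();

    cudaFree(dIn);
    cudaFree(dOut);
    printf("processed %d pixels\n", n);
    return 0;
}
```

Thousands of pixels get processed in parallel per clock, which is why this kind of workload scales across as many cards as you can afford.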

Do not forget all the serious gaming establishments, which will buy them too.

All in all, home users like us are actually only part of Nvidia's sales. They are really aiming these monster cards at professionals or very serious hobbyist gamers.

Tie in an i7 hex-core processor and lots of RAM, and with one of these babies you could literally play a game that looks better than post-2000 movies.
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
Higher quality AA? Quit your fanboy gibbering and look at how AA works. Nvidia is just slower in the AA department.

ATI supports all of what you just said using DirectCompute, which is part of the DX11 standard. Also, go look at laptop cards; Nvidia is getting their ass handed to them.

I'd rather program for DirectCompute than CUDA any day; it's non-proprietary.


Also, are you going to buy yours for a render farm? I doubt it.
 
Level 7
Joined
Apr 3, 2009
Messages
317
I have a question..
What's the point of getting the newest video card unless you have a terribly shitty one?
I notice a lot of PC people who are really into it already have some of the best cards, yet they try/buy the new ones that come out every so often... I don't see the point of staying on the latest card when it doesn't really make that much of a difference in today's modern games :/

When you have lots of money, you spend it :) It's how we people work :D
And screw the new Nvidia cards.. I'm sticking with ATI! .... Maaaaaaybe I'll just buy ONE 470 to try it out :mwahaha:
 
Level 7
Joined
Apr 3, 2009
Messages
317
Nvidia's AA is slower than ATI's, yes, but that's because it is higher quality. If you change the performance options of AA you can get it to run quite a bit faster but look poorer. Most noticeable is AA on billboards; even my 275 GTX lags in WC3 if you have a couple of them full screen, due to the demanding gamma-correct AA. Turn that option off and I think my frame rate does not even drop.

... IMPOSSIBLE for a 275 GTX to lag in WC3, IMPOSSIBLE I SAY! Even my HD 4530M (512MB DDR3) stays at 58-60 FPS no matter what: maxed settings, 1366x768, everything, and always 58-60 unless you spam a map full of trash. Impossible for a 275 GTX to lag in WC3 no matter what settings! You lie! Never has my HD 5770 lagged in WC3 or WoW or CoD6, and they are all maxed out at 1680x1050 with several of them running. Lies lies lies!
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
BlargHonk... please read up before posting.

ATI cards do not support running C++ code natively; you can only interface with them via DirectCompute or OpenCL.
Nvidia's new cards let you run C++ code on them directly, without DX11 or OpenCL having to be loaded, making them much more like massive CPUs than GPUs.
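For context, the "C++ on the GPU" claim refers to CUDA C++ picking up features like classes, templates and device-side member functions on Fermi. A rough sketch of what that looks like (the struct, kernel and values here are hypothetical, not a definitive feature list):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// A plain C++ struct with a member function, usable directly in device code.
struct Particle {
    float x, v;
    __device__ void step(float dt) { x += v * dt; }
};

// A function template, also compiled for the GPU.
template <typename T>
__device__ T clampPositive(T value) { return value < T(0) ? T(0) : value; }

// One thread advances one particle of a toy simulation.
__global__ void integrate(Particle* p, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        p[i].step(dt);
        p[i].x = clampPositive(p[i].x);
    }
}

int main()
{
    const int n = 1024;
    Particle* p;
    cudaMalloc(&p, n * sizeof(Particle));
    // (real code would initialise the particles before stepping them)
    integrate<<<(n + 255) / 256, 256>>>(p, 0.01f, n);
    cudaDeviceSynchronize();
    cudaFree(p);
    printf("stepped %d particles on the GPU\n", n);
    return 0;
}
```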

Nvidia's AA is higher quality. How? Well, if you read up about AA, you find that some checks will prevent artifacts and others will make the result more true to proper supersampling. I am sure their AA performance is near identical on identically fast graphics cards when AA quality is set to low (AA transparency supersampling turned off and such).

I would like to remind you that at the time of SC2's beta release, the best-of-the-best DX11 ATI cards were completely unable to run SC2 with AA enabled, while Nvidia's graphics cards had no problem running it with 16Q AA. That shoots your AA quality argument down the drain, as ATI was not even able to turn it on for SC2 while Nvidia had no problem.
 
Level 15
Joined
Nov 1, 2004
Messages
1,058
The GF100 is definitely a monster when it comes to geometry processing, but with such a high TDP and so much heat generated for not a huge performance advantage over the 5870, it looks like it's worth waiting.

As soon as TSMC gets their 40nm act together and nVidia spins the next generation of Fermi/GF100 chips, things might look better.
 
Level 15
Joined
Dec 24, 2007
Messages
1,403

Have you not read any of the dozens of reviews? The 480 costs more, offers about a 5% performance increase on average (and does worse in some respects), uses an additional 150W of power compared to the HD5870, and reaches temps of around 97C under load. And it sounds like a leaf blower (people are calling it the next generation of the FX 5800 Ultra).

Give it some time, let them do some revisions, and I'm sure it'll do fine. But for being 6 months late, I'm unimpressed.
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
Good chart for this:

[Chart: DirectX 11 Performance]

[Chart: Average Power Consumption]

[Chart: Efficiency Index]


The benchmark they use relies heavily on tessellation; this is actually the best-case scenario for the GF100, and it still loses.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
I love how biased these benchmarks can be.

Firstly, the cards are brand new, so this could all be a driver problem; comparisons only give current performance.
It's like comparing the GeForce 280 to the Radeon 5970 in StarCraft 2 performance on max settings with 16x AA at the start of SC2's beta.
The 280 would win massively because the drivers of the Radeon card did not even let you enable AA in the game. As such, I could put a "void" instead of an FPS bar due to the drivers not even supporting the test. It is so easy to manipulate values to be whatever you want.

Also, what game did they use to make this benchmark? I have never heard of Unigine Heaven 2.0. There will always be strong points and weak points of graphics cards. It could be that they chose a particularly strong point of the Radeon or a weak point of Nvidia. Perhaps tessellation is not its strong point, but what about physics? Have they even tested the claimed ability to run C++ code on the GeForce 400 series, or was that dropped?

All in all, I doubt the reliability of that benchmark, due to it being an initial release (drivers can be a problem) and it not exploring what the GeForce 400 series was designed for (more general processing and physics).
 
Level 15
Joined
Mar 31, 2009
Messages
1,397
Unigine Heaven 2.0 is a GPU benchmark. Drivers are always an issue at launch, but they have had 6 months to work that out, and Nvidia themselves said that the tessellation engine on the GF100 is superior to the HD 5000 series'.


And they are doing that later: a GTX 480 vs HD 5970 vs i7 980X comparison, all running C++.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
There is a chance that the tessellation engine is better, just not in that benchmark. Did they even compare image quality? I am pretty sure the two cards do not produce 100% identical render output. For example, if it is producing higher quality tessellation, it is to be expected that its performance would be lower.
 
Level 15
Joined
Dec 24, 2007
Messages
1,403
Here are some more charts, from HardOCP.

The GTX 480 certainly has a place in professional workstations, but for gaming you'd be much better off going with an HD 5870 or 5970, assuming you can get a good price (Provantage has 5970s for ~$600, compared to Newegg's $700 average).

The 5970 and GTX 480 use about the same amount of power; with the 5970 being a dual-GPU card, however, that shouldn't be surprising. The fact that the 480 draws about 500W is pretty ridiculous for a single-GPU card.

As far as tessellation goes, I don't really care at this point. Tessellation isn't even close to becoming mainstream; maybe in 2 years it will be, but by then we'll have something like the HD 6xxx series or GTX 6xx, and I'm sure those cards will have a heavy emphasis on tessellation. And for the record, DSG, you should try to keep up with things: Nvidia has been boasting about Fermi's superior tessellation abilities for a while now, and as those Unigine 2.0 benches show, it loses to the 5970 in that regard.
 

Dr Super Good

Spell Reviewer
Level 64
Joined
Jan 18, 2005
Messages
27,198
Define "superior tessellation"...
Like I said it might be that the card produces it with a superiour quality or a simple driver problem with that particular test. In all tests they only asume that the outputs are identical frame for frame which is not always true.

Already even the ATI cards are too pricy, with a cost of 600-700$, and nvidia at equally outragious prices. If you have not noticed nvidia was planning to axe their tessla series and replace it with cards from the main stream which probably has happned (why the specs are so insane). The ATI cards are probably also aimed at a simlar market but also at rich people.

Yes their visuals may be top range but really they are too expensive for now and most games still are not being made in DX10 let alone DX11. Maybe in a few years this will change but people should probably stick to puying DX10/10.1 cards from ati and nvidia.
 
Level 15
Joined
Dec 24, 2007
Messages
1,403
DX10 failed because Vista failed; it simply wasn't worth it to make a product that was restricted to a failed platform. DX11 is poised to be much more popular among developers.

By superior tessellation I mean the fact that Nvidia has been saying Fermi is able to handle tessellation better, hence the claims of superior tessellation ability.
 