
Distributed computing for video gaming?

Level 15 · Joined Aug 7, 2013 · Messages 1,337
Let's suppose I have access to a bunch of remote / virtual machines on my network. They could be Windows or Linux (Red Hat, CentOS, etc.) boxes.

Is there a way or technology that could allow me to put them together into a computing grid for high-performance video gaming, rather than having to build a single local machine that requires relatively expensive equipment?

For example, if I want to play SC2 on the highest settings, it appears my only option would be to:

1. Purchase a relatively expensive local machine with maxed-out graphics cards, processors, etc.

If I could instead combine the computation of N machines, I'd have a second option:

2. Use a network of relatively cheap machines to perform the computations needed for the highest settings.

If it worked, option 2 would probably also scale much better and would get around having to wait for newer hardware to be developed.

This is not unlike "cloud gaming," where the host/server does all the computation and processing, and the client simply gets to play the game without needing any crazy hardware or specs.

Is this possible to do?
 
Ah, like NVidia GRID? It has always been an interesting project, and I look forward to seeing how they'll get the ball rolling once they iron out the details. I don't think they have multiplayer support yet, which is a pretty big reason not to invest too much in it for now.

I'm kinda curious how they plan on delivering their so-called "lag-free" service (which is mostly in the hands of ISPs). It seems pretty demanding: a lot of people may need to upgrade their router or potentially even their internet service. GPU prices are not too bad if you want to support most games, but I can see it becoming more of a thing as games become more demanding.

I could see it really kicking off if they get a dedicated title, i.e., a really, really computationally expensive game to blow people's minds. Is it in NVidia's best interest? I don't know, but I could see people getting hyped if that were the case. For now, it is cool as research, but it isn't as "portable" as people would expect. South Korea and Scandinavia will probably support it just fine, but it'll be a while before most of the US catches up (Google will probably play a big role in this).

EDIT: As for your own idea, how many people do you plan on servicing? It'll be a pretty expensive endeavor. NVidia can do it because they pretty much control the GPU market and have the resources, but offering fast connection speeds isn't an easy or inexpensive task.
 

Dr Super Good
Spell Reviewer · Level 63 · Joined Jan 18, 2005 · Messages 27,197
Is there a way or technology that could allow me to put them together into a computing grid for high-performance video gaming, rather than having to build a single local machine that requires relatively expensive equipment?
Yes and no.

In theory one could turn them into a computational cloud and use them to run some fairly impressive games (with far more happening than a single computer can cope with). An example is an MMORPG server: the machines communicate general state information with each other (e.g. player accounts and progress) while breaking the world simulation into small parts (e.g. each running separate raid instances, or separate world continents). This scales very well, and it is how all MMORPGs operate. Even Diablo III works like this to some extent, with each server running many sessions and multiple servers working together to allow for thousands of parallel sessions.
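
To make the sharding idea concrete, here is a minimal Python sketch (node names and zone IDs are hypothetical, not any real MMORPG's code) of how each piece of the world can be deterministically assigned to one machine, so the machines rarely need to talk to each other:

```python
# Minimal sharding sketch: each world zone (raid instance, continent) is
# owned by exactly one node, so cross-node chatter stays small and rare.
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # assumed cheap boxes on the LAN

def owner_of(zone_id: str) -> str:
    """Deterministically map a zone to the node that simulates it."""
    digest = hashlib.sha256(zone_id.encode()).digest()
    return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

# Every node computes the same mapping without asking any other node.
for zone in ("continent-east", "raid-instance-7", "continent-west"):
    print(zone, "->", owner_of(zone))
```

Because players in one raid instance never touch the state of another, each node can simulate its zones independently and only exchange small, infrequent updates (logins, zone transfers) with the rest.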

In reality one cannot link multiple computers together to gain performance in general. Sure, the total available computational power increases, but the latency gets dramatically worse and the data-path bandwidth dramatically lower, flooring overall performance. This means you cannot do something like run StarCraft II across 2 computers for better performance, since it depends very heavily on memory bandwidth and communication latency to perform well.

Is this possible to do?
No, otherwise it would have been done. The problem is that communication between separate computers is orders of magnitude slower than communication within a single computer. For highly parallel applications with minimal communication requirements, clustering provides near-infinite scaling potential. However, for highly serial applications with huge communication requirements (e.g. SC2), it provides no scaling at all and would probably degrade performance to the level of a computer from 1992.
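
A quick back-of-envelope calculation shows why (all numbers below are illustrative assumptions, not measurements): a game frame is a long chain of dependent state lookups, and every lookup that crosses a machine boundary pays network latency instead of memory latency.

```python
# Rough arithmetic sketch: the same chain of dependent state accesses,
# once against local memory and once across a LAN. Numbers are assumed.
dependent_lookups_per_frame = 10_000   # assumed serial accesses per frame
memory_latency = 100e-9                # ~100 ns to main memory
network_latency = 0.5e-3               # ~0.5 ms LAN round trip

one_box = dependent_lookups_per_frame * memory_latency
split_lan = dependent_lookups_per_frame * network_latency

print(f"one box        : {one_box * 1000:8.1f} ms/frame")    # ~   1 ms
print(f"split over LAN : {split_lan * 1000:8.1f} ms/frame")  # ~5000 ms
```

At roughly 5000 ms per frame the cluster is not faster, it is unplayable, which is the "computer from 1992" scenario above.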

To quantify the problem, here are some rough approximations. Accessing data from cache is on the order of nanoseconds and has practically unlimited bandwidth (the cache is part of the CPU). Accessing data from main memory is on the order of hundreds of nanoseconds and has multiple gigabytes per second of bandwidth (communication through the motherboard). Accessing data across a network is on the order of milliseconds and has multiple megabytes per second of bandwidth (communication through Ethernet). Each step is several orders of magnitude slower than the previous one.
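
If you want to see the gap on your own hardware, here is a small self-contained Python sketch that compares in-process data access against a one-byte TCP round trip over loopback (loopback understates real LAN latency, so the true difference is even larger):

```python
# Minimal latency comparison sketch. The list access includes Python
# interpreter overhead, so it only roughly stands in for memory latency.
import socket
import time

data = list(range(1_000_000))
t0 = time.perf_counter()
total = 0
for i in range(100_000):
    total += data[i]
per_access = (time.perf_counter() - t0) / 100_000
print(f"in-process access  : {per_access * 1e9:10.0f} ns")

# One-byte TCP round trip over loopback, run single-threaded.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
t0 = time.perf_counter()
for _ in range(1_000):
    cli.sendall(b"x")   # client -> "server"
    conn.recv(1)
    conn.sendall(b"x")  # "server" -> client
    cli.recv(1)
per_trip = (time.perf_counter() - t0) / 1_000
print(f"loopback round trip: {per_trip * 1e9:10.0f} ns")
cli.close(); conn.close(); srv.close()
```

Even over loopback the round trip is typically hundreds of times slower than the in-process access, and a real Ethernet hop adds another order of magnitude on top.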
 
Level 15 · Joined Mar 31, 2009 · Messages 1,397
Let's suppose I have access to a bunch of remote / virtual machines on my network. They could be Windows or Linux (Red Hat, CentOS, etc.) boxes.

Is there a way or technology that could allow me to put them together into a computing grid for high-performance...
See, you were OK up until here. I'd have told you about Beowulf clusters and their finer points, such as synchronization, about applications that distribute easily over such a cluster, and given you links to various sites dedicated to their support.
video gaming
This is where the idea falls apart.
 