
Distributed computing for video gaming?

Discussion in 'Computer Tech' started by sethmachine, Jan 20, 2016.

  1. sethmachine

    Joined:
    Aug 7, 2013
    Messages:
    1,318
    Resources:
    0
    Let's suppose I have access to a bunch of remote/virtual machines on my network. They could be Windows or Linux (RedHat, CentOS, etc.) boxes.

    Is there a technology that would let me combine them into a computing grid for high-performance video gaming, rather than having to build a single local machine out of relatively expensive equipment?

    For example, if I want to play SC2 on Highest settings, it appears my only option would be to:

    1. Purchase a relatively expensive local machine with maxed out graphics cards, processors, etc.

    If I could instead combine the computation of N machines, I'd have a second option:

    2. Use a network of relatively cheap machines to perform the computations needed for the highest settings.

    Option 2 would also probably scale much better and, if it could work, would get around having to wait for newer hardware to be developed.

    This is somewhat like "cloud gaming," where the host/server does all the computation and processing, and the client simply gets to play the game without needing any crazy hardware or specs.
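
    To picture what I mean, here is a toy sketch. Everything in it (host, port, the text "frames") is made up for illustration; real cloud gaming streams compressed video, but the shape of the loop is the same: the server does the heavy work, and the client just forwards input and displays results.

        # Toy cloud-gaming loop: server does the work, client stays thin.
        import socket
        import threading
        import time

        HOST, PORT = "127.0.0.1", 5000  # placeholder address for the sketch

        def server():
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while True:
                        key = conn.recv(1)  # one byte of "controller input"
                        if not key:
                            break
                        # Stand-in for expensive simulation + rendering.
                        frame = f"frame rendered for input {key!r}".encode()
                        conn.sendall(len(frame).to_bytes(4, "big") + frame)

        def client():
            with socket.create_connection((HOST, PORT)) as conn:
                for key in b"wasd":  # pretend key presses
                    conn.sendall(bytes([key]))
                    size = int.from_bytes(conn.recv(4), "big")
                    print(conn.recv(size).decode())  # "display" the frame

        threading.Thread(target=server, daemon=True).start()
        time.sleep(0.2)  # crude: give the server a moment to start listening
        client()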

    Is this possible to do?
     
  2. GhostWolf

    Joined:
    Jul 29, 2007
    Messages:
    4,952
    Resources:
    2
    Tools:
    1
    Tutorials:
    1
    Theoretically it's possible to some extent, if a game uses OpenCL.
    As to it being used in actual programs, dream on.
    It's not like games require strong computers anyway. The only expensive part you need is a decent GPU.
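
    The hook, for what it's worth, is that an OpenCL program just sees a list of compute devices and queues work to whichever ones exist. A minimal sketch of enumerating them, assuming the pyopencl package and an installed OpenCL runtime (neither is a given):

        # List every OpenCL platform/device this machine exposes.
        # Stock OpenCL only sees local devices; pooling devices from other
        # machines needs an extra forwarding layer, which is the hard part.
        import pyopencl as cl

        for platform in cl.get_platforms():
            print(f"Platform: {platform.name}")
            for device in platform.get_devices():
                print(f"  {device.name}")
                print(f"    compute units: {device.max_compute_units}")
                print(f"    global memory: {device.global_mem_size // 2**20} MiB")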
     
  3. PurgeandFire

    Code Moderator

    Joined:
    Nov 11, 2006
    Messages:
    7,429
    Resources:
    18
    Icons:
    1
    Spells:
    4
    Tutorials:
    9
    JASS:
    4
    Ah, like NVidia GRID? It has always been an interesting project, and I look forward to seeing how they'll get the ball rolling once they iron out the details. I don't think they have multiplayer support yet, which is a pretty big reason not to invest too much in it for now.

    I'm kinda curious how they plan on delivering their so-called "lag-free" service (that is in the hands of ISPs for the most part). It seems pretty demanding: a lot of people may need to upgrade their router or potentially even their internet service. GPU prices are not too bad if you just want to support most games. However, I can see it becoming more of a thing as games become more demanding.

    I could see it really kicking off if they get a dedicated title, i.e. a really, really computationally expensive game to blow people's minds. Is it in NVidia's best interest? Idk. But I could see people getting hyped if that were the case. For now, it is cool as research, but it isn't as "portable" as people would expect. South Korea and Scandinavia will probably support it just fine, but it'll be a while before most of the US catches up (Google will probably play a big role in this).

    EDIT: As for your own idea, how many people do you plan on servicing? It'll be a pretty expensive endeavor. NVidia can do it because they pretty much control the GPU market and have the resources, but offering fast connection speeds isn't an easy or inexpensive task.
     
  4. Dr Super Good

    Spell Reviewer

    Joined:
    Jan 18, 2005
    Messages:
    26,099
    Resources:
    3
    Maps:
    1
    Spells:
    2
    Yes and no.

    In theory one could turn them into a computational cloud and use them to run some fairly impressive games (with far more happening than a single computer could cope with). An example is an MMORPG server cluster: the servers communicate general state information with each other (e.g. player accounts and progress) while breaking the world simulation into small parts (e.g. each running separate raid instances, or separate world continents). This scales very well and is how all MMORPGs operate. Even Diablo III works like this to some extent, with each server running many sessions and multiple servers working together to allow for thousands of parallel sessions.
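
    A minimal sketch of that sharding pattern (the shard names and the "simulation" are placeholders; real servers do far more per tick):

        # Each shard process simulates its own slice of the world on its own;
        # only coarse, infrequent summaries cross the process boundary, which
        # is why the pattern also scales across separate machines.
        import multiprocessing as mp

        def run_shard(name, ticks, results):
            events = 0
            for _ in range(ticks):
                events += 1  # stand-in for one tick of world simulation
            results.put((name, events))  # coarse state back to the coordinator

        if __name__ == "__main__":
            results = mp.Queue()
            shards = [mp.Process(target=run_shard, args=(name, 100_000, results))
                      for name in ("continent_a", "continent_b", "raid_instance_7")]
            for p in shards:
                p.start()
            for _ in shards:
                print(results.get())  # coordinator merges the summaries
            for p in shards:
                p.join()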

    In reality one cannot link multiple computers together to gain performance in general. Sure, the total available computational power increases, but latency between machines rises and data-path bandwidth drops so dramatically that overall performance is floored. This means you cannot do something like run StarCraft II across 2 computers for better performance, since it depends very heavily on memory bandwidth and communication latency to perform well.

    No, otherwise it would have been done. The problem is that communication between separate computers is orders of magnitude slower than communication within a single computer. For highly parallel applications with minimal communication requirements, this still allows near-infinite scaling potential. However, for highly serial applications with huge communication requirements (e.g. SC2), it provides no scaling at all and would probably degrade performance to the level of a computer from 1992.
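
    In other words, this is Amdahl's law. A quick sketch of the arithmetic (the fractions are illustrative, not measured):

        # Amdahl's law: with parallel fraction p spread over n machines,
        # speedup = 1 / ((1 - p) + p / n). Communication costs only make
        # these numbers worse.
        def speedup(p, n):
            return 1 / ((1 - p) + p / n)

        for p in (0.99, 0.90, 0.50):  # fraction of the work that distributes
            print(f"p={p}: 16 machines -> {speedup(p, 16):.1f}x, "
                  f"1000 machines -> {speedup(p, 1000):.1f}x")
        # p=0.99 scales nicely; at p=0.50 even infinite machines cap out
        # at 2x, which is the "no scaling at all" regime of games like SC2.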

    To quantify the problem, here are some rough approximations. Accessing data from cache takes on the order of nanoseconds and has practically unlimited bandwidth (the cache is part of the CPU). Accessing data from main memory takes on the order of hundreds of nanoseconds and has multiple gigabytes per second of bandwidth (communication through the motherboard). Accessing data across a network takes on the order of milliseconds and has multiple megabytes per second of bandwidth (communication through Ethernet). Each step is a factor of roughly a hundred to ten thousand worse than the one before.
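
    To set those numbers against a game's frame budget, a back-of-the-envelope sketch using the same rough latencies:

        # How many data fetches of each kind fit into one 60 FPS frame?
        frame_budget_s = 1 / 60  # ~16.7 ms per frame

        latencies = {
            "cpu cache": 1e-9,       # ~nanoseconds
            "main memory": 1e-7,     # ~hundreds of nanoseconds
            "LAN round trip": 1e-3,  # ~milliseconds
        }
        for where, seconds in latencies.items():
            print(f"{where:15}: ~{frame_budget_s / seconds:,.0f} accesses per frame")
        # Hundreds of thousands of memory accesses fit in one frame, but only
        # ~16 network round trips; hence no frame-by-frame splitting of SC2.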
     
  5. BlargHonk

    Joined:
    Mar 31, 2009
    Messages:
    1,119
    Resources:
    0
    See, you were OK up until here. I'd have told you about Beowulf clusters and their finer points, such as synchronization, pointed you at applications that are easily distributable over such a cluster, and given you links to various sites dedicated to their support.
    This is where you went off the rails.
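
    For the curious, here's a minimal sketch of the classic scatter/compute/gather job such a cluster is built for, assuming the mpi4py package and a working MPI install (launched with something like "mpiexec -n 4 python sketch.py"):

        # Embarrassingly parallel work: each node handles its own chunk and
        # only talks to the others at the start and the end, the opposite
        # of a game's tightly coupled per-frame state.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            # Split the work into one chunk per node.
            chunks = [list(range(i, 1_000_000, size)) for i in range(size)]
        else:
            chunks = None

        chunk = comm.scatter(chunks, root=0)   # each node receives its slice
        partial = sum(x * x for x in chunk)    # stand-in for the real work
        totals = comm.gather(partial, root=0)  # coarse results come home

        if rank == 0:
            print("sum of squares:", sum(totals))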