Preface
A while ago, I worked on a Network / FileIO library for Wurst, a duo of systems to facilitate easy save/load in WC3.
The only decent resources on the subject, which I used as a reference for my own work, are two systems by TriggerHappy: Sync and SyncInteger. I've partially reimplemented them in Wurst, with various additions and adjustments.
During the implementation period, I learned a lot about networking in WC3, as well as some useful tricks. They aren't really documented anywhere, except in libraries, so I would like to share my newfound insight into WC3's internals on this subject with the community.
What is this tutorial about?
This tutorial is going to cover various advanced topics related to networking in WC3.
For those who are unfamiliar with the idea, in a very abstract sense, networking refers to the act of sending something that only one client knows about, to the other clients.
Most mappers never need to think about networking in WC3, because it is handled behind the scenes automatically by the engine, without the mapper having to intervene. This makes it very easy to just write your game logic - but not so easy when you actually have to send something from one client to another, such as the contents of a file that only one client has, which is a necessity if you want to use a codeless (i.e. file-based) save/load system.
If you would like some code examples to read along with this tutorial, you can head over to Wurst's standard library (wurstscript/WurstStdlib2) and look in the following files, and their dependencies:
- network/SyncSimple.wurst
- network/Network.wurst
Without further ado, let's get into it.
How does WC3's networking model work?
Unlike virtually all FPS (and even some RTS) games, WC3 is built using something called deterministic lockstep.
In deterministic lockstep, every player in a game has all the data about the game world, even data the player arguably shouldn't know about. This includes units in the fog of war, other players' selections, the orders they are sending, and sometimes even the messages they send (including ally-only and observer-only chat).
The only data that gets sent over the network are user inputs, such as unit selections, unit orders, chat messages, some button presses, and so on.
Each player then simulates the game world on their own machine, in parallel with the other players, using the current state of the world and the user inputs received from them. This is where the deterministic part becomes really important: given the same starting state and the same inputs, every player is going to arrive at the same simulation result.
At least, usually. Sometimes, due to a bug in WC3 or a logic error on the mapmaker's part, a desync will occur, where some players arrive at one result, and others at another. This makes the simulations diverge, and players will start to see different things on their screens. At this point, they are no longer playing the same game. Usually, WC3 catches this and disconnects the player who desynced, but in extremely rare and bizarre cases, the game may continue to run in multiple desynced states, creating a lot of confusion for unaware players.
If this sounds too abstract or confusing, consider the following real-world analogy as a simple explanation:
Imagine you're playing a game of chess by mail with a friend who lives far away. Each of you starts with the same board configuration, and after that, you mail your moves to your friend, and vice-versa. When he receives your mail, he applies your move, makes his own move, and then mails that move to you. When you receive his mail, you do the same. And so on, and so on. Since chess is a deterministic game, meaning there is no random element, this is all you need to have a synchronized board (i.e. game state), provided that you execute each other's moves correctly.
But suppose you make a move and accidentally write it down as a different move when mailing your friend. He now thinks your board is in a different configuration than the one you really have. This is analogous to the dreaded desync, and it is going to get worse with time, and may be hard to catch.
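To tie the analogy back to actual map code: the most common mapmaker-made desync is changing the game state for only one player. A deliberately broken, purely illustrative example (not related to any library discussed here):
JASS:
// DON'T do this: the footman is created only on the red player's machine,
// so the simulations immediately diverge and the game will desync.
if GetLocalPlayer() == Player(0) then
    call CreateUnit(Player(0), 'hfoo', 0., 0., 270.)
endif
Purely visual, local-only effects (camera movement, locally shown UI and the like) are safe inside such blocks; anything the simulation depends on is not.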
WC3's networking slightly in-depth
Obviously, WC3 is much more complicated than a chess board. There are potentially hundreds of units, each with a dozen different parameters, all doing different things at once. As mentioned previously, only the user inputs get networked, and everything else is simulated in parallel.
In WC3, the host (historically a player, nowadays usually a hosting bot) serves as a relay between the players. Each player communicates only with the host, sending their actions to it and receiving other players' actions from it.
There's one important takeaway here: clients do not simulate their own actions until they receive a confirmation from the host. This is necessary in order to maintain synchronization, so that each client simulates every event at the same game tick. In addition, the host waits for some time before relaying your order to the other players, to allow players with higher ping to catch up. This is usually called latency, and it is introduced to give everyone a smoother game, even if at the expense of some extra delay.
One other important detail is that WC3 is built on top of the TCP protocol, which guarantees ordering: whatever you send through TCP arrives in the same order that you sent it in. This means that all your inputs are processed by other players in the same order - and, as we'll see later, it is what allows us to reliably send data from one player to the rest.
Using WC3 networking to our advantage
That was a lot of theory, but it isn't going to be terribly useful to most mappers, since all of this is automagically handled by the engine behind the scenes. The mapper doesn't have to think much about what goes on the wire, and what doesn't. Unless...
Unless we need to send some data that isn't automatically networked. There aren't many things in WC3 like that. It pretty much boils down to:
- Obscure local data that isn't networked, e.g. camera position / parameters, terrain height, and some other things.
- File contents.
Fortunately for us, there are natives in JASS that allow us to send arbitrary data from one player to the rest. These are the GameCache Sync natives, namely SyncStoredInteger, SyncStoredReal and SyncStoredBoolean. There is also SyncStoredString, but unfortunately, it is broken and doesn't work.
When called locally, i.e. from within a local player clause (e.g. inside an if client == GetLocalPlayer() block), these natives send the currently stored value under the specified keys to the other players. Using these three natives, we can send integers, reals, and booleans to other players.
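To make this concrete, here is a minimal sketch in plain JASS of a sender syncing a local-only value - the camera X coordinate mentioned above. The gamecache file name and keys are arbitrary placeholders:
JASS:
globals
    gamecache gc = null
endglobals

// Sends the sender's (local-only) camera X coordinate to everyone.
// Only the sender stores and syncs the value; the synced value then
// travels through the normal network pipeline and ends up in every
// player's gamecache.
function SyncLocalCameraX takes player sender returns nothing
    if gc == null then
        set gc = InitGameCache("network.w3v")
    endif
    if GetLocalPlayer() == sender then
        call StoreReal(gc, "net", "camX", GetCameraTargetPositionX())
        call SyncStoredReal(gc, "net", "camX")
    endif
endfunction
Note that the other players can't safely read the value back yet - they have no way of knowing when it has actually arrived, which is exactly the problem discussed next.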
Cool! We can send stuff. That means that, theoretically, we can load a file from one of the players, break it down into integers, reals, and booleans (which is outside the scope of this tutorial), and send it to other players.
There is just one problem, however: there is no event that fires when a GameCache sync has finished and the other clients have received the data. If only we could somehow generate our own event...
Wait, remember when I said that every user input in WC3 is sequential? It turns out that GameCache syncs are no different! They go through the same system that all other user inputs do - unit orders, key presses, chat messages, and so on. Consider the following code snippet:
JASS:
if sender == GetLocalPlayer() then
    // queue the sync first...
    call SyncStoredInteger(...)
    // ...then queue a unit selection right behind it
    call SelectUnit(...)
endif
Because the selection is queued after the sync calls, and EVENT_PLAYER_UNIT_SELECTED fires for all players once the selection has been relayed, the selection event is effectively a guarantee that everyone has already received the synced data. Using this SelectUnit trick, we can build a simple wrapper that synchronizes code between players, giving us an event that only fires after all previously pending network actions have been processed! This is exactly what I did in my Wurst library - you can look in network/SyncSimple.wurst to see exactly how it can be done.
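With the caveat that the names here are illustrative and this is not SyncSimple's actual code, here is what such a wrapper might look like in plain JASS, assuming a pre-placed dummy unit exists for the sender:
JASS:
globals
    unit dummyUnit = null     // assumed: created and assigned at map init
    trigger syncTrigger = CreateTrigger()
    gamecache gc = null
endglobals

// Runs for every player once the sender's selection - and therefore every
// sync queued before it - has been received, so reading the stored value
// is now safe on all clients.
function OnSyncFinished takes nothing returns nothing
    if GetTriggerUnit() != dummyUnit then
        return // ignore the player's ordinary selections
    endif
    call BJDebugMsg("Synced camera X: " + R2S(GetStoredReal(gc, "net", "camX")))
endfunction

// Call once at map init for the player who will be sending data.
function InitSyncListener takes player sender returns nothing
    set gc = InitGameCache("network.w3v")
    call TriggerRegisterPlayerUnitEvent(syncTrigger, sender, EVENT_PLAYER_UNIT_SELECTED, null)
    call TriggerAddAction(syncTrigger, function OnSyncFinished)
endfunction

function StartSync takes player sender returns nothing
    if GetLocalPlayer() == sender then
        call StoreReal(gc, "net", "camX", GetCameraTargetPositionX())
        call SyncStoredReal(gc, "net", "camX")
        // The selection is queued after the syncs, so its event marks completion.
        call SelectUnit(dummyUnit, true)
        call SelectUnit(dummyUnit, false)
    endif
endfunction
A real system also has to deal with details like multiple senders, keeping the dummy unit out of the player's way and cleaning up after itself - but the core trick is just these few lines.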
At that point, all you need is some scaffolding to wrap it all up into a system of some kind. Previously, only TriggerHappy had dared to do so, in his Sync and SyncInteger libraries. I personally found the event-driven vJASS API rather unwieldy to use, so I never bothered, even though his libraries proved to be an immense aid in writing my own. However, since Wurst provides lambdas and many other nice language features, it was possible to make a much more user-friendly API there. The actual library is rather boring, and mostly consists of various tricks and scaffolding to support arbitrary amounts of data and string (de-)serialization. If you're curious, you can skim through the source code - it's pretty heavily documented. Here are a few excerpts, going slightly more in-depth:
SyncSimple.wurst:
This library can be used to send a 'synchronize' network event that will fire
for all players after all previous network events have also been received
by all players.
Examples of such events include:
- Unit selection
- Unit orders
- Chat messages
- Gamecache synchronization
- Keyboard events
For example, this can be used in conjunction with gamecache synchronization
to send 'local' values from one player to the rest, such as camera position,
data from files, and so on. For a package implementing this functionality,
see Network.
It depends on the fact that all network events in WC3 are fired sequentially,
meaning that they arrive for other players in the order that they were
sent in.
It also depends on the fact that the EVENT_PLAYER_UNIT_SELECTED fires
synchronously for all players at the same time, allowing us to know for
certain when other players have acknowledged our unit selection, as well
as all network events that have been fired before it.
By calling the .sync() method, we queue a network action (specifically,
a unit selection event) that will only be delivered after all previously
queued network actions have also been delivered.
The primary use of this library is in conjunction with gamecache's Sync
natives, because they also fire sequential network events. When we call
.sync() after a series of Sync natives, we ensure that the .onSynced()
callback will only be called after all players have received the data.
This way, we can send local data from one player to the rest, such as
camera position, data from files, and so on.
There may be other usages related to async network events as well.
Network.wurst:
Overview:
In multiplayer games, this package is for synchronizing data between game clients.
It's useful for when one player is the source of game data, such as from gamecache or file IO.
Like SyncSimple, it depends on the fact that all network actions in WC3 are
sequential, and are received by players in the same order that they are sent in.
It uses the SyncStored* natives of the gamecache to send data to other players
from the sender, and then uses SyncSimple to send a 'finish' event, that will
only be received by other players after they have also received all sync data.
The Network class provides 4 independent buffers of data - integers, reals, booleans and strings - each of which can be written to and read from using the relevant methods of the HashBuffer class.
Before sending the data, the sender populates the HashBuffer with the data that
they want to send, then Network.start() is called, and data is received from
the same buffer inside the callback when it has all been transferred.
Read SyncSimple docs for a slightly more in-depth overview of this particular
peculiarity of WC3.
Implementation overview:
Reference for various claims about performance and operation can be found here:
Sync Doc | HIVE
Original research by TriggerHappy.
There are several core classes here:
- HashBuffer
- GamecacheBuffer
- StringEncoder
- GamecacheKeys
- Network
HashBuffer is a hashtable-based container with buffer semantics for writing
integers, reals, booleans, strings and serializables.
Longer gamecache keys take a longer time to synchronize (for each value synced,
we also send its keys), so we use keys of fixed length and send data in multiple
rounds. Because of this, we can't store all data immediately in the gamecache.
We also need to know the size of data being sent prior to starting the transmission,
so we have to store all of it in an intermediate buffer, which is HashBuffer.
Prior to sending, all strings in the HashBuffer are encoded into a buffer
of integers, because SyncStoredString doesn't work. The responsible class is StringEncoder.
After sending, they are decoded back into strings and written to the HashBuffer.
GamecacheKeys provides int-string conversion for keys for usage in gamecaches.
GamecacheBuffer is a gamecache-based container for writing
integers, reals and booleans. There is a GamecacheBuffer for each primitive type,
int, bool, real and asciiInts.
Network is the main class that coordinates HashBuffer and GamecacheBuffer and does
all the heavy lifting.
Before starting the transmission, the HashBuffer is locked into an immutable state, in
which incorrect mutation attempts will print warnings, as long as safety checks are not disabled.
The maximum amount of data across all primitive buffers is calculated, as well as the number of
required 'sync rounds' - that is, the number of times we need to flush/sync
data out of the gamecaches to keep key sizes short.
Since only the local player has any knowledge about the amount of data needed to be sent,
and consequently, the amount of sync rounds required, we first send a pre-fetch "metadata"
payload with the amount of data in each buffer and the amount of sync rounds, using fixed
keys. At the same time, we also send the first payload.
When a round is received, we write data to the HashBuffer, using the metadata to know when
to stop, and start another sync round if necessary. If it is not necessary,
we open the HashBuffer for reading and call the finish callback, and destroy the instance.
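One detail from the excerpt above that is worth a concrete illustration: since SyncStoredString doesn't work, strings have to be turned into integers before they can go over the wire. The following is only a rough sketch of one possible approach in plain JASS (it is not StringEncoder's actual implementation): map each character to its index in a known charset and sync those indices instead.
JASS:
globals
    // Note: JASS string comparisons are case-insensitive, so a mixed-case
    // charset would need extra handling; this sketch sticks to one case.
    constant string CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"
endglobals

// Returns the index of a single character in CHARSET, or -1 if it isn't there.
// The resulting integers can be stored and synced with SyncStoredInteger.
function CharToIndex takes string c returns integer
    local integer i = 0
    loop
        exitwhen i >= StringLength(CHARSET)
        if SubString(CHARSET, i, i + 1) == c then
            return i
        endif
        set i = i + 1
    endloop
    return -1
endfunction

// On the receiving side, synced indices are mapped back to characters
// and concatenated to rebuild the original string.
function IndexToChar takes integer i returns string
    return SubString(CHARSET, i, i + 1)
endfunction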
Epilogue
Obviously, I haven't covered every single topic relating to WC3 networking. This tutorial was meant to provide a theoretical base for writing your own networking systems, for whatever purpose they may serve, and to spread this knowledge in the community. I haven't even scratched the surface of desyncs, as things can get really complicated there.
With that said, there are already libraries implementing this functionality both in vJASS and Wurst, but I felt like it was worthwhile taking a behind-the-scenes look at this very obscure topic.
In the future, I might write a tutorial about File IO in WC3 and fully-fledged codeless Save/Load systems (which Wurst also supports, in the form of Persistable, combining File IO with Networking), along with various tips. Please tell me if you'd like a tutorial on that!
And, of course, if you have any suggestions, questions, or corrections for this tutorial - please leave a comment as well.