diff --git a/implementation_details.md b/implementation_details.md index 4f62881a..6c88e310 100644 --- a/implementation_details.md +++ b/implementation_details.md @@ -1,9 +1,12 @@ # Implementation Details + ## `Connection` != `Player` + I know I've been using the terms "client" and "player" somewhat interchangeably, but `Connection` and `Player` should be separate tokens. There's no benefit in forcing one player per connection. Having `Player` be its own thing makes it easier to do stuff like online splitscreen, temporarily fill team slots with bots, etc. ## "Clock" Synchronization + Ideally, clients predict ahead by just enough to have their inputs reach the server right before they're needed. People often try to have clients estimate the clock time on the server (with some SNTP handshake) and use that to schedule the next simulation step, but that's overly complex. What we really care about is: How much time passes between when the server receives my input and when that input is consumed? If the server simply tells clients how long their inputs are waiting in its buffer, the clients can use that information to converge on the correct lead. @@ -58,10 +61,12 @@ interp_time = max(interp_time, predicted_time - max_lag_comp) The key idea here is that simplifying the client-server relationship makes the problem easier. You *could* have the server apply inputs whenever they arrive, rolling back if necessary, but that would only complicate things. If the server never accepts late inputs and never changes its pace, no one needs to coordinate. ## Prediction <-> Interpolation + Clients can't directly modify the authoritative state, but they should be able to predict whatever they want locally. One obvious implementation is to literally fork the latest authoritative state. If copying the full state ends up being too expensive, we can probably use a copy-on-write layer. My current idea to shift components between prediction and interpolation is to default to interpolated (reset upon receiving a server update) and then use specialized change detection `DerefMut` magic to flag as predicted. -``` + +```rust Predicted PredictAdded PredictRemoved @@ -72,6 +77,7 @@ Cancelled CancelAdded CancelRemoved ``` + Everything is predicted by default, but users can opt out by filtering on `Predicted`. In the more conservative cases, clients would predict the entities driven by their input, the entities they spawn (until confirmed), and any entities mutated as a result of the first two. Systems with filtered queries (e.g., physics, path-planning) should typically run last. We can also use these filters to generate events that only trigger on authoritative changes and events that trigger on predicted changes to be confirmed or cancelled later. The latter are necessary for handling sounds and particle effects. Those shouldn't be duplicated during rollbacks and should be faded out if mispredicted. @@ -79,6 +85,7 @@ We can also use these filters to generate events that only trigger on authoritat Should UI be allowed to reference predicted state or only verified state? ## Predicting Entity Creation + This requires some special consideration. The naive solution is to have clients spawn dummy entities. When an update that confirms the result arrives, clients can simply destroy the dummy and spawn the true entity. IMO this is a poor solution because it prevents clients from smoothly blending these entities from predicted time into interpolated time. It won't look right.
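For concreteness, here's a toy sketch of that naive flow. Every type and name below is a stand-in (none of this is Bevy API); the hard destroy-and-respawn at confirmation time is exactly what ruins the blend:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Entity(u64);

/// Client-side bookkeeping for the naive approach: dummy entities keyed by a
/// locally generated prediction ID that the server echoes back on confirmation.
#[derive(Default)]
struct PendingSpawns {
    dummies: HashMap<u32, Entity>,
}

impl PendingSpawns {
    /// Remember the dummy we spawned while predicting.
    fn predict_spawn(&mut self, prediction_id: u32, dummy: Entity) {
        self.dummies.insert(prediction_id, dummy);
    }

    /// The server confirmed the spawn: destroy the dummy and adopt the true
    /// entity. The predicted and interpolated timelines never get a chance to
    /// blend; the entity just pops from one to the other.
    fn confirm(
        &mut self,
        prediction_id: u32,
        true_entity: Entity,
        despawn: impl FnOnce(Entity),
    ) -> Option<Entity> {
        let dummy = self.dummies.remove(&prediction_id)?;
        despawn(dummy);
        Some(true_entity)
    }
}
```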
@@ -92,6 +99,7 @@ A better solution is for the server to assign each networked entity a global ID - A more extreme solution would be to somehow bake global IDs directly into the memory allocation. If memory layouts are mirrored, relative pointers become global IDs, which don't need to be explicitly written into packets. This would save 4-8 bytes per entity before compression. ## Smooth Rendering + Rendering should come after `NetworkFixedUpdate`. Whenever clients receive an update with new remote entities, those entities shouldn't be rendered until that update is interpolated. @@ -103,13 +111,14 @@ We'll also need to distinguish instant motion from integrated motion when interp Is an exponential decay enough for smooth error correction or are there better algorithms? ## Lag Compensation + Lag compensation deals with colliders. To avoid weird outcomes, lag compensation needs to run after all motion and physics systems. Again, people often imagine having the server estimate what interpolated state the client was looking at based on their RTT, but we can resolve this without any guesswork. Clients can just tell the server what they were looking at by bundling the interpolated tick numbers and the blend value inside the input payloads. With this information, the server can reconstruct *exactly* what each client saw. -``` +```plaintext tick number (predicted) tick number (interpolated from) @@ -119,6 +128,7 @@ interpolation blend value ``` So there are two ways to go about the actual compensation: + - Compensate upfront by bringing new projectiles into the present (similar to a rollback). - Compensate over time ("amortized"), constantly testing projectiles against the history buffer. @@ -133,6 +143,7 @@ For clients with too-high ping, their interpolation will lag far behind their pr When a player is parented to another entity they have no control over (e.g., the player is a passenger in a vehicle), the non-predicted movement of that parent must be rewound during compensation to spawn any projectiles fired by the player in the correct location. ## Unconditional Rollbacks + Every article on "rollback netcode" and "client-side prediction and server reconciliation" encourages having clients compare their predicted state to the authoritative state and reconcile *if* they mispredicted. But how do you actually detect a mispredict? I thought of two methods while I was writing this: @@ -140,7 +151,7 @@ I thought of two methods while I was writing this: 1. Unordered scan looking for first difference. 2. Ordered scan to compute checksum and compare. -The first option has an unpredictable speed. The second option requires a fixed walk of the game state (checksums *are* probably worth having even if only for debugging non-determinism). There may be options I didn't consider, but the point I'm trying to make is that detecting changes among large numbers of entities isn't cheap. +The first option has an unpredictable speed. The second option requires a fixed walk of the game state (checksums *are* probably worth having even if only for debugging non-determinism). There may be options I didn't consider, but the point I'm trying to make is that detecting changes among large numbers of entities isn't cheap. Let's consider a simpler default: @@ -149,6 +160,7 @@ Let's consider a simpler default: Now, you may think that's wasteful, but I would say "if mispredicted" gives you a false sense of security. Mispredictions can occur at any time, *especially* during long-lasting complex physics interactions.
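Under this default, a client's per-tick work has a fixed shape: restore, then re-simulate. A self-contained sketch, where the types and the `simulate` body are placeholder assumptions:

```rust
#[derive(Clone)]
struct GameState; // stand-in for the full networked state

struct PlayerInputs; // stand-in for one tick of sampled local input

fn simulate(state: &GameState, _inputs: &PlayerInputs) -> GameState {
    state.clone() // placeholder for the real fixed-update step
}

/// Runs every fixed update: always restore the newest authoritative state,
/// then always re-simulate forward to the predicted tick.
fn client_fixed_update(
    authoritative: &GameState,
    auth_tick: u64,
    predicted_tick: u64,
    input_history: &[PlayerInputs], // local inputs for ticks auth_tick..predicted_tick
) -> GameState {
    let mut state = authoritative.clone();
    for tick in auth_tick..predicted_tick {
        state = simulate(&state, &input_history[(tick - auth_tick) as usize]);
    }
    state
}
```

The cost is flat: one restore plus `predicted_tick - auth_tick` simulation steps, every tick.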
It's much easier to profile and optimize for your worst case if clients *always* roll back and re-sim. It's also more memory-efficient, since clients never need to store old predicted states. ## Delta-Compressed Snapshots + - The server keeps an incrementally updated copy of the networked state. - Components are stored with their global ID instead of the local ID. - The server keeps a ring buffer of "patches" for the last `N` snapshots. @@ -161,12 +173,14 @@ Now, you may think that's wasteful, but I would say "if mispredicted" gives you - Pass compressed payloads to protocol layer. - Protocol and I/O layers do whatever they do and send the packet. -## Interest Managed Updates +## Interest-Managed Updates + TODO ## Messages + TODO Messages are best for sending global alerts and any gameplay mechanics you explicitly want modeled as request-reply (or one-way) interactions. They can be unreliable or reliable. You can also postmark messages to be executed on a certain tick, like inputs. That can only be best effort, though. -The example I'm thinking of is buying items from an in-game vendor. The server doesn't simulate UI, but ideally we can write the message transaction in the same system. A macro might end up being the most ergonomic choice. \ No newline at end of file +The example I'm thinking of is buying items from an in-game vendor. The server doesn't simulate UI, but ideally we can write the message transaction in the same system. A macro might end up being the most ergonomic choice. diff --git a/networked_replication.md b/networked_replication.md index 02711b7c..0a8f2a45 100644 --- a/networked_replication.md +++ b/networked_replication.md @@ -6,7 +6,7 @@ This RFC proposes an implementation of engine features for developing networked ## Motivation -Networking is unequivocally the most lacking feature in all general-purpose game engines. +Networking is unequivocally the most lacking feature in all general-purpose game engines. While most engines provide low-level connectivity—virtual connections, optionally reliable UDP channels, rooms—almost none of them ([except][1] [Unreal][2]) provide high-level *replication* features like prediction, interest management, or lag compensation, which are necessary for most networked multiplayer games. @@ -15,6 +15,7 @@ This broad absence of first-class replication features stifles creative ambition Bevy's ECS opens up the possibility of providing a near-seamless, generalized networking API. What I hope to explore in this RFC is: + - What game design choices and constraints does networking add? - How does ECS make networking easier to implement? - What should developing a networked multiplayer game in Bevy look like? @@ -29,7 +30,7 @@ As a user, you only have to annotate your gameplay-related components and system > Game design should (mostly) drive networking choices. Future documentation could feature a questionnaire to guide users to the correct configuration options for their game. Genre and player count are generally enough to decide. -The core primitive here is the `Replicate` trait. All instances of components and resources that implement this trait will be automatically detected and synchronized over the network. Simply adding a `#[derive(Replicate)]` should be enough in most cases. +The core primitive here is the `Replicate` trait. All instances of components and resources that implement this trait will be automatically detected and synchronized over the network. Simply adding a `#[derive(Replicate)]` should be enough in most cases.
```rust #[derive(Replicate)] @@ -48,6 +49,7 @@ struct Health { hp: u32, } ``` + By default, both client and server will run every system you add to `NetworkFixedUpdate`. If you want systems or code snippets to run exclusively on one or the other, you can annotate them with `#[client]` or `#[server]` for the compiler. ```rust @@ -62,6 +64,7 @@ fn ball_movement_system( ``` For more nuanced runtime cases—say, an expensive movement system that should only process the local player entity on clients—you can use the `Predicted` query filter. If you need an explicit request or notification, you can use `Message` variants. + ```rust fn update_player_velocity( mut q: Query<(&Player, &mut Rigidbody)>) @@ -89,22 +92,22 @@ Bevy can configure an `App` to operate in several different network modes. | Mode | Playable? | Authoritative? | Open to connections? | | :--- | :---: | :---: | :---: | -| Client | ✓ | ✗ | ✗ | -| Standalone | ✓ | ✓ | ✗ | -| Listen Server | ✓ | ✓ | ✓ | +| Client | ✓ | ✗ | ✗ | +| Standalone | ✓ | ✓ | ✗ | +| Listen Server | ✓ | ✓ | ✓ | | Dedicated Server | ✗ | ✓ | ✓ | | Relay | ✗ | ✗ | ✓ | -
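Until the TODO example below is filled in, the table's semantics can be summed up in a purely illustrative sketch; the `NetworkMode` enum and its methods are assumptions, not a proposed API:

```rust
/// Hypothetical mirror of the table above.
enum NetworkMode {
    Client,
    Standalone,
    ListenServer,
    DedicatedServer,
    Relay,
}

impl NetworkMode {
    /// Does this instance run a local, playable simulation?
    fn is_playable(&self) -> bool {
        !matches!(self, Self::DedicatedServer | Self::Relay)
    }
    /// Does this instance own the authoritative state?
    fn is_authoritative(&self) -> bool {
        matches!(self, Self::Standalone | Self::ListenServer | Self::DedicatedServer)
    }
    /// Does this instance accept remote connections?
    fn accepts_connections(&self) -> bool {
        matches!(self, Self::ListenServer | Self::DedicatedServer | Self::Relay)
    }
}
```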
- ```rust // TODO: Example App configuration. ``` ## Implementation Strategy + [Link to more in-depth implementation details (more of an idea dump atm).](../main/implementation_details.md) ### Requirements + - `ComponentId` (and maybe the other `*Ids`) should be stable between clients and the server. - Must have a means to isolate networked and non-networked state. - `World` should be able to reserve an `Entity` ID range, with separate storage metadata. @@ -115,7 +118,9 @@ Bevy can configure an `App` to operate in several different network modes. - Networked components must only be mutated inside `NetworkFixedUpdate`. - The ECS scheduler should support nested loops. - (I'm pretty sure this isn't an actual blocker, but the workaround feels a little hacky.) + ### The Replicate Trait + ```rust // TODO impl Replicate for T { @@ -124,6 +129,7 @@ impl Replicate for T { ``` ### Specialized Change Detection + ```rust // TODO // Predicted (+ Added and Removed variants) @@ -135,7 +141,8 @@ impl Replicate for T { ``` ### Rollback via Run Criteria -```rust + +```rust /* TODO The "outer" loop is the number of fixed update steps as determined by the fixed timestep accumulator. The "inner" loop is the number of steps to re-simulate. */ ``` ### NetworkFixedUpdate + Clients + 1. Iterate received server updates. 2. Update simulation and interpolation timescales. 3. Sample inputs and push them to send buffer. @@ -152,6 +161,7 @@ Clients 5. Simulate predicted tick. Server + 1. Iterate received client inputs. 2. Sample buffered inputs. 3. Simulate authoritative tick. @@ -159,21 +169,26 @@ Server 5. Push client updates to send buffer. Everything aside from the simulation steps could be auto-generated. + ### Saving Game State + - At the end of each fixed update, the server iterates `Changed` and `Removed` for all replicable components and duplicates them to an isolated copy. - Could pass this copy to another thread to do the serialization and compression. - This copy has no `Table`s; those would be rebuilt by the client. ### Preparing Server Packets + - Snapshots (full state updates) will use delta compression and manual fragmentation. - Eventual consistency (partial state updates) will use interest management. - Both will most likely use the same data structure. ### Restoring Game State + - At the beginning of each fixed update, the client decodes the received update and generates the latest authoritative state. - The client then uses this state to write its local prediction copy that has all the tables and non-replicable components. ## Drawbacks + - Lots of potentially cursed macro magic. - Direct writes to `World`. - Seemingly limited to components that implement `Clone` and `Serialize`. @@ -181,42 +196,47 @@ Everything aside from the simulation steps could be auto-generated. ## Rationale and Alternatives ### Why *this* design? + Networking is a widely misunderstood problem domain. The proposed implementation should suffice for most games while minimizing design friction—users need only annotate gameplay-related components and systems, put those systems in `NetworkFixedUpdate`, and configure some settings. Polluting the API with "networked" variants of structs and systems (aside from `Transform`, `Rigidbody`, etc.) would just make life harder for everybody, both game developers and Bevy maintainers. IMO the ease of macro annotations is worth any increase in compile times when networking features are enabled. ### Why should Bevy provide this?
-People who want to make multiplayer games want to focus on designing their game and not worry about how to implement prediction, how to serialize their game, how to keep packets under MTU, etc. Having these come built-in would be a huge selling point. + +People who want to make multiplayer games want to focus on designing their game and not worry about how to implement prediction, how to serialize their game, how to keep packets under MTU, etc. Having these come built-in would be a huge selling point. ### Why not wait until Bevy is more mature? + It'll only grow more difficult to add these features as time goes on. Take Unity for example. Its built-in features are too non-deterministic and its only working solutions for state transfer are paid third-party assets. Thus far, said assets cannot integrate deeply enough to be transparent (at least not without substituting parts of the engine). ### Why does this need to involve `bevy_ecs`? + I strongly doubt that fast, efficient, and transparent replication features can be implemented without directly manipulating a `World` and its component storages. We may need to allocate memory for networked data separately. ## Unresolved Questions -- Can we provide lints for undefined behavior like mutating networked state outside of `NetworkFixedUpdate`? + +- Can we provide lints for undefined behavior like mutating networked state outside of `NetworkFixedUpdate`? - Do rollbacks break change detection or events? - ~~When sending partial state updates, how should we deal with weird stuff like there being references to entities that haven't been spawned or have been destroyed?~~ Already solved by generational indexes. - How should UI widgets interact with networked state? React to events? Exclusively poll verified data? -- How should we handle correcting mispredicted events and FX? +- How should we handle correcting mispredicted events and FX? - Can we replicate animations exactly without explicitly sending animation data? ## Future Possibilities + - With some tools to visualize game state diffs, these replication systems could help detect non-determinism in other parts of the engine. - Much like how Unreal has Fortnite, Bevy could have an official (or curated) collection of multiplayer samples to dogfood these features. - Bevy's future editor could automate most of the configuration and annotation. - Replication addresses all the underlying ECS interop, so it should be settled first. But beyond replication, Bevy need only provide one good default for protocol and I/O for the sake of completeness. I recommend dividing crates at least to the extent shown below to make it easy for developers to swap the low-level transport with [whatever][3] [alternatives][4] [they][5] [want][7]. -| `bevy::net::replication` | `bevy::net::protocol` | `bevy::net::io` | +| `bevy::net::replication` | `bevy::net::protocol` | `bevy::net::io` | | -- | -- | -- | |
• save and restore<br>• prediction<br>• serialization<br>• delta compression<br>• interest management<br>• visual error correction<br>• lag compensation<br>• statistics (high-level) | • (N)ACKs<br>• reliability<br>• virtual connections<br>• channels<br>• encryption<br>• statistics (low-level) | • send<br>• recv<br>• poll
| - [1]: https://youtu.be/JOJP0CvpB8w "Unreal Networking Features" [2]: https://www.unrealengine.com/en-US/tech-blog/replication-graph-overview-and-proper-replication-methods "Unreal Replication Graph Plugin" [3]: https://github.com/quinn-rs/quinn [4]: https://partner.steamgames.com/doc/features/multiplayer [5]: https://developer.microsoft.com/en-us/games/solutions/multiplayer/ [6]: https://dev.epicgames.com/docs/services/en-US/Overview/index.html -[7]: https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-intro.html \ No newline at end of file +[7]: https://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-intro.html diff --git a/replication_concepts.md b/replication_concepts.md index d7bbde1e..a69e9340 100644 --- a/replication_concepts.md +++ b/replication_concepts.md @@ -1,15 +1,18 @@ # Replication + > The goal of replication is to ensure that all of the players in the game have a consistent model of the game state. Replication is the absolute minimum problem which all networked games have to solve in order to be functional, and all other problems in networked games ultimately follow from it. - [Mikola Lysenko][1] ---- +## Simulation Behavior Abstractly, you can think of a game as a pure function that accepts an initial state and player inputs and generates a new state. + ```rust let new_state = simulate(&state, &inputs); ``` -Fundamentally, if several players want to perform a synchronized simulation over a network, they have basically two options: -- Send their inputs to each other and independently and deterministically simulate the game. +If several players want to perform a synchronized simulation over a network, they have basically two options: + +- Send their inputs to each other and independently and deterministically simulate the game. -
also known as: active replication, lockstep, state-machine synchronization, determinism
- Send their inputs to a single machine (the server) that simulates the game and broadcasts updates back. -
also known as: passive replication, client-server, primary-backup, state transfer
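Reusing the `simulate` shape from above, the two strategies differ mainly in what crosses the wire. A toy sketch with placeholder types (nothing here is a real API):

```rust
type State = Vec<u8>;  // placeholder game state
type Inputs = Vec<u8>; // placeholder per-tick inputs

fn simulate(state: &State, inputs: &Inputs) -> State {
    let mut next = state.clone();
    next.extend_from_slice(inputs); // placeholder for real game logic
    next
}

/// Determinism: every peer collects everyone's inputs and runs the same step.
/// Correctness hinges on `simulate` being bit-identical on every machine.
fn deterministic_step(state: &State, local: &Inputs, remote: &Inputs) -> State {
    let mut all = local.clone();
    all.extend_from_slice(remote); // merged in the same order on every peer
    simulate(state, &all)
}

/// State transfer: only the server simulates; clients adopt whatever state it
/// broadcasts back.
fn authoritative_step(server_state: &State, client_inputs: &Inputs) -> State {
    simulate(server_state, client_inputs)
}
```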
@@ -18,7 +21,8 @@ In other words, players can either run the "real" game or follow it. For the rest of this RFC, I'll refer to them as determinism and state transfer, respectively. I just think they're the most literal terminology. -## Why determinism? +### Why determinism? + Deterministic multiplayer is basically local multiplayer but with *really* long controller cables. The netcode simply supplies the gameplay code with inputs. They're basically decoupled. Determinism has low infrastructure costs, both in terms of bandwidth and server hardware. All steady-state network traffic is input, which is not only small but also compresses well. (Note that as player count increases, there *is* a crossover point where state transfer becomes more efficient.) Likewise, as the game runs completely on the clients, there's no need to rent powerful servers. Relays are still handy for efficiently managing rooms and scaling to higher player counts, but those could be cheap VPS instances. @@ -27,7 +31,8 @@ Determinism is also tamperproof. It's impossible to do anything like speedhack o That every client must run the *entire* world is also determinism's biggest limit. While this works well for games with thousands of micro-managed entities like *Starcraft 2*, you won't be seeing games with expansive worlds like *Genshin Impact* networked this way anytime soon. -## Why state transfer? +### Why state transfer? + Determinism is awesome when it fits but it's generally unavailable. Neither Godot nor Unity nor Unreal can make this guarantee for large parts of their engines, particularly physics. Whenever you can't have or don't want determinism, you should use state transfer. @@ -36,7 +41,8 @@ Its main underlying idea is **authority**, which is just like ownership in Rust. The server usually owns everything, but authority is very flexible. In games like *Destiny* and *Fall Guys*, clients own their movement state. Other games even trust clients to confirm hits. Distributing authority like this adds complexity and obviously leaves the door wide open for cheaters, but sometimes it's necessary. In VR, it makes sense to let clients claim and relinquish authority over interactable objects. -## Why not messaging patterns? +### Why not messaging patterns? + The only other strategy you really see used for replication is messaging. RPCs. I actually see these most often in the free asset space. (I guess it's the go-to pattern outside of games?) Take chess for example. Instead of sending polled player inputs or the state of the chessboard, you could just send the moves like "white, e2 to e4," etc. @@ -44,6 +50,7 @@ Take chess for example. Instead of sending polled player inputs or the state of Here's the issue. Messages are tightly coupled to their game's logic. They can't be generalized. Chess is simple—one turn, one event—but what about an FPS? What messages would it need? How many? When and where would those messages need to be sent and received? If those messages have cascading effects, they can only be sent reliably and in order. + ```rust let mut s = state[n]; for message in queue.iter() { @@ -55,15 +62,19 @@ for message in queue.iter() { // applied and applied in the right order. *state[n+1] = s; ``` + Messages are great for when you want explicit request-reply interactions and global alerts like players joining or leaving. They just don't cut it as a replication mechanism for real-time games.
Even if you avoided send and receive calls everywhere (i.e., collect and send in batches), messages don't compress as well as inputs or state. -# Latency +## Latency + Networking is hard because we want to let players who live in different countries play together *at the same time*, something that special relativity tells us is [strictly impossible][2]... unless we cheat. ### Lockstep + The simplest solution is to concede to the universe with grace and have players stall until they've received whatever data they need to execute the next simulation step. Blocking is fine for most turn-based games but simply doesn't cut it for real-time games. - + ### Adding Local Input Delay + The first trick we can pull is to have each player delay their own input for a bit, trading responsiveness for more time to receive the incoming data. Our brains are pretty lenient about this, so we can actually *reduce* the latency between players. Two players in a 1v1 match could actually experience simultaneity if each delayed their input by half the round-trip time. @@ -73,6 +84,7 @@ This trick has powered the RTS genre for decades. With a large enough input dela > determinism + lockstep + local input delay = "delay-based netcode" ### Predict-Rollback + Instead of blocking, what if players just guess the missing data and keep going? Doing that would let us avoid stuttering, but then we'd have to deal with guessing incorrectly. Well, when the player finally has that missing remote data, what they can do is restore their simulation to the previous verified state, update it with the received data, and then re-predict the remaining steps. @@ -84,18 +96,18 @@ With prediction, input delay is no longer needed, but it's still useful. Reducin > determinism + predict-rollback + local input delay (optional) = "rollback netcode" ### Selective Prediction -Determinism is an all or nothing deal. If you predict, you predict everything. -State transfer has the flexibility to predict only *some* things, letting you offload expensive computations onto the server. There *are* client-server games like *Rocket League* who still predict everything (FWIW deterministic predict-rollback would have been a better fit), including other clients—the server redistributes inputs along with game state to reduce error. However, most often clients only predict what they control directly. +Determinism is an all-or-nothing deal. If you predict, you predict everything. +State transfer has the flexibility to predict only *some* things, letting you offload expensive computations onto the server. There *are* client-server games like *Rocket League* that still predict everything (FWIW deterministic predict-rollback would have been a better fit), including other clients—the server redistributes inputs along with game state to reduce error. However, most often clients only predict what they control directly. -# Visual Consistency +## Visual Consistency Real quick, always hard snap the simulation state. If clients do any blending, it's entirely visual. Yes, this does mean that entities may appear in different positions from where they should be. On the other hand, we have to honor this inaccurate view to keep players happy. ### Smooth Rendering and Lag Compensation -Predicting only *some* things adds implementation complexity. +Predicting only *some* things adds implementation complexity. When clients predict everything, they produce renderable state at a fixed pace. Now, anything that isn't predicted must be rendered using data received from the server.
The problem is that server updates are sent over a lossy, unreliable internet that disrupts any consistent spacing between packets. This means clients need to buffer incoming server updates long enough to have two authoritative updates to interpolate most of the time. @@ -103,33 +115,40 @@ Gameplay-wise, not predicting everything also divides entities between two point Visually, we'll often have to blend between extrapolated and authoritative data. Simply interpolating between two authoritative updates is incorrect. The visual state can and will accrue errors, but that's what we want. Those can be tracked and smoothly reduced (to some near-zero threshold, then cleared). -# Bandwidth +## Bandwidth + ### How much can we fit into each packet? + Not a lot. You can't send arbitrarily large packets over the internet. The information superhighway has load limits. The conservative, almost universally supported "maximum transmission unit" or MTU is 1280 bytes. Accounting for IP and UDP headers and some connection metadata, you can realistically send ~1200 bytes of game data per packet. -If you significantly exceed this, some random stop along the way will delay the packet and break it up into fragments. +If you significantly exceed this, some random stop along the way will delay the packet and break it up into fragments. [Fragmentation](https://packetpushers.net/ip-fragmentation-in-detail/) [sucks](https://blog.cloudflare.com/ip-fragmentation-is-broken) because it multiplies the likelihood of the overall packet being lost (all fragments have to arrive to read the full packet). Getting fragmented along the way is even worse because of the added delay. It's okay if the sender manually fragments their packet (into 2 or 3 fragments) *upfront*, although the higher loss does limit simulation rate; just don't rely on the internet to do it. ### Okay, but that doesn't seem like much? + Well, there are two more reasons not to yeet giant 100kB packets across the network: + - Bandwidth costs are the lion's share of hosting expenses. - Many players still have limited bandwidth. So unless we limit everyone to <20Hz tick rates, our only options are: + - Send smaller things. - Send fewer things. ### Snapshots + Alright then, state transfer. The most obvious strategy is to send full **snapshots**. All we can do with these is make them smaller (e.g., quantize floats, then compress everything). Fortunately, snapshots are very compressible. An extremely popular idea called **delta compression** is to send each client a diff (often with further compression on top) of the current snapshot and the latest one they acknowledged receiving. Clients can then use these to patch their existing snapshots into the current one. -The server can fragment payloads as a last resort. +The server can fragment payloads as a last resort. ### Eventual Consistency + When snapshots fail or hidden information is needed, the best alternative is to prioritize sending each client the state most relevant to them. This technique is commonly called **eventual consistency**. Determining relevance is often called **interest management** or **area of interest**. Each granular piece of state is given a "send priority" that accumulates over time and resets when sent. How quickly priority accumulates for different things is up to the developer, though physical proximity and visual salience usually have the most influence.
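A sketch of that accumulate-and-reset loop; the names, sizes, and the ~1200-byte budget wiring are all assumptions:

```rust
struct SendPriority {
    value: f32, // accumulated priority; reset when the item is sent
    gain: f32,  // developer-tuned growth rate (proximity, salience, ...)
}

/// Pick the highest-priority items that fit in one ~1200-byte payload.
fn select_for_update(
    items: &mut [SendPriority],
    dt: f32,
    item_size: usize,
    budget: usize,
) -> Vec<usize> {
    // Accumulate priority over time.
    for item in items.iter_mut() {
        item.value += item.gain * dt;
    }
    // Visit items from highest to lowest accumulated priority.
    let mut order: Vec<usize> = (0..items.len()).collect();
    order.sort_by(|&a, &b| items[b].value.total_cmp(&items[a].value));

    let (mut chosen, mut used) = (Vec::new(), 0);
    for i in order {
        if used + item_size > budget {
            break;
        }
        used += item_size;
        items[i].value = 0.0; // reset on send
        chosen.push(i);
    }
    chosen
}
```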
@@ -138,4 +157,4 @@ Eventual consistency can be combined with delta compression, but I wouldn't reco [1]: https://0fps.net/2014/02/10/replication-in-networked-games-overview-part-1/ [2]: https://en.wikipedia.org/wiki/Relativity_of_simultaneity -[3]: https://en.wikipedia.org/wiki/Client-side_prediction \ No newline at end of file +[3]: https://en.wikipedia.org/wiki/Client-side_prediction