
Shooting & Catching Edges In Prediction

Started by — 14 comments, last by ProfL 4 years, 9 months ago

Rounding will help with determinism in X% of the cases, and exacerbate the nondeterminism in 100-X% of the cases. That is, if the implementation is nondeterministic, anything from 2.0000 to 2.9999 rounds to 2, but 1.9999 rounds to 1 (for example) even though it may be 2.0001 on the other side. (It doesn't matter where you put the rounding rule; there will always be two values that are very close yet round differently.)

That being said, rounding all your physics variables between each physics tick is actually a good way of achieving better determinism. If both client and server round to network precision after each simulation step, it won't matter whether a state was initially derived from a "network checkpoint" or from a "previous step's output."

We did this for There.com; the tool that generated the network marshaling for our object structs also generated code that rounded values the same way the network would when packing/unpacking the data. We ran this on all state that changed after each simulation tick. It was actually a not-insignificant fraction of our math budget at the time.

("the time" was 800 MHz Pentium III days)

enum Bool { True, False, FileNotFound };

Well, rounding by itself doesn't work deterministically, right? If I round 5.1999 to the 2nd decimal place, one machine may say it's 5.20 and another machine may say it's 5.10.

At that point you have to start using fixed-point arithmetic. Sadly, I don't think I can use fixed point because I'm already relying on Unity's math libraries and raycasting for a lot of my movement calculations.

Quote

If I say round 5.1999 to the 2nd decimal place, one machine may say it's 5.20 and another machine may say it's 5.10

If you set the floating point control word correctly, they will round to the same value. And when you say "decimal place," the values possible are "5.20" (round to nearest, round up, round to even) or "5.19" (round down, round towards zero.)

The floating point rounding mode, and how it treats things like denormals, and which instruction set it uses (SSE, SSE2, SSE3, SSE4, AVX, AVX2, ...) matters, and should ideally match between client and server.

Using integer-based fixed point math may be a good idea to achieve 100% determinism. This is even more important when you want to have clients on cell phones and chromebooks (ARM CPUs) talking to servers in a datacenter running x86_64 architecture.

Unity, however, doesn't do any of this for you. It's not really possible to build a fully deterministic simulation on top of Unity, and it's really hard on Unreal. (The built-in Unreal simulation doesn't even support fixed time steps.) You have to go write your own physics and simulation code for this. At least with Unreal, you get the source code ... When you use Unity, you're better off making design choices in your game (when you play different sound/visual effects, for example) that work best with "it may sometimes be wrong."

enum Bool { True, False, FileNotFound };

Oh yeah, I wasn't going for the full-determinism route, changing compiler flags and all that; I was just looking for the next best thing. That's why I thought fixed point was the only real alternative, but when the other person mentioned rounding I thought maybe there was something else.

Getting "fixed point" into an engine to get 100% determinism in calculations is actually more feasible than fixing floats.

Ages ago I was working on adding multiplayer to a game post-launch, because some marketing person had promised it. It was tons of work and we spent a lot of time tracking de-syncs. We tried for a long time to get the FPU code somehow synchronized, and we really pulled tricks out of our hats to make it happen, but across AMD/Intel and even across different Intel CPU generations, results diverged. So one night I just went into rage mode and replaced every "float" with a class wrapping a fixed-point implementation. (Important: make the ctor explicit, and make extraction of floats explicit too, e.g. to pass to the renderer.)

That made nearly all of the game run in sync; we had just a few rand issues that were easy to fix. HOWEVER, fixed point turned out to run into tons of underflow and overflow. First we tried to tweak the range, then we went to several ranges depending on where in the code it was used, but that was still not 100% stable. The third step was to go to 64 bit (back then, before 64-bit CPUs, as 2x 32 bit), but that just reduced the issues; you had to play much longer to find new ones. Finally, my colleague suggested implementing a software float "like Pascal does." That was the silver bullet.

Nowadays there are many software float implementations, and I think some compilers (GCC?) even have a flag for it. If you ever decide to go that route, you can skip all the pitfalls we ran into :)
