There are various ways the netcode can reconcile this incorrectness, and different games make different tradeoffs.
For example, in hitscan FPS games, when two players fatally shoot one another at the same time, some games will only process the first packet received and award the kill to that shooter alone, while other games will allow kill trading within some time window.
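A minimal sketch of the kill-trading variant, assuming a server that timestamps shot packets as it processes them; the event shape, the resolver, and the 150 ms window are all invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical event type: the names and the 150 ms window are illustrative,
# not taken from any particular game's netcode.
@dataclass
class ShotEvent:
    shooter: str
    target: str
    fatal: bool
    server_time_ms: int  # when the packet was processed on the server

KILL_TRADE_WINDOW_MS = 150

def resolve_kills(events: list[ShotEvent]) -> list[tuple[str, str]]:
    """Return (killer, victim) pairs, allowing kill trades inside the window.

    The "first packet wins" policy would simply take the earliest fatal shot
    and discard the other. Here we instead honor a fatal shot from a player
    who was already killed, as long as it arrived within
    KILL_TRADE_WINDOW_MS of their own death.
    """
    kills: list[tuple[str, str]] = []
    time_of_death: dict[str, int] = {}

    for ev in sorted(events, key=lambda e: e.server_time_ms):
        if not ev.fatal:
            continue
        died_at = time_of_death.get(ev.shooter)
        already_dead = died_at is not None
        inside_window = already_dead and ev.server_time_ms - died_at <= KILL_TRADE_WINDOW_MS
        if not already_dead or inside_window:
            kills.append((ev.shooter, ev.target))
            time_of_death.setdefault(ev.target, ev.server_time_ms)
    return kills

# Two fatal shots 40 ms apart: with the window, both kills are awarded.
print(resolve_kills([
    ShotEvent("alice", "bob", True, 1000),
    ShotEvent("bob", "alice", True, 1040),
]))
```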
A tolerance is just an amount of incorrectness that the designer of the system can accept.
When it comes to CRUD apps using read replicas, so long as the designer of the system is aware of and accepts the consistency errors that will sometimes occur, does that make the system correct? It seems worth distinguishing between (a toy read-replica sketch follows this list):
- the system compensating for the network being fallible
- the system not fulfilling its design goals
- the system not being specified well enough to test if the design goals were fulfilled
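To make the read-replica case concrete, here's a toy in-memory sketch (no real database; the class and its replication delay are invented): a write lands on the primary immediately, the replica catches up after a lag, and a read from the replica in between returns stale data. Whether that stale read is a tolerated inconsistency or a violated design goal depends on which of the cases above you're in.

```python
import threading
import time

class ToyReplicatedStore:
    """A toy primary/replica pair with asynchronous replication.

    This illustrates eventual consistency in general; it is not any real
    database's replication protocol.
    """

    def __init__(self, replication_lag_s: float = 0.5):
        self._primary: dict[str, str] = {}
        self._replica: dict[str, str] = {}
        self._lag = replication_lag_s

    def write(self, key: str, value: str) -> None:
        self._primary[key] = value
        # Ship the change to the replica after an artificial delay.
        threading.Timer(self._lag, self._replica.__setitem__, (key, value)).start()

    def read_primary(self, key: str) -> str | None:
        return self._primary.get(key)

    def read_replica(self, key: str) -> str | None:
        return self._replica.get(key)

store = ToyReplicatedStore(replication_lag_s=0.5)
store.write("profile:alice", "new bio")

print(store.read_primary("profile:alice"))  # 'new bio'
print(store.read_replica("profile:alice"))  # likely None: replica hasn't caught up yet

time.sleep(1.0)
print(store.read_replica("profile:alice"))  # 'new bio' once replication completes
```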
Sure, if the performance characteristics were the same, people would go for strong consistency. The reason so many different consistency models have been defined is that they make different tradeoffs, and which tradeoff is preferable depends on the problem domain and its specific business requirements.
It's only when several frames in a row are dropped that people start to notice, and even then they rarely care as long as the video still carries enough information for them to make an (educated) guess at what they missed.
A minor annoyance, maybe; rage-quit the application? Not a chance.
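That tolerance can even be written down. A sketch of the idea (the three-frame threshold is arbitrary, not taken from any real player): conceal isolated drops by repeating the last good frame, and only treat a longer run of drops as a visible failure.

```python
MAX_CONSECUTIVE_DROPS = 3  # arbitrary tolerance, for illustration only

def render_stream(frames: list[bytes | None]) -> None:
    """Render a sequence of frames where None marks a frame that never arrived.

    A few dropped frames are concealed by repeating the last good frame;
    only a run of drops longer than the tolerance is treated as a stall
    the viewer will actually notice.
    """
    last_good: bytes | None = None
    consecutive_drops = 0

    for i, frame in enumerate(frames):
        if frame is not None:
            last_good = frame
            consecutive_drops = 0
            print(f"frame {i}: rendered")
        else:
            consecutive_drops += 1
            if consecutive_drops <= MAX_CONSECUTIVE_DROPS and last_good is not None:
                print(f"frame {i}: dropped, repeating last good frame")
            else:
                print(f"frame {i}: stall visible to the viewer")

render_stream([b"f0", None, b"f2", None, None, None, None, b"f7"])
```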
Suppose we're talking about multiplayer game networking, where the central store receives torrents of UDP packets and it's assumed that roughly half of them will never arrive. It doesn't make sense to view this as "we don't care about the player's actual position". We do. The system just has tolerances for how often those updates must be communicated successfully. Lost packets do not make the system incorrect.
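One way to make that tolerance explicit (a sketch with invented names and an arbitrary 250 ms window): the server keeps only the latest state per player and flags a player as stale when no update at all has arrived inside the window, rather than reacting to any individual lost or out-of-order packet.

```python
import time

STALE_AFTER_S = 0.25  # tolerance: how long we'll go without any update (illustrative)

class PlayerStateTable:
    """Server-side view of player positions fed by unreliable UDP updates.

    Individual lost packets are expected and ignored; the system is only
    considered to be failing a player when no update has arrived within
    the tolerance window.
    """

    def __init__(self):
        self._state: dict[str, tuple[float, float, float]] = {}  # player -> (x, y, last_seen)
        self._last_seq: dict[str, int] = {}

    def on_packet(self, player: str, seq: int, x: float, y: float) -> None:
        # Out-of-order or duplicate packets are dropped; newer ones win.
        if seq <= self._last_seq.get(player, -1):
            return
        self._last_seq[player] = seq
        self._state[player] = (x, y, time.monotonic())

    def stale_players(self) -> list[str]:
        now = time.monotonic()
        return [p for p, (_, _, seen) in self._state.items() if now - seen > STALE_AFTER_S]

table = PlayerStateTable()
table.on_packet("alice", seq=10, x=1.0, y=2.0)
table.on_packet("alice", seq=9, x=0.0, y=0.0)  # late packet: ignored, and that's fine
print(table.stale_players())                   # [] while updates keep arriving in time
```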