
withinboredom
A few milliseconds of difference (which is about the best you can get with NTP) can mean all the difference in the world at high enough throughput. When you can control the network cards and time sources, you can get to within a few nanoseconds across an entire datacenter, with monitoring to drain a node if its clock skew gets too high.
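
Rough sketch of that drain check, assuming you already have some way to measure the local clock's offset against the reference time source (the class name and threshold here are made up):

    import java.time.Duration;

    // Hypothetical skew guard: measuredOffset would come from whatever
    // monitoring you run against the datacenter's time source.
    public class ClockSkewGuard {
        private static final Duration MAX_SKEW = Duration.ofMillis(1); // made-up budget

        public static boolean shouldDrain(Duration measuredOffset) {
            // Drain the node once the absolute offset exceeds the budget.
            return measuredOffset.abs().compareTo(MAX_SKEW) > 0;
        }
    }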

slt2021
And why are applications so sensitive to such small differences in time?

Seems like poor engineering practice.

preseinger
usual problem is when you try to model logical causality (a before b) with physical time (a.timestamp < b.timestamp)

logical causality does not represent poor engineering practice :)
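
A toy illustration of how that goes wrong (the skew numbers are made up): a happens-before b, yet b gets the smaller wall-clock timestamp.

    // Toy example: machine A's wall clock runs 50 ms ahead of machine B's.
    // Event a happens on A, then a message triggers event b on B 10 ms later,
    // so a happens-before b causally -- but the timestamps say otherwise.
    public class SkewDemo {
        public static void main(String[] args) {
            long trueTimeMs = 1_000_000;        // some instant, in ms
            long skewA = 50, skewB = 0;         // made-up clock offsets

            long aTimestamp = trueTimeMs + skewA;         // a recorded on A
            long bTimestamp = (trueTimeMs + 10) + skewB;  // b recorded on B

            // Causally a -> b, yet a.timestamp < b.timestamp does not hold.
            System.out.println(aTimestamp < bTimestamp);  // prints false
        }
    }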

slt2021
this only applies if you carry over physical time from one machine to another, assuming perfect physical synchronization of time.

if you stick to a single source of truth - only one machine's time is used as a source of truth - then the problem disappears.

for example, instead of using java/your-language's time() function (which could be out of sync across different app nodes), just use the database's internal CURRENT_TIMESTAMP() when writing to the db.
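
Something like this (table and column names invented): the row gets stamped by the database's clock, so every writer shares one source of truth for time.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class EventWriter {
        // Table/column names are made up for illustration.
        static void recordEvent(Connection conn, String payload) throws SQLException {
            // created_at is filled in by the database server's clock,
            // not by the app node's possibly-skewed local clock.
            String sql = "INSERT INTO events (payload, created_at) VALUES (?, CURRENT_TIMESTAMP)";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, payload);
                stmt.executeUpdate();
            }
        }
    }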

another alternative is to compare timestamps at only minute or hour precision, if you carry over time from one machine to another. That way you have a buffer of time for the different machines to synchronize their clocks over NTP.

preseinger
if you can delegate synchronization/ordering to the monotonic clock of a single machine, then you should definitely do that :)

but that's a sort of trivial base case -- the interesting bit is when you can't make that kind of simplifying assumption

plandis
You can rephrase this question in terms of causality. Why does it matter that we know whether some process happens before some other process at some defined(?) level of precision?

There are ways around this, but they are restrictive or come at the cost of increased latency. Sometimes those are acceptable trade-offs and sometimes they are not.

slt2021
root of the problem is using clocks from different hosts (which could be out of sync) and carrying that time over from one machine to another - essentially assuming clocks across different machines are perfectly synchronized 100% of the time.

if you use a single source of truth for clocks (the simplest example is using the RDBMS's current_timestamp() instead of your programming language's time() function), the problem disappears

justsomehnguy
Imagine you have an account holding $200.

Now two operations come in, one adding $300, the other withdrawing $400. What would the result be, depending on the order of operations?
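
Concretely, assuming the (made-up) rule that a withdrawal which would overdraw the account is rejected, the two orderings end at different balances:

    public class OrderMatters {
        // Assumed rule for this sketch: reject any operation that would go negative.
        static long apply(long balance, long delta) {
            long next = balance + delta;
            return next < 0 ? balance : next;
        }

        public static void main(String[] args) {
            long start = 200;
            long depositFirst  = apply(apply(start, +300), -400); // 200 -> 500 -> 100
            long withdrawFirst = apply(apply(start, -400), +300); // 200 -> 200 (rejected) -> 500
            System.out.println(depositFirst + " vs " + withdrawFirst); // 100 vs 500
        }
    }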

shsbdncudx
It is, agree. Imho in most cases the right answer is to build it to not require that kind of clock synchronisation.

You can build systems that do not require physical clock synchronization, but using physical clocks often leads to simpler code and a major performance advantage.

That's why Google built TrueTime, which provides a physical-time guarantee of [min_real_timestamp, max_real_timestamp] for each timestamp instant. You can easily know the ordering of two events by comparing the bounds of their timestamps, as long as the bounds do not overlap. To achieve that, Google tries to keep the bounds as small as possible, using the most accurate clocks they can find: atomic clocks and GPS.
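
A minimal sketch of the interval idea (not Google's actual TrueTime API, names invented): every event carries [earliest, latest] bounds, and the ordering of two events is only decided when their intervals don't overlap.

    import java.util.Optional;

    // Sketch of TrueTime-style interval timestamps.
    // The timestamp is a bound [earliest, latest] on the real time an event occurred.
    record IntervalTimestamp(long earliestMicros, long latestMicros) {

        // Returns true/false only when the intervals do not overlap.
        static Optional<Boolean> happenedBefore(IntervalTimestamp a, IntervalTimestamp b) {
            if (a.latestMicros() < b.earliestMicros()) return Optional.of(true);
            if (b.latestMicros() < a.earliestMicros()) return Optional.of(false);
            return Optional.empty(); // bounds overlap: ordering is unknown
        }
    }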

plandis
Yes, that is essentially the point of logical clocks :)
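
For example, a Lamport clock is just a per-node counter that is bumped on every event and merged on receive, which preserves happens-before without touching the wall clock (minimal sketch, not a full implementation):

    import java.util.concurrent.atomic.AtomicLong;

    // Minimal Lamport clock: a per-node counter instead of wall-clock time.
    // If event a happened-before event b, a's value is strictly less than b's.
    public class LamportClock {
        private final AtomicLong counter = new AtomicLong();

        // Called for every local event, including sends; attach the value to the message.
        public long tick() {
            return counter.incrementAndGet();
        }

        // Called when a message arrives carrying the sender's counter value.
        public long receive(long remote) {
            return counter.updateAndGet(local -> Math.max(local, remote) + 1);
        }
    }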
