Networked clock synchronization with ClockDelta object


I am currently attempting to synchronize the notion of the clock between multiple clients connected through the Panda distributed object model.

The short version of my question is this: how is the Uncertainty value of a ClockDelta object supposed to be set? It is initialized to None and never changes from there; following through the code, I can find no path that will cause it to be set to anything else. As such the ClockDelta object never… does anything, because its uncertainty is infinite.

The long version of my question follows (to get some background of what I am trying to do):

My first step, before trying to implement something with ClockDelta, was to develop a rudimentary system by creating my own distributed object, an instance of which would live on each client, which sends a “ping” and “ping reply” message (with a timestamp in each direction) in order to calculate approximate network latency and clock delta. I have this completed, and it works, although it is of course not terribly accurate due to the fact that it only takes one sample.
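For reference, the single-sample ping approach described above boils down to a couple of lines of arithmetic. The function name and arguments below are hypothetical, not part of Panda's API; it's just a sketch of the calculation:

```python
def estimate_offset(t_send, t_server, t_recv):
    """From one round trip -- our send time, the peer's reported timestamp,
    and our receive time -- estimate network latency and clock delta."""
    latency = (t_recv - t_send) / 2.0            # assumes a symmetric path
    # The peer sampled its clock somewhere inside the round trip; use the
    # midpoint of our local window as the best single-sample guess.
    delta = t_server - (t_send + t_recv) / 2.0
    return latency, delta
```

With only one sample, the estimate can be off by up to half the round-trip time, which is exactly the inaccuracy mentioned above.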

The ClockDelta class shipped with Panda, however, appears to be a considerably more robust version of the same concept. I’ve been studying it and trying to determine how it is meant to be used, and although I understand it for the most part, I haven’t been able to get it to work. I note that the DistributedSmoothNode class makes use of the ClockDelta object and I have been studying that as well, which has helped, but I feel I am still missing a piece of the puzzle.

Without going into too many details of my implementation, I have attached a timestamp value to each distributed message passed between my clients. I have started out by borrowing most of the code from the setComponentTLive method of DistributedSmoothNode in order to, presumably, initiate a resync when the remote timestamp appears to have drifted too far from our local clock.

The problem I am encountering is related to the fact that all of the ClockDelta synchronization methods expect an Uncertainty value to be passed along. My understanding is that, essentially, you pass the uncertainty value along with the timestamps etc. from one client to another and this is then used as part of the synchronization calculations. However, the ClockDelta class initializes uncertainty to “None” (which is essentially Infinity) and an unknown uncertainty can’t be used for synchronization. In fact the setComponentTLive method that I am using as a model specifically checks for an unknown uncertainty, and does absolutely nothing if it finds it.

Tracing through the code though, I can’t conceive of anything I can do to get the uncertainty value “primed”. The only method I see that sets uncertainty is newDelta, and the only methods I can see that invoke newDelta already expect to know what the uncertainty is. Unless I am missing something, the only way to prime it would be for me to just outright set it manually, though I have no idea what I’d set it to.

Hopefully this makes sense and someone can point me in the right direction. To me, ClockDelta certainly seems like the right tool for the task, if I can make it work. Any help is appreciated!

  • lem

You’re right, you’re missing some pieces. The pieces that get the ball rolling are handled by our application startup code, which is not part of the public Panda distribution. We should probably move the relevant code into the public Panda distribution so it will be useful, but we haven’t had a chance to separate it out properly. I’ll just post the high-level idea here for you.

On the client, there’s a special DistributedObject that has the following methods:

    def synchronize(self):
        now = globalClock.getRealTime()

        self.attemptCount = 0
        self.notify.info("Clock sync begin")
        self.start = now
        # Ask the server for its current time.
        self.sendUpdate("requestServerTime")

    def serverTime(self, timestamp):
        end = globalClock.getRealTime()
        elapsed = end - self.start
        self.attemptCount += 1
        self.notify.info("Clock sync roundtrip took %0.3f ms" % (elapsed * 1000.0))

        average = (self.start + end) / 2.0 - self.extraSkew
        uncertainty = (end - self.start) / 2.0 + abs(self.extraSkew)

        globalClockDelta.resynchronize(average, timestamp, uncertainty)

        if globalClockDelta.getUncertainty() > self.maxUncertainty:
            if self.attemptCount < self.maxAttempts:
                self.notify.info("Uncertainty is too high, trying again.")
                self.start = globalClock.getRealTime()
                self.sendUpdate("requestServerTime")
                return
            self.notify.info("Giving up on uncertainty requirement.")

        self.notify.info("latency %0.0f ms, sync ±%0.0f ms" % (
            elapsed * 1000.0, globalClockDelta.getUncertainty() * 1000.0))

And this is met with the following code on the server:

    def requestServerTime(self):
        timestamp = globalClockDelta.getRealNetworkTime(bits=32)
        requesterId = self.air.getAvatarIdFromSender()
        self.sendUpdateToAvatarId(requesterId, "serverTime", [timestamp])

The ball starts rolling when the client starts up and establishes communication with the server. One of the first things the client does is call synchronize() on itself. This stores the current local timestamp, then sends a “requestServerTime” message to the server. The server immediately responds with a “serverTime” message back to the client that contains the server’s local timestamp.

When the client receives the “serverTime” message, it looks at its current local timestamp again and uses that to compute the round-trip time, which is used to determine the uncertainty factor (since the server could have sampled its own time at any point within that round-trip). If the uncertainty is too high, it tries again; otherwise, it accepts the result, and the clocks are now synchronized.
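The arithmetic behind that round-trip calculation can be checked with a small standalone example (plain Python, no Panda objects; the function name is made up for illustration, with extraSkew defaulting to zero):

```python
def sync_window(start, end, server_ts, extra_skew=0.0):
    # The midpoint of the round trip is our best guess for when the server
    # sampled its clock; half the round trip bounds how wrong that guess is.
    average = (start + end) / 2.0 - extra_skew
    uncertainty = (end - start) / 2.0 + abs(extra_skew)
    delta = server_ts - average   # server clock minus local clock
    return delta, uncertainty

# A 100 ms round trip starting at local time 10.0; server reports 110.05:
delta, uncertainty = sync_window(10.0, 10.1, 110.05)
# delta is 100.0 s, uncertainty 0.05 s (half the round trip)
```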

You may have to adapt the above code into your own application startup process, but that’s the general idea, at least.


Thanks David!

That makes a lot more sense. In fact I was beginning to experiment with an initialization scheme very similar to the one you just described; the equations I came up with were the same, so I’m sure I’m on the right track now. Basically I used my “dumb” original synchronization routine to calculate an initial delta and uncertainty, which is essentially the same thing your startup routine does. I haven’t got it all tied together yet, but I’m confident I can make it work.

Thanks again,

  • lem

One more question that I’ve been pondering while implementing this is what form of time to base the timestamp on: frame time or real time.

In the initialization code you provided, the timestamps used in the calculation are derived from globalClock.getRealTime()-- i.e., the time as of the moment the timestamp is recorded.

In cDistributedSmoothNodeBase.cxx however, I note that each packet is timestamped with the C++ equivalent of globalClock.getFrameTime()-- i.e., the time as of the beginning of the current frame.

The latter method-- getFrameTime()-- makes slightly more sense to me, since the networking engine is all being driven off of the Task Manager, and therefore an indeterminate amount of ‘other’ code has already executed before we record the timestamp. getFrameTime() therefore seems to me to be a more consistent measure.

Would this conclusion be correct, or is there actually a reason for using getRealTime() over getFrameTime() as the timestamp?
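The distinction can be illustrated with a toy model of the clock (this is an assumption about the behavior, not Panda's actual ClockObject implementation): the frame time is latched once per frame, so every task in that frame sees the same value, while the real time keeps advancing between calls.

```python
import time

class ToyClock:
    """Toy model: frame_time is sampled once per frame, real_time always now."""
    def __init__(self):
        self._t0 = time.monotonic()
        self._frame_time = 0.0

    def tick(self):
        # Called once at the top of each frame.
        self._frame_time = time.monotonic() - self._t0

    def get_frame_time(self):
        return self._frame_time            # stable within a frame

    def get_real_time(self):
        return time.monotonic() - self._t0  # depends on when you ask

clock = ToyClock()
clock.tick()
a = clock.get_frame_time()
time.sleep(0.01)        # some 'other' task code runs first...
b = clock.get_frame_time()
# a == b: two tasks in the same frame stamp their packets identically,
# whereas get_real_time() would return different values for each.
```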


  • lem.

Right, getFrameTime() is probably smarter.