Linux: globalClock adjustment at start

It seems that the clock isn’t adjusted correctly, or not adjusted at all, on Linux, even though I get those two :util warning lines. The proof is that a doLater task or a wait interval is run before its set time.
Is it a bug, or is my build bad?

I’ve noticed on Windows that sometimes the global clock doesn’t get set/reset, or is set wrong, as seen in the huge delta times returned by the global clock on the first frame…

If this happens, it’s usually the result of someone mucking around with directly assigning task.wakeTime, or some such shenanigans, which invalidates the heap queue used internally by the task manager and causes tasks to wake early or late.
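The heap problem described here can be demonstrated with plain Python. This is a minimal sketch using the stdlib heapq module as a stand-in for the task manager's internal queue; the wake times and task names are made up for illustration:

```python
import heapq

# A minimal stand-in for the task manager's priority queue of
# (wakeTime, task) entries.
tasks = [(5.0, "taskA"), (1.0, "taskB"), (3.0, "taskC")]
heapq.heapify(tasks)

# While the heap invariant holds, pops come out in wake-time order:
assert heapq.heappop(tasks)[1] == "taskB"  # earliest wake time first

# Rewriting a wake time in place -- the equivalent of assigning
# task.wakeTime directly -- silently breaks the invariant:
tasks[0] = (10.0, tasks[0][1])

# Now the task popped as "earliest" is not actually the minimum:
print(heapq.heappop(tasks))  # (10.0, 'taskC'), though (5.0, 'taskA') remains
```

Once an entry’s key has been changed in place, the heap can no longer guarantee that the smallest wake time sits at the root, which is exactly why tasks start waking early or late.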

If you’re referring to the warning message at startup that looks like:

:util(warning): Adjusting global clock's real time by 0.75049 seconds.

That’s a perfectly normal message. It gets printed whenever you pause the interpreter, or incidentally at startup time. It doesn’t indicate there’s anything wrong with the clock.

You might get erratic delta times on the first frame, depending on how your application starts up. If you just call run() exactly once you should be OK.

On some Windows multi-core machines, the clock can behave erratically, due to a bug in Windows itself. There are functions in Panda to try to compensate for this, but the best solution is usually just to lock the process to one core. I believe Panda does this by default.


Oh, there is one renderFrame() call before run(). To solve that, I have to reset the frame time back to what it was before the call.

# I have to restore the frame time, otherwise it causes me trouble later
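In full, that workaround might look something like this. It is a sketch, not the poster’s exact code; it assumes the usual globalClock builtin, a ShowBase instance named base, and uses ClockObject’s getRealTime()/setRealTime() to save and restore the clock:

```python
# Remember the clock before the early render call, then restore it,
# so the loading time doesn't count as elapsed time for doLater
# tasks and wait intervals.
t = globalClock.getRealTime()
base.graphicsEngine.renderFrame()  # the one render call before run()
globalClock.setRealTime(t)
base.run()
```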

The weird thing is it doesn’t matter on Windows. How come?

Hmm, I wasn’t aware of that. Might be a bug in the glxGraphicsWindow, or some such. Maybe it’s forcing a call to render on window creation? Weird.


Sorry, I was wrong. There is nothing wrong with the clock at all.
I’ve investigated it more closely. It does happen on Windows too, if I use a shorter wake time. This makes good sense, because loading takes longer on Linux than on Windows, since the model cache is disabled on Linux.


And here I was wondering why all Panda3D programs start on the first core. Is there a way to disable this, or will the timer blow up? Maybe disable it after reading the timer? What is the nature of the bug? Can I have more info on this?

It’s an inherent bug in some multicore motherboards that manifests only on Windows. Or maybe it’s an inherent bug in Windows that manifests only on some multicore motherboards, depending on who you ask.

The nature of the bug is that the Windows call QueryPerformanceCounter(), which Panda requires to get a high-resolution timer (as do most games and game engines), is implemented on some motherboards by querying a timer built into the CPU. But when the OS migrates the process from CPU 0 to CPU 1 (for instance), suddenly it’s querying a different timer, which isn’t in sync with the first timer. Result: the time measurement suddenly jumps forward or backward by a random number of seconds.

If you don’t lock the process to one CPU, the OS is free to migrate it back and forth as it sees fit. It might hop back and forth several times a minute, and each time it does, the clock jumps wildly.
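For illustration, the same pinning can be done by hand at the OS level. Here is a minimal sketch on Linux using only the stdlib; it mimics the idea behind Panda’s lock-to-one-cpu, not Panda’s actual Windows implementation:

```python
import os

# Pin this process to a single core, so a per-CPU timer is always
# read from the same core's counter.  Same idea as Panda's
# lock-to-one-cpu, done by hand (Linux-only stdlib calls).
first_cpu = min(os.sched_getaffinity(0))  # pick a CPU we may use
os.sched_setaffinity(0, {first_cpu})      # 0 means "this process"
assert os.sched_getaffinity(0) == {first_cpu}
```

With the affinity mask reduced to one CPU, the scheduler can no longer migrate the process, so the timer readings all come from the same counter.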

There are service patches that can be applied to fix it, but you have to go and find them. Most people, naturally, don’t have these installed.

Only a small percentage of computers demonstrate this problem. If your computer doesn’t have this problem (or you’ve installed the appropriate patch to fix it), and you don’t care about your application running properly on anyone else’s computer, you can turn off this locking with:

lock-to-one-cpu 0

in your Config.prc file.

Of course, there’s no particular performance advantage to turning off this feature, unless you want to run multiple different Panda instances simultaneously, or you have compiled your own custom Panda with thread support enabled and you intend to use multiple threads.


To solve this problem easily, I only need to use slave mode right after the ShowBase instance is created, and set it back to normal mode before run(). That works great, but when I used loadPrcFileData to set it to slave mode, it didn’t work. It works only if I set it in the physical prc file. Is it a very special case, or simply a leftover bug?

There are certain prc variables that are not queried dynamically at runtime, but are only sampled at application startup, and their values saved. clock-mode is one such variable. Thus, setting it via loadPrcFileData() is too late: it’s already been sampled.
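That is why the physical prc file works: it is read at application startup, before the clock samples the variable. A sketch of the relevant line (assuming the slave mode used in the workaround above):

```
# Config.prc -- read at startup, before clock-mode is sampled;
# setting this via loadPrcFileData() at runtime is too late.
clock-mode slave
```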


Haa, as I guessed. isDynamic() returns 0, as it does for text-flatten, but that one can be queried dynamically, and the result is obvious.

lock-to-one-cpu 0

does not work for me; the process is still locked to the first core.