Is there an easy way to retrieve the timing of buffer flips? It seems pstats keeps track of this, but I need the information on the client side and have no need to run a pstats server.
The PStats ‘server’ runs locally and you can run a client on the same computer. So you can keep everything client-side.
Just use want-pstats 1 in the Config.prc (or one of the other ways described in the manual), then run “pstats.exe” (or “pstats” on linux/mac) and you will be able to see the data.
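For reference, enabling PStats is a one-line config change (a sketch; `want-pstats` is the documented config variable, and the runtime alternative shown in the comment should be verified against your Panda3D version):

```
# Config.prc — tell this process to report to a PStats server
want-pstats 1

# Or, at runtime from Python (before the frames you want captured):
#   from panda3d.core import PStatClient
#   PStatClient.connect()
```

Then start the `pstats` viewer first, and run your program.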
There is significant overhead in doing this. I need this information for every frame (the PStats server seems to capture only a set number of frames and interpolate the results), and without the network overhead (even if it’s just between two local ports).
PStats does no frame interpolation, although it’s true that if you have an extremely high frame rate, more than 100fps, it might need to drop a few frames on the floor. It normally reports each frame precisely, though.
Also, “network bandwidth” on the local machine doesn’t involve your network card, and thus isn’t even related to what people normally call “network bandwidth”, and it is unlikely to impact any resource on your machine measurably. Furthermore, PStats limits itself to a specific (extremely low) bandwidth consumption anyway (which is why it needs to occasionally drop frames when the frame data is coming too fast).
But PStats is intended for human observation of render performance, for the purposes of scene optimization. If you’re doing measurements that really require data for each and every frame, presumably you’re not talking about human observation of render performance. Out of curiosity, what exactly are you doing?
But to answer your question, if you want to time the frame flip precisely without using PStats, you will have to insert the timing measurements inside the C++ code, since the flip call is handled entirely in C++. This is easy to do; just look in graphicsEngine.cxx for all the occurrences of “flip” (you’ll see the PStats timers already in place). Insert your own timers in the same place, and you’ll be golden.
Of course, you’ll need to be comfortable with editing C++ code, and compiling Panda from scratch. Is this really easier than using PStats?
Thanks for the reply. The setting I’m working in requires logging exactly when a particular 2D object (parented to aspect2d) is created/comes on screen, with as little latency as possible from when it actually happens. Although modifying the source code would not be a problem, I’m leaving it as a last resort for a few reasons, the most important being that I’d like to keep updates for my users as hassle-free as possible (i.e. they can download from here instead of waiting for me to patch the latest version). I thought of another way of doing this and wanted to know your thoughts. It looks like igLoop, the task that actually calls renderFrame() (which in turn seems to fill the back buffer and issue the flip), has a priority of 50. If I add a task with priority > 50 and record getFrameTime(), will I get a good estimate (say, within 10 ms)? Or is there enough going on between tick() and the buffer flip (and what does go on there?) for this to be inaccurate?
Or, to be sure that a flip has occurred but be a little less accurate (it’s more important to be sure a flip occurred than being close to the event - though we want to be as close as possible), can I record getFrameTime() in such a task and count it as a flip for the frame before? At that point, is it guaranteed that the buffer has flipped to show the previous frame (i.e. the flip call was made, I realize they are non-blocking)? This seems to give “close” results too; empirically, getDt() on the machines this will run on is ~9e-05.
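To make the proposal concrete, here is a minimal sketch of such a task (names are illustrative; in Panda3D the time source would be `globalClock.getFrameTime` and the task would be added via `taskMgr`, where newer versions call the `priority` keyword `sort`):

```python
import time

flip_times = []  # one timestamp appended per frame

def record_flip_time(task, now=time.perf_counter):
    # In Panda3D, pass now=globalClock.getFrameTime. Since igLoop runs
    # at priority/sort 50, a task added at 51 runs right after
    # renderFrame() each frame.
    flip_times.append(now())
    return task.cont  # Task.cont in Panda3D; keeps the task running

# Panda3D hookup (sketch):
# taskMgr.add(record_flip_time, "recordFlipTime", priority=51)
```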
Sorry, yet another revision to my last two posts. If I set auto-flip to true, can I do what I said two posts ago and record it in a task “that same frame” instead of waiting for the next?
Ah, I see.
With auto-flip false (the default), renderFrame() will call flip() to present the previous frame, then issue the commands to draw the current frame, and then tick the clock in preparation for drawing the next frame.
(Note that “tick the clock” means copying the current time, as reported by getRealTime(), into the held value reported by getFrameTime(). This value is considered the “start time” of the currently-computing frame, and is used to compute animations, etc., for things to be drawn in this frame.)
This means that the flip will occur roughly at the beginning of renderFrame(), and a task of priority 49 that stores the value of getRealTime() will often get a fairly accurate measurement of the currently-visible frame’s presentation time. However, since the graphics card might still have been drawing at the time we called flip(), there may be cases where there is a delay of several milliseconds between the call to flip() and the actual presentation of the frame. So this approach will be accurate sometimes and inaccurate other times, depending largely on the scene complexity.
With auto-flip true, renderFrame() will draw the frame, call flip(), and then tick the clock. This is a much better situation for your needs. It means that, at any given time, getFrameTime() will return very close to the presentation time of the currently-visible frame, even if the call to flip() does not return right away.
Thanks again for all of your help. Here’s what I ended up doing… I was hoping you can comment on its correctness.
- Leave auto-flip as False
- Create a priority 51 task that:
- Records the time
- Times the call to base.graphicsEngine.flipFrame()
- Gets an interval of when the flip was issued: (base, base+callTime).
Looking through the code, it looks like Panda should mark my “manual” flip and only worry about actual rendering in igLoop. Is this right?
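A sketch of the steps above (the interval helper is plain Python; `base.graphicsEngine.flipFrame()`, `taskMgr`, and the `priority`/`sort` keyword are the Panda3D pieces discussed in this thread, so treat the hookup lines as assumptions about your setup):

```python
import time

def timed_flip(flip_call, now=time.perf_counter):
    """Issue a flip and return the (start, end) interval bracketing the call.

    The flip was issued somewhere inside this interval; actual
    presentation may trail it by whatever latency the driver adds."""
    start = now()
    flip_call()
    end = now()
    return (start, end)

flip_intervals = []

def flip_task(task):
    # With auto-flip left False, igLoop (priority 50) has drawn the
    # frame but not flipped it yet, so this priority-51 task issues
    # the flip manually and records when it happened.
    flip_intervals.append(timed_flip(base.graphicsEngine.flipFrame))
    return task.cont

# taskMgr.add(flip_task, "flipTask", priority=51)
```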
Ah, I forgot about graphicsEngine.flipFrame(). Yes, that should be fine. Calling this explicitly is roughly equivalent to setting auto-flip true.
Not sure if the time before the call to flip is important to you, though. Isn’t the only important time value the time after flip finished? Any time spent in flip itself is just time spent waiting for the graphics card to do whatever it needs to do, and has little to do with when the frame actually flips.