My Panda3D application (Panda3D 1.10.8, Python 8.7, Windows 10, Intel integrated GPU) shows a frame rate (as displayed by the FrameRateMeter) that declines over time. I narrowed down the cause by disabling much of the app's functionality until I found that managing building-floor LoD as the camera moves around is what drives the decline. That is, with the higher-level functionality disabled:
- Frame rate is constant if the camera doesn't move.
- If floor display is disabled and I move the camera around the world for 15 minutes, the frame rate varies somewhat with scene complexity, but returns to the vsynced 60 fps whenever the displayed scene is not complex.
- If floor display is enabled, the maximum achievable frame rate declines continuously as I move the camera through the world. If I then move the camera to a simple view and stop moving it, the frame rate remains degraded.
During three 10–15 minute tests, the frame rate started at a vsynced 60 fps and declined to 35, 37 and 36 fps. Per Windows Task Manager statistics for the application:
- CPU% started at about 19% and dropped by about 0.5% when idle at the end of the test.
- Application memory started at 1825 MB, rose to 2750–2830 MB, and then fell back below 1800 MB after 10 minutes of inactivity.
- GPU% started at (22.5, 25, 24)%, fell to (13, 16, 15)%, and did not recover.
I repeated the test on a Linux laptop with an NVIDIA GPU. The numbers were different (158 fps at the start, falling to 60 fps at the finish, without vsync), but the same symptom appeared: frame rate declines with usage.
Output from render.ls() captured at the start and end of one of the runs is nearly identical, with a 6-node difference due to me not getting the camera back to exactly the starting point.
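For reference, the two dumps can be compared textually; here is a minimal sketch of that comparison using Python's difflib (the stand-in dump strings below are illustrative placeholders, not real render.ls() output):

```python
import difflib

def diff_dumps(before: str, after: str) -> list:
    """Return unified-diff lines between two scene-graph text dumps."""
    return list(difflib.unified_diff(
        before.splitlines(), after.splitlines(),
        fromfile="start", tofile="end", lineterm=""))

# Toy stand-in dumps (not real render.ls() output):
start = "render\n  camera\n  building_042\n    floor_tile_a"
end = "render\n  camera\n  building_042"
for line in diff_dumps(start, end):
    print(line)
```

An empty diff confirms the graphs match; any `-`/`+` lines pinpoint nodes that differ between the two captures.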
My Panda3D app has 100+ multi-floor buildings. Each has a transform into local building space, and the building's walls, floors, etc. are defined within that space. Each building exterior is displayed in one of two resolutions managed by a Panda3D LODNode.
Floors are viewed at highly oblique angles, and the detailed textures for several buildings are quite large, so I decided to manage floor texture resolution myself. Each floor in a building is displayed as a small array of rectangular tiles with applied textures. For each tile location in a floor array, I pre-built 6 rectangles that are identical except for the power-of-2 resolution of their texture. As the camera approaches a building, I add the floor tiles to the building node and manage the resolution at each tile location by swapping a tile out via detachNode() and swapping in the tile with the appropriate resolution texture via reparentTo(building_node). When the camera moves away from a building, I detachNode() all of the building's floor tiles from the building_node.
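For illustration, the per-tile selection and swap logic is roughly the following sketch. The function names, distance thresholds, and base distance are made-up placeholders (my app uses its own criteria), and the actual Panda3D detachNode()/reparentTo() calls are shown as comments so the selection logic runs stand-alone:

```python
import math

# Hypothetical resolutions: 6 pre-built tiles per location, one per power of 2.
RESOLUTIONS = [1024, 512, 256, 128, 64, 32]  # finest first

def pick_resolution(distance: float, base: float = 50.0) -> int:
    """Choose a tile texture resolution from camera distance.

    Each doubling of distance past `base` coarsens by one step,
    clamped to the coarsest pre-built tile. Thresholds are illustrative.
    """
    if distance <= base:
        return RESOLUTIONS[0]
    step = int(math.log2(distance / base)) + 1
    return RESOLUTIONS[min(step, len(RESOLUTIONS) - 1)]

def update_tile(tiles_by_res, current, distance, building_node=None):
    """Swap the displayed tile when the desired resolution changes.

    `tiles_by_res` maps resolution -> pre-built tile node. In the real
    app the two commented lines are the Panda3D scene-graph calls;
    here they are stubbed so the logic is testable on its own.
    """
    wanted = tiles_by_res[pick_resolution(distance)]
    if current is wanted:
        return current
    # if current is not None: current.detachNode()
    # wanted.reparentTo(building_node)
    return wanted
```

Calling update_tile once per tile per frame (or on camera-movement events) is what produces the detach/reparent traffic described below.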
The accumulating decline in frame rate is strongly associated with the accumulated volume of detachNode() and reparentTo() activity. Any ideas on how to maintain performance?
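For what it's worth, the "accumulated volume" of swap activity above can be tracked with a simple call counter; a minimal sketch (SwapCounter is my own hypothetical wrapper, not a Panda3D API; the real scene-graph calls are the commented lines):

```python
class SwapCounter:
    """Counts detach/reparent operations so their cumulative volume
    can be compared against the observed frame-rate decline."""

    def __init__(self):
        self.detaches = 0
        self.reparents = 0

    def detach(self, tile):
        self.detaches += 1
        # tile.detachNode()  # the real Panda3D call

    def reparent(self, tile, building_node):
        self.reparents += 1
        # tile.reparentTo(building_node)  # the real Panda3D call

counter = SwapCounter()
counter.detach("tile_a")
counter.reparent("tile_b", "building_042")
print(counter.detaches, counter.reparents)  # -> 1 1
```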