I cannot remember exactly when this started occurring, possibly when I began using simplepbr. However, consistently across my Actor models, I now get a ~15 fps drop when playing even the simplest animations. I can verify that it is indeed the animation play cycle that causes the fps drop. Sometimes the animation causes a “phantom lag” for a moment afterwards and then the fps is back up to 60.
I have not tested this with low-polygon models. I am willing to drop my polygon counts if this turns out to be some kind of vertex-processing limit.
Firstly, note that it is not useful to measure a performance loss in FPS, since FPS is a non-linear scale. It is more useful to express the difference in milliseconds, which you can calculate as 1000 / fps, or by setting frame-rate-meter-milliseconds true in Config.prc. For example, the gap between 45 and 60 fps is much smaller than the one between 15 and 30 fps.
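As a quick sanity check, the conversion is simple arithmetic:

```python
def fps_to_ms(fps):
    """Convert a frame rate to per-frame time in milliseconds."""
    return 1000.0 / fps

# Dropping from 60 to 45 fps costs ~5.6 ms per frame...
print(round(fps_to_ms(60), 1))  # 16.7
print(round(fps_to_ms(45), 1))  # 22.2
# ...while dropping from 30 to 15 fps costs ~33.3 ms per frame.
print(round(fps_to_ms(30), 1))  # 33.3
print(round(fps_to_ms(15), 1))  # 66.7
```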
CPU animation can be slow for heavy models. Please verify with PStats that this is indeed CPU animation time that is causing the slowdown. If so, you can simplify the model geometry, or, you can enable hardware skinning, which is easy to enable and vastly more efficient in the majority of cases.
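For reference, both can be switched on from Config.prc (a sketch; hardware-animated-vertices asks Panda to prefer GPU skinning where the active shader supports it):

```
# Config.prc
want-pstats true
hardware-animated-vertices true
```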
I have tested a low-polygon model now, and the counter stays at 16.7 ms throughout the animation. I’m willing to attribute this to polygon counts, but I would like to understand why my CPU cores all remain under 70% utilization during the more complex model’s lag.
Hard to say without looking at the PStats charts. However, note that unless you enable the multi-threaded render pipeline, Panda will only use one CPU core.
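For completeness, the multi-threaded pipeline is opt-in via Config.prc; Cull/Draw is the commonly used model:

```
threading-model Cull/Draw
```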
Note that for hardware animation to work, you need to call setShaderAuto, or apply a custom shader with a special flag set. I suspect that it has not actually been enabled.
Turns out that just calling character_body.setShaderAuto() where character_body is my Actor model path seems to turn on hardware skinning, with the config variables in place.
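For anyone following along, here is a minimal sketch of that setup (the model path and animation names are placeholders, not from this thread):

```python
from direct.showbase.ShowBase import ShowBase
from direct.actor.Actor import Actor
from panda3d.core import loadPrcFileData

# Must be set before ShowBase opens the window.
loadPrcFileData("", "hardware-animated-vertices true")

base = ShowBase()

# Placeholder file and animation names.
character_body = Actor("character.egg", {"idle": "character-idle.egg"})
character_body.reparentTo(base.render)

# With the config variable above, the shader generator emits a
# skinning-capable shader, moving vertex animation to the GPU.
character_body.setShaderAuto()
character_body.loop("idle")

base.run()
```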
This is an excellent discovery, as I was already knee-deep in writing a custom GLSL shader (which does not seem to be required now). If there are any fatal flaws with this approach, please let me know. The simplepbr rendering seems to be unaffected visually.
simplepbr will be disabled on your model if you enable the shader generator.
It would not be difficult at all; it would just require a few extra lines in the shader and one extra line to set the F_hardware_skinning flag on the ShaderAttrib.
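Concretely, the extra line on the Python side would look something like this (shader filenames are hypothetical, and the GLSL side additionally needs the transform-table inputs to do the matrix blending):

```python
from panda3d.core import Shader, ShaderAttrib

# Hypothetical shader filenames.
shader = Shader.load(Shader.SL_GLSL, vertex="skin.vert", fragment="skin.frag")
actor.setShader(shader)

# Tell Panda the shader performs its own skinning, so the CPU
# stops animating the vertices itself.
attrib = actor.getAttrib(ShaderAttrib)
actor.setAttrib(attrib.setFlag(ShaderAttrib.F_hardware_skinning, True))
```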
I am attempting to get this working with my own files.
So far I get the error AssertionError: false at line 2096 of panda/src/display/graphicsStateGuardian.cxx when I try to apply the simplepbr shader directly with the modifier. I can get the program to run, albeit with GL errors, using a modified version of that Hardware Skinning post you made.
Part of the problem is the lack of an error message on Panda’s end: it doesn’t recognise some of the inputs, particularly p3d_TextureBaseColor and p3d_TextureMetallicRoughness.
simplepbr’s shader loader has a special routine that inserts #define statements redefining p3d_TextureBaseColor to p3d_Texture0, and so on. Because you’re loading the shader outside of simplepbr, those defines are no longer applied, so the shader now references texture inputs that Panda does not recognise.
Thanks. With a few minor adjustments I think I have managed to get the basic simplepbr .vert and .frag shaders working with your Hardware Skinning example.
One weird issue that remains is that the animation stops playing if I explicitly flag the shader with F_hardware_skinning set to True. But as far as I can tell, the animation is being composed on the GPU regardless: it runs butter-smooth with a 5,000-polygon model flipping around.
I’m sure there are issues with this preliminary hack, but this is as far as I got without errors being thrown in the terminal.
However, the animation stops playing when specifically flagged for hardware skinning in the main function. I suspect something’s not being set properly. ; )