I want to optimize the rendering of animated characters (with skeletal animations). Where should I start?
My target is to render 8 characters (without LODs) on screen, each with 70 bones (the default Blender armature without face bones), at 120+ FPS on a GeForce GTX 1050. I did some tests and figured out that performance depends on the bone count and the polygon count. I can’t reduce the number of bones, so I’m going to reduce the number of polygons. How far should I go?
I have checked the polygon counts of characters from some games:
Quake [1996] - 200
Unreal Tournament [1999] - 800
Unreal Tournament 2003 [2003] - 3000
Quake 4 [2006] - 2500
I don’t think I can get below 1000 polygons, so I’m thinking about 2000-3000 polygons per character.
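For a sanity check, here is a quick back-of-the-envelope calculation of that budget (the character count, triangle count, and FPS target are my assumptions from above, not measurements):

```python
# Rough triangle budget for the scene described above (assumed numbers).
characters = 8
tris_per_character = 3000  # upper end of the proposed budget
target_fps = 120

tris_per_frame = characters * tris_per_character
tris_per_second = tris_per_frame * target_fps

print(tris_per_frame)   # 24000
print(tris_per_second)  # 2880000
```

24k triangles per frame is a tiny workload for a GTX 1050, which suggests raw triangle throughput is probably not the bottleneck; the per-vertex skinning cost is the more likely suspect.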
What else should I do? Can I remove some data from the vertex buffer, such as colors and tangents, since I’m not using normal mapping? Would that help?
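Here is my estimate of the per-vertex savings, assuming a packed 8-bit RGBA color column and 32-bit float tangent/binormal columns (these column sizes are my assumptions, not something I've verified against Panda3D's internal formats):

```python
FLOAT = 4  # bytes per 32-bit float

# Columns I would keep (sizes assumed).
kept = {"vertex": 3 * FLOAT, "normal": 3 * FLOAT, "texcoord": 2 * FLOAT}

# Columns I could drop without normal mapping (sizes assumed).
dropped = {"color": 4,             # packed RGBA, 1 byte per channel
           "tangent": 3 * FLOAT,
           "binormal": 3 * FLOAT}

per_vertex_saved = sum(dropped.values())
print(per_vertex_saved)         # 28 bytes per vertex
print(per_vertex_saved * 3000)  # 84000 bytes for a 3000-vertex mesh
```

So the savings are on the order of tens of kilobytes per character, which helps bandwidth a little but probably won't change the frame rate much by itself.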
Does performance depend on the number of frames (the duration) and the frame rate of the currently playing animation? Should I reduce the number of keyframes of the animation and rely on interpolation between frames?
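To illustrate what I mean by relying on interpolation, a minimal sketch in plain Python with a hypothetical single float channel (real animation data interpolates full transforms, and rotations would need slerp rather than lerp):

```python
def sample(keyframes, t):
    """Linearly interpolate a channel stored as (time, value) keyframes."""
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)
    return keyframes[-1][1]  # clamp past the last keyframe

# A densely keyed linear motion vs. just its two endpoints.
dense  = [(i / 30, i / 30 * 90.0) for i in range(31)]  # 30 keys/sec, 0 -> 90
sparse = [(0.0, 0.0), (1.0, 90.0)]                     # endpoints only

print(sample(dense, 0.5), sample(sparse, 0.5))  # 45.0 45.0
```

For motion that is close to linear between poses, the sparse version reproduces the dense one, so dropping keyframes mostly saves memory; whether it saves per-frame CPU time depends on how the engine samples the channels.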
Is the Panda3D animation system poorly optimized? I ask because I’ve seen some people recommend going up to 25k polygons per character. Or is my GPU just too slow for these tasks?
I have tried both, but I didn’t make any custom skinning shaders for it.
I have tested with “hardware-animated-vertices #f” and “hardware-animated-vertices #t”, and I don’t see any difference in performance. How can I check whether it’s working? Does this option produce specific OpenGL calls that I could see in the NVIDIA Nsight Graphics debugger?
It requires a custom shader. If the ShaderAttrib isn’t flagged as supporting skinning, it will fall back to CPU animation. Perhaps in a future version of Panda3D we can automatically inject this into the shader.
I highly recommend this, because doing the calculations on the CPU is quite slow. It just requires a small change to the vertex shader.
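Something along these lines should work (a minimal sketch based on the shader inputs documented for Panda3D 1.10; the palette size of 100 is an arbitrary upper bound you may need to adjust):

```glsl
#version 150

uniform mat4 p3d_ModelViewProjectionMatrix;
// Bone matrix palette that Panda3D supplies when hardware skinning
// is enabled on the ShaderAttrib.
uniform mat4 p3d_TransformTable[100];

in vec4 p3d_Vertex;
in vec4 transform_weight;   // per-vertex blend weights
in uvec4 transform_index;   // per-vertex bone indices

void main() {
  // Blend up to four bone matrices per vertex.
  mat4 skin = p3d_TransformTable[transform_index.x] * transform_weight.x
            + p3d_TransformTable[transform_index.y] * transform_weight.y
            + p3d_TransformTable[transform_index.z] * transform_weight.z
            + p3d_TransformTable[transform_index.w] * transform_weight.w;
  gl_Position = p3d_ModelViewProjectionMatrix * (skin * p3d_Vertex);
}
```

On the Python side, the skinning flag is set on the ShaderAttrib, e.g. `attr = attr.set_flag(ShaderAttrib.F_hardware_skinning, True)` (assuming Panda3D 1.10+); without that flag the table won't be filled in and animation falls back to the CPU.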
Nice, it works. Now the rendering is about 10× faster.
But how do I verify that I’m not using CPU skinning? Is it possible to disable CPU skinning completely? I’m trying to add GPU skinning to RenderPipeline by modifying the existing ShaderAttrib of the NodePath. I have already added the GLSL code that applies the bone matrices.