I hit a snag developing my Panda3D game today. I’m using 1.7.0 with the auto shaders, and my frame rate dropped to less than 2 FPS (from a normal high 60s).
I installed NVIDIA’s PerfSDK and found these errors with GLExpert:
OGLE: Category: 0x00002000, MessageID: 0x008D0002
The current program/shader related state may lead to non-optimal
performance: Fragment Program 18 is going to be recompiled
because the shader key based on GL state mismatches.
GL_CLAMP_VERTEX_COLOR_ARB is clamped.
GL_CLAMP_FRAGMENT_COLOR_ARB is clamped.
There are 8 constants bound by this program.
Constant 0 is bound to a mix of special and general values: general general general 1.0.
Constant 1 is bound to special values: 1.0 1.0 1.0 1.0.
Constant 2 is bound to a mix of special and general values: general general general 0.0.
Constant 3 is bound to a mix of special and general values: general general general 0.0.
Constant 4 is bound to a mix of special and general values: general general general 1.0.
Constant 5 is bound to special values: 1.0 1.0 1.0 1.0.
Constant 6 is bound to a mix of special and general values: general 1.0 0.5 general.
Constant 7 is bound to a mix of special and general values: general 1.0 0.0 0.0.
Texture 0 uses an 8 bit fixed point format.
Texture 0 is bound to texture target GL_TEXTURE_2D.
Texture 1 uses an 8 bit fixed point format.
Texture 1 is bound to texture target GL_TEXTURE_2D.
Program depends on alpha state.
There are a LOT of these. Also, when I turn on the dump-generated-shaders PRC flag, I am seeing about 20–30 “dumping shaders” messages per frame. Obviously, recompiling all of my shaders every frame is absolutely not the way to achieve solid performance.
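For reference, this is how I have the flag set. (A Config.prc fragment; the #t boolean syntax is standard Panda3D PRC notation.)

```
# In Config.prc -- makes Panda3D print each shader it generates,
# which is how I spotted the 20-30 regenerations per frame:
dump-generated-shaders #t
```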
Any ideas? The log states that the program depends on alpha state, so I suppose some render state set by Panda3D is throwing OpenGL for a loop.
I’ve been Googling for the exact mechanism by which OpenGL caches ARB shaders, but I’m not finding anything. I’d appreciate some pointers on where I might look next.
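I haven’t found the exact driver mechanism documented either, but the GLExpert message suggests the driver caches each compiled shader under a key built from a snapshot of GL state, and any state change that alters the key forces a recompile. A toy model of that idea (pure Python; all names here are mine, not actual driver or Panda3D internals):

```python
# Toy model of a state-keyed shader cache. A shader is "compiled" once
# per unique (source, state) key; churn in the state key means churn in
# compiles, which is what the GLExpert warnings describe.

compile_count = 0
cache = {}  # (source, state key) -> "compiled" shader

def get_shader(source, gl_state):
    """Return a 'compiled' shader for this source + state, compiling on a miss."""
    global compile_count
    key = (source, tuple(sorted(gl_state.items())))
    if key not in cache:
        compile_count += 1          # state-key mismatch -> recompile
        cache[key] = ("binary", key)
    return cache[key]

# Stable GL state: compiled once, then reused for every frame.
for frame in range(100):
    get_shader("fragment_program_18", {"depends_on_alpha": True})
assert compile_count == 1

# State that differs every frame: one recompile per frame.
for frame in range(100):
    get_shader("fragment_program_18",
               {"depends_on_alpha": True, "alpha_ref": frame / 100})
print(compile_count)  # 101
```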
Meanwhile, I am going to keep trying to find what exact render state is causing this issue.
Update: Doh. I feel like a moron! I mistyped and was doing a “render.setShaderInput” per frame, which of course invalidated ALL of the auto-shaders attached to the render node every time a frame rendered. As a warning to anyone who finds this thread in a search: be careful where and when you set shader inputs when using the Autoshader. Now that I knew what to look for, I found this thread, where it’s discussed that mixing global shader inputs and the Autoshader is a bad thing.
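To see the effect concretely, here is a toy stand-in for what my typo was doing (plain Python with a hypothetical Node class, not the Panda3D API):

```python
# Illustrative stand-in (not a Panda3D class) showing why a per-frame
# setShaderInput on a parent node invalidates every auto-shader under it.

class Node:
    def __init__(self):
        self.inputs = {}
        self.regen_count = 0

    def set_shader_input(self, name, value):
        if self.inputs.get(name) != value:
            self.inputs[name] = value
            self.regen_count += 1   # state changed -> shaders regenerate

# The bug: a fresh input value assigned every frame.
buggy = Node()
for frame in range(60):
    buggy.set_shader_input("tint", (1, 1, 1, frame / 60))
print(buggy.regen_count)   # 60 regenerations in 60 frames

# The fix: set the input once, outside the frame loop.
fixed = Node()
fixed.set_shader_input("tint", (1, 1, 1, 1))
for frame in range(60):
    pass                    # nothing touches shader state per frame
print(fixed.regen_count)   # 1
```

The real fix in my code was the same shape: move the setShaderInput call out of the per-frame task so the render node’s state stays stable.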
Another thing I learned in that thread: why shaders start automatically regenerating when I use my new TexGen code with the Autoshader. It seems Pro-rsoft had already tried this and ended up with a huge performance hit, because the Autoshader mechanism kept regenerating shaders due to incorrectly-identified state changes.
Well, this is all making sense now.