Debugging "AssertionError: Shader input * is not presen

I get errors like “AssertionError: Shader input SomeInputName is not present.” pretty often. What's a good way to debug these?

All this error tells me is that the set of nodes using shaders that need that input is not a subset of the nodes that have it, on some frame. Not very useful, when I don’t know which nodes have it, or which nodes need it, or even what nodes I have.

I have a massive collection of auto-generated shaders spanning hundreds of geoms. I have no idea which node or geom is missing the input. I'd much rather get a warning and have that shader fail, so I could see which object it was on. Even better would be a warning that prints the node path where the error occurred.

Something that adds a bit of complexity: shaders can be applied to geoms directly (and some of mine are), rather than to PandaNodes. Knowing the path to the GeomNode would be enough for me.

In my case, I set my shader input on my map (which everything is parented to, “everything” in this case being a skybox and one model), and I get the error. I tried setting the shader input on render instead, and I still get the error. At this point I have no idea where to look. This is just a viewer app with two models (the skybox and the viewed model), and I don't use render2d, so if the node is not under render, I don't know where it might be. I even tried disabling all my extra buffers and their associated cards. Unfortunately my setup for generating the shaders is rather involved, with a lot of code, so it's hard to strip it down any further.

This is Panda3D 1.8.0.

Full error:

Not sure if it'll be of any use, but have you tried raising the verbosity of the notify levels in your prc file?
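Something like this, say (which categories are actually relevant here is a guess on my part):

```
notify-level-gobj debug
notify-level-glgsg debug
notify-level-shader debug
```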

Which notify level? I tried “notify-level-gobj spam” (which is not listed in panda3d.org/manual/index.php … _Variables ) and several of the other ones that are listed there. I got nothing useful.

Unfortunately, that error is raised in ShaderAttrib, where it’s not possible to get the name of the node to which the attribute is applied. That wouldn’t be conceptually right, anyway - shader inputs aren’t bound to particular nodes, but they can be propagated through the scene graph in complex ways, like most other render attributes.

What I can do is change it into a non-fatal error and have it send a default value depending on the type, e.g. a float would translate to 0.0. But that would make it more difficult for developers to keep these kinds of errors out of the final version of the game - such issues can be hard to track down if people's shaders start behaving in undefined ways and they don't know where to start looking. Having an assertion error ensures that uniform shader inputs must strictly be set.

The proper way to solve this would be to find wherever you’re applying the shaders and also add a setShaderInput line for every variable in the shader, as a default value that can be overridden by shader inputs set on other nodes.
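For example, something like this (with render and model standing in for whatever nodes you're actually using, and the input name taken from your error message):

```python
from panda3d.core import Vec4

# A harmless default set high up in the graph; it can still be
# overridden lower down, since shader inputs compose through the
# scene graph like other render attributes.
render.setShaderInput("SomeInputName", Vec4(0, 0, 0, 0))

# A more specific value set on a child node wins for that subtree.
model.setShaderInput("SomeInputName", Vec4(1, 0.5, 0.25, 1))
```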

How about a config variable to make it non-fatal?

Since the C++ part of the stack trace is missing, I can't tell, but perhaps an exception could be raised in this case and caught lower in the call stack, where it's processing a node? (Of course, that would be a bit of a mess.)

For this case, I should be able to cover it by adding an “assert available” node to my shader generator's configuration graph, which will raise an exception when processing a geom that needs this shader input but is missing it from its render state. I'll have to modify my loader though, since currently I generate my shaders before attaching the model to the scene graph that provides that input. I suppose that would be a better design to move to anyway.

Normally this wouldn't be a big deal, but for some reason something apparently not under render is causing it, and I have no idea what that could possibly be (everything is under render, I think). Is there a way to find all active scene graph roots?
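The closest thing I can come up with myself is walking the display regions of every open window and buffer, something like this untested sketch:

```python
# Untested sketch: print the top node of every camera attached to a
# display region; between them these should cover every root that is
# actually being rendered.
engine = base.graphicsEngine
for w in range(engine.getNumWindows()):
    win = engine.getWindow(w)
    for i in range(win.getNumDisplayRegions()):
        cam = win.getDisplayRegion(i).getCamera()
        if not cam.isEmpty():
            print(win.getName(), i, cam.getTop())
```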

Thanks!

Edit: while on the subject of shader inputs, I've had a lot of trouble with texture shader inputs not getting matched up with the correct textures. Even named ones sometimes fail and pick up other random textures (which sometimes even change to other random textures over time, producing crazy flickering), and I don't think I've ever seen the assert fail when they are missing (just more random textures). Is this a Panda bug, a driver bug, or something else? I can run some experiments to get more data if needed.
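For reference, by “named” texture inputs I mean binding them explicitly to a sampler uniform, roughly like this (the texture path and uniform name here are just stand-ins):

```python
# Stand-in names: bind a texture to a named sampler uniform directly,
# rather than relying on texture stage order (tex_0, tex_1, ...).
tex = loader.loadTexture("maps/noise.png")
model.setShaderInput("noise_map", tex)
```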

You can set “assert-abort 1” in your Config.prc to make Panda abort when an assertion is triggered, allowing you to break into the call stack using your favourite debugger.
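For example, with this in Config.prc:

```
assert-abort 1
```

Running the app under gdb (say “gdb --args python main.py”, with main.py standing in for your actual entry script) will then stop at the abort, and “bt” shows the C++ call stack at the point the assertion fired.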