Is there a (significant) overhead to calling setShader() many times with the same shader, rather than relying on the scene graph to apply the shader to the subnodes?
My shader generator traverses the scene graph, and it would be a bit easier if I did not have to worry about minimizing the setShader() calls. (Specifically, deciding which shader to apply to a node with no geometry, based on which shader is used most by its children and parent. This could be done, but it would be a bit of work.)
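For what it's worth, here is a minimal sketch of the optimization described above: assign each geometry-less node the shader used most often in its subtree, then emit setShader() only where a node's shader differs from the inherited one. The `Node` class and string-valued shaders are hypothetical stand-ins for real scene-graph nodes and Shader objects; this is not the actual generator.

```python
from collections import Counter

class Node:
    """Toy stand-in for a scene-graph node; shader is None for nodes with no geometry."""
    def __init__(self, shader=None, children=()):
        self.shader = shader
        self.children = list(children)

def assign_shaders(node):
    """Return a Counter of shader usage in this subtree, and set node.shader
    on geometry-less nodes to the most common shader among their descendants."""
    counts = Counter()
    if node.shader is not None:
        counts[node.shader] += 1
    for child in node.children:
        counts += assign_shaders(child)
    if node.shader is None and counts:
        node.shader = counts.most_common(1)[0][0]
    return counts

def emit_set_shader_calls(node, inherited=None, calls=None):
    """Collect the (node, shader) pairs where a setShader() call is needed,
    i.e. where a node's shader differs from the one it would inherit."""
    if calls is None:
        calls = []
    if node.shader is not None and node.shader != inherited:
        calls.append((node, node.shader))
        inherited = node.shader
    for child in node.children:
        emit_set_shader_calls(child, inherited, calls)
    return calls

# Example: a parent with two "phong" children and one "toon" child.
# The parent inherits "phong", so only two setShader() calls are emitted:
# one on the parent and one on the "toon" child.
root = Node(children=[Node("phong"), Node("phong"), Node("toon")])
assign_shaders(root)
calls = emit_set_shader_calls(root)
```

The question, then, is whether this bookkeeping buys anything over simply calling setShader() on every geometry node.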
Edit: I should clarify. I'm looking for rendering performance issues, not time spent initializing/setting up the shaders.
No, I don't think there's a significant performance difference there. I'm not 100% sure, though, because of the issue of processing the shader parameters; if you are using a lot of shader parameters, it might be worth trying an A/B comparison.
Ok, thanks. I was mainly worried that, in the worst case, it would lead to deselecting and reselecting the same shader on the GPU for every Geom, which could be an issue. If I do run into performance issues, I'll look into the cause, but for now it's nice to know that it will probably work pretty well the easy way.
Right, that’s not the way that Panda’s scene graph works. Panda automatically collects Geoms with the same state together and renders them all at the same time, regardless of where they appear in the scene graph or from which node they inherited their state.