Decoupling positional scenegraph from renderstate scenegraph

I’m working on a project that requires multiple materials (shaders) and render states on a per-camera basis, and trying to organize everything is a big mess. After playing with different scene graphs and camera states, I always end up frustrated.

On the way home today, I suddenly realized what has been bothering me: while objects generally have a spatial organization that is natural to capture in a spatial graph, their render attributes are better organized in a different graph that has nothing to do with the spatial one. Trying to stuff everything into one scene graph and manipulating it via camera tag states therefore gets really complicated.
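
For context, here is a minimal sketch of the camera tag-state mechanism I mean, assuming a glow_camera and a model NodePath already exist (the names are only for illustration). Every per-camera override has to be routed through string tags like this:

from panda3d.core import RenderState, ColorAttrib, LColor

# The glow camera consults the "glow" tag on each node it draws...
glow_camera.node().setTagStateKey("glow")

# ...and substitutes this RenderState whenever the tag value is "on".
glow_state = RenderState.make(ColorAttrib.makeFlat(LColor(1, 0, 0, 1)))
glow_camera.node().setTagState("on", glow_state)

# Every model that should look different in the glow pass must be tagged by hand.
model.setTag("glow", "on")

The state overrides live on the camera, the tags live on the models, and the two are tied together only by matching strings, which is exactly the bookkeeping that becomes hard to manage.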

What do you guys think? Is Panda 2 going to separate spatial and material inheritance, or is it going to use some other system?

For Panda 2.0, we are talking about ways to more easily maintain parallel state graphs, so that the same object can be rendered in a different state in each rendering pass while keeping the same position in the scene graph.

The current proposal involves replacing the way-clumsy camera tag system with a different NodePath accessor that makes this sort of thing more natural, something like:

model.setTransparency(1)
model.getPassState("glow").setTransparency(0)
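
To make the idea a bit more concrete, a hypothetical extension of that sketch might look like the following. None of this is a shipped API; getPassState, the pass names, and how cameras get bound to those names are all assumptions about the proposal.

model.setColor(1, 1, 1, 1)                         # state used by the normal pass
model.getPassState("glow").setColor(1, 0, 0, 1)    # overridden only in the "glow" pass
model.getPassState("shadow").setShaderOff()        # shaders dropped in the "shadow" pass

The model occupies one position in the scene graph, but each named pass sees its own state overrides without any string tags on the camera side.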

David

Sounds like Panda 2.0 is going to rock.

Is there a public document anywhere detailing these discussions/plans?

I don’t think anyone has assembled our musings into a document cohesive enough to post anywhere yet, though I guess it might deserve a blog post soon.

David