I am using Panda3D for AI projects and need to generate some semantic maps. I searched the forum and didn’t find anything related, so I would like to ask for some guidance.
A straightforward idea is to add a new column to the vertex data of each model, indicating which color each vertex should be rendered in, so that I can retrieve it in the shader and replace the base color with the target semantic color. I am not sure whether this is the best approach. If it is acceptable, how can I add new information to the vertex data?
Yes, using a material is a good way. But I want to apply this effect only to a new buffer/camera and keep the rendering result of the main window unchanged, with the original textures/shading. So I guess your second suggestion would be the only way to achieve this.
Yes, the second way would be great, as I want to keep the content of the main window unchanged and render the whole scene to a new buffer in which objects have their semantic colors instead.
I believe applying a shader input to each model's NodePath is better than modifying the vertex data. Thanks!
And then you tell the camera to assign a different state to the object depending on this tag:
from panda3d.core import RenderState, LightAttrib, ColorAttrib

# Disable lighting on this camera ('camera' here is the Camera node;
# if you have a camera NodePath, use camera.node() instead)
camera.setInitialState(RenderState.make(LightAttrib.makeOff()))
# Give a unique state to each object depending on its "type" tag
camera.setTagStateKey("type")
camera.setTagState("vehicle", RenderState.make(ColorAttrib.makeFlat((0, 0, 1, 1))))
camera.setTagState("pedestrian", RenderState.make(ColorAttrib.makeFlat((1, 0, 0, 1))))
You might need to set an override value when creating the render states. You can also build the state in a more programmer-friendly way on a dummy node, from which you extract the state using np.state.