I’d like to add a “render to vertex buffer” feature:

“- Render to vertex array: The application can use a fragment
program to render some image into one of its buffers, then read
this image out into a buffer object via glReadPixels. Then, it can
use this buffer object as a source of vertex data.”
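The data flow in that quote can be mimicked in plain Python - no real GL calls here, only the reinterpretation step that makes the technique work. The function names are invented for illustration; in real GL this is the ARB_pixel_buffer_object technique (glReadPixels into a buffer bound as GL_PIXEL_PACK_BUFFER, then the same buffer re-bound as GL_ARRAY_BUFFER):

```python
import struct

# Pure-Python stand-in for the GL round trip described in the quote: a
# fragment program writes one vertex position per pixel, glReadPixels packs
# the framebuffer into a buffer object, and that same buffer is then bound
# as a source of vertex data.  Here the "framebuffer" is a list of RGB pixels.

def read_pixels(framebuffer):
    """The glReadPixels step: pack RGB float pixels into raw bytes."""
    return b"".join(struct.pack("<3f", *px) for px in framebuffer)

def as_vertex_array(buffer_object):
    """The re-bind step: reinterpret those same bytes as 3-float positions."""
    stride = 12  # 3 floats x 4 bytes
    count = len(buffer_object) // stride
    return [struct.unpack_from("<3f", buffer_object, i * stride)
            for i in range(count)]

# A 2x2 "image" whose four pixels encode four vertex positions.
fb = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
pbo = read_pixels(fb)         # readback into a "buffer object"
verts = as_vertex_array(pbo)  # in the real technique the data never passes
                              # through the CPU; this only shows the data flow
```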

Where would be the best place in Panda source to add this feature?


Great question! I need to research the OpenGL extensions to do this. I’ll be back to you in half an hour.

The more I think about it, the trickier this seems. The big problem is that Panda3D tries very hard to hide vertex buffer implementation details from the game programmer.

For instance, let’s say you load a simple 3D model consisting of only one mesh. Each vertex has a position, normal, color, and texcoord. Furthermore, let’s assume this is an animated model, so position and normal are to be updated every frame, but color and texcoord are not animated - those are static.

Panda3D has a lot of options:

  1. Put all the data into one big vertex buffer in the order position, normal, color, texcoord.

  2. Put the data in the vertex buffer in a different order: texcoord, position, color, normal.

  3. Create four separate vertex buffers - one for color, one for texcoord, one for normal, one for position.

  4. Group things according to static/dynamic: position and normal in one vertex buffer, texcoord and color in the other.
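To make those alternatives concrete, here is a toy sketch in plain Python; `COLUMNS`, `interleave`, and `separate` are invented names, not Panda3D API, and sizes assume float32 components (position 3, normal 3, color 4, texcoord 2):

```python
import struct

# Illustrative only: the same four columns laid out the different ways
# described above.
COLUMNS = {"position": 3, "normal": 3, "color": 4, "texcoord": 2}

def interleave(vertices, order):
    """One big vertex buffer: each vertex's columns packed together, in `order`."""
    out = bytearray()
    for v in vertices:
        for name in order:
            out += struct.pack("<%df" % COLUMNS[name], *v[name])
    return bytes(out)

def separate(vertices, groups):
    """One buffer per group of columns, e.g. dynamic vs. static."""
    return {group: interleave(vertices, group) for group in groups}

v = {"position": (0.0, 0.0, 0.0), "normal": (0.0, 0.0, 1.0),
     "color": (1.0, 1.0, 1.0, 1.0), "texcoord": (0.0, 0.0)}

one_big = interleave([v], ("position", "normal", "color", "texcoord"))
grouped = separate([v], (("position", "normal"), ("color", "texcoord")))
```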

Panda3D also has the option of optimizing out vertex columns entirely. For example, let’s say that Panda3D notices that all the vertex colors are the same. It can then remove the vertex colors and replace them with a flat per-mesh color.

Panda3D may also add vertex columns. It may be that the data was originally intended to be software-animated, but the card supports hardware skinning. In that case, Panda3D may opt to add bone weights and bone indices to the vertex data.
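Both kinds of column rewriting can be sketched on a toy table of per-vertex columns. `optimize_columns` is an invented name, though `transform_weight` / `transform_index` echo the column names Panda3D itself uses for hardware skinning:

```python
# Toy sketch of the two rewrites above: removing a constant color column
# and adding skinning columns.  Not Panda3D API.

def optimize_columns(columns, hardware_skinning=False):
    columns = dict(columns)
    flat = {}
    # Optimize out a color column whose values are all identical,
    # replacing it with a single flat per-mesh color.
    color = columns.get("color")
    if color and len(set(color)) == 1:
        flat["color"] = color[0]
        del columns["color"]
    # Add bone weight/index columns when the card supports hardware skinning.
    if hardware_skinning:
        n = len(columns["position"])
        columns["transform_weight"] = [(1.0, 0.0, 0.0, 0.0)] * n
        columns["transform_index"] = [(0, 0, 0, 0)] * n
    return columns, flat

cols = {"position": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        "color": [(1.0, 1.0, 1.0, 1.0), (1.0, 1.0, 1.0, 1.0)]}
new_cols, flat_attribs = optimize_columns(cols, hardware_skinning=True)
```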

Panda3D will make a heuristic decision about which of these options is best. The decision depends a lot on whether you’re using OpenGL or DirectX, and on the card’s capabilities.

When Panda3D loads a model, it stores the vertex data in main memory in a driver-independent format. Later, when it goes to render that data, it will copy the data into one or more vertex buffers.

So the problem is that you simply don’t know how Panda3D is going to store vertex data. You need to change that: you need a mechanism that gives you explicit control over vertex buffer creation.

You’re going to have to add a new data structure to Panda3D. But I’m not good enough at Panda3D core coding to know exactly what you need to add. Basically, it’s going to have to be something that says “This data structure would normally contain vertex data. But this is just a placeholder, there is no data. We do have a format, though: position, normal, texcoord, and it needs to be packed into one big array in exactly that format.” Later, during vertex buffer creation time, the system will see the placeholder and will create a VBO with the desired characteristics - or, if video card capabilities do not allow it, it will fail.
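A minimal sketch of what that placeholder might look like - every name here is invented for illustration, nothing is real Panda3D API: an array object that carries only a format and vertex count, no data, which buffer creation either honors or rejects.

```python
# Hypothetical placeholder structure: "would normally contain vertex data,
# but this is just a placeholder - there is no data, only a format."

class PlaceholderVertexArray:
    def __init__(self, fmt, num_vertices, bytes_per_vertex):
        self.fmt = fmt                    # e.g. ("position", "normal", "texcoord")
        self.num_vertices = num_vertices
        self.bytes_per_vertex = bytes_per_vertex
        self.data = None                  # deliberately no CPU-side data

def create_vbo(array, supports_render_to_vertex_buffer):
    """At vertex-buffer creation time: honor the placeholder, or fail."""
    if array.data is None and not supports_render_to_vertex_buffer:
        raise RuntimeError("card cannot synthesize vertex data")
    size = array.num_vertices * array.bytes_per_vertex
    return {"format": array.fmt, "size": size}  # stands in for a real VBO

placeholder = PlaceholderVertexArray(("position", "normal", "texcoord"), 256, 32)
vbo = create_vbo(placeholder, supports_render_to_vertex_buffer=True)
```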

Once you have that new data structure, adding the copy-to-vbo is not hard at all. But adding the data structure is going to involve some pretty serious digging into Panda3D’s core data structures.

David should comment on this.

Actually, maybe I’m overthinking this. Perhaps all that’s needed is:

  1. An annotation that you can put into a VertexArrayData that says, “The vertex buffers must be created using the following format. If you can’t do that, then die trying.”

  2. A bit that you can set in a VertexArrayData that says “There is no data to copy into the vertex buffer. All data will be synthesized by the video card.”
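Those two additions could be sketched like this; the class, attribute, and function names are invented stand-ins, not the real VertexArrayData interface:

```python
# The annotation (1) and the bit (2) from the list above, on a toy class.

class VertexArrayData:
    def __init__(self, data=b""):
        self.data = data
        self.required_format = None    # (1) format the buffers must use, or None
        self.gpu_synthesized = False   # (2) no data to copy; the card fills it in

def make_buffer(array, can_force_format):
    if array.required_format is not None and not can_force_format:
        raise RuntimeError("cannot honor required format")  # "die trying"
    contents = None if array.gpu_synthesized else array.data
    return {"format": array.required_format, "contents": contents}

arr = VertexArrayData()
arr.required_format = ("position", "normal", "texcoord")
arr.gpu_synthesized = True
buf = make_buffer(arr, can_force_format=True)
```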

I’m trying to implement this feature. I’ll let you know when I manage to make it work.