Procedural Geom through compute shader
I am currently generating procedural meshes with Panda3D. I started with GeomVertexData and its high-level interface, and am now writing through a raw pointer to the GeomVertexArrayData’s underlying memory for speed.

I would like to do the same with a compute shader, generating the mesh directly on the GPU to avoid the CPU-to-GPU transfer overhead, but I can’t find a way to do that with Panda3D’s Geoms.
Ideally, I would love to generate all the data in the compute shader (vertex positions, normals, and triangle indices), and still benefit from the Geom’s C++ interface (by subclassing or whatever).

Does anyone have any suggestions? I can’t find a way to get an OpenGL buffer handle from a GeomVertexData, or to construct a GeomVertexData from one, so I’m kind of lost for now.

Hi, welcome!

To do this you need to use vertex pulling, a technique that lets the vertex shader render geometry sourced from an arbitrary buffer (such as a buffer texture or an SSBO, either of which can be written to by a compute shader).

To do this, you create a Geom with an empty GeomVertexFormat, and in your vertex shader, you pull the data from the desired SSBO or buffer texture. Here is an example that shows vertex pulling with a buffer texture (though one generated on the CPU):
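While that linked example isn’t reproduced here, a minimal sketch of the vertex-shader side might look like this. The `positions` input name is hypothetical; `p3d_ModelViewProjectionMatrix` is Panda3D’s standard GLSL uniform name:

```python
# Sketch (not the linked sample): a GLSL vertex shader doing vertex pulling
# from a buffer texture.  The Geom itself carries no vertex columns; the
# shader fetches each vertex's position by its index instead.
VERTEX_PULLING_GLSL = """
#version 150

uniform mat4 p3d_ModelViewProjectionMatrix;
uniform samplerBuffer positions;  // filled by the compute shader

void main() {
    // No vertex attributes: look up this vertex's position by gl_VertexID.
    vec3 pos = texelFetch(positions, gl_VertexID).xyz;
    gl_Position = p3d_ModelViewProjectionMatrix * vec4(pos, 1.0);
}
"""
```

This string would be passed to `Shader.make()` along with a fragment shader, and the buffer texture bound as a regular shader input.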

The only limitation is that you need to declare the number of triangles on the CPU. However, you can oversize your triangle buffer and fill the rest with zero-area triangles.
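A sketch of that oversizing idea on the CPU side, assuming a flat “3 corners × xyz” layout per triangle (the layout itself is an assumption, not a required format):

```python
# Allocate a fixed-capacity, all-zero triangle buffer.  Every entry starts
# out as a zero-area (degenerate) triangle, so the slots the compute shader
# never writes simply render as nothing.
import array

FLOATS_PER_TRIANGLE = 9  # 3 vertices * (x, y, z)

def make_triangle_buffer(max_triangles):
    # bytes(n) is n zero bytes, so every float starts at 0.0.
    return array.array("f", bytes(4 * max_triangles * FLOATS_PER_TRIANGLE))

buf = make_triangle_buffer(1024)
print(len(buf))  # 9216 floats, all 0.0 until the compute shader fills them
```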

You will also need to generate a bounding volume on the CPU (or simply use an OmniBoundingVolume to disable CPU-side culling).

You can also do this with SSBOs via ShaderBuffer, which is a type of buffer that can be bound to a shader and have an arbitrary format. This might be a little more convenient because these buffers can have an arbitrary structure.
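To illustrate that “arbitrary structure” point, here is a sketch of what the GLSL side of an SSBO might look like. The struct fields and the `VertexBuffer` block name are hypothetical; `vec4` is used instead of `vec3` to keep the std430 layout unsurprising:

```python
# Sketch: an SSBO declaration whose internal structure is entirely up to the
# application.  A compute shader could write these records and a vertex
# shader could read them back by index.
SSBO_GLSL = """
#version 430

struct MeshVertex {
    vec4 position;  // w unused, kept for alignment
    vec4 normal;    // w unused
};

layout(std430, binding = 0) buffer VertexBuffer {
    MeshVertex vertices[];
};
"""
```

If I recall Panda3D’s binding convention correctly, the ShaderBuffer would then be attached via a shader input whose name matches the buffer block name.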

I hope this nudges you in the right direction. Let me know if you need additional assistance.

I just created some sample code showing how to do this:

Whoa, first of all, thank you very much! I did not hope for such a complete and nice answer :slightly_smiling_face:

ShaderBuffer seems to be the way to go; I didn’t know about it.

To begin with, I’m quite new to the OpenGL realm, so please forgive me if I’m not clear in my questions and answers. If I understand correctly, you’re rendering with your own vertex and fragment shaders in those examples?

What I’d like to do is to generate the vertices, their normals, and the triangle indices on demand in a compute shader, and then let the Panda3D machinery handle the rendering, just as I would when simply using a GeomVertexWriter: can you think of a way to do that? When I generate this data procedurally on the CPU, I don’t have to bother with fragment or vertex shaders; I guessed there should be a way to do just that on the GPU, but again, I feel I don’t understand much of OpenGL yet, so maybe I’m just typing nonsense.

But I think your examples already showed me how to generate this data with a compute shader, so thank you again for that!

Yes. In the “embers” sample the initial data in the buffer is populated from the CPU, but I could just as well have left it all zeroes and filled it in entirely on the GPU.

In that sample, I stored an array-of-structs in the buffer, where each element represents a single triangle. Instead, you could make it work more like a traditional vertex buffer, with the three vertices stored separately, and even add a separate index array if you wanted to. With vertex pulling, it’s completely up to you how the vertex data is stored and what data is stored per vertex, since you are in complete control of how the buffer is both filled and accessed.
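An illustrative CPU-side packing for that “traditional vertex buffer” variant, using fixed-stride records the vertex shader could index into. The field choice (vec4 position plus vec4 normal) is an assumption, not the sample’s actual layout:

```python
# Pack one vertex record as position.xyzw + normal.xyzw, little-endian
# floats, matching a vec4/vec4 struct on the GLSL side.
import struct

VERTEX_FORMAT = "<4f4f"                         # position + normal
VERTEX_STRIDE = struct.calcsize(VERTEX_FORMAT)  # bytes per vertex record

def pack_vertex(px, py, pz, nx, ny, nz):
    # Pad both vectors to 4 components to keep the GPU-side layout simple.
    return struct.pack(VERTEX_FORMAT, px, py, pz, 1.0, nx, ny, nz, 0.0)

print(VERTEX_STRIDE)  # 32
```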

The compute shader in my example is what is filling in the rows of the buffer with the data for the embers, and the vertex shader is using gl_VertexID to index into the array in that buffer to pull in the vertex data for each ember.
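The indexing the vertex shader performs with `gl_VertexID` can be sketched in plain Python: with an array of per-triangle structs, three consecutive vertex IDs map to the three corners of one record.

```python
# Emulate the vertex shader's mapping from gl_VertexID to a buffer slot.
def locate(vertex_id):
    triangle = vertex_id // 3  # which per-triangle struct in the buffer
    corner = vertex_id % 3     # which of its three corners
    return triangle, corner

print(locate(7))  # (2, 1): vertex 7 is corner 1 of triangle 2
```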

The only piece of information that you need to set on the CPU side is the number of vertices in the Geom (i.e. how many times to run the vertex shader). As I mentioned earlier, you can just create an oversized buffer and use zero-area triangles for the slots you don’t use.

Yes, I think I got the ember example right. At least I feel like I understood what you just said, the way you said it.

But what I want is simply to generate the vertex and normal data and an index array in a compute shader, then let Panda3D do the rendering, without writing my own vertex shader. Is that possible?

Because I’m doing this to generate actual geometry (some kind of asteroid for the moment) meant to be lit and everything, and I don’t want to re-implement all of those effects myself if I can avoid it.

When I use a Geom the easy way, shaders are generated by Panda3D to render it, right? One way to get the result I’m looking for would be to generate those same shaders and pass them the ShaderBuffer as input in the right way, I think.

I understand.

It sounds like we need a way to bind a GeomVertexArrayData to a shader as an SSBO (or perhaps to tell Panda to source a GeomVertexArrayData from a ShaderBuffer). I’m happy to look into implementing that. Could you file this as a feature request on GitHub?

Yes, more like the second phrasing: I want to bind a ShaderBuffer to the shaders automatically generated for the usual rendering, taking responsibility for generating the data in a suitable format. (Maybe some shader syntactic sugar could be added, with more generated shader functions and structs?)

I’m glad we understand each other. I’ll happily file a feature request, and I’m willing to help implement it, if that could be of use.

I spent some time reading code while waiting for your answer: my guess is that a new class, similar to Geom, could be implemented to encapsulate this. Or maybe a simple adaptation of GeomVertexArrayData, using underlying ShaderBuffers, would be more pertinent? Anyway, my target usage is generating data to feed to shaders from a ShaderGenerator, without it ever having to live on the CPU at all.

I posted the feature request: let me know if it isn’t clear, or if I can be of any help. I can’t wait to see the result!