This is something I’ve been working on for the last three months or so. I got a proof-of-concept version up in a few days, but then things kept exploding in my face… anyway, it’s almost ready, or at least ready enough to be shown.
tl;dr version:
Git repo here: github.com/wezu/p3d_gpu_particle
You use the editor to make some effects and save them to a file, then import Wfx from the wfx module and do:
from wfx import Wfx

particle = Wfx()
particle.load("default.wfx")
particle.start()
You can link effects to moving emitters, set a global force or per-emitter forces, turn effects on and off (per emitter), or pause the whole system:
from panda3d.core import Vec3

smiley = loader.loadModel('smiley')
smiley.reparentTo(render)
# link an emitter to a moving node
particle.set_emitter_node(emitter_id=3, node=smiley)
# set a per-emitter force
particle.set_emitter_force(1, Vec3(0.5, 0, 0))
# set a global force (applied to all particles)
particle.set_global_force(Vec3(0, 0, -1))
Editor screenshot:
The Long version
The system uses three floating-point texture buffers that are ‘rotated’ in a way similar to triple buffering. The first buffer holds the particle positions from two frames back, the second holds the positions from the last frame, and the third is the one rendered to in the current frame; next frame the roles rotate, so the buffer just written becomes the ‘last frame’ one and the oldest buffer becomes the new render target. Keeping two past positions makes the velocity implicit (it’s just the difference between them), so I don’t need to store particle velocities at all, and the whole thing runs on OGL 3.x hardware that has no compute shaders or image store functions (at the cost of extra memory).
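This is essentially position Verlet integration. Here’s a minimal CPU sketch of one step, using numpy arrays to stand in for the three textures (the names and the bare force term are illustrative, not the actual shader code):

import numpy as np

def verlet_step(buffers, force, dt):
    # buffers = (positions two frames back, positions last frame, render target)
    pos_t2, pos_t1, target = buffers
    # the implicit velocity is (pos_t1 - pos_t2) / dt, so no velocity buffer is needed
    target[:] = 2.0 * pos_t1 - pos_t2 + force * (dt * dt)
    # rotate roles: last frame becomes 'two back', the freshly written buffer
    # becomes 'last frame', and the oldest buffer is reused as the next target
    return (pos_t1, target, pos_t2)

n = 1024
buffers = tuple(np.zeros((n, 3)) for _ in range(3))
for _ in range(60):
    buffers = verlet_step(buffers, np.array([0.0, 0.0, -1.0]), 1.0 / 60.0)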
The vertex shader reads the position texture to place each particle; each vertex is rendered as a point with a texture (and the texture can be animated). You can control the size, weight, texture, and lifetime of each particle, but why make things simple? Size and weight are not just start and end values: a sin function controls how these values change over the particle’s lifetime.
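For example, a size curve could look something like this (a hypothetical half-sine ramp; the shader’s actual formula may differ):

import math

def size_over_life(start, end, t):
    # t runs from 0.0 (birth) to 1.0 (death)
    return start + (end - start) * math.sin(t * math.pi / 2.0)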
Since there is no programmable blend stage, I needed to make two geoms: one with additive blending and one with alpha blending (well, ‘dual’ actually). You need additive blending for things like fire, alpha blending for smoke, and from time to time the binary part of dual blending is also useful, for things like snow.
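In Panda3D terms, the blending setup looks roughly like this (the two NodePaths are placeholders for the particle geoms):

from panda3d.core import ColorBlendAttrib, TransparencyAttrib, NodePath

additive = NodePath('additive_particles')   # fire, sparks
blended = NodePath('blended_particles')     # smoke, snow

# additive blending: source color is summed into the framebuffer
additive.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd,
                                         ColorBlendAttrib.OOne,
                                         ColorBlendAttrib.OOne))
additive.setDepthWrite(False)

# 'dual' transparency: an alpha-blended pass plus a binary (alpha-tested) pass
blended.setTransparency(TransparencyAttrib.MDual)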
You can render over a million particles with this system, but if all of them are on screen and they are big (around 100 pixels each), you’d better have a GPU with a lot of fill rate, because that’s the bottleneck here.
Some things that are missing but planned:

- Collisions with the world
A cheap way to do collisions would be to render a heightmap with normals from above and let the particles collide with that, but that would only work in some scenarios (outdoor scenes). A more advanced solution is to render a 3D texture of the world (voxelize it), but without compute shaders I don’t see a way to do that in realtime; it could still work for static scenes. I’ve also read on the interweb that one could use the depth buffer of the main camera to do collisions with particles, but I don’t think that would work for my setup (I don’t know the screen-space position of a particle when I’m doing physics, I only have its world-space position).

- Vector fields
This one is simple: I just need to find a way to generate a vector field, write it to a 3D texture, and get that texture into a sampler3D (a rough sketch of the texture part follows this list).

- Better texture support in the editor
I wanted to make the texture handling simple: just say what texture you want for a batch of particles and it does the work for you. It’s not working as expected, so I’ll need to write something different with a bit more manual input from the user.

- Helper functions in the editor
I need to write some functions to help generate particles in specific shapes: rings, spheres, planes, custom mesh shapes, and some others.

- Attract, repulse, vortex forces
I’m doing the physics in a fragment shader and, to be honest, I have no idea how to code these kinds of forces (the second sketch after this list shows one possible formulation).
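For the vector fields, here’s a rough sketch of generating a simple swirl field and packing it into a Panda3D 3D float texture (the field itself, the texture name, and the shader input name are made up, and ram-image component ordering details are glossed over):

import numpy as np
from panda3d.core import Texture

size = 32
field = np.zeros((size, size, size, 3), dtype=np.float32)
zs, ys, xs = np.meshgrid(*[np.linspace(-1.0, 1.0, size)] * 3, indexing='ij')
field[..., 0] = -ys   # swirl around the Z axis
field[..., 1] = xs

tex = Texture('vector_field')
tex.setup3dTexture(size, size, size, Texture.T_float, Texture.F_rgb32)
tex.setRamImage(field.tobytes())
# then bind it to the physics shader as a sampler3D, e.g.:
# physics_np.setShaderInput('vector_field', tex)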
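As for the attract/repulse/vortex forces, here’s one possible formulation, written in Python/numpy for readability; in the real system it would be a few lines in the physics fragment shader, with the centers and strengths passed in as uniforms (all names and falloffs here are illustrative):

import numpy as np

def attract(pos, center, strength):
    # pull toward 'center' with an inverse-square falloff (one common choice)
    d = center - pos
    r = np.linalg.norm(d) + 1e-6   # avoid a division by zero at the center
    return (d / r) * (strength / (r * r))

def repulse(pos, center, strength):
    # same as attract, just pointing away
    return -attract(pos, center, strength)

def vortex(pos, center, axis, strength):
    # swirl around the line through 'center' along 'axis': the force is
    # tangential, perpendicular to both the axis and the particle's offset
    return strength * np.cross(axis, pos - center)

The sum of these (plus the global force) would be the force term fed into the position update.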
There are probably things that can be improved or added; if you have any suggestions or questions, feel free to ask here.