Point Clouds and Particles

Hi all,

At the outset, I should point out that while I have been working with Python for quite a long time (and with programming in general for much longer), I have only been experimenting with Panda3D for a few weeks.

I have two questions, the second of which arises from the first:

I can already load models with triangles quite efficiently, and manipulate them, illuminate them, cast shadows, and so on. But I have completely failed when it comes to handling point clouds in Panda3D. I should note that I have already searched a good deal of the Internet, including this forum of course, but I have not found any obvious answer to the question of how to load and visualize point clouds. I tried to load point-cloud files with loadModel(); the operation completes without errors, but the resulting node seems to be "empty" (for example, getTightBounds() returns None). Of course I know that I could work out the format of the file storing the point cloud (PLY or glTF, for example), parse it myself, and then pass the vertices to a GeomPrimitive with something like addVertex(). But isn't there a simpler loader? In Open3D, for example, it is literally three lines of code:

import open3d
cloud = open3d.io.read_point_cloud("pointcloud.ply")  # Read the point cloud
open3d.visualization.draw_geometries([cloud])  # Visualize the point cloud
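By contrast, my understanding is that the manual route in Panda3D would look roughly like the following (only a sketch; make_point_cloud is just a name I made up, and I assume the points have already been parsed into a list of (x, y, z) tuples):

from panda3d.core import (Geom, GeomNode, GeomPoints, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

def make_point_cloud(points):
    # Hypothetical helper: build a GeomNode holding one point per (x, y, z) tuple.
    vdata = GeomVertexData("cloud", GeomVertexFormat.get_v3(), Geom.UH_static)
    vdata.set_num_rows(len(points))
    writer = GeomVertexWriter(vdata, "vertex")
    for x, y, z in points:
        writer.add_data3(x, y, z)
    prim = GeomPoints(Geom.UH_static)
    prim.add_next_vertices(len(points))
    geom = Geom(vdata)
    geom.add_primitive(prim)
    node = GeomNode("point_cloud")
    node.add_geom(geom)
    return node

...and then something like base.render.attach_new_node(make_point_cloud(points)) plus set_render_mode_thickness() to actually see the points. Workable, but hardly the three-liner above.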

Suppose I get past the previous problem somehow; another question then arises. Is there any way for the loaded point cloud to become the starting point for particles? Something like this example, but using my point cloud instead of a Particle Factory as the source of what gets emitted and rendered. I know there is something like GeomParticleRenderer, but I suspect that is not what I need here?

In case it helps, I will add that I generate the point clouds with the LiDAR on an iPhone 12 Pro (a few examples). The largest of the point clouds shown there has 1.2 million points (and Open3D animates it smoothly on my computer), but I would like to be able to visualize larger clouds of several million points (Open3D can handle a cloud of 10 million points, although not entirely smoothly). I am also interested in handling the point colors in the cloud.

Regards
Mikołaj (Miklesz)

Hi, and welcome to the Panda3D community! :slight_smile:

That’s kind of strange. What output do you get when you call ls() or analyze() on your loaded model? Just to make sure that the model file contains actual geometry.

If the problem turns out to be that no triangles are defined in the model file, only vertices (or not even those), then you could try to find a way to triangulate the point cloud into a mesh before exporting it to e.g. glTF. You can then call make_points_in_place() on the Geom(s) of the loaded model to turn the mesh(es) back into a point cloud in Panda:

model_root = base.loader.load_model(model_filename)
model_root.ls()       # print the node hierarchy, to check what was loaded
model_root.analyze()  # print geometry statistics (vertex/primitive counts etc.)
model_root.reparent_to(base.render)
point_cloud = model_root.find("**/+GeomNode")
point_cloud.set_render_mode_thickness(5)  # draw the vertices as fat points

# Replace each mesh's triangles with points, turning it into a point cloud.
for geom in point_cloud.node().modify_geoms():
    geom.make_points_in_place()

There are ways to manipulate vertex colors in Panda3D, using low-level geometry manipulation or shader techniques.
Feel free to describe in more detail exactly for what purpose you would like to change the colors (e.g. for visualizing the selection of a group of points, perhaps by dragging a rectangle around them).
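For instance, a rough sketch of the low-level route (untested, and it assumes the vertex data of your cloud actually contains a "color" column):

from panda3d.core import GeomVertexRewriter

for geom in point_cloud.node().modify_geoms():
    vdata = geom.modify_vertex_data()
    rewriter = GeomVertexRewriter(vdata, "color")
    # Walk over every vertex, read its colour and write back a modified one.
    while not rewriter.is_at_end():
        r, g, b, a = rewriter.get_data4()
        rewriter.set_data4(r, g * 0.5, b, a)  # e.g. halve the green channel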

As for using your loaded point cloud as particles, I will have to defer to others who have more experience with the Panda3D particle system.


Hi!

Thank you for your answer.

Your hints turned out to be very helpful! My problem was that I did not yet fully understand how Panda3D stores objects. I had exported only vertices from my models, so the model file contained no real geometry (no triangles were defined). Once I started exporting the mesh as well, I just had to call make_points_in_place() on the Geom of the loaded model to turn the mesh into a point cloud in Panda, exactly as you showed, and it started to work:

As for the colors, I didn't want to modify them so much as keep them from the original model, which I managed to do. Overall, my goal is to visualize LiDAR scans in a visually attractive way. I know that a lot can be done with shaders, but I don't really know shaders (and I don't know whether I will find the time to learn). Still, I managed to get some visually interesting effects using CommonFilters alone. If anyone can suggest a few more ideas, I would of course be happy to try them out.
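In case it is useful to anyone, what I did was roughly along these lines (a simplified sketch; the parameter values are just ones I happened to experiment with):

from direct.filter.CommonFilters import CommonFilters

# Screen-space post-processing applied to the whole scene, no shader writing required.
filters = CommonFilters(base.win, base.cam)
filters.setBloom(blend=(0.3, 0.4, 0.3, 0.0), desat=-0.5, intensity=1.5, size="medium")
filters.setBlurSharpen(0.8)  # values below 1.0 blur the image, above 1.0 sharpen it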

The question of using my loaded point cloud as particles remains open. Since no one has answered it yet, I will wait a while and possibly create a new thread articulating just that question.


Cool results! Those renders look somewhat dreamlike.

I admit I'm not sure whether this would work as a "mesh particle emitter", but you could perhaps try MeshDrawer (see the MeshDrawer page in the Panda3D Manual).

It seems the geometry() method might accept your point-cloud node path and draw it via the MeshDrawer. Again, I've never tried this, so I could be wrong.
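Something along these lines, perhaps (completely untested; I'm only guessing that geometry() can be fed the point-cloud node path directly, and the budget value is likewise a guess):

from panda3d.core import MeshDrawer

generator = MeshDrawer()
generator.set_budget(200000)  # rough guess at a suitable budget
generator.get_root().reparent_to(base.render)

def draw_cloud(task):
    # Rebuild the MeshDrawer buffer each frame from the existing point-cloud node.
    generator.begin(base.cam, base.render)
    generator.geometry(point_cloud)
    generator.end()
    return task.cont

base.taskMgr.add(draw_cloud, "draw_cloud")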

Thank you for your answer! :slight_smile:
I tried MeshDrawer using this example (I had to tweak it a bit, as it is somewhat dated), which the user "treeform" (from this forum) put on GitHub. Unfortunately, as I discovered, the philosophy of MeshDrawer is that on every frame I have to run a Python loop that updates each point individually. For example:

    # Per-frame update from treeform's example: one billboard() call per particle.
    for v, pos, frame, size, color in particles:
        generator.billboard(pos + v * t, frame, size * sin(t * 2) + 3, color)

On my computer (MacBook Pro, M1, 2020) I can do about 20,000 such updates and still get 60 fps. However, 20,000 points is relatively few; in the examples I posted above, the entire scene is around 130,000 points, and I have had point clouds with a million points or more. As long as I use a single Python call (such as setPos or setHpr) to, say, shift or rotate all of the points at once, everything runs very quickly, because the recalculation of the points happens "inside", in highly optimised code written and run in C++, or even deeper, on the graphics card. However, as soon as I have to reposition every point from Python in real time, performance drops drastically. Hence I am looking for a solution that would, for example, let me define the initial and final positions of all of the individual points in the cloud, and then, in real time, only set the level of transition from one state to the other. Or, alternatively, define my point cloud and have it explode (but automatically, like the built-in Panda3D particles, not by manually updating each point).
Put another way, I would like to get an effect similar to the one in this video (unfortunately it is not my work, and it is Unity rather than Panda3D).

I think it's worth noting that such things are best done at a low level, not in Python. Doing this work on the CPU is also questionable, as that is slow in itself. Shaders are what is used for this kind of task nowadays. I remembered and found a post with a similar task as an example; maybe you can extract something from it for your idea.
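Just to illustrate the idea, a very rough and untested sketch: it assumes you add a custom "endpos" column to the cloud's vertex data holding each point's target position, so that the only thing Python has to update per frame is the single shader input "t":

from panda3d.core import Shader

vert = """
#version 150
uniform mat4 p3d_ModelViewProjectionMatrix;
uniform float t;
in vec4 p3d_Vertex;
in vec4 p3d_Color;
in vec3 endpos;
out vec4 v_color;
void main() {
    // Blend every point between its original and target position on the GPU.
    vec3 pos = mix(p3d_Vertex.xyz, endpos, t);
    gl_Position = p3d_ModelViewProjectionMatrix * vec4(pos, 1.0);
    v_color = p3d_Color;
}
"""

frag = """
#version 150
in vec4 v_color;
out vec4 frag_color;
void main() {
    frag_color = v_color;
}
"""

shader = Shader.make(Shader.SL_GLSL, vertex=vert, fragment=frag)
point_cloud.set_shader(shader)
point_cloud.set_shader_input("t", 0.0)  # animate this single value from Python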


I'm digging up this thread. First of all, thank you for the answer. I didn't respond earlier because I had to test and think through a few things, and my concept has also changed a bit.
I have decided that, given the time I currently have available, I have no real chance of getting into writing shaders. I know shaders are more appropriate here, but I have to be realistic and assume that I only have what is callable from Python, and that I simply have to use Panda3D's functions creatively, trusting that the internals are optimised enough (C++, shader generators, and so on).
In fact, the current particle system in Panda3D, despite some limitations, is not bad for my purposes. I am able to create a single ParticleEffect with tens of thousands of points and still achieve 60 fps. The downside is that I can only control all of these points to a limited extent and, above all, I cannot control them individually.
What I currently dream of is creating a "display" made of particles. So far, I have managed to do something like the picture below:


And although I can apply forces to these particles and, say, "blow" the points around the scene after they have been displayed, it is a "cheat" that comes at a price. It is not a single multicoloured particle object, but about 650 "megapixels" placed next to each other (12 particles each). And although I can still apply forces and so on to all of them, the performance is weak: I can smoothly animate at most about 7,800 particles (650 objects with 12 particles each). Apparently it is much less computationally expensive to animate one object with a large number of particles than many objects with a small number of particles each.
And now the question: does Panda3D offer any way to colour each of the particles individually? If so, I would create one object with a large number of particles, colour it depending on each particle's coordinates (the colouring would be computationally expensive in Python, but I would do it only once, before starting the animation), and then animate it as a whole.

Very much so, I believe. In short, my understanding is that modern graphics hardware is optimised for a lot of polygons, but relatively few batches.

I imagine that particle effects internally have only a single geometry object each (or some batching mechanism), meaning that they need to send only one call to the graphics card. As a result (and speaking simplistically), one can potentially have an awful lot of particles from a single particle effect.

Conversely, however, having a great many particle effects presumably results in a great many scene-graph nodes, and from that, a great many batches to be sent to the graphics card. This multiplicity of batches, then, slows things down.

You might be able to ease things via the Rigid Body Combiner, but I don't know how well it works with particle effects.
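(For reference, a minimal sketch of the basic usage, with "my_node_paths" standing in for whatever nodes you want combined; as I say, I don't know how it behaves with particle effects:)

from panda3d.core import NodePath, RigidBodyCombiner

rbc = RigidBodyCombiner("rbc")
rbc_np = NodePath(rbc)
rbc_np.reparent_to(base.render)

# Parent the many small nodes to the combiner instead of directly to the scene...
for np in my_node_paths:
    np.reparent_to(rbc_np)

# ...then collect them into a single batch. The original node paths can still
# be moved individually afterwards; the combiner updates the combined geometry.
rbc.collect()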


Thank you for your answer. OK, I will look into the Rigid Body Combiner; maybe it will help (I would still be using my "cheat", but perhaps it would be faster).
PS: I know that I can render individual particles as full 3D objects (with GeomParticleRenderer), and perhaps I could then modify each of those 3D objects. But I have the impression that rendering tens of thousands of full 3D objects, even ones consisting of a single point, would be all the more punishing. Or am I wrong?

As long as those 3D objects are fairly small, and are internally part of the same mesh, they may well be fine: modern graphics cards can render a ridiculous number of polygons, I believe.

The trick is as I said: this stands only as long as these 3D objects are part of the same internal mesh. Once they incur multiple batches you’re likely to again see slowdown, I fear.

That said, I doubt that it would be any more or less difficult to modify 3D particles than 2D ones, beyond the complexities inherent in 3D work.


Thanks again for your help.
It took me a while, but I have had a little fun with the RBC (Rigid Body Combiner). For the example from the documentation, RBC lets me increase the number of cubes rendered several times over (from 4,000 to 20,000 at 60 fps). Unfortunately, when I replace the cubes with objects containing particles (I tested 650 ParticleEffect objects, each with 150 particles), the gain, if there is one at all, is what I would call... homeopathic. :wink: Consequently, I have given up on this idea, but thank you.
All I can think of now is to create ONE ParticleEffect object containing THOUSANDS of quad objects rendered by GeomParticleRenderer, recolour those quads, and then combine it all through RBC. But I don't know whether that makes sense with the quads, and it will take a long time to try.
