Point Clouds and Particles

Hi all,

At the outset, I'd like to mention that while I have been working with Python for quite a long time (and with programming in general for far longer), I have only been experimenting with Panda3D for a few weeks.

I have two questions, the second arising from the first:

I can already load triangle-based models quite efficiently, and manipulate them, light them, cast shadows, and so on. But I have failed completely at handling point clouds in Panda3D. I have already searched much of the Internet, including this forum of course, but I have not found a clear answer to the question of how to load and visualize point clouds. I tried loading point-cloud files with loadModel(); the operation completes without errors, but the resulting node is effectively “empty” (for example, getTightBounds() returns None). I know that I could parse the format of the file storing the point cloud myself (PLY or glTF, for example) and then feed the vertices into a GeomPrimitive with something like addVertex(). But isn’t there a simpler loader? In Open3D, for example, it takes literally three lines of code:

import open3d
cloud = open3d.io.read_point_cloud("pointcloud.ply")  # read the point cloud
open3d.visualization.draw_geometries([cloud])  # visualize the point cloud

Suppose I get past the previous problem somehow. Another question then arises: is there any way for a loaded point cloud to become the starting point for particles? Something like this example, but using my point cloud instead of a Particle Factory as the source of what gets emitted and rendered. I know there is GeomParticleRenderer, but I suspect that is not what I need?

If it helps in any way, I will add that I generate the point clouds with the LiDAR scanner of an iPhone 12 Pro (a few examples). The largest of the point clouds shown there has 1.2 million points (and Open3D animates it smoothly on my computer), but I would like to be able to visualize larger clouds of several million points (Open3D can handle a cloud of 10 million points, although not entirely smoothly). I am also interested in handling the point colors in the cloud.

Regards
Mikołaj (Miklesz)

Hi, and welcome to the Panda3D community! :slight_smile:

That’s kind of strange. What output do you get when you call ls() or analyze() on your loaded model? Just to make sure that the model file contains actual geometry.

If the problem turns out to be that there are no triangles defined in the model file, only vertices (or not even those), then you could try to find a way to triangulate the point cloud into a mesh before exporting to e.g. glTF. You can then call make_points_in_place() on the Geom(s) of the loaded model to turn the mesh(es) back into a point cloud in Panda:

model_root = base.loader.load_model(model_filename)
model_root.ls()       # print the node hierarchy
model_root.analyze()  # print geometry statistics
model_root.reparent_to(base.render)
point_cloud = model_root.find("**/+GeomNode")
point_cloud.set_render_mode_thickness(5)  # render each point 5 pixels wide

# turn every triangle mesh in the node into a point primitive
for geom in point_cloud.node().modify_geoms():
    geom.make_points_in_place()

There are ways to manipulate vertex colors in Panda3D, using low-level geometry manipulation or shader techniques.
Feel free to describe in more detail exactly for what purpose you would like to change the colors (e.g. for visualizing the selection of a group of points, perhaps by dragging a rectangle around them).

As for using your loaded point cloud as particles, I will have to defer to others who have more experience with the Panda3D particle system.


Hi!

Thank you for your answer.

Your hints turned out to be very helpful! My problem was that I did not yet fully understand how Panda3D stores objects. I was exporting only the vertices from my models, so the model file contained no actual geometry (no triangles defined). Once I started exporting the mesh as well, I only had to call make_points_in_place() on the Geom of the loaded model to turn the mesh into a point cloud in Panda, just as you showed. And it started to work:

As for the colors, I didn’t want to modify them so much as keep the ones from the original model, which I managed to do. Overall, my goal is to visualize LiDAR scans in a visually attractive way. I know that a lot can be done with shaders, but I don’t really know shaders (and I don’t know whether I will find the time to learn them). Still, I managed to get some visually interesting effects using CommonFilters alone. If anyone can suggest a few more ideas, I would of course be happy to try them out.

The question about using my loaded point cloud as particles remains open. Since no one has answered it yet, I will wait a while and possibly create a new thread devoted to that question alone.


Cool results! Those renders look somewhat dreamlike.

I admit I’m not sure whether this would work as a “mesh particle emitter”, but you could perhaps try MeshDrawer (see the MeshDrawer page in the Panda3D Manual).

It seems the geometry method might take your point-cloud NodePath and convert it into a MeshDrawer object. Again, I’ve never tried this, so I could be wrong.

Thank you for your answer! :slight_smile:
I tried MeshDrawer using this example (I had to tweak it a bit, as it is somewhat outdated), which the user “treeform” (from this forum) put on GitHub. Unfortunately, as I found out, the philosophy of MeshDrawer is that every time I update a frame, I have to run a Python loop over every point. For example:

    for v,pos,frame,size,color in particles:
        generator.billboard(pos+v*t,frame,size*sin(t*2)+3,color)

On my computer (MacBook Pro, M1, 2020) I can do about 20,000 such updates per frame and still get 60 fps. However, 20,000 points is relatively few: in the examples I pasted above, the entire scene is around 130,000 points, and I have had point clouds with a million points or more. When I use a single Python call (such as setPos or setHpr) to shift or rotate all the points at once, everything runs very fast, because the recalculation happens “inside”, in highly optimised code written in C++, or even deeper, on the graphics card. But when I have to reposition every point at the Python level in real time, performance drops drastically. Hence I am looking for a solution that would, for example, let me define the initial and final positions of all the individual points in the cloud, and then, in real time, only set the level of transition from one state to the other. Or alternatively, define my point cloud and have it explode (but automatically, like the built-in Panda3D particles, not by updating each point manually).
One could say that I would like to achieve an effect similar to this video (unfortunately it is not my work, and it was made in Unity rather than Panda3D).

I think it’s worth noting that such things are best done at a low level, not in Python. Doing it on the CPU at all is also questionable, since that is slow in itself; shaders are what is used for this kind of task nowadays. I remembered and found a post with a similar task; maybe you can extract something from it for your idea.
