Transparency of 3d surface vertices

Hi,

I am trying to render large, transparent 3D surface meshes (more than 50K vertices and 100K faces). In particular, every vertex has an associated alpha value, stored in RGBA format.

First of all, I should mention that the surfaces render perfectly without transparency:

But when I add transparency, Panda3D seems to fail to correctly identify which faces should render first. I tried reading various posts, including this one, but wasn’t able to resolve the issue.

Let me give you a visual representation of what happens. If I add a simple transparency (alpha=0.7), the visualization changes to the following:

As can be seen, there are several transparency artifacts introduced to the image (ideally the image would look like a glass brain). Knowing the actual data structure of the model, I realized that the artifacts are in fact related to the order of the triangles in the mesh. Let me explain this with a couple of examples. If I change the order of triangles in my GeomTriangles to sort them along the x-axis, this is what the render looks like from the positive x direction:

I’d say this is nearly as good as I could hope for. However, there’s a catch: if I rotate the camera to view the same object from the negative x direction, this is what it looks like:

As can clearly be seen, the right hemisphere, which was supposed to be rendered behind the left hemisphere, is actually rendered in front of it. So basically, triangles seem to be rendered in the order they are stored in the GeomTriangles, and not according to their proximity to the camera.

To verify this point further, I tried visualizing the same surface after randomly shuffling the triangles, and this is how it looks:

It is clear how this is messing everything up!

I have tried a few other things (like disabling back-face culling) to see if they help, but nothing has worked thus far. A long while ago, I had a similar problem with Mayavi, which I was able to fix by enabling depth peeling (use_depth_peeling=True).

I’ve basically been trying to find similar functionality in Panda3D, but haven’t been successful thus far. (I can see that there are ways to take depth into account, as described here, but I’m not sure how to use them appropriately.)

It would be great if you could potentially guide me in the right direction.

Thanks

If your mesh is convex-ish, then you can sort your triangles inside-out, but it looks like this isn’t quite the case for you.

If you can accept the cost of enabling multisampling, you can enable multisample antialiasing in Panda3D, which (with sufficient samples) may produce a desirable result.

Otherwise, you need to use some method for order-independent transparency. Depth peeling is a relatively simple method, but it is not implemented out of the box with Panda3D (but feel free to put in a feature request on GitHub). Another method is per-pixel linked lists.


I tried multisample antialiasing (the code below; not sure if that’s what you meant), but it didn’t have any effect on the resulting render:

self.surface_nodePath.setTransparency(TransparencyAttrib.MDual)
self.surface_nodePath.setAntialias(AntialiasAttrib.MMultisample)

I’d say depth peeling would in fact be a very appealing feature. I’ll create a feature request as you recommended.

In the meantime, I’m wondering whether sorting the triangles manually (according to the direction of the camera) would be an acceptable solution. I understand that doing it in Python may reduce FPS, but I really want to ensure approximate correctness of the rendered transparency (I don’t want a farther surface to be rendered in front of a nearer one).
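In case it helps, here is a minimal numpy sketch of the kind of camera-dependent sort I have in mind (all names are illustrative, and `view_dir` is assumed to be a unit vector pointing from the camera into the scene):

```python
import numpy as np

def sort_triangles_back_to_front(vertices, triangles, view_dir):
    """Reorder triangles so the farthest ones are drawn first.

    vertices:  (V, 3) float array of vertex positions
    triangles: (N, 3) int array of vertex indices per triangle
    view_dir:  (3,) unit vector pointing from the camera into the scene
    """
    # Depth of each triangle = projection of its centroid onto view_dir.
    centroids = vertices[triangles].mean(axis=1)       # (N, 3)
    depths = centroids @ np.asarray(view_dir, float)   # (N,)
    # Largest projection = farthest from the camera, so draw it first.
    return triangles[np.argsort(-depths)]

# Example: two triangles, one at z=0 and one at z=1, camera looking along +z,
# so the z=1 triangle is farther and must come first.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
tris = np.array([[0, 1, 2], [3, 4, 5]])
ordered = sort_triangles_back_to_front(verts, tris, np.array([0.0, 0.0, 1.0]))
```

This is only approximate (centroid depth can mis-order large or intersecting triangles), which is exactly the class of artifact that true order-independent transparency avoids.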

Is there a way to ask Panda3D to update the object’s triangles before each frame is rendered?

Also, to update on this prior comment: I think the reason multisampling is not working for me might be my graphics card (when I add multisampling to the transparency attribute, I get binary transparency).

I think this is yet another reason why depth peeling needs to be implemented as multisampling is only supported on some hardware.

As a further update, I tried reordering the triangles before each new render (using tasks). However, the reordering had no effect on the renders; it seems that the initial order of the triangles is what matters. Here’s what my code looks like:

        # Add the reorderFacesTask procedure to the task manager.
        self.taskMgr.add(self.reorderFacesTask, "ReorderFacesTask")


    # Define a procedure to reorder the rendered triangles for correct transparency.
    def reorderFacesTask(self, task):
        # get the direction that the camera is aimed towards
        direction = np.array(self.camera.get_quat())[1:]
        
        if (direction != self.camera_direction).any():
            self.camera_direction = direction
            self.vertex_memview[:] = self.mycoords
            # reorder triangles accordingly
            sorted_triangles = lrt[np.argsort(lrxyz[lrt].mean(1).dot(direction))]
            self.faces_array = array.array("I", sorted_triangles.reshape(-1))
            self.tris_memview[:] = self.faces_array

        return Task.cont

Further update:

I was able to more or less solve the issue by creating a delayed task that repeats every second; in that task, I remove the previously rendered node and redraw a node with a new triangle ordering.

But I reckon depth peeling would still be an important addition to the package.

No, you need to set the transparency mode to TransparencyAttrib.MMultisampleAlpha; this has nothing to do with antialiasing. You should also make sure you are requesting multisample properties in your framebuffer by putting this in Config.prc:

framebuffer-multisample true
multisamples 8

You shouldn’t keep around a memoryview across frames. Panda has no idea that you even modified the data this way, since memoryview is a Python construct. You should re-obtain a writable handle to the data with modify_array(0) and re-create the memoryview for every frame in which you need to modify the data. If you were using the threaded pipeline, you would even get a deadlock here because Panda would refuse to render while you’re still holding a writable handle to the data.

So, I got nerd-sniped by my own link, and I implemented OIT using per-pixel linked lists:

However, this method does have some drawbacks compared to depth peeling, notably the steep OpenGL version requirement, and the demands on GPU memory (and the not-so-graceful failure when you run out of this memory). I might add this to Panda in a future version, but it probably cannot replace depth peeling, especially for older cards.
