Complex semi-transparent object

I need to render complex semi-transparent objects (energy contours).
But the transparency depends on the orientation of the camera due to the ordering problem.

Here are the snapshots.

  1. Angle #1

  2. Angle #2 (transparency disappears)

I cannot split the object into smaller objects, and MMultisample does not work.
Is there any way of implementing order-independent transparency?


What is the transparency? I think if you use the following, it might work:

def setObjTransparency(task):
    # obj and the (r, g, b, a) values are assumed to be defined elsewhere
    obj.setColor(r, g, b, a)
    return task.cont

taskMgr.add(setObjTransparency, "Sets the transparency")


AsyncTask::DoneStatus set_obj_transparency(GenericAsyncTask *task, void *data) {
    obj.set_color(LColor(r, g, b, a));
    return AsyncTask::DS_cont;
}

task_mgr->add(new GenericAsyncTask("Sets the transparency", &set_obj_transparency, nullptr));

(I put C++ because you didn’t mention your language)

This way the transparency will be reset every frame.

Are the layers interleaved? If so that is a tricky problem. You might need to choose an interleaving option that doesn’t depend on ordering (like additive mode) in conjunction with disabling depth testing, or use a more advanced order-independent transparency mode.

If they are not interleaved and appear in a fixed order then make sure that the objects have a different Z position so that Panda’s alpha sorting can sort them properly, or you can specify the render order manually.

@panda3dmastercoder’s suggestion will do nothing. There’s absolutely no point to setting the same property every frame.

I just reread the code. Will it work if we clear and reset the color every frame?

EDIT: We can set the transparency attribute earlier and not in the Task.

A single-layer object has the same problem. (this isosurface is generated from complex 3D networks)

With .setDepthTest(True)
Angle #1:

Angle #2:

Angle #3:

With .setDepthTest(False)
Angle #1:

Angle #2 (weird blending occurs; the blending of the left and right regions differs):

Should I use advanced OIT mode?

If you have many self-overlapping areas and you need the blending to work correctly (and you don’t want additive blending), you may need to look into OIT methods such as depth peeling.
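To make the idea concrete, here is a per-pixel numeric sketch of what depth peeling computes, in plain Python rather than Panda or shader code: each pass "peels" the nearest remaining fragment and composites it front-to-back, so no pre-sorting of the geometry is required.

```python
def depth_peel(fragments, background, max_passes=4):
    """Composite unordered (depth, color, alpha) fragments for one pixel.

    Mimics depth peeling: each pass extracts the nearest fragment that has
    not been peeled yet and composites it front-to-back with the 'under'
    operator. Colors are single floats here for simplicity.
    """
    remaining = list(fragments)
    color = 0.0          # accumulated color
    transmittance = 1.0  # fraction of light still reaching the eye
    for _ in range(min(max_passes, len(remaining))):
        nearest = min(remaining, key=lambda f: f[0])
        remaining.remove(nearest)
        _, frag_color, frag_alpha = nearest
        color += transmittance * frag_alpha * frag_color
        transmittance *= 1.0 - frag_alpha
    return color + transmittance * background
```

Regardless of the order in which the fragments are listed, the result matches a correct back-to-front alpha blend, which is exactly the property the GPU version buys you.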

Yes, that is exactly what I want…

It seems difficult for me to implement right away because I’m a beginner. Can I get some advice? Is there any OIT method that can be implemented without a shader language?

An alternative way to do OIT is to sort all the triangles individually based on the camera angle. But this is not very efficient. Splitting your mesh into layers might be effective, but you mentioned that this is not an option for you.
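For what it's worth, that per-triangle sort can be sketched with NumPy roughly as follows (the vertex and index arrays here are hypothetical placeholders):

```python
import numpy as np

def sort_triangles_back_to_front(vertices, indices, cam_pos):
    """Reorder triangles so the farthest from cam_pos is drawn first.

    vertices: (N, 3) float array; indices: (M, 3) int array of triangles.
    Back-to-front order lets standard alpha blending composite correctly,
    but re-sorting whenever the camera moves is expensive for large meshes.
    """
    centroids = vertices[indices].mean(axis=1)      # (M, 3) triangle centers
    distances = np.linalg.norm(centroids - cam_pos, axis=1)
    return indices[np.argsort(-distances)]          # farthest first
```

The reordered index array would then have to be written back into the geometry (e.g. its GeomTriangles primitive) each time the camera moves enough to change the order.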

I think depth peeling requires very little shader code, but does require some more advanced knowledge of Panda to set up rendering using multiple GraphicsBuffers.

I may find time at some point to experiment with this technique, though not in the coming weeks, I’m afraid.

I don’t think that’s possible, because there are hundreds of thousands of triangles in the mesh. Thank you for your kind reply; I’ll think about how to implement OIT easily in my own way.

By the way, it’s a pity that no one has asked about OIT before. It would be nice if someone had implemented it.

I believe that tobspr implemented OIT in his RenderPipeline at some point, but he used a complex approach with per-pixel linked lists as part of a deferred rendering set-up.

I’ll check the source code and see if it’s understandable.

I’m sure you’re already busy with other plans, but I hope that OIT will be provided as a basic Panda3D function for beginners like me in the future.

If you are OK with enabling multisampling on your framebuffer, then you can use multisample alpha mode that is in Panda today. It will generate good results if you have a decent number of multisamples, though at a significant performance cost.

Unfortunately, MMultisample alpha does not work on my computer…

How fast do you need this to run? Does it need to operate at interactive or real-time frame-rates?

If it’s acceptable that the program not run at interactive or real-time frame-rates, then you could perhaps implement a simple form of ray-tracing. That should allow you to render accurate transparency, I imagine.

(Actually, it might perhaps be worth trying anyway, if nothing else is working, in case you can get a simple form to run at an acceptable frame-rate.)

Please note that you need to ask Panda for a multisample framebuffer before multisample alpha will work.

An easier method for OIT that I’ve learned about in the meantime is the “weighted average” method. It is not quite correct but still “good enough” for most purposes. The idea is that when rendering, the color of the incoming fragment is added into the framebuffer, but the alpha value is also added to a separate buffer. At the end of rendering, the framebuffer value is divided by this accumulated alpha value.

This would probably be fairly easy to add to Panda’s shader generator, but the question is whether it would be acceptable to have a less correct method of transparency.
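To illustrate the weighted-average idea with numbers (plain Python, treating colors as single floats; the clamp-based final compositing against the background is my own assumption, since the exact formula wasn't spelled out above):

```python
def weighted_average(fragments, background):
    """Order-independent 'weighted average' transparency for one pixel.

    fragments: list of (color, alpha) pairs in arbitrary order.
    Colors scaled by alpha are accumulated, alphas are accumulated
    separately, and the color sum is normalized at the end.
    """
    color_sum = sum(color * alpha for color, alpha in fragments)
    alpha_sum = sum(alpha for _, alpha in fragments)
    if alpha_sum == 0.0:
        return background
    average = color_sum / alpha_sum
    coverage = min(alpha_sum, 1.0)  # assumed clamp for total coverage
    return average * coverage + background * (1.0 - coverage)
```

Because only sums are involved, reversing the fragment order gives an identical result, unlike ordinary alpha blending; the price is that a near fragment no longer occludes a far one more strongly, which is why the method is "good enough" rather than correct.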