Transparent 3D models and texture value accumulation

Hi,

I have a problem with rendering a model that is supposed to represent a ghost. I am using an opacity map to make the entire actor semi-transparent, just like we would imagine a ghost to be. The problem is that when different parts of the actor overlap on screen, their texture values add together, making those parts of the model much less transparent than the rest. This wouldn’t be awful, except that some parts of the model are cards that have been added for hair, etc. These cards sort against each other poorly, so I get large rectangular areas that flash from semi-transparent to almost completely opaque and back.

So my question is, does anyone know a rendering trick that will help me make this ghost look decent? I can’t use shaders, as I’m locked into both a DirectX renderer and an older (1.3.2) version of Panda. Thanks in advance!

There are lots of tricks. See the “Transparency and Blending” page in the manual for a few of them.

For a ghost, you might get good results with just using additive blending instead of transparency.

from pandac.PandaModules import ColorBlendAttrib

# Add the incoming color (scaled by its alpha) to whatever is already in the framebuffer.
attrib = ColorBlendAttrib.make(ColorBlendAttrib.MAdd, ColorBlendAttrib.OIncomingAlpha, ColorBlendAttrib.OOne)
ghost.node().setAttrib(attrib)
ghost.setBin('fixed', 0)

David

much obliged!

This trick does eliminate the need for transparency, which helps in quite a few different ways. But I’m still running into the original problem. Let me try to give a better example of what I am trying to fix:

When rendering the actor’s face, both the face geometry itself and the hair behind it are drawn. In a frontal shot the hair behind the face adds to the final face color, and the onscreen output is overly bright where the two overlap. We should see the hair through the head, but we should not see it more brightly than other parts of the model.

I’m quite sure that the solution is readily available in the ColorBlendAttrib settings but the modes and operands are fairly opaque to me and I can’t find documentation on them, aside from the list in the API. Thanks, again, in advance!

Well, you could go back to regular alpha blending, and avoid the extra-bright problem where the ghost is self-occluding. Turn off depth write on the ghost object (in addition to putting him in the fixed bin) to avoid some of the artifacts you described with parts flashing:

from pandac.PandaModules import TransparencyAttrib

# Standard alpha blending, but without writing depth, drawn after the rest of the scene.
ghost.setTransparency(TransparencyAttrib.MAlpha)
ghost.setDepthWrite(False)
ghost.setBin('fixed', 0)

Explanation: turning off depth write means the ghost will not update the depth buffer as he is drawn. This means that no part of the ghost will occlude any other part of him, regardless of the order in which those parts happen to be drawn. However, depth test is still enabled, so the ghost will still test against the things in the scene that are already there, which is necessary to prevent the ghost from being drawn on top of things that should be in front of him. So this is good. However, for this to work, the ghost must be drawn after all of the other objects in the scene have already been drawn, since you’re just laying down pixels on top of pixels that are already in the frame buffer. That’s why we put him in the fixed bin, which is drawn after all of the normal bins.

There will be issues with multiple ghosts in the scene, if one of them is in front of the other. You can reduce this by sorting all of the ghosts back-to-front, so that the ghosts in the back are drawn before the ghosts in the front. This will work as long as the ghosts are not very close to each other. To do this, you’ll need a special bin instead of the fixed bin; there’s another post where I demonstrate that.
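For reference, a minimal sketch of such a bin, assuming the CullBinManager interface is available in your version (the 'ghosts' bin name and the sort value 45 are just placeholders):

from pandac.PandaModules import CullBinManager

# Register a cull bin that sorts its contents back-to-front every frame.
# The sort value just needs to place it after the standard scene bins.
binMgr = CullBinManager.getGlobalPtr()
binMgr.addBin('ghosts', CullBinManager.BTBackToFront, 45)

# Put each ghost in this bin instead of the fixed bin.
ghost.setBin('ghosts', 0)
ghost.setDepthWrite(False)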

But it still won’t be perfect. What I think you’re asking is for the ghost to occlude its own parts normally, but not to occlude anything in the environment. This is clearly outside the scope of any simple state attribute.

To truly achieve this effect, you’ll need to render your ghosts offscreen into a texture buffer (using normal opacity), then render the texture onto a card onscreen, and make the card semi-transparent. Of course, then you might have troubles if there is something in the scene in front of your ghost. You could solve that problem by copying the depth buffer from your original scene into your offscreen buffer, but now things are getting complicated.
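As a rough sketch of that setup (the buffer size and the ghostScene and ghostCam names are placeholders, and calls like makeTextureBuffer may differ in an older Panda):

from pandac.PandaModules import NodePath, Vec4, TransparencyAttrib

# Render the ghost with normal opacity into its own offscreen buffer.
ghostBuffer = base.win.makeTextureBuffer('ghostBuffer', 512, 512)
ghostBuffer.setClearColor(Vec4(0, 0, 0, 0))

ghostScene = NodePath('ghostScene')
ghost.reparentTo(ghostScene)
ghostCam = base.makeCamera(ghostBuffer)
ghostCam.reparentTo(ghostScene)

# Show the buffer's texture on a semi-transparent card in the main window.
card = ghostBuffer.getTextureCard()
card.reparentTo(render2d)
card.setTransparency(TransparencyAttrib.MAlpha)
card.setAlphaScale(0.5)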

Instead of using one big texture buffer for all your ghosts, you could render each ghost onto a card individually, then place each card in the scene where the ghost is. That would also pretty well solve the depth-buffer issue, and it might be simpler than the above suggestion. Still pretty complicated, though.
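A variation on the sketch above, again only as an illustration: instead of parenting the card to render2d, the per-ghost card could be placed in the 3D scene as a billboard at the ghost's position, so it depth-tests against the environment normally.

# Place the per-ghost card in the 3D scene where the ghost is.
card = ghostBuffer.getTextureCard()
card.reparentTo(render)
card.setBillboardPointEye()
card.setPos(ghost.getPos(render))
card.setTransparency(TransparencyAttrib.MAlpha)
card.setBin('fixed', 0)
card.setDepthWrite(False)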

Or, you could model your ghost so that there are no self-occluding parts. That severely limits the shape your ghost can have, of course (he couldn’t be human-shaped, for instance). He would have to be fully convex, like a human in a plastic bag.

Or, you could live with the self-occlusion. As you said initially, it’s not awful. :)

David

Is there a way to render the ghost in two passes? First with depth test and write enabled, but color buffer writes disabled, and second with depth test enabled, depth write disabled (for speed) and color writes enabled?

This method should produce a rendering equivalent to drawing the ghost into an offscreen buffer and then compositing it transparently into the scene, with correct depth occlusion against the rest of the scene (individual ghosts would still need to be sorted). I’m just not sure of the exact process required to accomplish this in Panda3D.

Further info:

There is only ever one ‘ghosted’ actor on screen at a time, AND it is not just a visual effect, but rather a player-controlled actor which interacts with the environment. So warping the model onto a card via a second framebuffer will most likely break sorting in all of the actor’s interactions.

Arkaein’s plan seems to be exactly what I need. It will get quite complicated, however, so I think I am going to try out all the other options in drwr’s response and see what I get.

Thanks again, both of you. I’ll post my results in a bit.

Sure, you could do this by making two instances of the ghost, and setting the appropriate states on each instance. You’d use binning to guarantee the ordering of the two instances with respect to each other.

Not sure that it would work, though. When you draw the first pass, you fill in the depth buffer with the pixels for the ghost. When you draw the second pass, you test against those pixels, which are already filled in. Thus, your test will always fail, and nothing will be drawn. You could try to futz this by setting the depth comparison test to “<=” instead of “<”, and that would help, but some cards will handle this better than others. You could also try to add a depth offset attrib to shift the pixels a bit on the second pass, and that will help a little more, but again your mileage may vary. Some pixels will get lost. Of course, since he’s a ghost anyway, that might be all right.
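To illustrate those two tweaks (ghost_pass2 is just a placeholder name for the color-writing instance):

from pandac.PandaModules import DepthTestAttrib, DepthOffsetAttrib

# Accept pixels whose depth exactly matches the depth laid down by the first pass.
ghost_pass2.setAttrib(DepthTestAttrib.make(DepthTestAttrib.MLessEqual))

# Optionally nudge the second pass slightly toward the camera to catch
# pixels that don't match exactly; results vary from card to card.
ghost_pass2.setAttrib(DepthOffsetAttrib.make(1))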

David

Quote: “But it still won’t be perfect. What I think you’re asking is for the ghost to occlude its own parts normally, but not to occlude anything in the environment. This is clearly outside the scope of any simple state attribute.”

Yep, that is exactly what I am looking for - and, I expect, the card solution will have sorting issues against the environment due to both interactions with 3D objects and interactions with other 2D cards in 3D space placed in the fixed bin (and combinations of both circumstances).

The multi-pass render seems like my only option that really delivers what I want.

Actually, on second thought, maybe this approach will work really well. My past frustrations with this sort of technique have involved problems with matching depth values on slightly different (but coplanar) polygons; but since these polygons will all be exactly the same, they may be more likely to match up precisely in the depth buffer.

David

That’s what I was thinking. However, I just realized that there is another potential problem with this approach. If all of the ghosts are binned together, then a ghost will not blend with the ghosts behind it, because the color passes for those rear ghosts will fail the depth test against the front ghost’s depth pass and never get drawn.

The method could still work if two bins are used for each ghost, with ghosts binned from far to near, and all ghost bins rendered after the main scene. That way each ghost will only be tested against itself, and so ghosts far away will be fully rendered and internally consistent, and then near ghosts should be properly blended over them.
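As a sketch of that layout (the ghosts list and the makeGhostInstances helper are hypothetical, standing in for however the two instances per ghost are built):

# Sort ghosts far-to-near from the camera, then give each ghost its own pair
# of sort values in the fixed bin: its depth pass first, then its color pass.
ghosts.sort(key=lambda g: -g.getDistance(base.camera))
for i, ghost in enumerate(ghosts):
    depthPass, colorPass = makeGhostInstances(ghost)  # hypothetical helper
    depthPass.setBin('fixed', i * 2)
    colorPass.setBin('fixed', i * 2 + 1)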

It works! Here’s a patch file that applies Arkaein’s proposed solution to Roaming Ralph, and makes Ralph ghostly.

In this patch, I play games with the node structure underneath the Actor root node. This requires setting flattenable = False on recent versions of Panda, but back in 1.3.2 it probably wasn’t necessary. Also note that in 1.3.2 you might need to replace calls like “self.ralph_a.setAttrib(…)” with calls like “self.ralph_a.node().setAttrib(…)”, since the NodePath-level setAttrib function may not have existed then.

Of course, this relies on only having one ghost in the scene. As Arkaein points out, multiple ghosts wouldn’t be transparent to each other (though maybe that wouldn’t be so bad visually).

I slightly prefer the additive blending effect, it looks a little more ghostly to me than ordinary transparency. (Uncomment the ColorBlendAttrib line to try it out.) I guess it depends on whether your ghost is supposed to be a disembodied soul, or just an invisible man.

David

--- Tut-Roaming-Ralph.py	2007-09-13 18:26:47.000000000 -0700
+++ ghost.py	2009-02-26 11:37:41.856500000 -0800
@@ -74,10 +74,12 @@
         ralphStartPos = self.environ.find("**/start_point").getPos()
         self.ralph = Actor("models/ralph",
                                  {"run":"models/ralph-run",
-                                  "walk":"models/ralph-walk"})
+                                  "walk":"models/ralph-walk"},
+                           flattenable = False)
         self.ralph.reparentTo(render)
         self.ralph.setScale(.2)
         self.ralph.setPos(ralphStartPos)
+        self.makeGhost()
 
         # Create a floater object.  We use the "floater" as a temporary
         # variable in a variety of calculations.
@@ -262,6 +264,35 @@
         self.prevtime = task.time
         return Task.cont
 
+    def makeGhost(self):
+        from pandac.PandaModules import ColorWriteAttrib, DepthTestAttrib, DepthOffsetAttrib, TransparencyAttrib, ColorBlendAttrib
+
+        # Get Ralph's "Geom" node.  This is the node beneath the Actor
+        # root that contains his actual geometry.  In recent versions
+        # of Panda, it only exists if flattenable = False is passed to
+        # the Actor constructor.
+        
+        assert(self.ralph.getGeomNode() != self.ralph)
+        gn = self.ralph.getGeomNode()
+        gn.detachNode()
+
+        # Instance A, to fill in the depth buffer.
+        self.ralph_a = self.ralph.attachNewNode('ralph_a')
+        self.ralph_a.setAttrib(ColorWriteAttrib.make(ColorWriteAttrib.COff))
+        gn.instanceTo(self.ralph_a)
+
+        # Instance B, to draw the color buffer later.
+        self.ralph_b = self.ralph.attachNewNode('ralph_b')
+        self.ralph_b.setDepthWrite(False)
+        self.ralph_b.setAttrib(DepthTestAttrib.make(DepthTestAttrib.MLessEqual))
+        self.ralph_b.setTransparency(TransparencyAttrib.MAlpha)
+        #self.ralph_b.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd, ColorBlendAttrib.OIncomingAlpha, ColorBlendAttrib.OOne))
+        self.ralph_b.setAlphaScale(0.5)
+        gn.instanceTo(self.ralph_b)
+
+        # Ensure the proper render order.
+        self.ralph_a.setBin('fixed', 0)
+        self.ralph_b.setBin('fixed', 10)
 
 w = World()
 run()

Genius, thanks drwr!

I’m going to pop this into a much more complicated system, but I don’t see any reason why it shouldn’t do the trick!

works like a charm!

And, I like the additive better as well.

Thanks again for the responses!