Transitioning Between Models: Looking for Suggestions/Advice

For a certain part of my current project, I have regions in the game-world that can be “shifted” between one state and another, with each state having separate and often-different geometry.

Up until now, I’ve been presenting this “shift” with a simple transparent fade–but I find that this effect is a little underwhelming, and incurs transparency-sorting nuisances.

To that end, I want to implement a more-interesting, shader-based effect for the shift–which brings me, then, to the point of this thread…

You see, this project already makes use of a number of shaders, even for basic scenery. And I don’t want to alter these to support this new effect.

This is especially true of the most common one, for which it would mean an extra texture-read, at least one additional shader input (and it already has so many T-T), and an additional if-statement–all of which would go unused in the vast majority of cases.

So, I’m looking for other ways. And I’ve considered a few:

I could create a variant of my most common shader, one that supports the new effect.

This should work well–but makes any objects that use other shaders awkward to include in such regions.

(I suppose that I could have custom code to fade them out via transparency, but that’s not ideal.)

I could render the relevant objects to a pair of off-screen textures, then composite those onto a quad in the actual scene.

This should allow for all of my current shaders to be applied.

However, as best I’m aware, one can’t explicitly write to the depth-buffer, and so the scene would end up with depth-information representing the quad, not the scenery…

I could use the stencil-buffer.

As best I understand it, this should allow for a transition that supports my various custom shaders, and that includes the relevant depth-information.

But, a little like the second idea above, this would seem to incur the addition of a whole separate rendering of the entire scene, all the time, for the sake of an effect with relatively-limited scope.

Further, it’s not clear to me whether one can render textures/shader-output into the stencil buffer (as with a normal colour-buffer), or whether it’s geometry-only.

It’s also not clear to me whether the stencilling would be handled automatically, or whether it would require that I modify my shaders to read the stencil-buffer–which would bring me back to the issue that I have with the first idea, above…

So, does anyone have any thoughts or ideas on this…?

Sure you can write to the depth buffer: see gl_FragDepth in the OpenGL 4 Reference Pages.
There’s the drawback of losing the early Z test, though; whether that even matters to you, profiling will have to show.
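
In case it helps, a minimal sketch of what that might look like from Panda3D, with the GLSL embedded as strings (the names and the depth value written are purely illustrative, not taken from your project):

# Illustrative sketch only: a fragment shader that writes its own depth
# via gl_FragDepth, applied from Panda3D.
from panda3d.core import Shader

vert_src = """
#version 150
uniform mat4 p3d_ModelViewProjectionMatrix;
in vec4 p3d_Vertex;
void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}
"""

frag_src = """
#version 150
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0, 0.5, 0.0, 1.0);
    // Writing gl_FragDepth means the early Z test is lost for this shader.
    gl_FragDepth = gl_FragCoord.z;  // or any other computed depth value
}
"""

shader = Shader.make(Shader.SL_GLSL, vertex=vert_src, fragment=frag_src)
# someNodePath.setShader(shader)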

The stencil buffer isn’t likely to help you, as that specifies which parts of the screen space to render; I don’t see how that would be applicable here.

Ah, I didn’t know about that! I could have sworn that someone told me that one couldn’t write to the depth-buffer–but perhaps I’m misremembering.

Well, thank you for correcting me, then! :slight_smile:

That does seem to make the offscreen approach more viable…

As I understand it–and a quick play with the manual’s stencil-buffer example-program confirms–the effect of the stencil buffer can be applied to individual nodes, thus allowing some nodes to be stencilled and others not.

See the screenshot below, noting that the panda is stencilled, while the smiley (behind it) is not:

This effect could then be used to progressively stencil the transitioning objects in and out.

[edit]
In fact, I made a quick-and-dirty demonstration of the sort of thing that I have in mind, modified from the manual’s sample-program. See a quick gif below:
[gif: stencil-buffer transition demonstration]

I’m not sure I fully understand the problem; why not just apply your modified shader to only the objects that need to receive the effect? And when the effect is done, apply the simpler shader again?

It is possible to write to the depth buffer, as pointed out above, though the caveat is that the GPU will now have to perform depth testing after running the fragment shader, which means that the fragment shader is run even for objects that are obscured behind other objects. This will increase pixel fill pressure. (You can partially mitigate this with a technique called “conservative depth”, which is a googleable term, though it can’t always be used.)
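
(For illustration, that hint is just a redeclaration of gl_FragDepth near the top of the fragment shader, along these lines; it is only valid if the shader really never brings the depth closer than the rasterised value. A hypothetical fragment:)

# Illustrative only: the conservative-depth redeclaration, as a fragment
# shader source string.  "depth_greater" promises the driver that this
# shader only ever pushes depth further away, so some early-Z culling
# can be kept despite writing gl_FragDepth.
frag_src = """
#version 420
layout (depth_greater) out float gl_FragDepth;
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0);
    gl_FragDepth = gl_FragCoord.z + 0.001;  // placeholder: only ever further away
}
"""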

It may also be worth profiling whether the extra if and texture read are actually going to form a bottleneck in your program. All modern GPUs can fairly efficiently skip the if branch if the condition is dynamically uniform (ie. the condition can be determined without executing code dependent on texture reads or vertex data) and certainly when it’s a uniform constant (ie. a simple value passed via set_shader_input).
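
To illustrate the kind of condition I mean (all of the names below are made up, not from your shaders): the branch is driven by a plain uniform, so every fragment in a draw call takes the same path, and the texture read can be skipped entirely when the effect is off. Only the fragment stage is shown; the matching vertex shader would pass texcoord through.

# Purely illustrative names: the effect is gated behind a uniform bool,
# so the branch is dynamically uniform, and the extra texture read is
# skipped when the effect is disabled.
frag_src = """
#version 150
uniform bool use_shift_effect;
uniform sampler2D shift_mask;
uniform sampler2D base_colour;
in vec2 texcoord;
out vec4 fragColor;
void main() {
    vec4 colour = texture(base_colour, texcoord);
    if (use_shift_effect) {
        // Only draw calls with the effect enabled pay for this read.
        colour.a *= texture(shift_mask, texcoord).r;
    }
    fragColor = colour;
}
"""
# ...and on the Python side the condition is just a shader input:
# nodePath.setShaderInput("use_shift_effect", True)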

Essentially, the problem is that I don’t just have one or two shaders. There are two that cover the majority of cases–but then there are others for special cases: certain types of environment, or water, or glowing lights, and so on. And I don’t want to maintain a variant of every single shader that might appear in such an environment…

Aah, I see! Thank you for the caveat–I’ll bear that in mind, I intend!

I’m not sure about the “if”–you make a good point, and one that I hadn’t been aware of, regarding GPUs skipping such conditions where called for.

As to textures, I’ve generally learned to minimise the number that I use where feasible. I’m not sure that they’re a bottleneck, as such, but historically I do seem to have found that they impact performance.

(Possibly in part due to a speculated inefficiency in my rendering setup–I’m not deferring anything–and likely in part because I am in some places using a mipmap-offset in my texture reads (to keep textures sharp at the camera-range that I’m using).)

It also just feels… unpleasantly inefficient to add an extra texture-read to the majority of objects when only a minority will actually use it.

(Plus, again, it occurs to me that taking this approach would again either involve changing a variety of shaders, or limiting which shaders may be present in these environments–thus limiting my level-design options–or using some alternative approach for shaders other than those that I do alter…)

Right, texture reads are high-latency. The GPU can hide that latency partially if the UV coordinates used to sample them can be determined up-front, and are not calculated in the shader itself. However, it could be effective to hide them behind an if that is a uniform constant.

One approach you can take is by using an #ifdef in the shader, like #ifdef USE_SPECIAL_EFFECT, and then make a method that generates a unique variant of your shader by just inserting things like #define USE_SPECIAL_EFFECT 1 at the top (noting they would have to be placed after the #version line). These preprocessor directives are executed at compile-time, and not at run-time, so they do not cost performance. In fact, Panda 1.11 will allow you to set these defines programmatically rather than having to modify the text.

Still, you do want to avoid a combinatoric explosion of different shader variants if at all possible, because compiling shaders is slow.
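
To sketch the sort of method I mean (the helper and the file names here are made up), the defines are spliced in just after the #version directive and the result compiled:

# Hypothetical helper: build a shader variant by injecting #define lines
# just after the #version directive.
from panda3d.core import Shader

def make_shader_variant(vert_path, frag_path, defines):
    def inject(source):
        lines = source.splitlines()
        # Assumes the #version directive is the first line of the file.
        out = [lines[0]]
        out += ["#define %s %s" % (name, value) for name, value in defines.items()]
        out += lines[1:]
        return "\n".join(out)

    with open(vert_path) as f:
        vert_src = inject(f.read())
    with open(frag_path) as f:
        frag_src = inject(f.read())
    return Shader.make(Shader.SL_GLSL, vertex=vert_src, fragment=frag_src)

# e.g.:
# shader = make_shader_variant("common.vert", "common.frag",
#                              {"USE_SPECIAL_EFFECT": 1})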

Yeah. Ifdefs would allow me to avoid having to write the variants myself, at least–but would still incur the cost of additional shaders.

Let me ask: Can one render more than just geometry to the stencil buffer? That is, could one render textured geometry (with the texture being present in the result), or even shader-output, to the stencil buffer…?

I’m still not sure that it’s a good approach, but I do want to weigh the idea fairly, and otherwise am just curious…

I will say: Right now, I’m leaning towards just making a single set of variant-shaders (two or three) for the most-common shader cases (based on where I currently intend to use this effect) and then limit what elements are present in these transitioning areas…

That is impressively quickly hacked together, and I would love to see the source code. It also shows exactly what I meant, though, and I don’t see how it would be helpful for you. Then again, your specification of exactly what you want to achieve was a bit vague, and sounded to me like “blending two renders of meshes representing some object, e.g. fading alpha-transparently from one to the other.”

Thank you! :slight_smile:

It’s actually really simple–just a minor modification of the code given in the manual, adding a second model that uses a slightly different “stencil-reader” attribute.

As for showing the code… I originally intended to do so–but then it was accidentally lost in my wrestling with the forum’s image-uploader for the purpose of showing the gif! ^^;

Still, I’ve remade it, and post it below. It might be subtly different to the original, but should convey the same essential information. As noted above, most of it comes straight from the manual–my additions are indicated by an exclamation mark.

from panda3d.core import *

# Do this before the next import:
loadPrcFileData("", "framebuffer-stencil #t")

import direct.directbase.DirectStart

constantOneStencil = StencilAttrib.make(
    1, StencilAttrib.SCFAlways,
    StencilAttrib.SOZero, StencilAttrib.SOReplace,
    StencilAttrib.SOReplace, 1, 0, 1)

stencilReader = StencilAttrib.make(
    1, StencilAttrib.SCFEqual,
    StencilAttrib.SOKeep, StencilAttrib.SOKeep,
    StencilAttrib.SOKeep, 1, 1, 0)

# !
stencilReader2 = StencilAttrib.make(
    1, StencilAttrib.SCFGreaterThan,
    StencilAttrib.SOKeep, StencilAttrib.SOKeep,
    StencilAttrib.SOKeep, 1, 1, 0)
# /!

cm = CardMaker("cardmaker")
cm.setFrame(-.5, .5, -.5, .5)

# To rotate the card to face the camera, we create
# it and then parent it to the camera.
viewingSquare = render.attachNewNode(cm.generate())
viewingSquare.reparentTo(base.camera)
viewingSquare.setPos(0, 5, 0)

viewingSquare.node().setAttrib(constantOneStencil)
viewingSquare.node().setAttrib(ColorWriteAttrib.make(0))
viewingSquare.setBin('background',0)
viewingSquare.setDepthWrite(0)

view = loader.loadModel("panda")
view.reparentTo(render)
view.setScale(3)
view.setY(150)
view.node().setAttrib(stencilReader)

# !
view = loader.loadModel("smiley")
view.reparentTo(render)
# (The "smiley" model is smaller than
#  and has a different origin to
#  the "panda" model.)
view.setScale(10)
view.setY(150)
view.setZ(10)
view.node().setAttrib(stencilReader2)
# /!

base.run()

But does it not transition from one object to another?

I did specifically say that I had been using a transparent fade, and that I found it to be “a little underwhelming”. So no, I’m not looking for a transparent fade using alpha. :stuck_out_tongue:

I was vague in what, specifically, I was trying to achieve–but I feel that it’s not really important for this conversation.

I want to transition from one object to another, and am looking for ways to do so. I’ve used alpha-transparency, and been unsatisfied with it. Now I’m looking at other methods…

Unfortunately you can’t write to the stencil buffer in the shader the way you can write depth, sorry. You can write to auxiliary targets, but you can’t use those to do any testing in the same render pass, though you could use them for post-process compositing.

It’s always a trade-off between shader complexity and number of shaders. I’m afraid there’s no one-size-fits-all answer for this, but your inclination appears sound.

Ahh, thank you for clarifying that, then!

In which case, indeed, it looks like the stencil buffer likely isn’t for me–at least for this purpose.

Hmm… Am I understanding that correctly to mean that, if I were to render my two objects to a pair of off-screen textures and then use a shader to composite them onto a quad (which would then be part of the main scene, and rendered as such)–i.e. using only my own shaders and objects, not the stencil buffer–I could at best have them one frame behind…?

If so, then that approach likely won’t work for me either, I fear.

Leaving me, in the end, with my inclination to make a small number of variant shaders…

(With, I do think, a simple alpha-fade for small special-effect objects–glows and the like. I daresay that it wouldn’t work as well for larger things, like bodies of water, but I may just want to avoid having such things in regions that have this transition.)

No, it doesn’t need to be one frame behind at all. You can render and then composite them in the same frame. I think you can even reuse the depth buffer from the first pass.

I’ve kind of lost sight of what you’re trying to accomplish with that approach, though. Are you trying to implement the transition effect as a post-processing effect?

I don’t really mind whether it’s a post-process effect or not (although a non-post-process effect would likely fit with the extant code better).

The goal remains the same: that there are two objects/sets of objects, and that I be able to transition between them with a nice effect. To that last end, my thought is to use a shader so that the transition can be somewhat dynamic and animated.

As to the particular approach of rendering the objects to off-screen textures and then compositing to a card, the main advantage that I see is that I don’t need variant shaders, and indeed should be able to use any of my extant shaders: since it’s just rendering to a texture, that stage should work just like normal rendering. The only new shader would be the compositing shader.
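
To be concrete about what I have in mind, something like the following rough sketch (all of the names, and the compositing shader files, are hypothetical placeholders rather than working code from my project):

# Rough sketch only; assumes a running ShowBase, so that "base" and
# "render" exist (as in the stencil example above).  Each state is
# rendered into its own offscreen buffer, and the two textures are then
# mixed on a card by an animated factor.
from panda3d.core import NodePath, CardMaker, Shader

def render_state_to_texture(name, state_root):
    # One offscreen buffer per state, each with its own camera.
    buf = base.win.makeTextureBuffer(name, 1024, 1024)
    cam = base.makeCamera(buf)
    cam.reparentTo(state_root)
    return buf.getTexture()

state_a = NodePath("state-a")  # the outgoing geometry lives under this root
state_b = NodePath("state-b")  # the incoming geometry lives under this root
tex_a = render_state_to_texture("buffer-a", state_a)
tex_b = render_state_to_texture("buffer-b", state_b)

cm = CardMaker("composite-card")
cm.setFrame(-1, 1, -1, 1)
card = render.attachNewNode(cm.generate())
card.setShader(Shader.load(Shader.SL_GLSL, "composite.vert", "composite.frag"))
card.setShaderInput("tex_a", tex_a)
card.setShaderInput("tex_b", tex_b)
card.setShaderInput("blend", 0.0)  # animate from 0 to 1 over the transition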

That said, I now remember a thought that came to me previously: it seems to me that if I’m not careful with things like texture-resolution and mipmapping, there may be a mismatch between the appearance of geometry not rendered in this fashion and geometry rendered in this fashion (which in the game may neighbour each other).

(To perhaps explain a bit more what I’m trying to do:

The player, while exploring a level, will sometimes find areas that are marked as being able to transition. Such an area could be a wall, or a shut door, or even just an otherwise-unremarkable section of a corridor.

At these areas, they can use an ability that causes that area–not all of the level, but just the affected area–to shift, with some of its geometry disappearing and being replaced by new geometry. The wall might be gone; the door might be open; the corridor might now run in a different direction, the new walls blocking passage back the way the player came.

As things stand, this transition is just a pair of alpha-fades: the old geometry fades out, while the new geometry fades in. I want to do something visually better than that (and less prone to sorting issues).)

As a transition option, you could consider a shader that explodes one object into triangles, while on the other the triangles gather back together into the object.

At the moment, though, I don’t understand how you would assemble an object from triangles in reverse.


Hmm… That’s an interesting idea–thank you for it! :slight_smile:

I’m not sure that it’ll fit what I have in mind here–but I do like the effect, and will think on it, I intend!

(Fun fact: I actually have a triangle-explosion effect in use elsewhere, to depict exploding crystals.)

My first thought is to do it without actually doing it:

Essentially, if you can be confident of roughly where the “exploded” triangles are going to end up, then you just “explode” the first object, hide it, show the second object with the shader set to show it already “exploded”, and then run the “explosion” process in reverse.

(That “explosion” process just being a vertex-shader effect that literally moves vertices.)

As to being confident of where the triangles will end up, what I might imagine is pre-processing (in the 3D modelling program) a set of world-space positions for each triangle, then baking those positions into vertex colours for both objects.

Otherwise, maybe just fade out the outgoing triangles and fade in the incoming triangles, and hope that, with lots of triangles and perhaps rotating triangles to add more visual noise, it’s not very obvious that they’re not the same set of triangles…
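
(To sketch the vertex-shader side of that idea, all of it hypothetical: the baked vertex colours are assumed to encode an offset towards each vertex’s “exploded” position, and an animated factor pushes the vertices out or gathers them back in.)

# Entirely hypothetical sketch: explode_factor (0 = intact, 1 = exploded)
# would be animated from Python via setShaderInput.
vert_src = """
#version 150
uniform mat4 p3d_ModelViewProjectionMatrix;
uniform float explode_factor;
in vec4 p3d_Vertex;
in vec4 p3d_Color;  // baked offset, packed into the 0..1 colour range
void main() {
    vec3 offset = (p3d_Color.rgb - 0.5) * 2.0;  // unpack to the -1..1 range
    vec4 displaced = p3d_Vertex + vec4(offset, 0.0) * explode_factor;
    gl_Position = p3d_ModelViewProjectionMatrix * displaced;
}
"""
# Running explode_factor from 1 down to 0 on the incoming object would
# then give the "gathering" half of the transition.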


Okay, so in the end I decided to follow that inclination and just make a variant shader based on my most-common shader (the one that I use for most game-objects, and some game-environments).

And overall, I believe that I’m happy with it! :slight_smile:

Indeed, I thought that I’d share the result in this thread, for others to see:
[gif: aetherealEffects]

(A longer version can be found on my BlueSky, here.)


Congratulations, your project looks amazing! :clap:


Thank you so much! Such praise is much appreciated! :slight_smile:


Ah, isn’t this similar to the fog of war effect?

Hmm… I hadn’t thought of it that way.

But I suppose that it is similar, in effect.

The main differences, I suppose, being that a “fog of war” is usually revealed according to which areas the player has seen, and that “fog of war” usually affects the entire game-world.

In this case, on unveiling a specific, applicable area, the entirety of that area is revealed. And instead of the whole game-world, only certain areas have this effect at all.

(Note, for example, that the big pile of rocks to the right isn’t affected.)

Well, and a minor difference: “fog of war” tends to be one way, while–as the longer version on BlueSky shows–this effect goes both ways.

Ah, one thing that should be mentioned: you may notice that some areas are blacked out, and that which areas are blacked out changes when this effect is applied. That’s actually a separate (and more universal) effect, applying line-of-sight. It’s just that, since this effect changes what walls are present or absent, the underlying code also changes what line-of-sight geometry is present or absent.