Decals on arbitrary surfaces

I wondered about that. And I was tired last night (seems I always get on these forums late!), so my description wasn’t very accurate. What I want is to build a decal mesh that conforms to the surface of another mesh, something more than just simple quads that are parallel to another surface.

Imagine non-planar decals that conform to a highly-tessellated terrain mesh. This is a good way to render extra detail upon a terrain mesh. As long as you’re careful with render order and Z-bias, you can build up quite a lot of detail for only a little overdraw cost.
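To illustrate the core idea (a hypothetical pure-Python sketch, not Panda code; all names here are made up): each terrain vertex that falls under the decal gets texture coordinates by projecting it into the decal's local plane, defined by a center point and right/up axes.

```python
# Illustrative sketch of decal UV projection (not the Panda3D API).
# Assumes a square decal of side `size`, with orthonormal right/up axes.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decal_uv(vertex, center, right, up, size):
    """Project a world-space vertex into the decal's plane and return (u, v);
    results in [0, 1] mean the vertex lies inside the decal's footprint."""
    offset = [v - c for v, c in zip(vertex, center)]
    # Distance along the decal's right/up axes, remapped so the decal
    # footprint spans the [0, 1] texture range.
    u = dot(offset, right) / size + 0.5
    v = dot(offset, up) / size + 0.5
    return u, v

# A vertex at the decal center maps to the middle of the texture:
print(decal_uv((0.0, 0.0, 0.0), (0.0, 0.0, 0.0),
               (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 2.0))  # (0.5, 0.5)
```

The clipped terrain triangles, textured with these UVs, are the decal mesh.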

Here’s a description of the algorithm from someone on the Gamedev forums:

Wolfire’s blog has an in-depth article (with pictures!) using the same algorithm: Projected decals upon terrain.

I was just curious if there were already clippers in Panda, and if not, what would be the best way to extract an arbitrary set of triangles as described from the entire scene graph. I don’t particularly care if this all occurs in Python, either, because as stated in the quote, this clipping is only done initially.

I understand if there is no code to do this in Panda; as you said, normally you don’t actually care much about the view->clip->rasterization phases of 3D, as it’s done by the driver/hardware. If not, what would be the best way to extract scene graph triangles while taking advantage of Panda’s culling code?

Well, sounds like you’ll have your work cut out for you here. So to speak. :slight_smile:

Walking through the scene graph and extracting triangles is not too difficult; the manual shows an example of doing this using the GeomVertexReader. If you want to optimize, you can use the BoundingVolume comparison tests to reject nodes that don’t intersect at all. You’ll need to deal with coordinate system transforms by composing TransformStates as you go (or by using node.getNetTransform() or the relative node1.getTransform(node2) methods). You’ll also need to call BoundingVolume.xform(ts.getMat()) to convert a BoundingVolume into the appropriate coordinate space before comparing volumes.
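For illustration, the bounding-volume rejection boils down to something like this (plain Python, not the actual BoundingVolume classes): a node whose bounding sphere doesn’t touch the decal’s box can be skipped, along with everything below it.

```python
# Sketch of the bounding-volume rejection test in plain Python. In Panda
# you'd use the real BoundingVolume comparison methods; this just shows
# the geometry behind a sphere-vs-box rejection.

def sphere_intersects_box(center, radius, box_min, box_max):
    """True if the sphere touches the axis-aligned box."""
    # Squared distance from the sphere center to the closest point on the box.
    dist_sq = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = min(max(c, lo), hi)  # clamp the center onto the box
        dist_sq += (c - nearest) ** 2
    return dist_sq <= radius * radius

# A sphere well outside the decal box is rejected:
print(sphere_intersects_box((10, 0, 0), 1.0, (-1, -1, -1), (1, 1, 1)))  # False
```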


Or, to cut a model in two, you could just duplicate it, and apply a clip plane to the one and a flipped clip plane to the other.

That’s a simple solution that would work for simple cuts. If you want to do things like filling the gap with geometry, you’d need either a geometry shader or to alter the Geom data yourself.

Clip planes are generally more useful than altering the geometry yourself, when you are constantly changing the clipping area.

I’ve heard of ScissorEffect too - not sure how to use it or what it does, though; read the docs on NodePath.setScissor for more info.

That’s not a bad idea: using a ClipPlaneAttrib still continues to let the graphics driver do the clipping, saving you from having to do it on the CPU. Not sure whether this is supported by the auto-shader yet, though.

ScissorEffect is less general; it scissors rendering to a rectangular region onscreen. It’s primarily useful for gui systems.


Yes, ClipPlaneAttrib has been fully supported by the ShaderGenerator since Panda3D version 1.6.2.

Good idea with the ClipPlaneAttrib, as basically all I need to do is just clip to a unit cube. Documentation is sparse on this, but from poking in the shader code, it looks like it’s doing the clipping per-fragment.

Though I admit, it seems like overkill for the effect I want – my decals won’t be moving around, after all. So I don’t really care how inefficient my clip phase will be, as I plan on doing it just once. It can happen on the CPU side.

So, sounds like I will have to walk the scene graph myself (using BV checks like David mentioned to keep it sane) and pull the vertices out using the GeomVertexReader?

I think doing it the other way would be overkill - making a clip plane is just as easy as creating a Plane object (from a reference point and a normal vector), attaching it to a PlaneNode, reparenting that into the scene, and then calling NodePath.setClipPlane(planenodepath) on the node that should be clipped.
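The math behind a clip plane is tiny, by the way (plain Python here just to show it; in Panda this is what the Plane/PlaneNode machinery does for you): a plane from a reference point and a normal keeps everything on the side the normal points to.

```python
# Signed distance from a point to a plane given by a reference point and
# a normal. Positive = on the kept side, negative = clipped away.

def signed_distance(point, ref, normal):
    return sum((p - r) * n for p, r, n in zip(point, ref, normal))

# Plane through the origin facing +Z: points below it would be clipped.
print(signed_distance((0, 0, 2), (0, 0, 0), (0, 0, 1)))   # 2 (kept)
print(signed_distance((0, 0, -3), (0, 0, 0), (0, 0, 1)))  # -3 (clipped)
```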

The other way would be difficult, especially for complex meshes. Basically you’d need to loop through the primitives, ignore a primitive if all of its vertices are out of range, and edge-slide the out-of-range vertices towards the clipping border if only some of the primitive’s vertices are out of range.
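For the record, that edge-slide is essentially one step of Sutherland-Hodgman polygon clipping. A minimal pure-Python sketch of clipping a convex polygon against one plane:

```python
# One Sutherland-Hodgman step: walk the polygon's edges, keep in-range
# vertices, and slide any out-of-range vertex along its edge to the
# clipping border. Repeat per clip plane for a full clip volume.

def clip_polygon(vertices, ref, normal):
    """Clip a convex polygon (list of 3-tuples) against the plane through
    `ref` with `normal`; keeps the side the normal points to."""
    def dist(p):
        return sum((a - b) * n for a, b, n in zip(p, ref, normal))

    out = []
    for i, cur in enumerate(vertices):
        prev = vertices[i - 1]  # wraps around to the last vertex for i == 0
        d_cur, d_prev = dist(cur), dist(prev)
        if d_cur >= 0:
            if d_prev < 0:  # edge enters the kept side: emit the crossing point
                t = d_prev / (d_prev - d_cur)
                out.append(tuple(a + t * (b - a) for a, b in zip(prev, cur)))
            out.append(cur)
        elif d_prev >= 0:   # edge leaves the kept side: emit the crossing point
            t = d_prev / (d_prev - d_cur)
            out.append(tuple(a + t * (b - a) for a, b in zip(prev, cur)))
    return out

# Triangle straddling the x = 0 plane, keeping x >= 0:
tri = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(clip_polygon(tri, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
# [(0.0, 0.5, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```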

Decals! … ion-video/

prequel: … art-three/

actually it has some amazing stuff:

The other way would be difficult, especially for complex meshes. Basically you'd need to loop through the primitives, ignore a primitive if all of its vertices are out of range, and edge-slide the out-of-range vertices towards the clipping border if only some of the primitive's vertices are out of range.

Not if most of the decals were done at cook time or level-load time, i.e. offline.

Many games use decals. I think it’s very important for Panda3D to actually get this system in. I am planning to look at decals for my 2aw game levels and ship damage. I am only unsure how much of them I can make real-time.

I was also thinking of not cutting poly edges off but just hiding them with shader and alpha-clip parameters. Don’t “edge-slide the out-of-range vertices towards the clipping border”; just hide it in the shader. Then the only problem remaining is packing the decals efficiently into a single Geom and hiding old ones. I am thinking of this for ship damage, where weapons hit and stuff.

I agree that this is probably going to be slow in Python, so such a system would need to be converted into pure C++ code if you wanted to do it per-frame.

I do however think that Python would be fast enough, as long as you kept the poly count reasonable and cull out as much as possible. Also, in my case, I don’t need to clip-and-project to every polygon out there – Just what’s on my terrain. And I’m not doing this per-frame, and against static geometry. That’s enough optimizations that I think it’s doable.

My goal is to have decals for both terrain detail and, if things are fast enough, for explosions, tank treads and other realtime effects. You could just as easily apply this to models for things like Treeform’s idea of using them for weapon damage. However, that might require a C++-speed clipper. It’s kind of early to know. :slight_smile:

I wanted to share my attempt at implementing this kinda thing. I was thinking about setting up a new thread (because time’s passed), but then I thought it would be best to keep it in one place.

Here are my results:

And here’s the code and the art needed to run it:

The relevant part is the file (how convenient, isn’t it?), which contains the decal class. To run it, use, which sets up the scene etc. etc.

I took the “clipping planes” route. For now, I only use 4. No depth clipping at the moment; when it was there, I forgot about it and got confused over why there were holes in my decals. The code’s performance seems not too bad.

Anyhow, I was wondering about something else. Is it possible to avoid using texture projection here? I don’t know, make the texture coordinates by hand, so the texture can just be put there, or something like that? I don’t care about it being real time, if I need to bake it, so be it. I don’t think it’s possible (or worth the effort needed), but it’s worth a shot. I’m asking about this because texture projection doesn’t work well with the shader generator, killing the performance.

Of course, I know one can just dump the shader and everything should work well, but, as far as I understand, as soon as I change something in the material or lighting (?) I will need to get myself another generated shader. Is this correct?

So that’s that, I’m awaiting any feedback on the code.

I haven’t looked at your code yet, but there is a class in Panda that does a baked-in version of texture projection. It’s called ProjectionScreen, and the way it works is you parent the geometry you want to apply texture coordinates to under the ProjectionScreen node, specify the projector with ProjectionScreen.setProjector(), and call ProjectionScreen.recompute(). This actually modifies your vertices to apply the projected texture coordinates in place. Then you can remove the geometry from your ProjectionScreen and put it wherever you want.

It’s a bit of a clumsy interface, and I don’t know how appropriate it is to your purposes; but I thought it was worth a mention.


Sorry for the lag.

Thank you very very much Drwr. I missed that class when browsing the API reference (is it mentioned in the manual?), but it’s exactly what I was looking for. The materials work great and the performance is, at least, decent.

I’ll drop the ProjectionScreen-based version here as soon as I have a moment.

Ok, in case anyone’s interested, the ProjectionScreen version can be downloaded here:

Thanks again for pointing me at this class David.

After some more testing, it seems like the ProjectionScreen doesn’t make that much difference after all. It seems faster than the standard texture projection, but it still doesn’t run as fast as it should with shaders. The difference between running with shaders and without shaders is still a lot bigger than for “natively” textured objects, i.e. the ones that don’t use projection but were UV-mapped in Blender.

Are the texture coordinates, and thus shaders, still recalculated every frame for ProjectionScreen-based projection? And if so, why is that the case even after reparenting stuff away from the projection screen?

The ProjectionScreen modifies the vertices in-place, very similar to a model that has been loaded with “native” UV’s. (The only difference might be the question of whether the UV’s are interleaved with the existing vertex data or appear in a parallel array, which shouldn’t much affect performance either way.) Also, once you parent things away from the ProjectionScreen, it can no longer modify their vertices.

So, there shouldn’t be any observable performance difference between using the ProjectionScreen to modify vertices and using the vertices as they were loaded. So perhaps you’re seeing a performance difference for some other reason? Any possibility you’re still inadvertently modifying the render state every frame, for instance?


Thanks for the light speed reply David.

I got some, apparently unjustified, doubts about the ProjectionScreen approach and how it works.

Still, though, the scene and code I’m referring to is what I’ve linked here (the last link), and there’s not much going on in there aside from the projection itself and the texture/clipping-plane setup. So I have no idea where else the difference might be coming from.

I’m finding the performance of the ProjectionScreen-based version better than the non-ProjectionScreen version by only about 10 frames (both run at ~300 FPS for 25 decals with shaders, so it’s practically negligible; without shaders, it’s twice as fast). I don’t know, maybe PStats could tell more? I’ll fire it up, but I’m not sure what to look for…

Some news. I’ve commented almost everything out in the code and started uncommenting things until I get a slowdown. And I did. Here’s where:

colStage = TextureStage("col")
colStage.setMode(TextureStage.MModulate)
self.decal.setTexture(colStage, tex)

If I do it this way, I get ~450 frames for a scene composed of 1 base model and 25 decals (being copies of the base model with no clipping planes assigned ATM). If I add a normal map stage to it, it drops further, oscillating between 350 and 400 frames.

However, if I do this instead:

self.decal.setTexture(tex, 0)

I get ~750 frames for the same scene.

Adding normal map to the second variant hinders performance to ~400 frames again.

Can anyone help me on this? Is there something I missed from the manual about texturing?


After further experimenting on the setTexture(tex, 0) variant, I got more interesting results. As I said, the performance for 25 decals, one texture and no clip planes for this variant is 750 frames.

But after adding clip planes, it went down to 350 frames. Quite surprising considering what they’re supposed to do… I guess they cause shader recreation in every frame.

I’ve also experimented with Transparency, which I’ve initially disabled for testing. Obviously, it also causes loss in performance, but in a different way.

In all other cases (using new texture stage and/or using clip planes), the framerate drops to this 350-450 frames and stays there, no matter what. In case of transparency, if I enable it when using setTexture(tex, 0), the framerate drops only when all 25 geoms are visible (for instance, when looking down on the whole stack), but when I move the camera so the objects are out of the view frustum or just hidden by backface culling, the framerate goes back up to ~800 frames. With new texture stage and/or clip planes, it doesn’t.

This is all the information I’ve gathered so far. I’ll be very very grateful if anyone can help me on this. Thanks.

Wait, you’re talking about a performance drop from 700fps to 350fps?

Although this sounds like a 50% drop in performance, in fact it’s almost negligible, because at frame rates like this the smallest change in frame time makes a huge difference in the fps.

Instead of comparing fps, it’s usually more meaningful to talk about frame time, or 1/fps. In this case, the performance difference you’re talking about is 1/350 - 1/750 seconds, or about 1.5 ms. So something in the new setup requires an additional 1.5 ms to process. That could easily be the additional overhead of creating a ClipPlane object internally or something like that, and in a normal scene, the additional 1.5 ms would easily be lost in the noise.
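Spelled out in code, using the numbers from this thread:

```python
# Compare milliseconds per frame, not frames per second: the same fps gap
# means very different amounts of work depending on where on the curve it is.

def frame_time_ms(fps):
    return 1000.0 / fps

delta = frame_time_ms(350) - frame_time_ms(750)
print(round(delta, 2))  # 1.52 -> roughly 1.5 ms of extra work per frame
```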


Yeah, I know that in most cases differences like this mean nothing and I probably shouldn’t care about them. I might have overreacted about this – that happens to me sometimes… Luckily, not that often, heh. I just don’t want to leave any mines on my road unnoticed and might have acted paranoid about it. Maybe it’s just that I’m not used to working with such high frame rates? On my old system they were out of reach.

Anyway, I find it interesting that setting the texture one way is more demanding (even if it’s only that little more) than setting it another way, when in fact I still have only one texture in both cases. But I guess if there ever was anything wrong about this, it would’ve been noticed before.

I’ll just keep working with this. It seems to work fine.

Thanks for reassurance David.