projectTexture help

This is my code.

from pandac.PandaModules import *
import direct.directbase.DirectStart

# setup Lens NodePath
proj = render.attachNewNode(LensNode('proj'))
lens = PerspectiveLens()
proj.node().setLens(lens)
proj.node().showFrustum()
proj.reparentTo(render)
proj.setPos(0,0,4)
proj.setP(-30)

# texture to project
tex = loader.loadTexture('maps/envir-reeds.png')
ts = TextureStage('ts')
ts.setMode(TextureStage.MDecal)

# model to project to
env = loader.loadModel('environment')
env.reparentTo(render)
env.setTwoSided(True)
env.setZ(-4)

# project
env.projectTexture(ts, tex, proj)

run()

I highlighted what seems wrong to me


though I think I use the same code from the manual. What am I doing wrong? How can I prevent anything from being projected outside of the frustum, and behind it?

Projective texturing only computes the UV’s as if the texture were projected from your specified projector. The (0,0)-(1,1) UV’s are right in front of the projector, but every vertex has some UV value, including the vertices that aren’t directly in front of the projector–these vertices just have a UV value that is somewhere outside of (0,0)-(1,1). When you apply the texture to the surface, it is applied everywhere, not just in front of the projector; the UV’s just control which part of the texture you see.

One thing you can do is set the wrap mode of your texture so that it is invisible outside of the (0,0)-(1,1) range, like this:

tex.setWrapU(tex.WMBorderColor)
tex.setWrapV(tex.WMBorderColor)
tex.setBorderColor((1, 1, 1, 1))

See the Manual section on texture wrap mode for more information about this.

However, this won’t help for geometry that is behind the frustum. Because of the nature of projective texturing, it’s also got a range of (0,0)-(1,1) directly behind the projector, as well as in front of it. It’s as if the projector shines in both directions. Nothing much to do about this, other than make sure you only apply the texture to geometry that is in front of the frustum.

David

I’m afraid I don’t see any difference after applying those changes you mentioned.

That’s a problem. I thought it had something to do with the NearFar value of the lens.
Is there really no way? In the manual, one of the suggested uses for this was a flashlight effect. A flashlight is applied to your world, and it’s very likely that the world geometry in front of your player and behind your player is the same. What then?

What, no difference at all? What if you set WMRepeat instead in both directions? That should set the texture to repeat over the entire world. If you don’t see a difference even for WMRepeat, then you’re doing something wrong. If you do see a difference for WMRepeat, but you still don’t see it for WMBorderColor, then it must be that your graphics driver doesn’t support WMBorderColor. I didn’t realize there were any graphics drivers out there that still didn’t support WMBorderColor, but it’s certainly possible. In this case, the best thing you can do is make your image a bit smaller within the texture, and don’t paint it all the way out to the edge of the texture. Make sure there is a white border of several pixels all the way around the edge of your texture. Your graphics card will stretch this border around the entire world.

Yes, but in a first-person point of view, the camera will only be looking forward. If you want the camera to turn around and look backward for some reason, you could hide the flashlight spot at that point.

The bottom line is, you have to be clever. Setting the near/far plane will have no effect, because the camera’s not really rendering anything, it’s just being used to compute UV’s.

I suppose you could also write a shader that would apply the desired texture only to the part of the world that was forward of your camera. But if your graphics card doesn’t even support WMBorderColor, I can’t believe it will support shaders.

Well, and you could also play tricks with the stencil buffer, to clip out the texture to only the forward part of your camera. This would risk lowering your frame rate unless you did it very carefully.

David

Still no difference.

Right, but I’m not talking about first-person shooters only. Even then, you can think of numerous cases where it would still be noticeable: multiplayer mode, cutscenes, a death cam, etc.
This video is a good example: youtube.com/watch?v=xkSKbLgG … ure=relmfu

I don’t get why projectTexture doesn’t just apply a texture and then use setTexPos/setTexHpr to apply a UV transform according to the lens. I mean, I don’t know how it works internally, but I don’t see why it’s impossible for it to work this way, which would solve this issue.

Then something’s wrong with what you’re doing–you’re not applying the mode to the right texture, or something. The image you show is what WMClamp looks like, which is the default. WMRepeat looks very different.

I fully agree this would be a useful thing to be able to do. That has little to do with the current discussion, which is what the hardware can do. The fact that the lens projects in both directions has nothing to do with Panda or any design decisions in Panda–this is just the way projective texturing works. In any graphics engine, on any graphics card.

As I describe above, you can do some clever tricks to avoid it. One of the simplest clever tricks is to ensure that your camera is always facing the forward end of the projection. If your game design won’t let you do that, you have to do one of these other clever tricks instead, which might be greater or lesser amounts of work depending on your precise needs.

That’s sort of what it does do, but the transform is made in 3-D space, and the mathematical properties of the lens extend in both directions in 3-D space.

If you just want to slide the texture around in 2-D space, you can do this too, and that will avoid the problem. But in order for this to work, you will need to have special knowledge about how your 2-D texture space relates to the 3-D environment (and specifically how it relates to the position of your 3-D projector). If you have this knowledge, then great–you can use it, and compute the setTexPos() values appropriately. I would classify this as one of the possible “clever tricks” you can use to solve this problem. But it’s not a solution in general, because it doesn’t work unless you know this 3-D relationship.
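For instance, a minimal sketch of that trick under a made-up assumption: the ground is a flat plane whose existing UVs map world X/Y in the range 0 to 100 onto (0,0)-(1,1). The decal stage can then be slid around with a plain 2-D texture-coordinate offset (setTexOffset, the 2-D cousin of setTexPos) that follows the projector:

# Hypothetical setup: 'ground' is a flat plane whose UVs map world X/Y in
# [0, 100] onto [0, 1]; 'ts' is the decal TextureStage and 'proj' is the
# projector NodePath from the snippets above.
WORLD_SIZE = 100.0  # assumed extent of the ground plane in world units

def slide_decal():
    # Slide the decal texture so that its (0,0) corner follows the
    # projector's X/Y position over the ground plane.
    u = proj.getX() / WORLD_SIZE
    v = proj.getY() / WORLD_SIZE
    ground.setTexOffset(ts, -u, -v)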

David

Can you please show your own code so I’ll be sure?

But is the lens a Panda object, or is it at the OpenGL level? Do people really need to use such a lens for projected textures in the first place?

tips?

I’m not sure how they relate; how can you get the needed UV position given the position in 3-D space?
I think there’s more work than just setTexPos and setTexHpr/setTexScale; there’s also the distortion caused by the “lens”. How do you achieve that?
And if you can do all that, which basically does the same as projectTexture but without the issues of the traditional projectTexture, why not have a helper class for that?

Not complaining about how Panda does things, just curious.

I did:

tex.setWrapU(tex.WMRepeat)
tex.setWrapV(tex.WMRepeat)

At the level we’re talking about, a lens is a mathematical construct. It’s a 4x4 matrix that transforms 3-D points in space to the equivalent 2-D point on the film plane. Panda computes the matrix and gives it to OpenGL.

This is how projective texturing works. It just computes a 2-D UV coordinate from each 3-D point in space, and it uses the lens matrix to do this.
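For illustration, a small sketch of that math using Panda’s Lens API (the names here are just for the sketch): projecting a 3-D point expressed in the lens’s coordinate space yields a film coordinate in the range (-1,1) on each axis, which maps onto the (0,0)-(1,1) UV range:

from pandac.PandaModules import PerspectiveLens, Point2, Point3

lens = PerspectiveLens()
film = Point2()
# project() fills in the 2-D film coordinate for a 3-D point given in the
# lens's own coordinate space, returning True when the point lies inside
# the viewing frustum.
if lens.project(Point3(1, 20, 3), film):
    u = (film.getX() + 1.0) * 0.5
    v = (film.getY() + 1.0) * 0.5

# The same mapping is also available as the lens's 4x4 projection matrix.
mat = lens.getProjectionMat()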

I gave you a bunch of ideas. None of them are the right answer for all situations. I don’t know anything about your situation, so I can’t tell you which one to use. It’s up to you to figure out which one(s) are most appropriate for your application and your needs.

You can’t, unless you know something special about your geometry. For instance, if you know that your geometry is a flat plane that covers the range (100,100,100) to (1100,1100,100) with the UV range (0,0) to (1,1), then you can easily scale the XYZ values to UV’s. If your geometry is more complex, you might have to do something different. Perhaps you can pre-compute easy-to-scale UV’s and apply them in the modeling package.
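As a tiny sketch of that scaling, using exactly the plane described above:

def xyz_to_uv(x, y, z):
    # The plane spans (100, 100, 100) .. (1100, 1100, 100) and carries the
    # UV range (0, 0) .. (1, 1), so the mapping is a simple linear scale.
    u = (x - 100.0) / 1000.0
    v = (y - 100.0) / 1000.0
    return u, v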

Unless, of course, you apply the lens matrix to convert 3-D coordinates to 2-D coordinates. This works on every model. But now you’re doing projective texturing, and the matrix computes the same values on both sides of its origin.

David

I meant the complete code.
here is mine:

from pandac.PandaModules import *
import direct.directbase.DirectStart

# setup Lens NodePath
proj = render.attachNewNode(LensNode('proj'))
lens = PerspectiveLens()
lens.setNearFar(2,100)
proj.node().setLens(lens)
proj.node().showFrustum()
proj.reparentTo(render)
proj.setPos(0,0,4)
proj.setP(-30)

# texture to project
tex = loader.loadTexture('maps/envir-reeds.png')
tex.setWrapU(tex.WMRepeat)
tex.setWrapV(tex.WMRepeat)
tex.setBorderColor((1, 1, 1, 1)) 
ts = TextureStage('ts')
ts.setMode(TextureStage.MDecal)

# model to project to
env = loader.loadModel('environment')
env.reparentTo(render)
env.setTwoSided(True)
env.setZ(-4)

# project
env.projectTexture(ts, tex, proj)

run()

I’m just thinking, isn’t it possible to create an object similar to a lens node which will allow projecting a texture only from one “side”?

One of the workarounds you mentioned was to make sure the camera doesn’t render what’s behind the projector, which, as I said, won’t work for my case.
The other was using the stencil buffer, but I can’t find an example for this.
I can’t really use custom shaders.
And you can’t really get the UV position given the position in 3-D space, so I’m not sure I have too many options.
The only thing which could work is the stencil buffer.

Ah, I see. The WMRepeat flag wasn’t sticking in your case, because you are using the same texture that is also referenced in the model file you loaded subsequently (environment.egg). Unfortunately, this model file has an explicit setting for this texture to WMClamp, so when you load the egg file, it resets the setting on this texture. If you want to see WMRepeat in action, you can reverse the order of these operations like this:
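(Roughly, a sketch reusing the snippet above, with the model loaded before the texture’s wrap mode is set:)

from pandac.PandaModules import *
import direct.directbase.DirectStart

# setup Lens NodePath
proj = render.attachNewNode(LensNode('proj'))
lens = PerspectiveLens()
lens.setNearFar(2, 100)
proj.node().setLens(lens)
proj.node().showFrustum()
proj.reparentTo(render)
proj.setPos(0, 0, 4)
proj.setP(-30)

# model to project onto -- loaded FIRST, so the egg file's own WMClamp
# setting is applied before we override it below
env = loader.loadModel('environment')
env.reparentTo(render)
env.setTwoSided(True)
env.setZ(-4)

# texture to project -- its wrap mode is set AFTER the model load, so it sticks
tex = loader.loadTexture('maps/envir-reeds.png')
tex.setWrapU(tex.WMRepeat)
tex.setWrapV(tex.WMRepeat)
ts = TextureStage('ts')
ts.setMode(TextureStage.MDecal)

# project
env.projectTexture(ts, tex, proj)

run()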

Or, you can simply use a different texture. Of course, with the above code, you can use WMBorderColor successfully as well.

No, it’s not possible. Sorry.

The stencil buffer is not a slam-dunk either. It’s difficult to use, and it may substantially increase the fill requirements of your scene, leading to reduced render performance. But it’s worth a try.

Part of being a successful game programmer is being able to come up with clever solutions to technical problems such as these. Sometimes the final solution requires a compromise to your original game design; sometimes you can find a way to do exactly what you wanted. That’s the nature of game development. :slight_smile:

David

Here’s a clever trick that uses two clip planes, one facing forward and one facing back. The environment is instanced to the scene twice, and gets rendered once with the projected texture, and then again without it. This doubles the cost of rendering your environment, but if your environment is relatively simple it shouldn’t be a problem.

from pandac.PandaModules import *
import direct.directbase.DirectStart

# setup Lens NodePath
proj = render.attachNewNode(LensNode('proj'))
lens = PerspectiveLens()
lens.setNearFar(2,100)
proj.node().setLens(lens)
proj.node().showFrustum()
proj.reparentTo(render)
proj.setPos(0,0,4)
proj.setP(-30)

# A plane parallel with the projector's film plane.  Everything in
# front of the plane is in front of the projector.
clipFront = Plane(Vec3(0, 1, 0), Point3(0, 0, 0))
clipFrontNP = proj.attachNewNode(PlaneNode('clipFront', clipFront))

# Another plane facing the reverse direction.  Everything in front of
# this plane is behind the projector.
clipBack = Plane(Vec3(0, -1, 0), Point3(0, 0, 0))
clipBackNP = proj.attachNewNode(PlaneNode('clipBack', clipBack))

# root of environment model.
envRoot = render.attachNewNode('root')

# Stuff in front of the projector.  This receives the texture projection.
envFront = envRoot.attachNewNode('envFront')
envFront.setClipPlane(clipFrontNP)

# Stuff behind the projector.  No texture projection here.
envBack = envRoot.attachNewNode('envBack')
envBack.setClipPlane(clipBackNP)

# The environment is attached to both of those.
envCommon = envFront.attachNewNode('envCommon')
envCommon.instanceTo(envBack)

# model to project to
env = loader.loadModel('environment')
env.reparentTo(envCommon)
env.setTwoSided(True)
env.setZ(-4)

# texture to project
tex = loader.loadTexture('maps/envir-reeds.png')
tex.setWrapU(tex.WMBorderColor)
tex.setWrapV(tex.WMBorderColor)
tex.setBorderColor((1, 1, 1, 1))
ts = TextureStage('ts')
ts.setMode(TextureStage.MDecal)

# project
envFront.projectTexture(ts, tex, proj)

run()

Thanks for the workaround. I’m not sure if I can afford duplicating the whole scene that is affected by the projection, but I’ll remember this technique for future use.

I think I finally realised how the texture projector works internally. And I quickly realised it’s really problematic to use. It’s not simply that it will render on both sides; it will also “penetrate” objects, applying itself to the back of the object as well as to any objects behind that one.

I think I’m looking for another effect which gives similar results but uses a different technique.
youtube.com/watch?v=1vnRV-1S65I
Maybe not in this video, but in this one you can see that it doesn’t really render on the back of the object, or even behind the lens: youtube.com/watch?v=ZJ72ktGu … ure=relmfu
It’s hard to notice, I had to pause at some frames, but it looks like the texture isn’t applied outside of the wire box thing.
I can’t find anything about these “decals”. It could simply be a shader.

I got this kind of decal working with Panda in my editor, but I’m quite sure they won’t work for dynamic stuff. They’re meant to be static and add detail to the scene, not to power a flashlight or shadows or anything like that. At least that’s what I gather from my experience. I’m not sure whether Wolfire uses them that way, but I’m sure that if it were possible, it would definitely require resorting to C++.

blog.wolfire.com/2009/06/how-to-project-decals/

Here you have a demonstration of how it works. As you can see, it finds the stuff that’s completely or partially within the lens frustum, duplicates it, clips away or removes the vertices that are outside the frustum, projects the texture on the resulting mesh and sets a depth offset to avoid z-fighting.
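In Panda terms, the core of the idea looks roughly like this sketch (hypothetical, and leaving out the hardest step, clipping away the vertices that fall outside the frustum):

# 'env', 'ts', 'tex' and 'proj' as in the earlier snippets.
piece = env.copyTo(render)            # duplicate the geometry covered by the frustum
piece.projectTexture(ts, tex, proj)   # project the decal texture onto the copy only
piece.setTransparency(TransparencyAttrib.MAlpha)
piece.setDepthOffset(1)               # bias the copy's depth toward the camera to avoid z-fighting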

The bigger control over what the texture is projected onto comes from duplicating and manipulating the target meshes, which seems way too expensive to be done in real time, especially for complex scenes…

In a way, that’s similar to what David suggested to you.

Way more tedious than I expected. :frowning:

OK, this is another question. Is it okay to copy some faces like this from your terrain and assign a decal texture with transparency in your 3D modeller before exporting to egg? I haven’t modelled terrains much, so I don’t know if it’s the right thing to do: I think having a few decals like this on top of each other might be too fillrate-intensive.

Also, I was thinking of moving the decal faces a bit away from the terrain faces to avoid z-fighting. What does depth offset actually do?

I’m not sure if making terrain in a 3D editor is a good way to go. It seems like Panda’s terrain generators are better for performance, with LOD and stuff. But then, I’m not into terrain much, so I might be completely wrong or missing something.

In any case, you can obviously do this stuff manually if you model in a 3D editor, sure. Obviously, you have to remember that each decal made this way (manually or not, doesn’t matter) adds to the overall mesh count, so you will want to flatten those that have the same textures, materials and offset settings.

That all depends on your camera settings, I guess. The usual rules apply: if you add many decals with distant camera clipping planes, you will get z-fighting and other problems.

What you’re talking about is what I was initially doing with my decals, before I found out about depth offset, but that’s not what depth offset is about. It’s more dynamic and intelligent. Offsetting manually will fail once you move away from the surface, resulting in z-fighting caused by the distance between the polygons being too small for the depth buffer to resolve. Depth offset takes that into consideration and offsets the rendered polygons accordingly.
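A tiny sketch of the difference, with a hypothetical ‘decal’ NodePath:

# Manual offsetting: physically move the decal above the surface.  Works up
# close, but z-fighting returns once the camera is far enough away that the
# depth buffer can no longer resolve such a small distance.
decal.setZ(decal.getZ() + 0.01)

# Depth offset: leave the geometry where it is and instead bias its depth
# values toward the camera by one unit of depth-buffer precision.
decal.setDepthOffset(1)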

Obviously, as stated above, if you put many decals on top of each other (with many “depth offset layers”), the topmost decals may effectively be rendered on top of stuff they’re not supposed to be seen through.

Here’s the decal code from my editor (with all editor-specific code removed):
dl.dropbox.com/u/196274/decal.py

Also, please remember that this won’t work for dynamic things like flashlights. It will only work for adding detail to your scenes, and it does a really good job at that.