shaders and projectTexture

So after figuring everything out, projectTexture works great. But is it actually possible to access the texture coordinates generated by projectTexture within a Cg shader? TEXUNIT0 is the normal diffuse texture and TEXUNIT1 is the projected texture. TEXCOORD0 holds the original UVW coordinates, but TEXCOORD1 is apparently empty. Is there a way to get at this projected UVW (texture) coordinate?


NodePath.projectTexture() relies on the standard fixed-function pipeline to compute UVs in the graphics hardware. If you write a Cg shader, though, you are replacing the standard fixed-function pipeline with a pipeline of your own design. Therefore, if you want projected texture coordinates in a Cg shader, you will need to write your shader so that it computes the appropriate texture coordinates itself.

Another option is to use Panda’s ProjectionScreen class, which is similar to NodePath.projectTexture() except the texture coordinates are computed entirely on the CPU. In this case, the computed texture coordinates will be available to any Cg shader you write.


Is there any documentation on this? I’ve looked at the API reference and it’s rather limited in the amount of information. Perhaps an example somewhere?

I guess another idea is to get the model-view-projection matrix from the projector’s point of view. Is there a way to get this?

Documentation on which–the ProjectionScreen? There might be some examples here and there, but the fundamental usage is simple–just make it the parent node of whatever geometry you want to compute texture coordinates for, and then call ProjectionScreen.setProjector(myLensNodePath). You can also call ProjectionScreen.setTexcoordName("texcoord_name"). For instance:

myModel = loader.loadModel('myModel.egg')
myLensNode = LensNode('myLensNode')
myLensNode.setLens(PerspectiveLens())      # the projector needs a lens
myLensNodePath = render.attachNewNode(myLensNode)
ps = ProjectionScreen('ps')
psPath = render.attachNewNode(ps)
ps.setProjector(myLensNodePath)            # project from the lens node
ps.setTexcoordName('texcoord_name')
myModel.reparentTo(psPath)                 # coordinates are computed for children of ps

Certainly, lens.getProjectionMat() returns the projection matrix in the projector’s coordinate space. To convert it to any coordinate space you like, multiply it by targetNodePath.getTransform(lensNodePath).getMat().
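As a sanity check on that multiplication order: Panda stores matrices row-major and treats vertices as row vectors, so a relative transform followed by a projection composes left to right. Here is a minimal pure-Python sketch (plain nested lists standing in for Mat4, and an identity matrix standing in for the real projection matrix, just to show the ordering):

```python
def mat_mul(a, b):
    # 4x4 row-major matrix product: applied to row vectors as v * a * b
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def xform_point(v, m):
    # treat v as the row vector (x, y, z, 1) and multiply on the left: v * m
    row = [v[0], v[1], v[2], 1.0]
    return [sum(row[k] * m[k][j] for k in range(4)) for j in range(4)]

def translate(tx, ty, tz):
    # row-vector convention: the translation offset lives in the bottom row
    return [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [tx, ty, tz, 1]]

rel = translate(0, -5, 0)       # model space -> lens space (lens 5 units down +Y)
proj = translate(0, 0, 0)       # identity stand-in for the projection matrix
combined = mat_mul(rel, proj)   # relative transform first, projection second

# A point 6 units down +Y ends up 1 unit in front of the lens:
print(xform_point((0, 6, 0), combined))   # -> [0.0, 1.0, 0.0, 1.0]
```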


I tried the modelview projection matrix thing but I get an error.

          self.lightAttrib = LightAttrib.makeAllOff()
          self.spotlight = Spotlight( "spotlight" )
          self.lightLens = PerspectiveLens()
          self.snlp.attachNewNode( self.spotlight.upcastToLensNode() )

          self.projectionMatrix = self.lightLens.getProjectionMat
          self.lightViewMatrix = self.spotlight.getTransform(self.lightLens).getMat()
          self.viewProjMatrix = self.projectionMatrix * self.lightViewMatrix

It seems getTransform needs only 1 argument.

Known pipe types:
(3 aux display modules not yet loaded.)
DirectStart: Starting the game.
Warning: DirectNotify: category 'Interval' already exists
Traceback (most recent call last):
  File "", line 32, in ?
  File "", line 26, in __init__
    self.lightViewMatrix = self.spotlight.getTransform(self.lightLens).getMat()
TypeError: getTransform() takes exactly 1 argument (2 given)

getTransform() returns the transform applied to that particular node. You can’t get the transform with respect to another object this way. There is no way to directly compare two nodes, because they may be in different spaces; each node just has its own transform in the space it is in. The only argument it takes is the instance itself, so any arguments supplied will be too many.

There is a way to find out how an object is transformed with respect to another object, if that is what you were trying to do. You would have to call getNetTransform() on both, which gets the total transform applied from the root. You can then compare the two net transforms any way you want. But this won’t work on a lens object, since lenses don’t have a getNetTransform method. I am not even sure that transforms in the normal Panda sense are applied to a lens, but I am sure there would be a way to cheat it somehow to get a ‘transform’.

OK, we are coming back to the basic PandaNode vs. NodePath confusion here.

There is a method on PandaNode called getTransform(). It returns the local transform of that particular node, as russ describes. This method does not take any parameters other than self, so it cannot take another object as a parameter.

There is also a method on NodePath called getTransform(). When called with no parameters (other than self), it returns the local transform of the node, the same way that PandaNode.getTransform() does. However, it may also be called with a parameter, which should be another NodePath. In this form, it returns the relative transform between the two nodes described by the two NodePaths.

In fact, I would say that this ability of NodePath to compute the relative transform between any two nodes, making it easy to jump around between different coordinate systems, is one of the key features of Panda.
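To make the idea concrete, here is a pure-Python sketch (not Panda code) of what that relative transform means: compose one node's net-from-root transform with the inverse of the other's. Only simple translation matrices in Panda's row-vector convention are used, so this just illustrates the principle:

```python
def mat_mul(a, b):
    # 4x4 row-major matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def xform_point(v, m):
    # apply m to the row vector (x, y, z, 1)
    row = [v[0], v[1], v[2], 1.0]
    return [sum(row[k] * m[k][j] for k in range(4)) for j in range(4)]

def translate(tx, ty, tz):
    # row-vector convention: the offset lives in the bottom row
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

def translate_inverse(m):
    # inverse of a pure translation: negate the offsets
    return translate(-m[3][0], -m[3][1], -m[3][2])

net_a = translate(5, 0, 0)   # node A's net transform from the scene root
net_b = translate(2, 0, 0)   # node B's net transform from the scene root

# What nodePathA.getTransform(nodePathB) computes, conceptually:
rel = mat_mul(net_a, translate_inverse(net_b))

# A's origin expressed in B's coordinate space:
print(xform_point((0, 0, 0), rel))   # -> [3.0, 0.0, 0.0, 1.0]
```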

Note that Spotlight is a kind of PandaNode. Thus, you don’t want to call spotlight.getTransform(), since that’s the wrong getTransform() call. Instead, you want to call snlp.getTransform(), for instance. (Actually, snlp is the NodePath for the parent of spotlight, not spotlight itself–you didn’t save the NodePath for spotlight itself in the above code; this is the return value of snlp.attachNewNode(spotlight). But it probably doesn’t matter, since you didn’t put a local transform on spotlight anyway.)

Furthermore, a lens is neither a PandaNode nor a NodePath. You can’t get the transform relative to a lens (and that doesn’t make sense anyway, since the lens is attached to the spotlight). What you want is the transform relative to the object that you are applying texture coordinates to. And in fact, you actually want the reverse of that: the transform of the model, relative to the lens. Try this:

self.projectionMatrix = self.lightLens.getProjectionMat()
self.lightViewMatrix = self.model.getTransform(self.snlp).getMat()
self.viewProjMatrix = self.lightViewMatrix * self.projectionMatrix

This will compute in self.viewProjMatrix a 4x4 matrix that, when applied to the position of each vertex in your model, will yield the appropriate texture coordinate, as if the texture were projected from the LensNode.
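The last step a shader (or the fixed-function projector) performs with that matrix can be sketched in plain Python: transform the vertex, do the perspective divide, and remap from the [-1, 1] clip range to the [0, 1] texture range. The toy projection matrix below is an assumption (a 90-degree frustum that copies view-space depth into w, with the projector looking down +Z), not Panda's actual lens matrix, and the model-to-lens transform is taken as identity:

```python
def xform_point(v, m):
    # apply the 4x4 matrix m to the row vector (x, y, z, 1)
    row = [v[0], v[1], v[2], 1.0]
    return [sum(row[k] * m[k][j] for k in range(4)) for j in range(4)]

# Toy projection: x and y pass through, view-space depth z lands in w
proj = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 1],
        [0, 0, 0, 0]]

def projected_uv(p, view_proj):
    x, y, z, w = xform_point(p, view_proj)
    # perspective divide into [-1, 1], then remap to the [0, 1] texture range
    return (x / w * 0.5 + 0.5, y / w * 0.5 + 0.5)

# A vertex one unit right, one unit up, two units in front of the projector:
print(projected_uv((1, 1, 2), proj))   # -> (0.75, 0.75)
```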


So I tried both approaches with a camera, and things just aren’t going my way.

The model-view-projection approach - I guess this would be faster?
Now I’m trying to do this on a camera node instead of a light node, and it’s giving a graphics state guardian error. What’s the deal with that?

self.projectionMatrix = base.camLens.getProjectionMat()
self.lightViewMatrix = self.model.getTransform(base.cam).getMat()
self.viewProjMatrix = self.lightViewMatrix * self.projectionMatrix

It’s giving me this:

Assertion failed: lens != (Lens *)NULL at line 2850 of c:\temp\mkpr\panda3d-1.0.
Traceback (most recent call last):
  File "", line 91, in ?
  File "C:\Panda3D-1.0.5\direct\src\showbase\", line 1603, in run
  File "C:\Panda3D-1.0.5\direct\src\task\", line 781, in run
  File "C:\Panda3D-1.0.5\direct\src\task\", line 728, in step
  File "C:\Panda3D-1.0.5\direct\src\task\", line 671, in __stepThroughLis
    ret = self.__executeTask(task)
  File "C:\Panda3D-1.0.5\direct\src\task\", line 602, in __executeTask
    ret = task(task)
  File "C:\Panda3D-1.0.5\direct\src\showbase\", line 1170, in igLoop
  File "GraphicsEngine", line 620, in renderFrame
AssertionError: lens != (Lens *)NULL at line 2850 of c:\temp\mkpr\panda3d-1.0.5\

What’s the deal with that?
So, as for the ProjectionScreen method of doing this: I’m still having a bit of trouble. How do I actually get the calculated UVs to the shader? Here’s the code:

self.lightLens = PerspectiveLens()
self.lightLensNode = LensNode('myLensNode')
self.lightLensNodePath = render.attachNewNode(self.lightLensNode)
self.ps = ProjectionScreen('ps')

self.psPath = render.attachNewNode(self.ps)

self.snlp.attachNewNode( self.spotlight.upcastToLensNode() )

self.lightAttrib = self.lightAttrib.addLight(self.spotlight)
render.node().setAttrib( self.lightAttrib )

self.shader = CgShader("genShader", "", "")

self.shader.addParam('lightPos', 'lightPos', CgShader.P3F, CgShader.BFRAME, 1)
self.shader.setParam('lightPos', Vec3(self.snlp.getPos()))
self.shader.addParam('modelViewProj', 'modelViewProj', CgShader.PMATRIX, CgShader.BFRAME, 1)
self.shader.setParam('modelViewProj', CgShader.MTXMODELVIEWPROJECTION, CgShader.TRFIDENTITY)


In the Cg shader I’m trying to access TEXCOORD0 and TEXCOORD1. TEXCOORD0 holds the imported UV coordinates, as expected. However, TEXCOORD1 seems to be empty. Am I doing something wrong?

This assertion failure means that you are trying to render a scene using a camera that does not have a lens assigned. Initially, a camera has no lens. You must assign one via LensNode.setLens(), for instance:

base.camNode.setLens(myLens)
You have to assign a texture to your new texture stage in order for the texture coordinates to be sent down the pipeline. Something like this:

ts = TextureStage('ts')
ts.setTexcoordName('texcoord_name')   # the name given to ProjectionScreen.setTexcoordName()
tex = loader.loadTexture('dummyTexture.jpg')
self.model.setTexture(ts, tex)