Accessing a 3-D Texture


I’m trying to access a 3-D texture from a fragment program. My 3-D texture is of size 2x2x2, so I have 2 slices: the first one is entirely blue, and the second one is entirely green. When I write something like this in my fragment program:

return tex3D(k_myTexture, float3(0, 0, 0));

Instead of seeing my model completely colored in blue, I see it colored in an “intermediate” blue. Aren’t texture coordinates (0, 0, 0) supposed to access my first, blue slice? Shouldn’t I see my model colored in blue?


Depends on your wrap mode and your filter mode. With any kind of filtering other than FTNearest, you will be sampling adjacent pixel colors, since that’s what filtering means. If your wrap mode isn’t WMClamp, the bottom slice is adjacent to the top slice in both directions.
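To illustrate the blending described here, below is a small pure-Python sketch (not Panda3D code; the function name is made up for the example) of linear filtering with repeat wrapping along one 2-texel axis. Texel centers sit at normalized coordinates 0.25 and 0.75, so sampling at 0.0 falls exactly halfway between the first texel and the wrapped-around last texel:

```python
# Why sampling a 2-texel axis at coordinate 0.0 with linear filtering
# and repeat wrapping gives a 50/50 blend of the two texels.

def sample_1d_linear_repeat(texels, coord):
    """Linearly filter a 1-D row of RGB texels with repeat wrapping."""
    n = len(texels)
    x = coord * n - 0.5            # position in texel space
    i0 = int(x // 1)               # lower texel index (may be -1)
    frac = x - i0                  # blend factor between i0 and i0 + 1
    a = texels[i0 % n]             # repeat wrapping
    b = texels[(i0 + 1) % n]
    return tuple(a[c] * (1 - frac) + b[c] * frac for c in range(3))

blue  = (0.0, 0.0, 1.0)   # slice 0
green = (0.0, 1.0, 0.0)   # slice 1

# Sampling at 0.0: halfway between wrapped slice 1 and slice 0.
print(sample_1d_linear_repeat([blue, green], 0.0))   # (0.0, 0.5, 0.5)
# Sampling at a texel center gives the pure color.
print(sample_1d_linear_repeat([blue, green], 0.25))  # (0.0, 0.0, 1.0)
```

With clamp wrapping instead, coordinate 0.0 stays inside the first texel, which is why switching the wrap mode fixes the “intermediate” color.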


Thanks David!

I fixed it by setting the texture’s wrap mode to “clamp”.

I was wondering what’s the correct way to update a 3-D texture. What I have so far is the following:

def load3dTexture(self, textureName):
    self.texture = loader.load3DTexture(textureName)

def updateTexture(self, pixel, color):
    if self.texture:
        u = pixel[0]
        v = pixel[1]
        w = pixel[2]

        # Read slice w of the texture back into a PNMImage.
        image = PNMImage()
        self.texture.store(image, w, 0)

        # Change the one texel, then write the slice back.
        image.setXel(u, v, color[0], color[1], color[2])
        self.texture.load(image, w, 0)

The “updateTexture” method receives the position of the pixel within the 3D texture that needs to be updated, and the new color that the pixel in question should hold.

What happens is that when I update, say, pixel (0, 0, 0), it gets updated correctly but the rest of the pixels become black. When I then update another pixel that is not (0, 0, 0), pixel (0, 0, 0) returns to its previous color…

Is this the correct way to update a 3-D texture?

I’m passing the 3-D texture to a fragment shader, so I also was wondering if each time I update the 3D texture it is necessary to call model.setShaderInput() again.


texture.load() is designed to update all of the layers at the same time. So you should iterate through all of your layers and call load() on each one.

If you only want to change a few pixels, you can also use the lower-level texture.modifyRamImage() call, which requires you to consider the precise details of how the texture is laid out in memory, but does allow you to change one pixel at a time without reloading the whole texture with each operation. It’s only an optimization, though; I wouldn’t go through the trouble if calling load() is fast enough for your needs.

You don’t need to call model.setShaderInput() again–it’s still the same texture object.


Thanks David!

I’m still having trouble with this 3-D texture thing. I’m making a test in which each time I press the “u” key, my method updateTexture() gets called. The method definition looks like this:

def updateTexture(self):
    for w in range(self.textureDepth):
        print "w: " + str(w)

        # Read slice w back into a PNMImage and print every texel.
        image = PNMImage()
        self.texture.store(image, w, 0)
        for u in range(self.textureWidth):
            for v in range(self.textureHeight):
                print image.getXelA(u, v)

For now I’m just printing the color of each one of the pixels of each one of the texture’s slices, so each time I press “u”, the output shows me something like this:

As a side note, I’m also confused with the alpha value that gets printed (zero). When I create the 3-D texture I add to it an alpha channel and set the alpha to 1. I’m also saving the texture to disk, so I can clearly see that I’m setting the values correctly. Then, why is it that 0 gets printed instead of 1? Can it be related to the fact that when I create my 3-D texture I get the following warning?

Going back to the main point, I get the same correct text in the output as long as my camera does not see a model to which I’ve applied a shader that uses my 3-D texture. As soon as I point my camera at that model, so that it is seen and the vertex and fragment shaders get executed, the output begins to show me this each time I press the “u” key:

The information in the texture’s second slice seems to get lost. Why does this occur? Why does it start to happen from the moment my camera first sees my model?

Does this happen because of the explanation you gave me in your previous post?

If that’s the case, what should my updateTexture() method look like, so that the information in the texture’s slices doesn’t get lost?

Thank you very much, and sorry for the rather long post.


Try setting:
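(The snippet that belonged here did not survive, but from the explanation that follows it evidently told Panda to keep the texture’s ram image. Assuming the standard Panda3D options, that is either the Config.prc variable:

```
keep-texture-ram 1
```

or the equivalent per-texture call, Texture.setKeepRamImage(True).)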


Every texture has both a “ram image”, which is directly accessible to Python, and a “graphics image”, which is the copy of the texture image stored in graphics memory, and is directly accessible to the graphics card for rendering.

When you load a texture from disk, it only has the “ram image” filled in. The first time you use that texture to render an object, Panda copies the “ram image” to the “graphics image”, so that it can be rendered. By default, Panda will then throw away the “ram image”, because you usually don’t need it any more.

But you are still using it, because you are calling, which queries the “ram image”. So you need to tell Panda not to throw it away.

I don’t know why you’re losing the alpha channel. It could be related to the png warning you’re seeing. I’ve seen that png warning myself on a few occasions; it’s coming from libpng, not from Panda per se, and it does seem to indicate a problem attempting to write the alpha channel. I don’t know precisely what causes it. You could try writing a tiff file or an rgb file instead; both of those formats also support alpha.


Once again thank you very much David!



That fixed the problem…


Regarding this point:

I will certainly want to make this optimization, since I’m developing a 3D painting demo, where I use a weapon that shoots paint bullets in order to paint geometry.

To give an example, a typical 3-D texture of mine will be 32x32x32, and each time a paint bullet collides with my geometry, I need to update the 3-D texture.

Is it too difficult to use “texture.modifyRamImage()”? My 3-D texture is saved as a .png, since I need the alpha channel.


32x32x32 is only 32K pixels, so you still might not need this optimization at all. You should try it and see first. (Note that my tagger game does something like this in real time, though it only modifies a small 2-D texture. But it does it using the slow and simple PNMImage interface, because that’s sufficiently fast for this purpose. Why make things more complicated than they need to be?)

But if you decide you do need it, it’s not technically hard; you just have to understand how the pixels are arranged in memory. The fact that you loaded it from a png doesn’t have anything to do with it. You have to examine the members of the Texture object to find out the number of components, the number of bytes per component, and so on, then compute the right byte offset within the ram image and modify the appropriate bytes. You can try this interactively to see what happens to your texture.
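As a sketch of that offset computation, here is plain Python with illustrative parameter names (nothing here is Panda3D API; it assumes a tightly packed image with pages stacked along z and rows along y, and the real row order and channel order should be verified against the Texture object, since Panda typically stores unsigned-byte ram images in BGRA order):

```python
# Hypothetical byte-offset computation for texel (u, v, w) in a tightly
# packed 3-D texture ram image. `components` is the number of channels
# (e.g. 4 for RGBA) and `component_width` is bytes per channel.

def texel_offset(u, v, w, width, height, components, component_width):
    # Count whole texels that precede (u, v, w): full pages, then full
    # rows within the page, then texels within the row.
    texels_before = (w * height + v) * width + u
    return texels_before * components * component_width

# A 32x32x32 RGBA texture with 1 byte per component:
print(texel_offset(u=1, v=2, w=3, width=32, height=32,
                   components=4, component_width=1))  # 12548
```

The bytes at that offset could then be overwritten in place through the writable buffer returned by modifyRamImage(), without round-tripping a whole PNMImage per update.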


Great David!

Thank you very much for your answer. I’ll keep experimenting to see if I ever need the optimization…