Integer textures?

Do we have a means of loading a texture (i.e. via “loader.loadTexture”) such that its components are given as integer values between 0 and 255 instead of float values between 0.0 and 1.0?

Furthermore, is there anything especial that I need to do when saving out my image in order for it to keep to integral values?

To explain, I want to use an image to encode certain id-values, used both in logic and in shader-rendering. If I’m not much mistaken, the shader-side should be handled by specifying that my texture input be a “usampler2D”, and have applicable calls to “texture2D” then fill out a “uvec4” variable. However, I’m not clear on how to handle this on the Panda side, or indeed, when generating my image in the first place.
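To illustrate what I mean by id-encoding, here's a plain-Python sketch of one possible packing scheme (purely hypothetical and illustrative; nothing Panda-specific, and not necessarily the scheme that I'll end up using):

```python
# Sketch of one way to pack an integer ID into four 8-bit RGBA channels
# (a hypothetical scheme, just for illustration -- not a Panda3D API).

def pack_id(id_value):
    """Split a 32-bit ID into four 8-bit channel values (r, g, b, a)."""
    return (id_value & 0xFF,
            (id_value >> 8) & 0xFF,
            (id_value >> 16) & 0xFF,
            (id_value >> 24) & 0xFF)

def unpack_id(r, g, b, a):
    """Recombine the four channel bytes into the original ID."""
    return r | (g << 8) | (b << 16) | (a << 24)

print(unpack_id(*pack_id(123456789)))  # 123456789 -- round-trips the ID
```

The point being that each channel holds a raw byte of the ID, which is why I want the texture sampled as integers rather than as normalized floats.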

I have found mention of the “LoaderOptions” class that may specify a “TF_integer” option. However, thus far my attempts to use this have seemingly failed, both in attempting to render the ID-values in a shader and in attempting to “peek” at them via a TexturePeeker. (Unless, of course, it’s my input texture that’s the problem…)

This is how I’m using “LoaderOptions” at the moment:

bubbleTexOptions = LoaderOptions(flags = LoaderOptions.TF_integer)
self.bubbleIDTexture = loader.loadTexture(<image path>, loaderOptions = bubbleTexOptions)

Look for the texture formats that end in “i”, such as Texture.F_rgba8i and Texture.F_r32i. Whether it’s signed or not is controlled by the component type.

Populating the data happens the same way as a regular normalized texture; you just need to change the format to change how it’s interpreted.
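To make that concrete with plain Python (no Panda3D involved): the stored byte is the same either way; only the interpretation differs.

```python
# The same stored byte can be read two ways, depending on the texture format:
# a normalized format divides by the channel maximum, an integer format does not.
stored_byte = 200

normalized_view = stored_byte / 255.0  # what a float sampler would see
integer_view = stored_byte             # what an integer sampler would see

print(normalized_view)  # ~0.7843
print(integer_view)     # 200
```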

Hmm… That makes sense, but I’m not seeing how to combine it with the loading of the image.

I’ve tried creating an uninitialised texture, calling “setup2dTexture” (passing in a component-type of “T_unsigned_int” and a format of “F_rgba8i”), and then calling “read”. This, alas, doesn’t seem to produce any difference in the output of my texture-peekings.

That is to say, this is what I’ve tried:

self.bubbleIDTexture = Texture()
self.bubbleIDTexture.setup2dTexture(2048, 2048, Texture.T_unsigned_int, Texture.F_rgba8i)
self.bubbleIDTexture.read(<image file-path>)

But TexturePeeker’s “fetchPixel” method still returns floating-point values.

I also looked at the “LoaderOptions” class, but I don’t see a way to specify anything there other than the “TF_integer” option that I mentioned earlier.

Could you elaborate on how one goes about loading an image as an integer texture, please?

Hmm… I may have made some progress, at least:

I’ve loaded the image as normal, then called “setFormat” to convert it to “F_r32i”. I then generate a texture-peeker for the image, and attempt to “peek” at its pixels, working with a small test-image.

This works in that the image is loaded, and there seem to be no complaints when I attempt to set its format. When I initially set the format to “F_rgba8i”, TexturePeeker rejected the image, indicating that in that case at least something had been done.

However, TexturePeeker continues to return floating-point data on “peeking”. :/

Setting the Texture’s component type as well seems to have no effect on this.

(I’ve also tried loading the image as usual, fetching its RAM-image, setting up a new texture with the desired component-type and format, then setting the new image’s RAM-image from the old one. However, doing this crashed the program.)

Alas, my searches of both the samples and the source-code have turned up little inspiration. (The above-mentioned RAM-image method was prompted by such searching, at least.)

Does anyone have any insight on this?

Indeed, looking at the source code for TexturePeeker, it seems that it returns floating-point values for the “r32i” format. :/

So, it looks like I’ll want to find another way to examine the pixels in my id-image…

[edit 2]
Okay, it looks like there was a simpler approach: PNMImage.

Loading the image-file into a PNMImage seems to allow me to read my pixel-values as integers, as expected!

It does mean storing two copies of the image–one for PNMImage and one as a texture for the associated shader–but that should be fine, I imagine.

Okay, my apologies for the multi-post, but I’ve hit another wall, I fear. :/

I have the Python-side working, I do believe: by using a PNMImage I’m able to load my integer-texture and sample it at will, and doing so retrieves integer values.

The problem, then, lies on the shader-side:

I’m loading the integer-image a second time, this time as a Texture, in order to pass it to the shader. On the shader-side I then declare it as an “isampler2D”, and retrieve a texel from it via the GLSL “texture” method.

This… sort of works. I get values… but they seem to be uniformly, incredibly huge–somewhere over 500000000–and all of a similar scale.

(I’m inferring this from the fact that, if I divide by a huge value, I get something that I can test-render–but these values never dip down to zero. Subtracting the number given above still results in something that I can render, just with slightly higher contrast.)
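I don’t know exactly what’s happening on the driver side, but for comparison: if normalized float texel data were being reinterpreted bit-for-bit as integers, numbers of roughly this magnitude are exactly what would appear. A plain-Python illustration of that bit-reinterpretation (not Panda-specific, and only a guess at the cause):

```python
import struct

# Reinterpret the bit pattern of a 32-bit float as an unsigned integer.
def float_bits_as_uint(f):
    return struct.unpack('<I', struct.pack('<f', f))[0]

print(float_bits_as_uint(0.5))  # 1056964608
print(float_bits_as_uint(1.0))  # 1065353216
```

Note that both results are well over 500000000, and of a similar scale to each other–much like the values that I’m seeing.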

I’m really not sure of where I’m going wrong here. :/

I’ve tried a variety of methods for loading the image: a simple “loader.loadTexture”; “setup2dTexture” with Texture.T_int and Texture.F_rgb8i, followed by a call to “read”; setting the component-type and format after a normal load; and so on. I’ve also tried a few formats other than “rgb8i”, and at least one other component-type (“T_unsigned_int”).

Does anyone know where I’m going wrong here?

If I remember correctly, loading the texture will read the data from the file and set the texture format accordingly–so if your source is not an actual integer-component texture, the format will be reset.
You could try loading the texture from the in-RAM PNMImage that you created, just to be sure that the texture format is set up properly.

Also, I read somewhere that when using integer textures, the filtering must be set to nearest–never linear or mipmapped–otherwise it may not work.

Hmm… Since PNMImage is producing the expected values when the “getVal” methods are called, I’m inclined to guess that my image is correct. However, I may be wrong–is there a way to check?

Okay, I just tried that–doing the following specifically:

self.bubbleIDTexture = Texture()

# This line, the loading of the PNMImage,
# is unchanged from my base code, I believe
self.bubbleIDImage = PNMImage("Moons/StandardMoons/BubbleMoon/bubbleIDs.png")

self.bubbleIDTexture.load(self.bubbleIDImage)


Doing so produced no apparent change in the shader’s behaviour.

Would that be the mip-map filtering? If so, I’ve just tried adding that to a call to “loadTexture”, to no avail–but then, it’s possible that some other element is missing (in particular, I don’t see a way to set the component-type or format when using “loadTexture”).

I mean setting the filtering configuration, using something like:

texture.setMinfilter(SamplerState.FT_nearest)
texture.setMagfilter(SamplerState.FT_nearest)

Are you using an option in Texture.load() to specify that?

It aroused my curiosity, so I performed some tests. Here is a sample that creates a valid integer PNMImage and loads it into a texture, which is then used to fill a card:

Note that PNMImage does not support channels bigger than 16 bits; setting a higher value clamps it to 65535. Also, when using an integer texture, TexturePeeker raises an exception saying that the texture format is not supported.

from panda3d.core import Texture, SamplerState, CardMaker, Shader, load_prc_file_data, PNMImage
from direct.showbase.ShowBase import ShowBase

load_prc_file_data("", "textures-power-2 none")
load_prc_file_data("", "win-size 512 512")

def shader():
    return Shader.make(Shader.SL_GLSL,
        vertex="""
#version 450

uniform mat4 p3d_ProjectionMatrix;
uniform mat4 p3d_ModelViewMatrix;

in vec4 p3d_Vertex;
in vec4 p3d_MultiTexCoord0;

out vec4 texcoord;

void main() {
    gl_Position = p3d_ProjectionMatrix * (p3d_ModelViewMatrix * p3d_Vertex);
    texcoord = p3d_MultiTexCoord0;
}
""",
        fragment="""
#version 450

uniform usampler2D data;
in vec4 texcoord;

out vec4 frag_color;

void main() {
    uvec4 tex0 = texture(data, texcoord.xy);
    frag_color = vec4(float(tex0.r) / 255.0, 0, 0, 1);
}
""")

base = ShowBase()

cm = CardMaker('card')
cm.set_frame(-1, 1, -1, 1)
card = render.attachNewNode(cm.generate())
card.setPos(0, 10, 0)
card.set_shader(shader())

# Create a single-channel, 16-bit PNMImage and store an integer value in it.
image = PNMImage(1, 1, 1, (2**16)-1)
image.setXelVal(0, 0, 128)
print(image.getRedVal(0, 0))

# Load the PNMImage into the texture, then set the integer format
# and nearest filtering *after* the load.
data_texture = Texture()
data_texture.load(image)
data_texture.setFormat(Texture.F_r16i)
data_texture.setMinfilter(SamplerState.FT_nearest)
data_texture.setMagfilter(SamplerState.FT_nearest)
print(data_texture)

card.set_shader_input("data", data_texture)

base.run()

And the output is:

image: 1 by 1 pixels, 1 channels, 65535 maxval.
  2-d, 1 x 1 pixels, each 1 shorts, r16i
  sampler wrap(u=repeat, v=repeat, w=repeat, border=0 0 0 1) filter(min=nearest, mag=nearest, aniso=0) lod(min=-1000, max=1000, bias=0)  2 bytes in ram, compression off
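As an aside, regarding the 16-bit limit mentioned above: if IDs wider than 16 bits were ever needed, one could split them across two 16-bit channels and recombine them on the other side. A plain-Python sketch of that idea (a hypothetical scheme, not a Panda3D API):

```python
# Split a wide ID across two 16-bit channel values, since a PNMImage
# channel is clamped to 65535 (a workaround sketch, purely illustrative).
def split_id(id_value):
    return (id_value >> 16) & 0xFFFF, id_value & 0xFFFF

def join_id(high, low):
    return (high << 16) | low

print(join_id(*split_id(70000)))  # 70000 -- beyond a single 16-bit channel
```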


Ah, that did it!

It looks like the key was–as you did above–setting the format after loading the image; previously I was setting it beforehand, I suppose on the principle that it made sense to specify the desired format before attempting to load.

Thank you so much–all seems to be working now! :smiley:

(Before I saw your post, I was thinking to try constructing the texture pixel-by-pixel from the PNM image. ^^; )

This isn’t important now, since the problem has been found, but since you asked:

I didn’t see such an option in “load”, as I recall, but I was using those values in “loader.loadTexture”.

[edit] Correction, sorry: I was mistaken about those values; I was setting “minfilter = SamplerState.FTLinearMipmapNearest” in my call to “loadTexture”, and wasn’t using “Texture.FT_nearest” at all.

Good to know it works :slight_smile:

To replace the TexturePeeker, if you actually need to retrieve some pixels from the Texture, you can use get_ram_image() and feed it into a numpy array using the right width, height, and component size. It’s a bit convoluted, but it’s fast and it works. (I can provide an example if needed.)
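For what it’s worth, the decoding step can also be sketched without numpy, using the stdlib struct module. The layout assumed here–a single channel of tightly-packed, little-endian 16-bit components, with no row padding–is an assumption to verify against your actual texture’s format:

```python
import struct

# Sketch: decode tightly-packed unsigned 16-bit pixel data, roughly what
# Texture.get_ram_image() hands back for a single-channel 16-bit texture.
# (Layout assumptions: one channel, little-endian, no row padding.)
width, height = 2, 2
raw = bytes([128, 0,    7, 0,
              42, 0,  255, 255])  # four uint16 texels, stand-in data

pixels = struct.unpack('<%dH' % (width * height), raw)
print(list(pixels))  # [128, 7, 42, 65535]
```

(With real data you would also need to account for Panda’s bottom-up row order when mapping indices back to x/y coordinates.)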

Thanks! :slight_smile:

For now the PNMImage seems to be working for the purposes of retrieving pixels; I have yet to stress-test it, however. First, getting the basics of what I intend working, then I can worry about any performance issues that might crop up! ^^;

(But I have noted that numpy method as a possibility, should it be called for.)