Is it possible to tell Panda3D to pass a texture to the shader using a signed texture type (i.e. specifically the GL 3.x “SNORM” types such as RGBA8_SNORM)?
For context, I’m implementing blended normal maps in my own shader and it seems really silly/wasteful to have to do “(normal * 2) - 1” multiple times for each pixel instead of just telling the hardware/driver to interpret the passed data as signed values to begin with…
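To make the two conventions concrete, here's a plain-Python sketch of the round trip for one normal component (helper names are illustrative, not any real API): an 8-bit unorm texel must be remapped with `* 2 - 1` after sampling, while an 8-bit snorm texel decodes straight to [-1, 1].

```python
# Sketch of 8-bit UNORM vs SNORM storage for a normal component in [-1, 1].
# Helper names are illustrative only.

def unorm8_store(n):
    # Remap [-1, 1] -> [0, 1], then quantize to an unsigned byte.
    return round((n + 1.0) / 2.0 * 255)

def unorm8_sample(b):
    # The shader sees b / 255 in [0, 1] and must apply "* 2 - 1" itself.
    return (b / 255.0) * 2.0 - 1.0

def snorm8_store(n):
    # Quantize [-1, 1] directly to a signed byte.
    return round(n * 127)

def snorm8_sample(sb):
    # SNORM decode per the GL spec: max(sb / 127, -1); no shader-side remap.
    return max(sb / 127.0, -1.0)

for n in (-1.0, -0.5, 0.0, 0.75, 1.0):
    assert abs(unorm8_sample(unorm8_store(n)) - n) < 0.01
    assert abs(snorm8_sample(snorm8_store(n)) - n) < 0.01
```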
You can use a texture format like Texture.F_rgba16 and set the component type to T_float. That way you can store both signed and unsigned values.
I’m not sure how you would write to them, though, since PNMImage does not support negative values.
However, your `* 2 - 1` operation costs practically nothing: GPUs have fused multiply-add (FMA) units,
so the operation a * b + c takes only one cycle. Your texel lookup is probably a few hundred times slower than that.
If you are using GLSL 4.0+, you can even tell the GPU explicitly to use those units, for example:
vec3 normal = fma(sampled_normal, vec3(2), vec3(-1));
TL;DR: `* 2 - 1` is almost free, and you are probably overoptimizing.
I’d be willing to add it to Panda, though, for the sake of completeness.
It wouldn’t be much work. I’ll look into it today or tomorrow.
Committed. Now when you use T_byte or T_short as the component type, Panda will use an snorm format where it would otherwise have used an unorm format.
Awesome! That was a lot quicker and easier than I expected.
Thanks for the quick response/update!