Step-Scroll UV on the GPU [SOLVED]

I have a texture atlas with an 8x8 grid of images - the typical particle texture atlas, where each square is a frame of a 64-frame animation.

I’m trying to write a shader that jumps from one image in the grid to the next every 1/60 of a second (or any other interval, once I get this working) - but alas, I can’t figure out how.

I can send an offset calculated in Python with something like this:

    # in __init__ (or wherever the effect is set up):
    self.U = 0.0
    self.V = 0.0
    taskMgr.doMethodLater(1.0/60.0, self.scrollUV, 'uv_scroll_task')

def scrollUV(self, task):
    # step one tile (1/8 of the atlas) to the right every tick
    self.U += 0.125
    if self.U >= 1.0:
        # wrap back to the first column and move down one row
        self.U = 0.0
        self.V -= 0.125
    render.setShaderInput('uvoffset', Vec4(self.U, self.V, 0.0, 0.0))
    return task.again

But I would like to send just a uniform ‘time’ (render.setShaderInput(‘time’, globalClock.getFrameTime()), or whatever 1.9 has) and let the shader do the moving.

Anyone got an idea how to do it without 64 if-else statements in the shader?

Use an array texture (loader.load2dArrayTexture) and simply use a variable index into it that is calculated in the shader.
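
Something along these lines in the fragment shader (untested, off the top of my head; “time” and “fps” here are extra floats you’d pass in yourself with setShaderInput, and 64.0 is just your frame count):

//GLSL
#version 130
uniform sampler2DArray p3d_Texture0;
uniform float time;  // e.g. globalClock.getFrameTime(), in seconds
uniform float fps;   // playback speed, e.g. 60.0

void main()
    {
    // pick one of the 64 layers, wrapping around at the end
    float layer = mod(floor(time * fps), 64.0);
    gl_FragData[0] = texture(p3d_Texture0, vec3(gl_TexCoord[0].xy, layer));
    }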

Or if they are different locations in the same texture, you could pass an array of UV locations and then index into that array to determine the UV coordinates for that texture.
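
Again untested and only a rough sketch; “uvLocations” and “fps” are made-up names, the array would be filled in from your application code, and the 0.125 assumes an 8x8 grid:

//GLSL
#version 130
uniform sampler2D p3d_Texture0;
uniform vec2 uvLocations[64];  // lower-left UV corner of each frame
uniform float time;
uniform float fps;

void main()
    {
    // pick the current frame and shift the mesh UVs into its tile
    int index = int(mod(floor(time * fps), 64.0));
    vec2 uv = uvLocations[index] + gl_TexCoord[0].xy * 0.125;
    gl_FragData[0] = texture(p3d_Texture0, uv);
    }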

Is it a new feature? I can’t find “loader.load2dArrayTexture” in the Python reference:
devel:
panda3d.org/reference/devel/ … Loader.php
1.8.1:
panda3d.org/reference/1.8.1/ … Loader.php

Trying to use it (in 1.8.1) tells me that I can’t (AttributeError: Loader instance has no attribute ‘load2dArrayTexture’)

But I did find TexturePool.load2dTextureArray().

Still, can’t get it to work.
This code:

tex=TexturePool.load2dTextureArray("boom_fire/frame00#.png")
plane.setShaderInput('fubar', tex)

Throws me a warning:

:display:gsg:glgsg(warning): Ignoring unrecognized GLSL parameter type!

This code doesn’t:

tex=TexturePool.load2dTextureArray("boom_fire/frame00#.png")
plane.setTexture(TextureStage.getDefault(),tex,1) 

but the plane is not rendered in either case.

Maybe it’s because I’m trying to use the array from a #version 110 GLSL shader. My shaders look like this:
vert

//GLSL
#version 110
uniform mat4 p3d_ProjectionMatrix;
uniform mat4 p3d_ModelViewMatrix;

uniform float tile;
uniform float time;
uniform vec4 uvoffset;

void main()
    {         
    //vec4 uv=gl_MultiTexCoord0*tile+uvoffset;
    
    mat4 modelView = p3d_ModelViewMatrix;
    //http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
    // First colunm.(sic!)
    modelView[0][0] = 1.0; 
    modelView[0][1] = 0.0; 
    modelView[0][2] = 0.0; 
    // Second column.
    //modelView[1][0] = 0.0; 
    //modelView[1][1] = 1.0; 
    //modelView[1][2] = 0.0; 
    // Third column.
    modelView[2][0] = 0.0; 
    modelView[2][1] = 0.0; 
    modelView[2][2] = 1.0;     
    
    //gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vec4 P =  modelView*gl_Vertex;
    gl_Position = p3d_ProjectionMatrix * P;
    gl_TexCoord[0] = gl_MultiTexCoord0;//uv;
    }

frag

//GLSL
#version 110
#extension GL_EXT_texture_array : enable

uniform sampler2DArray p3d_Texture0;
//uniform sampler2D p3d_Texture0;

void main()
    { 
    //vec4 color_tex=texture2D(p3d_Texture0,gl_TexCoord[0].xy);
    float index=12.0;
    vec4 color_tex=texture2DArray(p3d_Texture0,vec3(gl_TexCoord[0].xy,index));
    gl_FragData[0]=color_tex;
    }

I’ve also tried a #version 130 fragment shader (I’m almost sure texture arrays were ‘core’ by then), but the plane still won’t render:

//GLSL
#version 130
//#extension GL_EXT_texture_array : enable

uniform sampler2DArray p3d_Texture0;
//uniform sampler2D p3d_Texture0;

void main()
    { 
    //vec4 color_tex=texture2D(p3d_Texture0,gl_TexCoord[0].xy);
    float index=3.0;
    vec4 color_tex=texture(p3d_Texture0,vec3(gl_TexCoord[0].xy,index));
    gl_FragData[0]=color_tex;
    }

The shader does run, and if I put some debug color into the output it gets rendered; it’s just that the texture lookup always seems to return (0.0, 0.0, 0.0, 0.0).

This feature is only available in development builds of Panda.

(I haven’t tried this myself, so the following may not work. ^^; )

It seems to me that in the case in which your frames are laid out in a grid on a single image, what you have, essentially, is a tilemap that you want to find indices into. In that case, your coordinates should be calculable with a few modulos and integer (or floored) divisions.

I’m not sure whether floating-point modulo is available in a shader; if not, since I believe that GlobalClock’s “getRealTime” method returns a time in seconds, you should, I think, be able to work around the issue by simply multiplying your time value by one thousand.

So, I imagine something like this:
(Off the top of my head, and untested! ^^; )

In your Python code:

np.setShaderInput("numFramesInRow", howeverManyFramesThereArePerRow)
np.setShaderInput("time", globalClock.getRealTime()*1000)

In your shader:

// "time" arrives in milliseconds (see the Python code above),
// so this gives the current frame number at 60 frames per second.
float frame = floor(time / (1000.0 / 60.0));

// Column index within the row; floating-point modulo is mod() in GLSL.
float u = mod(frame, numFramesInRow);

// Row index; a floor()ed division stands in for integer division.
float v = floor(frame / numFramesInRow);
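
(And, to finish the thought - again untested, and purely a guess on my part - something like this in the fragment shader to turn those indices into actual atlas UVs, assuming that the mesh UVs run from 0 to 1 and that frame 0 sits in the top-left corner of the atlas:)

//GLSL
#version 130
uniform sampler2D p3d_Texture0;
uniform float numFramesInRow;  // 8.0 for an 8x8 atlas
uniform float time;            // milliseconds, as sent from the Python code above

void main()
    {
    float frame = mod(floor(time / (1000.0 / 60.0)), numFramesInRow * numFramesInRow);
    float col = mod(frame, numFramesInRow);
    float row = floor(frame / numFramesInRow);
    // squeeze the mesh UVs into a single tile and shift them to the current frame;
    // V counts from the bottom of the image, hence the flip for the row
    float u = (gl_TexCoord[0].x + col) / numFramesInRow;
    float v = (gl_TexCoord[0].y - row - 1.0) / numFramesInRow + 1.0;
    gl_FragData[0] = texture(p3d_Texture0, vec2(u, v));
    }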

Ok, back to the texture atlas :mrgreen:

I’ve got an array of 64 UV coords in the shader and I’m calculating the index with this crazy function of mine:

//time= globalClock.getFrameTime()
//reset_time=time_when_the_effect_should_start
int index=int(floor(mod((time-reset_time)*fps, 64.0)+0.5));

and with a bit of hacking I can get the indices for the ‘current’ and ‘previous’ frames, and how much to blend them:

float frame=mod((time-reset_time)*fps, 64.0);
int index1=int(floor(frame+0.5));
int index2=int(floor(frame-0.5));
float blend=fract(frame);
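
Put together, the whole fragment shader ends up roughly like this (untested sketch; “frameUV” and “fps” are just my names for the inputs, and I’ve shifted the indices to floor(frame) plus the following frame so the cross-fade stays continuous across the whole loop):

//GLSL
#version 130
uniform sampler2D p3d_Texture0;
uniform vec2 frameUV[64];   // lower-left UV corner of each of the 64 frames
uniform float time;         // globalClock.getFrameTime()
uniform float reset_time;   // when the effect was (re)started
uniform float fps;          // playback speed

void main()
    {
    float frame = mod((time - reset_time) * fps, 64.0);
    int index1 = int(floor(frame));                    // current frame
    int index2 = int(mod(floor(frame) + 1.0, 64.0));   // next frame, wrapping to 0
    float blend = fract(frame);
    // 0.125 = one tile of the 8x8 atlas
    vec4 current = texture(p3d_Texture0, frameUV[index1] + gl_TexCoord[0].xy * 0.125);
    vec4 next    = texture(p3d_Texture0, frameUV[index2] + gl_TexCoord[0].xy * 0.125);
    gl_FragData[0] = mix(current, next, blend);
    }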