shader instancing: matrix as texture?


I’m finally testing the new geometry instancing feature.

And I must say I’ve run into a lot of problems.

First, on a not completely related issue:

  1. I’m confused by the matrices passed as parameters to the shader.

Usually, I just do:

  position      = mul(mat_modelproj, vtx_position);

and that works just fine.

But since I want to compute the position of my object inside the shader, I need to decompose the matrix a bit.

I tried:

  position      = mul(mat_modelview, vtx_position);
  position      = mul(mat_projection, position);

still working

Now I’m trying:


  position      = mul(wstrans_model, vtx_position);
  position      = mul(vstrans_view, position);
  position      = mul(mat_projection, position);

and that’s not working anymore.

I would have thought that
“model -> world -> view” could be composed either way?

If I have the position of my object (x, y, z), which matrix should I change?
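For what it’s worth, applying the model matrix and then the view matrix has to give the same result as applying their product once, so if the split version behaves differently, one of the inputs isn’t the matrix you think it is. A minimal pure-Python sketch (hypothetical helper functions for illustration, not Panda3D API):

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices (nested lists, column-vector convention)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a 4x4 matrix to a 4-vector, like Cg's mul(m, v)."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# A translation by (5, 0, 0) standing in for the "model" matrix ...
model = [[1, 0, 0, 5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# ... and a translation by (0, -2, 0) standing in for the "view" matrix.
view = [[1, 0, 0, 0], [0, 1, 0, -2], [0, 0, 1, 0], [0, 0, 0, 1]]

p = [1, 1, 1, 1]
step_by_step = mat_vec(view, mat_vec(model, p))   # model, then view
combined = mat_vec(mat_mul(view, model), p)       # one combined matrix
assert step_by_step == combined
```

If the two disagree in the real shader, the matrices being fed in are not plain “model” and “view” in the spaces you expect.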

  2. I tried to pass the position and rotation information via textures. But to keep a minimum of precision, I can only pass one variable per texture! That’s 6 textures for the placement.
    Or am I missing something?

  3. I tried to pass these textures as shader inputs.
    But strangely, they seem to override the object’s standard textures.
    Is this the same thing? Then why have two ways of defining them?
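The “one variable per texture” idea above can be sketched on the CPU side: struct.pack gives the 4 raw bytes of a float32, which fit exactly into the four 8-bit channels of one RGBA texel, so each texel carries one full-precision float. A small sketch (helper names are mine, just for illustration):

```python
import struct

def float_to_rgba8(value):
    """Split a float32 into four 0-255 channel values (r, g, b, a)."""
    return tuple(struct.pack('<f', value))

def rgba8_to_float(rgba):
    """Reassemble the four channel bytes back into the float32."""
    return struct.unpack('<f', bytes(rgba))[0]

rgba = float_to_rgba8(123.456)   # four ints in 0-255, one per channel
restored = rgba8_to_float(rgba)  # the same value, to float32 precision
```

The CPU round-trip is lossless (up to float32 rounding); the hard part is decoding those four channels back into a float on the GPU side.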

  1. You can alter vtx_position first (it’s in model-space), and transform it to clip-space afterwards by left-multiplying it by mat_modelproj.
    You can use trans_model_to_world (and back) to transform the vertex position into world-space, if you really need to.

  2. What do you mean? You can pass an arbitrary number of textures, either via the setTexture interface or via the setShaderInput interface.

  3. setTexture passes textures to the tex_0, tex_1, etc. inputs. With setShaderInput, you can pass textures as k_anything.

Well, I can’t seem to get anything done, so I guess I’m not making sense.

How do you pass your positions inside a texture?
I’ve tried more than one technique but I can’t seem to get it done:

last try:

    import struct

    tex_x = Texture('posx')
    tex_y = Texture('posy')
    tex_z = Texture('posz')
    imgx = PNMImage(128, 128, 4)
    imgy = PNMImage(128, 128, 4)
    imgz = PNMImage(128, 128, 4)

    tex_x.setup2dTexture(128, 128, Texture.TFloat, Texture.FRgba)
    tex_y.setup2dTexture(128, 128, Texture.TFloat, Texture.FRgba)
    tex_z.setup2dTexture(128, 128, Texture.TFloat, Texture.FRgba)
    # 'matrices' is my list of per-instance transform matrices
    for el, mat in enumerate(matrices):
        x, y = el % 128, el // 128
        val = mat.getRow(3)  # the translation row
        imgx.setPixel(x, y, PNMImageHeader.PixelSpec(*[ord(a) for a in struct.pack('f', val[0])]))
        imgy.setPixel(x, y, PNMImageHeader.PixelSpec(*[ord(a) for a in struct.pack('f', val[1])]))
        imgz.setPixel(x, y, PNMImageHeader.PixelSpec(*[ord(a) for a in struct.pack('f', val[2])]))

Maybe that’s all for nothing, but since I can’t declare a single float channel in the texture parameters, I’ve tried it this way.
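The index arithmetic in the loop above maps a linear instance index to a (column, row) texel in a 128x128 texture; a quick round-trip check shows the mapping is consistent both ways:

```python
width = 128  # texture is 128x128, so one texel per instance

def index_to_texel(i):
    """Linear instance index -> (x, y) texel coordinates."""
    return i % width, i // width

def texel_to_index(x, y):
    """(x, y) texel coordinates -> linear instance index."""
    return y * width + x

assert index_to_texel(0) == (0, 0)
assert index_to_texel(129) == (1, 1)
assert all(texel_to_index(*index_to_texel(i)) == i
           for i in range(width * width))
```

The shader has to use the same convention (same modulus, same row length) when it turns the instance id back into texture coordinates.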

Hm… the ideal way to do this would be to pass matrix arrays. That will be possible in an upcoming release of Panda3D.

A release that is scheduled for when?

While waiting for the matrixArray attribute, I still need to have something running.

Can someone help me with the creation of those textures?

It seems that I’m doing something wrong: I can’t get a correct texture, but I can’t figure out exactly what.

Am I using PNMImage the way it should be used?

PNMImage isn’t really meant for loading a float texture. You’re probably best just stuffing the data directly into the array returned by texture.modifyRamImage().

Other than that, I don’t see anything obviously wrong.
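What “stuffing the data directly into the RAM image” amounts to: a TFloat/FRgba texture’s RAM image is a flat buffer of float32 values, so you can build the per-instance data with the stdlib array module and copy the raw bytes over in one go. A sketch under that layout assumption (tightly packed, no row padding; with real Panda3D you would write these bytes into texture.modifyRamImage() instead of keeping them in a local buffer):

```python
import array

width = height = 128
components = 4  # FRgba: r, g, b, a per texel

# One float32 per component, all initially zero.
buf = array.array('f', [0.0] * (width * height * components))

def write_texel(x, y, r, g, b, a=0.0):
    """Write one RGBA float texel at (x, y) into the flat buffer."""
    base = (y * width + x) * components
    buf[base:base + components] = array.array('f', [r, g, b, a])

write_texel(1, 1, 12.5, -3.0, 7.25)
raw = buf.tobytes()  # the bytes you would copy into the RAM image
```

Note the byte order and component order must match what the texture format expects; check against a few known values before trusting the whole buffer.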


Thanks for the help.
Now I can create float textures that seem valid.

My shader seems to be going wrong now:

//Cg profile gp4vp gp4fp

float float_from_rgba(in float4 intValue)
{
    const float fromFixed = 256.0 / 255;
    return intValue.r * fromFixed;
}

void vshader(   in float4 vtx_position                    : POSITION,
                in float2 vtx_texcoord                    : TEXCOORD0,

                in varying int o_instansid                : INSTANCEID,
                in uniform sampler2D k_posx               : TEXUNIT0,
                in uniform sampler2D k_posy               : TEXUNIT1,
                in uniform sampler2D k_posz               : TEXUNIT2,
                in uniform float4x4 mat_modelview,
                in uniform float4x4 mat_projection,

                out float4 l_position                     : POSITION,
                out float2 l_texcoord                     : TEXCOORD0)
{
    float2 texcoord = float2(o_instansid % 32, o_instansid / 32);
    float new_x = float_from_rgba(tex2D(k_posx, texcoord)) * 50;
    float new_y = float_from_rgba(tex2D(k_posy, texcoord)) * 50;
    float new_z = float_from_rgba(tex2D(k_posz, texcoord)) * 50;
    float4x4 new_mat = float4x4(1, 0, 0, new_x,
                                0, 1, 0, new_y,
                                0, 0, 1, new_z,
                                0, 0, 0, 1);
    float4 position = mul(new_mat, vtx_position);
    position = mul(mat_modelview, position);
    l_position = mul(mat_projection, position);
    l_texcoord = vtx_texcoord;
}

If I replace:

  float4x4 new_mat = float4x4(1, 0, 0, new_x, 0, 1, 0, new_y, 0, 0, 1, new_z, 0, 0, 0, 1);

with:

  float4x4 new_mat = float4x4(1, 0, 0, o_instansid % 32, 0, 1, 0, o_instansid / 32, 0, 0, 1, new_z, 0, 0, 0, 1);

I get a nice grid of objects.
But without it, I only get one object (new_x and new_y don’t change with the INSTANCEID, but they are not zero).

So it would seem my textures are the problem.
However, if I just apply my 3 textures without using shaders, I get a good-looking object (that is: a random color pattern that never seems to repeat the same values).

I’m wondering what I’m doing wrong.

Is it the way I use the textures in the vertex shader? Should they be declared in another way?

OK, I’m so stupid once again that I’m ashamed.

If I use 0-1 texture coordinates it works better:

  float2 texcoord = float2(float(o_instansid % 32) / 32.0, float(o_instansid / 32) / 32.0);
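One further refinement (an assumption on my part, not something from the thread): the fix above samples at the texel’s lower-left corner; sampling at the texel center, (i + 0.5) / size, is safer against filtering bleed if the texture isn’t set to nearest filtering. The mapping in Python terms:

```python
def instance_texcoord(instance_id, size=32):
    """Instance id -> normalized (u, v) at the texel center of a size x size grid."""
    u = (instance_id % size + 0.5) / size
    v = (instance_id // size + 0.5) / size
    return u, v

# Instance 0 lands in the middle of the first texel,
# instance 33 in the middle of texel (1, 1).
first = instance_texcoord(0)
second_row = instance_texcoord(33)
```

The same +0.5 offset translates directly into the Cg texcoord computation.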