Shader Problem

I’m working with Cg shaders and I’m having a problem passing arguments from the vertex shader to the fragment shader. Only the first vector parameter that is not one of the ‘special’ parameters (color, position, etc.) gets the appropriate value; subsequent parameters get the value of the first parameter.

In the manual I noticed a line saying there was a problem with assigning registers, which seems like it might be the problem I am having. The solution listed was to ‘give a semantic string for each input’, but I can’t find any reference on what this means or how to do it. Can someone give me a hint?

Here is the code of the shader I am currently picking at.

It won’t pass the values correctly unless they have semantic names. Use TEXCOORD2 through TEXCOORDN for all of those otherwise unlabeled l_xxx parameters.
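For instance, the otherwise-unlabeled varying outputs can be given explicit semantic strings in the vertex-shader signature, with matching semantics on the fragment-shader inputs. A sketch (the `l_worldpos` and `l_normal` names here are hypothetical):

```cg
void vshader(in  float4 vtx_position  : POSITION,
             in  float2 vtx_texcoord0 : TEXCOORD0,
             uniform float4x4 mat_modelproj,
             out float4 l_position  : POSITION,
             out float2 l_texcoord0 : TEXCOORD0,
             out float4 l_worldpos  : TEXCOORD2,  // hypothetical custom varying
             out float3 l_normal    : TEXCOORD3)  // hypothetical custom varying
{
    l_position  = mul(mat_modelproj, vtx_position);
    l_texcoord0 = vtx_texcoord0;
    // ... write l_worldpos and l_normal here ...
}
```

The fragment shader then declares `in float4 l_worldpos : TEXCOORD2` (and so on), so both sides agree on which interpolator register carries which value.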

I have problems employing regular hardware shadow mapping.
in my script :

    self.Ldepthmap = Texture()
    self.Ldepthmap.setComponentType(Texture.TFloat)  # 32-bit depth map
    self.Ldepthmap.setMinfilter(Texture.FTLinear)
    self.Lbuffer = base.win.makeTextureBuffer('depthmap', mapsize, mapsize, self.Ldepthmap)

I’ve tried a very simple shadow mapping shader, no cool stuff so far.
in VS :

float fOffsetX = 0.5f + (0.5f / 512);  // mapsize=512
float fOffsetY = 0.5f + (0.5f / 512);
float fZScale = pow(2,32)-1;   // 2**bitdepth - 1
float fBias = -20;

// (matrix body was truncated in the post; this is the conventional layout)
float4x4 scaleBiasMatrix = {
    0.5f, 0.0f, 0.0f, fOffsetX,
    0.0f, 0.5f, 0.0f, fOffsetY,
    0.0f, 0.0f, fZScale, fBias,
    0.0f, 0.0f, 0.0f, 1.0f
};

float4x4 texMatrix = mul(scaleBiasMatrix, trans_model_to_clip_of_light);

l_texcoord0 = vtx_texcoord0;

l_texcoord1 = mul(texMatrix, vtx_position);

in FS :

float4 shade = tex2Dproj(k_Ldepthmap,l_texcoord1);
o_color *= shade;
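With shadow filtering active, `tex2Dproj` on a depth texture performs a depth compare rather than returning the stored value. A rough Python sketch of that compare (illustrative only; the function name is made up):

```python
def shadow_compare(stored_depth, texcoord):
    """Emulate a projective shadow lookup: divide by w, then compare.

    stored_depth -- depth value in the shadow map at the projected texel
    texcoord     -- (s, t, r, q) as produced by the texture matrix
    """
    s, t, r, q = texcoord
    fragment_depth = r / q  # projective divide, as tex2Dproj does
    # The compare yields 1.0 (lit) or 0.0 (shadowed), not the raw depth.
    return 1.0 if stored_depth >= fragment_depth else 0.0

# A fragment nearer to the light than the stored occluder is lit:
print(shadow_compare(0.8, (0.5, 0.5, 0.6, 1.0)))  # -> 1.0
# A fragment behind the stored occluder is shadowed:
print(shadow_compare(0.4, (0.5, 0.5, 0.6, 1.0)))  # -> 0.0
```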

It gives me exactly the RGB of the depthmap, rather than 0 (shadowed) or 1 (lit). My GFFX5200 supports these 2 extensions: SGIX_depth_texture and SGIX_shadow.
So I believe it’s not due to the issue mentioned on page 54 of the manual.

Then I tried to emulate the pixel depth myself, but the scaling is messed up: when the light-to-object distance changes (the light is translated a little bit along the Y-axis), the shadow collapses.

What have I done wrong?
Oh, and when I pass a Vec4 parameter from Panda, for this shader only, it isn’t read if I use the ARB profile. Is that due to shader complexity in terms of clock cycles?

The big mistake is this:

    self.Ldepthmap.setMinfilter(Texture.FTLinear)

That should be:

    self.Ldepthmap.setMinfilter(Texture.FTShadow)
    self.Ldepthmap.setMagfilter(Texture.FTShadow)

Both filters need to be set to FTShadow. That’s how you activate the functionality of SGIX_shadow.

There’s a second mistake, which is not really hurting you:

    self.Ldepthmap.setComponentType(Texture.TFloat)

With depth-component textures, you don’t control this. The component type is whatever your video card uses for the Z-buffer. On NVIDIA cards, the Z-buffer is a 24-bit fixed-point number, so your depth-component texture is also a 24-bit fixed-point number. I should mention that an IEEE 32-bit float only contains a 23-bit mantissa, so the depth-component texture may actually be more precise.
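The precision point can be checked with a little arithmetic. A quick Python sketch:

```python
# A 24-bit fixed-point depth buffer has 2**24 evenly spaced levels over [0, 1].
fixed_levels = 2 ** 24           # 16777216 distinct depth values

# An IEEE-754 float32 carries a 23-bit mantissa, so over the half-open
# interval [0.5, 1.0) it can represent only 2**23 distinct values.
float_levels_near_one = 2 ** 23  # 8388608 distinct values

print(fixed_levels)              # -> 16777216
print(float_levels_near_one)     # -> 8388608
print(fixed_levels > float_levels_near_one)  # -> True
```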

I should mention that I haven’t checked your math at all. I’m terrible at math, that’s why I implemented “trans_model_to_clip_of_light” — so that I could do it once and never have to look at it again.

Also, I notice that you’re doing a matrix-matrix multiply (scaleBiasMatrix * trans_model_to_clip_of_light) inside the vertex shader. I believe that takes 16 vertex-shader instructions. By contrast, doing this takes one instruction:

vtx_position = (vtx_position + offset) * scale;

I should also mention this: applying your scale and bias in a separate instruction makes it easier to check your work. You can try it without the scale and bias, see how it looks, and then add them later. That makes it easier to see what’s going wrong.
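The scale-and-bias step itself is easy to sanity-check outside the shader. A quick Python check of the NDC-to-texture-coordinate mapping, using the half-texel offset from the shader above (mapsize = 512):

```python
mapsize = 512
offset = 0.5 + 0.5 / mapsize   # half-texel bias, matching fOffsetX/fOffsetY

def ndc_to_texcoord(x):
    """Map a clip-space coordinate (after the w-divide, so in [-1, 1])
    into shadow-map texture space."""
    return x * 0.5 + offset

# The NDC range [-1, 1] should land roughly on [0, 1]:
print(ndc_to_texcoord(-1.0))   # -> 0.0009765625 (half a texel in)
print(ndc_to_texcoord(0.0))    # -> 0.5009765625 (just past the centre)
print(ndc_to_texcoord(1.0))    # -> 1.0009765625
```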

Thanks for the correction. Yes, I should have known there was something wrong with my depthmap setup.

I haven’t been able to break down the matrix to calculate it that way. :frowning:

Thank you again.

ynjh_jo: Your pictures cannot be viewed :frowning:
Connection is timing out. :wink:

Regards, Bigfoot29

I can see it perfectly. Maybe it was just the traffic.