Need help with Panda3D shader tutorial

I am following the shader tutorial for Panda3D, and so far everything has been very well explained and easy to understand. Thank you very much, I was looking for exactly this kind of tutorial.

However, I have now run into a problem, and I was hoping you might be able to help me. In the 4th tutorial we see a custom variable, l_my. In the vertex shader we assign vtx_color to l_my, then in the fragment shader we assign l_my to o_color.

What I don’t understand is why, when we do that, the colors of the three cubes are not the correct ones from the egg file. The shader comments mention that we assign the vertex coordinates directly to the color, which would explain the odd colors on the cubes. However, I don’t see any coordinates being assigned to the color – as I said, vtx_color is assigned to l_my (which sounds correct to me as a shader noob).

Any ideas what I am misunderstanding here?

Could you share the shader that you use and the egg file?

Sure, so I first tried with this shader:

void vshader(
    uniform float4x4 mat_modelproj,
    in float4 vtx_position : POSITION,
    in float4 vtx_color : COLOR,
    out float4 l_color : COLOR,
    out float4 l_position : POSITION)
{
    // Transform the vertex position into clip space.
    l_position = mul(mat_modelproj, vtx_position);
    // Pass the vertex color to the fragment shader via the COLOR interpolant.
    l_color = vtx_color;
}

void fshader(
    in float4 l_color : COLOR,
    out float4 o_color : COLOR)
{
    o_color = l_color;
}

This works fine: I can see the colors on the cube interpolated between the vertices, based on the color values that are set on the vertices in the cube model.

Now I changed that shader a bit to the following:

void vshader(
    uniform float4x4 mat_modelproj,
    in float4 vtx_position : POSITION,
    in float4 vtx_color : COLOR,
    out float4 l_my : TEXCOORD0,
    out float4 l_position : POSITION)
{
    l_position = mul(mat_modelproj, vtx_position);
    // Same as before, except the color is passed via the TEXCOORD0 interpolant.
    l_my = vtx_color;
}

void fshader(
    in float4 l_my : TEXCOORD0,
    out float4 o_color : COLOR)
{
    o_color = l_my;
}

However, with this custom variable l_my the coloring of the cube no longer works correctly. Instead of the interpolated vertex colors I only see the default gray-ish color on the whole cube.

I understand that in the second example l_my is bound to TEXCOORD0 instead of COLOR, but isn’t it still a normal float4 value?

PS: it works fine again if I change l_my back to COLOR – but why does that solve the problem, if in both cases it’s a float4?

PPS: the forum doesn’t allow a .egg or .txt file to be uploaded; should I copy and paste the whole file here as text?

Really no ideas? :frowning:

The two shaders should behave identically. I’ve tried both of them, but neither actually works for me. I think there’s some other bug going on here that causes vertex colours not to work correctly with shaders; perhaps it was just a coin flip that your first shader worked while your second didn’t. I’ll investigate further.

OK, I found out that when I tried it, the colour information was stored as unsigned chars instead of floats, and the Cg runtime for some reason does not provide a way to normalise that data from the 0-255 range to the 0.0-1.0 range. If I divide vtx_color by 255.0 in the shader (something like the version below), it seems to work. Can you confirm whether that works for you?
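For reference, here is your second shader with that workaround added (assuming the colour column really is stored as unsigned bytes in the 0-255 range):

void vshader(
    uniform float4x4 mat_modelproj,
    in float4 vtx_position : POSITION,
    in float4 vtx_color : COLOR,
    out float4 l_my : TEXCOORD0,
    out float4 l_position : POSITION)
{
    l_position = mul(mat_modelproj, vtx_position);
    // Workaround: rescale the 0-255 byte colour into the 0.0-1.0 range,
    // since the TEXCOORD0 interpolant does not get normalised automatically.
    l_my = vtx_color / 255.0;
}

void fshader(
    in float4 l_my : TEXCOORD0,
    out float4 o_color : COLOR)
{
    o_color = l_my;
}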

Actually, now that I think of it, maybe the magic behind the COLOR semantic is that it automatically normalises the values from the 0-255 range to the 0.0-1.0 range, whereas TEXCOORD0 does not? That would not explain why the two shaders behave identically (neither working) on my system, though, hmm.

Hi rdb,

you’re right, dividing the vtx_color by 255.0 in the vshader worked fine.
I guess your assumption is correct then :slight_smile:

Can you confirm that the divide is not necessary when you use the COLOR semantic to pass l_my between the vertex and fragment shaders? Or do you now find that the divide is always necessary?

The seemingly erratic behaviour worries me, and we’ve had issues in the past with vertex colours not working properly in Cg shaders. If it is indeed a driver discrepancy that causes GL_UNSIGNED_BYTE colour columns not to get rescaled in some cases, then we might have to just use floating-point colours across the board in order to avoid the bug. (There’s no issue with GLSL shaders: there, OpenGL asks nicely whether the column should be normalised from the 0-255 range to 0.0-1.0, using an extra parameter that the Cg developers decided to omit for whatever reason.)

drwr, do you know if there is a particular reason why we encode the colours as unsigned bytes in OpenGL? It seems to be a lot more common to pass the colours as a set of four floats.

Yes, when I use COLOR it works “out of the box”.
Cheers!

Presumably you would prefer to store colors as bytes when they appear in the vertex data, to avoid wasting 12 useless bytes for each vertex; but there’s no reason we shouldn’t pass colors as floats in individual calls.

David

Right, hmm, it is a bit of a waste of data, even though it seems to be common practice to store them as floats instead of unsigned bytes. Well, I’ll keep looking for a way to make Cg normalise the data - it has to be possible somehow.