Wrong Space (GLSL)

I noticed that in my editor (github.com/wezu/koparka) the lighting of the terrain is all wrong. It looks like the light moves with the camera, but in fact I’ve got the normals wrong.

I generate the normals in the pixel shader based on some code I found on the net… and as is often the case with code found on the net, it turns out it’s not what I expected. The generated normals are in object (model) space.

    vec3 norm=vec3(0.0,0.0,1.0);    
    const vec3 vLeft=vec3(1.0,0.0,0.0);     
    const float pixel=1.0/512.0;
    const float height_scale=100.0;
    
    //normal vector...
    vec4 me=texture2D(height,texUV);
    vec4 n=texture2D(height,vec2(texUV.x,texUV.y+pixel)); 
    vec4 s=texture2D(height,vec2(texUV.x,texUV.y-pixel));   
    vec4 e=texture2D(height,vec2(texUV.x+pixel,texUV.y));    
    vec4 w=texture2D(height, vec2(texUV.x-pixel,texUV.y));
    //find perpendicular vector to norm:        
    vec3 temp = norm; //a temporary vector that is not parallel to norm    
    temp.x+=0.5;
    //form a basis with norm being one of the axes:
    vec3 perp1 = normalize(cross(norm,temp));
    vec3 perp2 = normalize(cross(norm,perp1));
    //use the basis to move the normal in its own space by the offset        
    vec3 normalOffset = -height_scale*(((n.r-me.r)-(s.r-me.r))*perp1 + ((e.r-me.r)-(w.r-me.r))*perp2);
    norm += normalOffset;        
    norm = normalize(norm);
    //TBN
    vec3 tangent  = normalize(cross(norm, vLeft));  
    vec3 binormal = normalize(cross(norm, tangent));

I need to either get the normal into view space or get the light vector (of a directional light) from view space to object space… I honestly don’t know how to do that. Probably I’ve got to use a matrix of some sort (gl_NormalMatrix stops the light from moving, but the normals are still wrong); then again, passing a light vector from Python might be a better idea.
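
For reference, what I tried looks roughly like this (if I understand the fixed-function conventions, gl_LightSource[0].position.xyz holds the view-space vector toward a directional light):

    //what I tried: move the normal to view space and light it with the
    //fixed-function light vector - this stops the light moving, but the shading is still off
    vec3 view_normal = normalize(gl_NormalMatrix * norm);
    vec3 light_vec = normalize(gl_LightSource[0].position.xyz);
    float diffuse = max(dot(view_normal, light_vec), 0.0);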

Help!

Hmm… Looking at the list of matrices in the GLSL input list, have you tried putting your normals through the model-view matrix? (“uniform mat4 p3d_ModelViewMatrix;”)

I’m not sure that model-view is the correct matrix, but it seems like a decent guess.
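
Something like this, perhaps (just a sketch; and if the model has a non-uniform scale, I believe you’d want the inverse transpose of that matrix rather than the matrix itself):

    uniform mat4 p3d_ModelViewMatrix;
    //in main(), with "norm" being the object-space normal:
    vec3 view_normal = normalize(mat3(p3d_ModelViewMatrix) * norm);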

If they are in model space and you want them in world space, you have to use the trans_model_to_world matrix, like this:

    vec3 world_normal = (trans_model_to_world * vec4(model_normal, 0.0)).xyz;

You also have to normalize the normal after that IIRC.

Does “trans_model_to_world” exist in Panda’s support for GLSL? I see mention of the “trans_x_to_y” pattern in the list of CG inputs, but not the GLSL list…

trans_model_to_world in Panda can be obtained as model.getMat(world), if I’m not mistaken. Usually world = render.
Also, in my case (Blender’s shaders), when I work with lights I have to multiply their matrices by a coordinate-system conversion matrix:

    Mat4.convertMat(CSYupRight, CSDefault) * light.getMat(world)

Not sure whether that’s a Blender feature or a GLSL one.

AssertionError: Shader input trans_model_to_world is not present.

For 1.8 there is no ‘trans_model_to_world’ :cry:

The autoshader for Cg uses something like this:

    l_eye_normal.xyz = mul((float3x3)tpose_view_to_model, vtx_normal.xyz);

But tpose_view_to_model is just as available as trans_model_to_world.

Maybe they work in 1.9
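
If I’m reading the Cg right, tpose_view_to_model is the transpose of the view-to-model matrix, i.e. the inverse transpose of model-to-view - which is exactly what GLSL’s built-in normal matrix is. So a rough GLSL counterpart of that line might be:

    //sketch: gl_NormalMatrix is the inverse transpose of the upper 3x3 of the modelview
    vec3 eye_normal = normalize(gl_NormalMatrix * gl_Normal);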

This:

    render.setShaderInput("model_to_world", self.mesh.getMat(render))

gave:

    AssertionError: Shader input model_to_world is not a nodepath.

There was some trick with a temp node… but is it really what I need? My “self.mesh” is at position (0,0,0), with a rotation of (0,0,0) and a scale of 1 - will that info get me a matrix to go from object space (the same as world space in this case, I think) to view space? Because gl_LightSource[0].position will get me a view-space light vector, right? (BTW, how do you get the light info without the fixed-function pipeline in Panda3D/GLSL?)

So maybe it would be simpler to get the light vector in model space? I have one directional light with an hpr of (90, -45, 0); I’m not gonna move it (often), and I’m not gonna move the terrain at all. I’d just pass it as a shader input… if only I knew how to calculate it in the first place. :mrgreen:
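
The shader side would then be trivial - a sketch, with a hypothetical light_dir uniform holding the light’s direction in model/world space (the same space here), set via setShaderInput:

    //hypothetical uniform, set from Python with setShaderInput
    uniform vec3 light_dir;
    //in main(): -light_dir points toward the light, giving the usual diffuse term
    float diffuse = max(dot(norm, -normalize(light_dir)), 0.0);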

Okay, uh, trans_model_to_world exists in GLSL in 1.9, but it is equivalent to p3d_ModelMatrix. It is recommended you use that instead (and p3d_ModelMatrixInverse if you want the opposite; you can also add Transpose to the end if necessary). trans_model_to_world will probably eventually be deprecated.

Also, before 1.9, setShaderInput only took a NodePath for setting matrices, like this:

    dummy = NodePath("dummy")
    dummy.setTransform(self.mesh.getTransform(render))
    render.setShaderInput("model_to_world", dummy)

In 1.9, this requirement is lifted, but it is still better to use p3d_ModelMatrix.

Panda3D currently provides no light info structure. It sounds like a useful feature to have. If you want this feature, I suggest you file a bug report. :slight_smile:

Hmm… I think I must have the normals wrong. Maybe the x and y are switched or something… or it’s all just rubbish :frowning:

Anyway, I don’t think trans_model_to_world or p3d_ModelMatrix is what I need. I think the normals are in object or world space (should be the same in my case), and I need them in view space for lighting.

So the first thing I wanted to use (gl_NormalMatrix or p3d_NormalMatrix) should be the right thing.

    // This is the upper 3x3 of the inverse transpose of the ModelViewMatrix.  It is used
    // to transform the normal vector into view-space coordinates.
    uniform mat3 p3d_NormalMatrix;
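
If it works, using it should be a one-liner (sketch):

    //move the object-space normal into view space for lighting
    vec3 view_normal = normalize(p3d_NormalMatrix * norm);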

I’ll need to make some debug model and shader to see what world/model normals should look like, just to tell what’s wrong (a flat surface should be blue (0, 0, 1), but I can’t tell the other colors).
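
Something as simple as this should do for the debug shader, I think (a sketch that just dumps the normal as a color - negative components will clamp to black, which is probably why I can’t tell the other colors apart):

    //debug fragment shader: write the normalized normal straight out as a color
    gl_FragColor = vec4(norm, 1.0);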

Some light structure for when gl_LightSource[] is not there would be nice… unless one can pass all the needed data from Python with little to no performance hit. For now I have no need for it; I’m using the oldest GLSL versions that will work (120-150).

So I got this from debugging:

The left mound is a model with a Cg shader dumped from the auto-shader, where I just removed the parts where normals are transformed and write the normal to ‘o_color’.
The right mound is using my terrain shader.

Some directions are mixed, but I fixed that:

I was thinking ‘p3d_NormalMatrix’ is what I need, but:

    :display:gsg:glgsg(error): Unrecognized uniform matrix name ‘NormalMatrix’!

So I used gl_NormalMatrix in the hope that it’s the same, but things still look wrong:

Then again if I render it all with colors and the light… it looks sort of ok:

Any tips?

Try normalizing all the normals (in both cases, Cg and GLSL) before outputting them to the image, and compare the results.

Already normalized, but thanks anyway :wink:

I’ll soon push all the shaders to the git repo, in case someone wants to toy with them.

OK, another question: are you remapping the normals from the (-1, +1) range to the (0, +1) range before writing them to the image?
But yes, it seems that Panda shows normals in view space (xyz = right, depth, up), while your normals are in model space (?), with Y up.

In a quick test I get something similar to your image:

vertex:

    N = vec3(gl_ProjectionMatrix * vec4(normalize(gl_NormalMatrix * gl_Normal), 1.0));

fragment:

    gl_FragColor.xyz = (N * 0.5 + 0.5).xzy;

I’m not sure that it’s right; it just gives a similar image.

Didn’t know I should remap them :mrgreen:
The shaders I use are here:
Vertex:
github.com/wezu/koparka/blob/ma … er_v2.glsl
Fragment:
github.com/wezu/koparka/blob/ma … er_f3.glsl

The normals are calculated in the fragment shader. There’s quite a lot of detail in the height map that gets lost if I do it only on the verts - at least that was true when I used a 10k mesh; at some point I’ll test whether it’s still true for the 35k mesh that I use now - but first, getting valid normals is more important to me than getting them faster.

Ok, ok, look at my signature )

I can’t run koparka.

Can you make a simple sample?

Your GPU can only use 16 samplers in a shader. I use 8x color, 8x normal, 1x height, and 1x splat mask - looks like I have to go down to 7 textures.

I made a version that uses 6x diffuse textures + 6x normal maps + 2x attribute maps + 1x height map + 1x walk map. For the editor all of them are needed, but the walk map doesn’t need to be displayed in a game, and the height can be packed into the alpha channel of one of the attribute maps. 6 textures should be enough if I can add the ability to mix them at will (not present at the moment).
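
The packing itself would be trivial on the shader side - a sketch, with hypothetical sampler names:

    //hypothetical: an attribute map with the height packed into its alpha channel
    uniform sampler2D attribute_map;
    //in main():
    vec4 attr = texture2D(attribute_map, texUV);
    float terrain_height = attr.a; //saves one sampler slot vs. a separate height texture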

Anyway the shader generating the normals is here now:
github.com/wezu/koparka/blob/ma … ain_f.glsl

I still can’t run it on my outdated video hardware (HD4850 and Intel onboard). To be more exact, it runs, but I see a white screen with the GUI.

I’m running it on a Radeon 3000 series, so the 4000 series should handle it OK.

The texture2DLod error - looks like it’s not well supported on some systems; it was a crude fix for grass quality anyway - fixed for the grass.

GL_EXT_draw_instanced - it’s just a warning, should be fine. I’m using both GL_ARB_draw_instanced and GL_EXT_draw_instanced, so one of them should work and the other will fail, depending on whether you’re on ATI/AMD or Nvidia… it might fail in both cases on Intel (can’t tell, I don’t have one) - ignored.

GL_EXT_gpu_shader4 - FXAA will not work if this is not supported.

None of this is relevant to the normal problem, so I really should make a simpler example, but I’m stuck getting my editor working again :neutral_face:

Ah, OK - it seems my video driver on Ubuntu was playing tricks. On Windows I can run your editor. Is this the result you want to see?