Normal mapping without tangent/binormal

Since tobspr is posting all the cool snippets, I also want to post something while there is something left :mrgreen:

The original implementation is by Christian Schuler from thetenthplanet.de/archives/1180.

The typical implementation using precomputed tangent and binormal vectors looks like this:
vertex shader:

tangent = p3d_NormalMatrix * p3d_Tangent; 
binormal = p3d_NormalMatrix * -p3d_Binormal;

fragment shader:

    // sample the normal map and unpack it from [0,1] to [-1,1]
    vec4 normal_map = texture(p3d_Texture1, uv);
    normal_map.xyz = (normal_map.xyz * 2.0) - 1.0;
    // rotate the tangent-space normal into the frame spanned by the
    // interpolated normal, tangent and binormal
    vec3 N = normal;
    N *= normal_map.z;
    N += tangent * normal_map.x;
    N += binormal * normal_map.y;
    N = normalize(N);
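
For reference, the vertex shader feeding these interpolants can look something like this. A minimal sketch, assuming GLSL 130 and declaring p3d_Tangent and p3d_Binormal as vec3 to match the snippet above; the out names are just the ones the fragment shader expects:

#version 130

// Panda3D-supplied attributes and matrices
in vec4 p3d_Vertex;
in vec3 p3d_Normal;
in vec3 p3d_Tangent;
in vec3 p3d_Binormal;
in vec2 p3d_MultiTexCoord0;

uniform mat4 p3d_ModelViewProjectionMatrix;
uniform mat3 p3d_NormalMatrix;

// interpolants used by the fragment shader above
out vec3 normal;
out vec3 tangent;
out vec3 binormal;
out vec2 uv;

void main()
    {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
    uv = p3d_MultiTexCoord0;
    normal = p3d_NormalMatrix * p3d_Normal;
    tangent = p3d_NormalMatrix * p3d_Tangent;
    binormal = p3d_NormalMatrix * -p3d_Binormal;
    }

On the fragment side you also need the matching in declarations plus a uniform sampler2D p3d_Texture1 for the normal map.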

If your model has no tangents and binormals, this code will produce a black-ish model. In that case you can use this code instead:
vertex shader:

world_pos = p3d_ModelMatrix * p3d_Vertex;

fragment shader:

//TBN by Christian Schuler from http://www.thetenthplanet.de/archives/1180
mat3 cotangent_frame( vec3 N, vec3 p, vec2 uv )
    {
    // get edge vectors of the pixel triangle
    vec3 dp1 = dFdx( p );
    vec3 dp2 = dFdy( p );
    vec2 duv1 = dFdx( uv );
    vec2 duv2 = dFdy( uv );
 
    // solve the linear system
    vec3 dp2perp = cross( dp2, N );
    vec3 dp1perp = cross( N, dp1 );
    vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;
 
    // construct a scale-invariant frame 
    float invmax = inversesqrt( max( dot(T,T), dot(B,B) ) );
    return mat3( T * invmax, B * invmax, N );
    }

vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
    {
    // assume N, the interpolated vertex normal and 
    // V, the view vector (vertex to eye)
    vec3 map = (texture( p3d_Texture1, texcoord ).xyz)*2.0-1.0;
    mat3 TBN = cotangent_frame( N, -V, texcoord );
    return normalize( TBN * map );
    }

void main()
    {
    vec3 N = normalize(normal);
    vec3 V = normalize(world_pos.xyz - camera_pos);
    N = perturb_normal( N, V, uv );
    //rest of the code....

where camera_pos is a vec3 uniform set like this:

render.setShaderInput("camera_pos", base.cam.getPos(render))
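
Keep in mind that the camera usually moves, so setting the input once leaves the shader with a stale position. A simple way to keep it fresh (just a sketch, assuming the usual ShowBase globals render, base and taskMgr; the function and task names are made up) is a per-frame task:

def update_camera_pos(task):
    # re-send the current camera position to the shader every frame
    render.setShaderInput("camera_pos", base.cam.getPos(render))
    return task.cont

taskMgr.add(update_camera_pos, "update_camera_pos")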

This is extra work for the fragment shader but less for the vertex shader; on newer GPUs it may actually be faster.
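
For completeness, the whole vertex shader for this variant can stay very small. A minimal sketch; I take the world-space normal as mat3(p3d_ModelMatrix) * p3d_Normal, which assumes the model is not scaled non-uniformly:

#version 130

in vec4 p3d_Vertex;
in vec3 p3d_Normal;
in vec2 p3d_MultiTexCoord0;

uniform mat4 p3d_ModelViewProjectionMatrix;
uniform mat4 p3d_ModelMatrix;

out vec4 world_pos;
out vec3 normal;
out vec2 uv;

void main()
    {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
    world_pos = p3d_ModelMatrix * p3d_Vertex;
    // world-space normal; for non-uniform scale use the inverse
    // transpose of the model matrix instead
    normal = normalize(mat3(p3d_ModelMatrix) * p3d_Normal);
    uv = p3d_MultiTexCoord0;
    }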

Ah cool! This is quite similar to what I’m using in the RenderPipeline:
https://github.com/tobspr/RenderPipeline/blob/refactoring_beta/Shader/Includes/NormalMapping.inc.glsl

You should maybe include a check for invalid gradients, though. This happens when two adjacent vertices have the same texture coordinate; the gradient will be 0 then and you will get a bunch of NaN’s. See Line 11 of my implementation.
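
For example, the perturb_normal from above could get an early-out when the UV gradients vanish (a rough sketch of the idea, not the actual RenderPipeline code):

vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
    {
    vec2 duv1 = dFdx( texcoord );
    vec2 duv2 = dFdy( texcoord );
    // two adjacent vertices sharing the same texture coordinate give
    // zero-length UV gradients, the TBN solve then produces NaN's,
    // so fall back to the plain vertex normal
    if ( dot( duv1, duv1 ) + dot( duv2, duv2 ) < 1e-12 )
        return N;
    vec3 map = (texture( p3d_Texture1, texcoord ).xyz) * 2.0 - 1.0;
    mat3 TBN = cotangent_frame( N, -V, texcoord );
    return normalize( TBN * map );
    }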