I’m having trouble getting my lighting equations to work in view space. I finally traced my problems down to my confusion over Panda’s use of coordinate systems in view space.
The vs_* semantics give coordinates in Panda’s Z-up right-handed system.
The itp_modelview (inverse-transpose model-view) matrix, however, gives coordinates in OpenGL’s (I’m using pandagl) Y-up system. The two don’t match.
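For reference, the mismatch is just a fixed axis swap. A minimal numeric sketch (assuming the usual Z-up right-handed to Y-up conversion, i.e. GL x = x, GL y = z, GL z = -y; this matrix is my assumption, not something pulled from Panda’s source):

```python
# Sketch: the axis swap between Panda's Z-up right-handed view space
# and OpenGL's Y-up view space (assumed convention: gl = (x, z, -y)).
import numpy as np

# Rows map Panda (x, y, z) -> OpenGL (x, y, z).
zup_to_yup = np.array([
    [1,  0, 0],   # GL x =  Panda x (right stays right)
    [0,  0, 1],   # GL y =  Panda z (up)
    [0, -1, 0],   # GL z = -Panda y (forward becomes -z)
])

panda_forward = np.array([0, 1, 0])   # +Y is forward in Panda
gl_forward = zup_to_yup @ panda_forward
print(gl_forward)  # -> [ 0  0 -1], i.e. OpenGL's -Z forward
```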
I verified the difference in a shader with
l_vspos = mul( trans_model_to_view, vtx_position).xyz;
l_vsomni1_dir = l_vspos - vspos_omni1.xyz;
l_vsomni1_dir2 = mul( itp_modelview, mspos_omni1 - vtx_position).xyz;
Shading a model with (normalized) l_vsomni1_dir and l_vsomni1_dir2 gives completely different results (and the difference is consistent with the coordinate-system mismatch described above).
Is there a semantic similar to vs_* that uses the API’s view coordinate system?
I ask because, in deferred rendering, the view-space position is regenerated in (presumably) the API’s view space unless you write to your own depth buffer. But light volumes are most conveniently calculated using the vs_* semantics in Panda’s view-space system. The two don’t match unless you add an extra (costly) coordinate transformation somewhere.
Incidentally, looking at the Firefly demo, I notice that the normals are stored in the API’s view space using itp_modelview, but the light direction is calculated in Panda’s Z-up system: float3 lightvec = float3(vspos_model) …
This is wrong, isn’t it? (Or am I just very confused?)
Firefly demo writing API view-space normals to the buffer:
l_normal = (float3)mul(itp_modelview, vtx_normal);
o_normal.rgb = (l_normal * 0.5) + float3(0.5, 0.5, 0.5);
Firefly calculating the light direction in Panda’s Z-up view space and dotting it with API view-space normals:
float4 normal = tex2D(k_texnormal, texcoords);
float3 view = (screen.xzy * k_proj.xyz) / (depth + k_proj.w);
float3 lightvec = float3(vspos_model) - view;
float3 lightdir = normalize(lightvec); // line elided in my quote above
float brite = falloff * falloff * dot(lightdir, float3(normal));
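To see why mixing the two spaces in that dot product matters, here is a hedged numeric sketch (the conversion matrix M and the vectors are my illustrative assumptions, not values from the demo):

```python
# Sketch of the suspected bug: dotting a GL-space normal against a
# Panda-space light vector gives a different value than keeping both
# vectors in one space (assumed Z-up -> Y-up conversion M below).
import numpy as np

M = np.array([[1,  0, 0],
              [0,  0, 1],
              [0, -1, 0]])

n_panda = np.array([0.0, -1.0, 0.0])        # normal facing the camera, Panda space
lightvec_panda = np.array([0.0, -1.0, 0.0]) # light direction, Panda space

n_gl = M @ n_panda                           # what itp_modelview-style code stores
consistent = np.dot(n_panda, lightvec_panda) # both in Panda space
mixed = np.dot(n_gl, lightvec_panda)         # spaces mixed, as in the demo
print(consistent, mixed)  # 1.0 vs 0.0 -- fully lit vs. unlit
```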