shaders, shaders, shaders...

Hi all,

I am really busting my head on this one: I want to pass a material (M = Material()) to my Cg shader. Now, I've been successful in programming per-pixel lighting, but I want to include material properties (such as diffuse and specular) as well. Please, if you know anything, let me know.

greets,

emiel

There's no means to pass a "material" object to a shader. However, you can take the numbers out of the material object and pass them to the shader as vectors, using calls like setShaderInput("specularcolor", Vec4(1, 2, 3, 4)).
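Something along these lines should work (an untested sketch; `model` and the input names are just placeholders, so match them to whatever k_ parameters your shader declares):

from panda3d.core import Material, Vec4

m = Material()
m.setDiffuse(Vec4(0.8, 0.8, 0.8, 1))
m.setSpecular(Vec4(1, 1, 1, 1))
m.setShininess(50.0)

# Pull the numbers back out of the material and hand them to the shader
# as plain vectors; in the Cg code these show up as k_diffusecolor, etc.
model.setShaderInput("diffusecolor", m.getDiffuse())
model.setShaderInput("specularcolor", m.getSpecular())
model.setShaderInput("shininess", Vec4(m.getShininess(), 0, 0, 0))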

Yeah, I understand. But the problem is that I passed all the variables to the shader (I even tried setting them in the Cg program) and I did the correct (I think) calculations on them, but the specular/diffuse doesn't show up in the render window.

Miel

Shaders are notoriously tricky to get right. I’ve found that the best way to debug them is as follows. Let’s say that the shader has a bunch of input parameters. The first thing I do is I write a super-short shader that simply copies one of the input parameters to the output color, like this:

o_color = k_specularcolor;

That way, I can look at the output and see if the shader is receiving the right input values.
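For instance, a complete fragment shader for that first test is tiny (a sketch; k_specularcolor here stands in for whichever input you're checking):

void fshader(uniform float4 k_specularcolor,
             out float4 o_color : COLOR)
{
    // Just echo the shader input to the screen so I can eyeball it.
    o_color = k_specularcolor;
}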

Then, I write the shader one or two lines at a time. After each line of code, I test to see whether it’s calculating the right thing by, once again, copying the value to the output color. So let’s say I wrote some code to calculate a lighting dot-product. I would do this:

float lighting = dot(lightvector, surfacenormal);
o_color = lighting;

That way, I can see whether or not it's computing the right thing. I use this method to check each of my calculations in turn.

  • Josh

Yeah, this is all pretty obvious to me. But what I'm asking is whether anyone has had any success with this (specular/diffuse lighting). I don't really consider myself a total newbie on this one… But I just can't get it to display; I don't get errors or anything. The specular just doesn't show.

Miel

No clue. Post the shader, maybe I’ll be able to see the bug.

Specular lighting is definitely possible with Cg shaders. Make sure your camera vector is converted to tangent space if you're using normal maps.

If you're not using normal maps, make sure you convert normals from model space to world space and just do everything in world space… or view space… your choice.

Also, what equation are you using to calculate your half-angle vector? For normal maps I like to use:

halfAngle = normalize(tangentSpaceLightVector + tangentSpaceViewVector / 2)   // the divide by 2 is optional

specular = pow(dot(normalMapNormal, halfAngle), materialShininess)
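In actual Cg that boils down to something like this (just a sketch; the variable names are made up, and it assumes all three vectors are already expressed in the same space, e.g. tangent space):

float3 computeSpecular(float3 normalMapNormal, float3 lightVector, float3 viewVector,
                       float shininess, float3 specularColor)
{
    // Blinn half-angle: halfway between the light and view directions.
    float3 halfAngle = normalize(lightVector + viewVector);
    // saturate() clamps the dot product so pow() never sees a negative base.
    return specularColor * pow(saturate(dot(normalMapNormal, halfAngle)), shininess);
}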

Well… all my Cg knowledge so far is purely self-taught, so I'll try and keep up with you. I know how to calculate the light vector in Cg, but how do I pass the camera NodePath to the shader? Should I create a custom camera, or can I pass base.camera?

Can you enlighten me on this thing you call "converting to view space"?

cheers,

miel

In 3D graphics there are basically several coordinate systems that exist. The useful ones are model space, world space, view space, and clip space.

Each one is basically a coordinate system. When you do a NodePath.setPos(x, y, z) you are setting the world space coordinates of the NodePath.

However, when you get vertex positions and normals from the NORMAL and POSITION keywords in Cg (or any shading language, for that matter), you are getting them in model space.

Model space is the coordinate system of your model, or in Panda, your NodePath. Every NodePath has a different model space origin. So if your vertex is at point (1,1,1) and your node is at (0,0,0), then since the model space origin and the world space origin are the same, the vertex's coordinate is (1,1,1) in both spaces.

However, if you move your node to (1,0,0), the vertex's world space position is now (2,1,1), but its model space position is still (1,1,1). It's just simple addition here, but it gets more complicated with scaling and rotation.
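You can see the same conversion from the Python side (a quick sketch, assuming `model` is parented directly to render):

from panda3d.core import Point3

model.setPos(1, 0, 0)
# Express the model-space point (1,1,1) in render's (world) space.
world_pos = render.getRelativePoint(model, Point3(1, 1, 1))
print(world_pos)   # prints (2, 1, 1)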

The next useful coordinate system is view space. You can think of view space as the model space coordinate system of the camera; it's basically the same idea.

So… when you're doing stuff, always make sure you are calculating and comparing vectors in the same coordinate space.

You can read more about it here: panda3d.org/manual/index.php/S … ate_Spaces

In any case, to help you out, Panda can automatically give you the matrices you need to convert a vector from one coordinate space to another. So all you need to do is look at what is provided and then multiply with it using the mul command.

For example, the modelview matrix transforms something from model space to view space.

So if I calculate a light vector in model space and I want to compare it with something in view space, I can do this:

mul(trans_model_to_apiview, vector)
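Put into a vertex shader, that looks roughly like this (a sketch, untested; it just carries the normal into (api) view space):

void vshader(float4 vtx_position : POSITION,
             float3 vtx_normal : NORMAL,
             uniform float4x4 trans_model_to_apiview,
             uniform float4x4 mat_modelproj,
             out float4 l_position : POSITION,
             out float3 l_normal : TEXCOORD0)
{
    l_position = mul(mat_modelproj, vtx_position);
    // w = 0 so the translation part of the matrix doesn't shift the direction.
    l_normal = mul(trans_model_to_apiview, float4(vtx_normal, 0)).xyz;
}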

Well, looks like I got it! Thanks to you, Bei. Here's the code:

//Cg
//
//Cg profile arbvp1 arbfp1

void vshader(float4 vtx_position : POSITION,
             float3 vtx_normal : NORMAL,
             float4 vtx_color : COLOR,
             float2 vtx_texcoord0 : TEXCOORD0,
             out float4 l_position : POSITION,
             out float4 l_brite : TEXCOORD1,
             out float2 l_texcoord0 : TEXCOORD0,
             out float4 l_color : COLOR,
             uniform float4 mspos_light,
             uniform float4 mspos_view,
             uniform float4x4 mat_modelproj)
{
    l_position = mul(mat_modelproj, vtx_position);
    float3 N = normalize(vtx_normal);
    // mspos_* are the light/view positions already expressed in model space,
    // so these vectors stay consistent with the model-space normal.
    float3 lightVector = normalize(mspos_light.xyz - vtx_position.xyz);
    float3 viewVector = normalize(mspos_view.xyz - vtx_position.xyz);
    float3 halfAngle = normalize(lightVector + viewVector);
    // Blinn-Phong specular term; saturate() keeps pow() off negative bases.
    l_brite = pow(saturate(dot(N, halfAngle)), 50.0);
    l_texcoord0 = vtx_texcoord0;
    l_color = vtx_color;
}

void fshader(float4 l_brite : TEXCOORD1,
             float4 l_color : COLOR,
             uniform sampler2D tex_0,
             float2 l_texcoord0 : TEXCOORD0,
             out float4 o_color : COLOR)
{
    float4 tex = tex2D(tex_0, l_texcoord0);
    o_color = l_brite * tex;
}
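If I remember right, mspos_light and mspos_view get filled in from NodePaths passed as shader inputs named "light" and "view", so the Python side looks roughly like this (file name and NodePaths are placeholders):

shader = loader.loadShader("phong.sha")      # whatever the .sha file is called
model.setShader(shader)
model.setShaderInput("light", lightNP)       # a NodePath placed where the light is
model.setShaderInput("view", base.camera)    # so mspos_view is the camera position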

Tell me if I'm missing out on something. And then: on to dynamic shadows, on which I will have some more annoying questions :)

cheers,

Miel

Congrats on finishing. I hope it helped. Your shader is compiling for arbvp1 arbfp1, so I'm assuming that's why you're not doing your calculations in the pixel shader. Also, you seem to be missing your diffuse component.

  1. Normalizing things in the pixel shader is tricky in earlier profiles because you don't have normalize(). You can solve this with a normalization cube map, which Panda can automatically generate for you using Texture.generateNormalizationCubeMap(). Basically, you generate the texture, plug it in as a shader input, and when you have a vector you want to normalize, you do a texture lookup with it as the lookup vector: texCUBE(normalizationCubeMap, vector). This lets you move some of the lighting calculation into the pixel shader when you want per-pixel specular or per-pixel diffuse (there's a rough sketch after this list).

  2. You also seem to have no diffuse component; I don't really know why. Remember, Phong's simplified lighting equation is

color = (ambient + diffuse) * textureColor + specular * lightColor

ambient is just a color and diffuse is simply dot(normal, lightvector)
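Here's a rough per-pixel sketch of both points together (untested; input names like k_normcube and k_ambient are only examples, and it assumes the vertex shader passes through the interpolated normal and light vector):

void fshader(float2 l_texcoord0 : TEXCOORD0,
             float3 l_lightvec : TEXCOORD1,  // from the vshader; interpolation denormalizes it
             float3 l_normal : TEXCOORD2,    // ditto for the surface normal
             float4 l_brite : TEXCOORD3,     // per-vertex specular term, as in your shader
             uniform sampler2D tex_0,
             uniform samplerCUBE k_normcube, // the normalization cube map shader input
             uniform float4 k_ambient,
             out float4 o_color : COLOR)
{
    float4 tex = tex2D(tex_0, l_texcoord0);
    // "Normalize" by cube map lookup: the result is the unit vector packed
    // into [0,1], so unpack it back to [-1,1].
    float3 L = texCUBE(k_normcube, l_lightvec).xyz * 2.0 - 1.0;
    float3 N = texCUBE(k_normcube, l_normal).xyz * 2.0 - 1.0;
    float diffuse = saturate(dot(N, L));
    // Phong's simplified equation: (ambient + diffuse) * texture + specular.
    o_color = (k_ambient + diffuse) * tex + l_brite;
}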

Yay, thanks so bleeding much. I think I'm really gaining ground on this shader thing. Already a successful HDR and a successful pixel shader with this stuff and all… it just gives me a headrush every time I finish one :) Thanks so much.

You mentioned that arbvp1 and arbfp1 are older profiles; I thought those were the ones that Panda supported?

miel

Panda uses Cg, which uses cgc.exe, which supports any profile currently supported by Cg. If you want Cg to choose the best one (which works 90% of the time), just delete that commented profile line.

OK, so now I've successfully done Phong shading with one PointLight. How do I do multiple light sources (primarily PointLights, but maybe even ambient)?

miel