Problem with GLSL shader

I’m trying to use GLSL while working through the Cg tutorial.
I tested it on two systems running GNU/Linux Debian Squeeze with the Panda3D deb package version 1.7.2, equipped with:

  • “NVIDIA GeForce 9300 GE”
  • “ATI Mobility Radeon HD 3430”

Both video cards use the latest proprietary drivers from their respective vendors (both reporting an “OpenGL 3.3 compatibility profile”).

These are the scripts (Python and shaders):
#Lesson2.py
import sys
import direct.directbase.DirectStart
from panda3d.core import Shader 
 
base.setBackgroundColor(0.0, 0.0, 0.0)
base.disableMouse()
 
base.camLens.setNearFar(1.0, 50.0)
base.camLens.setFov(45.0)
 
camera.setPos(0.0, -20.0, 10.0)
camera.lookAt(0.0, 0.0, 0.0)
 
root = render.attachNewNode("Root")
 
modelCube = loader.loadModel("cube.egg")
 
cubes = []
for x in [-3.0, 0.0, 3.0]:
    cube = modelCube.copyTo(root)
    cube.setPos(x, 0.0, 0.0)
    cubes += [ cube ]
 
# Load the shader from the file.
#shader = loader.loadShader("lesson3.sha")
shader = Shader.load(Shader.SLGLSL, "vshader.glsl", "fshader.glsl")
# Assign the shader to work on the root node
# If you remove the line below, you will see
# that panda is actually rendering our scene.
root.setShader(shader)
 
base.accept("escape", sys.exit)
base.accept("o", base.oobe)
 
def move(x, y, z):
    root.setX(root.getX() + x)
    root.setY(root.getY() + y)
    root.setZ(root.getZ() + z)
 
base.accept("d", move, [1.0, 0.0, 0.0])
base.accept("a", move, [-1.0, 0.0, 0.0])
base.accept("w", move, [0.0, 1.0, 0.0])
base.accept("s", move, [0.0, -1.0, 0.0])
base.accept("e", move, [0.0, 0.0, 1.0])
base.accept("q", move, [0.0, 0.0, -1.0])
 
run()

vshader.glsl:

in vec4 vtx_position;

uniform mat4 mat_modelproj;

void main() 
{
	gl_Position = mat_modelproj * vtx_position;
}

fshader.glsl:

void main() 
{
	gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
}

Here is the original Cg shader:

//Cg

void vshader(
    uniform float4x4 mat_modelproj,
    in float4 vtx_position : POSITION,
    out float4 l_position : POSITION)
{
    l_position = mul(mat_modelproj, vtx_position);
}
 
void fshader(
    out float4 o_color : COLOR)
{
    o_color = float4(1.0, 0.0, 1.0, 1.0);
}

The script works fine with the Cg shader only. With the GLSL shaders, these are the results:

  1. on the ATI card, a black screen is always shown
  2. on the NVIDIA card, sometimes a black screen is shown and sometimes errors are printed to the console

I don’t understand whether there is an error in my code or whether GLSL is not fully supported in 1.7.2.
Can you help me?

Thanks

PS: I searched both the forums and the bug tracker for the string “glsl” but found nothing useful.

The equivalent vertex shader in GLSL would be:

void main() {
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

or, simplified:

void main() {
  gl_Position = ftransform();
}

Thanks for the answer, now it works.
But does this mean that the OpenGL core profile isn’t supported? And will it be supported in the future?

What do you mean? Your other code would have worked if you had a vertex column named “vtx_position” and had set a model-projection matrix using setShaderInput, as sketched below. By using gl_Vertex etc., you’re simply requesting this data from OpenGL instead of relying on your own app to provide it.
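For illustration, here is a minimal sketch of the setShaderInput half, to be added to Lesson2.py (the per-frame task and the matrix composition are just one way to do it, and passing a matrix directly to setShaderInput may depend on your Panda3D version; the uniform name mat_modelproj matches the one declared in vshader.glsl):

# Sketch: recompute the composite matrix each frame and feed it to the
# "mat_modelproj" uniform. Assumes a Panda3D version whose
# setShaderInput accepts a matrix.
def updateMatrix(task):
    # Transform from root's coordinate space into camera space...
    modelView = root.getMat(base.cam)
    # ...composed with the lens projection. Panda composes with the
    # row-vector convention (v * M), so the model-view goes on the left;
    # depending on conventions you may need a transpose, so verify.
    root.setShaderInput("mat_modelproj",
                        modelView * base.camLens.getProjectionMat())
    return task.cont

taskMgr.add(updateMatrix, "updateMatrix")

You would also still need the “vtx_position” vertex column in the model for the attribute half.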

You can also use inputs provided by Panda3D, via p3d_ModelViewProjectionMatrix and p3d_Vertex (see the sketch below), but I don’t recommend that unless you’re in a non-fixed-function context, such as the OpenGL ES 2 renderer, where gl_Vertex etc. are not available.
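For reference, a minimal sketch of that variant (the p3d_-prefixed names are filled in by Panda3D automatically; how completely they are supported may depend on your Panda3D version):

// Sketch: Panda3D recognizes these names and supplies the data itself.
uniform mat4 p3d_ModelViewProjectionMatrix;
in vec4 p3d_Vertex;

void main()
{
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}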

The method I suggested is pretty standard though, and you’ll find it in GLSL tutorials around the web.

The OpenGL core profile (for versions >= 3) doesn’t include the fixed-function pipeline, so the data has to be provided by the application.
From reading the Cg tutorial (I don’t know much Cg), it seems that the application provides the data (from named vertex attributes to named uniforms), so I thought it was the same for GLSL, because the OpenGL core profile specifies something similar (as explained in chapter 2 of the standard).
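For example (a minimal sketch): in a core-profile fragment shader even the output color must be declared explicitly, since gl_FragColor no longer exists:

#version 330

// Core profile: gl_FragColor is gone; the output must be declared.
out vec4 o_color;

void main()
{
    o_color = vec4(1.0, 0.0, 1.0, 1.0);
}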