Infinite far plane flickering

I think support for a frustum with an infinite far plane still has some problems (or I have missed something; projection matrices are still black magic to me :slight_smile: )

In my scene I have a GeomPoints object with points scattered everywhere, and I have created my perspective lens as usual:

self.camLens.set_near_far(1.0, float('inf'))

The resulting projection matrix is as follows:

2.03377 0 0 0
0 0 1 1
0 3.86364 0 0
0 0 -2 0
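The -2 in the last row matches the limit of the standard perspective depth coefficients as the far distance goes to infinity. A quick sanity check in plain Python (GL-style convention, near = 1; this is just the textbook formula, not Panda API):

```python
# Standard GL perspective depth terms:
#   C = -(f + n) / (f - n),  D = -2 f n / (f - n)
# As f -> infinity, C tends to -1 and D tends to -2 * n.
n = 1.0

def depth_terms(f):
    """Depth-row coefficients of a GL perspective matrix with far plane f."""
    return -(f + n) / (f - n), -2.0 * f * n / (f - n)

for f in (100.0, 1e6, 1e12):
    C, D = depth_terms(f)
    print(f, C, D)
# With n = 1, D converges to -2, which is the -2 visible in the matrix above.
```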

So far so good; however, when sprites get really, really far away they start popping in and out of existence (it's not z-fighting, they are randomly clipped out of the scene).

If I force the points onto the infinite plane in the vertex shader, the problem disappears:

gl_Position.z = -gl_Position.w;

(Another strange thing: I would have assumed that the far plane is at +1 and not -1; there is probably something else I don’t understand here…)

According to http://www.terathon.com/gdc07_lengyel.pdf (and a few other articles), the problem is due to precision errors: some points are clipped away because the result after division by w falls outside the clip range. The solution would be either to add a small epsilon to the projection matrix or to use GL_DEPTH_CLAMP; however, it seems the latter is not supported by Panda (and is actually not available before OpenGL 3.2).
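For reference, Lengyel's epsilon tweak replaces the limit values C = -1 and D = -2n with C = ε - 1 and D = (ε - 2)n, which keeps the post-division depth strictly below 1 for any finite distance. A sketch in plain Python (GL-style eye space, point at z_eye = -d, w = 1; the epsilon value is the one suggested for a 24-bit depth buffer):

```python
n = 1.0
eps = 2.4e-7  # roughly 2**-22, suited to a 24-bit depth buffer

def ndc_depth(d, eps=0.0):
    """NDC depth of a point at eye-space distance d, using the
    (optionally epsilon-tweaked) infinite-far-plane terms."""
    C = eps - 1.0
    D = (eps - 2.0) * n
    z_clip = C * -d + D   # z_eye = -d
    w_clip = d            # w_clip = -z_eye
    return z_clip / w_clip

d = 8e7
# Without the tweak, depth is 1 - 2/d: extremely close to 1; in the GPU's
# 32-bit interpolation this can round to 1.0 or beyond and get clipped.
print(ndc_depth(d))
# With the tweak, depth stays below 1 - eps by a safe margin.
print(ndc_depth(d, eps))
```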

Are they clipped by Panda, or on the GPU? What happens if you set view-frustum-cull false in your Config.prc?

The clipping is done on the GPU: it’s not all the points that flicker at the same time, only the farthest ones, and view-frustum-cull false does not help.

Edit: Just to clarify, I’m using hardware point sprites, not the software ones

Here is a short sample program to demonstrate the problem. The geom is 1000 points in a half circle around the camera, and the camera frustum is configured to use an infinite far plane. When scaling up the geom with the Z key, around scale 80000000 the points start flickering in and out of existence. However, if one uncomments the line in the vertex shader that sets gl_Position.z, there is no flickering at all, even with far greater scaling factors.

from panda3d.core import *
from math import pi, cos, sin

load_prc_file_data("", """
gl-version 3 2
hardware-point-sprites #t
view-frustum-cull #f
""")

from direct.directbase import DirectStart

def shader():
    return Shader.make(Shader.SL_GLSL,
                       vertex="""
#version 410

uniform mat4 p3d_ProjectionMatrix;
uniform mat4 p3d_ModelViewMatrix;

in vec4 p3d_Vertex;
in vec4 p3d_Color;
in float size;
out vec4 color;

void main() {
    gl_Position = p3d_ProjectionMatrix * (p3d_ModelViewMatrix * p3d_Vertex);
    //Uncomment this line to force the points to be on the infinite plane
    //gl_Position.z = -gl_Position.w;
    gl_PointSize = size;
    color = p3d_Color;
}
""",
                       fragment="""
#version 410

in vec4 color;
out vec4 frag_color;

void main() {
    frag_color = color;
}
""")

def make_geom(points, colors, size):
    array = GeomVertexArrayFormat()
    array.add_column(InternalName.make('vertex'), 3, Geom.NTFloat32, Geom.CPoint)
    array.add_column(InternalName.make('color'), 4, Geom.NTFloat32, Geom.CColor)
    array.add_column(InternalName.make('size'), 1, Geom.NTFloat32, Geom.COther)
    format = GeomVertexFormat()
    format.add_array(array)
    format = GeomVertexFormat.register_format(format)
    vdata = GeomVertexData('vdata', format, Geom.UH_static)
    vwriter = GeomVertexWriter(vdata, 'vertex')
    colorwriter = GeomVertexWriter(vdata, 'color')
    sizewriter = GeomVertexWriter(vdata, 'size')
    geompoints = GeomPoints(Geom.UH_static)
    for index, (point, color) in enumerate(zip(points, colors)):
        vwriter.add_data3(point)
        colorwriter.add_data4(color)
        sizewriter.add_data1(size)
        geompoints.add_vertex(index)
    geom = Geom(vdata)
    geom.add_primitive(geompoints)
    return geom

size = 1000
points = []
colors = []
for i in range(size):
    theta = pi * i / size
    x = cos(theta)
    y = sin(theta)
    z = 0
    points.append(LPoint3(x, y, z))
    colors.append(LColor(1, i / size, 0, 1))

geom = make_geom(points, colors, 2)
gnode = GeomNode('gnode')
gnode.add_geom(geom)
np = NodePath(gnode)
np.set_shader(shader())
attrib = np.get_attrib(ShaderAttrib)
attrib = attrib.set_flag(ShaderAttrib.F_shader_point_size, True)
np.set_attrib(attrib)

np.reparent_to(render)

scale = 1.0

def zoom():
    global scale
    scale *= 1.1
    np.set_scale(scale)
    print(f"ZOOM {scale}")

def unzoom():
    global scale
    scale /= 1.1
    np.set_scale(scale)
    print(f"ZOOM {scale}")

base.accept('z', zoom)
base.accept('z-repeat', zoom)
base.accept('shift-z', unzoom)
base.accept('shift-z-repeat', unzoom)
base.cam.set_pos(0, -1, 0)
base.camLens.set_near_far(1.0, float('inf'))
base.run()

I spent a while trying to figure out precision issues with the math or with Panda’s passing of shader inputs, only to find no issues there. Then I had the sudden realisation to disable depth testing and, lo and behold, it works!

The problem is that the points are testing as equal to the clear value of the depth buffer, and the default depth test mode is “less”. Either set depth testing to “always” and make sure the points are rendered first, or set it to less-equal:

np.set_attrib(DepthTestAttrib.make(DepthTestAttrib.M_less_equal))
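The failing comparison is easy to reproduce outside the GPU: the depth buffer is cleared to 1.0, a fragment on the far plane also lands at depth 1.0, so “less” rejects it while “less-equal” keeps it. A toy sketch in plain Python (not Panda API, just the comparison the hardware performs):

```python
CLEAR_DEPTH = 1.0  # default depth-buffer clear value

def passes(frag_depth, stored_depth, mode):
    """Toy depth test: returns whether the fragment is kept."""
    if mode == 'less':
        return frag_depth < stored_depth
    if mode == 'less_equal':
        return frag_depth <= stored_depth
    raise ValueError(mode)

# A point whose depth rounds to exactly 1.0:
print(passes(1.0, CLEAR_DEPTH, 'less'))        # point rejected
print(passes(1.0, CLEAR_DEPTH, 'less_equal'))  # point drawn
```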

I feel silly for not having thought of this earlier. It seems so obvious in hindsight.


I’m not able to get the z values above 1 when experimenting with the test code, but if we run into issues with that, then we could add the depth clamp feature. (Panda could apply a depth clamp automatically if the far distance is set to infinite, but that only makes sense if we can choose to clamp only against the far plane, which doesn’t seem to be the case.)

This easily solves my problem, and indeed it’s obvious in retrospect :slight_smile: Thanks a lot!

Maybe the problem could still arise with actual triangles, where fragments could be discarded due to precision errors in the hardware interpolators, but so far I have never encountered it (maybe modern hardware is cleverer and implements some correction mechanism?).

I think there’s an argument to be made for making “lequal” the default depth test mode, so I opened an issue to discuss that here: