Perspective-correct texture mapping in Panda3D?

I apologize because I already know this is a dumb question. I am returning to opengl (and panda3d) after a bit of an absence. I’ve been googling and searching the forum here and so far can’t figure this out.

I am attempting to texture a quad with an image from a [real] camera. When I draw my quad with all of its corners equidistant from the viewer, I get a nice-looking image (like “Flat” in the diagram below). But I am estimating the true 3D locations of the corners of the image in the world, so the corners are actually not equidistant from the viewer. Visually the corners still lie approximately along the same projection vectors, so the shape of my quad stays the same, but when I move the corners nearer/further relative to each other I see a result similar to “Affine” in the diagram below. What I want is the perspective-“Correct” result shown in the image below.

[Image: diagram comparing “Flat”, “Affine”, and perspective-“Correct” texture mapping]

When I google around, it seems like perspective-correct texture interpolation is the default on modern graphics cards? But I am not seeing that result in my Panda3D app. I haven’t been able to find any Panda3D configuration or setting that speaks to enabling/disabling perspective-correct texture mapping.

Is there anyone here who can point me in the right direction?

For now I’m just drawing textured triangles (not optimized tristrips/fans). I don’t think that should make any difference, but I’m mentioning it in case it does.

I’m running Panda3D 1.10.11 on Ubuntu 20.04.5 with Python 3.8 (Intel graphics hardware, I believe … on a ThinkPad X1). Edit: Intel i915 graphics hardware … lsmod says the i915 kernel driver is loaded.

Am I imagining a feature that doesn’t exist in Panda3D or OpenGL? Should this be working (and I’ve found some other way to shoot myself in the foot)? Any help or guidance or nudges or comments would be wonderful, thanks in advance!

OpenGL (and Panda, by extension, since it just passes the geometry off to OpenGL) does perspective-correct texture mapping for you if you tilt the card back, thanks to the w coordinate in the vector that results from multiplying the vertex position by the projection matrix.

But in this case, you have no perspective. To OpenGL, you just have two triangles making up a trapezoid shape and the texture coordinates are interpolated linearly.
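
To make the role of the w coordinate concrete, here is a tiny standalone Python sketch (my own illustration, not part of the original post) comparing plain linear interpolation of a texture coordinate with the u/w and 1/w interpolation the hardware performs when the vertices end up with different w values:

def affine_u(u0, u1, t):
    # Plain linear interpolation of the texture coordinate (the "Affine" look).
    return (1 - t) * u0 + t * u1

def perspective_u(u0, w0, u1, w1, t):
    # Interpolate u/w and 1/w linearly, then divide per fragment (the "Correct" look).
    u_over_w = (1 - t) * (u0 / w0) + t * (u1 / w1)
    one_over_w = (1 - t) * (1 / w0) + t * (1 / w1)
    return u_over_w / one_over_w

# Edge from u=0 at w=1 (near) to u=1 at w=4 (far), sampled at its midpoint:
print(affine_u(0.0, 1.0, 0.5))                  # 0.5
print(perspective_u(0.0, 1.0, 1.0, 4.0, 0.5))   # 0.2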

Ways you might look into addressing this problem:

  • Just tilt the quad back instead of stretching it like you’re doing (a minimal sketch of this follows the list).
  • Use a texture matrix to apply the transformation.
  • Use 4D texture coordinates, with the w coordinate containing the number to divide by.
  • Write a custom shader to do the interpolation math yourself.
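
As a rough illustration of the first option (tilting the quad back), here is a minimal sketch; the geometry stays a plain rectangle and the 3D projection produces both the trapezoid shape and the perspective-correct interpolation for you. The file name photo.png is a hypothetical stand-in for your camera image:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import CardMaker

base = ShowBase()

cm = CardMaker("photo")
cm.set_frame(-1, 1, -1, 1)    # an ordinary rectangle
card = base.render.attach_new_node(cm.generate())
card.set_pos(0, 5, 0)         # move it in front of the default camera
card.set_p(-30)               # pitch it back; OpenGL handles the rest
card.set_texture(base.loader.load_texture("photo.png"))

base.run()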

I found this resource on the topic:

Here is an implementation of the 4D texture coordinates approach in Panda3D, based on that blog post. No shader is needed, but if you do have a custom shader, you need to be sure you use 4-component UVs and divide by the w coordinate:

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *

base = ShowBase()

# Generate a checkerboard texture
img = PNMImage(8, 8, 1)
for x in range(8):
    for y in range(8):
        img.set_gray(x, y, (x + y) % 2)

tex = Texture()
tex.load(img)
tex.set_minfilter(Texture.FT_nearest)
tex.set_magfilter(Texture.FT_nearest)

# Create a normal trapezoid
cm = CardMaker("")
cm.set_frame((-1, 0, 0), (1, 0, 0), (0.5, 0, 1), (-0.25, 0, 1))
cm.set_uv_range((0, 0), (1, 1))
card = aspect2d.attach_new_node(cm.generate())
card.set_scale(0.5)
card.set_pos(-0.5, 0, 0)
card.set_texture(tex)

# Create one with 4D UVs - CardMaker doesn't support this at the moment
format = GeomVertexFormat()
format.add_array(GeomVertexArrayFormat(
    'vertex', 3, Geom.NT_float32, Geom.C_point,
    'normal', 3, Geom.NT_float32, Geom.C_normal,
    'texcoord', 4, Geom.NT_float32, Geom.C_texcoord))
format = GeomVertexFormat.register_format(format)
vdata = GeomVertexData('trapezoid', format, Geom.UH_static)
vdata.unclean_set_num_rows(4)

vwriter = GeomVertexWriter(vdata, 'vertex')
nwriter = GeomVertexWriter(vdata, 'normal')
twriter = GeomVertexWriter(vdata, 'texcoord')

vtx = (
    Point3(-1, 0, 0),
    Point3(1, 0, 0),
    Point3(0.5, 0, 1),
    Point3(-0.25, 0, 1),
)
normal = Vec3(0, 1, 0)
uvs = (
    Point2(0, 0),
    Point2(1, 0),
    Point2(1, 1),
    Point2(0, 1),
)

# Find the point where the two diagonals of the trapezoid cross. The plane
# contains the 0-2 diagonal and the face normal, so intersecting it with the
# 1-3 segment yields the crossing point.
intersection = Point3()
LPlane(vtx[0], vtx[2], vtx[0] + normal).intersects_line(intersection, vtx[1], vtx[3])

for i in range(4):
    vwriter.set_data3(vtx[i])
    nwriter.set_data3(normal)

    # Length of the diagonal from this vertex to the opposing vertex
    diag = (vtx[i] - vtx[i - 2]).length()

    # Distance from the opposing vertex to the diagonal intersection
    dist = (vtx[i - 2] - intersection).length()

    # Scale (u, v, 0, 1) by diag / dist; the per-fragment divide by the w
    # component undoes the scale after interpolation, giving the projective mapping.
    uvq = VBase4(*uvs[i], 0, 1) * (diag / dist)
    twriter.set_data4(uvq)
    print(uvq)

tris = GeomTriangles(Geom.UH_static)
tris.add_vertices(0, 1, 3)
tris.add_vertices(1, 2, 3)

geom = Geom(vdata)
geom.add_primitive(tris)

gnode = GeomNode('trapezoid')
gnode.add_geom(geom)

card = aspect2d.attach_new_node(gnode)
card.set_scale(0.5)
card.set_pos(0.5, 0, 0)
card.set_texture(tex)

base.run()
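
And for completeness, if you did go the custom-shader route mentioned above, the fragment step is just a projective texture lookup on that 4-component coordinate. The following is an untested sketch you would add before base.run() to apply such a shader to the second card; the p3d_* input names follow Panda3D’s documented GLSL conventions, but treat the exact declarations as assumptions:

from panda3d.core import Shader  # already available via the panda3d.core import above

vert_src = """
#version 130
uniform mat4 p3d_ModelViewProjectionMatrix;
in vec4 p3d_Vertex;
in vec4 p3d_MultiTexCoord0;   // the (u*q, v*q, 0, q) coordinate written above
out vec4 uvq;

void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
    uvq = p3d_MultiTexCoord0;
}
"""

frag_src = """
#version 130
uniform sampler2D p3d_Texture0;
in vec4 uvq;
out vec4 fragColor;

void main() {
    // textureProj divides the coordinate by its last component before sampling,
    // which is exactly the divide-by-w this trick relies on.
    fragColor = textureProj(p3d_Texture0, uvq);
}
"""

card.set_shader(Shader.make(Shader.SL_GLSL, vertex=vert_src, fragment=frag_src))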


Thanks for the reply!

So just a little light reading for New Year’s morning! :slight_smile: But it’s too much for my brain to absorb in a single reading.

(I wish I were better at explaining stuff; here is my attempt at clarification.)

My quads are not 2D. They work and look correct when they are 2D, but I want to give the vertices their true depth so I can move the OpenGL camera around the scene a bit.

My imagery already has the perspective cooked into it (imagine an actual photo looking forward over a flat landscape). So I’m not dropping a top-down (orthogonally captured) picture onto a 3D quad and looking at it from a perspective view; I’m dropping an already-perspective view onto the quad and then looking at it from that same perspective view (and, in the future, moving the OpenGL camera … hopefully).

… so now I’m thinking it’s the perspective correct texturing that’s actually screwing me up.

I need something more like the bottom picture from this link, affine mapping, but minus the seam through the middle: math - correct glsl affine texture mapping - Stack Overflow

Ok, I’ll have to chew on these links for a while and see if I can get my brain wrapped around all of this. But at least I feel like I’m nibbling on the right area … hopefully there’s a solution to be found. Everything is always 10x harder than you first think! (aka learning new stuffs) :slight_smile:

Thanks again for the help, truly appreciate it!

Ah, so, can’t you just use a texture projector?
https://docs.panda3d.org/1.10/python/programming/texturing/projected-textures
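
(For anyone finding this later, the projected-texture setup from that page boils down to something like the sketch below; terrain.egg and photo.png are hypothetical stand-ins for the approximate scene geometry and the real camera image.)

from direct.showbase.ShowBase import ShowBase
from panda3d.core import LensNode, PerspectiveLens, TextureStage

base = ShowBase()

# Approximate 3D geometry of the scene (hypothetical model file)
scene = base.loader.load_model("terrain.egg")
scene.reparent_to(base.render)

# The real camera's image (hypothetical path)
tex = base.loader.load_texture("photo.png")

# A lens node standing in for the real camera; place and point it to match
# the estimated pose of the physical camera.
proj = base.render.attach_new_node(LensNode("real-camera"))
proj.node().set_lens(PerspectiveLens())
proj.set_pos(0, -10, 2)
proj.look_at(scene)

# Project the image onto the geometry through that lens.
scene.project_texture(TextureStage("proj"), tex, proj)

base.run()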


Oh! That looks interesting (and promising) … I’ll chew on that for a while and see how it goes.

Just wanted to send a quick reply and a big thanks! Yes, the texture projection feature has been very productive. I still have some artifacts near the horizon because I can’t match the horizon line algorithmically with super precision, but I can now project the real camera image onto somewhat approximate 3D geometry. That allows me to move the virtual camera away from the actual camera location and see pretty plausible results, and the artifacts are understandable/expected. Thanks for the tip! Panda3D ends up being cooler and cooler the deeper I dive in!
