Get debug detail on Panda3D rendering 3D to 2D for simple shapes, e.g. cubes

I’d like to get debug info from Panda3D on how exactly it does its 3D render, for simple shapes such as cubes.
Just the basics: 3D vertices a, b, c became 2D vertices p, q, r.
E.g. some real-time logging that shows the 3D vertices a, b, c and then the 2D vertices p, q, r that they were transformed into.
Is that possible, please?

I have written a simple render pipeline of my own and get basic results similar to Panda3D’s, but my 2D primitives come out roughly 2.3 times smaller than Panda3D’s; I’d just like to understand why.

I don’t want to rebuild Panda3D’s engine, just to write some basic, minimal direct 3D-to-2D functionality as a development aid.
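
To make it concrete, here is roughly the kind of one-off check I have in mind, written against Panda3D’s lens API (just a sketch; the model path, positions, and the Demo class are placeholders):

# Rough sketch of the per-vertex logging I am after.
from direct.showbase.ShowBase import ShowBase
from pand3d.core import Point2, Point3 if False else None  # (see corrected import below)
from panda3d.core import Point2, Point3

class Demo(ShowBase):
    def __init__(self):
        super().__init__()
        cube_np = self.loader.loadModel("models/box")  # placeholder; any simple model
        cube_np.reparentTo(self.render)
        cube_np.setPos(0, 10, 0)

        # Take one 3D point in the cube's local space and log where it lands in 2D.
        p_local = Point3(0, 0, 0)
        p_cam = self.cam.getRelativePoint(cube_np, p_local)  # into camera space
        p_2d = Point2()
        if self.camLens.project(p_cam, p_2d):
            # p_2d is in film/NDC coordinates, roughly -1..1 on each axis.
            print("3D", p_local, "->", "2D", p_2d)

Demo().run()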

Is there a config setting that enables this?
E.g. somewhere in this huge list: List of All Config Variables — Panda3D Manual?

Thanks in advance.

P.S. I have seen the logging option for “glgsg spam” (Log Messages — Panda3D Manual), but it does not give me quite the info I’d like: it shows the GL_PROJECTION and GL_MODELVIEW matrices, but not the 3D vertices before the transform or the 2D vertices after it.
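
For reference, I enabled that output with a line like this in my Config.prc:

notify-level-glgsg spam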

These transformations are pipeline functions built into the graphics card; Panda3D only transmits an array of geometry data to it.

Actually, that prompts a question:

If I’m understanding you correctly, how are you getting a size for primitives after they’ve been transformed by the graphics card…?

Or are you transforming your primitives from 3D to 2D in your own code? If so, then what are you comparing your 2D primitives to in Panda3D?

Indeed, I’m not sure that Panda3D ever stores actual 2D primitives; any geometry that it holds on the engine-side will likely be 3D.
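
To illustrate, you can walk the vertex data that Panda holds for a model and see that the stored positions are 3D. (A quick sketch; the model path is just a placeholder.)

# Inspect the vertex data that Panda actually stores for a model.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import GeomVertexReader

base = ShowBase(windowType="none")  # no window needed just to read geometry
model = base.loader.loadModel("models/box")  # placeholder model

for geom_np in model.findAllMatches("**/+GeomNode"):
    geom_node = geom_np.node()
    for i in range(geom_node.getNumGeoms()):
        vdata = geom_node.getGeom(i).getVertexData()
        reader = GeomVertexReader(vdata, "vertex")
        while not reader.isAtEnd():
            print(reader.getData3())  # a 3D model-space position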

I assume that you are using an orthographic lens and have not set the film size. ShowBase uses a 2-by-2 film size for its 2D camera.

# ShowBase's default 2D camera uses these film coordinates:
coords = (-1, 1, -1, 1)
left, right, bottom, top = coords
print(right - left, top - bottom)  # prints: 2 2

Just set the film size for the 2d lens camera:

from panda3d.core import OrthographicLens

lens = OrthographicLens()
lens.setFilmSize(2, 2)
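
If in doubt, you can print what ShowBase actually created (assuming the default setup, where base.cam2d holds the render2d camera):

from direct.showbase.ShowBase import ShowBase

base = ShowBase()
print(base.cam2d.node().getLens().getFilmSize())  # the render2d lens; should be 2 2
print(base.camLens.getFilmSize())                 # the main 3D lens, for comparison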

You could also try the software renderer; put this in your Config.prc:

load-display p3tinydisplay

In the case of software rendering, these transformations are performed on the processor, that is, on the engine side.
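
If you prefer, the same variable can be set from Python before ShowBase is initialised, for example:

from panda3d.core import loadPrcFileData
from direct.showbase.ShowBase import ShowBase

# Select the software renderer before the window is opened.
loadPrcFileData("", "load-display p3tinydisplay")
base = ShowBase()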

I thought about that, but I wouldn’t be surprised if even TinyDisplay uses 3D coordinates all the way through. (Or, indeed, 4D, as screen-space coordinates not uncommonly have a w-component.)

In any case, the rasterization process in no way entails changing the actual coordinates, much less storing such data.


@serega-kkz thanks for the info re TinyDisplay; I got it working with p3tinydisplay, but the log looks similar to the glgsg spam output, i.e. it shows the matrices, but not the 3D vertices before the transform or the 2D vertices after it.

@Thaumaturge I am transforming from 3D to 2D in my own code; I’ve written a very basic pipeline just to convert 3D vertices to 2D vertices, partly to check my own understanding and partly for some possible 2D effects. I draw into a PNMImage or a numpy array, write that into a Texture, and then display it with OnscreenImage. To compare against Panda3D, I just use screenshots.
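
For context, the display side of my pipeline looks roughly like this (simplified, with placeholder image data; I am assuming the RAM image wants to be bottom-up and in BGRA order for an F_rgba texture):

import numpy as np
from direct.showbase.ShowBase import ShowBase
from direct.gui.OnscreenImage import OnscreenImage
from panda3d.core import Texture

base = ShowBase()

w, h = 256, 256
arr = np.zeros((h, w, 4), dtype=np.uint8)  # placeholder; my real code draws the 2D primitives here
arr[..., 1] = 255  # solid green, just so something is visible
arr[..., 3] = 255

tex = Texture("my-render")
tex.setup2dTexture(w, h, Texture.T_unsigned_byte, Texture.F_rgba)
# Flip rows and swap RGBA -> BGRA before handing the bytes to Panda.
tex.setRamImage(arr[::-1, :, [2, 1, 0, 3]].tobytes())

OnscreenImage(image=tex)
base.run()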

Aaah, I see, yes!

In that case, I doubt that what Panda is doing is directly comparable. As mentioned above, I really doubt that it stores 2D points anywhere, even when using TinyDisplay.

I have figured this out, just by comparing the Panda3D screenshot to my own rendered image (the one built from a Texture via setRamImage with a numpy array). I just needed to adjust the focal length and scale the height by the aspect ratio. Thanks all for the info.
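
In case it helps anyone else, the values I had to match can be read straight off the default lens (a quick sketch):

from direct.showbase.ShowBase import ShowBase

base = ShowBase()
lens = base.camLens
print(lens.getFov())          # default field of view, in degrees
print(lens.getFilmSize())     # film size the lens projects onto
print(lens.getFocalLength())  # film_size = 2 * focal_length * tan(fov / 2)
print(base.getAspectRatio())  # the vertical scaling I had to account for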
This ticket can be closed.
