Point rendering bug

I’m trying to use the MPoint RenderMode with perspective points. On one machine (a laptop with an ATI Mobility 1400) it works fine. On another machine (with a GeForce 8800GT) it renders the points at the same size regardless of what value I set for the thickness. In my actual program, when I set the scale of the parent node of the GeomNode to anything other than 1.0, it works correctly; when the scale is 1.0, it always renders the points at the same size. When I tried to do a simple test interactively, setting the scale caused Panda to crash. Here’s the sample:

import direct.directbase.DirectStart
from pandac.PandaModules import *

panda = loader.loadModel('panda')
parent = render.attachNewNode('parent')
panda.reparentTo(parent)
panda.setRenderMode(RenderModeAttrib.MPoint, 0.02)
panda.setRenderModePerspective(True)

# With this line, the program will crash;
# without it, the points render at the same size
# regardless of what I set the thickness to.
parent.setScale(1.01)

run()

Any thoughts?

I can confirm the crash, but for me it crashed at the line

panda.setRenderModePerspective(True)

Setting the thickness works for me too: GeForce 8600GT on a Dell notebook, Ubuntu 8.04 32-bit with the default kernel + NVIDIA driver version 196.12.

If I comment out the above line, it works for me. If I don’t, it crashes with glibc memory corruption. Traceback: http://dpaste.com/hold/78374/

All right. Here’s what’s going on.

First, Panda has two ways to render perspective points. One way is to do the calculations internally and send quads to the graphics card. Another way is to use the glPointParameter() extension and send the raw vertices to the graphics card, and let it compute the quads.

The first way is more flexible and handles more cases, such as the whole node being under a scale. So setting a scale on your root node forces Panda to do the calculations itself, instead of letting your graphics card do the calculations.

Also, Panda’s routine doesn’t have any limit on the largest point size it can generate, while your graphics card does. So if you do the calculations on the graphics card, the points will only grow so large, and then no larger; and that maximum size is usually not very large. If all your points are trying to be larger than this, they will all appear to be the same size. Panda has no control over this, and has no way to even find out what that limit actually is (it’s set by the graphics driver writer).

So, it sounds like you want to force Panda to compute the points all the time anyway. That will allow you to grow the points really big. Put:

hardware-point-sprites 0

in your Config.prc to force this. That will solve the problem of requiring a scale to get your large points.
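
If you’d rather not edit Config.prc, you can also load the same variable from code, before DirectStart brings up the engine. A quick sketch:

from pandac.PandaModules import loadPrcFileData

# Apply the prc setting first so it is already in effect when the window opens.
loadPrcFileData('', 'hardware-point-sprites 0')

import direct.directbase.DirectStart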

As to the crash, well, the Panda model is a complex model. It looks like something in its vertex specification is freaking out Panda’s quad computation. I’m investigating this, and when I find a fix I’ll check it in. Looks like a fringe condition, though.

David

Found the bug. Yeah, it’s only likely to occur on complex models like the Panda.

David

Yes, thank you so much. I continue to be very impressed with the speed of response. That prc setting fixed it.

Now I’ve added a size column to my vertex data. I was kind of hoping that setRenderModeThickness would be a multiplier on the per-point size. What do you think about this idea?
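
In case it’s useful, here’s a stripped-down sketch of how I’m building the vertex data (assuming DirectStart is already imported, and that the per-vertex sizes go in a column named 'size'):

from pandac.PandaModules import *

# A vertex format with a position column plus a per-vertex 'size' column.
array = GeomVertexArrayFormat()
array.addColumn(InternalName.make('vertex'), 3, Geom.NTFloat32, Geom.CPoint)
array.addColumn(InternalName.make('size'), 1, Geom.NTFloat32, Geom.COther)
vformat = GeomVertexFormat.registerFormat(GeomVertexFormat(array))

vdata = GeomVertexData('points', vformat, Geom.UHStatic)
vertex = GeomVertexWriter(vdata, 'vertex')
size = GeomVertexWriter(vdata, 'size')

# Three points, each with its own size.
for x, s in [(0, 1.0), (2, 2.0), (4, 4.0)]:
    vertex.addData3f(x, 10, 0)
    size.addData1f(s)

points = GeomPoints(Geom.UHStatic)
points.addNextVertices(3)
points.closePrimitive()

geom = Geom(vdata)
geom.addPrimitive(points)
node = GeomNode('sizedPoints')
node.addGeom(geom)

np = render.attachNewNode(node)
np.setRenderMode(RenderModeAttrib.MPoint, 1.0)  # the 'size' column supplies the per-point sizes
np.setRenderModePerspective(True)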

Unfortunately, although it’s easy to change this behavior internally, doing so would likely break existing code that has already been written against the current behavior.

So the short answer is: maybe, but I’m nervous.

The inherited scene graph scale does modify the size column. Will that do?
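
For instance, something like this (a sketch; pointsNp stands for whatever NodePath is holding your point geometry):

# The inherited scale is folded into the per-vertex sizes,
# so every point renders twice as large.
pointsNp.setScale(2.0)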

David