WMClamp issues?

Hi guys,
I’m seeing some strange behavior when using the WMClamp setting for sprite textures.

On my Radeon 9600, if I use anything other than clamp, the frame rate dies when rendering a sprite.
On my GeForce 5600, the frame rate dies regardless (in DX9 and GL). This leads me to believe it’s probably a driver issue, but maybe I’m just doing something funky.
PStats reports that the transparency bin takes a huge amount of time when a sprite is on screen.

Has anyone experienced this problem or knows of a workaround?

Some of the more advanced settings, like WMMirror, WMMirrorOnce, and WMBorderColor, may cause some graphics drivers to fall back to software rendering. We’ve seen this in particular with WMBorderColor on the GeForce2 and earlier; a workaround for this is to put:

gl-support-clamp-to-border 0

in your Config.prc. However, I’m not sure that’s your problem.

Can you clarify the cases in which frame rate is good, and those in which frame rate is poor? I’m a little confused by your email. You mean WMClamp is good and WMRepeat is poor? Are you using Panda3D 1.0.5 or 1.1.0?

Of course, to render sprites, you almost certainly want to use WMClamp anyway. It may be that the driver writers have optimized for this case in particular.
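For reference, the two wrap modes differ only in how they map texture coordinates that fall outside the 0..1 range back into the texture. This is a pure-Python sketch of the sampling rule, not Panda3D API, just to make the distinction concrete:

```python
def wrap_repeat(u):
    # WMRepeat: tile the texture by keeping only the fractional part
    return u % 1.0

def wrap_clamp(u):
    # WMClamp: pin out-of-range coordinates to the nearest edge
    return min(max(u, 0.0), 1.0)

# A coordinate just past the right edge of the texture:
print(wrap_repeat(1.25))  # 0.25, samples from the left edge again
print(wrap_clamp(1.25))   # 1.0, stays pinned to the right edge
```

For a sprite quad whose texture coordinates stay inside 0..1, both modes sample the same texels in the interior; they only differ at the very edge, where filtering can pull in the neighboring (wrapped or clamped) texel.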


Yeah this problem seems weird to me.

On my GeForce 5600 the sprite never renders at the proper speed; my FPS takes a massive hit with either repeat or clamp. On my ATI 9600 the sprite only renders properly when the texture is set to clamp.

If I don’t set the texture on the sprite, both run correctly.

Here is what I’m doing:

from pandac.PandaModules import *

spriteTex = loader.loadTexture(texFile.getFullpath())
## turn off filtering
spriteTex.setMinfilter(Texture.FTNearest)
spriteTex.setMagfilter(Texture.FTNearest)
## clamp texture
spriteTex.setWrapU(Texture.WMClamp)
spriteTex.setWrapV(Texture.WMClamp)

res = 22  # 22 pixels per unit
texwidth = float(spriteTex.getXSize())
texheight = float(spriteTex.getYSize())
width = texwidth / res
height = texheight / res

format = GeomVertexFormat.getV3t2()
vdata = GeomVertexData("SpriteVertices", format, Geom.UHStatic)
vertex = GeomVertexWriter(vdata, 'vertex')
texcoord = GeomVertexWriter(vdata, 'texcoord')

vertex.addData3f(0.0, 0.0, height)
texcoord.addData2f(0.0, 1.0)
vertex.addData3f(width, 0.0, height)
texcoord.addData2f(1.0, 1.0)
vertex.addData3f(0.0, 0.0, 0.0)
texcoord.addData2f(0.0, 0.0)
vertex.addData3f(width, 0.0, 0.0)
texcoord.addData2f(1.0, 0.0)

## build a quad
tris = GeomTristrips(Geom.UHStatic)
tris.addVertices(0, 1, 2, 3)
tris.closePrimitive()
geom = Geom(vdata)
geom.addPrimitive(tris)
GN = GeomNode('sprite')
GN.addGeom(geom)
spriteNode = render.attachNewNode(GN)

ts1 = TextureStage('ts1')
spriteNode.setTexture(ts1, spriteTex)

Oh, and I’m using Panda 1.1.0.

If anyone is interested, the problem with my video card dropping to software rendering appears to happen if I try to use a non-clamped, non-power-of-two texture. Clamping non-power-of-two textures seems to work just fine; unclamped, the driver freaks out.

So let’s say, hypothetically, that your hardware only supports power-of-two textures, but it also supports the new OpenGL extension that clamps to a subrectangle of a texture.

In that case, the driver could pretend to support non-power-of-two textures by loading the data into a subrectangle of a texture, and then clamping to that subrectangle. But of course, that only works when clamping is enabled. When you turn wrapping on, it all falls apart.

I happen to suspect that the Radeon 9600 does, in fact, support the new OpenGL subrectangle extension, and that it does, in fact, require power-of-two textures. So this hypothesis would make some sense.
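The arithmetic behind that trick is easy to sketch. Assuming a hypothetical 100x60 sprite texture, the driver would round each dimension up to the next power of two and scale the texture coordinates down so that 1.0 lands on the edge of the real pixel data rather than the padding (pure Python, just to illustrate the hypothesis):

```python
def next_pow2(n):
    # Smallest power of two that is >= n
    p = 1
    while p < n:
        p *= 2
    return p

# Hypothetical 100x60 non-power-of-two texture
texwidth, texheight = 100, 60
padded_w = next_pow2(texwidth)   # 128
padded_h = next_pow2(texheight)  # 64

# Effective UV range covering only the real pixels
u_max = texwidth / float(padded_w)    # 0.78125
v_max = texheight / float(padded_h)   # 0.9375
```

Clamping to that subrectangle hides the padding; but WMRepeat would have to tile at u_max rather than at 1.0, which the ordinary wrap hardware can’t do, hence the fallback to software rendering when you ask for repeat on a non-power-of-two texture.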

Oh, by the way, you can try this:

if base.win.getGsg().getSupportsTexNonPow2():
    print("Driver supports non-power-of-two textures")