Different TexGens for different TextureStages possible?

Can you render a vertex as a sprite, but have a normal UV-mapped texture assigned in another TextureStage?

nodepath.setRenderMode(RenderModeAttrib.MPoint, size)

You might wonder why on earth I would want to do this. I just want to use a texture to change the sprite’s color over time. It works if I render it as a plain point and assign the gradient texture to it. But if I have an actual sprite texture in the first TextureStage, the gradient seems to be ignored (not multiplied in).

nodepath.setTexGen(TextureStage.getDefault(), TexGenAttrib.MPointSprite)

I don’t understand what you’re asking.

If you render a vertex as a sprite with setRenderMode(), but do not apply TexGenAttrib.MPointSprite, then what you have is a sprite with a single UV coordinate over its entire surface. This is OK if you want the sprite to show just a single uniform color of your texture (or if you’re not applying a texture at all).

If you want to apply a texture across the surface of the sprite, so that you can see the texture normally, you have to apply MPointSprite. This generates texture UVs across the quad so that the full texture is rendered.

If you are applying multiple textures via different TextureStages, you don’t have to use MPointSprite on all of them. You can decide which TextureStages get this attribute. The ones that don’t will apply a uniform color of their texture across the surface of the sprite.
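For illustration, a minimal sketch of that selective setup (the filenames are placeholders, and nodepath is assumed to be a point-sprite NodePath set up as above):

spritets = TextureStage.getDefault()
nodepath.setTexture(spritets, loader.loadTexture('sprite.png'))
nodepath.setTexGen(spritets, TexGenAttrib.MPointSprite)  # UVs spread across the quad

colorts = TextureStage('colorts')
colorts.setMode(TextureStage.MModulate)
nodepath.setTexture(colorts, loader.loadTexture('gradient.png'))
# no setTexGen on colorts: it keeps the per-vertex UV, one uniform color per sprite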

As far as multitexture goes, sprites work the same as any other geometry. If you want to apply multiple textures, you need to use a different TextureStage for each one.

David

Well, I hoped I could do this, but it didn’t seem to work like that:

from pandac.PandaModules import *
import direct.directbase.DirectStart
import random

base.cam.setY(-20)

format = GeomVertexFormat.getV3t2() # position and texcoord

vdata = GeomVertexData('sprites', format, Geom.UHStatic)

vwriter = GeomVertexWriter(vdata, 'vertex')
uvwriter = GeomVertexWriter(vdata, 'texcoord')

geompoints = GeomPoints(Geom.UHStatic)

for i in range(64):
	
	x = random.uniform(-4,4)
	y = random.uniform(-4,4)
	z = random.uniform(-4,4)
	
	vwriter.addData3f(x,y,z)
	uvwriter.addData2f(random.random(), random.random())
	
	# add to GeomPoints
	geompoints.addVertex(i)
	geompoints.closePrimitive()
	
# create GeomNode and put it in NodePath
geom = Geom(vdata)
geom.addPrimitive(geompoints)
gnode = GeomNode('gnode')
gnode.addGeom(geom)
nodepath = NodePath(gnode)
nodepath.setRenderMode(RenderModeAttrib.MPoint, 0.4)
nodepath.setRenderModePerspective(True)
nodepath.setTransparency(TransparencyAttrib.MAlpha)
nodepath.reparentTo(render)

# sprite texture on the default stage; the final 1 is an override priority
nodepath.setTexture(loader.loadTexture('sphere.png'), 1)
nodepath.setTexGen(TextureStage.getDefault(), TexGenAttrib.MPointSprite)

# second texture for coloring the sphere sprites
colortexture = loader.loadTexture('gradient.png')
colorts = TextureStage('colorts')
colorts.setMode(TextureStage.MModulate)
nodepath.setTexture(colorts, colortexture)

run()


Hmm, does it work if you add:

hardware-point-sprites 1

to your Config.prc file? This asks the graphics driver to compute the sprite quads itself, instead of having Panda compute them on the CPU (the default).
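If it’s easier while testing, the same variable can also be set from code, as long as it runs before the window is opened:

from pandac.PandaModules import loadPrcFileData
loadPrcFileData('', 'hardware-point-sprites 1')  # must run before importing DirectStart
import direct.directbase.DirectStart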

It looks like there’s a minor bug in Panda’s implementation that prevents this from working with the default settings right now. I’ll investigate further.

David

I probably should have posted some images:

The problem is with the perspective mode. When I use perspective mode (image 1), the gradient texture is applied across each sprite just like the sprite texture. In non-perspective mode (image 2) it’s as it should be.

However, if I tell the GPU to handle it, then it looks wrong in non-perspective mode as well.

Out of curiosity, why are you not simply setting a vertex color instead of using a UV lookup into the gradient image?
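(For reference, the vertex-color approach would look roughly like this, reusing the structure of your listing but swapping the texcoord column for a color column:)

format = GeomVertexFormat.getV3c4()  # position and per-vertex color
vdata = GeomVertexData('sprites', format, Geom.UHStatic)
vwriter = GeomVertexWriter(vdata, 'vertex')
cwriter = GeomVertexWriter(vdata, 'color')

for i in range(64):
    vwriter.addData3f(random.uniform(-4, 4), random.uniform(-4, 4), random.uniform(-4, 4))
    cwriter.addData4f(random.random(), random.random(), random.random(), 1)  # RGBA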

Edit: I’ve checked out the Panda code, and I can see that it does indeed completely fail to support multiple different sets of texture coordinates. However, I’m reluctant to fix this, because doing so would add complexity to this code, which would slow it down even in the normal case of just a single texture, and I’d hate to make the normal case suffer in order to support what appears to be a bit of a fringe case.

For the record, it works fine in both perspective and non-perspective on my machine with “hardware-point-sprites 1”. Perhaps your driver has the same bug.

David

The whole point of using a texture comes when using something like LerpTexOffsetInterval to animate the UV offset.
It’s a nice way to create a dynamic star field.
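Something like this is what I have in mind (the duration and offsets are just examples, and colorts is the stage from my listing above):

from direct.interval.LerpInterval import LerpTexOffsetInterval

# slide the gradient stage's UVs over five seconds, looping forever
LerpTexOffsetInterval(nodepath, 5.0, Vec2(1, 0), Vec2(0, 0),
                      textureStage=colorts).loop()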

It sounds pretty surprising to me that fixing this bug would have any effect on performance in cases where the feature isn’t even used. But you know better than I do.

I think sprite bugs like this are not very uncommon, so it would be nice to be able to leave the work to the CPU and not worry that some users will see something quite different from what was intended.

I think I could render a texture and assign it to a sphere in my case, and still manage to animate the stars somehow, but the quality of each star would be far worse unless I used really high-res textures.

Setting my case aside, I think it’s bad not to fix a bug or implement a missing feature because it might slow down general use cases. I still can’t imagine how that would happen; I can only agree that it would make the code more complex, as adding new features always does. I would like to learn more about why that is, if you don’t mind.

Currently, the code is written to assume it only has to process the default set of texture coordinates for each vertex. This means that as it walks through the set of vertices, creating cards for each one, it reads (a) the vertex position, (b) the vertex color, and (c) the texture coordinate, and applies all of these to the four corners of the quad that it generates. If TexGenAttrib.MPointSprite is applied, then instead of (c) copying the existing texture coordinate, it (c2) generates a new set of coordinates from (0,0) to (1,1) for the quad.

In order to handle multiple texture stages and multiple texture coordinates, this algorithm has to become a lot more sophisticated. Before it begins processing, it has to examine the set of texture stages and figure out which ones keep their texture coordinates and which ones generate new ones. Then it has to store this information in a data structure: a list of texture coordinates that have to be copied, and another list of texture coordinates that have to be generated. For each vertex, it then has to walk through these lists and copy or generate the appropriate texture coordinates.

So the logic becomes more complicated; instead of simply doing either (c) or (c2) for each vertex, it has to walk through a list and process each item in it. Even in the ordinary case, where the list only has one item, the overhead of walking through a list is greater than that of hardcoding (c) or (c2).

Normally this additional overhead wouldn’t be that big a deal, but this is very low-level code that has to run many thousands of times a frame, in order to process the thousands of vertices that you might have; and so even a very tiny difference can add up to a noticeable drop in frame rate.
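In rough pseudocode (illustrative only, not the actual C++), the difference is something like this:

CORNER_UVS = [(0, 0), (1, 0), (1, 1), (0, 1)]

def quad_uvs_current(vertex, has_point_sprite):
    # Hardcoded choice per vertex: either (c) copy the single UV,
    # or (c2) generate the four corner UVs.
    if has_point_sprite:
        return CORNER_UVS            # (c2)
    return [vertex.uv] * 4           # (c)

def quad_uvs_general(vertex, copy_stages, generate_stages):
    # Generalized form: walk per-stage lists for every vertex; even a
    # one-item list costs more than the hardcoded branch above.
    uvs = {}
    for stage in copy_stages:
        uvs[stage] = [vertex.uvs[stage]] * 4   # copy this stage's UV
    for stage in generate_stages:
        uvs[stage] = CORNER_UVS                # generate this stage's UVs
    return uvs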

There are many cases in which a software package such as Panda must choose to do the less correct behavior in order to improve performance for 99% of the use cases. Collisions are a classic example of these kinds of compromises. It happens a lot in rendering, too, and this would not be the first case in Panda where this kind of compromise is made.

Still, it’s possible to fix it without imposing additional overhead for all cases, but it means replicating the code at the outer level to handle each case separately. You make a good argument for doing this, and I’ll put it on my list of things to do. :)
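In terms of the pseudocode above, that outer-level split would look roughly like this:

def make_sprite_quads(vertices, copy_stages, generate_stages):
    if len(copy_stages) + len(generate_stages) <= 1:
        # Fast path: the common single-stage case keeps the hardcoded branch.
        return [quad_uvs_current(v, bool(generate_stages)) for v in vertices]
    # General path: only multi-stage geometry pays the list-walking cost.
    return [quad_uvs_general(v, copy_stages, generate_stages) for v in vertices]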

Of course, if someone else wanted to volunteer to do the needed work and submit patches, it would happen sooner. :)

David

Well, all I can do is wish patience to the person who ends up doing it. Pretty boring stuff.