Multiple UVs with MeshDrawer, or via TextureStages?

Is there any way to employ multiple UV-maps when using MeshDrawer (either the 2D or 3D version)? If not, then, when working with multiple texture-stages, is there any way to use some other input for the UV-coordinates in a given stage?

To explain, I have a feature that I’m working on for which MeshDrawer would be an excellent fit. However, I want to use two UV-maps: one for basic texture-mapping, and one for a texture that controls the visibility of parts of the model. The problem is that, as far as I can see, MeshDrawer only supports a single UV-map…
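
For reference, MeshDrawer’s per-vertex methods each take just a single UV-pair, as in this minimal sketch (assuming a standard ShowBase setup providing `render` and `base.cam`):

```python
from panda3d.core import MeshDrawer, Vec2, Vec3, Vec4

drawer = MeshDrawer()
drawer.setBudget(100)  # Reserve room for up to 100 triangles
drawer.getRoot().reparentTo(render)

def drawTriangle():
    drawer.begin(base.cam, render)
    white = Vec4(1, 1, 1, 1)
    # Each vertex gets a position, a colour, and exactly ONE
    # UV-coordinate -- there's no parameter for a second UV-set.
    drawer.tri(Vec3(0, 0, 0), white, Vec2(0, 0),
               Vec3(1, 0, 0), white, Vec2(1, 0),
               Vec3(1, 0, 1), white, Vec2(1, 1))
    drawer.end()
```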

(I could probably do what I want by manually constructing the geometry at a vertex-level, without MeshDrawer–but I really don’t want to do that for this project. I could also construct the model from premade 3D-modelled parts, but that complicates part of the construction process.)

It is not currently possible to use multiple UV sets in MeshDrawer.

Ah, fair enough, and thanks for the answer. That’s a pity!

What about feeding alternate UV-coordinates into a texture-stage? I’m guessing that it’s not possible, but it seems worth asking, in case I’m wrong…

It’s proving surprisingly difficult to find a solution for this that doesn’t involve custom shaders or hand-made procedural geometry! (I’m trying to stick to Panda’s out-of-the-box features for this project, and would rather avoid procedural geometry for this specific element.)

[edit]
Never mind; I think that I may have another approach to achieving the goal that I have in mind…

A TextureStage can only specify which vertex column to take the coordinates from. If that vertex column doesn’t exist in the geometry, there’s nothing for it to select.
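
On geometry that does have multiple texture-coordinate columns, the selection looks something like this (a minimal sketch; the column-name “vis” is just an example):

```python
from panda3d.core import (Geom, GeomVertexArrayFormat, GeomVertexFormat,
                          InternalName, TextureStage)

# A vertex format with two UV-sets: the default "texcoord" column
# and a second, named "texcoord.vis" column.
arr = GeomVertexArrayFormat()
arr.addColumn(InternalName.getVertex(), 3, Geom.NT_float32, Geom.C_point)
arr.addColumn(InternalName.getTexcoord(), 2, Geom.NT_float32, Geom.C_texcoord)
arr.addColumn(InternalName.getTexcoordName("vis"), 2,
              Geom.NT_float32, Geom.C_texcoord)
fmt = GeomVertexFormat.registerFormat(GeomVertexFormat(arr))

# A TextureStage can then be pointed at the named column:
visStage = TextureStage("vis stage")
visStage.setTexcoordName("vis")
```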

The only alternatives are automatic texture-coordinate generation, texture transforms, a custom shader, or using something other than MeshDrawer.
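
The first two of those would look roughly like this (again a sketch, with `np` standing in for the NodePath in question and `visStage` for the extra stage):

```python
from panda3d.core import TexGenAttrib

# Automatic texture-coordinate generation, e.g. deriving UVs
# from the vertices' world-space positions:
np.setTexGen(visStage, TexGenAttrib.MWorldPosition)

# ...or a per-node texture transform on the stage, remapping
# whatever coordinates the stage already receives:
np.setTexOffset(visStage, 0.25, 0.5)
np.setTexScale(visStage, 0.125, 0.125)
```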

Fair enough–I thought that something like that would likely be the case.

I did consider texture-transforms. However, given how I had previously been planning to implement this element, I would likely have wanted to flatten the geometry, since I might otherwise have ended up with a large number of nodes. And of course, flattening would presumably destroy the individual texture-stage transforms.

I suppose that I could use the RigidBodyCombiner, if that preserves such transforms–but in any case, I think that I have a simpler solution via implementing the element in question in a rather different manner.
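
(For what it’s worth, the RigidBodyCombiner usage that I was considering would presumably have looked something like this, with `wallNodePaths` standing in for the real wall-nodes:)

```python
from panda3d.core import NodePath, RigidBodyCombiner

rbc = RigidBodyCombiner("walls")
rbcNP = NodePath(rbc)
rbcNP.reparentTo(render)

for wall in wallNodePaths:  # hypothetical list of wall NodePaths
    wall.reparentTo(rbcNP)

rbc.collect()  # combine; call again after adding or removing walls
```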

Thank you for the answer! :)

I do not believe flattening would destroy texture transforms; it tends to err on the side of caution when deciding whether to combine Geoms with different render states. I think only flattenMultitex() might do anything with texture transforms at all.

RigidBodyCombiner uses the same mechanism as flattenStrong (and friends) do under the hood.

Well, presumably it would either leave the nodes present (in which case there might well be too many of them) or combine them (in which case the transforms would presumably be lost, as there would be only one node to which to attach them).

(To clarify: In what I was considering, each node involved would have had a separate texture-transform applied to a texture-stage, and there could easily have been hundreds of them.

In all fairness, they would have been attached to the 2D scene-graph, and I don’t know whether node-counts are an issue there as they are in the 3D scene-graph.)

Ah, fair enough–that likely wouldn’t have worked either, then.

OK, fair enough; you would indeed still have to contend with the high node counts (which are just as much a problem in 2-D scene graphs).

Out of sheer curiosity, could I ask what your use case is? I have been thinking of adding a feature to Panda that would make it easy to render a lot of quads in the 2-D scene graph, useful for 2-D games and GUIs, and am just wondering if perhaps I could take your scenario into account in my design.

It’s a map–the nodes are individual walls, with the additional UVs being used to reference a “visibility texture” that would allow them to be revealed bit by bit as the player explores.

To elaborate: A side-project that I’m working on includes an element of first-person tile-based exploration, much as in certain old RPGs. As in those games, the walls are simple rectangular things running along the edges between tiles.

I had thought to define the in-game maps by the walls of their associated areas. A decent-sized area could easily end up with hundreds of walls–hence the desire to flatten them.

In order to make it possible to reveal walls as the player explores, without accidentally revealing adjacent walls, I had thought to have each map-wall UV-map into its own set of pixels in the “visibility texture”.

Hence the conflict between flattening and UV-mapping.
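
(By way of illustration, the per-wall mapping would have been something like the following sketch, with a made-up texture-size:)

```python
VIS_TEX_SIZE = 256  # hypothetical size of the "visibility texture"

def wallUV(wallIndex):
    """Map a wall's index to the centre of its own pixel in the
    "visibility texture"; these values would become the wall's
    second UV-set."""
    xPixel = wallIndex % VIS_TEX_SIZE
    yPixel = wallIndex // VIS_TEX_SIZE
    return ((xPixel + 0.5) / VIS_TEX_SIZE,
            (yPixel + 0.5) / VIS_TEX_SIZE)
```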

(What I have in mind now is instead to use mini-tiles bearing image-representations of walls. The “visibility texture” remains, but now simply overlays the map, one pixel per tile. It should thus be safe to flatten the map, as the mapping of tile to “visibility texture” is a straightforward one of (col, row) to (xPixel, yPixel), if I’m not much mistaken; a rough sketch follows below.

In addition, not only would this–I think–be easier to implement, but it also allows me to represent floor-features where called for.)
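
Something like this, I imagine (a rough sketch, assuming a PNMImage-backed texture and made-up map-dimensions):

```python
from panda3d.core import PNMImage, Texture

MAP_COLS, MAP_ROWS = 32, 32  # hypothetical map-dimensions

# One pixel per tile: black = hidden, white = revealed.
visImage = PNMImage(MAP_COLS, MAP_ROWS, 1)
visImage.fill(0, 0, 0)

visTexture = Texture("visibility")
visTexture.load(visImage)
visTexture.setMagfilter(Texture.FT_nearest)  # keep the pixels crisp

def revealTile(col, row):
    # The (col, row) of the tile maps directly to (xPixel, yPixel):
    visImage.setGray(col, row, 1)
    visTexture.load(visImage)  # push the updated image to the texture
```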