Wrapping textures according to model scale

For my game project (a rogue-like with random levels) I need to create cubic models of different sizes and apply textures to them on the fly.

I’m currently doing this by loading the default cube.egg model and scaling it accordingly. Trouble is, the texture doesn’t wrap…obviously because the coordinates are already specified in the model file.

Thing is, I need that texture to wrap according to the size of the cube…for example:

  • If a cube’s x and y scale values are set to (1, 1) the texture won’t wrap.
  • If they are set to (1, 3) the texture would wrap three times on the appropriate faces.

I’ve come up with a few ideas on how to go about this, but I’m quite new to this engine, Python (I already know programming from Java and C# at least :slight_smile:) and 3d game development in general, so I’d appreciate a little guidance.

Here are the approaches I’ve come up with:

  1. Actually create the cube model dynamically, specifying its vertices and texture coordinates from the application itself. [I couldn’t find anything about this, so I’m not thinking it’s possible without communicating directly with the renderer in C++…which is something I want to avoid at all costs]

  2. Modify the texture coordinates of the model in memory once I load it [I’ve looked over the reference section and couldn’t find any methods which would allow me to do this]

  3. Use “tiles”, so that, for example, a wall is actually made up of several cubes. This was actually the approach I originally tried, but a little experiment proved that this was a bad idea.

I basically loaded 10,000 cubes into memory and set them up on the map as one big chunk. Needless to say, my video card had a hard time digesting all those vertices when viewed from a distance (10,000 * the 23 vertices in the cube file = 230,000 vertices!).

This means that speed will not be consistent, and I was hoping to aim this game at a large audience, which may not have the same hardware specs as I do; roguelikes usually require extremely minimal specs.

To be honest, there will never be an instance in the game where the camera will have all those vertices on screen. However, it’s still extremely limiting. I abandoned this approach pretty early, but maybe there’s some way to improve the speed?

I’ve got other ideas in mind, which are easier, less time consuming, and more efficient. But the above three would certainly make my game a whole lot more dynamic and (dare I say it?) cool!

Any advice would be appreciated, but I’m not looking for algorithm handouts, more like something along the lines of “check out this module”.

P.S. Big props to the python community and the Panda team. This is probably the easiest and cleanest Engine/Script combination I’ve experimented with. It’s slowly transforming me into a python addict.

So many questions, I don’t even know where to start answering.
Maybe with textures and UV stuff.
There is a nice section about automatic UV-coordinate generation in the manual which could be of interest to you:
http://www.panda3d.org/manual/index.php/Automatic_Texture_Coordinates The MWorldPosition mode looks like something you could use.
Alternatively, you can manipulate the UV coordinates manually to some extent using these techniques:
http://www.panda3d.org/manual/index.php/Texture_Transforms

For a rogue-like world you might want to use planes instead of cubes (created with the CardMaker, for example; I’d advise against creating the surfaces entirely yourself, simply because there are convenient-to-use ways to do it).

230,000 vertices are no big deal for somewhat modern GPUs; they can literally handle millions.
The problem is that the GPU wants those vertices in a few big chunks of data, for example 100 chunks with 2,300 vertices each, but definitely not 10,000 chunks with 23 vertices.
Fortunately, Panda provides a nice way to fix this.
The method “flattenStrong” can reduce the number of Geoms (that’s the term for the chunks mentioned earlier) by merging the vertex data as much as possible.

The CardMaker and render.flattenStrong work like a dream together; thanks a lot for the help :slight_smile:.

render.flattenStrong is … not the right thing to do, actually.
Instead you should make groups of nearby cards, say all cards within a 10x10 or 20x20 tile region, and then call flatten on that group.
Also note that flattenStrong has its issues with flattening geometry across NodePaths. Use the analyze() function to check whether the number of Geoms was really reduced.
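The grouping step itself is plain bookkeeping; a small sketch (names are hypothetical) that buckets tile coordinates into regions, each of which would then get its own NodePath and its own flatten call:

```python
# Bucket tile coordinates into square regions so each region can be
# flattened into its own node (and culled independently).
REGION = 10  # region side length in tiles; 20 would work just as well

def region_key(tx, ty):
    """Map a tile coordinate to the region (chunk) it belongs to."""
    return (tx // REGION, ty // REGION)

def group_tiles(tiles):
    """Group an iterable of (tx, ty) tile coordinates by region."""
    groups = {}
    for tx, ty in tiles:
        groups.setdefault(region_key(tx, ty), []).append((tx, ty))
    return groups

# In Panda3D you would then parent each group's cards to one NodePath
# and call flattenStrong() on that NodePath -- not on render.
```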

You’ll also need to use clearModelNodes() judiciously here and there, since flatten operations don’t flatten anything if a ModelNode obstructs them.

Your original question does seem like it’s well suited for automatic texture coordinate generation via nodePath.setTexGen(), as Thomas pointed out, or simply texture coordinate scaling, via nodePath.setTexScale(). No fancy tricks needed in either case.

David