Ask for help with texture settings

Can anyone explain exactly what these two methods do? I still don’t understand the documentation. It would be best to give examples showing under what circumstances these two methods are used, and whether there is a connection between them:

terrain.setTexScale(TextureStage(''), scale)
terrain.setTexOffset(TextureStage(''), Uoffset, Voffset)

set_tex_offset, set_tex_scale and set_tex_rotate are methods that transform the texture coordinates (UVs) of each vertex of a model. The order in which the transformations are combined matters: by default, scale is applied first, then rotation, and finally translation (offset). The only way to change that order is by using a matrix; simply changing the order of the calls to the corresponding methods has no effect.
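To see concretely why call order doesn’t matter but combination order does, here is a small pure-Python sketch of how a single UV pair is transformed (the helper names are my own for illustration, not Panda3D API):

```python
import math

def scale_uv(u, v, sx, sy):
    return u * sx, v * sy

def rotate_uv(u, v, deg):
    a = math.radians(deg)
    return (u * math.cos(a) - v * math.sin(a),
            u * math.sin(a) + v * math.cos(a))

def offset_uv(u, v, du, dv):
    return u + du, v + dv

# the default combination: first scale, then rotate, then offset
u, v = offset_uv(*rotate_uv(*scale_uv(1., 0., 2., 1.), 90.), .3, 0.)
print(round(u, 6), round(v, 6))    # 0.3 2.0

# reversing the combination order gives a different result,
# which is why only a matrix can change it
u2, v2 = scale_uv(*rotate_uv(*offset_uv(1., 0., .3, 0.), 90.), 2., 1.)
print(round(u2, 6), round(v2, 6))  # 0.0 1.3
```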

One thing that might be confusing about transforming a vertex in texture space is that the texture itself appears to be inversely transformed. For example, if you scale by 2, the texture will appear only half as big, and if you offset by a positive amount, the texture moves in the negative direction. The order of transformations also appears reversed.
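A quick way to see this inverse behaviour (again a pure-Python sketch with a made-up helper, not a Panda3D call): the transform changes *where in the texture* each vertex samples, so the image itself appears to move the other way.

```python
def transformed_u(u, scale=1.0, offset=0.0):
    # where in the texture this vertex actually samples
    return u * scale + offset

# scaling the UVs by 2 means the quad edge at u = 0.5 already samples
# the right edge of the texture (u = 1.0), so the image appears only
# half as wide
print(transformed_u(0.5, scale=2.0))   # 1.0

# a +0.3 offset makes the quad's left edge (u = 0.) sample texels
# 0.3 into the texture, so the image appears shifted 0.3 to the left
print(transformed_u(0.0, offset=0.3))  # 0.3
```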

Here is some code you can play around with to more clearly see how it all works.

from direct.showbase.ShowBase import ShowBase
from panda3d.core import *

class MyApp(ShowBase):

    def __init__(self):

        ShowBase.__init__(self)

        cm = CardMaker('square')
        cm.set_frame(0., 1., 0., 1.)
        # the square has default texture coordinates like so:
        # (0, 1) ------------- (1, 1)
        #   |                    |
        #   |                    |
        #   |                    |
        #   |                    |
        # (0, 0) ------------- (1, 0)
        square = render.attach_new_node(cm.generate())
        square.set_pos(-.5, 3., -.5)

        tex = loader.load_texture('maps/lilsmiley.rgba')
        ts = TextureStage.get_default()
        square.set_texture(ts, tex)
        square.set_tex_rotate(ts, 30.)  # texture rotates 30 degrees clockwise
        square.set_tex_scale(ts, 2., 1.)  # texture is halved in X-direction
        square.set_tex_offset(ts, .3, 0.)  # texture moves .3 to the left
        # alternatively, use the following matrix, which reverses the order
        # of transformations; this gets rid of the shearing
#        mat = Mat3.translate_mat(.3, 0.)
#        mat = mat * Mat3.rotate_mat(30.)
#        mat = mat * Mat3.scale_mat(2., 1.)
#        square.set_tex_transform(ts, TransformState.make_mat3(mat))

app = MyApp()
app.run()

The texture appears skewed because the non-uniform scaling combined with the rotation introduces shearing. This is a consequence of the default transformation order appearing reversed (first rotation, then scale).

Hopefully this will make things a bit more clear.

Thank you very much for your reply; your explanation has made me fully understand their function. But what puzzles me is why you would need to perform these rotation and scaling operations at all. Can’t I just use the default? Or do they really help?

Normally, the UVs are edited in a modelling program, so they usually should not need to be transformed.
But these methods can be useful for special effects: for example, if you have a model for the sky and you would like to simulate moving clouds or rotating stars, you could translate or rotate the texture across that model in a task to get the desired effect.
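As a minimal sketch of the moving-clouds idea: the wrap-around helper below is plain Python, and the Panda3D task that would use it is shown in comments (the `sky`, `ts` and speed value are assumptions about your scene setup, not part of the original answer):

```python
def scroll_offset(elapsed, speed):
    # keep the offset in [0, 1); texture coordinates repeat anyway,
    # but wrapping avoids precision loss over long sessions
    return (elapsed * speed) % 1.0

print(scroll_offset(0., 0.02))   # 0.0
print(scroll_offset(75., 0.02))  # ~0.5

# In a Panda3D task, it might be used something like this:
#
# def scroll_clouds(task):
#     sky.set_tex_offset(ts, scroll_offset(task.time, 0.02), 0.)
#     return task.cont
#
# taskMgr.add(scroll_clouds, 'scroll_clouds')
```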

Another example: you might want to completely replace the existing UVs of a model by generating e.g. world-position UVs for it; the default result might not be what you need, in which case you would also use the transformation methods to further adjust those UVs.
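To illustrate the world-position-UV idea, here is a hypothetical planar projection helper (my own sketch, not Panda3D API), followed by the kind of fine-tuning calls you might then make on the node:

```python
def planar_uv(x, y, texels_per_unit=0.25):
    # project world XY straight onto UV space: one full texture
    # repeat every 1. / texels_per_unit world units
    return x * texels_per_unit, y * texels_per_unit

print(planar_uv(4., 8.))   # (1.0, 2.0)

# if the repeat rate or alignment turns out wrong, you could then
# fine-tune the result with the transformation methods, e.g.:
#
# model.set_tex_scale(ts, 2., 2.)    # twice as many repeats
# model.set_tex_offset(ts, .5, 0.)   # shift the pattern half a tile
```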

In addition to Epihaius’ answer above:

The default position and rotation methods (by which I presume you mean those of NodePath) will place and rotate a model, with any textures applied to that model moving with it naturally. However, they won’t move or rotate a texture on the surface of a model. That is what these texture-transform methods allow: moving or rotating a texture across the surface of its model, without affecting the position or rotation of the model itself.

Let me give an example:

Let’s say that you have a model of a glowing, rainbow-coloured sword.

The standard position- and rotation- methods would allow you to move the sword, but the rainbow colouring would remain static on the surface of the sword.

These texture-transform methods would have no effect on the position or rotation of the sword itself, but they could allow you to make the rainbow colouring scroll along the surface of the sword, making it look like it’s made of light.

Thank you very much for your patient answers. I fully understand this time.


I fully understand this time. I will try it myself! Thank you very much!
