Linear corridor model scaling and incorrect shader application

I’m using the box.egg.pz model to create a linear corridor. Now I want to add specific patches of texture at certain positions along the corridor as visual cues. In real life, this corridor will be mapped onto a 300 cm belt. I want three texture patches on both the right and left walls, with the first patch starting at a real-world 80 cm and the last patch ending at 220 cm. All the patches are 20 cm wide and are separated by 40 cm gaps. I tried doing this with shaders and custom Python code using Panda3D. I realized that in the fragment shader there is an offset when calculating the y position, and the last patch also extends beyond the desired 220 cm. Thanks in advance for any help or suggestions.

    def setup_corridor(self):
        """Sets up the VR corridor environment by creating walls."""
        self.left_wall = self.create_wall("left", scale=(0.5, self.corridor_length, 50), pos=(-100, 0, -20))
        self.right_wall = self.create_wall("right", scale=(0.5, self.corridor_length, 50), pos=(100, 0, -20))
        self.floor = self.create_wall("floor", scale=(200, self.corridor_length, 0.2), pos=(-100, 0, -20))
        self.roof = self.create_wall("roof", scale=(200, self.corridor_length, 0.2), pos=(-100, 0, 30))
        self.end_wall = self.create_wall("end", scale=(200, 0.5, 50), pos=(-100, self.corridor_length*2, -20))

        self.walls = {
            "left_wall": self.left_wall,
            "right_wall": self.right_wall,
            "floor": self.floor,
            "roof": self.roof,
            "end_wall": self.end_wall,
        }

    def create_wall(self, name, scale, pos):
        """Helper method to create and position a wall."""
        wall_node = self.app.loader.loadModel("models/box")
        wall_node.setScale(*scale)
        wall_node.setPos(*pos)
        wall_node.reparentTo(self.app.render)
        return wall_node

    def apply_textures(self, wall_texture_map):
        """Applies either the patch shader or a plain texture to each wall."""
        for wall_name, tex_info in wall_texture_map.items():
            node = self.walls.get(wall_name)
            if not node:
                continue
            if isinstance(tex_info, dict):
                # Patterned wall: load the patch shader and bind its textures.
                vert_p = os.path.join("shaders", "wall_shader.vert")
                frag_p = os.path.join("shaders", "wall_shader.frag")
                node.setShader(Shader.load(Shader.SL_GLSL, vert_p, frag_p))
                tex1 = self.textures.get("left_right_texture1")
                tex2 = self.textures.get("left_right_texture2")
                tex3 = self.textures.get("left_right_texture3")
                base = self.textures.get("base_texture")

                node.setShaderInput("patch1",  tex1)
                node.setShaderInput("patch2",  tex2)
                node.setShaderInput("patch3",  tex3)
                node.setShaderInput("baseTex", base)

                node.setShaderInput("wallLength",   self.corridor_length* 2)
                node.setShaderInput("beltLength",   300.0)
                node.setShaderInput("patchLen",     20.0)
                node.setShaderInput("patchStride",  60.0)
                node.setShaderInput("patchStart",   80.0)

            else:
                tex = self.textures.get(tex_info)
                if tex:
                    node.setTexture(tex, 1)
                else:
                    print(f"[WARN] Texture '{tex_info}' not found for '{wall_name}'")

and this is the fragment shader:

#version 150

in vec2  v_uv;
in float v_world_y;

uniform sampler2D patch1;
uniform sampler2D patch2;
uniform sampler2D patch3;
uniform sampler2D baseTex;

uniform float wallLength;
uniform float beltLength;
uniform float patchLen;
uniform float patchStride;
uniform float patchStart;

out vec4 fragColor;

void main() {
    float t    = (v_world_y + (wallLength * 0.5)) / wallLength;
    float pos = t * beltLength;

    // Try adding a small offset here to shift the 'pos' value
    float adjusted_pos = pos - 80.0; // Try adding 80.0 to see if the red appears

    if (adjusted_pos >= 80.0 && adjusted_pos <= 220.0) {
        float p = adjusted_pos - patchStart;
        float cyclePos = mod(p, patchStride);
        int   idx = int(floor(p / patchStride));

        if (cyclePos < patchLen) {
            if      (idx == 0) fragColor = texture(patch1, v_uv);
            else if (idx == 1) fragColor = texture(patch2, v_uv);
            else if (idx == 2) fragColor = texture(patch3, v_uv);
            return;
        }
    }
    fragColor = texture(baseTex, v_uv);
}
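
For reference, the patch layout I’m aiming for works out to these intervals along the belt (just a quick standalone check of the numbers, not part of the app):

patch_start, patch_len, gap = 80.0, 20.0, 40.0
stride = patch_len + gap   # 60 cm from the start of one patch to the start of the next
intervals = [(patch_start + i * stride, patch_start + i * stride + patch_len)
             for i in range(3)]
print(intervals)           # [(80.0, 100.0), (140.0, 160.0), (200.0, 220.0)]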

Greetings, and welcome to the forums! I hope that you find your time here to be positive! :)

As to your question: Hmm… Well, looking at your code, the mapping of the patches should come down to the value of “v_uv”, but I don’t see that defined anywhere in your code. Could you show the code that sets this value, please?

Second, let me check: does the texture that contains your patch-images have dimensions that are powers of two?

And third, you mention an offset when calculating a y-position. Do you mean the following line?
float t = (v_world_y + (wallLength * 0.5)) / wallLength;

If so, then–at a guess–that may be because the original “box” model is centred–i.e. its origin is at its centre-point.

As a result, it doesn’t extend in just one direction along each axis, running its full length that way (e.g. extending a distance of 1 in the y-forward direction). Instead, it extends in both directions along each axis, running for half its full length each way (e.g. extending a distance of 0.5 in each of y-forwards and y-backwards, making for a total extent of 1 on the y-axis).
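
One quick way to check what the model actually spans is to print its tight bounds; a minimal standalone sketch (using the standard loader, with no window) would be:

from direct.showbase.ShowBase import ShowBase

app = ShowBase(windowType="none")          # no window needed just to inspect the model
box = app.loader.loadModel("models/box")
lo, hi = box.getTightBounds()
print("box spans", lo, "to", hi)           # shows whether Y runs -1..+1, -0.5..+0.5, or 0..1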

Hi! Thank you so much for the reply. To answer your questions:

  1. Yes, the mapping of the patches is dependent on the value of “v_uv”, which is defined in the vertex shader as follows:
#version 150

in vec4  p3d_Vertex;
in vec2  p3d_MultiTexCoord0;
uniform mat4 p3d_ModelViewProjectionMatrix;
uniform float wallLength;

out vec2 v_uv;
out float v_world_y;

void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
    v_uv       = p3d_MultiTexCoord0;
    // built-in box ranges Y ∈ [–1, +1]; map to model-space cm
    v_world_y  = p3d_Vertex.y * (wallLength * 0.5);
}
  2. Do you mean the resolution of the textures that I want to apply as patches? All of them are 2048×2048.
  3. The offset that I meant was referring to this line:
float adjusted_pos = pos - 80.0;

If I don’t use the manual adjustment of subtracting 80 from the pos value to get adjusted_pos, then I don’t see the first patch anymore. But if I do use this manual offset to calculate adjusted_pos, then I see all three textures, except that the last texture exceeds its defined range of 200-220 cm.
  4. Yes, the box model is centered at the origin. Do you think this is the reason for the scaling issue, or for an improper UV-coordinate mapping for the left and right walls?

Oh! So it’s just the UV-coordinates of the box?

Hmm… Given that you’re not seeing even worse issues, I presume that you have your patches spaced out within your texture. (And that does seem to match up with what you wrote in your first post, now that I look again.)

Okay, that’s good! I thought it likely that you were using powers-of-two, but it was worth checking just in case. (As not using power-of-two sizes can lead to offsets, I believe.)

Ah, I see!

Thinking about it, I think that this may stem from the first line in your fragment shader:

float t = (v_world_y + (wallLength * 0.5)) / wallLength;

Presumably, the view is going to be standing at a y-position of 0. Thus, the value of “v_world_y” at the viewer’s position should likewise be 0.

However, in that first line, you’re adding half the wall-length to “v_world_y”, resulting in a value at the starting-position being (before the division) equal to half the wall-length. After dividing by the wall-length, this results in a value of “t” that at the starting-position has a value of 0.5.

(And since “wallLength” seems to contain the length of the corridor * 2, I imagine–with at least one assumption–that the value of “t” at the end will be 1.5.)

The value of “pos” is then calculated as “t * beltLength”, and since “beltLength” has a value of 300, “pos” will have a value of 150 at the starting-point.

But your patches start at 80–a value that will be “behind” the starting point (being less than the starting-point’s 150).

Subtracting 80 then reduces the value of “pos” to 70, meaning that the point at which the patches start will be 10 ahead of the start-point–thus allowing you to see your first patch!

Or, putting it another way, subtracting 80 “moves the patches forwards”, allowing you to see your first patch, which is otherwise behind the viewer.

That said, I’m not sure why you’re seeing your last patch extending “further” than you expect–I would think that you wouldn’t get the patches extending as far as expected. (Since your starting-point is at 70, not 0.)

How are you measuring that it’s ending further than 220cm from the starting-point…?

I define my wall length as 2*self.corridor_length according to the code snippet I posted originally:

                node.setShaderInput("wallLength",   self.corridor_length* 2)
                node.setShaderInput("beltLength",   300.0)
                node.setShaderInput("patchLen",     20.0)
                node.setShaderInput("patchStride",  60.0)
                node.setShaderInput("patchStart",   80.0)

and I define my corridor length as

class VRManager:
    def __init__(self, app, environment):
        self.app = app
        self.environment = environment
        self.textures = {}  # To store textures for specific walls
        self.startingPos = -4500
        self.endingPos = 4500     
        self.corridor_length = self.endingPos - self.startingPos      # that is 9000
        self.walls = {}
        self.setup_corridor()
        self.load_textures()

So, according to this, in the vertex shader I calculate v_world_y as v_world_y = p3d_Vertex.y * (wallLength * 0.5); so v_world_y should lie within -9000 to +9000. If that’s the case, then in the fragment shader, for a v_world_y of -9000, t = (-9000 + 18000 * 0.5) / 18000 = 0, and similarly, for a v_world_y of +9000, t should be 1. So pos will run from 0 to 300.
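
For reference, re-doing that mapping on the CPU (a quick standalone sketch, assuming wallLength = 18000 and beltLength = 300) gives:

def belt_pos(v_world_y, wall_length=18000.0, belt_length=300.0):
    # Same formula as the fragment shader: t = (y + L/2) / L, pos = t * beltLength
    t = (v_world_y + wall_length * 0.5) / wall_length
    return t * belt_length

for y in (-9000.0, 0.0, 9000.0):
    print(y, "->", belt_pos(y))   # -9000.0 -> 0.0, 0.0 -> 150.0, 9000.0 -> 300.0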

And for your second question: I just see the 3rd patch extending to the end of the corridor, which should not be the case. The 3rd patch should end at 220 cm and the corridor should end at 300 cm.

The first image shows how the corridor looks if I add the manual offset in the fragment shader,

and the last two show how it looks without it. Interestingly, if I don’t add the manual offset of 80 cm, then the 3rd patch does not extend to the end of the corridor.

But where are you placing your camera? If it’s still at the origin (i.e. at (0, 0, 0)), then regardless of anything else, your “v_world_y” should (from a visual perspective) start off at 0, not at -9000. (The “-9000” being somewhere behind the viewer.)

As a result, the calculations for the point right next to the viewer on the y-axis should be:

float t = (v_world_y + (wallLength * 0.5)) / wallLength;
float pos = t * beltLength;

Which gives:

float t = (0 + (18000 * 0.5)) / 18000;
float pos = t * 300;

Which equals:

float t = 0.5;
float pos = 0.5 * 300 = 150;

So, do I understand correctly that the issue is that there is an “extra” patch near the end of the corridor?

If so, then I think that that’s just because you’ve made your corridor very, very, very long, and then are normalising for that length and multiplying it by the expected 300 length. The result is that the entire corridor–regardless of actual length–is being treated as though it has a length of 300.
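
As a rough scale check of that (a standalone sketch, assuming the 18000-unit wall span and the 300 cm belt): each belt-cm then corresponds to 60 model units, so a 20 cm patch covers 1200 units of wall.

wall_length, belt_length = 18000.0, 300.0
units_per_cm = wall_length / belt_length
print(units_per_cm)          # 60.0
print(20.0 * units_per_cm)   # 1200.0  (model units covered by one 20 cm patch)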

I keep my camera at the origin and move the wall components to give the feeling of motion. I understood your point that the first patch might be rendered behind my camera position, but when I change my camera position to base.camera.setPos(0, -self.corridor_length, 0), even then I see the 2nd patch rendered at the start of the corridor.


This was my original VR corridor update logic by moving the walls:

    def update_corridor(self, task):
        """Handles the infinite corridor movement and recycling."""
        if not self.vr_manager:
            return Task.cont
        # Read the encoder value from the NI board
        self.encoder_value = self.read_encoder()

        # Speed of the corridor movement
        corridor_speed = (self.encoder_value - self.previous_encoder_value) * self.mouseMoveSpeed
        self.previous_encoder_value = self.encoder_value

        # Move the corridor components
        for wall_name, wall_node in self.vr_manager.walls.items():
            if wall_name == "end_wall":
                wall_node.setY(self.vr_manager.corridor_length + self.vr_manager.roof.getY())
            else:
                wall_node.setY(wall_node.getY() - corridor_speed)

        self.relative_pos = self.vr_manager.roof.getY()

        if self.relative_pos > self.IR_LED_status_pos:
            if time.time() - self.last_IR_LED_status_Read >= self.readInterval:
                self.IR_LED_status = self.lightswitch_read_task.read()
        else:
            self.IR_LED_status = 0

            self.encoder_value = self.vr_manager.startingPos
            self.previous_encoder_value = self.vr_manager.startingPos
            self.laps += 1
            # (Additional lap-related flags could be reset here if needed.)

        # Recycling logic
        if self.vr_manager.roof.getY() < -self.vr_manager.corridor_length:
            self.recycle_corridor(forward=True)
        elif self.vr_manager.roof.getY() > 0:
            self.recycle_corridor(forward=False)

        return Task.cont

    def recycle_corridor(self, forward=True):
        """Recycles the corridor components to create an infinite effect."""
        direction = 1 if forward else -1
        corridor_length = self.vr_manager.corridor_length

        for wall_name, wall_node in self.vr_manager.walls.items():
            wall_node.setY(wall_node.getY() + (direction * corridor_length))

Umm… I don’t think so. There should be 3 patches. Either the last patch is extending beyond its defined dimensions and stretching along to the end of the corridor, or the corridor is short relative to the 3rd patch.

Hmm… Very odd.

Would it be possible for you to put together a minimal program that demonstrates the problem, so that I can try debugging it on my end?

Something without any of the VR stuff, and with only a single wall, and none of the wall-recycling, etc. Just a basic scene with one wall and perhaps a simplified version of the shader.
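
Something along these lines, perhaps (a rough single-wall sketch only; it assumes your existing shaders/wall_shader.vert and shaders/wall_shader.frag files, and "patch.png"/"base.png" are just placeholder texture names):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import Shader

class SingleWallTest(ShowBase):
    def __init__(self):
        super().__init__()
        self.disableMouse()
        corridor_length = 9000

        # One wall, with the same scale and position as the left wall in the full app.
        wall = self.loader.loadModel("models/box")
        wall.setScale(0.5, corridor_length, 50)
        wall.setPos(-100, 0, -20)
        wall.reparentTo(self.render)

        # Same shader and inputs as in apply_textures().
        wall.setShader(Shader.load(Shader.SL_GLSL,
                                   "shaders/wall_shader.vert",
                                   "shaders/wall_shader.frag"))
        patch = self.loader.loadTexture("patch.png")   # placeholder texture
        base = self.loader.loadTexture("base.png")     # placeholder texture
        wall.setShaderInput("patch1", patch)
        wall.setShaderInput("patch2", patch)
        wall.setShaderInput("patch3", patch)
        wall.setShaderInput("baseTex", base)
        wall.setShaderInput("wallLength", corridor_length * 2)
        wall.setShaderInput("beltLength", 300.0)
        wall.setShaderInput("patchLen", 20.0)
        wall.setShaderInput("patchStride", 60.0)
        wall.setShaderInput("patchStart", 80.0)

SingleWallTest().run()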

(Although note that I may take a few days to get back to this, as I intend to take my computer in for repairs…)