Altering the transition distance for mip-maps?

Is it possible to alter the distance at which a texture transitions between its mip-maps, and to do so for a single texture on a model that has more than one?

To explain, I’ve been experimenting with the aesthetics of my game, and I have an idea to have normal-maps simplify with distance from the viewer. However, I don’t want the same to happen with the colour-maps.

Now, if I understand correctly, something like this is effectively already present in the automatically-generated mip-maps of the textures in question: they’re downscaled versions of the base texture, and thus effectively “blurred”. However, the transition happens at a distance that hides this fact. So, I want to experiment with the transition happening at a nearer distance–but again, only for the normal-maps.
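For intuition, the level selection can be caricatured like this. It's a deliberate simplification (real GPUs derive the texel footprint from per-pixel screen-space derivatives), but it shows why a positive LOD bias pulls the blurrier levels nearer to the camera; the 9-level cap here is just an example matching a 512-pixel texture:

```python
import math

def mip_level(texels_per_pixel, lod_bias=0.0, max_level=9):
    """Roughly how a GPU picks a mip level: log2 of how many texels fall
    under one screen pixel, shifted by the LOD bias, clamped to the
    available levels. (A caricature, not the real derivative-based maths.)"""
    level = math.log2(max(texels_per_pixel, 1.0)) + lod_bias
    return min(max(level, 0.0), max_level)
```

So a bias of 1.0 makes every distance behave as if it were one mip level further away.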

You could try setting the LOD bias parameter.

from panda3d.core import SamplerState

sampler = SamplerState()
sampler.minfilter = SamplerState.FT_linear_mipmap_linear
sampler.magfilter = SamplerState.FT_linear
sampler.lod_bias = 0.1
sampler.min_lod = 0.0
sampler.max_lod = 9.0
sampler.wrap_u = ...

# Either
tex.default_sampler = sampler

# Or
model.set_texture(tex, sampler)

Hmm… That might work. It would presumably call for searching through my scene graph for all models that have a texture with the word “normal” in it, and then re-applying the texture with the new sampler.
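Something like this search-and-reapply pass is what I have in mind. A sketch only: the "normal appears in the texture name" convention is an assumption about my own assets, and the sampler would be built as in the snippet above (`find_all_textures` and the `set_texture(tex, sampler)` overload are the Panda3D calls assumed here):

```python
def bias_normal_maps(model, sampler, name_hint="normal"):
    """Apply `sampler` to every texture under `model` (a NodePath) whose
    name contains `name_hint`.

    The name_hint convention is an assumption; adjust it to however the
    normal maps are actually named in your assets.
    """
    biased = []
    for tex in model.find_all_textures():
        if name_hint in tex.name.lower():
            model.set_texture(tex, sampler)
            biased.append(tex)
    return biased
```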

When I call “setTexture” on a model that already has the texture being applied, and with other textures besides, do I have to do anything to ensure that the re-applied texture doesn’t end up replacing one of the other textures, or will it automatically assign itself to the same “slot” as the original?

Calling setTexture without specifying a TextureStage applies the texture to the default texture stage. So when reassigning it, you do need to pass the same stage that the texture was originally applied with.
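A sketch of that stage-preserving reassignment (`find_all_texture_stages`, `find_texture(stage)`, and the `set_texture(stage, tex, sampler)` overload are the Panda3D calls assumed here; the name filtering is again just an assumed convention):

```python
def reapply_on_same_stages(model, sampler, name_hint="normal"):
    """Re-apply each matching texture with `sampler`, explicitly on the
    TextureStage it already occupies, so no other stage is clobbered."""
    for stage in model.find_all_texture_stages():
        tex = model.find_texture(stage)
        if tex is not None and name_hint in tex.name.lower():
            model.set_texture(stage, tex, sampler)
```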

However, you can avoid needing to reassign it by simply changing the texture’s default sampler settings.
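That in-place change might look like the following sketch. It builds a fresh sampler and copies over the old filter and wrap settings rather than mutating the existing one; `sampler_cls` defaults to Panda's SamplerState and is a parameter only so the sketch can be exercised without initializing Panda3D:

```python
def rebias_default_sampler(tex, bias, sampler_cls=None):
    """Give `tex` a new default sampler whose LOD bias is `bias`,
    preserving the existing filter and wrap settings."""
    if sampler_cls is None:
        from panda3d.core import SamplerState as sampler_cls
    old = tex.default_sampler
    new = sampler_cls()
    new.minfilter = old.minfilter
    new.magfilter = old.magfilter
    new.wrap_u = old.wrap_u
    new.wrap_v = old.wrap_v
    new.lod_bias = bias
    tex.default_sampler = new
    return new
```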

Ah, so if I use the first method above, but operate on the texture as acquired from the node instead of assigning one to the node, I should be fine?


Yes. (Obligatory padding to meet minimum character count.)

Ah, excellent, and thank you. :)

I’ll hopefully report back when I’ve tried it!

Well, it works, but the effect was not as I’d hoped. Ah well, so it may go when experimenting, I suppose. It was worth a shot!

Thank you for your help! :)

If you write your own shader, you can use the textureLod() function when sampling the texture in order to use your own distance determination. You can also use an explicit bias in the regular texture() function.
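Sketched as a GLSL fragment-shader excerpt (the sampler name, varying names, and the level/bias values are placeholders for illustration, not anything Panda defines):

```glsl
#version 150

uniform sampler2D normal_map;  // placeholder sampler name
in vec2 texcoord;
out vec4 frag_color;

void main() {
    // Option 1: pick the mip level explicitly from your own metric,
    // e.g. something computed from distance to the camera.
    float lod = 2.0;
    vec3 n_explicit = textureLod(normal_map, texcoord, lod).rgb * 2.0 - 1.0;

    // Option 2: let the driver pick the level, but push it with a bias.
    vec3 n_biased = texture(normal_map, texcoord, 1.5).rgb * 2.0 - 1.0;

    vec3 n = normalize(n_biased);
    frag_color = vec4(n * 0.5 + 0.5, 1.0);
}
```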

Thank you. :)

That said, it's not the LOD bias that's failing: I do indeed see the normals change at different distances when I alter the bias. The simplified normal-maps simply aren't what I'd expected, and so don't produce the aesthetic effect I'd hoped for.

(Looking back at my previous post, I see now that it was somewhat ambiguous regarding just what wasn’t working–my apologies for that! ^^; )

The mipmap generation in Panda is a bit naïve, and might not be well-suited for normal maps. It might be worth an experiment to see what would happen if you let Panda generate all the mipmaps and then manually renormalized all the normals in the mipmap levels to ensure that they still contain a unit vector.
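The renormalization step itself might look like this, sketched with NumPy on one level's texel data. (Extracting each mipmap level from the Texture and writing it back, e.g. via the RAM-mipmap-image accessors, is left out; the sketch assumes the usual tangent-space encoding of XYZ into [0, 255].)

```python
import numpy as np

def renormalize_normals(texels):
    """Renormalize an (H, W, 3) uint8 array of tangent-space normals.

    Downscaled mip levels average neighbouring normals, which shortens
    them; decoding, normalizing, and re-encoding restores unit length.
    """
    v = texels.astype(np.float32) / 255.0 * 2.0 - 1.0        # decode to [-1, 1]
    length = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-6)
    v = np.clip(v / length, -1.0, 1.0)                       # back to unit length
    return np.round((v + 1.0) * 0.5 * 255.0).astype(np.uint8)
```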

There may also be tools to do this offline, which would let you save it out as a DDS file that contains all the mipmap levels.

Hmm… I do normalize my vectors in the relevant fragment shaders, but it’s possible that I’m doing so in a way that’s producing poor results.

That said, I'm inclined to suspect that mip-maps are simply not well-suited to what I was trying to do. Perhaps too much information is lost in scaling down that would survive in an image that's blurred but not scaled.