Mip Mapping Levels and Register Combiners

Happy New Year. :laughing:
Just out of curiosity, two quick questions:

  1. Is there a way I can get a texture that has already been scaled down by the hardware through mipmapping? If I have a 512x512 texture, can I somehow get its mipmap level 4 at 32x32? I know there is a workaround where I can just render to texture at a smaller size, but that's a little slower.

  2. nVidia's register combiners allow render to texture at 16 bits per channel, 32-bit textures (unless I'm misunderstanding something). Is there a way to access this? I also know that certain video cards support 16-bit-per-channel, 48-bit textures, but that would kill the fill rate.

I'd be interested in hearing some of the answers as they relate to Panda3D.

  1. I highly doubt reading video card memory would be very fast (I could be wrong), and it would be something very low-level (a la Direct3D calls). Why do you need a specific mipmap level of a texture?

Normally the texture artist mipmaps a texture manually - or uses automatic mipmapping with some adjustments afterward. Does Panda3D support pre-mipmapped textures? I wonder if implementing swizzled/compressed textures would be difficult - it's only necessary for games with a large number of textures.

  1. As I examined the source, I noticed "dx_force_16bpptextures" in config_dxgsg9.h.

The codebase appears to support 16-bit-per-channel textures and devices, but I'm not sure how to access that myself (not that I want to, unless I'm using OpenEXR files), and I'm not sure whether you have to rebuild Panda3D to get it - I think the answer to the latter is no, since it appears to be a global boolean.

extern ConfigVariableBool dx_force_16bpptextures;

Any extern ConfigVariable will show up in a setup file with an associated prc file name. In this case, it's

dx-force-16bpptextures true

By default, the flag is set to false. This may or may not be supported yet, but it may be a start.

Panda only supports auto-generated mipmaps that it generates (or asks the hardware to generate) at texture load time; we don't have a facility for loading hand-painted mipmaps - no one's ever wanted to do that before. You can enable a special mode where bogus flat-colored textures are loaded for each mipmap level, just so you can see where the mipmap switches happen: set gl-show-mipmaps to true in your Config.prc to enable this. (This requires that you're running with pandagl, as the name implies.)
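For reference, that debug mode is just a Config.prc line (a sketch; as noted, it only takes effect under pandagl):

```
gl-show-mipmaps true
```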

Similarly, there's no facility for extracting a particular mipmap level back out, although I suppose you could do this by rendering the texture onto a polygon sized to match the mipmap level you wanted, and then copying the bits out of the framebuffer. Not particularly speedy, as both Bei and voxel pointed out, but it is a way to examine what the automatic mipmapper is doing. On the other hand, you could simply set gl-save-mipmaps to true, and Panda will dump all of the mipmap levels to disk for your inspection when it loads the texture. (You probably want to pview the texture by itself when this mode is on, unless you want to fill up your disk with mipmap levels.)
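The arithmetic relating a mipmap level to its size is simple enough to sketch in plain Python (no Panda3D calls involved):

```python
def mipmap_size(base, level):
    """Edge length of a square texture at a given mipmap level.

    Each level halves the previous one, clamped at 1 pixel.
    """
    return max(1, base >> level)

# A 512x512 texture: level 0 is 512x512, level 4 is 32x32,
# and level 9 is the final 1x1.
print(mipmap_size(512, 4))   # -> 32
```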

The variable dx-force-16bpptextures just means to downgrade all textures to 16bpp mode at load time (when running pandadx). That's 16 bits per pixel, not 16 bits per channel. Our pandadx driver doesn't currently support textures with 16 bits per channel.

Our pandagl driver does support textures with 16 bits per channel. You can specify the number of bits per channel you want in the egg file. In fact, if you load a 16bpc TIFF file, I think Panda will automatically request a 16bpc texture for it, unless you specify otherwise.
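If memory serves, requesting a deeper format in the egg file looks something like the fragment below; the filename is made up, and you should double-check the exact format token (it corresponds to the Texture format F_rgba16) against the egg syntax reference for your build:

```
<Texture> tex {
  "maps/mytex.tif"
  <Scalar> format { rgba16 }
}
```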

Of course, none of this applies to render-to-texture. You can request a 16bpc render-to-texture by setting the appropriate number of color bits (16 * 3 = 48) in FrameBufferProperties, which you need to pass to the GraphicsStateGuardian when you create it. If you don't want your main window to have the same number of color bits, then you'll need to create a separate GSG for the render-to-texture. Driver support for all this is a little spotty in OpenGL, and you're wandering into a part of Panda that receives relatively little exercise, so your mileage may vary.
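If you'd rather not build the FrameBufferProperties by hand, the framebuffer defaults can, I believe, also be steered from Config.prc; treat the variable name below as an assumption to verify against your version's prc documentation:

```
color-bits 48
```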

I have no idea what it means to render to 16 bits per channel, 32 bit textures. That sounds like a contradiction to me. Can you explain this in more detail?

David