Large Number of Textured Objects

Hello,

I am creating a simulation which displays tens of thousands of textures on the screen. Each texture is a different image and has a slightly different size. The user is allowed to zoom in/out and pan within this 3D world. Is there anything in Panda3D that can handle this many textures at a decent frame rate?

Before Panda3D I tried OGRE3D, and on my hardware I was only able to get 10,000 textures at a very bad frame rate (1 frame per second).

I appreciate your suggestions.

Vance

Hi, and welcome to the Panda3D forums!

Panda just sends your textures to the GPU directly; if your GPU can’t handle that many textures, it’s the GPU’s problem, not a problem of the 3D engine.

10,000 textures is a lot. If you consider 1 MB per texture, you’ll end up needing 10 GB of video card memory, while an average video card probably only has something like 256 MB!

Panda has texture compression support though, in case your textures don’t fit inside your VRAM:
panda3d.org/manual/index.php/T … ompression
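
Enabling it is roughly something like this (off the top of my head, so double-check against the manual page above; the texture path is made up):

```python
from panda3d.core import loadPrcFileData, Texture

loadPrcFileData("", "compressed-textures 1")   # ask the driver to compress textures as they load

from direct.showbase.ShowBase import ShowBase
base = ShowBase()

tex = loader.loadTexture("my_image.png")       # made-up path
tex.setCompression(Texture.CMOn)               # or force compression for one particular texture
```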

Also, texture memory aside, even if each of your textures is very small, the number of individual textures is an issue: 10,000 independent textures means 10,000 independent Geoms, and therefore 10,000 draw calls each frame. That’s going to be slow in any graphics engine.

But Panda also provides tools such as egg-palettize, which allows you to automatically combine a large number of independent textures into a considerably smaller number of composite textures.
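
Once the textures are shared, you can then flatten the scene so that Panda merges the Geoms that now use the same texture, along these lines (off the top of my head; the file name is made up):

```python
scene = loader.loadModel("palettized_scene.egg")  # made-up name for the egg-palettize output
scene.reparentTo(render)
scene.clearModelNodes()   # strip ModelNodes so flattening can cross model boundaries
scene.flattenStrong()     # merge geometry that shares the same texture into far fewer Geoms
```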

Of course, this also depends on your application. If you are creating 10,000 textures at runtime, there’s not much we can do to automatically optimize that. You’d have to design your application to construct suitable composite textures yourself.
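
If you do go down that road, constructing a composite texture yourself might look roughly like this (a sketch only; the tile size and the image list are assumptions on my part):

```python
from panda3d.core import PNMImage, Texture, Filename

tile = 256                                    # assumed size of each source image
atlas_img = PNMImage(2048, 2048, 4)           # room for an 8x8 grid of 256x256 tiles

for i, path in enumerate(image_paths[:64]):   # image_paths: your own list of file names
    src = PNMImage(Filename(path))
    x, y = (i % 8) * tile, (i // 8) * tile
    atlas_img.copySubImage(src, x, y)         # paste the image into its cell of the atlas

atlas_tex = Texture("atlas")
atlas_tex.load(atlas_img)
# Each card would then use UV coordinates pointing at its own cell of atlas_tex.
```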

David

Thank you guys.

The textures I have are all static (actually they are images: jpg, png, bmp, …), and 99.9% of the time they don’t change at runtime.

Regarding egg-palettize, does it work at runtime? For example, as the user zooms out, can it combine X number of textures into 1?

I’ll do some searching on egg-palettize to see what it actually is.

Regards!
Vance

No. It would be a very clever algorithm indeed that combined textures at runtime in that way. It’s certainly possible, at least in principle, but I haven’t seen any graphics engines that provide that sort of functionality.

But there may not be a need for that level of cleverness. If your textures are small enough that you can put, say, 50 of them on a palette, then you have reduced 10,000 textures to 200. Even if the user spends most of his time looking at only a small subset of your 10,000 textures, that’s still a huge performance improvement.

David

Well, even if you made 200 combined textures, with 50 textures per 2048x2048 atlas each individual texture would only get roughly 290x290 pixels (2048x2048 split 50 ways).

The quality up close would still be poor, and 200 uncompressed 2048x2048 atlases would still need roughly 2-3 GB of texture memory.

I suggest you rethink your concept; maybe you can split the scene into smaller parts with 50-100 textured objects each and load the textures only when they are needed?

Hm… well, if all textures are static AND the geometry you put them on is somewhat static too, it might be possible to keep something like a 40x40 pixel version of all textures loaded simultaneously. If you then add a check between the camera and nearby tiles (I suggest octrees for that), you could manually load individual textures as the user nears a certain image, create new geometry directly above the original one, and use the new high-res texture on it.

You could repeat this mechanism several times. It should keep texture memory usage reasonably low while maintaining good quality.
I once did similar things with a fully zoomable environment cubemap.
As long as the user’s movement is not totally insane (meaning as long as your machine can load, process, and send the texture to the GPU faster than the user approaches the image), it will work quite well.

You can also combine this with asynchronous loading, so in the worst case the user will see the blurry image first, which then becomes sharp as soon as the requested texture has loaded.
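
Something like this task would do the basic distance check and swap (untested sketch; the cards list and the threshold are made up, and the load here is synchronous, so the asynchronous version would replace the plain loadTexture call):

```python
HIGH_RES_DIST = 20.0   # made-up distance at which to switch to the high-res texture

# cards: a made-up list of (nodepath, lowres_texture, "path/to/hires.png") tuples
def update_detail(task):
    cam_pos = base.camera.getPos(render)
    for card, lowres_tex, hires_path in cards:
        dist = (card.getPos(render) - cam_pos).length()
        if dist < HIGH_RES_DIST:
            # loadTexture goes through the TexturePool cache, so repeated calls are cheap
            card.setTexture(loader.loadTexture(hires_path), 1)
        else:
            card.setTexture(lowres_tex, 1)
    return task.cont

taskMgr.add(update_detail, "update-detail")
```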

Interesting. To make sure I understand correctly, is this the general method:

Zoom Out:

  1. The app starts by showing 5 boxes, each with a high-resolution texture mapped.
  2. The user zooms out
  3. During zoom-out, when more than 20 textures are on the screen, the following happens:
    3.1. Render the current 20 textures in an offscreen buffer (exactly as they appear on the screen)
    3.2. Create a texture from this offscreen buffer
    3.3. Repeat the same for the 8 surrounding, adjacent, Width x Height areas of the world.
    3.4. Then render the center texture and as much of the surrounding 8 textures as is visible, depending on the zoom level; as the user zooms out further, more of the 8 surrounding textures become visible.
  4. The user is free to zoom out until he fully sees the 9 textures, at which point the same operation is repeated.

This might create the illusion of a seamless zoom-out: the user will think he is looking at 180 independent objects, but underneath only 9 textures do the job.

Render to texture has to be used for this:
panda3d.org/manual/index.php/L … to_Texture
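
From that manual page, I think the offscreen-buffer step would look roughly like this (my own untested sketch; sizes and the card placement are guesses):

```python
from panda3d.core import CardMaker

# Render the currently visible cards into an offscreen buffer...
buf = base.win.makeTextureBuffer("zoom-cache", 2048, 2048)
buf_cam = base.makeCamera(buf)
buf_cam.reparentTo(base.camera)         # so the buffer sees the same view as the main camera
composite_tex = buf.getTexture()

# ...and reuse the buffer's texture on one big card in place of the 20 originals.
cm = CardMaker("composite")
cm.setFrame(-1, 1, -1, 1)               # frame/placement would have to match the covered area
big_card = render.attachNewNode(cm.generate())
big_card.setTexture(composite_tex, 1)
```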

I appreciate your suggestions and help!!!
Regards
Vance

Well, my idea was more the other way round:
use low-quality images for ALL cubes, and only the higher-res ones when nearing a cube.
So when nearing a cube, you create a new cube which is a tiny bit bigger than the original one (scale * 1.001) and load the higher-res texture for it; if you move away from the cube, delete the high-res cube again.
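
In code it’s only a few lines (untested; low_cube and the texture path are made up):

```python
# when the camera gets close to low_cube:
hires_cube = low_cube.copyTo(low_cube.getParent())   # duplicate the cube in place
hires_cube.setScale(low_cube.getScale() * 1.001)     # a tiny bit bigger than the original
hires_cube.setTexture(loader.loadTexture("hires_version.png"), 1)  # made-up path

# ...and when the camera moves away again:
# hires_cube.removeNode()
```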

Thank you! I have to give all this some thought and see what is the best way to integrate it with the rest of the application. I’ll post further questions after analyzing some of the details.
Regards,
Vance