OK, here is my problem… I have 128 MB on my gfx card… (yes, that's a problem, but that is not what this post is about)
When I try to load all the textures for my game, it runs out of memory. Is there a way to determine how much memory the computer I'm running on has, and scale the textures to fit into that memory range? Or better yet, unload textures from the graphics card when they don't need to be shown, and, if there is lots of room, upscale the textures I have downscaled before?
So I am asking: how would I perform this texture management?
Lots of answers. For starters, you have texture compression enabled, which reduces the texture memory requirement by a factor of about 8.
Also, you have texture-scale 0.5, which (further?) reduces the texture memory requirement by a factor of 4, since each dimension is halved. Smaller scales are also possible.
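The actual snippets drwr is quoting were lost here; they appear to be Config.prc lines. A sketch of what they would look like (texture-scale 0.5 is confirmed later in this thread; compressed-textures is my reconstruction of the factor-of-8 setting, so verify the name against your Panda version's config docs):

```
# Config.prc -- texture memory reduction settings discussed above.
# compressed-textures asks the driver to DXT-compress textures on load
# (roughly 8:1 for 32-bit RGBA); texture-scale halves each dimension,
# cutting memory use by another factor of 4.
compressed-textures 1
texture-scale 0.5
```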
Determining the amount of available texture memory is a bit tricky. DirectX provides an interface for this, but OpenGL doesn't, so Panda doesn't either. Even if you query DirectX, the answer might be wrong. Most applications just put a slider in the options and leave it up to the user anyway.
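Since Panda can't reliably report free texture memory, one approach is to estimate your own usage against a budget taken from that user-facing slider. This helper is purely illustrative (the scale tiers and the mipmap overhead estimate are my own assumptions, not from this thread):

```python
def choose_texture_scale(texture_sizes, budget_bytes, bytes_per_texel=4):
    """Pick the largest texture-scale in {1, 0.5, 0.25, 0.125} whose
    estimated total texture memory fits within budget_bytes.

    texture_sizes: iterable of (width, height) pairs for the game's
    textures. A full mipmap chain adds roughly 1/3 extra memory,
    which is accounted for below.
    """
    base = sum(w * h * bytes_per_texel for w, h in texture_sizes)
    base = base * 4 // 3  # ~33% overhead for mipmap chains
    for scale in (1.0, 0.5, 0.25, 0.125):
        # Scaling both dimensions by `scale` cuts memory by scale^2.
        if base * scale * scale <= budget_bytes:
            return scale
    return 0.125  # smallest tier as a last resort

# In a real game you would apply the result before loading models, e.g.:
#   from panda3d.core import loadPrcFileData
#   loadPrcFileData("", "texture-scale %s" % chosen_scale)
```

Each halving of texture-scale cuts memory use by 4x, so these tiers span a 64x range of budgets.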
Note that both OpenGL and DirectX are supposed to automatically unload textures that are no longer being rendered. If they are failing to do that, it might be a driver bug. Try switching between OpenGL and DirectX to see if it stops crashing. However, DirectX provides the option for the graphics engine to do the texture management itself, and we do this by default in pandadx9. If you are running pandadx9 and failing, it might be a Panda bug. Try setting:
which means to use DirectX’s texture management instead of Panda’s built-in management code. (In OpenGL, we always use the OpenGL texture management; there’s no option not to.)
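The setting itself got lost here; if memory serves, the variable is dx-texture-management, but treat both the name and the value below as assumptions to verify against the Config.prc documentation:

```
# Config.prc -- hypothetical reconstruction; check the exact name.
# Lets DirectX do its own texture management rather than Panda's
# built-in management code in pandadx9.
dx-texture-management 1
```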
What is the nature of the failure? Does it stop rendering, or does it get an error code, or does it crash?
I either get millions of assertions from fireGL ("__ memory error __", no useful info) and 4 fps, or graphics artifacts, which happen in some commercial games too.
I did not know about "texture-scale 0.5"; I had been doing it myself with an automated script, so I feel stupid.
If OpenGL already does texture management, then I can't do it better. I don't have the option of DX on Linux.
Thank you very much, drwr, you've been very helpful!
I was going to add that there is a base.pipe.getDisplayInformation() call available on Windows that provides much of this information, such as it is, including available texture memory; but it appears that this is not implemented on Linux, so it won't be much help to you.
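For anyone reading along on Windows, a sketch of that query; the getTextureMemory getter name here is an assumption to check against the DisplayInformation API reference:

```python
def query_texture_memory(base):
    """Best-effort query of texture memory via the graphics pipe
    (Windows only, per the thread above).

    Returns the reported byte count, or None when the information is
    unavailable (e.g. on Linux). The getTextureMemory getter name is
    an assumption; verify it against the DisplayInformation API.
    """
    di = base.pipe.getDisplayInformation()
    mem = di.getTextureMemory()
    return mem or None  # treat a report of 0 as "unknown"
```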
It sounds like you've got a graphics driver bug. It certainly shouldn't crash just because you're rendering lots of big textures. (I've seen similar problems in other ATI drivers.) You can try checking for a more recent driver, but if that doesn't help, you either have to live with it and scale your textures down, or aggressively unload textures when you know you're not using them. You can call tex.releaseAll() to unload an individual texture, or even base.win.getGsg().unloadAllTextures() to unload all of your textures at once. They'll get automatically reloaded the next time they're rendered, or you can reload them on demand with nodePath.prepareScene().
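Tying those calls together, a zone-transition hook might look something like this. It's a sketch: the "zone" bookkeeping is this game's own concept, the Panda calls are the ones named above, and newer Panda versions want the GSG passed explicitly to prepareScene():

```python
def unload_zone_textures(textures):
    """Release the GPU copy of each texture the next zone won't need.

    Panda reloads a texture automatically if it is rendered again,
    so this is safe even if our bookkeeping is wrong.
    """
    for tex in textures:
        tex.releaseAll()

def enter_zone(base, outgoing_textures, zone_root=None):
    """Called at a zone jump: dump the old zone's textures, then
    optionally pre-warm the new zone's subtree."""
    unload_zone_textures(outgoing_textures)
    # Heavier hammer, if per-texture bookkeeping isn't worth it:
    #   base.win.getGsg().unloadAllTextures()
    if zone_root is not None:
        # Pre-load the new zone's textures instead of waiting for
        # the first rendered frame to pull them in.
        zone_root.prepareScene(base.win.getGsg())
```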
If you are going to aggressively unload textures like this, you might want to set:
to avoid dumping them all the way out of system RAM and forcing Panda to go back to disk when it's time to reload them.
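The setting being referenced here is presumably the keep-texture-ram variable; as a Config.prc sketch (name hedged, so check your version's docs):

```
# Config.prc -- keep a system-RAM copy of each texture after it is
# uploaded, so a released texture can be re-uploaded without a round
# trip to disk. Costs system RAM; useful with aggressive unloading.
keep-texture-ram 1
```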
Yes, I have a point at "zone" jumps where all textures could be unloaded, to wash out any textures that might not be used in the next zone.
This is very helpful indeed. Yes, many people report the ATI cards' artifact with polygons sticking out at the center of the screen, but that is something I would like to eliminate.