Panda3D 1.2.3 and DX8: Out of memory error

Well, it finally happened; I think I really managed to code myself into a corner on this one :wink:

We have an effect that involves rendering to a texture, which we’ve set up using a method similar to the render-to-texture example. We did this in a codebase that’s been out in the field for about four or five months now; by and large, it’s working fine.
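Roughly, the setup looks something like the sketch below (simplified, with placeholder names and a placeholder buffer size; our real code does more, but this is the shape of it):

```python
import direct.directbase.DirectStart
from pandac.PandaModules import NodePath

# Offscreen buffer that renders into a texture (size is a placeholder).
rttBuffer = base.win.makeTextureBuffer("rtt-buffer", 256, 256)
rttTexture = rttBuffer.getTexture()
rttBuffer.setSort(-100)  # render the buffer before the main window

# A separate scene and camera for the buffer to render.
rttScene = NodePath("rtt-scene")
rttCamera = base.makeCamera(rttBuffer)
rttCamera.reparentTo(rttScene)

# The resulting texture gets applied to geometry in the main scene.
model = loader.loadModel("models/environment")
model.reparentTo(render)
model.setTexture(rttTexture, 1)
```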

We have one very stubborn piece of hardware that’s using a Radeon Mobility 9000 card with DirectX8. If we use OpenGL, everything is functional, but we end up with a terrible-looking stippling effect (as if the card knocked itself down to a lower transparency resolution or a lower color resolution). In DirectX8, however, our render-to-texture effect fails. We can set up the texture, wrap a buffer around it, and bind it to a camera with no problems. But when we set the buffer active, the following error appears once, and we never see the render-to-texture effect occur:

:display:gsg:dxgsg8(error): D3D create_texture failed! at (c:\temp\mkpr\panda3d-1.2.2\panda\src\dxgsg8\dxTextureContext8.cxx:680), hr=D3DERR_OUTOFVIDEOMEMORY: Out of video memory

(note: this error comes from 1.2.2; we see the same error under 1.2.3).

Looking at PStats, I’m willing to believe that we are, in fact, pretty close to being out of memory. However, I’m a little surprised that Panda can’t respond to this error by evicting something else to guarantee that there’s room for this texture.

My question is this: given that there is great reluctance to solve this problem by simply building to the head, can anyone suggest a workaround? Is there a way to finagle the texture initialization (maybe by forcing a render on this texture before anything else happens) so that the graphics card reserves space for it up front?

Sorry to be a pain today; it’s been a busy day for us :slight_smile: Thank you for the help!

Take care,
Mark

I’m a little surprised too, especially because Panda isn’t responsible for managing the texture memory; we leave this entirely up to DirectX. But when you need to create a new resident texture buffer for rendering, I guess DirectX won’t evict textures that are already resident, for whatever reason.

Sounds like a fine idea. Can you open the render buffer before you put anything onscreen? You could try this in a standalone app just to convince yourself that the error message is honest when it’s complaining about lack of memory (I’ve seen this sort of error message come out of DirectX even though the real problem was something completely different). If so, then it might be as simple as opening this render buffer first, maybe with a base.graphicsEngine.renderFrame() to insist it goes all the way through, and then starting up your main application.
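Something along these lines, for instance (a minimal sketch; the buffer name and size are placeholders):

```python
import direct.directbase.DirectStart

# Open the render-to-texture buffer before loading anything else, while
# video memory is still mostly free.
rttBuffer = base.win.makeTextureBuffer("rtt-buffer", 256, 256)
rttTexture = rttBuffer.getTexture()

# Force one frame through the pipeline so the buffer and its texture are
# actually created on the card, rather than just queued up.
base.graphicsEngine.renderFrame()

# ...then load models and the rest of your textures as usual...
```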

You could also go back to trying to make OpenGL work properly. One thing worth experimenting with might be one or more of:

depth-bits 32
color-bits 32
alpha-bits 8

in your Config.prc file.
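Or, if it’s easier to experiment from the script itself, the same variables can be set at startup with loadPrcFileData, as long as it happens before the window is opened:

```python
from pandac.PandaModules import loadPrcFileData

# Must run before DirectStart opens the window, or it has no effect.
loadPrcFileData("", "depth-bits 32")
loadPrcFileData("", "color-bits 32")
loadPrcFileData("", "alpha-bits 8")

import direct.directbase.DirectStart
```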

Thank you for the advice!

Unfortunately, I wasn’t able to fix the stippling in OpenGL mode, and the machine we’re testing on is borrowed (so we’re time-crunched in getting any functional solution up). I went ahead and reverted our Python codebase to Panda3D-1.1.0, ran under that engine, and the problem went away.

I’m going to have to give this machine back today, so we won’t be able to try any more of the suggestions that were given :frowning: If anyone else has access to an ATI Mobility 9000 graphics card and a computer running Service Pack 1, they may want to try to reproduce this problem. We weren’t able to isolate the cause in Panda’s source code, but we were able to determine that if we threw out most of the other textures in the scene (so that we were operating well below maximum memory), things worked correctly in DirectX8 mode.

To summarize, we saw two issues. The “no render-to-texture” problem only manifested in DirectX8 mode. In OpenGL, that problem went away, but another one appeared: we saw stripes of the same color where we were attempting to show color (or transparency) gradients, as if our bit depth were below 32 bpp. Explicitly requesting 32 bpp on color and 8 bpp on alpha seemed to have no effect.

I’m hoping that someone can either (a) isolate these issues or (b) verify that they’ve been fixed at the head of the CVS tree. Hopefully, someone else with access to this hardware/software configuration can nail this one!

Take care,
Mark