Panda3D performance hit when creating a render-to-texture buffer

Hi David,

  1. I followed the tutorial http://www.panda3d.org/manual/index.php/Using_Intervals_to_move_the_Panda and I get 260 FPS on my PC. But if I create a render-to-texture buffer to capture frames from the main Panda window and show them in another window, the FPS drops to 9-10 FPS.

Here is my code to create and show the buffer:

    PT(GraphicsOutput) mybuffer;
    PT(Texture) mytexture;
    PT(Camera) mycamera;
    NodePath mycameraNP;

    // Offscreen buffer that renders into a texture, drawn before the
    // main window (sort -100).
    mybuffer = window->get_graphics_output()->make_texture_buffer("My Buffer", 512, 512);
    mytexture = mybuffer->get_texture();
    mybuffer->set_sort(-100);

    // Camera that renders the scene into the buffer.
    mycamera = new Camera("my camera");
    mycameraNP = window->get_camera_group().attach_new_node(mycamera);

    DisplayRegion *region = mybuffer->make_display_region();
    region->set_camera(mycameraNP);
    region->set_active(true);

    // Second window to display the captured texture.
    WindowProperties props;
    props.set_size(512, 512);
    int flags = GraphicsPipe::BF_require_window;
    WindowFramework *window_target = framework.open_window(props, flags);

    mybuffer->add_render_texture(mytexture, GraphicsOutput::RTM_copy_ram);

    // Fullscreen card in the second window, textured with the buffer's output.
    CardMaker cm("cm");
    cm.set_frame(-1, 1, -1, 1);
    NodePath card(cm.generate());
    card.reparent_to(window_target->get_render_2d());
    card.set_texture(mytexture);

I don’t know where the bottleneck in this code is.

  2. Can I show/hide a child Panda window without destroying it? The idea is that I just need an offscreen render buffer and can use this buffer (texture?) in my application. I know I can fake it by setting the child Panda window's size to 1 pixel, but how would Panda process mouse events in a 1x1-pixel window and reflect them on the offscreen buffer?

Your code seems a little confused. You have this:

mytexture = mybuffer->get_texture();

which returns the Texture object that is already bound to the buffer and which is already set up to be rendered into. But then later you do this:

mybuffer->add_render_texture(mytexture, GraphicsOutput::RTM_copy_ram); 

which adds another binding of the same texture to the same buffer. Furthermore, you specify RTM_copy_ram, which is by definition a very, very slow way of binding a texture, and should only be used when you really need to copy the texture image to RAM (for instance, to write it to disk).

Just eliminate the call to add_render_texture(). You don’t need it.
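
With that one line removed, the setup reduces to the code you already have; a minimal sketch of just the buffer portion:

    // make_texture_buffer() already binds a texture to the buffer,
    // so no add_render_texture() call is needed.
    mybuffer = window->get_graphics_output()->make_texture_buffer("My Buffer", 512, 512);
    mytexture = mybuffer->get_texture();
    mybuffer->set_sort(-100);

    DisplayRegion *region = mybuffer->make_display_region();
    region->set_camera(mycameraNP);
    region->set_active(true);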

Also note that opening a second window (as you do when you call framework.open_window(), presumably to display the contents of your texture) can sometimes severely impact your render performance, depending on your drivers.

As for your second question: showing and hiding a child window isn’t really reliable, and I don’t recommend it. You can create and destroy windows reliably, though. But do you really need to be creating and destroying windows, or are you just trying to work around some perceived problem with offscreen buffers?

David

If I comment out:

mybuffer->add_render_texture(mytexture, GraphicsOutput::RTM_copy_ram);

then I get a black window.
The purpose of opening the new window is just to verify that the offscreen buffer has exactly the same content as the main window. Can you show me another way to display the offscreen buffer, in the main window?

Ah, that must be because you are not passing the current window’s gsg into framework.open_window(), which means you are creating a new graphics context for the new window. The new graphics context doesn’t share any texture memory with your original graphics context, so the only way you can see the texture data is to force it to copy (slowly) into RAM and back out again.

You could solve this problem by passing window->get_graphics_output()->get_gsg() as the gsg parameter to framework.open_window(). But a better solution would be to display your texture in the main window.
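
If you do want to keep the second window, sharing the GSG might look something like this sketch; the open_window() overload that accepts a gsg (and the position of that parameter) may differ between Panda3D versions, so treat this as an assumption to verify against your headers:

    // Sketch only: reuse the main window's GSG so the new window shares
    // texture memory with it.  The parameter order is assumed, not verified.
    GraphicsStateGuardian *shared_gsg = window->get_graphics_output()->get_gsg();
    WindowFramework *window_target =
        framework.open_window(props, flags, NULL, shared_gsg);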

One easy way to display the texture in the main window would be to parent your card to aspect2d:

    CardMaker cm("cm");
    cm.set_frame(0.5, 1, -1, -0.5);
    NodePath card(cm.generate());
    card.reparent_to(window->get_aspect_2d());
    card.set_texture(mytexture);

This will, of course, overlay part of your main window’s display, but it’s just for debugging purposes, after all. Once you’re satisfied that it’s working, you can take it down.

David

Thank you for pointing out my mistake; it helps a lot.

In Panda3D, is it possible to share textures between multiple devices? Let's say I create my own DirectX9 device; can I pass the offscreen texture to my renderer? Can I use the OpenSharedResource method?
Please show me the easiest way to obtain the raw texture (IDirect3DTexture9) from the Texture class.

Panda3D isn’t designed for that kind of low-level sharing. But does DirectX itself even allow you to share IDirect3DTexture9 objects between different DirectX9 devices? I thought each texture object was associated with one particular device.

Still, it is possible to obtain the low-level IDirect3DTexture9 object. One way is to call gsg->traverse_prepared_textures(), which makes a callback for each Texture object currently loaded into the GSG, passing you a TextureContext pointer for each one. If you are sure that you are running with the pandadx9 backend, you can downcast this to a DXTextureContext9 pointer and call context->get_d3d_2d_texture() to get the IDirect3DTexture9 pointer.
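
As a rough sketch (the callback signature here is assumed from memory; verify it against your Panda3D version's graphicsStateGuardian.h):

    // Sketch: invoked once for each texture currently prepared on the GSG.
    static bool grab_texture(TextureContext *tc, void *callback_arg) {
      // Only valid when running on the pandadx9 backend.
      DXTextureContext9 *dtc = DCAST(DXTextureContext9, tc);
      IDirect3DTexture9 *d3d_tex = dtc->get_d3d_2d_texture();
      // ... hand d3d_tex to your own renderer here ...
      return true;  // continue the traversal
    }

    gsg->traverse_prepared_textures(grab_texture, NULL);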

You can also get the TextureContext pointer for a single texture, without traversing through all of the textures, by calling texture->prepare_now(). It is safest to call this method from a draw callback, for instance one assigned via DisplayRegion::set_draw_callback(), because at that point the requirements of Texture::prepare_now() are met: the GSG is currently active and ready to accept texture commands. In most simple, single-threaded applications with only one GSG, though, that is the case all the time anyway, so you don’t strictly need a callback.
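
A sketch of that draw-callback approach, subclassing CallbackObject (note that Texture::prepare_now() takes an additional view-index argument in some Panda3D versions, so adjust the call to match yours):

    // Sketch: fetch the TextureContext safely at draw time.
    class GrabTextureCallback : public CallbackObject {
    public:
      GrabTextureCallback(Texture *tex, GraphicsStateGuardian *gsg)
        : _tex(tex), _gsg(gsg) {}

      virtual void do_callback(CallbackData *cbdata) {
        // The GSG is active during a draw callback, so prepare_now() is safe.
        TextureContext *tc = _tex->prepare_now(_gsg->get_prepared_objects(), _gsg);
        // ... downcast tc to DXTextureContext9 here, as above ...
        cbdata->upcall();  // continue with the normal drawing
      }

    private:
      PT(Texture) _tex;
      GraphicsStateGuardian *_gsg;
    };

    region->set_draw_callback(new GrabTextureCallback(mytexture, gsg));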

David