Using an existing window for panda to render into

I’d really like to have a workaround for this issue (or a fix, but that seems unlikely):

In my case I have a window made with wxPython, and want a panda panel. I’m already using Cython in the project, so some C calls and such are not completely out of reach. Ideally no modifications to panda would be needed (or someone else could contribute them). I can compile panda, but I really don’t have the skills to modify it (yet).

In case it matters, I’m not using the default window, and I don’t mind needing to generate my own mouse events as long as it’s possible to do so.

A few approaches I thought of:

Re-parent the window (well, we know this has issues on mac)

Get panda to use an existing window from wx instead of creating and re-parenting a window.

Get panda to render into a wx GLCanvas.

Use a wx GLCanvas to render a simple quad displaying a texture rendered with panda (this would somehow require getting the two contexts to share the texture, and getting a wx object for it)

Copy the image through RAM (way too slow, but really easy).

Anyone have thoughts on what approach would be a good idea to try?

As a last resort, I can split my window up into a panda window and floating toolbars on mac, but I’d really like to avoid that.

We actually use the “copy image through ram” approach to embed the window in the browser in the web plugin. It’s surprisingly not as slow as you’d imagine it to be.

I suppose you could piggyback onto the same code and write an appropriate wxWidget object that would receive the framebuffer data and display it in a wx window. You’d have to be familiar with wxWidgets and coding for it in C++.

The other approaches you suggest sound plausible, too, but they will all require some additional low-level coding at some level.

I’d love to have a robust solution to this problem as well. :slight_smile:


Using ramImage.getData(), I got the image data, reformatted it in a bit of Cython code to proper RGB, and fed it into a wxBitmap. It seems like the RAM images lag a frame behind the actual renderings, which is a bit of an annoyance for my particular use case (I deactivate everything to idle the CPU most of the time), but tolerable. With a bit more work, I should have a usable solution.
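My actual formatting code is in Cython, but for anyone curious, a pure-Python equivalent of the reordering looks roughly like this. It assumes the input is 4-byte BGRA stored bottom-up (Panda’s usual RAM-image layout); if your texture format differs, the offsets change accordingly:

```python
def bgra_to_rgb(data: bytes, width: int, height: int) -> bytes:
    """Reorder a BGRA framebuffer dump into top-down RGB.

    Panda's RAM images are typically 4-byte BGRA and stored
    bottom-up; wx wants top-down RGB, so rows are also flipped.
    """
    row_bytes = width * 4
    out = bytearray(width * height * 3)
    i = 0
    for y in range(height - 1, -1, -1):  # walk rows bottom-to-top
        row = data[y * row_bytes:(y + 1) * row_bytes]
        for x in range(0, row_bytes, 4):
            # BGRA -> RGB, dropping the alpha byte
            out[i] = row[x + 2]      # R
            out[i + 1] = row[x + 1]  # G
            out[i + 2] = row[x]      # B
            i += 3
    return bytes(out)
```

The Cython version is essentially the same loop over raw pointers, which is where the speedup over a pure-Python (or getRamImageAs) conversion comes from.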

Performance: going from the RAM image to a bitmap on screen seems to take about 0.013 seconds for my window (about 900*900 px), and scales with area. Not good by any means, but usable. It looks like about half the time is spent in wx, and the rest is split between ramImage.getData() and my formatting code.

ramImage.getData() likely makes an extra copy, and so does my current use of wx.

I found some issues with the docs while working on this. Suppose I want to learn about that handy “getData” function? Well, it ain’t in the python docs: … 68fcf43989

Taking the returned value from that and calling dir() on it shows getData, as does some poking in the C++ reference. There simply is no ConstPointerToArrayunsigned class in the python reference.

Is there a simple way to pull a pointer to the actual data from the python ConstPointerToArrayunsigned wrapper? I know about the “.this” to get a pointer to the object (btw, where the heck is that documented?). I’d like to get around the copy I’m assuming getData() makes.

There’s no way to get a pointer from the Python level, but you can do it via C++ code easily: the CPTA_uchar return value from get_data() (this is the object that is called ConstPointerToArrayunsigned in Python) has a p() method which returns its actual pointer; or you can simply cast it directly to a void pointer.
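Once the raw address has been obtained on the C++/Cython side (via p() as described above), the Python side can wrap it without copying using ctypes. This is a sketch of that idea; for the demonstration the address comes from a buffer we own, standing in for the texture data:

```python
import ctypes

def view_raw(ptr: int, nbytes: int):
    """Wrap a raw memory address as a ctypes char array, without copying.

    In the real setup, `ptr` would be the address returned by
    CPTA_uchar::p() on the C++/Cython side.
    """
    return (ctypes.c_char * nbytes).from_address(ptr)

# Demonstration with a buffer we own in place of the texture data:
buf = ctypes.create_string_buffer(b"framebuffer")
view = view_raw(ctypes.addressof(buf), 11)
```

Because from_address() creates a view rather than a copy, changes to the underlying memory show up in `view` immediately; just be careful that the Panda object owning the data stays alive while the view is in use.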

Note, by the way, that there is also a Texture.getRamImageAs(‘RGB’) method, which will twiddle the byte order to RGB for you and then return the twiddled version. Of course this does make another copy.

Edit: you might be able to remove the one-frame latency by putting “auto-flip 1” in your Config.prc file. Also, if you’re copying the texture in a task, be sure to add the task with a sort value greater than 50 (so that it appears following igloop).
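For reference, the auto-flip change is a single line in Config.prc (variable name as given above):

```
# Flip the frame as soon as rendering finishes, rather than
# waiting for the start of the next frame, so the RAM image
# copied this frame is the current one.
auto-flip 1
```

And the copy task would be added with something like `taskMgr.add(copyFrame, 'copyFrame', sort=55)`, where 55 is just an arbitrary example value greater than igloop’s 50.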


Texture.getRamImageAs(‘RGB’) was slower than my conversion. It was handy for testing though, and would allow a pure python implementation.

I managed to get access to the raw data using .p() with Cython. It seems to be pretty responsive now: about 0.009 seconds to update the display (900*900), plus whatever overhead the copy-to-RAM render texture has. About 2/3 of the overhead is in wx.BitmapFromBuffer, which apparently makes a copy and a format conversion.

One odd thing though: the panda window and the wx version of the same image have (quite) different coloring.

Here’s the rendering from the panda window, with a section of the same frame from the wx window copied over it. They should be exactly the same color, but they aren’t.

That’s panda’s standard background gray. Assuming it’s supposed to be 50% gray, I think the version from wx has the correct colors, and the direct panda rendering’s values are off. It needs some more investigation though.

Anyway, this is good enough for my needs. Thanks for the help! Once I’ve given up on making any more improvements, and done some cleanup, I’ll post my code.