New version of Panda Tools?

Any idea/timeframe on when a new version of the Panda binaries will be available for download? Specifically, a version that supports BAM 4.8?


Are you edyensid???

Well, 4.8 is old now; I’m looking for the version that supports BAM 4.11. Why does Disney Imagineering get the newer BAM converters first? :laughing:

I wish they’d make it support 4.11 bam files… the newest version of Toontown is using those.

They should make a forum just for Toontown rippers; I bet most people here were referred to Panda3D by the Toontown Central forums :unamused:

A forum just for Toontown rippers? But the only thing anyone would ever post there would be “when can we get support for bam version x?” :slight_smile:

It is true that you can use Panda to view the Toontown assets, but I had hoped that people would find grander uses for the software. For instance, for developing your own game, or for learning about 3-D graphics engines.

When you ask the question “Why does Disney get the latest Bam version first?” you demonstrate a very limited understanding of what Panda is and how the CMU team and the Disney VR Studio are working together. I invite you to read some of the documentation on the Panda3D website, in particular the FAQ.

But no hard feelings. :slight_smile: If you want to be able to decode the latest Bam version, your two choices are to (a) wait for the next CMU release, or (b) get the current version of Panda3D from the SourceForge CVS repository and build it yourself. Note that the latter choice is a serious undertaking if you have never done this sort of thing before.


No, it is a great engine and I respect that. However, I can’t get any models for Panda (other than the limited resources), so I just extract mine from Toontown… I also have a few questions about how you do some Toontown things in Panda…

First, how do you get the sky without using a skybox? How does Toontown do it?
Second, with the fish… how do you have a 2-D object be a drawing surface for 3-D (same with the Toons’ heads on the buttons in the Pick-a-Toon screen)?
Finally, how come I can’t get LevelEditor to work? It’s calling for DNAStorage…

EDIT: I forgot about this… how do they do custom driving controls? I can’t find the messenger.accept events for the cursor keys being pressed/released.

There are lots of ways to do a sky; I don’t know what you mean by a skybox, but if you mean a big box over the camera with the sky painted on the inside of it, well, that’s a fine way to do the sky, and you can certainly do it that way in Panda.

There’s nothing special about the 2-d scene graph, except that it is drawn with an orthographic camera and is drawn after the 3-d scene graph. You can put 3-d objects under render2d and they will appear on top of your 2-d GUI objects. You might need to call nodePath.setDepthTest(1) since by default the 2-d graph is not depth-tested.
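A minimal sketch of that last point, using current Panda3D API names (the ‘smiley’ model is a stock sample that ships with the distribution):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# Parent a 3-d model into the 2-d scene graph; it is drawn after
# (on top of) everything under render.
head = base.loader.loadModel('models/smiley')   # stock sample model
head.reparentTo(base.render2d)
head.setScale(0.2)

# render2d is not depth-tested by default, so turn depth testing back on
# for this node if its geometry needs to occlude itself correctly.
head.setDepthTest(True)
head.setDepthWrite(True)

base.run()
```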

The LevelEditor code in direct is designed to edit Toontown levels only, and will not run without additional Toontown-specific code that is not part of the Panda3d release. It is provided just as example code; it is not itself useful.

There are two kinds of driving controls in the Panda3d distribution: there are objects like Trackball and DriveInterface, which are C++ nodes that watch the mouse input directly, and there are higher-level objects written in Python like direct/src/showbase/ that listen for keypress events. You can also write your own driving control. The C++ nodes are the easiest to use without any additional setup.
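For the Python-side flavor, here is a hedged sketch of arrow-key driving via keypress events and a per-frame task (current API names; the speeds are arbitrary):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()
base.disableMouse()   # turn off the default Trackball control

# Track which keys are currently held, via paired down/up events.
keys = {'forward': False, 'left': False, 'right': False}

def set_key(name, value):
    keys[name] = value

for event, name in (('arrow_up', 'forward'),
                    ('arrow_left', 'left'),
                    ('arrow_right', 'right')):
    base.accept(event, set_key, [name, True])
    base.accept(event + '-up', set_key, [name, False])

def drive(task):
    dt = globalClock.getDt()
    if keys['left']:
        base.camera.setH(base.camera, 90 * dt)    # turn left
    if keys['right']:
        base.camera.setH(base.camera, -90 * dt)   # turn right
    if keys['forward']:
        base.camera.setY(base.camera, 10 * dt)    # move along the camera's own forward axis
    return task.cont

base.taskMgr.add(drive, 'drive')
base.run()
```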


Ok, I think I put everything in my weird terms, sorry…

Here’s the translated version of each question:
By ‘sky’, I mean:
When the engine is finished with the previous screen, it clears it to a color (set by base.setBackgroundColor)
Then it draws a 2-D image for the sky
After that, it draws the 3-D objects (Gag Shop, trolley, etc.)
Finally, it draws the GUI (Chat Buttons, Friends List, Shticker Book, Laff Meter)
In that process, how do you draw the sky?

I figured out the fish/pick-a-toon, but is there a way to create a secondary camera and have it draw onto a texture? (I want to make a portal in my game; it’s one of my favorite things.)

Got better at making levels through the Scene Editor; I don’t need the LevelEditor stuff anymore…

I’ll look at the Custom Driving later…

BTW, do you guys have an IRC Chat Channel?

If you want to put a 2-d image for the sky, you could create a GraphicsLayer that would be drawn first (give it a low sort index), and set up an orthographic camera and its own scene graph. It would be just like render2d–see ShowBase.makeCamera2d()–except it would be drawn first (behind everything) instead of last (on top of everything). Then you could put whatever 2-d stuff you like in this scene graph, and it would be drawn behind everything else in the regular 3-d scene graph.

Another, simpler approach would be to parent your sky geometry to the camera, far enough away that it is behind everything in your scene.
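A sketch of the parent-to-the-camera approach, using current Panda3D API names (‘models/sky’ is a hypothetical sky-dome model):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

sky = base.loader.loadModel('models/sky')   # hypothetical sky-dome model
sky.reparentTo(base.camera)   # follows the camera, so it never gets closer or farther

# Draw it first and keep it out of the depth buffer, so everything
# in the regular scene appears in front of it.
sky.setBin('background', 0)
sky.setDepthWrite(False)

# Without this, the sky would rotate along with the camera; a compass
# effect keeps its orientation fixed relative to the world.
sky.setCompass()

base.run()
```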

Sorry, no IRC for us.

Oh yes–render to texture can be done by calling base.win.makeTextureBuffer(), which returns an offscreen buffer that can have GraphicsLayers and DisplayRegions just like a window. You will need to set up a camera and a scene graph to render into this buffer, just like ShowBase does for the main window. Then buffer.getTexture() returns a texture object that you can apply to geometry in your scene, so the output of the rendering appears on an object in your scene.

panda/src/doc/howto.use_multipass.txt describes this process in a bit more detail. You can find that document in the CVS tree.


hmm… How do you create your own scene graph? I can’t do mySceneGraph.reparentTo(render.getParent())

Also, how do I render to a texture? I’m going to do a TV + Camera system:

TelevisionModel = loader.loadModel('models/television.egg')

survailenceCamera = render.attachNewNode(‘survailenceCamera’)
survailenceCamera.setPos(5, -5, 10)

myBuffer =
myCamera = base.makeCamera3d(myBuffer)
TelevisionModel.setTexture(myBuffer.getTexutre(), 1)

Do I do that?

That’s pretty close. The actual code would be more like this:

televisionModel = loader.loadModel('models/television.egg')

surveillanceCamera = render.attachNewNode('surveillanceCamera')
surveillanceCamera.setPos(5, -5, 10)

myBuffer = base.win.makeTextureBuffer('surveillanceBuffer', 256, 256)
myCamera = base.makeCamera(myBuffer)

televisionModel.setTexture(myBuffer.getTexture(), 1)

But this will apply the texture to the entire television model. Presumably you meant for the image to appear just on the screen; this means you need to find the screen node of the television model (whatever it’s called in your model) and do a setTexture() on that node instead of the above setTexture() statement.

I’m not advocating stealing Toontown models, but just as an example, if you should happen to be using a television model you found in the Toontown tree, the code would look something like this:

screen = televisionModel.find('**/toonTownBugTV_screen')
screen.setTexture(myBuffer.getTexture(), 1)

This particular TV model from Toontown wasn’t designed for applying a texture to the screen, so you will have to adjust the UV’s to correct for this (the following code requires the latest Panda). The UV adjustment values are based on the current toontown models, and may change in the future as new patches are downloaded.

screen.setTexScale(TextureStage.getDefault(), 9.1429, -9.1429)
screen.setTexOffset(TextureStage.getDefault(), -8.0714, 7.9286)


Ohh… Well, I just did that. I also downloaded a few models. But when I play them, they just scrunch up into a pile of polygons on the floor. I get a few error messages, one saying “FFTW library is not available, cannot read compressed data”, the other saying “part has something not in anim” or “anim has something not in part”. Any suggestions for fixing that?

How did you take movies of Panda3D and put them in the video gallery? I tried Fraps, but it didn’t work.

Finally, how do I set the icon of a Panda3D window? I know how to set the title, though:

winProp = WindowProperties()
winProp.setTitle('Window title')

The Toontown animated models are stored compressed. You will need to compile your own version of Panda that uses the free FFTW library if you want to decompress them.

One way to make movies of a Panda program is to use base.movie(), e.g.:

base.movie(duration = 15, format = 'bmp')

which will run the program in slow-motion for the next 15 ‘seconds’ while it dumps out a bmp file for each frame. Then you can load up the bmp files in your favorite movie creator program.


seems like this should be easy but…

Say I’ve done as suggested above and created a texture buffer which is outputting the render view of a camera. Now I want to freeze on a frame and stop having the camera update that texture buffer. How might one go about doing that? Can I disconnect the camera and texture buffer? Can I get a copy of a single instance of the texture buffer?


Yes. You can temporarily stop rendering into your texture with:

myBuffer.setActive(0)

You can permanently stop rendering into your texture and release the graphics resources associated with your offscreen buffer with:

base.graphicsEngine.removeWindow(myBuffer)

In both cases, the texture will remain for as long as you still have a pointer to it, with the contents it held as of the last time it was rendered.


Slight clarification: I’d like to continue using the buffer with other textures… effectively, I want to take freeze-frame shots from the texture buffer and then continue using the texture buffer for another object’s texture.
I could release the buffer and redefine it, but if there were a way to grab a single frame from the camera’s texture buffer, that might be more ideal…

No, we don’t have an interface to do that; sorry. The problem is that on some hardware, the texture memory may be shared directly with the framebuffer memory for your offscreen buffer, so you can’t redirect the buffer to a new texture without invalidating the original texture. You’re probably best off recreating the buffer each time.

Say I want to render two cameras to the same window. Is there a way I can set the order in which the cameras appear? I want something kinda like this order:

Background color set by base.setBackgroundColor(r, g, b, a)
First camera
Second camera

How would I do that?
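For what it’s worth, the draw order of multiple cameras on one window is governed by the sort values of their display regions. Here is a hedged sketch with current Panda3D API names (the sort value is arbitrary; only its relative order matters):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Camera

base = ShowBase()

# The default camera's display region has sort 0 and clears to the
# background color, so it is drawn first.

# Add a second full-window display region that is drawn on top of it.
region2 = base.win.makeDisplayRegion()
region2.setSort(10)                  # higher sort value = drawn later = on top
region2.setClearDepthActive(True)    # clear depth so this view isn't occluded by the first

cam2 = base.render.attachNewNode(Camera('cam2'))
region2.setCamera(cam2)

base.run()
```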