Feature Requests.


Like a multifile? I still don’t understand what it’s for. If I use one texture for 5 models, will I end up with 5 copies of the same texture in 5 different files?


No, rather one file, referenced in 5 egg files. Loading one of the eggs initiates the loading and caching of the texture, so the next time a model needs that texture, it’s loaded in no time.

  1. A collision system that can work with procedurally generated meshes like GeoMipTerrain, Craig Macomber’s shader-generated infinite terrains, and things like procedural trees and grass. Right now it does NOT; you apparently HAVE to have that whole “polyset keep descend” thing in an egg for the geometry to even be recognized as an “into” object.

  2. A better scheme for animated textures than egg texture cards. I want to put animated textures onto real meshes, not some useless square card divorced from everything. Ten years ago the folks at Flatland Online gave their users the ability to use GIF89a animations as textures, complete with transparency; it wasn’t a perfect implementation, but it was highly usable. Heck, I’d settle for being able to include the egg texture card’s “sequence node” in my Blender-generated eggs. Yes, I could use AVIs (maybe?), but more than a few would be a performance killer.

  3. Better keyboard and gamepad/joystick support. It seems ridiculous to have to drag in ALL of Pygame just to read my Xbox controller, not to mention those four totally unreadable numpad keys (“/”, “*”, “-”, “+”).

This is the fourth time I’ve posted in a long time, after being roundly castigated by a bunch of “Ivy League” “professional developers” over my favorite choice of text colors (1.0, 0.0, 1.0, 1.0) over black and my stunning lack of professionalism. GOD ALMIGHTY, what would they have said if my website provider allowed me to use my favorite font too (Mistral)? What I’m trying to do isn’t about money to me; it’s about supporting creativity, which moneygrubbing VR sites like Second Life care little about.


I actually have full collision working with my terrain now (did it this week); it wasn’t very hard, either. It’s less than ideal generation-speed-wise, but faster than the visible mesh generation (at least for the trees, which use collision tubes). I’ll push some public updates soon. I also got pre-generating and caching to bam files, plus async loading, working, so there’s no need to use slow/big egg files.

You can use 3D textures for this, or MovieTextures, or Pointer Textures. 3 ways seems like enough to me, but you can also use texture arrays (new!), sequence the shader inputs through different textures manually, or UV pan a set of frames in a texture. Personally I’ve only used 3D textures, but the other approaches should work fine.
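For the “UV pan a set of frames” option, the bookkeeping is just atlas arithmetic. Here is a sketch in plain Python (the function name is illustrative, not a Panda3D API); in Panda3D you would feed the returned offset and scale to setTexOffset/setTexScale on the node each time the frame changes.

```python
# Sketch of the UV-pan approach: all animation frames are packed into a
# cols x rows atlas, and each frame index selects a (u, v) offset plus a
# constant per-frame scale.  Pure arithmetic; applying the result to a
# texture stage is left to the caller.

def atlas_uv(frame, cols, rows):
    """Return (u_offset, v_offset, u_scale, v_scale) for a frame index."""
    frame %= cols * rows
    col = frame % cols
    row = frame // cols
    u = col / cols
    # Texture V runs bottom-up, while frames are usually authored top-down.
    v = 1.0 - (row + 1) / rows
    return (u, v, 1.0 / cols, 1.0 / rows)
```

For example, frame 0 of a 4x4 atlas maps to the top-left cell: offset (0.0, 0.75) with scale (0.25, 0.25).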

There are 3 features I would really like:

  • A method I can use to get collision geometry generated from visible geometry (like colliding with geoms generates; my Python version is slow, and knowing there is a C++ version with no way to call it makes me sad)
  • A method to get visible geometry (geoms) from collision geometry (like showing collisions generates; I want this for dumping my collision data to a mesh to generate nav meshes)
  • A method on ShaderAttrib to list all shader inputs (I need to find them to serialize the parts of ShaderAttribs I want to keep; currently I have to iterate through a list of all the shader input names I might use and check whether each one is present on every geom!)


It’s really a shame fog still doesn’t work with the shader generator; I’d say that’s top of my list of wanted things. :wink:

Volumetric fog would be even more amazing. I know I’m shooting for a lot here, but if someone could implement volumetric fog for the shader generator, I’d be willing to… kill… for that.

Kidding, kidding. I seriously would love for Panda3D to have some sort of fog that could be used with the shader generator, though.



So it IS possible to do collisions without resorting to some other “physics engine”? That seemed to be my only option: ODE and Bullet have been successfully used with Panda, and I was sure I’d have to do something like that after I traced Panda’s collision code into the C++ side of things. A place I cannot go; my skills with ANY of the C languages are S^^^fully bad. In fact, C, C++, and Windows are what ENDED my career as a “professional” MSDOS/FORTH programmer back in the late ’80s.

As for texture animation, 3D textures seemed to be the way I was going to go; they would let me apply sequences of either JPG or PNG files at a much higher quality level than the old GIF89a animation allowed. To apply them I will have to make my first attempt at actually writing a shader from scratch; wish me luck.
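For what it’s worth, the shader side of the 3D-texture approach mostly amounts to picking the right W coordinate into the texture each frame. A sketch of that arithmetic in plain Python (the function name is illustrative; in a shader the same expression would compute the third texture coordinate):

```python
# With an animation stored as the depth slices of a 3D texture, the
# current frame is addressed by a W coordinate in [0, 1].  Sampling at
# the centre of the slice (frame + 0.5) / num_frames avoids bleeding
# between adjacent frames when filtering.

def frame_w(time_s, fps, num_frames):
    """W texture coordinate for the animation frame active at time_s."""
    frame = int(time_s * fps) % num_frames
    return (frame + 0.5) / num_frames
```

So an 8-frame animation at 10 fps samples W = 0.0625 at t = 0 and wraps around after 0.8 seconds.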

Curious whether you know anything about my video-chip-related issues: Animate Dream’s slope texturing demos (even the early one) do not run correctly, sometimes not at all, and neither does your GEOCLIP grass. I’ve managed to determine that my ATI Radeon HD 3200 seems permanently stuck in single-precision mode. Any ideas? As I remember, there was a whole thread about a year ago concerning ATI cards.


Nothing in this post relates to the thread. I won’t answer any of it here to avoid further distracting a useful thread. Post such things elsewhere or not at all.




You can do fog as a post process (aka a filter), and that’s compatible with the built-in shader generator. Anyway, since there is interest, I made fog for my shader generator (not as a post process) as another example effect: github.com/Craig-Macomber/Panda … -Generator

Discuss my shader generator here if you wish to avoid derailing this thread: [What do you want in a shader generator/editor?]


I would like low level OpenGL access; namely the ability to create and color primitives such as triangles and points.


Have you taken a look at PyInstaller?


Thank you for the hint. Currently I have a solution which uses pdeploy and it seems to work (I haven’t done “massive” tests yet, though). If this won’t work I will consider your hint, but generally I prefer a pdeploy-based solution, if possible. :wink:


A deb file is a GNU ar archive containing a data.tar.gz file, which contains the filesystem (usr/lib/game/ directories etc). You can simply extract that from the result and use that.
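To show just how simple that container is, here is a stdlib-only sketch of pulling a named member (such as data.tar.gz) out of an ar archive. The reader skips GNU long-name handling and does minimal validation; `ar_pack` is only a demo helper so the reader can be exercised without a real .deb (both names are made up for this sketch).

```python
# A .deb begins with the ar magic "!<arch>\n", followed by members that
# each have a 60-byte ASCII header (name 16, mtime 12, uid 6, gid 6,
# mode 8, size 10, terminator "`\n") and are padded to 2-byte alignment.

AR_MAGIC = b"!<arch>\n"

def ar_member(data, wanted):
    """Return the bytes of member `wanted` from an ar archive."""
    assert data[:8] == AR_MAGIC, "not an ar archive"
    pos = 8
    while pos + 60 <= len(data):
        header = data[pos:pos + 60]
        name = header[:16].decode("ascii").strip().rstrip("/")
        size = int(header[48:58])
        if name == wanted:
            return data[pos + 60:pos + 60 + size]
        pos += 60 + size + (size & 1)  # members are 2-byte aligned
    raise KeyError(wanted)

def ar_pack(members):
    """Build a minimal ar archive from a {name: bytes} dict (demo only)."""
    out = [AR_MAGIC]
    for name, body in members.items():
        out.append("{:<16}{:<12}{:<6}{:<6}{:<8}{:<10}`\n".format(
            name, 0, 0, 0, "100644", len(body)).encode("ascii"))
        out.append(body)
        if len(body) % 2:
            out.append(b"\n")  # pad odd-sized members
    return b"".join(out)
```

On a real pdeploy .deb you would hand the extracted bytes to the stdlib tarfile module to unpack the usr/… tree. Note that some newer Debian packages use data.tar.xz instead of data.tar.gz.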

Or you can let pdeploy generate an archlinux package, which is really just a tarball with the filesystem and a package info file.

In any case, the binaries are compiled against such an old version of GLIBC that it should work on any recent Linux distribution (the binaries in the .deb will be the same as in the Arch package). You can just take out the binaries and redistribute them.


I’m using this currently, it is very useful, thank you!


1. I would like one function on NodePaths.


Basically, it would have the same effect as calling .lookAt() every frame.

Usage would be something similar to setCompass(), so maybe name it setFocus() or something similar?
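Until such a method exists, a per-frame task calling lookAt() does the job. The math it performs each frame amounts to the following sketch (plain Python; the helper name is made up, and the angle formulas assume Panda3D’s Z-up, Y-forward convention):

```python
import math

def hpr_towards(dx, dy, dz):
    """Heading and pitch (degrees) that aim the +Y axis along the
    direction (dx, dy, dz), with roll left at 0.  Assumes a Z-up,
    Y-forward coordinate frame, as Panda3D uses by default."""
    heading = math.degrees(math.atan2(-dx, dy))
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return (heading, pitch, 0.0)
```

A task would compute the target-minus-self vector each frame, pass it through something like this, and apply the result with setHpr().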

  1. Make the values returned by globalClock.getFrameTime() and globalClock.getDt() accessible as Python attributes: “x = globalClock.dt”.
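That pattern can be prototyped today with Python properties wrapping the existing getters. `FakeClock` below is a stand-in for Panda3D’s ClockObject (this is a sketch of the property pattern, not Panda3D code):

```python
# Expose getter methods as read-only attributes, the way the request
# describes (clock.dt instead of clock.getDt()).  The hard-coded values
# are placeholders; a real clock would update them every frame.

class FakeClock:
    def __init__(self):
        self._dt = 0.016
        self._frame_time = 1.25

    def getDt(self):
        return self._dt

    def getFrameTime(self):
        return self._frame_time

    # Read-only attribute views over the existing getters.
    dt = property(getDt)
    frame_time = property(getFrameTime)

clock = FakeClock()
```

With this in place, `clock.dt` and `clock.getDt()` return the same value.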


Somewhere on the forum there were some scripts to octree a mesh, but I can’t find the one that worked for me (I’ll have to test them all, I’m sure at least one worked)…

So my first request is to add a script like that to the SDK.
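The core of such a script is small. Here is a pure-Python sketch of octreefying a triangle soup: recursively bucket triangles into eight child cells by centroid until a cell holds few enough triangles. All names are illustrative; a real SDK tool would rebuild EggGroups or GeomNodes per leaf instead of returning dicts.

```python
def centroid(tri):
    """Centroid of a triangle given as three (x, y, z) tuples."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    return ((x0 + x1 + x2) / 3.0, (y0 + y1 + y2) / 3.0, (z0 + z1 + z2) / 3.0)

def octree(tris, bounds, max_tris=8, depth=0, max_depth=8):
    """bounds = ((minx, miny, minz), (maxx, maxy, maxz)).  Returns a
    nested dict: leaves carry 'tris', interior nodes carry 'children'."""
    if len(tris) <= max_tris or depth >= max_depth:
        return {"bounds": bounds, "tris": tris}
    lo, hi = bounds
    mid = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    buckets = [[] for _ in range(8)]
    for tri in tris:
        c = centroid(tri)
        # One bit per axis: above or below the cell midpoint.
        idx = (c[0] > mid[0]) | ((c[1] > mid[1]) << 1) | ((c[2] > mid[2]) << 2)
        buckets[idx].append(tri)
    children = []
    for i, bucket in enumerate(buckets):
        if not bucket:
            continue
        clo = tuple(mid[a] if (i >> a) & 1 else lo[a] for a in range(3))
        chi = tuple(hi[a] if (i >> a) & 1 else mid[a] for a in range(3))
        children.append(octree(bucket, (clo, chi), max_tris, depth + 1, max_depth))
    return {"bounds": bounds, "children": children}

# Tiny demo: ten small triangles spread along the main diagonal.
demo_tris = [((i / 10, i / 10, i / 10),
              (i / 10 + 0.01, i / 10, i / 10),
              (i / 10, i / 10 + 0.01, i / 10)) for i in range(10)]
demo_tree = octree(demo_tris, ((0, 0, 0), (1, 1, 1)), max_tris=4)
```

Splitting by centroid keeps every triangle in exactly one cell, which is what the collision traverser wants; clipping triangles against cell boundaries is a refinement a production tool might add.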

The second one is about GUI.
I’d like a SIMPLE method to place GUI elements in common locations like corners or the center.
I know there are the base a2d* nodes that I could parent the elements to, but I find them not very useful (or I’m doing it wrong; there’s nothing about them in the manual). What I would like are nodes like ‘top’, ‘bottom’, ‘center’, ‘left top’, etc. for every GUI element. The idea is to use these to align one element to another: if I align my button pane’s ‘bottom’ node with the screen’s (render2d or aspect2d) ‘bottom’ node, then the buttons will stick like glue to the bottom of the screen (not outside it) no matter what size the pane or the window is, or will be. If I aligned one button’s ‘right’ to another’s ‘left’ and vice versa, then the buttons would stay side by side even when one of them is moved, scaled, or rotated.
I tried to make wrappers for DirectGUI to work this way but failed ;(


If you want a button to stick to the bottom left corner of the screen, you can achieve that by setting the button frame to [0, <width>, 0, <height>] and reparenting it to a2dBottomLeft.

If you want to stick another button to the first button’s top right corner, you can do it like this:

button2 = DirectButton(
    frameSize = [0, <width2>, 0, <height2>],
    pos = Vec3(button1["frameSize"][1], 0, button1["frameSize"][3]),
)

I know it’s not as convenient as having the corners exposed as nodes, and I would be all for that.
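For what it’s worth, the arithmetic those corner nodes would encode is simple. Here is a pure-Python sketch (the helper name is made up, not DirectGUI API), assuming the usual aspect2d frame where x spans [-aspect, aspect] and z spans [-1, 1], and a frameSize given as (left, right, bottom, top):

```python
# Compute the pos that glues a chosen corner of a frame to the matching
# corner of the screen, so the element stays on-screen at any window
# size as long as pos is recomputed when the aspect ratio changes.

def corner_pos(frame_size, aspect, corner="bottomleft"):
    """Return an (x, y, z) pos placing `corner` of the frame flush with
    the same corner of an aspect2d-style screen."""
    left, right, bottom, top = frame_size
    x = {"left": -aspect - left, "right": aspect - right}
    z = {"bottom": -1.0 - bottom, "top": 1.0 - top}
    horiz = "left" if "left" in corner else "right"
    vert = "bottom" if "bottom" in corner else "top"
    return (x[horiz], 0.0, z[vert])
```

For a frame (0, 0.5, 0, 0.25) on a 1.5-aspect screen, "bottomleft" yields (-1.5, 0.0, -1.0) and "topright" yields (1.0, 0.0, 0.75). Parenting to the a2d* nodes achieves the same effect with the offsets taken relative to the corner node instead.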


I agree that automatically (or manually) octreefying a collision mesh for Panda’s collision system in code is a good idea. The Bullet physics engine does something like that already.

How about a sprite creation class? Right now I can make a single vertex and render it as a sprite, but a helper class would make it cleaner.


I was thinking more of a standalone command-line utility like egg-optchar that ships with Panda… but sure, if the loader can optimize (octreefy) a collision mesh at runtime, then that’s more than fine with me :smiley:


While we are at standalone tools: texture2txo, please. For now I am working around it by creating an egg file for each texture and converting with egg2bam with -txo set.