Feature Requests.

This isn’t a big deal, but I don’t see why we have setPosHpr() and setPosHprScale() but no getPosHpr() and getPosHprScale().

We don’t have a data type that can hold that combined information (position, rotation, and scale together), so there’s no natural return value for such a method.
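In the meantime, something like this can stand in on the Python side (a quick sketch; the helper names are made up, they’re not NodePath methods):

```python
# Quick sketch: Python-side stand-ins for the missing getters.
# get_pos_hpr / get_pos_hpr_scale are hypothetical helpers, not NodePath methods.
from panda3d.core import NodePath

def get_pos_hpr(np, other=None):
    """Return (pos, hpr), mirroring setPosHpr's argument order."""
    if other is not None:
        return np.getPos(other), np.getHpr(other)
    return np.getPos(), np.getHpr()

def get_pos_hpr_scale(np, other=None):
    """Return (pos, hpr, scale), mirroring setPosHprScale."""
    if other is not None:
        return np.getPos(other), np.getHpr(other), np.getScale(other)
    return np.getPos(), np.getHpr(), np.getScale()

np = NodePath("demo")
np.setPosHprScale(1, 2, 3, 45, 0, 0, 2, 2, 2)
pos, hpr, scale = get_pos_hpr_scale(np)  # Point3, VBase3, VBase3
```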

Recast/Detour support http://code.google.com/p/recastnavigation/

Recast - automatically generates navigation (path-finding) meshes from input geometry

Detour - provides path-finding queries and basic steering behaviors on those meshes


On an entirely different note, also:

PandaAI needs a real code cleanup, mostly the Blender exporter… (it’d be possible to just use the egg pipeline instead of .csv files exported from a 3D modeler)

I’d love for a magic fairy who knows C++ to grant me all of these wishes :smiley: I’m aware it may be unlikely though :slight_smile:
~powerpup118

Xidram and I made a bare-bones Recast & Detour implementation a long time back. I think it’s on panda3d_2_0_branch, in the directory panda/src/navigation.

Just wondering, how good is PandaAI? I’m wondering whether I should give it a try or write my own AI.

I’d like asynchronous loading of audio, video, images and fonts.

What would also be quite cool is the ability to load and/or assign sound files to specific AudioManagers. As of now you can only load a sound through an AudioManager, and from then on the sound is bound to that manager.
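For reference, here’s roughly what the current coupling looks like (a minimal sketch; the file path is just a placeholder):

```python
# Minimal sketch of the current behavior: a sound is created through one
# AudioManager and stays bound to it. The file path is just a placeholder.
from panda3d.core import AudioManager

music_mgr = AudioManager.createAudioManager()
sfx_mgr = AudioManager.createAudioManager()

sound = music_mgr.getSound("music/theme.ogg")  # bound to music_mgr
sound.play()

# There's no supported call to reassign `sound` to sfx_mgr afterwards;
# the request is an API for moving a loaded sound between managers.
```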

How about a Metaball/Metaballs node?

Some people don’t seem to be happy with Panda’s particle system.
How’s this? spark.developpez.com/index.php?p … es&lang=en

I’m not using Panda particles because the particle panel is a bit… well, cryptic. Also, textured, transparent particles near the camera kill the framerate on my PC (a drop from 30–60 to 1–12 fps).

If this Spark thing has an editor (with some docs) and can somehow make the particles render faster, then I’m sure I could find some use for it.

I believe the frame rate drop you’re seeing comes down to fill rate: large transparent particles near the camera cover most of the screen, and every overlapping layer has to be blended per pixel. Because of that, when your camera gets close to the particle emitter, your frame rate goes down. Long story short, it’s your GPU’s fault, not Panda’s. :slight_smile:

I think some particle systems let you disable depth writing on the particles and rely on blending instead, though.
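In Panda3D terms that tweak would look something like this (a sketch on a placeholder particle node, not a tested drop-in recipe):

```python
# Rough sketch: stop writing depth for the particle subtree and use additive
# blending, so overlapping particles don't need depth-sorted alpha.
# `particles_np` is a placeholder for the node holding the particle effect.
from panda3d.core import NodePath, ColorBlendAttrib

particles_np = NodePath("particles")
particles_np.setDepthWrite(False)   # particles no longer occlude each other
particles_np.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd))
particles_np.setBin("fixed", 0)     # draw after the opaque geometry
```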

That’s true about my GPU. But I’ve still played games on this PC where a single smoke emitter doesn’t push the FPS from 60 down to 5.

Close particles are large in screen space but have very little detail. Render the particles into another buffer at reduced scale, then enlarge and overlay the result, as sketched below. Also, when possible, keep the particles as small as possible (including the transparent parts) and keep the particle count low. Sometimes more complex shaders can look just as good with fewer particles, and render faster.
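Here’s a rough sketch of that reduced-scale idea with stock Panda3D buffers (the buffer size, the sort value, and the `particle_root` setup are illustrative assumptions, not a tested recipe):

```python
# Rough sketch: render particles into a half-resolution offscreen buffer,
# then overlay the enlarged result on the main window. `particle_root`,
# the buffer size, and the sort value are illustrative assumptions.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import NodePath, TransparencyAttrib

base = ShowBase()

buf = base.win.makeTextureBuffer("particle-buf",
                                 base.win.getXSize() // 2,
                                 base.win.getYSize() // 2)
buf.setSort(-100)  # render the buffer before the main window each frame

particle_root = NodePath("particle_root")   # reparent your emitters here
buf_cam = base.makeCamera(buf)
buf_cam.reparentTo(base.cam)                # follow the main camera
buf_cam.node().setScene(particle_root)      # this camera sees only the particles

card = buf.getTextureCard()                 # fullscreen card textured with the buffer
card.reparentTo(base.render2d)              # enlarged back up to window size
card.setTransparency(TransparencyAttrib.MAlpha)

base.run()
```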

Very intriguing. Certainly worth looking into.

Looks like C++ knowledge will be needed here to write the glue code that makes the library use Panda’s renderer; it’s not as simple as just generating Python wrappers and hooking it into Panda’s task system, etc.
A particle config file writer/reader would also be a must, I think.
So if any C++ coder is interested, let us know. I could probably write a version of the particle panel for that library once that’s done.

It seems that the engine should be able to load BVH or other skeletal-animation data formats, and could be integrated with motion-sensing devices like the Kinect. Do you think so? :open_mouth:

That would be sweet! Especially since I already have a motion-capture setup that uses BVH and the Kinect APIs.

I have what I hope is a very simple request: could we make GeoMipTerrain.setBorderStitching() take an integer instead of a bool, and have the detail level at the edge of the tile match that integer?

There is a bug in its current behavior anyway. As it functions right now, it does not use detail level 0 as stated in the docs. In fact, judging by how few vertices are on the edge when I use border stitching, I bet the detail level is equal to getMaxLevel().
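For reference, here’s the call in question, with the observed behavior noted in comments (the heightfield path is a placeholder):

```python
# Sketch of the current API; the heightfield path is a placeholder.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import GeoMipTerrain

base = ShowBase()
terrain = GeoMipTerrain("tile")
terrain.setHeightfield("heightfield.png")
terrain.setBorderStitching(True)  # bool today; the request is an int detail level
terrain.getRoot().reparentTo(base.render)
terrain.generate()

# Observed behavior: tile edges come out at getMaxLevel() (the coarsest level,
# fewest vertices), not at level 0 as the docs state.
print(terrain.getMaxLevel())
```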

You’re right, it is max_level, and not 0.

I remember considering making it an integer instead of a boolean, but I don’t remember why I decided against it. I think it would have made it significantly more difficult to avoid seams in some special cases, or something of the sort.

Why not allow Intervals to run with a given delay between updates, rather than each frame? We have doMethodLater, but nothing similar for Intervals, which are basically tasks too.

For intervals that take a long time to finish, it seems pointless and a waste of resources to change the value (say, a color scale) by 0.0001 each frame instead of by 0.006 each second.

Of course we can use tasks instead, but they are not as convenient as Intervals; that’s why Intervals exist in the first place.
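As a stopgap, you can scrub an interval from a doMethodLater task so it only updates at a chosen period; a rough sketch (the model, duration, and 0.1 s period are arbitrary choices):

```python
# Rough sketch: drive an interval at a fixed period with doMethodLater instead
# of per-frame updates. The model, duration, and 0.1 s period are arbitrary.
from direct.showbase.ShowBase import ShowBase
from direct.interval.LerpInterval import LerpColorScaleInterval
from panda3d.core import ClockObject

base = ShowBase()
model = base.loader.loadModel("smiley")  # stock sample model
model.reparentTo(base.render)

ival = LerpColorScaleInterval(model, 10.0, (1, 0, 0, 1))
clock = ClockObject.getGlobalClock()
start = clock.getFrameTime()

def step_interval(task):
    t = clock.getFrameTime() - start
    ival.setT(min(t, ival.getDuration()))  # scrub the interval to elapsed time
    return task.done if t >= ival.getDuration() else task.again

base.taskMgr.doMethodLater(0.1, step_interval, "coarse-interval")
base.run()
```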