tip collection: good & bad stuff to do with panda3d

although you can find some hints in the forum when it comes to performance issues caused by hardware or driver problems, you can hardly find any information on how your code affects performance…
since that's the case, i want to start a small collection here:
it would be nice if everyone would write a few lines about their experiences with coding issues and how to avoid them…

first off, my personal experience while trying to load huge outdoor levels (a continuous world consisting of smaller terrain chunks)… result:

Good:
—panda has no problems rendering half a million or a million triangles, even faster than my modelling tools, so you can afford to be generous with detail.
—bam files are fast to load. a 2.3 MB bam file (containing a 256x256 terrain) still loads quickly, in about 0.09 seconds (a quick sketch follows below this list).
—culling does a great job; rely on it even with larger scenes. think about splitting up very huge stuff.
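
a minimal sketch of the bam round-trip i mean (the file names are just placeholders): process your terrain once, write it out as a .bam, and load that at runtime instead of re-parsing the source model on every start.

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# one-time preprocessing: load the (slow) source model and cache it as .bam
terrain = base.loader.loadModel("terrain_256x256.egg")  # placeholder file name
terrain.writeBamFile("terrain_256x256.bam")

# at runtime: loading the .bam is much faster than parsing the egg again
terrain = base.loader.loadModel("terrain_256x256.bam")
terrain.reparentTo(base.render)
```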

Bad things to do:
—splitting the terrain into very small pieces; the more pieces, the slower it gets. i got 11fps with 1024 8x8 terrain pieces… while keeping 70fps (my monitor refresh rate) when using a single block equivalent to the small patches.
—having a lot of nodes visible… the more nodes, the more work for the cpu.
if you have lots of nodes but only a few visible, it should be ok (a quick way to check your node counts follows below).
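
if you want to know how many nodes/geoms you actually have under render, panda can print a summary for you; a tiny sketch (assuming a standard ShowBase setup, and using the sample model that ships with panda3d):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()
scene = base.loader.loadModel("models/environment")  # sample model from the tutorial
scene.reparentTo(base.render)

# prints node, geom and vertex counts for everything below render --
# handy for spotting the "too many small nodes" problem described above
base.render.analyze()
```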

as far as i can tell… panda is damn fast, but when it comes to a lot of objects it seems like python starts eating up the cpu; workarounds are often easy or not even necessary.
now, feel free to add your own experiences about how to get the best out of panda3d.

Load a model from disk only once, not every time you use it.
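
a minimal sketch of what that means in practice (the model path is just an example): load once, then place copies of the already-loaded NodePath.

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# load the model from disk a single time...
tree = base.loader.loadModel("models/tree")  # example path

# ...and reuse that NodePath for every copy placed in the scene
for i in range(100):
    placed = tree.copyTo(base.render)   # or tree.instanceTo(...) to share the geometry
    placed.setPos((i % 10) * 5.0, (i // 10) * 5.0, 0)
```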

–You can see exactly what is consuming most of the time by using PStats.
–The bottleneck could be on your CPU (in a simulation, or when there is collision detection for many, many objects),
–or on your GPU (too many vertices, or too many small objects/nodes that move together as a unit or don't move at all) --> you should flatten those nodes into a single one to maximize the number of vertices sent to the GPU in a single batch (see the sketch after this list).
–Too many lights are not good. Limit the use of lights and bake/freeze any static lights' effects onto textures.
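
two of the points above in code form (a rough sketch, nothing tuned for a particular scene): connecting to PStats, and flattening a pile of static nodes so they go to the GPU in as few batches as possible.

```python
from panda3d.core import PStatClient
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# connect to a running pstats server (start the `pstats` tool first);
# alternatively, put `want-pstats #t` into your Config.prc
PStatClient.connect()

# example: many small static chunks parented under one node...
chunks = base.render.attachNewNode("terrain-chunks")
for i in range(64):
    piece = base.loader.loadModel("models/environment")  # placeholder geometry
    piece.reparentTo(chunks)
    piece.setPos(i * 10.0, 0, 0)

# ...flattened into as few nodes/Geoms as possible
chunks.flattenStrong()
```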

just made some tests with texture-loading speed.
i tested 24bpp rgb images at 512x512 (random noise, smooth colors and a ground texture from planeshift), in png, jpeg, tif, bmp and tga formats.

i loaded the texture once, started a realtime clock (not panda's), reloaded the texture 4 times from the harddisk (that's what i needed), and stopped the clock.
the fastest are uncompressed png and tiff, at about 0.14 seconds.
the most efficient is highly compressed png: with the real texture i got ~0.17 seconds with minimum file size.
uncompressed tifs are fast too, but the file size is large (up to 10 times that of a png).

jpeg, compressed tifs, bmp and tga are all significantly slower, by ~30%.

–use png textures (at least when using normal 24bpp); they are smaller and faster to load from disk, especially when you're dealing with a lot of textures being read from disk.
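
for reference, roughly how such a timing test can be done (a sketch; the file names are placeholders, and keep in mind the OS file cache makes the first read slower than the repeats, which is why i loaded once before starting the clock):

```python
import time
from panda3d.core import Texture, Filename

def time_texture_reads(path, repeats=4):
    # read the image from disk into a fresh Texture each time,
    # so panda's texture cache doesn't hide the disk/decode cost
    start = time.time()
    for _ in range(repeats):
        tex = Texture()
        tex.read(Filename.fromOsSpecific(path))
    return time.time() - start

for name in ("ground.png", "ground.jpg", "ground.tif"):  # placeholder files
    print(name, time_texture_reads(name))
```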

If you ever find that python becomes a bottleneck, you could always try Psyco.

Psyco is a Python extension module which can massively speed up the execution of Python code. Often Python code with Psyco can execute faster than the equivalent code in native C! It works kind of like a JIT compiler, so the trade-off is memory for speed.
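
the usual way to switch it on is just two lines at the very top of your main script (note that Psyco only runs on 32-bit x86 builds of Python 2, so treat this as a sketch for those setups):

```python
import psyco

# compile functions just-in-time as they get called from here on
psyco.full()

# ...the rest of your game / ShowBase setup follows as usual
```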