List of known bugs in 1.5.3

maxegg plugin :

  1. no overwrite confirmation
  2. pview not spawned

Hmm, that’s plenty of bugs in the max exporter. Pity that there is no one to maintain it now…

Also, I noticed EggNurbsSurface is not exposed to Python, while EggNurbsCurve and EggSurface are; was this intentional?

No, that’s an oversight. Please feel free to correct it. Thanks!

David

Where is the source code for the new Max Egg exporter?

On the 1.5.3 download page, in either the complete source code download or the piecewise SourceForge source packages, you can find the relevant stuff in the pandatool/src/maxegg directory.
The actual CVS version is here:
panda3d.cvs.sourceforge.net/pand … rc/maxegg/

If you would like to volunteer to try to fix the bugs (or at least locate them, or guess where they would be), that would be great. The author (Josh) left shortly after creating the new exporter, so afaik there’s no one to maintain it (I myself am not running Windows and don’t have a Max copy, unfortunately).

Really? If you do per-triangle MIP map selection, at least NEAREST_MIPMAP_NEAREST filtering should be faster than just doing NEAREST on the top-level texture image, because most texture reads would be from a smaller texture image, and thus cache better.
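For illustration, here is a toy Python sketch of the mip-level selection behind NEAREST_MIPMAP_NEAREST (the function name and signature are my own invention, not Panda code): a triangle that minifies the texture reads from a proportionally smaller mip image, which caches better.

```python
import math

def select_mip_level(texels_per_pixel, num_levels):
    """Pick the nearest mip level for a given minification ratio.

    texels_per_pixel: how many base-level texels one screen pixel
    covers along its longer axis (>= 1 means minification).
    """
    if texels_per_pixel <= 1.0:
        return 0  # magnifying: sample the top-level image
    # Each mip level halves the resolution, so the level is log2 of
    # the footprint, rounded to the nearest integer and clamped.
    level = round(math.log2(texels_per_pixel))
    return min(level, num_levels - 1)
```

A triangle covering 8 base-level texels per pixel would read from level 3, an image one-eighth the width of the original, so most reads hit a much smaller working set.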

Also, software renderers are interesting in that they scale well with multiple cores. Have you tried splitting your mini renderer across multiple cores?

This is true, and we do this. I should have said that bilinear or trilinear filtering is too expensive for the CPU (though both are selectable options).
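To illustrate the cost: one bilinear sample needs four texel reads and three interpolations, all in per-pixel CPU arithmetic. A toy sketch (hypothetical, not tinydisplay’s actual code), sampling a grayscale grid:

```python
def bilinear_sample(tex, u, v):
    """Sample a 2D grid (list of rows of floats) at fractional (u, v)
    in [0, 1].  Four texel reads plus three lerps per sample -- this
    per-pixel arithmetic is what makes bilinear filtering expensive
    in a software renderer."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

A nearest-neighbor fetch, by contrast, is a single read with no blending, which is why it remains the affordable default on the CPU.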

Panda does have a mode in which the draw calls are performed in a separate thread from the main application and most scene graph operations. This appears to work fine in conjunction with the software renderer, as well as with hardware rendering, though it is especially useful with the software renderer, of course. This mode is still somewhat experimental, though, and isn’t stable enough for prime time; it’s been one of my back-burner projects for a while.

I hadn’t yet thought of subdividing the renderer itself across multiple cores. It’s a fine idea, though some care will be necessary to avoid race conditions on the z-buffer. Maybe each core should draw into its own buffer, and then the results could be combined afterwards? This will become especially appealing when we start to see really high-parallel machines, quad-core or higher, hit the mainstream.
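The per-core-buffer idea could be sketched as a depth-based composite step: each core rasterizes into its own color and depth buffer, and a final pass keeps the nearest fragment per pixel. A minimal sketch with hypothetical names:

```python
def composite_by_depth(layers):
    """Merge per-core (color, depth) buffer pairs into one frame.

    For each pixel, keep the fragment with the smallest depth value,
    so no core ever races another on a shared z-buffer.  Each layer
    is a (color_buf, depth_buf) pair of equal-length flat lists.
    """
    n = len(layers[0][0])
    color = [None] * n
    depth = [float("inf")] * n
    for color_buf, depth_buf in layers:
        for i in range(n):
            if depth_buf[i] < depth[i]:
                depth[i] = depth_buf[i]
                color[i] = color_buf[i]
    return color, depth
```

The combine pass parallelizes trivially too, since each output pixel is independent. Note that this sort-last scheme only works for opaque geometry; transparency still needs a consistent draw order.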

David

16-core processors are going to be available in the next couple of years. Intel Research demonstrated an 80-core CPU last year…

Yes, I would tile the entire framebuffer. If you want even distribution of effort, then splitting the framebuffer into a number of tiles (20 or more) will allow you to schedule tiles across a thread pool. You would have to sort all primitives by which tiles they affect up front, though, which isn’t terribly costly with a grid and a per-cell list of primitives.
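That up-front sort could look roughly like this (a hypothetical sketch: bin each triangle into every grid cell its screen-space bounding box touches, keeping a per-cell list of primitive indices):

```python
def bin_triangles(tris, width, height, tile):
    """Sort primitives by which screen tiles they affect.

    tris: list of triangles, each a list of three (x, y) vertices in
    screen space.  Returns a dict mapping (tile_x, tile_y) to the list
    of triangle indices whose bounding boxes overlap that tile.
    """
    cols = (width + tile - 1) // tile
    rows = (height + tile - 1) // tile
    bins = {(tx, ty): [] for ty in range(rows) for tx in range(cols)}
    for i, tri in enumerate(tris):
        xs = [p[0] for p in tri]
        ys = [p[1] for p in tri]
        # Clamp the bounding box to the grid, then touch every cell.
        x0 = max(0, int(min(xs)) // tile)
        x1 = min(cols - 1, int(max(xs)) // tile)
        y0 = max(0, int(min(ys)) // tile)
        y1 = min(rows - 1, int(max(ys)) // tile)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(i)
    return bins
```

Each tile’s list can then be handed to a worker thread, which rasterizes only the primitives that can possibly affect that tile.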

In fact, the software and hardware rendering architectures are slowly converging on something like this model. The Xbox, for example, uses predicated tiling if your title uses anti-aliasing, and it works somewhat like the above. Also, Intel integrated graphics does the same thing, but with smaller tiles.

And Dell is now selling quad-core laptops.

I’m just sayin’ :)

I think ray tracing will power next-next-gen graphics, and it will be performed on more and more general-purpose GPUs: basically CPUs with a better architecture than x86. Is there anything on the back burner about implementing a software-based ray tracer for Panda3D, so that we could position Panda3D at the forefront of the ray tracing revolution that’s coming?

Well, I’m not sure how soon the ray tracing revolution will arrive. But fundamentally, Panda is all about managing render state and triangles. Anyone who is so inclined could bolt a ray-tracing renderer on as a separate GSG, similar to the way I just bolted on tinydisplay.

But the short answer is, no, to my knowledge no one’s working on ray tracing for Panda. I certainly have no objections to anyone wanting to pursue that interest, but for myself, I think a good threaded pipeline is probably a higher priority right now in terms of meeting the needs of upcoming hardware. Followed closely by a really excellent auto-shader generator.

David

“Followed closely by a really excellent auto-shader generator.”

I was thinking about that too in the past weeks. I don’t really think that fully auto-generated shaders are a good idea. You lose lots of flexibility, and that’s what shaders give you in the first place. Having good shaders is as important as having good models and textures. The problem with Panda3D shaders is that they are too low-level.

To see what i mean look at ogre3d:
ogre3d.org/wiki/index.php/Compositor
ogre3d.org/wiki/index.php/Cu … ow_Mapping

Within one “shader” they define multiple render buffers and the way they are combined.

It would be awesome if we could create a shader system that would be more powerful in its constructs and include basic building blocks.

I disagree. I think that modern forward renderers (like, say, CryEngine, id Tech 5, or even RenderMan) have at this point out-performed ray tracers even for reflected image quality; you can easily do reflections using cube maps and refraction with render targets, for example.

However, I believe that graphics will become more and more programmable. If you want to implement a ray tracing renderer on top of that programmable hardware, then you’re welcome. However, I think that high-performance future renderers will have more to do with pre-computed radiance transfer, spherical harmonic textures, and volume lighting environments than they will have to do with ray-tracing. Ray tracers don’t do anything for global (indirect) lighting, after all.

Will the programming environment be x86 or not? If Intel gets their way with Larrabee, then yes, it will be x86. If NVIDIA or ATI get their way, no, it will be some other ISA. Currently, Larrabee whitepapers talk about 16-40 cores, although they are in-order CISC with texture fetch and Z buffer acceleration hardware, whereas NVIDIA and ATI shipping graphics cards have hundreds of cores that are SIMD. It’ll be interesting to see how it all plays out in the end!

Modern forward renderers generate shadows very badly, while ray tracers generate shadows very well. I think the hope of getting easy and cheap shadows into 3D will drive ray tracing.

Guys, we should be getting 1.5.4 out now. I’ve fixed most of the remaining bugs, but the maxegg bugs still remain. Although workarounds have been found, I really think we need to get them fixed in 1.5.4. What shall we do about it, now that its only maintainer and author has left?
Is anyone up for taking a poke at the maxegg source code, or should we contact Josh or so?

… I would fix it, but I’m not a windows user, and, don’t own Max.

Submitted an OpenAL performance bug:
bugs.launchpad.net/panda3d/+bug/287110

sigh, okay, I grabbed a Windows machine and fixed most of the Max exporter bugs. (Please don’t ask me how I got a Max copy so quickly.)
All of them are fixed now, except the character hierarchy bug. This is the bug, basically: the character hierarchy is incorrect when you specify individual models to export instead of “Export entire scene”.

Problem is, I don’t know beans about Max. So either someone else needs to fix this remaining bug, or someone needs to send me an animated .max file that’s supposed to work; preferably one that works with 3ds Max 8.

There are basically two bugs standing in the way for 1.5.4.

There’s an ‘undocumented function’ bug in the docs for C++ classes, one that seems unrelated to the bug for Python classes. It’s beyond my scope to fix; it seems to be some lower-level interrogate thing, I think. Someone else needs to look at it.
An example is the Filename class. There are a few “undocumented function” items there that are actually documented.

The second bug is the Max exporter hierarchy bug, but I really need a 3ds Max 8-compatible animated .max model if any of you expect me to fix it.

Hmm, this will be troublesome to fix. This is happening because the Filename class is defined in dtool, but it is not instrumented by interrogate until panda/src/express. This means interrogate never gets to read filename.cxx, only filename.h and filename.I, so it only gets a chance to read the documentation for the inline methods.

A few classes have to be defined in this way: they’re defined in dtool, because they’re very low-level, but interrogate itself is also defined in dtool, so it can’t run until we start to build stuff in Panda. We solve this problem by explicitly instrumenting these classes in the first directory within Panda.

That works fine, but I didn’t think of the documentation problem. I’ll have to think of a better solution for that.

David