This is pretty long, so I’ll start with an introduction:
I don’t have a lot of experience with Panda, graphics, or really much of anything relevant here. I picked up Panda a few weeks ago and started learning Python and Cg. The extent of my accomplishments over that time is best summarized by my creation, a realtime procedural planet (see images at the end). It still awaits a true-scale galaxy, however, as well as LOD to reveal scales both far larger and far smaller than currently shown. My appreciation of fractals has led me to search for an LOD and precision handling solution that would allow each level to be specified relative to its parent, and function across all levels of zoom. Inspired by my success in solving the precision issue before, allowing unlimited zoom of a tree of objects (tested to hundreds of orders of magnitude), I am now searching for the best solution for it in Panda, as well as a comparable solution to the related LOD issue for deep zooming into meshes.
This is a pondering of the subject, mixed with questions and ideas. A general-purpose solution for deep zooming of a tree without precision loss seems not too hard, and some approaches are discussed below, but doing the same for continuous meshes (the LOD problem), presented here primarily in the context of approaching and/or landing on a planet, is currently beyond me. If a unified general solution could be reached, it would be fantastic, and would free developers from struggling with scale.
So I’m making a game which happens to be set in a galaxy. As you know, galaxies are very large. What I’m looking for here is roughly 25 orders of magnitude of usable and navigable scale (from the Milky Way down to around a meter). Those are base-10 orders of magnitude. Clearly, it is impossible to put the camera next to a ship or planet with 0.1-unit precision when the object is 1*10^24 units from the origin. This leads to the camera shake issues (well, really, everything including the camera is shaking, and in this extreme case it would amount to far worse than simple shaking).
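To put rough numbers on that claim (my own back-of-the-envelope check, not anything Panda-specific): the spacing between adjacent 64-bit floats near 1e24 is on the order of 1e8 units, so a 0.1-unit camera offset simply vanishes:

```python
# Demonstration of precision loss at galactic distances.  Even 64-bit
# doubles cannot hold a 0.1-unit offset 1e24 units from the origin:
# the gap between adjacent doubles there is roughly eps * 1e24 ~ 2e8.
import sys

camera_x = 1e24 + 0.1                 # intended: 0.1 units past the object
print(camera_x == 1e24)               # -> True: the 0.1 offset is lost
print(sys.float_info.epsilon * 1e24)  # float spacing near 1e24, ~2.2e8
```

And GPUs typically do vertex math in 32-bit floats, where the usable range for 0.1-unit precision is only on the order of 10^6 units, so the problem bites long before galactic scales.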
The solution to this is a dynamic origin. Stars are placed relative to the galaxy, planets relative to stars, etc.
The trick comes when you have to render such a setup. I got it working in a different system than Panda, and it was quite a pain. Panda has a nice system of nodes which make such relative placements easy. It is my understanding however that all locations are computed relative to render, not relative to the camera, or the camera’s parent (or some conveniently settable origin object). When I implemented my own scene graph, I dynamically set which node was used to place everything relative to. The tree was traversed in both directions (up and down) from the origin node (which was always the target of my camera). When combined with a few other details, I was able to get 100 orders of magnitude of zoom toward moving objects.
Imagine the node tree as a bunch of balls connected by rope. The act of changing the origin node simply amounts to picking up the new origin node and letting everything hang from it. All the ropes that get flipped (so that what was the top is now at the bottom) represent transforms and parent-child relationships that need to be inverted. This is only for rendering, however; the computation and general use of the node tree still want the original configuration.
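To make the idea concrete, here is a minimal sketch in plain Python (not Panda code; the class and function names are my own invention, and it handles translations only, not full transforms). Positions are resolved relative to any chosen origin node by climbing both nodes up to their lowest common ancestor, so galaxy-scale coordinates never enter the arithmetic for nearby objects:

```python
# Minimal sketch of origin-relative position resolution in a scene tree.
# Each node stores only a translation relative to its parent; resolving a
# position relative to an arbitrary origin node walks both nodes up to
# their lowest common ancestor, so huge root-level offsets cancel before
# any floating point addition happens.

class Node:
    def __init__(self, name, parent=None, local_pos=(0.0, 0.0, 0.0)):
        self.name = name
        self.parent = parent
        self.local_pos = local_pos  # offset from parent, in parent's frame

def pos_relative_to(node, origin):
    """Position of `node` expressed in `origin`'s coordinate frame."""
    origin_ancestors = set()
    n = origin
    while n is not None:
        origin_ancestors.add(id(n))
        n = n.parent
    # climb from `node` until we hit an ancestor of `origin` (the LCA)
    x = y = z = 0.0
    n = node
    while id(n) not in origin_ancestors:
        x += n.local_pos[0]; y += n.local_pos[1]; z += n.local_pos[2]
        n = n.parent
    lca = n
    # climb from `origin` to the same ancestor, subtracting its offsets
    n = origin
    while n is not lca:
        x -= n.local_pos[0]; y -= n.local_pos[1]; z -= n.local_pos[2]
        n = n.parent
    return (x, y, z)

# A galaxy-scale chain: star 1e20 units out, planet 1e11 from the star,
# ship 1e7 from the planet, camera and a rock within meters of the ship.
galaxy = Node("galaxy")
star   = Node("star",   galaxy, (1e20, 0.0, 0.0))
planet = Node("planet", star,   (1e11, 0.0, 0.0))
ship   = Node("ship",   planet, (1e7,  0.0, 0.0))
camera = Node("camera", ship,   (0.5,  0.0, 0.0))
rock   = Node("rock",   ship,   (2.0,  0.0, 0.0))

print(pos_relative_to(rock, camera))  # -> (1.5, 0.0, 0.0), exactly
```

Summing all the way from the root instead would compute (1e20 + 1e11 + 1e7 + 2.0) − (1e20 + 1e11 + 1e7 + 0.5), and the small offsets would be rounded away before the subtraction ever happened.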
I’m pretty new to Panda, so I don’t really know how the node system is stored. Are the relative positions that we see with getPos and setPos what is actually stored (this is what I expect/hope), or do they just transform absolute positions?
Can I get a clarification on whether Panda really does compute all vertex locations relative to the render node?
If this really is the case, what is the best way to get around this? I would love to be able to select any node in the tree as the origin of the rendering coordinate system. This could work as a feature that would not break any existing code (everything remains as currently implemented unless the origin node is changed from the render node).
Another approach, which may be better suited for my purposes, would be rendering multiple scenes. In my case, things will pretty much never overlap (space is empty, and gigantic), so I can split the origin node off to a new scene, its parent to a new scene, and its parent to a new scene, recursively. Then I can compute the corresponding camera position in all the scenes and render them all. I really have no idea what the best way to do that is. The only way I can think of is doing a bunch of render-to-textures and overlaying them.
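Here is a sketch of how the camera position per scene could be derived (again plain Python with made-up names, not the Panda API): climb the chain from the camera toward the root, and each ancestor's frame gets the accumulated offset as its camera position. The resulting scenes would then be drawn far-to-near and overlaid:

```python
# Hypothetical sketch: one scene per ancestor of the camera's node.  The
# camera position in each scene is the accumulated local offset from the
# camera up to that scene's root node.  Far scenes end up with large
# coordinates, but precision only matters visually in the near scenes.

class Node:
    def __init__(self, name, parent=None, local_pos=(0.0, 0.0, 0.0)):
        self.name = name
        self.parent = parent
        self.local_pos = local_pos

def camera_positions_per_scene(camera):
    positions = []
    x = y = z = 0.0
    n = camera
    while n.parent is not None:
        x += n.local_pos[0]; y += n.local_pos[1]; z += n.local_pos[2]
        n = n.parent
        positions.append((n.name, (x, y, z)))
    return positions  # nearest scene first; render in reverse, far-to-near

galaxy = Node("galaxy")
star   = Node("star",   galaxy, (1e20, 0.0, 0.0))
planet = Node("planet", star,   (1e11, 0.0, 0.0))
ship   = Node("ship",   planet, (1e7,  0.0, 0.0))
camera = Node("camera", ship,   (0.5,  0.0, 0.0))

for scene_name, cam_pos in camera_positions_per_scene(camera):
    print(scene_name, cam_pos)
```

Note that the near scenes (ship, planet) keep small, exact camera coordinates, which is where precision is visible; the galaxy-scale scene gets a huge coordinate, but at that distance sub-unit error is far below a pixel anyway.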
Whatever the approach, I have a few additional issues. Some particular levels of my proposed node tree are troublesome. Planets! On a planet, it would be nice to have maybe 7 orders of magnitude of zoom. Clearly some sort of mesh LOD system is in order, but subdividing the planet into a node tree to get the required precision using one of the methods discussed above would produce seams, I think. What is a good approach for rendering planets with such a wide range of LOD scales?
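For what it's worth, one common family of approaches is chunked quadtree LOD: each terrain patch splits while the viewer is within a few patch-widths of it, so detail concentrates under the camera. A sketch of just the patch-selection step (2D, invented names, and it does not solve the seams by itself; LOD boundaries still need skirts or stitching):

```python
# Sketch of distance-based quadtree patch selection for a planet surface
# (flattened to 2D for brevity).  A patch subdivides while the viewer is
# closer than k patch-widths, so resolution grows toward the camera and
# falls off with distance -- roughly constant screen-space detail.

def select_patches(center, size, viewer, out, max_depth=10, k=2.0):
    dx = center[0] - viewer[0]
    dy = center[1] - viewer[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if max_depth == 0 or dist > k * size:
        out.append((center, size))  # coarse enough: emit this patch
        return
    q = size / 4.0                  # child centers sit a quarter-size away
    for ox in (-q, q):
        for oy in (-q, q):
            select_patches((center[0] + ox, center[1] + oy), size / 2.0,
                           viewer, out, max_depth - 1, k)

patches = []
select_patches((0.0, 0.0), 1.0, viewer=(0.3, 0.3), out=patches, max_depth=8)
sizes = [s for _, s in patches]
print(len(patches), min(sizes), max(sizes))
```

Because each child patch's center is stored relative to its parent, this nests naturally into the origin-relative node tree above: adding LOD levels never requires coordinates larger than one parent patch.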
To be truthful, my game will not actually require several orders of magnitude of zoom on planet meshes, but the rest of the issue, placing objects relative to each other in vastly enormous coordinate systems, is required. I would like to be able to generate high-detail planets though, and the LOD and precision issues currently seem to be my main roadblock. It would be so cool to have enough precision for something like the procedural jungle (http://www.nouser.org/DW/doku.php?id=projects:sparetime:jungleengine) on planet surfaces to top off my procedural galaxy. If a recursive system were devised that allows adding LOD levels without accumulating precision issues, I could truly add as much detail as I wanted. Stare out across the galaxy while standing on a grain of sand.
Possible solution: with the multiple-scenes rendering design, sub-nodes of a planet would produce separate scenes that could be overlaid on the larger-scale ones. This would cause some seam artifacts, but possibly only as bad as sharp LOD transitions in the terrain. Blending the transitions could be possible if there is overdraw between the scenes, but that brings up another issue: with so many scenes, preventing tons of overdraw would be important.
Those are my thoughts. Thanks for your time, and feel free to discuss various scale and LOD related ideas. Have a planet:
Thanks to everyone who has helped me get this far.