Very large variety of scale

This is pretty long, so I’ll start with an introduction:

I don’t have a lot of experience with Panda, graphics, or really much of anything relevant here. I picked up Panda a few weeks ago, and started learning Python and Cg. The extent of my accomplishments over that time is best summarized by my creation, a realtime procedural planet (see images at end). It still awaits a true-scale galaxy, as well as LOD to reveal scales both far larger and far smaller than currently expressed. My appreciation of fractals has led me to search for an LOD and precision handling solution that would allow each level to be specified relative to its parent, and function across all levels of zoom. Inspired by my earlier success in solving the precision issue to allow unlimited zoom of a tree of objects (tested to hundreds of orders of magnitude), I am now searching for the best way to do that in Panda, as well as a comparable solution to the related LOD issue for deep zooming of meshes.

This is a pondering of the subject, mixed with questions and ideas. A general purpose solution seems not too hard for the deep zooming of a tree without precision loss, and some approaches are discussed below, but doing the same for continuous meshes (the LOD problem), primarily presented in the context of approaching and/or landing on a planet, is currently beyond me. If a unified general solution could be reached, it would be fantastic, and would free developers from struggling with scale.

So I’m making a game which happens to be set in a galaxy. As you know, galaxies are very large. What I’m looking for here is roughly 25 orders of magnitude of usable and navigable scale (Milky Way, down to around a meter). Those are base 10 orders of magnitude. Clearly, it is impossible to place the camera next to a ship or planet with 0.1 unit precision when the object is 1*10^24 units from the origin. This effect leads to the familiar camera shake issues (well, really, everything including the camera is shaking, and in this extreme case it would amount to far worse than simple shaking).
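To put a number on it (using numpy here just to get explicit 32-bit floats, which is what the graphics pipeline typically uses):

```python
import numpy as np

# With 32-bit floats, a 0.1 unit offset is completely lost
# next to a coordinate of 1e24:
pos = np.float32(1e24)
print(pos + np.float32(0.1) == pos)   # True: the offset vanished entirely

# Even 64-bit doubles stop resolving 0.1 units long before 1e24:
print(1e24 + 0.1 == 1e24)             # also True
```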

The solution to this is a dynamic origin: stars are placed relative to the galaxy, planets relative to stars, and so on.

The trick comes when you have to render such a setup. I got it working in a different system than Panda, and it was quite a pain. Panda has a nice system of nodes which makes such relative placements easy. It is my understanding, however, that all locations are computed relative to render, not relative to the camera or the camera’s parent (or some conveniently settable origin object). When I implemented my own scene graph, I dynamically set which node everything was placed relative to. The tree was traversed in both directions (up and down) from the origin node (which was always the target of my camera). When combined with a few other details, this let me get 100 orders of magnitude of zoom toward moving objects.

Imagine the node tree as a bunch of balls connected by rope. The act of changing the origin node simply amounts to picking up the new origin node and letting everything hang from it. All the ropes that get flipped (where what was at the top is now at the bottom) represent transforms and parent-child relationships that need to be inverted. This is only for rendering, however; the computation and general use of the node tree still want the original configuration.
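Here is a rough sketch of that re-rooting idea with a toy node class (not Panda’s; the names and structure are just illustrative). The transforms on the path that hangs “above” the new origin are the ones that get applied inverted:

```python
from panda3d.core import Mat4

class Node:
    """Toy node: 'mat' is this node's transform relative to its parent."""
    def __init__(self, parent=None, mat=Mat4.identMat()):
        self.parent = parent
        self.mat = Mat4(mat)

def transform_relative_to(node, origin):
    """Transform of 'node' expressed relative to 'origin' instead of the root."""
    def to_root(n):
        # Compose child-to-parent transforms up to the root (row-vector order).
        m = Mat4.identMat()
        while n is not None:
            m = m * n.mat
            n = n.parent
        return m

    root_to_origin = Mat4(to_root(origin))
    root_to_origin.invertInPlace()        # the "flipped ropes"
    return to_root(node) * root_to_origin
```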

I’m pretty new to Panda, so I don’t really know how the node system is stored. Are the relative positions that we see with getPos and setPos what is actually stored (this is what I expect/hope), or do they just transform absolute positions?

Can I get a clarification on whether Panda really does compute all vertex locations relative to the render node?

If this really is the case, what is the best way to get around it? I would love to be able to select any node in the tree as the origin of the rendering coordinate system. This could work as a feature that would not break any existing code (everything remains as currently implemented unless the origin node is changed from the render node).

Another approach, which may be better suited for my purposes, would be rendering multiple scenes. In my case, things will pretty much never overlap (space is empty, and gigantic), so I can split the origin node off to a new scene, its parent to another new scene, and that node's parent to yet another, recursively. Then I can compute the corresponding camera position in all the scenes and render them all. I really have no idea what the best way to do that is. The only way I can think of is doing a bunch of render-to-textures and overlaying them.
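In case it helps picture it, here is roughly how I imagine the overlaid scenes could be wired up in Panda without render-to-texture, using one display region per scene and a depth clear between them. I haven't tested this, so treat it as a guess at the API usage:

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Camera, NodePath

base = ShowBase()

# An independent scene graph for the far-away stuff, with its own camera.
far_scene = NodePath('far_scene')
far_cam = far_scene.attachNewNode(Camera('far_cam'))
far_cam.node().setLens(base.camLens)      # share the default lens

# A display region that draws the far scene before the default one.
far_dr = base.win.makeDisplayRegion()
far_dr.setSort(-10)                       # the default region sorts at 0
far_dr.setCamera(far_cam)

# Clear only the depth buffer before the default (near) scene is drawn,
# so the near scene composites on top of the far one.
near_dr = base.camNode.getDisplayRegion(0)
near_dr.setClearDepthActive(True)

base.run()
```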

Whatever the approach, I have a few additional issues. Some particular levels of my proposed node tree are troublesome. Planets! On a planet, it would be nice to have maybe 7 orders of magnitude of zoom. Clearly some sort of mesh LOD system is in order, but subdividing the planet into a node tree to get the required precision using one of the methods discussed above would produce seams, I think. What is a good approach for rendering planets with such a wide range of LOD scales?

To be truthful, my game will not actually require several orders of magnitude of zoom on planet meshes, but the rest of the issue, placing objects relative to each other in vastly enormous coordinate systems, is required. I would like to be able to generate high-detail planets though, and the LOD and precision issues currently seem to be my main roadblock. It would be so cool to have enough precision to have something like the procedural jungle (http://www.nouser.org/DW/doku.php?id=projects:sparetime:jungleengine) for planet surfaces to top off my procedural galaxy. If a recursive system were devised that allows adding LOD levels without accumulating precision issues, I could truly add as much detail as I wanted. Stare out across the galaxy while standing on a grain of sand.

Possible solution: with the multiple-scenes design, sub-nodes of the planet would produce a separate scene that could be overlaid on the larger scale ones. This would cause some seam artifacts, but possibly no worse than sharp LOD transitions in the terrain. Blending the transitions could be possible if there is overdraw between the scenes, but that brings up another issue: with so many scenes, preventing tons of overdraw would be important.

Those are my thoughts. Thanks for your time, and feel free to discuss various scale and LOD related ideas. Have a planet:



Thanks to everyone who has helped me get this far.

Wall of text! The first paragraph summarizes what I’m designing with the rest of it. Maybe this is most useful as a note to myself, and you guys should wait to play with the potential engine described, but I find the subject pretty interesting. Read on if you wish. A question though: can one set up an infinite far clip plane in Panda? The goal here is to use the full dynamic range of float vertex positions, so that would be useful.

Here is my design for how to zoom to any level on a true-scale planet mesh. I wish to deal with orders of magnitude, rather than scalar zooms. The planet should be navigable regardless of scale. Nature is fractal; the interest transcends scale. My goal is along the lines of O(log n), or even constant-time, computation for deep zooms, where n is the zoom scalar, and to do so across many orders of magnitude while keeping detailed mesh and textures procedurally generated along the way. This is how I intend to reach that goal. The design here only works for planets, however; I would still like a more general solution. It could be extended, but that is a subject for later discussion.

I’ve done a bit more thinking, and there might be a special case solution for planets. Before I go into the solution, I’ll discuss the source of the issue.

First, let's assume the origin is at the center of the planet (both for our current coordinate system, and for rendering). Let's also assume the planet is a perfect sphere (planets are very close to one). We want to make a high-res mesh for part of the surface of the planet. The vertices are all one planet radius from the origin, and thus, as you build a higher and higher detail mesh (for more and more zoom), the precision breaks down. The same happens for the camera precision.

The solution, as mentioned in my first post here, is to dynamically move the origin closer to the camera and mesh in question. Clearly the optimal location is directly on the surface of our sphere in front of the camera.

Now the question is how one calculates vertex positions relative to such an origin. The simple approach of computing the center of the sphere relative to the new origin and placing vertices according to that does work, but it suffers from precision issues. The rendering precision issues are avoided, but during the vertex location computation the radius of the planet was involved, and once again the small-difference-of-large-numbers problem destroys the precision. This computation, however, happens with CPU floats, which can have more precision (doubles). That gains quite a bit, but not enough! There is a better approach.

The better approach is to compute the vertex positions relative to the origin without letting any proportionally big numbers, like the planet radius, get involved. The key here is that the whole issue is only a problem for vertices very nearby when compared to the sphere itself. Thus, for far-away vertices, where the curve of the planet is significant, you can use more traditional methods. For nearby vertices, the ones with the problem, you can simply use an approximation of a spherical surface. For intermediate vertices, you smoothly blend the imprecise but globally accurate sphere with the precise, locally accurate approximation.

With this in mind, picture an endless plane. To make it appear spherical, you can depress (lower, normal to the plane) the vertices by a function of their distance from the origin. Let's call the distance d. For a unit sphere, the depression function is (1-d^2)^0.5 - 1. This is fairly easy to derive, and if you graph it (set d=x and graph) you should see a nice semicircle below the x axis. It is undefined for d>1 because those points are beyond the sphere. This approach to generating a sphere has some benefits: it is basically a height map approach to a sphere.

That function cannot be used to approximate a sphere at high zooms, however, because of the 1-d^2 term: when d is close to 0, that is imprecise. It can be used for far-away points though. For local points, a basic flat plane (height 0) can be used, but you might prefer a more accurate approximation. The Taylor series -(d^2)/2 - (d^4)/8 - (d^6)/16 - … converges for d<1, and just using the first term, -(d^2)/2, works great for points even several degrees away from the origin.

So what is needed is to smoothly transition between (1-d^2)^0.5 - 1 and the approximation (-(d^2)/2, or 0) as d goes from 0 to larger values (like .001, or even just .1). What you are left with is a simple height map equation that produces a sphere under the origin, which can be zoomed pretty much endlessly with proper mesh LOD.
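To make that concrete, here is a rough sketch of such a height function for a unit-radius sphere (the blend range and the smoothstep are arbitrary choices of mine):

```python
import math

def sphere_height(d, blend_start=0.001, blend_end=0.1):
    """Depression below the tangent plane for a unit-radius sphere at
    horizontal distance d from the origin (the point under the camera).
    Only valid for d <= 1, as noted above.
    """
    local = -d * d / 2.0                     # precise near-field approximation
    if d <= blend_start:
        return local
    exact = math.sqrt(1.0 - d * d) - 1.0     # exact sphere, fine far from the origin
    if d >= blend_end:
        return exact
    # Smoothstep blend between the two over [blend_start, blend_end].
    t = (d - blend_start) / (blend_end - blend_start)
    t = t * t * (3.0 - 2.0 * t)
    return (1.0 - t) * local + t * exact
```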

Now that that is dealt with, someone might want to use this super HD LOD sphere mesh technology to do something like hills, mountains, grains of sand, whatever. Remember, this should work across all scales, so you should be able to have any of these on a true-scale planet sphere. There are two issues though: you can’t zoom in much past the scale of the largest bumps, and what about the bumps out where the surface of the sphere is not normal to the height map plane?

Here we will assume that the mesh is derived from fixed-frequency height noise, such as Perlin noise, with many frequencies added together.

First, I’ll address the issue of multi-scale noise. The noise, or mesh deformations, are implemented by simply adding them to the height map. This, however, can mean that the surface gets well above or below the origin. The solution is to displace the height map vertically such that the origin always falls on the surface. Thus you can still have fine detail on top of a mountain. To avoid issues with the low-frequency, tall noise being imprecisely interpolated and messing up nearby points, again an approximation is needed for local use. It can be a simple sloped plane, with x and y derivatives chosen so that it lines up with the corresponding noise, and it can be blended with the true noise over the range from d=0 to d=n*f, where n is a small scalar and f is the frequency of the noise in question.
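Roughly, the local approximation of one low-frequency octave could look like this (the noise callable and the finite-difference step are just stand-ins for illustration):

```python
def tangent_plane(noise, x0, y0, eps=1e-3):
    """Replace a low-frequency noise octave near (x0, y0) by its tangent
    plane, so that tiny local offsets are not swallowed by the octave's
    large absolute value. 'noise' is any callable noise(x, y).
    """
    h0 = noise(x0, y0)
    dhdx = (noise(x0 + eps, y0) - h0) / eps   # local x slope
    dhdy = (noise(x0, y0 + eps) - h0) / eps   # local y slope
    # Heights are returned relative to h0; that constant part is exactly the
    # vertical displacement that keeps the origin on the surface.
    return lambda dx, dy: dhdx * dx + dhdy * dy
```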

The issue with the height map not being parallel to the sphere surface is only a problem far from the origin, so it could be dealt with using other methods. The height map approach is only reasonable (and needed) for small parts of the sphere; traditional approaches can be used for the very far-away areas and for views from space.

I keep referring to it as a height map, but it does not have to be one; it is simply an easy way to present the subject. The concepts still work, with some adjustments, for placing meshes, caves, etc. If zooming in on such more complex meshes is allowed, a more general solution would need to be developed: one that could derive local approximations for such meshes and specify a rough frequency for the mesh. Deformations also become more complex than simply adding smaller-scale noise to the height map; deformation normal to the surface is possible. If this is properly implemented, it could potentially be applied to non-planet-like objects, with the overall effect amounting to something along the lines of a rendering engine that supports endless zooms on fractal-algorithm-based meshes (detail added with amplitude proportional to its frequency). The number of octaves or layers of noise/deformations processed is proportional to the log of the zoom, so in the case where you are unable to cache anything at all, you get zoom with computation time of O(log n), free from precision issues (n = zoom).

This design awaits implementation. I haven’t really delved into procedural geometry yet, so I need to learn that. Also, there are some pretty complex issues regarding camera control to be dealt with as well. I forgot to mention that as you zoom in, higher frequency noise/deformation would be faded in, which would resemble mipmap effects as far as avoiding high frequency jitter. As for the O(log n) time, that assumes each octave takes the same amount of time to calculate (you have log n octaves at zoom n). This is unrealistic, because the vast majority of the computation for high n will be done using local approximations of high-frequency deformations, which can be summed and evaluated all at once, so once the zoom is pretty deep it should become constant time. The precision limits should only come from how small a floating point value can be used on the CPU (they can be scaled for the GPU), so a few hundred orders of magnitude should be the limit from that, and even that could be worked around with some effort. I don’t see any other limits, though at such high zooms the depth range may restrict the viewing of large-scale distant things. I’m really not sure what there is in the way of depth range limits when the camera is looking outward from the origin in Panda. I have scaled and moved in super distant objects to solve this problem when an infinite far clipping plane was not an option.

As for texturing the mesh, the same approach using summed local approximations should result in constant-time texturing, but different distances require different details, so when viewing the full depth range, O(log n) shaders are needed. This gets even more complex if you want different shaders in different places at the same zoom level (so you can have separate forest and desert shaders, for example).

Implementing this will be interesting. I’m pretty busy with other projects, so it may be a while, but when/if I get anywhere, I’ll be sure to report back. If anyone else wants to try to implement it, I would be happy to help. I would recommend starting with a basic fractal height map.

Oh, and lastly, one detail I did not address is getting input for your noise functions. Because of the dynamic origin, and deep zoom, getting XYZ for the noise functions is not straightforward. They will probably have to repeat, and transforming the positions to get consistent noise could be pretty hard, but I think it is possible. I’ll think about it.

Hi Craig,

Very interesting post, as I’m trying to achieve a similar result, but on a smaller scale: “only” stellar system scale!

I’ll read your wall of text fully as soon as I get some time; right now I’m about to leave the computer for lunch ^-^ Then I think we’ll have some discussions.

I’ve already coded a simple stellar system editor in VB.Net using the TV3D engine and had solved some issues; now I’m trying Python and Panda3D for a more advanced editor, and I’m getting more enthusiastic as I climb the learning curve ^-^ This package really seems a breeze!

In my game, to get a good zoom range I used 3 scenes.

I draw the star scene first, which includes stars and planets.
Then I draw the system scene, which includes real planet objects and other huge rocks.
Then I draw the actual game scene at km scale.

Each scene roughly gives you 2-3 digits. So with 3 scenes I get about ~1,000,000,000 possible locations. To go to Milky Way level one might need an extra galaxy scene.

Each of the 3 scenes has a camera. You can stack scenes on top of each other:
the first camera handles x = (000),000,000, the next camera handles x = 000,(000),000, and the smallest camera handles x = 000,000,(000).

So location 663,444,112 in the galaxy would be location 663 in the galaxy scene, 444 in the star system scene, and 112 in the final scene. The harder problem comes when you move between scene edges… but I solve that by simply not allowing it and requiring a “jump” to get anywhere far.
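In code, the split is basically this (a toy version of the scheme, using the numbers from the example above):

```python
def split_location(x, digits_per_scene=3, scenes=3):
    """Split one integer coordinate into a chunk per scene, coarsest first,
    e.g. 663444112 -> [663, 444, 112]."""
    base = 10 ** digits_per_scene
    chunks = []
    for _ in range(scenes):
        x, part = divmod(x, base)
        chunks.append(part)
    return list(reversed(chunks))

print(split_location(663444112))   # [663, 444, 112]
```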

What treeform does is one of the better approaches, though you may still have some issues if you wish to support deep zooming on objects (which you probably don’t need). Actually zooming past the precision limit on a single object is where the vast majority of challenges arise (suppose you want to zoom in to 1 meter off the ground on a planet). You do still need to be careful with the relative positioning of objects though. Even zooming in on a planet enough to make it fill a significant portion of the screen at true scale will have issues if the camera or planet is positioned relative to something like the center of the solar system.

My solution ended up being to parent the camera to the object being zoomed in on, and compute the model view matrix myself by traversing the scene graph from the camera to the object (rather than what happens by default, starting at render). This actually was pretty easy (panda has a method to get a transform from one node to another that works this way), but applying the custom model view matrix required custom shaders. This had some side benefits for my personal project. It allowed me to place a node on the surface of the planet just below the camera, and use that as the focus and build the planet relative to it. This allowed deep zooming on my planet. Getting it all working though is still a work in progress: f-g.wikidot.com/
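The relevant calls, roughly (planet_np here stands for whatever node you are zooming on; how the matrix gets into the shader depends on your shader setup, so I’ve left that out):

```python
# Transform of the planet relative to the camera, computed by composing the
# transforms along the path between the two nodes (not through render):
rel_transform = planet_np.getTransform(base.cam)   # a TransformState
rel_mat = planet_np.getMat(base.cam)               # the same thing as a Mat4
# rel_mat can then be supplied to a custom shader as the model-view matrix.
```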

I wouldn’t bother sifting through my wall of text. It was written before I accomplished anything, and thus is mostly theory and no application.

Hi all,

I’m not going that far towards either planets or galaxies, just limiting myself to a stellar system and getting to view a planet from orbit ^-^

The three main tricks I used in previous installments were:

  • Using MKS units for all data,
  • Applying custom clip planes to prevent clipping,
  • Always translating the universe’s origin to the camera to fix precision issues.

I’m currently beginning to test that in Panda3D, it should work the same in python as in other languages.

That approach is fine if you have enough precision CPU-side for everything (via doubles). Doubles have a 52-bit stored significand, so about 1 part in 2^52 of precision. So the precision across a galaxy the size of the Milky Way is (100,000 light-years) / (2^52) = 210.065929 kilometers. In my case, I wanted a design that exceeded that, so I needed a different approach.
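For reference, the arithmetic:

```python
LIGHT_YEAR_M = 9.4607e15                 # metres per light-year
galaxy_diameter = 1e5 * LIGHT_YEAR_M     # ~100,000 light-years in metres
print(galaxy_diameter / 2**52)           # ~2.1e5 m, i.e. roughly 210 km
```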

If you recompute all of the positions on the CPU in python relative to the camera every frame with double precision, you might have performance problems if you have a lot of objects (especially if they are moving and such). Some of the other approaches allow you to leave all distant objects fixed and just screw with the closest ones (where you need the precision). This can have much higher performance.

Hi Craig, hi treeform,

Indeed, for your much larger scales, my approach is crude and not powerful enough. One has to resort to some sort of sub-mapping, like the one explained by treeform.

But for a stellar system some hundreds of AU wide, I think I could retain enough precision for distant objects by always using doubles, like Python does. What I fear is that, when translated into the 3D pipeline, the precision will wear off. That is what I avoided by using the camera as the origin.

I’ll think about possible performance issues with the origin at the camera. The idea, however, was that only node positions would be recomputed.

Reading the discussion, I can see I have much to learn to reach your level of knowledge!

I should note that, as a fan of fractals, I generally look at the scale problem searching for solutions that allow infinite zoom (or at least zoom with cost on the order of log(n)). In most cases such solutions are a rather poor choice. Basic positioning of everything relative to the camera using doubles on the CPU is really quite a good approach for most things (just do the positioning that way; go ahead and rotate the camera normally!), though keeping objects grouped (parented to nodes) by distance, so you can place the distant objects (which don’t need careful placing) all at once, would be a good optimization and should get you around pretty much any major performance issues with that approach.
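A sketch of that grouping idea (node names and structure are made up for illustration): distant objects get parented under one group node per distance band, positioned inside the group once, and each frame only the group nodes plus the handful of near objects are repositioned relative to the camera.

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# One group per distance band; distant members are positioned inside it once.
far_group = base.render.attachNewNode('far_group')

def reposition(camera_pos, near_objects):
    """camera_pos: the camera's world position kept as Python doubles.
    near_objects: [(NodePath, world_pos), ...] for the few nearby things."""
    # The whole far band moves with a single setPos; its members stay put.
    far_group.setPos(-camera_pos[0], -camera_pos[1], -camera_pos[2])
    # Only nearby objects are placed individually, relative to the camera.
    for np_, pos in near_objects:
        np_.setPos(pos[0] - camera_pos[0],
                   pos[1] - camera_pos[1],
                   pos[2] - camera_pos[2])
```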

Hi Craig,

Your approach is far more general than mine, and fractals are generally a good way to understand and simulate the world!

I dream of being able to make full fractal planets (not metal ?), but for now I must restrain myself to viewing planets from afar, Freelancer-like.

Where exactly are you now in your development?

I haven’t touched that project in months. While I aimed for an approach that supports infinite zoom, I haven’t gotten tileable world-space noise fully working, nor have I gotten CPU/GPU sync on the noise, which limits the zoom depth as well (I can’t cache local large-scale features). Also, the spherical nature of the planet is driving me crazy. It makes textures, caching, and just about everything else a pain.

Also, I opted for some zoom-limiting choices because a double on a planet gives about 1 nanometer of precision, and that's good enough.

All the deep zoom and sphere stuff inspired me to make a terrain system without spheres or deep zoom (so much less pain): github.com/Craig-Macomber/Panda3D-Terrain-System