I’ve started to think in detail about how I’m going to implement panoramic terrains in my application.
Since my application involves real-time streaming of the terrain as people move around, there are going to be issues not just with rendering level of detail, but also with download/streaming level of detail.
I was wondering if anyone can help me out on this?
So, basically, let's imagine we have an 8km by 8km terrain, with say 1m resolution. We're standing on the ground and walking around, so this is a big terrain. That's 8000 x 8000 = 64 million samples in total - about 64 MB at one byte per height, or 128 MB at the two bytes per height I use below.
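Just to sanity-check that back-of-envelope figure (using 8192m rather than 8000m, since a power-of-two size is what the grid subdivision below assumes):

```python
# Back-of-envelope size of the raw heightfield. The 8192 m side is an
# assumption to keep the size a power of two; the post says "8 km".
side_m = 8192
samples = side_m * side_m        # one height sample per square metre
bytes_total = samples * 2        # two bytes per height (0..65535)
print(samples)                   # 67108864 samples
print(bytes_total // 2**20)      # 128 MiB raw
```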
There are mountains in the distance, and you can see those just great. As you get closer, the details fill in, and you see smaller crags appear, until you get really close, and then the smallest (1m) potholes appear.
So, on the rendering side, I’ve seen Josh allude to various LOD optimizations that are possible. Rendering doesn’t really worry me; I don’t think it’s high risk. It’s been done tons of times before, and besides, I can think of a few very trivial algorithms to handle it, like just using really big triangles in the distance, and small ones up close.
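The "big triangles far away, small ones up close" idea could be sketched like this - the distance thresholds here are made-up illustration values, not anything from a real renderer:

```python
# A minimal sketch of distance-based triangle sizing: pick a sampling step
# (triangle edge length in metres) for a terrain chunk from its distance
# to the camera. The 64 m threshold and 64 m cap are illustrative only.

def lod_step(distance_m: float) -> int:
    """Return the grid step (in metres) to use at this distance."""
    step = 1
    # Double the triangle edge every time the distance doubles past 64 m.
    while distance_m > 64 * step and step < 64:
        step *= 2
    return step

print(lod_step(10))    # -> 1   (1 m triangles right next to the avatar)
print(lod_step(5000))  # -> 64  (big triangles for the distant mountains)
```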
So, the issue/risk is going to be the streaming/downloading, so that’s what I want to focus on here.
Whilst the rendering and streaming are definitely similar, in that both involve level of detail centred around the avatar, they are different because:
- streaming is probably optimized using wavelets?
- rendering uses triangles: well-defined linked points
What I’m thinking is that if we use wavelets for the streaming, then the rendering algos can be entirely independent of the streaming algos, since each wavelet coefficient contributes information about every point in the heightfield. So, our renderer can just say “ok, give me the heights at 0,0, 10,10 and 10,0 please”, and the terrain streaming object can give it that information, using the wavelet information it currently holds.
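To make that separation concrete, here is a 1-D Haar sketch of the idea: coefficients arrive coarse-to-fine, and the streaming object answers point queries using only whatever coefficients it holds so far (missing ones are treated as zero). This is my own toy illustration, not a proposed implementation - a real version would be 2-D and use a better wavelet basis.

```python
# Toy 1-D Haar transform over a power-of-two-length row of heights.

def haar_decompose(samples):
    """Full Haar decomposition: [overall average, then detail terms]."""
    coeffs = list(samples)
    n = len(coeffs)
    while n > 1:
        half = n // 2
        avgs  = [(coeffs[2*i] + coeffs[2*i+1]) / 2 for i in range(half)]
        diffs = [(coeffs[2*i] - coeffs[2*i+1]) / 2 for i in range(half)]
        coeffs[:n] = avgs + diffs
        n = half
    return coeffs

def height_at(coeffs, received, x):
    """Reconstruct the height at index x using only the first `received`
    coefficients; the rest haven't streamed in yet and count as zero."""
    c = [v if i < received else 0.0 for i, v in enumerate(coeffs)]
    n = 1
    while n < len(c):
        out = c[:]
        for i in range(n):
            out[2*i]     = c[i] + c[n+i]
            out[2*i + 1] = c[i] - c[n+i]
        c = out
        n *= 2
    return c[x]

row = [3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 9.0, 9.0]
coeffs = haar_decompose(row)
print(height_at(coeffs, len(coeffs), 4))  # all terms received: exact, 8.0
print(height_at(coeffs, 1, 4))            # only the coarse average: 5.0
```

The renderer only ever calls `height_at`; it neither knows nor cares how many coefficients have arrived, which is exactly the independence argued for above.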
Next: obviously, whilst wavelets provide for progressive downloading, they don’t on their own provide level of detail. The whole terrain would download in an evenly distributed fashion, and very slowly.
So, we need to add some bits onto this for level of detail. The wavelets are just there to provide for easy progressive downloading.
For the level of detail, what I’m thinking is: first we divide our terrain into a grid of 8192m x 8192m boxes (taking our 8km as 8192m, a power of two). Since that’s the size of our terrain, there’ll just be the one square.
We run wavelets on that to maybe 4? 8? terms.
Next, we divide this square into 16 smaller squares, 2048m x 2048m, and we run wavelets on the 8 small squares around where our avatar is, plus the one where it is standing.
So, now we’ve run wavelets on 9 + 1 squares. If we run to 4 x 4 terms for each square, then that is 160 numbers so far.
Next, we resubdivide the grid into 512m x 512m squares. So each of the previous gridsquares contains 16 of these squares. We rerun wavelets on the 8 512m squares around where our avatar is, plus the one where it is.
We repeat this for: 128m x 128m, 32m x 32m, and 8m x 8m.
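Picking the 3x3 block of squares around the avatar at each level could look something like this (a sketch only - `squares_for_level` and the clamping at the terrain edge are my own illustrative choices):

```python
# Which grid squares to stream at a given level: the square the avatar is
# standing in plus its 8 neighbours, clamped at the terrain boundary.

TERRAIN_M = 8192  # power-of-two terrain side, as assumed above

def squares_for_level(avatar_x, avatar_y, square_m):
    """Grid coordinates of the (up to) 3x3 block around the avatar."""
    per_side = TERRAIN_M // square_m
    cx, cy = avatar_x // square_m, avatar_y // square_m
    return {(x, y)
            for x in range(max(0, cx - 1), min(per_side, cx + 2))
            for y in range(max(0, cy - 1), min(per_side, cy + 2))}

# Avatar near the middle of the terrain: a full 3x3 block at every level.
for size in (2048, 512, 128, 32, 8):
    print(size, len(squares_for_level(4100, 4100, size)))
```

Note that near the terrain edge the block shrinks (only 4 squares in a corner), so the counts below are the worst case.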
This gives us 5 sets of 9 squares, and the 8192m x 8192m square.
That’s 5 * 9 + 1 = 46 squares.
For each square, we’re generating maybe 4 * 4 wavelet terms (4 along each axis), so 16 numbers.
So, in total that’s 46 * 16 = 736 numbers.
If we provide two bytes for each number, to allow a height resolution from 0 to 65535, then a single view of our 8km by 8km terrain is:
736 * 2 bytes = 1472 bytes
That’s about the same size as a single Ethernet packet (the usual MTU is 1500 bytes)! So, it’s definitely looking potentially feasible?
What are your thoughts on this? What references are out there that could help me with this?
Hugh