Capabilities of Panda?

Josh, for me, I think a picture is worth a thousand words :smiley:

This of course is from the Star Wars Galaxies engine. Torque is capable of a scene such as this.

So, say we are roaming around in the desert and we come across those buildings. We can then go inside, with essentially no pause, and come back out. I’m not sure if you’ve ever tried Torque, but there is a free demo you can download and look at if you have the time.

In the scene pictured, those buildings obviously would not be shown from really far away, but the draw distance is something that can be adjusted based on the performance of your PC.

Ground details like shrubs and rocks are only rendered out to about the distance of the buildings. The mountains may have some details on them too, but you are currently too far away to see them.

Does this make sense?

Good point, I didn’t look at it from that angle. I was too busy concentrating on the technology. Cool - looking forward to reading this.

Thanks!

Complete sense, sm3.

Your goal is like stodge’s - panoramic vistas, huge terrain, the ability to walk anywhere. I’ll make sure I cover this in the manual soon.

Outstanding! I look forward to reading it.

Thanks Josh.

Count me in as another one looking forward to reading that. I’m trying to develop a turn-based game that needs high visibility as well. :laughing:

^------------- I posted that.

I think this will be a big step in bringing Panda3d to a larger audience.

Panda3d is still not well known, which is a shame, or maybe not depending on how you look at it. :wink: That’s why I’m trying to limit my “How do you do this?” threads.

sm3, if you’re not concerned about having to comply with the GPL, there’s a better than 0% chance that I’ll be creating a terrain library soonish. It will be able to be integrated into multiple renderers, and will almost certainly work in Panda. That’s assuming a little Panda utility called interrogate.exe doesn’t stress me out too much :wink:

The goal of this terrain engine is to be able to create largish outdoor panoramas such as you see in Everquest, DAOC or SL.

Now obviously you have no guarantee that I’ll actually create this, or that it will work as intended, but there’s a better than zero chance.

As far as technical details go, I guess I’d just use some sort of level-of-detail scheme, so that the mesh a mile away is rendered at roughly one triangle per 200m, whilst the mesh under your feet is perhaps one triangle per 50cm, something like that. Not entirely sure, but I’m sure we’ll figure something out.
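Just to make that concrete, here’s a rough sketch of what I mean; the distance bands and spacings are placeholders I’ve made up for illustration, not tuned values:

```python
# Rough sketch: pick a grid spacing (triangle edge length, in metres) for a
# terrain chunk based on its distance from the camera. Numbers are illustrative.

def pick_grid_spacing(distance_m):
    """Return the approximate triangle edge length to use at this distance."""
    if distance_m < 50:
        return 0.5      # roughly one triangle per 50 cm under your feet
    elif distance_m < 400:
        return 8.0
    elif distance_m < 1600:
        return 50.0
    else:
        return 200.0    # roughly one triangle per 200 m for the mesh a mile away

# Example: a chunk centred 1200 m away would be meshed at ~50 m spacing.
print(pick_grid_spacing(1200.0))
```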

That said, even if I do this, it probably won’t be this month, maybe not much before August, so you’ll probably want to use other things up till that point, maybe the procedure that Josh is going to write about.

btw, just for completeness, since it seems possibly relevant to you, here is the website for the project I am working on.

metaverse.sf.net

It’s an MMOG where people can build their own objects, add scripts to them and so on.

Ancient screenshots from last year:
metaverse.sourceforge.net/screenshots.html (note: you can add textures to everything; I’m just not really the artist type)

You can see that the game lets you move around in a virtual world on the internet, interact with other people, create objects, edit them, and script them.

Screenshots of new primitive engine:
sourceforge.net/project/screensh … ssid=11783

This project is relevant because obviously this incorporates the editor, scripting and so on that you seem to be looking for.

Now, I wouldn’t recommend you actually try using this right now; it’s not really ready for users just yet, and besides, our file releases are really old.

However, 1.0 will hopefully be coming out soonish, and then it might become interesting, so you might want to check out what we are doing once a month or so. If you want, you can click on the following link to be informed when there is a new public release:

sourceforge.net/account/login.ph … d%3D121957

Hugh

That’s awesome Hugh!

I’ve actually stumbled on this before in my net travels :slight_smile:

I’ll definitely keep an eye on the development. I learn something new in this field (3d graphics/game programming) every day!!

I noticed on one of the pages it said you use the language Lua. Did you change to Python, or are you still using Lua?

Thanks,

Steve

Both!

Actually:

  • we’re using Python increasingly for the project development itself
  • the in-game scripting is Lua

Basically:

  • Python is an awesome scripting language. So much speed, so much concision, really easy to debug, elegant, and really powerful all in the same language!
  • on the other hand, Lua is a KISS-type language, which makes it ideal for embedding where you want the scripting engine to be reasonably immune to scripting attacks from the users.

Hugh

Ah, ok. Makes sense! Awesome work though.

I’m looking forward to seeing what Josh comes up with for the terrain and such. I’ve always been of the mindset that we shouldn’t let the technology dictate what kind of game we make, but beggars can’t be choosers! :slight_smile: So I may have to revise my plans a bit.

I don’t think you’ll need to let the engine dictate anything. The two most modern algorithms for LOD terrain are geomipmapping and geometry clipmaps. I don’t think either is out of reach.
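For what it’s worth, here is a bare-bones sketch of the geomipmapping idea (a very simplified, distance-based version; the patch size and thresholds are just illustrative, not from any particular implementation):

```python
# Minimal sketch of geomipmapping: a square patch of (2^n + 1) x (2^n + 1)
# heights is drawn with a vertex stride of 2^level, so level 0 uses every
# vertex and higher levels skip vertices. Numbers are illustrative only.

PATCH_SIZE = 17            # (2^4 + 1) heights per side
MAX_LEVEL = 4

def mip_level_for_distance(distance_m, base_distance=100.0):
    """Drop one level of detail each time the distance doubles."""
    level = 0
    while level < MAX_LEVEL and distance_m > base_distance * (2 ** level):
        level += 1
    return level

def patch_vertices(level):
    """Yield the (row, col) vertex indices used by a patch at this mip level."""
    stride = 2 ** level
    for r in range(0, PATCH_SIZE, stride):
        for c in range(0, PATCH_SIZE, stride):
            yield r, c

# A patch 500 m away drops to level 3: a 3x3 grid of vertices instead of 17x17.
level = mip_level_for_distance(500.0)
print(level, len(list(patch_vertices(level))))
```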

I’m still working on that heightfield terrain. I’m actually writing a tutorial program (like the samples in the start menu), so it’s going to take me a little longer than I intended.

Sounds good Josh.

Josh,

Out of curiosity, are you a student at the Carnegie ETC?

The reason I ask… how do most people find out about the latest ways of doing things in the 3d world?

For us non-student types, that is.

I work in the IT field during the day, so I keep up by reading everything. In the 3d graphics world, though, I find a limited number of resources to learn from.

Just from the few posts I’ve exchanged with you, I’ve gotten the impression that some of the ways I’m used to from other 3d environments are not necessarily the best or the latest ways of doing things.

Any thoughts?

Thanks,

Steve

I’m a teacher at the ETC. Last semester, I taught a class on games programming. I had to do a lot of studying, and I learned some amazing things.

The best resources are:

  • the gamasutra featured articles
  • the nvidia developer website, with all their white-papers and notes
  • the ATI developer website, same
  • certain posts by a guy named “Yann” at gamedev.net

Good to know.

I already follow gamasutra.com, though I never really thought to look at the ATI and nVidia sites before now.

I’ll add them to my list of places to visit on the web.

Thanks.

Steve

I was trying to implement a terrain demo, but I learned that the necessary API is being rewritten. Here are David Rose’s comments on the subject:

“Maybe it’s best to write the tutorial against the new Geom interface, and publicly post just a ‘tutorial coming soon’ message for now. Then when we release Panda3D 1.1, you can release the new tutorial at the same time. Probably better than teaching people to use an interface whose future lifespan is measured in weeks, anyway.”

As I understand it, this new interface will be released in mid to late June. I hate postponing something that you guys need… do you mind if I put this off for a few weeks?

- Josh

Josh, I personally don’t mind. There is plenty of other stuff I can work on until then :slight_smile:

Thanks for your time with this anyway.

Steve

I’ve started to think in detail about how I’m going to implement panoramic terrains in my application.

Since my application involves real-time streaming of the terrain as people move around, there are going to be issues not just with rendering level of detail, but also with download/streaming level of detail.

I was wondering if anyone can help me out on this?

So, basically, let’s imagine we have an 8km by 8km terrain, with say 1m resolution. We’re standing on the ground and walking around, so this is a big terrain. That’s about… 64 meg? … in total (8000 x 8000 height samples at one byte each).

There’s mountains in the distance, and you can see those just great. As you get closer, the details fill in, and you see smaller crags appear, until you get really close, and then the smallest (1m) potholes appear.

So, on the rendering side, I’ve seen Josh allude to various LOD optimizations that are possible. Rendering doesn’t really worry me; I don’t think it’s high risk. It’s been done tons of times before, and besides, I can think of a few very trivial algorithms to handle it, like just using really big triangles in the distance, and small ones up close.

So, the issue/risk is going to be the streaming/downloading, so that’s what I want to focus on here.

Whilst the rendering and streaming are definitely similar in that they both involve level of detail centred on the avatar, they are different because:

  • streaming is probably optimized using wavelets?
  • rendering uses triangles: well-defined linked points

What I’m thinking is that if we use wavelets for the streaming, then the rendering algorithms can be entirely independent of the streaming algorithms, since the wavelet data gives just as much information about any single point in the heightfield. So, our renderer can just say “ok, give me the height at (0,0), (10,10) and (10,0) please”, and the terrain streaming object can give it that information, using the wavelet coefficients it currently holds.
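To illustrate the separation I have in mind, here’s a hypothetical sketch (all class and method names are made up); the renderer only ever asks the streaming object for heights and never sees the wavelet representation:

```python
# Hypothetical interface: the renderer only calls get_height(); how the heights
# are reconstructed (wavelets or anything else) stays inside the streaming object.

class StreamedTerrain:
    def __init__(self):
        self.coefficients = {}   # wavelet coefficients received so far, keyed by square

    def apply_packet(self, packet):
        """Merge newly received wavelet coefficients into our current view."""
        self.coefficients.update(packet)

    def get_height(self, x, y):
        """Best current estimate of the height at (x, y) from the data we hold."""
        # Placeholder: a real implementation would sum the wavelet basis
        # functions covering (x, y). Returning 0.0 keeps the sketch runnable.
        return 0.0


class Renderer:
    def __init__(self, terrain):
        self.terrain = terrain

    def build_triangle(self, p0, p1, p2):
        # "ok, give me the height at (0,0), (10,10) and (10,0) please"
        return [(x, y, self.terrain.get_height(x, y)) for (x, y) in (p0, p1, p2)]


terrain = StreamedTerrain()
print(Renderer(terrain).build_triangle((0, 0), (10, 10), (10, 0)))
```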

Next: obviously, whilst wavelets provide for progressive downloading, they don’t on their own provide level of detail. The whole terrain would download in an evenly distributed fashion, and very slowly.

So, we need to add some bits onto this for level of detail. The wavelets are just there to provide for easy progressive downloading.

For the level of detail, what I’m thinking is: first we divide our terrain into a grid of 8192m x 8192m boxes. Since that’s the size of our terrain, there’ll just be the one square :slight_smile:

We run wavelets on that to maybe 4? 8? terms.

Next, we divide this square into 16 smaller squares, 2048m x 2048m, and we run wavelets on the 8 small squares around where our avatar is, plus the one where it is standing.

So, now we’ve run wavelets on 9 + 1 squares. If we run to 4 x 4 terms for each square, then that is 160 numbers so far.

Next, we resubdivide the grid into 512m x 512m squares. So each of the previous gridsquares contains 16 of these squares. We rerun wavelets on the 8 512m squares around where our avatar is, plus the one where it is.

We repeat this for: 128m x 128m, 32m x 32m, and 8m x 8m.

This gives us 5 sets of 9 squares, and the 8192m x 8192m square.

That’s 5 * 9 + 1 squares = 46 squares.

For each square, we’re generating maybe 4 x 4 wavelets (2 axes), so 16 numbers.

So, in total that’s 46 * 16 numbers = 736 numbers.

If we provide two bytes for each number, to allow a height resolution from 0 to 65535, then a single view of our 8km by 8km terrain is:
736 * 2 bytes = 1472 bytes

That’s about the same size as a single Ethernet packet! So, it’s definitely looking potentially feasible?
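Just to sanity-check the arithmetic, here it is written out (square sizes and term counts exactly as above):

```python
# Re-deriving the numbers above: one 8192 m square plus 9 squares at each of
# the five finer levels, 4 x 4 wavelet terms per square, 2 bytes per term.

level_sizes = [2048, 512, 128, 32, 8]   # metres per square at each finer level
squares = 1 + 9 * len(level_sizes)      # 1 + 45 = 46 squares
terms_per_square = 4 * 4                # 16 numbers per square
numbers = squares * terms_per_square    # 736 numbers
bytes_total = numbers * 2               # 1472 bytes, roughly one Ethernet frame

print(squares, numbers, bytes_total)    # 46 736 1472
```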

What are your thoughts on this? What references are out there that could help me with this?

Hugh

Well, I may be able to help, since I’ve implemented this before. But I have no idea what a wavelet is… so you lost me.

In ATITD, we used a heightfield terrain, with control points every 16 feet. A control point consisted of an elevation (16 bits) and a terrain type (8 bits).

This data is stored on the server in patches; each patch is 16 control points by 16 control points. The elevations and the terrain types are stored in separate arrays. I use lossless compression algorithms on these. Not all patches are compressed using the same algorithm. For example, I might look at one patch and say “this elevation patch can be stored as a base elevation plus a 4-bit offset.” Some other patch might not be representable that way.
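To give a rough idea of the base-plus-offset case, here’s a small sketch of the concept (this is not our actual code, and the tag bytes and fallback format are just made up for illustration):

```python
# Sketch of per-patch elevation compression: if every elevation in a 16x16 patch
# fits within 15 units of the patch minimum, store a 16-bit base plus one 4-bit
# offset per point; otherwise fall back to raw 16-bit elevations.

import struct

def compress_elevations(elevations):
    """elevations: list of 256 ints (a 16x16 patch of 16-bit heights)."""
    base = min(elevations)
    if max(elevations) - base <= 0xF:
        # Pack two 4-bit offsets per byte: 1-byte tag + 2-byte base + 128 bytes.
        offsets = [e - base for e in elevations]
        packed = bytes((offsets[i] << 4) | offsets[i + 1]
                       for i in range(0, len(offsets), 2))
        return b'\x01' + struct.pack('<H', base) + packed
    # Fallback: 1-byte tag + 512 bytes of raw 16-bit elevations.
    return b'\x00' + struct.pack('<256H', *elevations)

flat_patch = [1000 + (i % 4) for i in range(256)]
print(len(compress_elevations(flat_patch)))   # 131 bytes instead of 513
```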

I believe that our terrain-type patches usually compressed down to about 30 bytes (terrain tends to be in large contiguous areas), and our elevation patches tended to compress down to about 120 bytes.

We stream these in a straightforward way. If there’s a patch we’ve already streamed, we don’t stream it again unless it’s been modified. When you teleport into an area, there’s a lot of data to download – it takes several seconds. But if you’re traveling on foot, then we’re only streaming data near the horizon — it isn’t much bandwidth.

On the client side, we increase the resolution to every 8 feet, generated using splining.
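For anyone curious what that splining step might look like, here’s a simple one-dimensional Catmull-Rom sketch of the idea (the real thing is two-dimensional, and I’m only guessing at the details):

```python
# Sketch: doubling the resolution of a row of control points (every 16 ft ->
# every 8 ft) by inserting a Catmull-Rom interpolated point between neighbours.

def catmull_rom_midpoint(p0, p1, p2, p3):
    """Value at the midpoint (t = 0.5) of the Catmull-Rom segment from p1 to p2."""
    return 0.5 * (2 * p1 + 0.5 * (p2 - p0)
                  + 0.25 * (2 * p0 - 5 * p1 + 4 * p2 - p3)
                  + 0.125 * (-p0 + 3 * p1 - 3 * p2 + p3))

def upsample_row(heights):
    """Return the row with one interpolated height inserted between each pair."""
    out = []
    for i in range(len(heights) - 1):
        p0 = heights[max(i - 1, 0)]
        p1, p2 = heights[i], heights[i + 1]
        p3 = heights[min(i + 2, len(heights) - 1)]
        out.append(p1)
        out.append(catmull_rom_midpoint(p0, p1, p2, p3))
    out.append(heights[-1])
    return out

print(upsample_row([0.0, 10.0, 20.0, 10.0]))
```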