Panda3D paged geometry?

Right. Without hardware geometry instancing, you’re paying for every single geom that is being rendered by the GPU.

Right, with batches you want to keep the count as low as you can without wrecking culling. (That's why you shouldn't just flatten everything: that would actually have a negative effect, because fewer nodes could get culled away.)

To clarify a bit on that: if you flatten all the trees into a single node, then all the trees will be seen as a single object. Because this single huge object is always in view, it will always be sent in its entirety to the GPU. If you have many separate trees, then only the ones that are partially or entirely in view are sent to the GPU.

So you’ll want to smartly flatten groups of trees that are standing close to each other, and find the perfect tradeoff between efficient culling and efficient batching.
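One way to pick those groups is to bucket tree positions into a coarse grid and flatten each cell into its own batch. This is only a sketch: the cell size and helper names are made up, and `tree_model`/`render` are assumed to exist; `flattenStrong` is Panda's standard call for merging the geoms under a node.

```python
import math

def cell_key(x, y, cell_size=50.0):
    # Map a tree's (x, y) position to the coarse grid cell it falls in.
    return (int(math.floor(x / cell_size)), int(math.floor(y / cell_size)))

def group_positions(positions, cell_size=50.0):
    # Bucket positions by cell; each bucket becomes one flattened batch,
    # so culling still works per cell instead of per tree.
    cells = {}
    for x, y in positions:
        cells.setdefault(cell_key(x, y, cell_size), []).append((x, y))
    return cells

# Panda3D side (sketch; 'tree_model' and 'render' assumed):
#
#   for cell, tree_positions in group_positions(all_positions).items():
#       batch = render.attachNewNode('trees-%d-%d' % cell)
#       for x, y in tree_positions:
#           t = tree_model.copyTo(batch)
#           t.setPos(x, y, 0)
#       batch.flattenStrong()  # merge the cell's trees into few geoms
```

The tradeoff mentioned above lives in the cell size: bigger cells mean fewer batches but coarser culling.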

It all makes sense now.
One last thing: you said the functionality is already there in Panda for paging. Can you explain what you meant? I still want to have page-batches for the terrain chunks.
Are there some classes for paging built into Panda?

Panda3D doesn't currently support paging (in the sense of swapping terrain pieces to and from disk). Note that this will only be necessary if you have gigantic worlds, though. Because adding paging will make the game lag/chop when chunks are loaded and unloaded, only use it when you run into system memory issues.

Don't we have async loading for these cases?
I'm not sure what you mean by gigantic. How's around 10 square miles?

Async loading can help, but you’d need a true threading build to take full advantage of multi-core systems.
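For context, the general background-loading pattern looks roughly like this. It's a generic sketch with made-up names, not Panda code; if I recall correctly, newer Panda versions also let you pass a `callback` to `loader.loadModel` to make the load itself asynchronous.

```python
import threading
import queue

class AsyncChunkLoader:
    # Sketch of background loading: a worker thread runs the blocking
    # load while the main thread keeps rendering; finished chunks come
    # back through a queue and are attached on the main thread.
    def __init__(self, load_fn):
        self._load_fn = load_fn          # blocking load, runs off-thread
        self._requests = queue.Queue()
        self._done = queue.Queue()
        worker = threading.Thread(target=self._worker, daemon=True)
        worker.start()

    def request(self, name):
        # Queue a chunk for loading; returns immediately.
        self._requests.put(name)

    def _worker(self):
        while True:
            name = self._requests.get()
            self._done.put((name, self._load_fn(name)))

    def poll(self):
        # Call once per frame on the main thread; returns finished loads.
        finished = []
        while not self._done.empty():
            finished.append(self._done.get())
        return finished
```

Even with only "simple threading", this keeps the stall off the main loop, at the cost of the load finishing a bit later.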

“square miles” means nothing to me if I don’t know how many terrain pixels make one mile, in your game.

I thought the official builds are built with threading support. And does it only work on multi-core CPUs or something?

I'm not really using heightmaps, so something like 1 Panda unit == 1 yard.

So, according to Google Calculator, that'd be 17600 units (10 miles × 1760 yards per mile). Depending on how many triangles you have per unit, that might indeed be too much.

Well yeah, big island.
So, about the paging: when checking if my version of Panda is compiled with threading support, I get 1. I don't know if that's the C++ or Python convention, but I think "1" is "True", right?

from pandac.PandaModules import Thread
print Thread.isThreadingSupported()  # prints 1 (True) if threading is compiled in

Is threading only possible on multi-core CPUs? I really don't know.

Another thing:
You said that in Panda if I load a single tree multiple times, it will still use as much memory as a single tree.
But if I group trees into "batches" and turn them into a single geom, wouldn't they be treated as different in memory then?

Yeah, but your build of Panda is compiled with so-called "simple threading".

I didn’t say you can’t use threading, all I said is that you can’t fully take advantage of multi-core CPUs.

I don’t know if those flattened groups will end up taking up more memory.

I have no idea what those advantages are. If threads work as in "don't take up all resources, so loading a terrain chunk won't freeze Panda but will finish slower", then that works for me.

Well, if they are a single geom now, how will Panda know they are the same trees? Maybe "paging" the trees like the terrain chunks makes sense.

That's possible, but I'd only worry about that if you notice it becoming a problem.

I have only about 200 trees right now, and that's about 200000 vertices. There's nothing else going on, and I'm not even using the number of trees I want, so it is a problem.

But how much memory is your program consuming? It’s always better to look at your actual memory usage than speculating about whether it may or may not become too high.
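One quick way to actually look, on Unix at least (the `resource` module is not available on Windows; Task Manager does the job there):

```python
import resource

def peak_memory_kb():
    # Peak resident set size of this process so far. Caveat: ru_maxrss
    # is reported in kilobytes on Linux but in bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
```

Print that before and after loading the trees and you have a real number instead of a guess.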

Vertices don’t take up a lot of space.

Also, high memory usage doesn't cause your FPS to drop. Let's be clear: your problem is speed, right? Disk I/O slows things down. Paging = slower. Besides, paging is for stuff that isn't on-screen. As for batching, you might get a small FPS boost if you find an optimal balance, but probably nothing huge.

Have you tried disabling textures to see if you get a boost? Maybe the models themselves are just exported with some incorrect settings. (I just tried it now but only got a small speed increase)

Current problem is speed, right. But paging will be a must for a 17000-square-unit island, no? I think a vertex takes up around 32 bytes with normals and UVs, but no weights or bone assignments. My little scene with a few hundred trees uses 200000 vertices; that's not much, but having the whole island loaded like that would be crazy in my opinion.

Yeah. Did I say the opposite? I have 2 problems currently; I'm not trying to fix the FPS drop caused by the tree alpha textures with this. This has nothing to do with it.

I want to have thousands of trees on screen. The geom limit, to my knowledge, is around 1000, so I don't see how I can do this without batching.

base.textureOff()


:unamused: this keeps getting weirder and weirder

I’m no expert, but I’d like to share some thoughts.

That’s only important when you send that into the GPU. You can have millions of trees, but if one tree model is, say, 1000 verts that’s how many verts you keep in the memory. Paging, batching and so on is pointless from that standpoint, and rendering can be solved with LOD and culling.
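Back-of-envelope, using the 32-bytes-per-vertex figure mentioned elsewhere in the thread:

```python
def model_vertex_bytes(verts, bytes_per_vert=32):
    # With instancing, vertex data is stored once per *model*, not per
    # placement; 32 bytes/vertex assumes position + normal + UV.
    return verts * bytes_per_vert

# One 1000-vert tree: ~32 KB of vertex data, no matter how many
# instances of it stand in the forest.
```

So even a dozen distinct tree models stay well under a megabyte of vertex data.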

As long as you don’t have thousands of types of trees, you should really be fine memory-wise.

I would actually worry about the land itself more, because if you want it to be that big, and probably very detailed, it might be an issue. Especially if you want to use multiple textures, because I can’t imagine stretching one, say, 2k texture over the whole land, and Panda has nothing megatexture-esque (does it?).

The how is known as hardware geometry instancing.

Finally, judging by the framerate on your texture-less screenshot, I guess we can safely assume that texture transparency has nothing to do with your problems. Unless you forgot to turn off transparency, in which case maybe it still does something for some reason.

I’ve run your sample code and I had to add literally thousands of trees to get really low framerate.

You said you tested that on some hardware, mainly laptops IIRC. What GPUs and what OS?

On a side note, you said that it’s the same exact models as in the OGRE demo. However, if I understand correctly, you imported them into Blender from an OGRE mesh format and then exported them into EGG with Chicken, right? So they went through two levels of format changing – something could get lost in between. I’m not saying something did, but I’m not sure you can safely say they’re the same thing.

But you can have only around 300 geoms; if you have more, you have to flatten them. When you flatten them ALL into one, you lose culling. So you can flatten them by grouping them into "batches" first and flattening those (but making sure you don't have more than 300 "batches").
So anyway, if you flatten geoms into 1, I'll guess you break caching. So it's not that simple.
I've already explained that; please don't make me repeat myself, because then my posts that contain current questions get buried behind newer posts where I repeat myself, which makes them harder to find.

I fail to understand why everyone here is opposed to terrain paging. It's the first word you hear when you want huge or infinite worlds. What's the big deal?

I see NO mention of hardware geometry instancing on the OGRE page. Again, if it can be done with Ogre, it can be done with Panda, right?
Isn't hardware geometry instancing a relatively new thing anyway? Far Cry was released in 2004 and it had an island with a large amount of vegetation.
And from what I've learned, hardware geometry instancing just saves you from batching, and the trees are sent as 1 batch; but the number of geoms is not what is causing the framerate drop, since the drop is there even with 100 geoms.
That is also kind of misleading, as you make it sound like it's the only possible way to solve this.

I don't know what base.textureOff() does internally, but I would guess it either still calculates alpha for whatever reason, or does something else, like calculating the render order of the branch planes.

Answered that above. Not sure why the OS is important here. Using OpenGL.

Could be, but I've tried MAlpha, MDual and MBinary. I haven't gotten any reply on what settings I can use in the egg files.

Yes, but I meant the memory. And, as I said later, the problem of having many geoms can easily be solved by hardware instancing. Combining Panda's standard memory-management features with hardware instancing, you have every tree loaded and rendered once but placed many times in the scene. Combined with culling (most importantly frustum culling, because you obviously don't see all of your scene at once, unless you make some kind of vista from a high position for the player to enjoy), it's a complete win.
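For reference, the scene-graph form of this in Panda3D is `NodePath.instanceTo`: one loaded model, many placements, one copy of the geometry in memory. A sketch with made-up names; note this saves memory, not draw calls, since each instance is still its own geom as far as the GPU is concerned.

```python
def grid_positions(count, spacing=15.0, per_row=50):
    # Hypothetical layout helper: place instances on a regular grid.
    return [((i % per_row) * spacing, (i // per_row) * spacing)
            for i in range(count)]

# Panda3D side (sketch; 'loader' and 'render' assumed):
#
#   tree = loader.loadModel('tree.egg')
#   for x, y in grid_positions(1000):
#       slot = render.attachNewNode('tree-slot')
#       slot.setPos(x, y, 0)
#       tree.instanceTo(slot)
```

Each slot node can still be frustum-culled individually, which is the point of not flattening everything.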

It’s not that anyone’s opposed to anything, it’s just that from what you say, you simply don’t need that. If you have very complex terrain combined with some houses, and possibly other “it’s only here but takes a lot of mem” stuff, sure, you should then divide stuff into chunks with their own LOD and load only those that are needed here and now, but you’re talking about trees. And for trees that’s pointless.

So?

I've been following this thread since it began and I still can't say I get what "it" actually is. All I see is "paging" used as a buzzword, or rather "the word that OGRE developers used, whatever that means". Sorry, but that's the impression I get.

Quick googling reveals that geometry instancing was described in GPU Gems 2, which was released in 2005. So I humbly assume it was already there in 2004 or earlier.

Well, again, I have a feeling the point of this thread was somehow lost in translation. You’ve started this thread to find out about how to implement whatever the OGRE developers actually made. It was soon pointed out that you don’t need whatever that was and that Panda provides certain features that can do the job of rendering a dense forest.

Then, somehow, the point changed into stuff that is completely unrelated to either that OGRE tech or rendering a large terrain covered with thousands of trees: it became what causes your frame drop, which happens even when you render one and only one tree. So I have a suggestion. Let's switch to debugging mode full time and try to find out what makes you unable to render one tree, and only then get back to rendering a million. How's that?

Because the fact is, if you get a frame drop with 1 or 10 trees, then there's no technology, not geometry instancing, not paging, not the mysterious OGRE code, that could possibly allow you to render a thousand. Just no freaking way :wink:.

Ok, but just to be sure, don’t enable transparency on the nodes, if you still do.

Because the quality of drivers differs a LOT between Windows and Linux (not sure about Mac), especially for AMD and Intel (Nvidia is pretty much on par), so if you tried that on an Intel GPU running Linux, your results could be skewed by the driver. At least I would consider that possibility.

And if it's a bug in Panda, it might as well be bound to a specific hardware-software combination as mentioned above. If that were the case, providing that info is crucial to fixing the bug (you should generally begin any kind of complaint regarding performance with extensive information on hardware, OS, driver versions and so forth).

Sorry, can’t help with that. But I would start by disabling everything and seeing if the number of geoms, their complexity, the number of lights (if any) has any effect.

Well, the thing is, you're saying it's OK since they share the same memory. I'm saying they can't go over 300, and if I flatten them, they don't share the same memory anymore. Get what I mean?

Is it? Look above. Again, if you flatten them into batches, the batches are all different now (they don't share the same memory), like terrain chunks. So paging them does make sense to me.

Wow. If it's not mentioned in the OGRE pages, then it's likely not used. If it's not used, it means I don't need to use it either to get the same results. So it's both difficult and unnecessary, so why mention it now like it's the only way?

If you don't understand what I want to achieve, even if I/the Ogre devs used the wrong term, then I give up.

Even if that's true, I see even better-looking jungle in the Ogre demos, which presumably doesn't use it, so even if that game used it, it's not necessary. And if you can't give me a working instancing shader and culling code that works with it, you shouldn't really mention it in this topic again, as I know nothing about shaders, and it looks like it's not the only possible way.

And I'm actually working on the batch creation code right now. The only questions left now are:

  1. what's with the low framerate with the trees?
  2. does batching (flattening) break "caching", and if so, doesn't "paging" for trees start to make sense again?

You misunderstand; I have 2 questions now which are unrelated. I don't think issue 1 is caused by issue 2. I don't know why you think that.

They are set in the egg file, no line of code for that. I will delete the materials and textures, export the tree now, and post my results.

I think I posted 2 Nvidia GPUs above.
I use Windows 7 and Ubuntu. I can try on Ubuntu, but I don't think the difference is that big.

For the record, the main problem of flattening everything down to one geom is that you break culling, not ‘caching’, whatever you meant by that.

And yes, you can obviously pull off the same scene in Panda that you can pull off in Ogre. Both are thin wrappers around OpenGL.