Minecraft clone: poor graphics performance

Hello there,

So recently I started working on a Minecraft-like engine written in Python with Panda3D, but I’ve run into some problems.

At the moment I am trying to generate one big mesh for the world’s geometry, but for textures and colors to be applied correctly I need every face to be separate, and that seems to cause performance issues. I only get 30 fps with 3072 separate faces visible, and I am pretty sure you can see a lot more than that in Minecraft, which runs fine for me. Nothing else is rendered, just the visible faces. My graphics card is a GTX 460.

Can’t a current graphics card handle that amount of geometry?
As far as I know it isn’t possible to generate a mesh with connected faces and still apply different textures and color settings to them, which is why I am doing it this way. I would appreciate some suggestions.

Do you have any idea how other people render Minecraft-like worlds? I googled a bit, and it seems that everyone renders the world the same way I do, yet they get 200-300 fps with far more faces visible on screen.

Render:

You can’t apply a different texture to each face with Panda or any other graphics engine, but you can easily apply a different color. You can also apply a different UV pair to each face, so one simple way to apply a different texture to each face is to create a single large mosaic texture that has all of the pieces you need in one texture, and then change the UVs of each face to access the part of the texture that you want.

Panda’s egg-palettize tool can do this automatically, but it’s an offline process (it works on egg files, and isn’t designed to be used at runtime).

If you need to do this at runtime, there are other approaches, but you have to start to get clever with your coding.
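If it helps, here’s a minimal sketch of that runtime approach: one quad whose UVs are remapped into a single tile of an atlas texture. The 2x2 atlas layout, the tile indices, and the make_quad name are just assumptions for illustration.

```python
from panda3d.core import (Geom, GeomNode, GeomTriangles, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

def make_quad(tile_u, tile_v, tiles_per_row=2):
    """Build one quad whose UVs cover a single tile of an atlas texture."""
    size = 1.0 / tiles_per_row              # width/height of one tile in UV space
    u0, v0 = tile_u * size, tile_v * size   # lower-left corner of the tile
    u1, v1 = u0 + size, v0 + size           # upper-right corner

    vdata = GeomVertexData('quad', GeomVertexFormat.getV3t2(), Geom.UHStatic)
    vertex = GeomVertexWriter(vdata, 'vertex')
    texcoord = GeomVertexWriter(vdata, 'texcoord')

    # A unit quad in the XZ plane; each corner gets a UV inside the chosen tile.
    for (x, z), (u, v) in zip([(0, 0), (1, 0), (1, 1), (0, 1)],
                              [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]):
        vertex.addData3f(x, 0, z)
        texcoord.addData2f(u, v)

    prim = GeomTriangles(Geom.UHStatic)
    prim.addVertices(0, 1, 2)
    prim.closePrimitive()
    prim.addVertices(0, 2, 3)
    prim.closePrimitive()

    geom = Geom(vdata)
    geom.addPrimitive(prim)
    node = GeomNode('quad')
    node.addGeom(geom)
    return node
```

You’d attach it with render.attachNewNode(make_quad(0, 1)) and apply the atlas with setTexture(loader.loadTexture('atlas.png')); 'atlas.png' is just a placeholder name, and the same indexing works for any grid size.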

David

Sounds interesting. Maybe there’s some information in the manual about that?

EDIT: I figured it out. Thank you for your help.

The manual talks more about how to use Panda’s interface in general, rather than about specific render techniques.

But there was a longish thread on a similar subject a while back that might give you some insight. Try: [hitting a speed limit]

David

So I wrote a mesh generator class, which generates one model from 16x16x16 geometry and applies the needed textures, but the problem is that generation is too slow; geom.addPrimitive() takes most of the time. Is there any way to speed it up without using Cython? I have trouble with Visual Studio C++ on my PC. Or is it now possible to compile Panda3D with MinGW? I get an error about minmax.h not being found. I am using Panda3D 1.7.1.
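For context, here is a minimal sketch of the kind of per-chunk builder being discussed, where every face shares one GeomVertexData and one GeomTriangles so geom.addPrimitive() runs only once per chunk. The faces argument (a list of four-corner quads) and the function name are assumptions, not the actual class.

```python
from panda3d.core import (Geom, GeomNode, GeomTriangles, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

def build_chunk_geom(faces):
    """faces: a list of quads, each a tuple of four (x, y, z) corners.

    Every quad shares one GeomVertexData and one GeomTriangles, so
    geom.addPrimitive() is called a single time for the whole chunk.
    """
    vdata = GeomVertexData('chunk', GeomVertexFormat.getV3(), Geom.UHStatic)
    vertex = GeomVertexWriter(vdata, 'vertex')
    prim = GeomTriangles(Geom.UHStatic)

    for i, corners in enumerate(faces):
        for x, y, z in corners:
            vertex.addData3f(x, y, z)
        base = i * 4
        # Two triangles per quad, indexed into the shared vertex data.
        prim.addVertices(base, base + 1, base + 2)
        prim.closePrimitive()
        prim.addVertices(base, base + 2, base + 3)
        prim.closePrimitive()

    geom = Geom(vdata)
    geom.addPrimitive(prim)        # one call per chunk, not one per face
    node = GeomNode('chunk')
    node.addGeom(geom)
    return node
```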

Try the latest buildbot release; I recently checked in some changes in the CVS tree that should improve the performance of geom.addPrimitive(). Let me know if it seems to help.

David

I have a simple Minecraft clone I can share. I will try to make it presentable.

Well, I tried it. It seems faster, but it’s still too slow. I am trying to rewrite the mesh generator in Cython, but it’s really complicated. How much better is Panda3D’s C++ geometry-generation performance compared to Python’s?

Also, is it possible to keep the color settings so that they stay as they were after flattening?

It would be great to find out how others are doing it. If this geometry generation approach turns out to be too slow, then I’ll use flattening instead.

Is it presentable yet? :)

(Even if not, it would probably be useful example code for anyone trying something similar, so I encourage you to post it somewhere.)

Do you have a function which only sends potentially visible faces to your geom constructor, or are you sending it a solid block of 16**3 cubes?

With a 16x16x16 solid block you would be sending 12 tris/cube * 16**3 cubes = 49,152 tris to your constructor. If you send just the visible geometry, it’s 2 tris/face * 6 sides/block * 16x16 faces/side = 3,072 tris. Pretty big difference in the amount of data to send to your geom constructor.

I guess this is pretty complex, but does anyone know how to implement this (a quick way of checking cubes for shared faces)?

Maybe my snippet will be useful. To increase rendering speed I join faces with the same texture into one geom, so for 3 different textures and a 100x100 tiled field I have only 3 meshes.
[Creating a tiled mesh at runtime]
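Roughly, the grouping could look like this sketch (the make_geomnode helper, the face-list format, and passing in loader are assumptions here, not the code from the linked snippet):

```python
from collections import defaultdict
from panda3d.core import NodePath

def build_by_texture(faces, make_geomnode, loader, parent):
    """faces: list of (texture_filename, quad_corners) pairs.

    All faces that share a texture go into one GeomNode, so the number of
    nodes (and texture state changes) equals the number of distinct textures.
    make_geomnode is assumed to turn a list of quads into a GeomNode.
    """
    groups = defaultdict(list)
    for tex_name, corners in faces:
        groups[tex_name].append(corners)

    for tex_name, quads in groups.items():
        node = NodePath(make_geomnode(quads))   # one geom per texture
        node.setTexture(loader.loadTexture(tex_name))
        node.reparentTo(parent)
```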

It’s not complex if you remember where your cubes came from in a 3d array, and index them by their central coordinates: they share a face if two coordinates are equal and the other one differs by exactly 1. That is, if the vector between the centers is (0,0,1) or (0,0,-1) or (0,1,0) or… etc.

You can keep a set of faces to display (as well as a set of filled cubes), and update them incrementally in an efficient way to just get the visible surfaces, which are the faces between two cubes, one of which is filled and one isn’t. So when you change a few cubes you only recompute and change visibility on a few faces (at most 6 times the number of changed cubes).
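In code, that incremental bookkeeping could look roughly like this sketch, with cubes identified by integer (x, y, z) tuples; the set and function names are made up for illustration.

```python
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

filled = set()          # (x, y, z) of solid cubes
visible_faces = set()   # (cube, direction) pairs that should be drawn

def update_faces(cube):
    """Recompute visibility for the (at most 6) face pairs touching this cube."""
    x, y, z = cube
    for d in NEIGHBOURS:
        neighbour = (x + d[0], y + d[1], z + d[2])
        # This cube's face is visible when the cube is solid and its neighbour is empty.
        if cube in filled and neighbour not in filled:
            visible_faces.add((cube, d))
        else:
            visible_faces.discard((cube, d))
        # The neighbour's face pointing back at this cube may also have changed.
        opposite = (neighbour, (-d[0], -d[1], -d[2]))
        if neighbour in filled and cube not in filled:
            visible_faces.add(opposite)
        else:
            visible_faces.discard(opposite)

def set_cube(cube, solid):
    """Add or remove a cube and update only the affected faces."""
    if solid:
        filled.add(cube)
    else:
        filled.discard(cube)
    update_faces(cube)
```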

Yes, I saw that earlier, it’s a very useful example. Thanks.

With that vector solution you have to check each cube against each adjacent cube, instead of all at once. I was thinking of storing where the cubes are in a BitArray, and then XORing rows against each other to get where the faces should go.

10110101011 cube row 1, 7 cubes
01101101010 cube row 2, 6 cubes
----------- xor
11011000001 resulting in 5 total faces to be rendered between row 1 and 2

This works for checking the top, bottom, front and back faces. To check the left and right faces, just shift a copy of the row:

10110101011  cube row 1
010110101011 shifted row 1
------------ xor
111011111101

I haven’t tried this yet, but I think it will work.
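For the record, here is a small sketch of that XOR idea using plain Python integers as bit rows, with the example rows from above; the helper names are made up.

```python
def faces_between(row_a, row_b):
    """Bits set where exactly one of two adjacent rows has a cube,
    i.e. where a face is needed between them."""
    return row_a ^ row_b

def side_faces(row, width):
    """Faces needed on the side boundaries within a single row.

    Shifting by one and XORing marks every transition between a filled
    and an empty cell; the mask keeps the result within width + 1 bits.
    """
    return (row ^ (row << 1)) & ((1 << (width + 1)) - 1)

row1 = int('10110101011', 2)   # cube row 1, 7 cubes
row2 = int('01101101010', 2)   # cube row 2, 6 cubes

print(bin(faces_between(row1, row2)))   # 0b11011000001 -> 5 faces between the rows
print(bin(side_faces(row1, 11)))        # 0b111011111101, as in the worked example
```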

ninth, thanks for the link to your code. That is really useful!

Edit: Regarding the shift-by-1 check above: I shifted right and made the faces on the left side, but it’s better to shift left and make the faces on the right, because otherwise that last bit gets cut off.

I figured out what the problem was in the other thread. Here’s a short video of a render.

http://www.youtube.com/watch?v=qg4gMMDaJM8&list=PLFE62BDC2DDE5F110&feature=mh_lolz

I have since dropped the project, because I lost interest in Minecraft. I’d still be interested in doing marching-cubes-based destructible terrain, but at the moment I am working on other projects.

To determine whether a face should be rendered, I check whether the adjacent cube is empty, and if it is, I add the face. There’s probably a faster method, though.

The video looks like it has good fps. Could you share your code, please? I would like to see it.

Here you go:

pastebin.com/xxL1ZKj4 - main file
pastebin.com/1dc4xjsM - mesh generator

Also you’ll need a texture:

Bear in mind that this is just a quick prototype, so some parts are just hacked in, like setNumRows(). I didn’t know how to calculate the number of vertices without an extra iteration over the cubes, but it doesn’t seem to really matter. Also, only two textures are supported. And you wouldn’t want to use a dict to store cubes, because it uses way too much memory.
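For what it’s worth, one way to avoid guessing for setNumRows() is a cheap counting pass before any vertices are written, roughly like this sketch (the cubes set of solid coordinates is an assumption):

```python
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def count_visible_faces(cubes):
    """cubes: a set of (x, y, z) coordinates of solid cubes."""
    return sum(1
               for (x, y, z) in cubes
               for (dx, dy, dz) in NEIGHBOURS
               if (x + dx, y + dy, z + dz) not in cubes)

# Then preallocate exactly the rows needed: 4 vertices per visible quad.
# vdata.setNumRows(count_visible_faces(cubes) * 4)
```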

Thanks a lot. I get really good fps after the chunk is generated. However, it takes over 10 seconds to generate on my computer. I’m looking for a way to generate the cube data at 30 fps. I don’t know if this is possible with Panda.

Hmm. It really shouldn’t take that long. For me it takes 0.59 s to generate a 25x25x25 chunk, and my CPU is quite old (an Intel Core 2 Duo 6420 overclocked to 2.9 GHz). Not sure what the problem might be.

Also, if you want to keep a decent fps, you should not generate an entire chunk at once, but rather generate small parts of the chunk over a few frames. That really helps keep the fps steady no matter how big the chunks are.
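A rough sketch of how spreading the work over frames could look with a Panda3D task; the cubes_to_add iterator and add_cube callback are assumptions standing in for whatever mesh generator is in use.

```python
from direct.task import Task

CUBES_PER_FRAME = 64   # tune this until the frame rate stays steady

def build_chunk_task(cubes_to_add, add_cube, task):
    """Process a small batch of cubes each frame instead of the whole chunk."""
    for _ in range(CUBES_PER_FRAME):
        try:
            cube = next(cubes_to_add)
        except StopIteration:
            return Task.done          # chunk finished
        add_cube(cube)                # whatever per-cube mesh work is needed
    return Task.cont                  # come back next frame for the next batch

# Usage inside a ShowBase app (cubes_to_add is an iterator over cube
# positions, add_cube does the per-cube work; both are assumptions here):
# taskMgr.add(build_chunk_task, 'build-chunk',
#             extraArgs=[cubes_to_add, add_cube], appendTask=True)
```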

And if you’re planning to have big dynamic worlds, you’ll definitely need to split the world into chunks. There are a lot of optimizations that have to be done for this kind of project, no matter what programming language you’re using.

Thanks, that is a nice example, and it renders fast. (I had to rename the texture but otherwise it worked fine.)

Can you specify a license (or say it’s public domain) so we can legally use it in other projects? (I know it’s very small and simple, but it’s nice to have this option whenever there’s some working example code around. I also see it’s partly made from the procedural cubes sample from Panda, so I’m just talking about your additions/changes to that.)

I’m sure it just preallocates memory, so as long as the value is somewhat too large rather than way too large, it should be OK. (Other forum threads talk about how to ask your GL context for a good maximum for this, to optimize it for a particular graphics card.)

It seems to work fine in this code :)

Of course you are right if you are storing all cubes in a volume, as large as what you can typically see in Minecraft (which can be 10 million cubes or more). But as soon as you cut it down to something sparse, e.g. all solid cubes which touch non-solid cubes (which are the only ones you might need to render at a given time, and might number more like 150K), a dict should work fine in terms of storage space. [Edit: of course it’s still way more space than really needed if you optimize it; as always, you’d measure performance and compact it further if necessary.] (So you’d have a compact database of all cubes whose contents were known and loaded, plus a dict of currently rendered cubes whose storage space you don’t worry about since it’s sparse. Of course you’d end up needing lots of other optimizations…)
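As a concrete illustration of that split (the names and the 16-cube chunk size are assumptions): a dense bytearray for every loaded cube, plus a dict holding only the solid cubes that touch air.

```python
SIZE = 16  # chunk dimensions, assumed 16x16x16 as elsewhere in the thread

# Dense storage: one byte per cube for everything loaded, cheap and compact.
chunk = bytearray(SIZE * SIZE * SIZE)

def index(x, y, z):
    return (z * SIZE + y) * SIZE + x

def is_solid(x, y, z):
    if 0 <= x < SIZE and 0 <= y < SIZE and 0 <= z < SIZE:
        return chunk[index(x, y, z)] != 0
    return False

# Sparse storage: only the solid cubes that touch air, keyed by coordinates.
# This is the small set that actually needs render data, so a dict is fine here.
surface = {}

def rebuild_surface():
    surface.clear()
    for x in range(SIZE):
        for y in range(SIZE):
            for z in range(SIZE):
                if not is_solid(x, y, z):
                    continue
                exposed = any(not is_solid(x + dx, y + dy, z + dz)
                              for dx, dy, dz in
                              [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)])
                if exposed:
                    surface[(x, y, z)] = chunk[index(x, y, z)]  # keep the block type
```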

Yeah, it’s public domain. Use it however you like.