Minecraft style terrain generation (new issues)

Hey all,

I don’t know if this is the right subforum for this, as I don’t really have any code to show; this is a more abstract question. I’ve become enamored with how versatile Minecraft’s terrain generation is, and I’m interested in building something similar to play with genetic algorithms and artificial life.

I’ve done some reading on how this is accomplished in Minecraft, and honestly, I don’t see much room for improvement: it’s basically a 3D Perlin noise function (which Panda certainly supports!). To start, just to learn a bit more about the engine, I made a very simple 10x10xN grid. For each space on the second level I roll a random int between 0 and 2, so that I have a solid bottom: if the value is 0, I instance no cube; if it’s 1 or 2, I do.

Once I get to building the next layer, I only place a cube if there is a block below it (everything is stored in a multi-dimensional array, so I can do something like `if chunk[x][y][z - 1] == 0: don’t create a cube`). But this gives ugly, jagged terrain, which brings me back to the Perlin noise function used by Minecraft…
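In case it helps anyone, the rule above looks roughly like this in plain Python (nested lists as the multi-dimensional array; the sizes and the 0–2 roll are from my description above):

```python
from random import randint

SIZE = 10
# chunk[x][y][z]: 1 = cube instanced, 0 = empty
chunk = [[[0, 0, 0] for _y in range(SIZE)] for _x in range(SIZE)]

for x in range(SIZE):
    for y in range(SIZE):
        chunk[x][y][0] = 1        # level 0: always a solid bottom
        if randint(0, 2):         # 0 -> no cube; 1 or 2 -> cube
            chunk[x][y][1] = 1
            if randint(0, 2):     # only stack where a block exists below
                chunk[x][y][2] = 1
```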

I have figured out how to create a geom plane and apply Perlin noise to it; but what I can’t for the life of me figure out is how to use the values of the Perlin noise function to determine where to place my instanced cubes.

I assume I need to somehow reduce the resolution of the function but increase the scale, so that it would look sort of “blocky” if rendered, then load each pixel value into a multidimensional array and draw cubes based on the function’s result.

So…is there a way in Panda3D that I can save the pixel data and access each pixel of the perlin noise function to accomplish this?

edit to add for clarity: specifically, saving the pixel data for the 3D Perlin noise function. I’m sure there are plenty of Python libraries for reading image data pixel by pixel (though I’ve yet to look into them, as my interest is the 3D function).

Thanks in advance!


First off, I don’t have Panda on this computer, so I haven’t been able to test this out.

This panda3d.org/dox/python/html/ … l#_details page may be what you’re looking for (though you may have already found it). From what I understand reading it, it ought to work for what you have in mind.

First create the Perlin noise object with set sizes for x, y and z, then ask it for the values at specific points. X and Y would be 10, and Z would be N for you. Then play around with the scale a bit until you’re happy :slight_smile:

Alternatively, if all else fails, this webstaff.itn.liu.se/~stegu/TNM02 … h-faq.html site explains Perlin noise, which isn’t all that complicated.

Hope that helps.

Hehe, I feel dumb. So, what I was trying to do was:

test = PerlinNoise3(10,10,10)

and this returned

DirectStart: Starting the game.
Known pipe types:
(all display modules loaded.)
<libpanda.PerlinNoise3 object at 0x4d47b30>

What I was too tired to realize was that I needed to extract a specific data point from the noise object, such as:

test = PerlinNoise3(10,10,10)
test.noise(5, 5, 5)

which gives me what I would expect: a single noise value (roughly in the -1 to 1 range).


Note that if you simply want to create an image filled with perlin noise, you can create a PNMImage object and initialise it the usual way, and then call perlinNoiseFill. Refer to the API reference for more information.

Excuse me if this is a silly question, please:

is the Perlin noise algorithm seamless? (Or rather, does Panda have a way to make it seamless?)

Thank you,

You mean seamlessly repeatable? I don’t know; I don’t think so. There are various basic ways to make any image seamless, though I’m not sure how much they affect the quality of the resulting image when used as a heightmap.

The simplest way I can think of is to flip the image once horizontally, once vertically, and once in both directions. You’ll then have 4 images that you can average together into one that is seamlessly repeatable. You’ll probably need to enhance the contrast afterwards because of the washed-out detail.
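Roughly, in plain Python (nested lists standing in for the image; a real implementation would do the same with PNMImage or PIL):

```python
def make_tileable(img):
    """Average an image with its horizontal, vertical, and double flips.

    Opposite edges end up identical, so the result tiles seamlessly,
    at the cost of washed-out detail (hence the contrast boost afterwards).
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = (img[y][x]
                         + img[y][w - 1 - x]          # horizontal flip
                         + img[h - 1 - y][x]          # vertical flip
                         + img[h - 1 - y][w - 1 - x]  # both flips
                         ) / 4.0
    return out
```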

I have more questions here (and have edited the title to reflect that :slight_smile: )

So, for everyone’s benefit, here’s what I’ve done so far:


import direct.directbase.DirectStart
from createChunk import *

chunk1 = createChunk()
# build a 10x10x10 chunk (ranges are an example) and start the main loop
chunk1.buildChunk(0, 10, 0, 10, 0, 10)

run()



import direct.directbase.DirectStart
from random import randint
from pandac.PandaModules import *

class createChunk():
	def __init__(self):
		self.block = {}
		self.blocks = {}
		self.world = PerlinNoise3(30, 30, 15)

	def setBlock(self, x, y, z):
		# Load a cube and place it at the given grid position.
		self.box = loader.loadModel("assets/models/cube.x")
		self.box.setPos(x, y, z)
		self.box.reparentTo(render)

	def buildChunk(self, x1, x2, y1, y2, z1, z2):
		for x in range(x1, x2):
			self.blocks[x] = {}
			for y in range(y1, y2):
				self.blocks[x][y] = {}
				for z in range(z1, z2):
					# Sample the noise field at this grid point.
					self.blocks[x][y][z] = self.world.noise(x, y, z)
					if z == 0:
						if self.blocks[x][y][z] > 0.22:
							self.setBlock(x, y, z)
So, these are the relevant parts thus far. This creates a nice little scene (my blocks are 1 Panda unit in size to keep things simple). But I have a few issues:

  1. Is Panda actually attempting to render my non-visible geometry? I’m not sure how to check, but my performance isn’t very good when I increase the scale of my “chunk.” If Panda is trying to render my non-visible geometry, is there a way to turn that off?

  2. Any advice on how to “load” or “unload” chunks would be good. Since the Perlin noise function is generated once when I start the engine, I should be able to dynamically re-build chunks at will; however, as my start-up time is quite slow, I’m concerned I’ll hit performance issues well before I get there.

Those are my two main questions right now. I’m not going to put a solved tag in the title, as I’ll just keep asking questions as they come up, so long as they’re related to the terrain generation. (If mods and users are cool with that; I’m new to Panda, so I’m bound to have lots of questions and don’t want to spam the forum with hundreds of new topics due to my inexperience.)

Okay, the first problem was solved by building one huge terrain and one tiny terrain. Looking at the tiny one I got extremely high fps; looking at the bigger one, much lower. So Panda is not rendering outside my field of view; but is it attempting to render hidden geometry? (My suspicion is it’s not, I just want to confirm.)

My second question above I solved by not generating my Perlin noise from within my createChunk class; that’s its own thing now.

It looks like my performance issue has been solved through instancing; but I may be coding myself into a corner in that regard. Time will tell!

Perlin noise is not seamless. But you can create a massive Perlin noise object in one class and save it to a text file, then write a function to do x,y,z lookups of the values if you’re trying to use it like I am. (That’s what I’m doing.)
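For what it’s worth, that precompute-and-look-up idea might look like this (using JSON as the text format; `noise_fn` is whatever you’re sampling, e.g. a PerlinNoise3 object’s noise method):

```python
import json

def save_noise(noise_fn, sx, sy, sz, path):
    """Precompute noise_fn(x, y, z) over the whole grid and dump it to a file."""
    grid = [[[noise_fn(x, y, z) for z in range(sz)]
             for y in range(sy)] for x in range(sx)]
    with open(path, "w") as f:
        json.dump(grid, f)

def load_noise(path):
    """Load the grid back and return a lookup function for x, y, z."""
    with open(path) as f:
        grid = json.load(f)
    return lambda x, y, z: grid[x][y][z]
```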

Your first question is, I’m pretty sure, the same thing I had problems with a while ago: Panda dislikes having more than around 200 visible nodes on screen. It doesn’t draw the hidden ones (so your GPU is safe), but it still has to process each node every frame, so your CPU takes a hit.

You can use node.flattenStrong() to fix that (don’t do it to the whole world, because then everything will be rendered, visible or not), but then you can’t edit the individual cubes. What would probably work best is keeping the entire structure of the world in a hidden node (one not parented to render), then copying chunks of it into chunk nodes whenever they pop into view (range) or get changed.

All in all it’s more complicated, but with Minecraft-style terrain you’ll be amazed how fast the visible node count rises if you keep each cube separate.

Do you happen to know if performance can be improved by breaking chunks into their own nodes while still having multiple chunks parented to the render node? I guess what I’m asking is: is the bottleneck having a lot of geometry parented to a single node, or having a lot of geometry parented to the render node?

Does Panda have a way to check “proximity”? I don’t really want to set up a massive collision sphere that collides with hidden geometry to determine whether I should render it; that seems overly complicated, plus I’m trying to avoid collision detection until I get my terrain engine worked out a bit more :wink:

Also, as I stated above, I plan on doing some artificial life/genetic algorithm type stuff, but the actual gameplay is going to be survival horror. Does anyone happen to know whether the player carrying a “flashlight” (a spotlight parented to the camera, looking where the camera looks) would count as the geometry not being visible? Or am I going to further hurt performance as Panda figures out how the light affects the geometry? I plan on implementing this anyway, but if it saves on performance I might do it sooner rather than later.

And finally (for my performance-tuning questions): the texture I’m using for my cubes is a 64x64 .png file. Are there certain image formats that Panda handles better than others? I know that if I create a new cube for each location, I load a new cube + texture into memory. When I use instancing, I know the geometry is instanced; is the texture also instanced? I imagine having thousands of 64x64 textures loaded is pretty memory-intensive. Maybe a .gif would be better at that size?

Yes, I’ve noticed that :laughing:



Here’s a picture of what I’ve got. The textures are temporary and ugly (and not UV’d perfectly, hence the seams)…but if anyone reading this wants to know how well Perlin noise works for generating terrain…

Another issue:

When I export to .x format from Blender I can’t use a .gif or .jpg image, only .png.

Opening the .x file and manually editing the texture file reference from .jpg back to .png also doesn’t work; I have to re-export from Blender for some reason. What?

Edit: I can’t get any textures to load now. Strange…

First off, with the textures: I’m pretty sure they are instanced as well (I’d be very surprised if they weren’t).

Neither having lots of geometry in one node nor having lots of geometry parented to render should be the bottleneck. What I meant was that having many model or Geom-type nodes (or visible collision ones) in the node tree creates issues. The flattenStrong command doesn’t simplify geometry; rather, it merges a bunch of separate nodes into one.

What I’d do (I think) is the following:
Have a ‘parent’ node, World or such, with the instances of the cubes on it, as you had before (that is, each cube in its own node). Then all the ‘chunks’, let’s say 16x16 cubes each, around the player get created. Basically this:

O O O O O
O O O O O
O O X O O
O O O O O
O O O O O

where X is the chunk that the player is on, and the Os are the ones around him. This gives a total of 25 chunks, or 25 nodes.
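Working out which 25 chunks those are is just integer math on the player’s position; a sketch (the chunk size and radius here are assumptions):

```python
CHUNK_SIZE = 16

def active_chunks(px, py, radius=2):
    """Return the chunk coordinates of the (2*radius+1)^2 grid of chunks
    centred on the chunk the player is standing in."""
    cx, cy = int(px) // CHUNK_SIZE, int(py) // CHUNK_SIZE
    return [(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)]
```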

To build each chunk, copy every cube that’s supposed to be in it into the chunk node, then call flattenStrong on the chunk. From what I’ve tested, this is surprisingly quick (though I tested with a 2D array of cubes, not a 3D one).

As for the flashlight: your GPU should not be the problem, and one spotlight shouldn’t kill it. Also, since it’s attached to the camera, you don’t need shadows for it, which makes it cheaper.

Hope that helps.

Yes, that helps a lot! I probably have enough to play with terrain generation for a while now! If only I can get my textures to behave properly on my Mac…


Does Panda keep a texture cache somewhere that I can purge? I replaced texture.png in Blender with tex2.png. The change didn’t take effect even though I verified that the contents of the .x file were correct.

I was able to delete texture.png and rename tex2.png to texture.png, and now my texture is rendering. Does anyone have any idea what’s going on?

This might help you: [Minecraft-like chunk generator]

It doesn’t allow for separate textures, though. All textures should be in one big image like in Minecraft.

Heh, that was the first thing I picked apart for my code. The problem with it is having to rebuild the entire geometry whenever it’s manipulated. I’m just as well off using flattenStrong in conjunction with building the local terrain right around the player, which can be modified via user input.

Are you using Minecraft-like chunks (16x16x128) or cubic chunks (16x16x16)? If not, you may find the latter a helpful optimization. You can even LOD a far-away cubic chunk into four cubes or one giant cube.
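If you go that route, choosing a detail level per chunk can be as simple as distance thresholds (the cut-off values here are invented):

```python
def chunk_lod(distance, chunk_size=16):
    """0 = full cubes, 1 = four merged blocks, 2 = one giant cube."""
    if distance < 4 * chunk_size:
        return 0
    if distance < 8 * chunk_size:
        return 1
    return 2
```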

Well, I’m still tweaking a lot of values. The way I’ve written my code, it’s very easy to change the chunk size, so I’ve just been playing with the values. I’ll keep cubic chunks in mind; I’m currently doing 10x10x10.

chunk1 = createChunk()

This is currently how I’m generating chunks for testing, but I don’t think it’s very scalable, as I’ll have to start worrying about keeping track of things like chunk95467 and manipulating that.
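One way out of the chunk95467 problem is to key chunks by their grid coordinates in a dict rather than naming variables; a sketch (with a stand-in class in place of createChunk):

```python
class Chunk:
    """Stand-in for the createChunk class above."""
    def __init__(self, cx, cy):
        self.cx, self.cy = cx, cy

chunks = {}

def get_chunk(cx, cy):
    # Create-on-demand: no per-chunk variables to keep track of.
    if (cx, cy) not in chunks:
        chunks[(cx, cy)] = Chunk(cx, cy)
    return chunks[(cx, cy)]
```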

I’m still designing in my head how to make it a bit easier on myself :slight_smile:


I don’t really understand the problem. You wouldn’t need to regenerate the entire landscape, if that’s what you’re saying, only the chunk that has been altered. Of course you’d need to do that asynchronously if you don’t want any ugly framerate drops. For a 16x16x16 chunk it takes about 0.04s.

So, what I’m doing is something like the following:

X X X X X
X X X X X
X X O X X
X X X X X
X X X X X

X = terrain chunk that has been flattened with flattenStrong
O = terrain chunk that the player is on, no flattenStrong applied

This lets me directly interact with blocks on the current chunk without having to redraw the entire chunk every time a change is made. When the player gets close to an adjacent chunk, it is redrawn without flattenStrong applied. Once the player has left the old chunk, it gets flattened.

If I used the method you’re referring to, then I’d have to redraw the terrain for the chunk the player is on every time a change was made.
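The bookkeeping for that scheme might be sketched like this (pure Python; `flattened` just records which chunks currently have flattenStrong applied, with the actual rebuild/flatten calls left out):

```python
def set_live_chunk(state, new_chunk):
    """state['live'] is the one unflattened chunk the player stands on.
    Flatten the chunk being left; rebuild the one being entered unflattened."""
    old = state.get("live")
    if old == new_chunk:
        return state
    if old is not None:
        state["flattened"].add(old)        # player left it: flatten it
    state["flattened"].discard(new_chunk)  # rebuild unflattened for editing
    state["live"] = new_chunk
    return state
```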