Hi, I have been playing around with Panda3D for a couple of months and I thought I’d share current results from my project. I have been working on ways to simulate water in real time in an open-world-type setting. By simulate, I mean the water actually flows from A to B, fills depressions and makes its own path – not the pretty but fixed, hand-placed rivers of Zelda, Skyrim, HZD, etc. You can see it in action here:
To stress the simulation engine a bit, I made an island rise out of the sea, carrying up a million tonnes of water that then has to flow off. This creates the transient flow features in this image:
I uploaded another video https://youtu.be/NTpyWaVdN2U of a simpler case – holding a few hundred tonnes of water in place until I let it drop (i.e. the simulation is allowed to start) about 4 seconds in. As an aside: I put a Panda3D logo on the videos in the hope it brings more traffic here if anyone watches them.
My project uses C++ code based on shaderTerrain, incorporated as a Python module into the dev branch of Panda3D using tobspr’s https://github.com/tobspr/P3DModuleBuilder on Windows 10 (VS 2015).
I extended it using GLSL tessellation shaders for LOD on the ground and water surfaces, plus animated sea tiles out to the horizon. It still reads an arbitrary heightmap from a PNG file.
Water simulation has been a hobby of mine for a few years. I do 3D fluid simulations professionally (for super-rarefied gases in galaxies and star systems, so water is a fun variation for me). Here I used the shallow water equations (2D) with strong drag and a highly stable solver, purely for speed. I can get 200 FPS in HD on my Dell laptop (GTX 980M) with a 256^2 terrain for now, but I still have some things I hope to add.
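To give a feel for what a shallow-water update looks like, here is a minimal NumPy sketch of one explicit time step with linear drag. This is entirely my own illustration (a basic scheme on a periodic grid), not the stabilized solver from the project:

```python
import numpy as np

def shallow_water_step(h, u, v, ground, dt, dx, g=9.81, drag=0.5):
    """One explicit step of the 2D shallow water equations with linear drag.

    h: water depth, u/v: depth-averaged velocities, ground: terrain height.
    A minimal illustrative scheme (periodic boundaries via np.roll), not
    the highly stable solver described in the post.
    """
    eta = ground + h                      # free-surface height
    # Momentum: accelerate down the surface gradient, then apply drag.
    u = u - dt * g * (np.roll(eta, -1, axis=1) - eta) / dx
    v = v - dt * g * (np.roll(eta, -1, axis=0) - eta) / dx
    damp = max(0.0, 1.0 - drag * dt)
    u *= damp
    v *= damp
    # Continuity: depth changes by the divergence of the flux (h*u, h*v).
    flux_x = h * u
    flux_y = h * v
    h = h - dt * ((flux_x - np.roll(flux_x, 1, axis=1))
                  + (flux_y - np.roll(flux_y, 1, axis=0))) / dx
    return np.maximum(h, 0.0), u, v

# Tiny demo: a column of water on flat ground starts to spread.
h = np.zeros((32, 32)); h[16, 16] = 1.0
u = np.zeros((32, 32)); v = np.zeros((32, 32))
ground = np.zeros((32, 32))
h, u, v = shallow_water_step(h, u, v, ground, dt=0.01, dx=1.0)
```

The strong drag term is what keeps a scheme like this from oscillating wildly at large time steps; the real solver trades some physical accuracy for that stability and speed.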
As you might guess, there are a lot of visual effects in the shaders. These include 3D texture noise for normals (for specular reflection), foam effects and flow tracers.

I saw some comments on the dev blog that shaders are hard and that users need to be insulated from them. I’d like to make the complete opposite plea. Shaders are awesome: they give amazing bang for the buck, both in programming time per neat effect and in the quality of effects you can produce at high FPS. To be honest, it was tobspr’s neat RenderPipeline and other shader-heavy showcases that convinced me that Panda wasn’t dead and was worth investing time in. Otherwise I was looking at the rather old game demos and screenshots and wondering if Panda was a good choice or too far behind. I am not an expert in Python or shaders, but a few weeks of playing convinced me really quickly that shaders are the way forward. I think Python and shaders are a killer combination, and the goal should be to expose as much of the raw vertex and image data in Panda3D as possible for easy use by shaders.
I intend to play around with other things like shadows, reflections, actual variation in ground types and trees in the same code framework as I get time.
I just created a better quality video and edited the link into the original post.
(Also here: https://youtu.be/1sY7sk2fUhI )
I don’t have much experience making videos from screen captures. I am using OBS. I increased the resolution and the bitrate; the pre-upload file is 150 MB now, but I guess if YouTube doesn’t blink I shouldn’t be concerned.
This code will create a sequence of images covering a duration of 1 m 42 s, computed at 60 frames per second. With the fullscreen parameter disabled, you can set the win-size window resolution for Panda3D as desired in the Config.prc file.
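For reference, a minimal sketch of the Config.prc settings mentioned above (the exact resolution is up to you):

```
# Config.prc – windowed mode at a fixed capture resolution
fullscreen #f
win-size 1920 1080
```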
Create a video from the PNGs using the free VirtualDub.
The only problem is that the camera has to start on the spline.
It’s just the noise-like shimmering of the water in the distance that is very hard to compress efficiently with video codecs, so the quality suffers during those parts of the video. Not a big deal. Using base.movie with ffmpeg would not necessarily help with the quality compared to OBS.
Hi Huitre39, I assume you are talking about P3DModuleBuilder. It allows you to write C++ code to do whatever you want and integrate it into Panda, call that code from Python, and pass things back and forth such as textures, images or other variables. tobspr gives basic usage instructions on the GitHub page:
P3DModuleBuilder should really be part of the main Panda SDK, because it’s essential for implementing new CPU-intensive algorithms in a clean way. The alternatives would be hacking the core code of Panda itself or working solely in C++.
You should have already downloaded the SDK and compiled it. I was able to get 1.9 and the dev SDK to work with the module builder. I am not using any code specific to the dev branch.
I was able to get the P3DModuleBuilder example.h (function and class) to work pretty easily. There is a Python build script that does most of the work for you, as long as you have something like VS 2015 (e.g. the free Community version) installed on Windows 10 (where I am working for this project). I run the build script from a VS “x64 native tools” command prompt. The build script lets you choose the name of the module file, e.g. ModuleJW. Some things are hardwired, like the source directory, called “source”. The build script will attempt to compile any C++ you put in there, and it is all incorporated into the one module. You copy the ModuleJW.pyd to the working directory where your Panda code is, and you can import it with import ModuleJW in Python. Replace ModuleJW with whatever you call your module.
Moving beyond the simple example: Integrating more complex code with the main parts of Panda is non-trivial so I started with the shaderTerrain class which is nice, self-contained code with well defined usage given by the sample shader terrain python code provided with the Panda SDK and associated panda manual pages.
Most classes have three files with the same name and three extensions for different purposes: .h, .I and .cxx. I built my own class from it, but since my class was not integrated into Panda’s core code the same way, I had to remove or modify some parts of the code. Most of this was just renaming to avoid clashing with the existing shaderTerrain class. The C++ naming conventions for Panda3D are pretty clear, and the automatic C++ documentation is mostly good (if a little brief on some things).
As a general approach, I find that storing data in textures the way shaderTerrain does is very effective; Panda handles getting them onto the GPU for you. The quirkiest thing shaderTerrain does is hijack the add_for_draw method to tweak the terrain geometry data every frame. You can achieve a lot of other frame-by-frame results without doing that by using tasks. However, since LOD is so directly tied to the camera/projection matrix of each specific frame, it is hard to handle it any other way. I tried to avoid it, but in the end it was the most direct approach.
As for my code itself, it is a bit of a hack at the moment. The C++ does more than it should. A lot of the parameters and settings could and should be set in the Python, which would avoid a lot of recompiling to try different set-ups. My intention is to reduce it to the bare bones: a simple terrain LOD code with a similar interface to shaderTerrain, with the water simulation living elsewhere. Then they just share the texture that contains the water simulation data and are otherwise pretty separate pieces of code.
The short answer is that the code is not currently available. That is what the last paragraph above is about. The code is a hack right now, and I would need to clean it up a lot before I would make it available. Even then, I am not sure I want to release all of it. For example, the water simulation code has nothing to do with Panda, and I see it as separate even though I am calling it from Panda.
In the post above I focused on what I did to make it possible to take water heights that can be placed into a texture (e.g. a 256^2 array) and then render them in Panda. You could generate those water heights in many ways, including building an array by hand, as many games do.
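As an illustration of that last point (entirely my own sketch, not code from the project), here is one way to build such a 256^2 array of water heights with NumPy and pack it as raw floats that could then be loaded into a single-channel texture:

```python
import numpy as np

def make_water_heights(n=256, seed=0):
    """Build an n x n array of water heights (a puddle filling a bowl-shaped
    depression) and return it as 32-bit floats, the kind of data you could
    hand to a single-channel float texture."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:n, 0:n] / (n - 1)
    bowl = (x - 0.5) ** 2 + (y - 0.5) ** 2        # simple depression
    heights = np.clip(0.05 - bowl, 0.0, None)      # water fills the bowl to 0.05
    heights += 0.001 * rng.random((n, n)) * (heights > 0)  # ripple noise on wet cells
    return heights.astype(np.float32)

heights = make_water_heights()
raw = heights.tobytes()   # raw float32 data, e.g. for Texture.setRamImage in Panda3D
```

The function name and the bowl shape are just for the example; the point is that anything producing an n x n float array can feed the same rendering path.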
The basic rendering strategy is effective and would work for a variety of projects. One trick is to instance the terrain node to two separate NodePaths so that it is used twice – once for land and then again for water each frame. I used a different shader input on each NodePath to tell the shader whether it should be rendering land or water on a given instance and this affects which height texture it uses. The shaderTerrain code was quite close to what I needed to do this effectively but benefited from a few improvements. Depending on your hardware you could probably still get high FPS with a more brute force approach.
I’d be interested to know if there is a way to change the shader used on a lower NodePath based on some change on a higher-up one in the scene graph. Right now I have a bunch of tests and branches in the shaders, which seems sub-optimal. In this case it’s about shaders for land and for water. However, more generally it might be about a light-source shader (viewing from the sun to make a shadowmap) vs. a final-stage shader (viewed from the player camera) to add shadows. My scene graph would have a variety of final-stage shaders for different models, so I couldn’t just attach one shader to each camera and be done.
rdb: Thanks for the suggestion. I mean for the node to pick which shader to use depending on which render pass it is in within a single frame. Stepping through render stages (going light to light to camera, etc.) is entirely internal to Panda AFAIK, so my Python has no opportunity to change any priorities mid-frame using the set-shader-with-priority mechanism.
For example, if I am in the light casting pass, I have a simpler shader which just sets the shadowmap and if I am in a later pass I use a more complex shader (that uses the shadowmap values and does visual effects). So the priority would have to be tied to the render pass in some way to switch over shaders for each node. I have several shaders (some with tessellation stages and some without) so I can’t just have the switch over occur by setting different shaders on the light/camera nodes themselves.
One approach would be to replicate the entire scenegraph for each light/camera using instances of nodes on each and replace the shaders as needed along each scene graph. That seems a bit clunky but would work.
Right now I am using a shader input to tell me whether the camera is the light or the final viewer camera, and deciding inside the shader which pass I am in. This means I test which pass it is for every single fragment and vertex, which seems wasteful. There might be some magic in the GLSL compiler that means there is no real issue to worry about, of course, and it essentially just does the test once per node in practice.
Oh, I see what you mean. If you use different cameras for each pass, then you can use setInitialState on the camera to set a particular set of render attributes with corresponding override values that would override the render attributes on the scene.
If you need per-object control over this, this could be accomplished by using tag states; you can set a tag on an object indicating that it should be subject to a particular tag state, and then set a RenderState on the camera with setTagState that indicates that when the object is being rendered by that camera, and has that specific tag value, it should take the values in that state as overrides.
It’s a bit clunky, I will admit; we do intend to introduce a better mechanism for this sort of thing.
Let me know if you need additional help with this.
Here is the same simulation/Panda code, now with screen-space reflections (SSR) and shadows. I did the SSR at full window resolution, which hammers the frame rate a bit (down to about 40–60 FPS now, even after I optimized the shader – it seems limited by the number of texture look-ups). Standard SSR seems to be O(Screen_height^2 x Screen_width) speed-wise; I was thinking it should be possible to get close to O(log2(Screen_height) x Screen_height x Screen_width). As a shortcut I’ll probably go to a lower-resolution reflection buffer. It shouldn’t have much effect on the top two pictures (where I have ripples = procedural noise tweaking the water surface normals). However, in the bottom two images, with a still water surface, I think you’d notice.
One thing that has changed is that there are no more dramatic specular highlights (super-bright “sun sparkles” on the distant waves) compared to the earlier images (before reflection). That is because in the early images I over-saturated the sun brightness by hand, by a factor of 40 or so, to make them come out. In real life the dynamic range between looking at the sun and at the sky is pretty extreme. With actual calculated reflections based on pixel values 0–255 in the image itself, the dynamic range is too low for “sun sparkles” to come out. I think I can add them back in by hand, though.
For the following two images I turned off the small ripples to have a better look at the reflections. The image below is fun because the curved water surface distorts the reflection of the spiky peaks. You get multiple images and other fun optical stuff as well.
This one below is included mainly just because it’s pretty. However, you can see all five effects (reflections, shadows, transparency, white water and procedural suspended flow particles) contributing to the final colours of the water pixels. Compared to the first images (before SSR), the water now has no faked intrinsic colour, just what it gets from its surroundings.
I know this is a very old post. Lol.
However, I was looking for a video showing a good water system in Panda3D, and your video was the first I saw… and what an inspiration!
So I just wanted to commend you on such a good job – not only the simulation, but also getting it showcased in the top search results.
It inspires the use of Panda3D, which I had not used since 1.5 when my pc crashed, and I lost some files to a game I was working on.
I decided to revive that game now, and it’s good to see Panda3D is still in the running. Good engine.
I haven’t found anything yet with wind, and a good cloth simulation, but maybe they are out there, and I haven’t found them.
One thing, though, where I think you can greatly improve the simulation and make it more realistic: make the water at the top run off more quickly when it is expected to.
For example, from around 0:30 to 0:40 in the video, the water on top of the rock takes 10 seconds to drain, but it should have been gone in 2.
However, I don’t know how much time you have, and how detailed you want to get. I’m a perfectionist, so don’t mind me.
Again. Great job.