Looks great! But the YouTube video was recorded in poor quality: low bitrate and resolution.
I just created a better quality video and edited the link into the original post.
(Also here: https://youtu.be/1sY7sk2fUhI )
I don’t have much experience making videos from screen captures. I am using OBS. I increased the resolution and the bitrate. The pre-upload file is 150 MB now, but I guess if YouTube doesn’t blink I shouldn’t be concerned.
base.movie(namePrefix='OUTFrame/frame', duration=102, fps=60, format='png')
This code will create a sequence of images covering a duration of 1 m 42 s at 60 frames per second. With the fullscreen parameter disabled beforehand, you can set the win-size window resolution for Panda3D as desired in the Config.prc file.
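For example, the relevant Config.prc lines might look like this (the resolution values are illustrative):

```
fullscreen false
win-size 1920 1080
```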
Create a video from the PNG sequence using the free VirtualDub.
The only problem is, the camera will have to be started on the spline.
It’s just the noise-like shimmering of the water in the distance that is very hard to compress efficiently with video codecs, so the quality suffers during those parts of the video. Not a big deal. Using ffmpeg would not necessarily help with the quality compared to OBS.
Wow, very nice!
Please, can you explain how to use it with the dev branch?
Do you use the official dev branch, and is it included in the devel SDK? http://www.panda3d.org/download.php?platform=windows&version=devel&sdk
What is the name of the module (for importing)?
Can you add a basic code example, like P3DModuleBuilder’s:
import panda3d.core  # Make sure you import this first before importing your module
import TestModule  # the compiled extension module produced by P3DModuleBuilder

print(TestModule.multiply(3, 4))  # prints 12
example = TestModule.ExampleClass()
print(example.get_answer())  # prints 42
Thanks in advance.
Hi Huitre39, I assume you are talking about the P3DModule maker. It allows you to write C++ code to do whatever you want and integrate it into Panda, call that code from python and pass things back and forth like textures, images or other variables. Tobspr gives basic usage instructions at the github page:
P3DModule builder should really be part of the main Panda SDK because it’s essential for implementing new CPU-intensive algorithms in a clean way. The alternatives would be hacking the core code of Panda itself or working solely in C++.
You should have already downloaded the SDK and compiled it. I was able to get 1.9 and the dev SDK to work with the module builder. I am not using any code specific to the dev branch.
I was able to get the P3DModule builder example.h (function and class) to work pretty easily. There is a Python build script that does most of the work for you, as long as you have something like VS 2015 (e.g. the free Community edition) installed on Windows 10 (where I am working for this project). I run the build script from a VS “x64 Native Tools” command prompt. The build script lets you choose the name of the module file, e.g. ModuleJW. Some things are hardwired, like the source directory, called “source”. The build script will attempt to compile any C++ you put in there, and it is all incorporated into the one module. You copy ModuleJW.pyd to the working directory where your Panda code is, and you can import it in Python with import ModuleJW. Replace ModuleJW with whatever you call your module.
Moving beyond the simple example: Integrating more complex code with the main parts of Panda is non-trivial so I started with the shaderTerrain class which is nice, self-contained code with well defined usage given by the sample shader terrain python code provided with the Panda SDK and associated panda manual pages.
Most classes have three files with the same name and different extensions for different purposes: .h, .I, and .cxx. I built my own class from it, but since my class was not integrated into Panda’s core code the same way, I had to remove or modify some parts of the code. Most of this was just renaming to avoid clashing with the existing shaderTerrain class. The C++ naming conventions for Panda3D are pretty clear, and the automatic C++ documentation is mostly good (if a little brief on some things).
As a general approach I find that storing data in textures the way shaderTerrain does is very effective. Panda handles getting them onto the GPU for you. The quirkiest thing shaderTerrain does is hijack the add_for_draw method to tweak the terrain geometry data every frame. You can achieve a lot of other frame by frame results without having to do that using tasks. However, since LOD is so directly tied to the camera/projection matrix for each specific frame it makes it hard to do anything else. I tried to avoid doing it but in the end it was the most direct approach.
As for my code itself, it is a bit of a hack at the moment. The C++ does more than it should. A lot of the parameters and settings could and should be set in the python which would avoid a lot of recompiling to try different set-ups. My intention is to reduce it to the bare bones: a simple terrain LOD code with a similar interface to shaderTerrain and have the water simulation live elsewhere. Then they just share the texture that contains the water simulation data and otherwise they are pretty separate pieces of code.
I misspoke.
My question was how we can use your program for testing in Panda3D.
Where can we download the source code, and how do we compile/use it in Panda3D?
The short answer is that the code is not currently available. That is what the last paragraph above is about. The code is a hack right now and I would need to clean it up a lot before I would make it available. Even then, I am not sure I want to release all of it. For example, the water simulation code is nothing to do with Panda and I see it as separate even though I am calling it from Panda.
In the post above I focused on what I did to make it possible to take water heights that can be placed into a texture (e.g. a 256^2 array) and then render that in Panda. You could generate those water heights in many ways, including building an array by hand as many games do.
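As an illustration (not the author’s code), here is a minimal NumPy sketch of building a 256^2 array of water heights by hand, normalized to 0..1 so it could be placed into a texture:

```python
import numpy as np

# Build a 256x256 grid of water heights from two sine waves
# (purely illustrative -- any height source would do).
N = 256
y, x = np.mgrid[0:N, 0:N].astype(np.float32)
heights = 0.5 * np.sin(0.10 * x) + 0.5 * np.sin(0.07 * y)

# Normalize to the 0..1 range expected by a typical height texture.
heights = (heights - heights.min()) / (heights.max() - heights.min())
```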
The basic rendering strategy is effective and would work for a variety of projects. One trick is to instance the terrain node to two separate NodePaths so that it is used twice – once for land and then again for water each frame. I used a different shader input on each NodePath to tell the shader whether it should be rendering land or water on a given instance and this affects which height texture it uses. The shaderTerrain code was quite close to what I needed to do this effectively but benefited from a few improvements. Depending on your hardware you could probably still get high FPS with a more brute force approach.
I’d be interested to know if there is a way to change the shader used on a lower NodePath based on some change on a higher-up one in the scene graph. Right now I have a bunch of tests and branches in the shaders, which seems sub-optimal. In this case it’s about shaders for land and for water. However, more generally it might be about a light-source shader (viewing from the sun to make a shadow map) vs. a final-stage shader (viewed from the player camera) to add shadows. My scene graph would have a variety of final-stage shaders for different models, so I couldn’t just attach one shader to each camera and be done.
Ah, OK!
I thought you had integrated it into Panda3D.
Do you mean overriding the shader on a lower node? This can be done by applying that shader with a higher priority value, for example:
rdb: Thanks for the suggestion. I mean to have the node pick which shader to use depending on which render pass it is in within a single frame. Stepping through the render stages (going light to light to camera, etc.) is entirely internal to Panda, AFAIK, so my Python has no opportunity to change any priorities mid-frame using the set-shader-with-priority mechanism.
For example, if I am in the light casting pass, I have a simpler shader which just sets the shadowmap and if I am in a later pass I use a more complex shader (that uses the shadowmap values and does visual effects). So the priority would have to be tied to the render pass in some way to switch over shaders for each node. I have several shaders (some with tessellation stages and some without) so I can’t just have the switch over occur by setting different shaders on the light/camera nodes themselves.
One approach would be to replicate the entire scenegraph for each light/camera using instances of nodes on each and replace the shaders as needed along each scene graph. That seems a bit clunky but would work.
Right now I am using a shader input to tell me whether the camera is the light or the final viewer camera, and deciding inside the shader which pass I am in. This means I test which pass it is for every single fragment and vertex, which seems wasteful. There might, of course, be some magic in the GLSL compiler that means there is no real issue to worry about, and in practice it essentially just does the test once per node.
Oh, I see what you mean. If you use different cameras for each pass, then you can use setInitialState on the camera to set a particular set of render attributes with corresponding override values that would override the render attributes on the scene.

If you need per-object control over this, it can be accomplished using tag states: you set a tag on an object indicating that it should be subject to a particular tag state, and then set a RenderState on the camera with setTagState indicating that when the object is being rendered by that camera and has that specific tag value, it should take the values in that state as overrides.
It’s a bit clunky, I will admit; we do intend to introduce a better mechanism for this sort of thing.
Let me know if you need additional help with this.
Here is the same simulation/Panda code, now with screen space reflections (SSR) and shadows. I did the SSR at full window resolution which hammers the frame rate a bit (down to about 40-60 fps now even after I optimized the shader – seems limited by the number of texture look-ups). Standard SSR seems to be O(Screen_height^2 x Screen_width) speed-wise. I was thinking it should be possible to get close to O(log2(Screen_height) x Screen_height x Screen_width). As a short cut I’ll probably go to a lower res reflection buffer. It shouldn’t have too much effect on the top two pictures (where I have ripples = procedural noise tweaking the water surface normals). However, in the bottom two images with a still water surface I think you’d notice.
One thing that’s changed is that there are no more dramatic specular highlights (super-bright “sun sparkles” on the distant waves) compared to the earlier images (before reflection). That is because in the early images I over-saturated the sun brightness by hand, by a factor of 40 or so, to make them come out. In real life the dynamic range between looking at the sun and at the sky is pretty extreme. With actual calculated reflections based on pixel values 0-255 in the image itself, the dynamic range is too low for the “sun sparkles” to come out. I think I can add them back in by hand, though.
For the following two images I turned off the small ripples to have a better look at the reflections. The image below is fun because the curved water surface distorts the reflection of the spiky peaks. You get multiple images and other fun optical stuff as well.
This one below is included mainly just ’cause it’s pretty. However, you can see all five effects: reflections, shadows, transparency, white water, and procedural suspended flow particles all contributing to the final colours of the water pixels. Compared to the first images (before SSR), the water now has no faked intrinsic colour, just what it gets from its surroundings.
This is looking better and better, I hope you’ll share the code for this one day.
I know this is a very old post. Lol.
However, I was looking for a video showing a good water system in Panda3D, and your video was the first I saw… and what an inspiration!
So I just wanted to commend you on such a good job. Not only with the simulation, but getting this showcased, in the top search results.
It inspires the use of Panda3D, which I had not used since 1.5 when my pc crashed, and I lost some files to a game I was working on.
I decided to revive that game now, and it’s good to see Panda3D is still in the running. Good engine.
I haven’t found anything yet with wind, and a good cloth simulation, but maybe they are out there, and I haven’t found them.
One thing, though, where I think you can greatly improve the simulation and make it more realistic, is if you can make the water at the top run off more quickly when it is expected to.
For example, from around 0:30 to 0:40 in the video, the water on top of the rock does not drain for 10 seconds, but it should have been gone in 2.
However, I don’t know how much time you have, and how detailed you want to get. I’m a perfectionist, so don’t mind me.
Again. Great job.
Once upon a time I built a sailcloth simulation using, I believe, Bullet Softbody Patch — Panda3D Manual with wind as a time-dependent linear vector force on the softbody patch.
Sounds creative. However, I’m not sure how that will work for hair, and robes, etc. Unless there is an extreme amount of work required.
For hair and robe physics one might use bones, something like this: Hair physics/simulation
Yup. Extreme amount of work required. Thanks.
No problem. And yes, building a fully fledged video game from raw code is very involved work, especially without years of experience doing that sort of thing.