Usage of Fog

Is Fog safe to use?

I can set it and even get good results, but there is an error message in the console that says,

“pgraph(error): shader generator does not support fog yet”

So why am I seeing a nice fog effect, then? Should I disable fog? If it’s going to cause problems, then it’s best to disable it (although that would be sad, since fog can do so much for an environment).

In 1.7.2, the shader generator did not support fog, so you could either (a) use fog or (b) use the auto-shader, but not both on the same node. If you’re seeing fog, it must be on nodes that don’t have the auto-shader applied. If you’re getting this error message, it must be that at least some of the nodes that you are applying fog to also have the auto-shader applied.

For the upcoming 1.8.0, I believe rdb has added fog support to the shader generator, so this is no longer an issue. You could install the buildbot release to get an interim build to develop with in the meantime.


Will Panda3D 1.8 be released within 2012? And will any apps made with 1.7 be compatible?

Lastly, will 1.8 support auto DOF?

We don’t have a timeline for release, but I think it is likely that 1.8 will be ready before the end of 2011. Like all Panda3D releases so far, it will be largely compatible with previous releases. As I said, you can try out the buildbot release to get a preview of what’s coming.

No one is working on auto DOF in Panda at the moment, to my knowledge. That’s the sort of thing that can be implemented in the application layer, with effort, but it isn’t part of the engine. However, if you feel this feature is important and you’d like to help add it, we’d be happy to accept contributions.


Can multiple Fog nodes be set within a scene? I see no indication in the manual that it can be done.

Yes, of course. Like any other attribute, you can apply fog to render, which applies it globally; or you can apply it to any subnode, which applies it only to the nodes at that point and below. If you apply it locally, you can apply a different fog attribute to different nodes.


Hello. I just tried the Windows buildbot version of Panda3D, and fog is still not supported by the ShaderGenerator.
I understand that it might not be included in the latest build yet, but I thought I’d let you know in case there was a problem.

Huh? I don’t think I did, nor do I remember saying I did/would.

Adding exponential fog support wouldn’t be hard, but it’d probably be inefficient compared to a post-processing filter, which would be fairly easy to write. In the simplest case, one could use a simple lerp instruction with the depth as factor.
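To illustrate that lerp-with-depth idea in plain Python (the constants are arbitrary; a real post-processing filter would do this per pixel on the GPU, reading depth from the depth buffer):

```python
import math

def exp_fog_factor(depth, density):
    """Classic exponential fog: 0 = no fog, approaching 1 = fully fogged."""
    return 1.0 - math.exp(-density * depth)

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def apply_fog(scene_color, fog_color, depth, density=0.05):
    # The filter blends the rendered colour toward the fog colour,
    # using the fog factor derived from depth as the lerp weight.
    return lerp(scene_color, fog_color, exp_fog_factor(depth, density))

near = apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), depth=1.0)
far = apply_fog((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), depth=200.0)
# A distant fragment ends up much closer to the fog colour than a near one.
```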

But I’ll see what I can do.

Oh, my apologies. I’m sorry, I don’t know why I thought that, then. :frowning:


No worries. I did it anyway and just committed the changes. All three fog modes are now supported by the shader generator, and they produce an effect identical to the fixed-function pipeline fog.
You’ll need to wait a day or so until the buildbots pick up my changes.

I haven’t tested DX9, just OpenGL.

Wow, I only have to imagine that something is done, and boom, it’s done! :slight_smile:



I’ve been waiting for auto-shader fog since September 2009. And it’s not even Christmas! Sweet.

Thank you rdb.
I see that you are familiar with the ShaderGenerator class. Perhaps you could fix another issue with the ShaderGenerator? We are offsetting a texture on a lava node and we are experiencing very noticeable fps drops.
We have experienced this on all the GPUs we tested.
Other than that the ShaderGenerator works like a charm and saves us from writing our own Cg code.
I’m sorry if I should have posted this in a separate thread.

This is a known issue with the shader generator: any change to the render state for a given node forces a regeneration and recompilation of the shader applied to that node, which can be quite expensive. This includes innocuous state changes such as adjusting a texture offset.

This is a difficult problem to solve, because it goes to the fundamental way that states and shaders are implemented. We’ve been thinking about solutions for Panda3D 2.0, but the solution won’t be ready any time soon.

So, in the meantime, the usual workaround is to use a hand-coded shader, rather than the auto-shader, on any node that you have to apply frequent state changes to. If you’re not versed in writing shaders, you can start with the shader generated by the auto-shader, which you can inspect by setting the config variable “dump-generated-shaders 1”.


I’m sorry, but if the state change causes recomputation of the shader, couldn’t there be an extra conditional check that skips the recomputation, toggled by a method such as NodePath.setShaderUpdate(bool)?
It’s just a pity that we can’t fully rely on the ShaderGenerator and would now need to find shader programmers for only a small number of effects.

It’s a much lower-level problem than that. The shaders are cached on the RenderState objects, and there is a unique RenderState object for each unique combination of state attributes.

When you call nodePath.setTexOffset(), you are forcing the generation of a new RenderState object. This new object does not have a pointer to the previously-generated shader, so keeping the previous shader is not even an option.

There are many things that are good about Panda’s RenderState design, but the storing of auto-generated shaders in the RenderState object is not one of them. In order to solve this problem, we will need to find a better place to store these shaders. That is the low-level problem that is difficult to solve in the short term.

But as I said, you don’t actually have to know anything about Cg. You can still use the shader generator to make all of your shaders, you just have to save the results of a few of them and apply those shaders by hand.


Hmm… David, I’m wondering if this dirty hack could work. We could do something like this in set_state_and_transform:

  if (_target_shader->auto_shader()) {
    CPT(RenderState) filtered = _target_rs;

    // Filter out attribs that are irrelevant for shader generation
    filtered = filtered->remove_attrib(tex_matrix_slot);

    if (filtered->_generated_shader == NULL) {
      filtered->_generated_shader = _shader_generator->synthesize_shader(filtered);
etc etc. This would be done every frame, though, so I’m not sure of the performance implications of that.

Well in that case I can think of another dirty hack.
How about having a method like NodePath.bakeShaderAuto(), which would dump the shader file to Panda3D’s cache folder, disable the ShaderGenerator on the NodePath, then apply the dumped shader to the NodePath via setShader() and set the shader inputs automatically? Depending on how these classes work, maybe you wouldn’t even need to save it to disk, but could just pass the shader as a string.
It would be a hack, but I think it could work, and it would be better than waiting until the ShaderGenerator is redesigned in the not-very-near future.

These both sound like fine suggestions.

Actually, rdb’s idea is similar to one I was just thinking about, but perhaps a little easier to implement. I was thinking about building a hash of the relevant RenderState attributes, and storing the shader in a separate hashtable that all RenderStates shared, but simply hashing to a different RenderState pointer is easier and arguably more elegant. We could then cache that pointer within the original RenderState to avoid the runtime hit every frame, and I think that should solve the problem nicely. We’ll just need to take a bit of care to avoid circular reference counts in this cache.

But preusser’s bakeShaderAuto() hack is also a nice suggestion in case we can’t find a better solution. It basically automates the workaround I had suggested, making it slightly less clumsy.

Let me think about these some more.


Nice to see some progress on these issues.