Carnegie Mellon’s Entertainment Technology Center is a master’s program for future game developers, theme park designers, and CGI filmmakers. Every semester, students work on group projects.
One such project right now is the “Bamboo” project, which is writing an automatic shader generator for Panda. The basic idea is this: you create a 3D model that includes things like normal maps or gloss maps. These maps aren’t supported by the conventional rendering pipeline, but the model exporter will nonetheless export them into the egg file, and Panda will read them in. Then, an automatic shader generator will do whatever’s necessary to make these maps appear. Long story short, you will be able to use features like normal mapping, gloss mapping, and shadow mapping without knowing anything about how to write shaders.
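To give a flavor of the asset side, an egg file can tag a texture with an envtype scalar so the loader knows it’s a normal map rather than an ordinary color texture. A sketch of what that looks like (the filename is made up; check the egg syntax documentation for the exact set of envtype values):

```
<Texture> normalmap {
  "bricks-normal.png"
  <Scalar> envtype { normal }
}
```

Once the model is loaded, the shader generator can pick up on tags like this and synthesize the appropriate shader automatically.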
It’s going to take quite a while until this project reaches fruition, but it’s nice to know it’s underway.
LOL, I was actually planning to start writing such a shader generator myself this month. Since you’re already doing it, though, I can move on to other things. If you need help, I’m available, in both Python and C++ territory.
It’s always nice to have Phong, Blinn, normal mapping, etc. directly, without having to write shaders yourself.
Anyway, good luck and I would be happy to help.
Aaaand, hmmm, by the way: I think someone was planning to work on an MMORPG framework for Panda that’s also called Bamboo. He or she will have to pick a different name.
Excellent venture! Good luck!
This sounds nice, and it’ll probably be implemented before I ever get around to needing it, lol.
What is the state of this? Does anyone have a link to the project’s site?
I spoke to someone working on this a while ago on IRC, and they already seem to have features like HDR. I’ve seen a screenshot, and it looks really promising.
Can’t wait till it’s finished.
Website is here: (second result on google)
It lists some very nice tech demos.
I’m getting really close on this. It’s almost completely functional. The shader generator works, and supports:
- Normal maps
- Gloss maps
- Glow maps
- HDR tone mapping
- Cartoon lighting
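As an aside on what the cartoon-lighting mode does: it essentially quantizes the diffuse lighting term into a few flat bands instead of a smooth gradient. A minimal pure-Python sketch of the idea (illustrative only, not Panda’s actual implementation):

```python
def toon_shade(n_dot_l, bands=3):
    """Quantize a diffuse lighting term (0..1) into flat bands."""
    n_dot_l = max(0.0, min(1.0, n_dot_l))
    # Snap to a band index, then renormalize back to the 0..1 range.
    band = int(n_dot_l * bands)
    if band == bands:          # n_dot_l == 1.0 falls into the top band
        band = bands - 1
    return band / (bands - 1)

# A smooth ramp of N.L values collapses into 3 discrete levels.
levels = sorted({toon_shade(x / 10.0) for x in range(11)})
print(levels)  # [0.0, 0.5, 1.0]
```

The GPU version does the same thing per-fragment, usually via a 1D ramp texture lookup instead of arithmetic.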
In addition, I’ve written an image postprocessing utility. So far, it supports:
- Cartoon inking
- Bloom filters
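For reference, a bloom filter boils down to three passes: extract the pixels above a brightness threshold, blur them, and add the blurred result back onto the image. A toy single-channel version in plain Python (the real postprocessing filter of course runs on the GPU over render-to-texture buffers):

```python
def bloom(img, threshold=0.7, strength=0.5):
    """Toy bloom on a 2D grid of brightness values in 0..1."""
    h, w = len(img), len(img[0])
    # Pass 1: bright-pass -- keep only pixels above the threshold.
    bright = [[v if v > threshold else 0.0 for v in row] for row in img]
    # Pass 2: 3x3 box blur of the bright-pass.
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += bright[ny][nx]
                        count += 1
            blurred[y][x] = total / count
    # Pass 3: additive blend back onto the original, clamped to 1.
    return [[min(1.0, img[y][x] + strength * blurred[y][x])
             for x in range(w)] for y in range(h)]

# A single hot pixel bleeds a soft halo into its neighbours.
scene = [[0.0] * 5 for _ in range(5)]
scene[2][2] = 1.0
out = bloom(scene)
```

After the call, the pixels adjacent to the hot center pick up a small positive glow while distant pixels stay black.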
You can use all of this without writing a line of shader code. I’ve also written:
- A new sample program for normal maps.
- A new sample program for bloom filters.
- A new sample program for cartoon shading / inking.
These new sample programs use the shader generator, so they don’t include any explicit shaders. I’ve also written some (admittedly thin) documentation for all of this. If you search the manual for “1.5.0” (the version of Panda which will include all this), you’ll find it.
I still have to document the process of creating models with these new map types. Then, I can start building the 1.5.0 distro.
Hi Josh, great work!
And what about shadow mapping? Is it still planned for 1.5.0?
1. Toon shader is not powerful enough.
3. Toon shader does not support multiple render targets.
File “./Procedural-Cube/Tut-Procedural-Cube.py”, line 156, in toggleTex
TypeError: NodePath.setTexture() argument 1 must be Texture, not NoneType
Particle panel: Z and Y coordinates need to be flipped.
shadow demo: video driver cannot create an offscreen buffer
When I think of a shader system, I think of something like this:
blenderartists.org/forum/attachm … 1175897424
Where you can mix and match different effects.
I would like to add a feature request for the shader generator: SSAO (Screen Space Ambient Occlusion). This technique is used by Crysis, for example; it gives much nicer results than plain direct lighting, and it is easy to compute. One implementation of this method is described here: rgba.scenesp.org/iq/computer/art … o/ssao.htm
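The core of SSAO really is simple: for each pixel, sample the depth buffer at a few nearby offsets and estimate what fraction of those samples sit in front of the pixel (i.e., occlude it). A toy version on a 2D depth grid (the real technique samples in a 3D hemisphere around the reconstructed eye-space position, but the principle is the same):

```python
def ssao(depth, radius=1):
    """Toy screen-space ambient occlusion on a 2D depth grid.

    Returns an ambient factor per pixel: 1.0 = fully open,
    lower values = more occluded by nearby closer geometry.
    """
    h, w = len(depth), len(depth[0])
    ao = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            occluded, samples = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dx == dy == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        samples += 1
                        if depth[ny][nx] < depth[y][x]:  # neighbour is closer
                            occluded += 1
            if samples:
                ao[y][x] = 1.0 - occluded / samples
    return ao

# A pixel at the bottom of a pit gets darkened; the flat rim does not.
depth = [[1.0] * 3 for _ in range(3)]
depth[1][1] = 2.0          # centre pixel is further away (a hole)
ao = ssao(depth)
```

The resulting ambient factors would then multiply the ambient lighting term per pixel.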
lol birukoff, that’s some serious shader magic. I don’t think Josh has that much up his sleeve!
For now you should just bake it into the texture of your models.
I am wondering how hard it would be to get FX Composer working with the Panda3D pipeline.
developer.nvidia.com/object/fx_c … _beta.html
I think all we need is a way to turn the Cg shaders it produces into something the Panda3D shader system can understand. Because they are both based on Cg, it should work in theory.
Josh added the MHeight and MNormalHeight stage modes but never implemented them. They are meant for parallax mapping, which should be trivial to implement. (I also need it myself, plus it has been requested by some other users.)
So, I am planning on implementing it, maybe even for upcoming 1.7.0.
There’s only one thing I need advice about though - unlike normal mapping, parallax mapping requires an offset scale factor - it indicates how much effect the given height map has. Usually a value like 0.02 or so.
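Concretely, the offset scale is just a multiplier on the UV shift: the shader reads the height at the current UV and nudges the texture coordinate along the tangent-space view direction by height × scale (plus an optional bias). A plain-Python sketch of that per-fragment computation, not the generated Cg (the default values here are just typical choices):

```python
def parallax_uv(u, v, height, view_dir, scale=0.02, bias=-0.01):
    """Shift a texture coordinate along the tangent-space view direction.

    height   -- height-map sample at (u, v), in 0..1
    view_dir -- normalized tangent-space view vector (vx, vy, vz)
    scale    -- the offset scale under discussion; ~0.02 is typical
    """
    vx, vy, _ = view_dir
    offset = height * scale + bias
    return u + offset * vx, v + offset * vy

# Looking straight down (view vector along z) leaves the UV unchanged;
# a grazing view shifts the lookup toward the eye.
print(parallax_uv(0.5, 0.5, 1.0, (0.0, 0.0, 1.0)))  # (0.5, 0.5)
```

So the open design question is only where `scale` comes from: hardcoded, a dedicated per-stage setter, or an existing attribute reused for the purpose.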
How would I do this? I’m not sure if I should add a new variable like setParallaxOffsetScale or so since that would also require me to add a whole new shader input just for that variable.
Maybe I could just hardcode a value for it (ugly)? Or I could abuse a current scale value for it? (Maybe TextureStage.setColor, although I have no idea what it’s for at all)
@Cronos, birukoff: For the record, those things are targeted for 1.7.0.
For the record, I would also love to see parallax mapping in Panda. It would be nice not to have the value hardcoded, though. Just my 2c.
Done. I quickly grabbed a texture from the web that is actually not a true parallax map (so you’ll see horrible bleeding in the following pictures):
Just normal mapping:
Both normal mapping and parallax mapping:
You’ll see that it considerably increases detail and depth perception (even though the textures are bad. I couldn’t find better).
Right now the value is hardcoded. David, would you have anything against a nodePath.setParallaxScale(texturestage, scale)? Or isn’t that a good idea?
I don’t see any problem with that.
I’ve got a silly question… can I use my usual tangent space normal maps for parallax mapping or does this technique use something totally different?
You cannot use a normal map for parallax mapping. You will need to convert it into a height map first. However, programs like GIMP with the Normal map plugin, or similar programs, can do this with ease.
But you still need to apply both the normal map and the height map to Panda: the normal map is for the lighting, the height map for the parallax effect. You can use either one on its own, but it will not look as good as when you have both.
Because usually you would apply both the normal and height map, it’s an optimization to store the height map into the alpha channel of the normal map - that’s what the MNormalHeight mode is for.
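The packing itself is trivial: for each texel you keep the normal map’s RGB and drop the height value into A. A plain-Python sketch on lists of pixel tuples (in practice you’d do this in an image editor or with an imaging library):

```python
def pack_normal_height(normal_rgb, height_gray):
    """Merge a normal map (list of (r, g, b) texels) with a height map
    (list of grayscale values) into RGBA texels, one combined texture
    suitable for a normal+height stage mode."""
    assert len(normal_rgb) == len(height_gray)
    return [(r, g, b, h) for (r, g, b), h in zip(normal_rgb, height_gray)]

# A flat "up" normal (128, 128, 255) combined with a height of 100:
texels = pack_normal_height([(128, 128, 255)], [100])
print(texels)  # [(128, 128, 255, 100)]
```

This halves the number of texture lookups the generated shader needs for the normal+parallax combination.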
I’ve just checked in support for cube maps, 1D textures, 3D textures, TexMatrixAttrib, and the TexGenAttrib modes MWorldPosition, MWorldNormal, MEyePosition, and MEyeNormal.
Now you can get the Nature Demo’s water effects without writing a line of shader code.