I’d like to integrate this as well if it’s possible in Python, but I don’t know how contributing to the Panda project works, or whether this code is worth it. Can anybody add code to the Panda source? How does it work in general?
The changelog is a good idea; I’m on it.
Speaking of transparency, you are right: that is the main limitation of deferred shading. The usual solution is to fall back to the fixed-function pipeline (i.e. forward rendering) for transparent objects.
I think CommonFilters is a Python module. You can find it at “C:\Panda3D-1.X.X\direct\filter\CommonFilters.py”. I didn’t mean for you to do that, though; you could just give an example of using the shader with FilterManager.
Has anyone got experience with this model? And what lighting model do you use in your own applications?
Phong model:
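For reference, here is the classic Phong model sketched in plain Python (the coefficient names and defaults are illustrative, not anything from a particular engine): intensity = ambient + diffuse · max(N·L, 0) + specular · max(R·V, 0)^shininess.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def reflect(l, n):
    """Reflect light direction l about unit normal n: R = 2(L.N)N - L."""
    d = 2.0 * sum(a * b for a, b in zip(l, n))
    return tuple(d * b - a for a, b in zip(l, n))

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Scalar Phong intensity: ambient + diffuse + specular terms."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    diffuse = max(0.0, sum(a * b for a, b in zip(n, l)))
    r = reflect(l, n)
    specular = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return ka + kd * diffuse + ks * specular
```

A light shining straight along the normal with the viewer in the same direction gives the maximum intensity (all three terms at full strength); a light behind the surface contributes only the ambient term.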
By the way, I can’t set my texture format: setFormat(Texture.Format) doesn’t have any effect. I want to try fp16 textures for HDR. I’ve searched the forums; someone asked about 128-bit texture formats but couldn’t change the texture format either. What is the current state of this? I’ve tried adding setFormat() inside Panda’s FilterManager class, but with no success.
As I assume you may be noticing, having a lot of effects that differ between objects in your scene can be a pain (writing an awesome shader is fun; writing every version of that awesome shader your game needs is not). My shader generator is an attempt to provide a tool for managing and merging all these effects onto the various Geoms in the scene. Unlike Panda’s shader generator, mine does not have major performance issues when you change the RenderState: I don’t force shader regeneration automatically, and my cache only looks at the parts of the render state that are specifically needed to generate the shaders, so updating shader inputs every frame works well.
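The caching idea described above can be sketched in a few lines of plain Python: key the cache only on the state entries that affect code generation, so irrelevant per-frame changes never trigger a regenerate. (This is my own illustration of the idea; the class and key names are made up, not the poster’s actual API.)

```python
class ShaderCache:
    """Cache generated shaders, keyed only on the subset of the
    render state that actually affects code generation."""

    def __init__(self, relevant_keys, generate):
        self.relevant_keys = relevant_keys  # e.g. {"has_normal_map"}
        self.generate = generate            # fn(state_subset) -> source
        self._cache = {}

    def get(self, render_state):
        # Only the relevant keys form the cache key; changing anything
        # else (e.g. a per-frame shader input) reuses the cached shader.
        key = tuple(sorted((k, render_state.get(k))
                           for k in self.relevant_keys))
        if key not in self._cache:
            self._cache[key] = self.generate(dict(key))
        return self._cache[key]

calls = []
def gen(subset):
    calls.append(subset)
    return "shader(%s)" % sorted(subset.items())

cache = ShaderCache({"has_normal_map"}, gen)
s1 = cache.get({"has_normal_map": True, "time": 0.0})
s2 = cache.get({"has_normal_map": True, "time": 1.0})  # no regeneration
s3 = cache.get({"has_normal_map": False, "time": 2.0})  # new shader
```

Two states that differ only in an irrelevant key hit the same cache entry, which is what keeps frame-to-frame state changes cheap.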
Technically my shader generator isn’t really a shader generator; it’s more of an implementation of a meta-language for writing custom shader generators, which you could use to make a customizable deferred alternative to Panda’s built-in one. An example of its use: in my game, some things have normal maps and some don’t, so I used a conditional shader-input node to capture the normal map, and another type of conditional node to select between a sample of that texture and a plain vertex normal, depending on availability. The result is a code-generation-time switch that writes the correct shader for the requested Geom. (I’ll mention that this is currently not working quite right; I think it has to do with the actor loading setup, though.)
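A generation-time switch like the one described can be illustrated by a function that emits different shader source depending on what the Geom provides. This is only a flattened sketch (the real system is node-based, and the GLSL-ish strings here are illustrative):

```python
def make_fragment_source(has_normal_map):
    """Emit fragment code for a normal-mapped surface, or fall back
    to the interpolated vertex normal. The branch happens at code
    generation time, so the emitted shader has no runtime conditional."""
    if has_normal_map:
        fetch = "vec3 n = normalize(texture(normal_map, uv).xyz * 2.0 - 1.0);"
    else:
        fetch = "vec3 n = normalize(vertex_normal);"
    return "\n".join([
        "void main() {",
        "    " + fetch,
        "    frag_color = vec4(n * 0.5 + 0.5, 1.0);",
        "}",
    ])
```

Each Geom gets exactly the shader it needs, with no unused samplers or branches in the compiled code.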
Anyway, when my shader generator is a bit further along, it can serve as a way for you to manage and share all your effects in an easy-to-use fashion. I’m not looking for much help with the shader generator itself, but I am looking for lots of help making effects/content/samples to use with it, and you seem to be good at that. If you have any questions regarding my projects, feel free to ask. I’d really like some input from a fellow shader programmer; I want to make sure I’m not missing anything important, and that it’s actually useful.
I have also tried, and failed, to get 16-bit textures/buffers.
I’ll be happy to provide effects and samples for your system; I need to clean up the code before I post it here. I’m currently having problems deferring shadow maps. About your system: do you plan to support different lighting models? And for buffer creation, how do you plan to handle different materials? (For my game I’m planning to store material information in the unused alpha channel, as various games do, but for a general-purpose generator it might be different.) Nice screenshots you have there. More questions: do you allow dynamic branching in shaders, and what average user specs do you target with your new generator? It would also be nice for your system if you or someone else found a solution for setting texture formats.
My generator system isn’t really tied to any set of buffers/bitplanes. It works fine for forward shading, deferred, etc.; I’ll probably even use it for the lighting pass and post-process shaders in my deferred setup. It’s just a shader meta-language, and the complexity of the shader generators made with it is up to the user. I’d like to provide both low-end and high-end samples, and have a system for doing quality fallbacks for performance (simply dump a quality constant, perhaps as a tag on render, into the RenderState before generating the shaders).
I’d love to have a big pile of lighting models to choose from. Theoretically it should be possible (once I add the feature of composite/hierarchical nodes to the generator) to implement a generic lighting-model node that lets you select the lighting model based on tags in the scene graph (e.g. provide the coefficients needed for lighting model X, and you automatically get it). Such a lighting node could be used in both forward and deferred configurations (though you might have to settle for a single lighting model for the whole scene with deferred).
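The tag-driven selection could look something like the following sketch: each lighting model registers the uniforms it needs and the code it contributes, and the generator picks one per Geom from a scene-graph tag. (Everything here, the table, the snippet strings, the tag name "lighting", is hypothetical illustration, not the poster’s actual design.)

```python
# Registry of lighting models: the uniforms each one needs, and the
# code fragment it contributes to the generated shader.
LIGHTING_MODELS = {
    "phong":   {"uniforms": ["ka", "kd", "ks", "shininess"],
                "code": "color = phong_lighting(n, l, v);"},
    "lambert": {"uniforms": ["kd"],
                "code": "color = kd * max(dot(n, l), 0.0);"},
}

def lighting_snippet(tags, default="lambert"):
    """Pick the lighting model named by the 'lighting' tag (if any)
    and emit its uniform declarations plus its code fragment."""
    model = LIGHTING_MODELS[tags.get("lighting", default)]
    decls = "\n".join("uniform float %s;" % u for u in model["uniforms"])
    return decls + "\n" + model["code"]
```

Providing the coefficients for model X and tagging the node with its name is then enough to get that model’s code generated.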
Regarding static branching: it’s possible to generate any shader code with my system (you can put a whole custom fshader and vshader in as single nodes if you want), so yes, you can use static branching, but it’s not too practical. The current system is really great at generation-time conditionals and at handling shaders with complex data flow, but it’s not very good at runtime conditional stuff (currently any conditional code needs to be contained within a single node in the shader graph). In short: possible, but not good to use with sub-nodes and such. I’m thinking about possible fixes for this design deficiency, and I consider it a major issue. (In fact, conditionals are currently the only major issue I see with the design, and I will find a fix, even if it adds a lot of complexity. The main/worst case is when you want an early exit via discard; it’s not yet practical to force it to occur as early as possible.)
Anyway, while it’s not too relevant to my shader generator, it is relevant to your thread, so here is how my rendering setup works:
Currently my game renders to 3 buffers in the first pass (diffuse, normals, misc); misc holds glow and specular information. I also have a (somewhat experimental) decal rendering pass where I blend in projected decals (using the same projection approach as the lights). I couldn’t think of a good way to normal-map them, so I don’t render normals in that pass; currently it renders to a second diffuse buffer and a second misc buffer. Those textures are then pulled in along with the normals for the lighting phase, which generates a lighting buffer that’s used along with the normals, diffuse, and misc buffers to do the final rendering of the deferred-shaded objects, including cartoon inking and cel-shaded directional lighting. That is then rendered into the window (on a fullscreen quad), the skybox is rendered next, and then particles and other transparent effects will be rendered (not implemented yet). Last comes the bloom/glow post-process, which currently uses alpha, but will use my misc buffer once I start drawing transparent things.
I currently don’t have shadows.
I get the positions using the depth texture, and do the most basic normal encoding (norm/2 + 0.5), sticking the result in a regular 8-bit-per-channel buffer. Not ideal, but without floating-point/16-bit textures, that seems like the best choice.
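That encoding, and the quantization it costs, can be shown in a couple of lines of plain Python (a sketch of the math only; the real work happens per-pixel in the shader):

```python
def encode_normal(n):
    """Map a unit normal from [-1, 1] into an 8-bit channel value,
    as when writing to an 8-bit G-buffer: byte = (n/2 + 0.5) * 255."""
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def decode_normal(rgb):
    """Inverse mapping back to roughly [-1, 1]. The round trip loses
    a little precision, which is one reason fp16 buffers would be nicer."""
    return tuple(v / 255.0 * 2.0 - 1.0 for v in rgb)
```

A round trip recovers each component to within about 1/255, which is usually fine for lighting but can show banding on smooth highlights.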
Example use of my decal system: projecting symbols on the ground (shown in the images). Perhaps a more interesting application is projecting damp areas (darken the color, increase the specular coefficient a ton, and the specular amount a bit), or flaming areas (project glow and red patches with no specular). I haven’t really had a chance to try anything with it.
Your code is interesting. Your setup process is very different from mine: I create a bunch of custom buffers and cameras (four cameras) explicitly and set them all up, configure depth sharing, and make most of my own fullscreen quads without the provided filter utilities. I don’t touch any AuxBitplaneAttribs; I even see a setShaderAuto() in your code, which I don’t touch either. I have maybe 4–5 shaders just to do basic rendering (skybox, models, lights, decals, inking + directional lights + assorted), and I’ll need a lot more. My module for just setting up the buffers, cameras, display regions, shaders, lights, etc. is just over 400 lines, more than double your entire code.
Edit: I forgot to address materials specifically. On load I convert the material object from the egg files (exported from Blender) into shader inputs. I also allow a material texture to be blended over this if provided. My shader generator could potentially make it easy to support models with other approaches, such as vertex data, tags, or something else. Mostly this data just ends up packed into the misc buffer. One good approach is to have a list of materials (256 of them) and use an index plus a lookup table (a texture) for the data, but that required more setup, so I didn’t bother.
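The index-plus-lookup-table idea can be sketched in plain Python: store one byte per pixel (e.g. in the otherwise-unused alpha channel, as mentioned earlier in the thread) and resolve full material parameters through a 256-entry table at lighting time. The table contents and field names here are made up for illustration:

```python
# 256-entry material table; in a shader this would be a 256x1 lookup
# texture sampled with the stored index.
MATERIALS = [
    {"specular": 0.0, "glow": 0.0},   # index 0: default matte
    {"specular": 0.9, "glow": 0.0},   # index 1: shiny metal
    {"specular": 0.2, "glow": 1.0},   # index 2: glowing sign
] + [{"specular": 0.0, "glow": 0.0}] * 253   # pad to 256 entries

def pack_material(index):
    """Encode a material index as the 8-bit alpha value of a G-buffer pixel."""
    assert 0 <= index < 256
    return index

def lookup_material(alpha_byte):
    """Recover the full material parameters from the stored alpha byte."""
    return MATERIALS[alpha_byte]
```

The trade-off is exactly the one described: a single byte buys you arbitrarily many parameters per material, at the cost of maintaining the table and an extra texture fetch in the lighting pass.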
A question: how do you get the maps directly from the .egg?
At the moment, I’m sending the various maps as shader inputs for each model. Is there a way to have a simpler system? (Importing a full working Blender scene with multiple textures would be the goal.)
I have a rather complex model importer that does a lot of things (especially for my multi-part actors). On load, I search for all the textures and attach them to shader inputs. I have a texture naming convention, so whatever_norm goes on as a normal map, etc. Then I use my shader generator to take the existing shader inputs and tags and generate the correct shaders. I’m planning to have a converter application that does most of this work, then dumps the models to bam files. Stuff that does not fit in bam files (shaders and shader inputs) will be put in tags and fixed up on load. I believe the normal maps, if set up correctly, are available as a special shader input, but I prefer to use only the ones I set up manually.
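A suffix-based naming convention like the one described ("whatever_norm" becomes the normal-map input) can be sketched as a simple classifier. The suffix table and slot names below are hypothetical, invented for illustration:

```python
# Map filename suffixes to shader-input slots; anything without a
# recognized suffix is treated as the diffuse map.
SUFFIX_TO_INPUT = {
    "_norm": "normal_map",
    "_glow": "glow_map",
    "_spec": "specular_map",
}

def classify_textures(names):
    """Assign each texture name to a shader-input slot by its suffix."""
    inputs = {}
    for name in names:
        for suffix, slot in SUFFIX_TO_INPUT.items():
            if name.endswith(suffix):
                inputs[slot] = name
                break
        else:
            inputs["diffuse_map"] = name
    return inputs
```

Run at load time (or in a converter application), this yields a dict of shader inputs that a generator can inspect to decide which shader variant each model needs.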
I tried different particle rendering systems. The particle system seen in the Panda samples is nice, but it gets slower when particles get close to the camera (overdraw); apparently this is a problem in all engines. So I had a look at GPU Gems 3 on the NVIDIA site, and here is the result:
(ideally, the particles should come from the bottom of the ship)
I use a texture buffer for the particles at 1/8 the resolution of the main window, and I have no more slowdowns now (from 30 fps to 120 fps)! However, I couldn’t make soft particles work, maybe because I render all the particles to a single texture buffer and composite it with the final lit scene. The bloom effect helps reduce the blocky edges anyway.
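For reference, the soft-particle trick (as presented in GPU Gems 3) fades a particle’s alpha by how far it sits in front of the opaque scene, which requires sampling the scene depth while the particle is drawn; if the composite step has no access to per-particle depth, the fade term cannot be computed, which may be why it fails here. A plain-Python sketch of just the fade term (the constant and names are illustrative):

```python
def soft_particle_alpha(scene_depth, particle_depth, fade_distance=0.5):
    """Soft-particle fade factor: attenuate alpha by the depth gap
    between the particle fragment and the opaque scene behind it, so
    quads fade out smoothly where they intersect geometry instead of
    producing a hard clipped edge."""
    delta = scene_depth - particle_depth   # how far behind the scene is
    fade = delta / fade_distance           # 0 at contact, 1 at full gap
    return max(0.0, min(1.0, fade))        # clamp to [0, 1]
```

A particle touching the geometry gets alpha near 0, one a full fade distance in front of it gets alpha 1, and anything behind the scene is fully faded out.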
I’ve reused the cloud system from the author of Flock to make it work with deferred lighting. I’m using a few quads for each cloud, and the lighting changes with the direction of the sun.
(only one texture at the moment)
Also, there is a halo effect around transparent textures, and I can’t get true transparency with the deferred system.