This is related to the other shader-related thread that I posted earlier, but, as the topic is slightly different, it seemed worth creating a separate thread for it.
Simply put: is there a way of employing a shader without applying it to geometry and rendering the result?
I have a case in which I want a shader to render to a buffer (specifically, to render what I believe is the “attr_color” value of the geometry), and then use that buffer as input to another shader, which would produce the final output.
I don’t think that the two shaders would work within a single program: the second shader involves sampling points in the output of the first, which may not yet have been written by the time a given fragment is processed.
As things stand, the only way of doing this that I know of is to apply each shader to a piece of geometry, use a camera to render that geometry into an offscreen buffer, and then use the resulting texture as input to the next shader. Since all that I really want is to operate on the buffer produced by the shader, this seems a little inefficient.
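For concreteness, the setup I have in mind is roughly along these lines (just a sketch; the buffer size and the "stage1_scene" / "final_geometry" names are placeholders for my actual nodes):

```python
# Offscreen buffer into which the first shader's geometry is rendered.
stage1_buffer = base.win.makeTextureBuffer("stage1", 512, 512)
stage1_tex = stage1_buffer.getTexture()

# A dedicated camera draws the shader-bearing geometry into that buffer.
stage1_cam = base.makeCamera(stage1_buffer)
stage1_cam.reparentTo(stage1_scene)  # the geometry carrying the first shader

# The buffer's texture is then fed into the next stage as a shader input.
final_geometry.setShaderInput("tex", stage1_tex)
```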
So: can I simply take the output of a given shader, pass it into another, and then take the output of that and apply it as a texture to a piece of geometry?
To illustrate, this is what I have in mind:
<Geometry>[Shader]----renders buffer---->[Shader]----renders buffer---->(Texture)----applied to----><Geometry>---->renders to screen.
The reason why it’s not possible to run a shader without geometry is that the shader wouldn’t have an opportunity to run. The vertex shader runs once per vertex, and the fragment shader runs once per fragment. Without geometry, you have no vertices and no triangles, and therefore no fragments are produced.
If you’ve kept up with the dev blog lately, you might have noticed a new feature that was added to Panda recently: compute shaders. They are described here: panda3d.org/manual/index.ph … te_Shaders
This provides a way to run a shader without using a piece of geometry. Instead, you define how often the shader is called, and can read from and write to images arbitrarily.
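As a rough sketch of what that looks like on the Python side (the shader filename, the texture "tex", and the work-group counts here are just placeholders):

```python
from panda3d.core import Shader, ComputeNode

# Load a GLSL compute shader; this needs OpenGL 4.3-capable hardware and drivers.
shader = Shader.loadCompute(Shader.SL_GLSL, "process.glsl")

# A ComputeNode dispatches the shader a given number of work groups each frame.
node = ComputeNode("process")
node.addDispatch(512 // 16, 512 // 16, 1)  # e.g. a 512x512 image with 16x16 groups

dummy = render.attachNewNode(node)
dummy.setShader(shader)
dummy.setShaderInput("outputImage", tex)  # 'tex' being a Texture used for image access
```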
However, reading your description, I’m getting the impression that you’re really talking about doing post-processing filters. This is usually done by pointing a camera at a quad that covers the entire screen. You don’t have to do the set-up yourself for this. Panda wraps this functionality in the FilterManager class: panda3d.org/manual/index.ph … ge_Filters
This would be faster than using compute shaders, and would work on a wider range of hardware.
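For instance, a two-stage chain along the lines of the pipeline you sketched might look roughly like this (the shader filenames are placeholders; the pattern follows the manual page linked above):

```python
from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager(base.win, base.cam)
scene_tex = Texture()   # receives the scene as rendered by the main camera
stage1_tex = Texture()  # receives the output of the first shader

# finalquad is drawn to the window; interquad is drawn into an offscreen buffer.
finalquad = manager.renderSceneInto(colortex=scene_tex)
interquad = manager.renderQuadInto(colortex=stage1_tex)

# First pass: read the scene texture, write into stage1_tex.
interquad.setShader(Shader.load("stage1.sha"))
interquad.setShaderInput("tex", scene_tex)

# Second pass: read the first pass's output, write the final image to the window.
finalquad.setShader(Shader.load("stage2.sha"))
finalquad.setShaderInput("tex", stage1_tex)
```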
I do believe that you’re right; it does indeed look as though the FilterManager class is what I should look into! Thank you again.
I’m not sure whether I saw the blog entry, but I do recall seeing mention somewhere of support for compute shaders; it does seem like a feature that could be very useful. However, I think that I recall some mention of compute shaders being available only on fairly recent hardware (or perhaps hardware limited by some other factor; I forget), which put me off looking into them further.
Late to this party, but this information may be useful:
You are correct. Specifically, compute shaders were added in OpenGL 4.3, which was released in 2012. Thus, only relatively recent graphics cards can run them.
According to Wikipedia, Nvidia GeForce 400/500/600/700 series and AMD Radeon HD 5000/6000/7000/8000 series are the first GPUs to support OpenGL 4.3. More recent Nvidia and AMD GPUs are of course also fine.
Ah, thank you for that. Well, as I believe that my development machine has a Radeon HD 3650, I daresay that I won’t be using compute shaders any time soon!
However, FilterManager seems to be doing what I intended quite well, I’m glad to say.
FilterManager is pretty useful for managing fullscreen 2D postprocessing. It forms the core of render buffer management for the new postprocessing framework, too.