The texture will have to be adjusted to match the framebuffer properties. Since there is no single-channel framebuffer provided by your graphics hardware, it’s not technically possible to render directly into a grayscale (or even red-channel) texture.
You could, of course, render grayscale content into an RGBA texture, so that the result appears to be totally grayscale. If your goal is to eliminate color from an otherwise colored model, you could write a custom shader to do this for you.
I am currently rendering grayscale into an RGBA texture using a custom shader, which works fine. But I'm looking at ways to reduce the GPU memory footprint, since I'm juggling lots of these textures around.
Because the texture sits between two custom shaders, I can use whatever type is convenient.
Maybe I could use the depth channel, which is 2 bytes, right?
Are there any 1-byte framebuffer channels?
Even if you render only to a depth channel (which might work, though I'm not sure), I think you'd still have to pay the cost of the RGBA bits, as well as whatever stencil bits and multisample bits and auxiliary bits the framebuffer configuration you selected contains. When you render to a texture, the texture is the framebuffer, so everything that the framebuffer has must persist. You could try to insist on having as few of these bits as possible, but I suspect most graphics cards only offer framebuffer configurations that use all available bitplanes for something. I haven't really played with this much, though; someone else might have more information.
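If you do want to experiment with insisting, the knob to turn is the FrameBufferProperties you pass when the buffer is opened. Here's a rough, untested sketch of what I mean (the buffer name and size are just placeholders, and base is the usual ShowBase instance):

```python
from panda3d.core import FrameBufferProperties, WindowProperties, GraphicsPipe

# Ask for a color-only framebuffer: no alpha, depth, stencil, or multisample
# bits.  The driver is free to ignore this and hand back a fatter
# configuration anyway.
fb_prop = FrameBufferProperties()
fb_prop.setRgbColor(True)
fb_prop.setColorBits(24)
fb_prop.setAlphaBits(0)
fb_prop.setDepthBits(0)
fb_prop.setStencilBits(0)
fb_prop.setMultisamples(0)

win_prop = WindowProperties.size(256, 256)

buf = base.graphicsEngine.makeOutput(
    base.pipe, "skinny-buffer", -1,
    fb_prop, win_prop,
    GraphicsPipe.BFRefuseWindow,
    base.win.getGsg(), base.win)

# buf.getFbProperties() will report what the driver actually gave you.
```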
You can also copy to a texture instead of rendering directly to the texture. In this case, you only have the RGBA texture itself, and not all of the additional bitplanes. So it’s an improvement. There still isn’t a way to copy from the RGBA framebuffer into a grayscale texture (that would require an expensive interleaving operation, which I don’t think the hardware offers), but this might be your best bet for reducing GPU memory.
(Actually, some drivers support copying to compressed textures, which will reduce your GPU memory substantially, but it’s slower than slow.)
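For reference, the copy behavior is chosen when you attach the texture to the buffer. A minimal, untested sketch, assuming buf is an existing offscreen buffer:

```python
from panda3d.core import Texture, GraphicsOutput

# Copy the color plane into the texture each frame, instead of binding
# the texture as the framebuffer itself.
tex = Texture("copy-target")
buf.addRenderTexture(tex, GraphicsOutput.RTMCopyTexture, GraphicsOutput.RTPColor)

# Some drivers can copy into a compressed texture (much smaller, much slower):
# tex.setCompression(Texture.CMOn)
```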
You could do all this with only one framebuffer, by opening it once (or even using the main window only) and using ParasiteBuffers for all of the textures you need. A ParasiteBuffer simply reuses the same framebuffer memory.
I’m using a single off-screen GraphicsBuffer, which has 32 parasite buffers, each with an associated texture.
See [GPU thrashing] for how I got that to work.
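Roughly, it looks like this untested sketch (names and sizes are placeholders, and for brevity the parasites here piggyback on the main window instead of on my offscreen GraphicsBuffer):

```python
from panda3d.core import loadPrcFileData
# Ask Panda to hand back ParasiteBuffers from makeTextureBuffer where it can.
loadPrcFileData("", "prefer-parasite-buffer #t")

from direct.showbase.ShowBase import ShowBase
base = ShowBase()

# A pile of parasite buffers that all reuse the host's framebuffer memory,
# each with its own texture (makeTextureBuffer attaches one for you).
textures = []
for i in range(32):
    parasite = base.win.makeTextureBuffer("parasite-%d" % i, 256, 256)
    textures.append(parasite.getTexture())
```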
Even if I'm using bind_or_copy and the texture becomes the framebuffer, it is still gonna take up extra memory though, right? It's just that each texture will take turns at becoming a channel of the framebuffer.
I’m just seeing if I can reduce the total GPU memory footprint.
So in this case a copy_texture would still use the same amount of memory, am I right?
bind_or_copy means to try to bind if possible, then fall back to copy if binding is not possible.
Binding means the texture and the framebuffer are the same thing. In that case, the texture consumes no memory beyond the framebuffer memory, but you need to maintain a unique framebuffer for each texture (and framebuffer pixels are particularly heavyweight).
Copying means a separate texture object is created, which has to be “compatible” with the framebuffer (generally meaning the texture has to be RGBA), and the pixels of the framebuffer are copied into the texture’s memory each frame. So now you pay the cost for each framebuffer, plus a bit more for the texture’s copy.
Unless all of the framebuffers are really the same framebuffer, because they’re all parasite buffers. This is ideal, because you pay only for the one framebuffer, and then you have the memory for each texture (which is cheaper than a full framebuffer).
And if they are all parasite buffers, binding isn’t even possible, so bind_or_copy really means copy. So this is already my proposed best case: one framebuffer, and multiple textures which have been copied from it.
I don’t know how to reduce GPU memory further than that, other than by using fewer or smaller textures. Maybe someone cleverer than me has some other ideas.
Incidentally, even if you load a grayscale texture from disk, it is likely to be expanded to RGBA internally anyway, depending on your graphics driver. There’s just no getting around the fact that the graphics hardware is inherently 32-bit based, and likes to sling around pixels of 32 bits each.
I just thought of another approach to reduce your GPU memory: since you're compelled to have 3- or 4-channel textures anyway, perhaps you can use each of those channels for different content. Render whatever you're rendering three times to the same buffer, to the R, G, B channels of the buffer, respectively. (Do this with three overlapping DisplayRegions on the same buffer, and place a ColorWriteAttrib at the root of each scene to select the appropriate channel.) Now you only need 1/3 as many textures.
You could also use all four channels, R, G, B, A, though some (lame) graphics cards have difficulty rendering to an offscreen alpha channel, so this could cause portability issues. Then again, any card that supports a shader sophisticated enough to process these textures can probably render to an offscreen alpha channel too.
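Here's a rough, untested sketch of the three-channel version (the scene contents and sizes are placeholders):

```python
from panda3d.core import ColorWriteAttrib, NodePath

# One offscreen buffer; three overlapping full-size DisplayRegions,
# each rendering its own scene into a single color channel.
buf = base.win.makeTextureBuffer("packed", 256, 256)

channels = [ColorWriteAttrib.CRed, ColorWriteAttrib.CGreen, ColorWriteAttrib.CBlue]
scenes = []
for i, mask in enumerate(channels):
    scene = NodePath("scene-%d" % i)
    # Only let this scene write to its own channel.
    scene.setAttrib(ColorWriteAttrib.make(mask))

    # makeCamera adds a full-size DisplayRegion to buf; clear the depth
    # buffer between passes so the three scenes don't depth-test against
    # each other.
    cam = base.makeCamera(buf, sort=i, clearDepth=(i > 0))
    cam.reparentTo(scene)
    scenes.append(scene)

# Downstream shaders then sample the .r, .g, or .b component of
# buf.getTexture() instead of using three separate textures.
```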
Not sure what you mean here. The output of the shader is the contents of the framebuffer, and this gets copied to a texture. What else are you asking for?
To clarify the question, let's use the example of a custom blur filter, which takes TextureA as input. Can the framebuffer contents get copied back into TextureA?
In the case of a “bind” this is unsafe as you are accessing the pixels you are writing to. But in the case of a “copy” I’m guessing this is safe, as the pixels will only be copied back over after the shader has finished.
Also, is there some way to request that a grayscale texture be expanded to RGBA on the GPU, or to query how many bits you actually got on the GPU? In that case I could use those other channels on the GPU, while still only having to transfer one grayscale channel to it.
I assume get_format() returns the format in CPU memory, not GPU memory.
Ah, I see. Yes, I think this can be done safely, simply by using the same texture as the “copy-to” source that you use in the scene.
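In code, something along these lines (untested; texA, blur_quad, and blur_buffer are just placeholder names from your example):

```python
from panda3d.core import GraphicsOutput

# texA is the input to the blur shader...
blur_quad.setShaderInput("source", texA)

# ...and also the copy-to target of the buffer the blur renders into.
# Because this is a copy rather than a bind, the framebuffer pixels are
# only written back into texA after the shader pass has finished.
blur_buffer.addRenderTexture(texA, GraphicsOutput.RTMCopyTexture,
                             GraphicsOutput.RTPColor)
```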
I don't believe that there is such a thing as a grayscale texture on the GPU. All textures are RGBA. Grayscale textures loaded in Panda are automatically expanded to RGBA by the graphics driver; Panda has no control over that. I haven't looked into this a lot, though; it's possible that there is a special exception to support grayscale textures on the GPU, but if there is I'm not familiar with it.