Write to an image in a shader, then read the image on the CPU side

Simply put, I want to write to an image in a shader, then read the results in Python. I'd imagine the setup happens in Python first, then the write happens in the shader, then the read happens in Python again.

Maybe it's poor design, but I want to create a heightmap editor that uses layers like Photoshop, so the results of one layer will be fed into the next.

I'm sure I've done this before, writing to an image and reading it CPU-side, but I can't find the code, and Google is no longer nice to me. I'd prefer to avoid the hardware requirements of compute shaders as well. I'm not finding it by searching the forums either.

You will need to create a buffer and assign a texture for rendering.

from panda3d.core import Texture, GraphicsOutput

texture = Texture("texture_in_memory")
buffer.add_render_texture(texture, GraphicsOutput.RTM_copy_ram)  # "buffer" is your offscreen GraphicsOutput

GraphicsOutput.RTM_copy_ram does all the work for you: after each render, the frame is copied into the texture's RAM image.

Now you have access to the pixel data.

from panda3d.core import LColor

color = LColor()
texture.peek().fetch_pixel(color, 15, 15)  # read the texel at (15, 15)
print(color)
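
If you need more than single pixels (for a heightmap you probably do), the whole image can be read at once. A minimal sketch, assuming numpy is installed and a frame has already been rendered into the texture:

import numpy as np

# Pull the raw bytes that RTM_copy_ram copied down from the GPU.
data = texture.get_ram_image()
image = np.frombuffer(data, dtype=np.uint8)  # dtype must match the texture's component type
image = image.reshape((texture.get_y_size(), texture.get_x_size(),
                       texture.get_num_components()))
# Note: Panda3D stores rows bottom-up and channels in BGR(A) order.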

Thanks. This looks like the answer, but it'll take me a couple of days to hash it out.

When creating a FrameBufferProperties with 16 red bits for a greyscale image, I get this error:

:display(error): Could not get requested FrameBufferProperties; abandoning window.
requested: red_bits=16
got: color_bits=24 red_bits=8 green_bits=8 blue_bits=8 alpha_bits=8 force_hardware

Do I need to look up the buffer specs for my card, an AMD Radeon 7600? Doing so only leads me to the total memory on the card, a 32 MB buffer.

edit: PS there's this, but calling it the way they do, props.set_rgba_bits(16, 0, 0, 0), doesn't seem to help: EGL doesn't support R16i buffer format (among others) · Issue #1137 · panda3d/panda3d

edit: removed something I said in error
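
edit 2: for the record, my reading of the FrameBufferProperties API is that a 16-bit float format (like R16F) also has to be requested as float, not only by bit count. A minimal sketch of what I mean, untested on my hardware:

from panda3d.core import FrameBufferProperties

fb_prop = FrameBufferProperties()
fb_prop.set_rgba_bits(16, 0, 0, 0)  # bits in the red channel only
fb_prop.set_float_color(True)       # ask for a floating-point color format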

It seems that this whole bit-based configuration system is not working correctly; the buffer is always created as 24-bit.

from panda3d.core import *

engine = GraphicsEngine.get_global_ptr()
pipe = GraphicsPipeSelection.get_global_ptr().make_module_pipe("pandagl")

win_prop = WindowProperties()
win_prop.size = (800, 600)

fb_prop = FrameBufferProperties()
#fb_prop.set_rgba_bits(16, 0, 0, 0)
fb_prop.set_rgba_bits(8, 0, 0, 0)

win = engine.make_output(pipe, name="window", sort = 0, fb_prop = fb_prop, win_prop = win_prop, flags = GraphicsPipe.BF_require_window)
engine.render_frame()
print(win.get_fb_properties())  # still reports 24-bit color despite the request

It is clearly making a mistake here that needs to be reported on GitHub; I have no other ideas.

Thank you. I reported it. Is this why base.graphicsEngine.makeOutput() always returns None? I'm at a loss for what to do. I need this to begin my project, and apparently it was a bug in the previous version as well.

Yes, because make_output requires framebuffer settings, but due to the complex logic that derives the actual format from the declared requirements, the format ends up undefined.

It would be more reliable to specify the format explicitly than to declare requirements. Hypothetically, something like:

fb_prop = FrameBufferProperties()
fb_prop.format = "GL_R16F"  # note: a proposed API; FrameBufferProperties has no such attribute today

There would be less code and fewer errors.
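
In the meantime, a workaround sketch in that spirit: the format can at least be set on the texture that gets bound to the buffer (Texture.F_r16 does exist), though whether the driver honours it still depends on the framebuffer you were given.

from panda3d.core import Texture, GraphicsOutput

# Sketch: request the format on the texture before binding it to the buffer.
# "buffer" is assumed to be an offscreen GraphicsOutput from make_output().
texture = Texture("texture_in_memory")
texture.set_format(Texture.F_r16)  # 16-bit single red channel
buffer.add_render_texture(texture, GraphicsOutput.RTM_copy_ram)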

This is a bit strange, but in fact a window is required in order to create an offscreen buffer with the required parameters.

from panda3d.core import *

engine = GraphicsEngine.get_global_ptr()
pipe = GraphicsPipeSelection.get_global_ptr().make_module_pipe("pandagl")

win_prop = WindowProperties()
win_prop.size = (800, 600)

# Setting up the win/host.
fb_prop_win = FrameBufferProperties()
fb_prop_win.rgb_color = 1
fb_prop_win.color_bits = 24
fb_prop_win.depth_bits = 24
fb_prop_win.back_buffers = 1
win = engine.make_output(pipe, name = "win", sort = 0, fb_prop = fb_prop_win, win_prop = win_prop, flags = GraphicsPipe.BF_require_window)

# Setting up the buffer.
fb_prop = FrameBufferProperties()
fb_prop.set_rgba_bits(16, 0, 0, 0)

texture = Texture("texture_in_memory")
buffer = engine.make_output(pipe, name = "buffer", sort = 1, fb_prop = fb_prop, win_prop = win_prop, flags = GraphicsPipe.BF_refuse_window, gsg = win.get_gsg(), host = win)
buffer.add_render_texture(texture, GraphicsOutput.RTM_copy_ram)

print(buffer.get_fb_properties())
print(texture.get_format())
print(buffer.get_texture(0))
Types of buffers and textures (the Texture format table; note entry 27, r16):

1: depth stencil
2: color index
3: red
4: green
5: blue
6: alpha
7: rgb
8: rgb5
9: rgb8
10: rgb12
11: rgb332
12: rgba
13: rgbm
14: rgba4
15: rgba5
16: rgba8
17: rgba12
18: luminance
19: luminance alpha
20: luminance alphamask
21: rgba16
22: rgba32
23: depth component
24: depth component16
25: depth component24
26: depth component32
27: r16
28: rg16
29: rgb16
30: srgb
31: srgb alpha
32: sluminance
33: sluminance alpha
34: r32i
35: r32
36: rg32
37: rgb32
38: r8i
39: rg8i
40: rgb8i
41: rgba8i
42: r11 g11 b10
43: rgb9 e5
44: rgb10 a2
45: rg
46: r16i
47: rg16i
48: rgb16i
49: rgba16i
50: rg32i
51: rgb32i
52: rgba32i

(default: 7, rgb)

The output of the prints above:

color_bits=16 red_bits=16
27
2d_texture texture_in_memory
  2-d, 1024 x 1024 pixels, each 1 bytes, r16, compression off
  sampler wrap(u=repeat, v=repeat, w=repeat, border=0 0 0 1) filter(min=default, mag=default, aniso=0) lod(min=-1000, max=1000, bias=0)  no ram image
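
To complete the round trip back to Python, the buffer can then be rendered and read each frame. A sketch assuming numpy, and assuming the driver delivered unsigned 16-bit data for the r16 texture above:

import numpy as np

# Render into the offscreen buffer, then read the r16 target back.
engine.render_frame()
data = np.frombuffer(texture.get_ram_image(), dtype=np.uint16)
heights = data.reshape((texture.get_y_size(), texture.get_x_size()))
print(heights.min(), heights.max())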

In the documentation, the host and gsg arguments are listed as optional, but in fact they are not: it is their presence that guarantees you get the required buffer format.

Thanks. Where did you get “GL_R16F” from? I can’t find it in the FrameBufferProperties code. Also, is there a downside to just using the main window as the host?

I don't think there is a disadvantage in physically creating a window; to some extent it is logically correct for the rendering system, since the result must be output somewhere. The surprise is that two buffers are needed to get the correct result.

As for “GL_R16F”, this definition is in the Texture class.
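
Concretely, a quick check (my reading of the format table above: F_r16 is entry 27, which matches the printed output):

from panda3d.core import Texture

print(Texture.F_r16)  # -> 27, the "r16" entry in the list above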

This is described in detail here; however, I did not actually check FrameBufferProperties, because if you can assign a texture, you can set the buffer format accordingly.

By the way, I gave this as an example: if it were possible to set the required format directly, it would be more convenient. Today you can only set the bit parameters, and Panda3D will automatically create a buffer; however, it is not always clear how to get a specific format…

However, this logic exists, and I don’t quite understand how to use it…