Question about render-to-texture-array

To implement deferred shading, I want to create a texture containing 4 layers.
I tried to render to the texture with the code below, but I couldn't get it to work.

PT(Shader) gBufShader = Shader::load(Shader::SL_GLSL, "shaders/gbuffer.vert", "shaders/gbuffer.frag");
PT(GraphicsOutput) gBufOut = window->get_graphics_output()->make_texture_buffer("gBuf", 256, 256);
PT(Texture) gBufTex = gBufOut->get_texture();
PT(DisplayRegion) gBufRgn = gBufOut->make_display_region();
PT(Camera) gBufCam = new Camera("gBufCam");
NodePath gBufCamNode(gBufCam);
gBufOut->set_clear_color(LVecBase4f(0, 0, 0, 1));
//gBufTex->setup_2d_texture_array(256, 256, 4, Texture::T_float, Texture::F_rgba32);
gBufCam->set_initial_state(RenderState::make(ShaderAttrib::make(gBufShader), 1));

The window->get_graphics_output()->make_texture_buffer() function does not have a z parameter (number of layers).
How can I implement render-to-texture-array?

Don't use make_texture_buffer for such advanced usage. Use the lower-level make_output function instead, and then call add_render_texture to bind a texture array to it.

I adjusted my code like this:

void render(int size_x, int size_y)
{
	// GBuffer rendering
	PT(Shader)				shader = Shader::load(Shader::SL_GLSL, "shader.vert", "shader.frag");

	WindowProperties		window_properties/* = WindowProperties::get_default()*/;
	FrameBufferProperties	frame_buffer_properties = FrameBufferProperties::get_default();
	PT(GraphicsPipe)		graphics_pipe = framework.get_default_pipe();
	int						flags = GraphicsPipe::BF_refuse_window | GraphicsPipe::BF_size_power_2 | GraphicsPipe::BF_can_bind_every | GraphicsPipe::BF_rtt_cumulative;

	window_properties.set_size(size_x, size_y);

	PT(GraphicsEngine)		graphics_engine = framework.get_graphics_engine();
	PT(GraphicsOutput)		graphics_output = graphics_engine->make_output(graphics_pipe, "output", -1, frame_buffer_properties, window_properties, flags);
	PT(DisplayRegion)		display_region = graphics_output->make_display_region();
	PT(Texture)				texture = new Texture("texture");
	PT(Camera)				camera = new Camera("camera");
	NodePath				camera_node(camera);

	texture->setup_2d_texture_array(size_x, size_y, 4, Texture::T_float, Texture::F_rgba32);
	graphics_output->add_render_texture(texture, GraphicsOutput::RTM_bind_layered, GraphicsOutput::RTP_color);
	graphics_output->set_clear_color(LVecBase4f(0, 0, 0, 1));
	display_region->set_camera(camera_node);
	camera->set_initial_state(RenderState::make(ShaderAttrib::make(shader), 1));
}

But it doesn't work.
What is the problem?

What do you mean by “it doesn’t work”, exactly? What is the result you are seeing, and what is the result you are expecting?

Please note that RTM_bind_layered requires you to use a shader to indicate which layer of the array to render into. To render the same scene into N layers, you can use nodepath.set_instance_count(N) to render that many instances, and write the value of gl_InstanceID into gl_Layer in a geometry shader to indicate which layer each instance renders into.

Please note that you need a geometry shader to use RTM_bind_layered meaningfully, unless you use the GL_AMD_vertex_shader_layer extension, which allows you to write gl_Layer from a vertex shader. (Despite the name, this extension is also supported on NVIDIA GeForce 900 and 1000 series hardware.)
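To illustrate, a minimal pass-through geometry shader for this instanced layered approach might look something like the sketch below. The varying name v_instance_id is an assumption (it would be written in your vertex shader as `flat out int v_instance_id; ... v_instance_id = gl_InstanceID;`), and this is untested:

```glsl
#version 150

// One triangle in, one triangle out; this shader only adds layer selection.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Instance ID forwarded from the vertex shader (assumed varying name).
flat in int v_instance_id[];

void main() {
  for (int i = 0; i < 3; ++i) {
    gl_Position = gl_in[i].gl_Position;
    // Route this instance's triangle into the matching array layer.
    gl_Layer = v_instance_id[0];
    EmitVertex();
  }
  EndPrimitive();
}
```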

If you instead want Panda to handle rendering to the individual layers rather than a shader, you can just bind the texture regularly using RTM_bind_or_copy, then create one DisplayRegion per layer and call dr->set_target_tex_page(n) to indicate which layer of the texture array the associated camera renders into.
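As a rough sketch of that per-layer setup (reusing graphics_output and texture from the code above; the `scene` NodePath is an assumption, and this is untested):

```cpp
// Bind the whole array once; Panda selects the page per DisplayRegion.
graphics_output->add_render_texture(texture, GraphicsOutput::RTM_bind_or_copy,
                                    GraphicsOutput::RTP_color);

for (int layer = 0; layer < 4; ++layer) {
  // Each layer gets its own camera and display region.
  PT(Camera) layer_camera = new Camera("layer_camera");
  NodePath layer_camera_np = scene.attach_new_node(layer_camera);  // "scene" is assumed

  PT(DisplayRegion) dr = graphics_output->make_display_region();
  dr->set_camera(layer_camera_np);
  dr->set_target_tex_page(layer);  // render into this array layer
}
```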