Creating a Second Camera

Well, I don’t know. Are you sure the sphere is not co-located with the camera?

David

If I comment out the following it works:

camera_np.look_at(sphere);

Great!

However, the second camera doesn’t track the sphere. It duplicates the first camera. Any idea how to fix this?

And I have a few more questions:

At the moment the second window is the same size as the main window - how do I change its size (and its title)?

The second thing I’d like to be able to do is change the colours of the objects in the second window without affecting the main window - what’s the best way to do that?

Do you want the second camera to adjust to face the sphere every frame? Then you will need to call look_at(sphere) every frame.
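A sketch of doing that with Panda3D's C++ task system (the struct, task name, and the assumption that camera_np and sphere are available at setup time are illustrative, not anything from your code):

```cpp
// Sketch: re-aim the second camera at the sphere once per frame via a
// task. Assumes camera_np and sphere are valid NodePaths at setup time;
// they are passed to the task through its user-data pointer.
struct TrackData {
  NodePath camera_np;
  NodePath sphere;
};

AsyncTask::DoneStatus track_sphere(GenericAsyncTask *task, void *data) {
  TrackData *td = static_cast<TrackData *>(data);
  td->camera_np.look_at(td->sphere);  // re-orient toward the sphere
  return AsyncTask::DS_cont;          // keep the task running every frame
}

// During setup:
//   TrackData *td = new TrackData{camera_np, sphere};
//   AsyncTaskManager::get_global_ptr()->add(
//       new GenericAsyncTask("track_sphere", &track_sphere, td));
```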

To set the size of the window, set it up in the WindowProperties structure you pass to the open_window() call.

Rendering the objects in the scene in a different color in the second window is possible; there are a few different approaches. One approach is to apply an initial state to the secondary camera via camera->set_initial_state(). This will apply that same state to all objects viewed by this secondary camera, as if you had applied it to render. (You might need to set the override on some attributes to nonzero, to override the attributes on the nodes themselves.)

That will limit you to rendering all objects the same color. If you wanted to render each object in a different color, you have to have a way to set a different per-object state for this secondary camera. That’s possible, but a little clumsy: you have to use the tag system, and associate a different state with each value of a tag, then assign the different tags to the nodes appropriately. There are (Python-based) examples of this being done elsewhere in the forums.

Or, you could simply duplicate the scene graph for the secondary camera, and call set_color() differently on each object in the duplicate scene graph.
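A minimal sketch of that approach (here model is the NodePath to duplicate and render2 is the root of the scene graph the secondary camera renders; both names are illustrative):

```cpp
// Sketch: duplicate a model's subtree under the secondary scene root and
// recolor only the copy; the original under render is untouched.
NodePath copy = model.copy_to(render2);
copy.set_color(0.0f, 1.0f, 0.0f, 1.0f, 1);  // green, override priority 1
```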

Note, by the way, that if all you wanted was a second window, you don’t need to create an offscreen buffer first: you can just open a second window and set up your new camera within it. Rendering to an offscreen buffer, copying that to a texture, then rendering the texture in the window is a pretty circuitous route to get a secondary window.

David

I’m opening the window using

WindowFramework *texWindow = framework.open_window(gsg->get_pipe(), gsg);

How do I get the WindowProperties structure from this?

Use the four-parameter form of open_window():

WindowProperties props;
props.set_size(128, 128);
int flags = GraphicsPipe::BF_require_window;
WindowFramework *texWindow = framework.open_window(props, flags, gsg->get_pipe(), gsg); 

David

I’d like to try the method suggested above.

Regarding the rendering, what I’d like to do is render the object the camera is looking at in one colour (e.g. green) and everything else in the scene in a different colour (e.g. black).

My first question is

camera->set_initial_state(renderState);

requires a RenderState object. The docs say the following:

How do I create it?

Other questions to follow. :)

I’ve figured out the following:

// Set up the camera for the offscreen buffer.
PT(Camera) newCamera = new Camera("secondCamera");
secondCamera = window->get_camera_group().attach_new_node(newCamera);
const RenderState *rs = secondCamera.get_state(Thread::get_current_thread());

This returns the current RenderState (?)

What I’m not sure about is how to change the attributes of the render state.

Altering the properties of a RenderState will actually return a new RenderState rather than modifying the one you’re operating on.

Could you give me an example of changing the attributes of the render state.

You can create a new RenderState using one of the make() methods, as the comment suggests:

CPT(RenderState) rs = RenderState::make_empty();

And then you can add attribs by calling add_attrib() repeatedly and saving the results:

rs = rs->add_attrib(DepthTestAttrib::make(DepthTestAttrib::M_off));

This is the general model for manipulating both RenderAttrib and RenderState objects. Note that there are numerous more examples of this being done within the Panda code itself; for instance, see NodePath::set_depth_test() and related methods.

Note also that, as a reference-counted object, you should always store a RenderState pointer in a CPT(RenderState) rather than a const RenderState *. (And it’s a CPT rather than a PT, because you can only get a const pointer to a RenderState–they’re immutable by design, to facilitate caching.)

David

What I’d like to do next is render the object the camera is looking at in one colour (e.g. green) and everything else in the scene in a different colour (e.g. black).

So what I need to do is the following:

(1) Set the second camera to render everything in black.

(2) Set the colour of the objects I’m interested in using tags to a different colour.

For the first part, what RenderAttrib should I use to set the rendering colour?

I understand how to set a tag for an object, but how do I get the camera to use this during rendering?

This is very easy to do if you have only one camera looking at the scene, or if you want all of your cameras to show the same color-change effect. In that case, you just do:

render.set_color(0, 0, 0, 1, 1);
selectedObject.set_color(0, 1, 0, 1, 1);

The first line sets all objects’ colors to black (with r, g, b, a = (0, 0, 0, 1)); the fifth parameter is an override priority, to replace any color that may already be set on the objects themselves.

If, however, you only want to do this color modification on your auxiliary camera, it’s clumsier. This would be something like this:

CPT(RenderState) initial_state = RenderState::make(ColorAttrib::make_flat(Colorf(0, 0, 0, 1)), 1);
aux_camera->set_initial_state(initial_state);
CPT(RenderState) green_state = RenderState::make(ColorAttrib::make_flat(Colorf(0, 1, 0, 1)), 1);
aux_camera->set_tag_state_key("aux");
aux_camera->set_tag_state("green", green_state);
selectedNode->set_tag("aux", "green");

The idea is to define a “tag state key”, which is the name of the tag whose value indicates a state change; for each value of that tag, you associate a particular state, which the camera will apply when it encounters a node carrying that value.

Tag state keys are described in the manual under Multi-Pass Rendering.

David

Great, that works perfectly!

Is there a way to switch off texture rendering for the second camera?

initial_state = initial_state->add_attrib(TextureAttrib::make_off(), 1);

David

For the last part of what I want to do I need to test a texel value (only one) in the buffer.

I’m using RTM_copy_ram for this.

Previously it was suggested that I could use a TexturePeeker or that I could directly examine the buffer using tbuf->get_ram_image().

I’m not clear about how to do this.

If I use a TexturePeeker how do I attach it to the texture?

You still need to use RTM_copy_ram. TexturePeeker only examines the RAM image of the texture; it cannot directly examine the copy on the GPU (nothing can, other than a shader program).

TexturePeeker is a convenience class to examine the color of a texture at a particular (u, v) coordinate. If what you want is the value at a particular texel, it’s probably not the tool for you. But for the record, you get a TexturePeeker by calling Texture::peek().

Texture::get_ram_image() returns an array of bytes, formatted according to the specifications in the Texture itself, e.g. according to get_x_size(), get_y_size(), get_num_components(), get_component_width(), get_component_type(), and get_format(). With this information you can calculate the particular byte or bytes that hold the value you need and look it up there.
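The arithmetic that calculation involves can be sketched like this (a self-contained helper, not Panda3D API; the function name and parameters are illustrative, and it assumes a simple 2-D texture with rows stored contiguously — note that Panda3D stores RAM images bottom-up, and RGBA textures in BGRA channel order):

```cpp
#include <cstddef>

// Sketch: byte offset of one channel of one texel in a RAM image, given
// the layout information described above. Each texel occupies
// num_components * component_width bytes; rows of `width` texels are
// stored contiguously.
std::size_t texel_byte_offset(int x, int y, int channel,
                              int width, int num_components,
                              int component_width) {
  std::size_t texel = static_cast<std::size_t>(y) * width + x;
  return texel * num_components * component_width
       + static_cast<std::size_t>(channel) * component_width;
}
```

For example, in a 128×128 BGRA texture with one byte per component, the green channel (channel 1 in BGRA) of the texel at (64, 64) would be at texel_byte_offset(64, 64, 1, 128, 4, 1).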

David

I’m a bit confused about this.

What I need to know is the colour value at a certain position in the buffer.

Above you wrote the following:

I’m not clear about the difference between Texture::peek() and Texture::get_ram_image(). Could you explain this a bit more.

Also, could you give me a short example of how to use TexturePeeker (I’m having trouble with the syntax).

I’ve implemented the following however I think the last line is wrong.

How do I actually get the value I want in the array?

CPTA(uchar) textureArray = tbuf->get_ram_image();
unsigned int texSize = textureArray.size();
std::cout << "Texture Size: " << texSize << " (uchars)" << std::endl;
unsigned int index = (texSize / 2);
int numComponents = tbuf->get_num_components();
std::cout << "numComponents: " << numComponents << std::endl;
Texture::ComponentType componentType = tbuf->get_component_type();
std::cout << "component type: " << componentType << std::endl;
Texture::Format format = tbuf->get_format();
std::cout << "format: " << format << std::endl;
unsigned int offset = 1;

// If the format is RGBA, then we need the middle value of the array plus one.
uchar value = textureArray.get_element(index + offset);

That seems fine, if all you’re interested in is the dead center pixel (assuming an even number of pixels, which is a reasonable assumption).

Your last line is OK, though you could also write it this way (a little more C++-y):

uchar value = textureArray[index + offset];

It’s the difference between the mathematical abstraction of the texture as a series of colors ranging over (0, 1) in two dimensions, and the practical definition of a texture as an array of texels.

TexturePeeker treats a texture as a mathematical abstraction, and includes functions to filter down a rectangle of texels in a particular (u, v) range into a single color, similar to what the graphics hardware does when it renders the texture (except that the computation is made on the CPU).

If all you want is a single texel in the center of the image, the TexturePeeker can also tell you this:

PT(TexturePeeker) tp = tbuf->peek();
Colorf result;
tp->lookup(result, 0.5, 0.5);
uchar value = (uchar)(result[1] * 255.0);

If you look at the code for TexturePeeker::lookup(), you’ll see that it’s essentially similar to what you’re doing in the above example.
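For a texel other than the dead center, the (u, v) to pass to lookup() can be computed from the integer texel coordinate. A minimal sketch (the function name is illustrative; it assumes u and v each run over [0, 1] across the image, so a texel's center sits half a texel in from its edge):

```cpp
// Sketch: map an integer texel coordinate to the (u or v) center of that
// texel, suitable for passing to TexturePeeker::lookup(). `size` is the
// texture's extent along that axis (get_x_size() or get_y_size()).
double texel_center(int i, int size) {
  return (i + 0.5) / size;
}
```

So the center texel of a 128-texel-wide image is at texel_center(64, 128), i.e. u = 0.50390625, which is why 0.5 works as an approximation in the example above.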

David