About graphics buffer and attaching a camera to it

I tried to create a hidden buffer and then render to it. In Python, the code to attach a camera to it looks like:

mybuffer = base.win.makeTextureBuffer("My Buffer", 512, 512)
mytexture = mybuffer.getTexture()
mybuffer.setSort(-100)
mycamera = base.makeCamera(mybuffer)
myscene = NodePath("My Scene")
mycamera.node().setScene(myscene)

but in C++, window->make_camera() does not have a parameter for a buffer.
I also want the buffer to contain the same models I used for the window; I will then get the texture with get_texture().
Can someone tell me how it's done?

My code:

// MovingPanda.cpp : Defines the entry point for the console application. 
// 

#include "stdafx.h" 
#include "pandaFramework.h" 
#include "pandaSystem.h" 
#include "displayregion.h" 
#include "genericAsyncTask.h" 
#include "asyncTaskManager.h" 
#include "eventHandler.h"    
#include "buttonThrower.h" 
#include "PNMImage.h" 
#include "texture.h" 
#include <cv.h>
#include <highgui.h>

using namespace cv;

PandaFramework framework; 
PT(AsyncTaskManager) taskMgr = AsyncTaskManager::get_global_ptr(); 
PT(ClockObject) globalClock = ClockObject::get_global_clock(); 

PT(Texture) ptex; 

NodePath camera; 
NodePath camera2; 

NodePath pandaActor; 
WindowFramework *window; 
//GraphicsWindow window;
PT(GraphicsBuffer) buff;
PT(DisplayRegion) dRegion1; 

int done;
IplImage *image;

void event_button_down (const Event* evenmt, void* data) ; 
// Task to move the camera 
AsyncTask::DoneStatus SpinCameraTask(GenericAsyncTask* task, void* data) { 
	double time = globalClock->get_real_time(); 
	double angledegrees = time * 6.0; 
	double angleradians = angledegrees * (3.14 / 180.0); 
	// Using PNMImage too slow    
	//   PNMImage pImage; 
	//   window->get_graphics_output()->get_screenshot(pImage);    
	printf("%d\n",ptex);

	ptex = window->get_graphics_window()->get_texture();
    printf("%d\n",ptex);
	if(done)
	{	

		int width = ptex->get_x_size();
		int height = ptex ->get_y_size();
		image = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,4);
		const unsigned char *img = &(*(ptex->get_ram_image()));
		int ws = image -> widthStep;
		for(int i=0;i<2 * height;i++)
			for(int j=0;j<4 * width;j++)
				image->imageData[(i) * ws/2+ j/2] = img[(2 * height - i) * ws + j];

		cvShowImage("wind", image);
		cvReleaseImage(&image);
	}
	done = 1;

	//window->get_graphics_output()->get_texture()->get_ram_image().get_data(); 
	// Error in the above lines NULL pointer error. 
	//ptex->get_x_size();
	//bgrimg = ptex->get_ram_image().
	//ptex->get_ram_image();

	pandaActor.set_pos(10*cos(angleradians),10*sin(angleradians),0); 
	pandaActor.set_hpr(180+angledegrees,0,0); 
	return AsyncTask::DS_cont; 
} 

int main(int argc, char *argv[]) { 

	framework.open_framework(argc, argv); 
	framework.set_window_title("My Panda3D Window"); 

	// Open the window 

	//window = framework.open_window(); 
	window = framework.open_window();
	buff = window->get_graphics_output()->make_texture_buffer("my buffer",512,512);

	window->make_camera(buff);
	dRegion1 = window->get_graphics_output()->make_display_region(.75,1,.75,1);
	dRegion1->set_sort(10); 

	camera2 = window->make_camera();
	dRegion1->set_camera(camera2); 
	camera2.set_pos_hpr(0,0,40,0,-90,0);

	camera = window->get_camera(0); // Get the camera and store it 
	ptex = new Texture("tex");


	NodePath environ = window->load_model(framework.get_models(), "models/environment"); 
	environ.reparent_to(window->get_render()); 
	environ.set_scale(0.25 , 0.25, 0.25); 
	environ.set_pos(-8, 42, 0); 
	camera.set_pos(0,-20,3); 
	camera.set_hpr(0, 0, 0); 


	// Load our panda 
	pandaActor = window->load_model(framework.get_models(), "panda-model"); 
	pandaActor.set_scale(0.005); 
	pandaActor.reparent_to(window->get_render()); 

	cvNamedWindow( "wind", CV_WINDOW_AUTOSIZE );
	image = cvLoadImage( "C://Users//Public//Pictures//Sample Pictures//Desert.JPG");
	//	cvNamedWindow( "wind", CV_WINDOW_AUTOSIZE );
	cvShowImage( "wind", image );


	// Load the walk animation 
	window->load_model(pandaActor, "panda-walk4"); 
	window->loop_animations(0); 

	taskMgr->add(new GenericAsyncTask("Spins the camera", &SpinCameraTask, (void*) NULL)); 
	window->get_graphics_output()->setup_render_texture(ptex,false,true); 
	framework.main_loop(); 


	framework.close_framework(); 
	return (0); 
}

I am able to get the texture for the window, convert it to an OpenCV IplImage, and display it. But this copy also contains the extra display region, which I do not want. So I was trying to create a hidden buffer, render to that buffer, get its texture, and finally display it in the window (I do not know how that is done yet; help would be appreciated here too).
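As an aside, the copy loop above folds the vertical flip and the stride arithmetic into one expression, which is easy to get wrong. The flip by itself can be written as a plain row copy (a standalone sketch, independent of Panda3D and OpenCV; the function name and parameters are illustrative):

```cpp
#include <cstring>

// Copy a bottom-up BGRA image (as Panda3D's get_ram_image() returns it)
// into a top-down buffer, flipping it vertically row by row.
// dst_step is the destination row stride in bytes (e.g. IplImage::widthStep).
void flip_bgra_rows(const unsigned char *src, unsigned char *dst,
                    int width, int height, int dst_step) {
  const int src_step = width * 4;  // 4 bytes per BGRA pixel, no padding
  for (int row = 0; row < height; ++row) {
    // Row 0 of the destination comes from the last row of the source.
    const unsigned char *src_row = src + (height - 1 - row) * src_step;
    std::memcpy(dst + row * dst_step, src_row, src_step);
  }
}
```

Panda3D's RAM image is stored bottom-up in BGRA order, which happens to match the channel order OpenCV expects, so a row flip like this is all the conversion that is needed.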
Thanks.

You can create a camera and a DisplayRegion as described on the page Display Regions.

David

No, that's not the problem. I can create a display region and associate cameras with it, but when I then call texture.get_ram_image(), the other display region also shows up in the image, which I do not want. So I am trying to use a buffer to render just one display region.

I'll explain again:
window: main display region + another display region
texture: main display region + another display region
what I want: just the main display region in some texture, while the window keeps both display regions.
Can someone help me?

You can extract just a single DisplayRegion only with the slower PNMImage approach. If you want to go with the faster render-to-texture approach, you have to have the whole buffer.
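If the slower path is acceptable for your frame rate, it might look roughly like this (a sketch; DisplayRegion::get_screenshot() copies just that region's pixels into a PNMImage after a frame has been rendered):

```cpp
// Grab only one DisplayRegion's pixels into RAM (slow, but region-scoped).
PNMImage region_image;
if (dRegion1->get_screenshot(region_image)) {
  int width = region_image.get_x_size();
  int height = region_image.get_y_size();
  // ... convert region_image pixel by pixel into your OpenCV image ...
}
```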

David

But can I not render the image of the display region into a hidden buffer, take its get_texture(), and then finally merge the two images to give the final output?

The texture is bound to the overall offscreen buffer, not to the DisplayRegion. The DisplayRegion is just a region for rendering within the buffer, so the DisplayRegion doesn’t have a get_texture.

I’m a little confused what you’re asking for. Of course you can render anything you like to the offscreen buffer, and use that as the contents of your texture. But the entire buffer will always be in your texture.

David

Thanks for such a prompt reply.
I will try to explain better so I can get better help.

I am trying to create two cameras, both pointing at the same place and identical in all respects. I will take the output of one camera and show it in a display region; the output of the other goes into a buffer, and finally I take the buffer's texture and extract the image.
So I wrote it as:

	buffcam = new Camera("my cam");
	buffcamNP = window->get_render().attach_new_node(buffcam);
	buffcamNP.set_pos_hpr(0,0,40,0,-90,0);
	buffcam->set_scene(window->get_render());

and

	dRegion1 = window->get_graphics_output()->make_display_region(.75,1,.75,1);
	dRegion1->set_sort(10); 

	camera2 = window->make_camera();
	dRegion1->set_camera(camera2); 
	camera2.set_pos_hpr(0,0,40,0,-90,0);

but when I extract the texture from the buffer, I get an error when calling get_ram_image().

	ptex = buff->get_texture();
	printf("%d\n",ptex);
	if(done)
	{	

		int width = ptex->get_x_size();
		int height = ptex ->get_y_size();
		image = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,4);
		const unsigned char *img = &(*(ptex->get_ram_image()));
		int ws = image -> widthStep;
		for(int i=0;i<2 * height;i++)
			for(int j=0;j<4 * width;j++)
				image->imageData[(i) * ws/2+ j/2] = img[(2 * height - i) * ws + j];

		cvShowImage("wind", image);
		cvReleaseImage(&image);
	}
	done = 1;

Can you help me here?
The address of ptex is a nonzero, valid address, but a runtime error occurs at img = &(*(ptex->get_ram_image())).

My entire code is:

// MovingPanda.cpp : Defines the entry point for the console application. 
// 

#include "stdafx.h" 
#include "pandaFramework.h" 
#include "pandaSystem.h" 
#include "displayregion.h" 
#include "genericAsyncTask.h" 
#include "asyncTaskManager.h" 
#include "eventHandler.h"    
#include "buttonThrower.h" 
#include "PNMImage.h" 
#include "texture.h" 
#include <cv.h>
#include <highgui.h>

using namespace cv;

PandaFramework framework; 
PT(AsyncTaskManager) taskMgr = AsyncTaskManager::get_global_ptr(); 
PT(ClockObject) globalClock = ClockObject::get_global_clock(); 

PT(Texture) ptex; 
PT(Texture) tex2; 

NodePath camera; 
NodePath camera2; 
PT(Camera)  buffcam; 
NodePath myscene;
NodePath buffcamNP; 
NodePath pandaActor; 
WindowFramework *window; 
//GraphicsWindow window;
PT(GraphicsOutput) buff;
PT(DisplayRegion) dRegion1; 

int done;
IplImage *image;

void event_button_down (const Event* evenmt, void* data) ; 
// Task to move the camera 
AsyncTask::DoneStatus SpinCameraTask(GenericAsyncTask* task, void* data) { 
	double time = globalClock->get_real_time(); 
	double angledegrees = time * 6.0; 
	double angleradians = angledegrees * (3.14 / 180.0); 
	// Using PNMImage too slow    
	//   PNMImage pImage; 
	//   window->get_graphics_output()->get_screenshot(pImage);    
	printf("%d\n",ptex);

	//ptex = window->get_graphics_window()->get_texture();
    ptex = buff->get_texture();
	printf("%d\n",ptex);
	if(done)
	{	

		int width = ptex->get_x_size();
		int height = ptex ->get_y_size();
		image = cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,4);
		const unsigned char *img = &(*(ptex->get_ram_image()));
		int ws = image -> widthStep;
		for(int i=0;i<2 * height;i++)
			for(int j=0;j<4 * width;j++)
				image->imageData[(i) * ws/2+ j/2] = img[(2 * height - i) * ws + j];

		cvShowImage("wind", image);
		cvReleaseImage(&image);
	}
	done = 1;

	//window->get_graphics_output()->get_texture()->get_ram_image().get_data(); 
	// Error in the above lines NULL pointer error. 
	//ptex->get_x_size();
	//bgrimg = ptex->get_ram_image().
	//ptex->get_ram_image();

	pandaActor.set_pos(10*cos(angleradians),10*sin(angleradians),0); 
	pandaActor.set_hpr(180+angledegrees,0,0); 
	return AsyncTask::DS_cont; 
} 

int main(int argc, char *argv[]) { 

	framework.open_framework(argc, argv); 
	framework.set_window_title("My Panda3D Window"); 

	// Open the window 

	//window = framework.open_window(); 
	window = framework.open_window();
	
	buff = window->get_graphics_output()->make_texture_buffer("my buffer",512,512);
	buff->set_sort(-100);
	tex2 = buff->get_texture();
	buffcam = new Camera("my cam");
	buffcamNP = window->get_render().attach_new_node(buffcam);
	buffcamNP.set_pos_hpr(0,0,40,0,-90,0);
	buffcam->set_scene(window->get_render());
	
	
	dRegion1 = window->get_graphics_output()->make_display_region(.75,1,.75,1);
	dRegion1->set_sort(10); 

	camera2 = window->make_camera();
	dRegion1->set_camera(camera2); 
	camera2.set_pos_hpr(0,0,40,0,-90,0);

	camera = window->get_camera(0); // Get the camera and store it 
	ptex = new Texture("tex");


	NodePath environ = window->load_model(framework.get_models(), "models/environment"); 
	environ.reparent_to(window->get_render()); 
	environ.set_scale(0.25 , 0.25, 0.25); 
	environ.set_pos(-8, 42, 0); 
	camera.set_pos(0,-20,3); 
	camera.set_hpr(0, 0, 0); 


	// Load our panda 
	pandaActor = window->load_model(framework.get_models(), "panda-model"); 
	pandaActor.set_scale(0.005); 
	pandaActor.reparent_to(window->get_render()); 

	cvNamedWindow( "wind", CV_WINDOW_AUTOSIZE );
	image = cvLoadImage( "C://Users//Public//Pictures//Sample Pictures//Desert.JPG");
	//	cvNamedWindow( "wind", CV_WINDOW_AUTOSIZE );
	cvShowImage( "wind", image );


	// Load the walk animation 
	window->load_model(pandaActor, "panda-walk4"); 
	window->loop_animations(0); 

	taskMgr->add(new GenericAsyncTask("Spins the camera", &SpinCameraTask, (void*) NULL)); 
	window->get_graphics_output()->setup_render_texture(ptex,false,true); 
	framework.main_loop(); 


	framework.close_framework(); 
	return (0); 
}

I don’t understand why you’re rendering both to the window and also to an offscreen buffer. And you have both tex2 and ptex as well? It seems a little confused. Why do the same rendering twice?

Anyway, you’re also a bit confused as to whether you are rendering the texture to RAM or not. The default is not to render to RAM (because that’s slower), but if you don’t render to RAM, then tex->get_ram_image() will be NULL, which will cause a program error if you just dereference it like you are doing.

You should be more robust with your error checking. Instead of:

const unsigned char *img = &(*(ptex->get_ram_image()));

Do something more like:

if (ptex->has_ram_image()) {
  CPTA_uchar ram_image = ptex->get_ram_image();
  const unsigned char *img = ram_image;
  ...
}

It’s important to save the result of ptex->get_ram_image() into a temporary variable before you cast it to an unsigned char pointer, so that it will be held (not deleted) for the duration that you are processing it. It’s also a good idea to check has_ram_image() before you just grab it and assume it exists.

Anyway, you may also want to pass true for the to_ram parameter to make_texture_buffer(). This is necessary to make it render to RAM. This is for tex2, not ptex, so maybe that doesn’t matter in your case; you are already passing true for the to_ram parameter to setup_render_texture(), which you’re using for ptex.

Still, there’s no guarantee that the RAM image will appear immediately after you do that. It may take a frame or two. So that’s why it’s also a good idea to check has_ram_image().

David

Thanks, but I am still fairly confused.

I wanted a main window with two outputs from two cameras (in two different display regions), and also a RAM copy of the image from one of those cameras. You told me I could not extract the texture of just one display region, so I decided to create one more camera (three in total), two of which point at the same location. That way I can associate one of the two to a display region and the other to the buffer, so I get the RAM copy from the buffer and at the same time get the output in the display region.

Also, my confusion arises partly from the documentation. The C++ page http://www.panda3d.org/manual/index.php/Low-Level_Render_to_Texture says:

PT(GraphicsOutput) mybuffer;
PT(Texture) mytexture;
PT(Camera) mycamera;
NodePath mycameraNP;
NodePath myscene;
 
mybuffer = window->get_graphics_output()->make_texture_buffer("My Buffer", 512, 512);
mytexture = mybuffer->get_texture();
mybuffer->set_sort(-100);
mycamera = new Camera("my camera");
mycameraNP = window->get_render().attach_new_node(mycamera);
myscene = NodePath("My Scene");
mycamera->set_scene(myscene);

while the same in Python is:

mybuffer = base.win.makeTextureBuffer("My Buffer", 512, 512)
mytexture = mybuffer.getTexture()
mybuffer.setSort(-100)
mycamera = base.makeCamera(mybuffer)
myscene = NodePath("My Scene")
mycamera.node().setScene(myscene)

The Python documentation has the line
mycamera = base.makeCamera(mybuffer)
so I can see that we have a new camera which renders its output into the buffer. The C++ version has no such call: we create a buffer and a camera, but never tell the camera to render into the buffer, and I could not find a C++ equivalent of base.makeCamera(mybuffer).
So how do I go about it?
Also, do I incur a big time penalty if I add another camera to render into the buffer? Is there a way for the same camera (camera2 in this case) to render its image into both the buffer and the display region?
Thanks again; I will work on checking the RAM images.


I see, there was a mistake on that page; it omitted the creation of the DisplayRegion. I’ve just added the relevant lines. I also removed the reference to setScene(), since that’s not recommended (you should use reparentTo() instead).
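For reference, the missing step is to create a DisplayRegion on the buffer and point it at the new camera; roughly (a sketch along the lines of the corrected page):

```cpp
// C++ equivalent of mycamera = base.makeCamera(mybuffer): create the
// camera node, then give it a DisplayRegion covering the whole buffer.
PT(GraphicsOutput) mybuffer =
    window->get_graphics_output()->make_texture_buffer("My Buffer", 512, 512);
PT(Texture) mytexture = mybuffer->get_texture();
mybuffer->set_sort(-100);

PT(Camera) mycamera = new Camera("my camera");
NodePath mycameraNP = window->get_render().attach_new_node(mycamera);

// This is the step the page omitted: the DisplayRegion is what ties the
// camera to the buffer, so the camera's view is rendered into it.
PT(DisplayRegion) region = mybuffer->make_display_region();
region->set_camera(mycameraNP);
```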

Yes, each time you add a new camera, you force the scene to be rendered all over again. So two cameras means double the render time; three cameras means triple the render time.

Not unless the buffer and the DisplayRegion are the same thing.

So, if I understand you, you want to render an image into an offscreen buffer and also see the same image in a DisplayRegion on the main window? Why not just render offscreen, then apply the resulting texture to a card that you attach to render2d? That way you only render the scene once, but you still see it onscreen.
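Concretely, that suggestion might look something like this (a sketch; CardMaker and WindowFramework::get_aspect_2d() are standard Panda3D API, but the frame coordinates are just illustrative):

```cpp
// Requires #include "cardMaker.h".
// Render the scene once into an offscreen buffer, then display the
// resulting texture on a flat card in the 2-D scene graph.
PT(GraphicsOutput) mybuffer =
    window->get_graphics_output()->make_texture_buffer("scene buffer", 512, 512);
PT(Texture) scene_tex = mybuffer->get_texture();
// ... attach a camera to the buffer via a DisplayRegion as usual ...

CardMaker cm("preview card");
cm.set_frame(0.5, 1.0, 0.5, 1.0);  // upper-right corner of the window
NodePath card = window->get_aspect_2d().attach_new_node(cm.generate());
card.set_texture(scene_tex);
```

The scene is rendered only once, into the buffer; the card is just a textured quad, so it costs almost nothing extra.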

David
