Create only a graphical buffer and no graphical screen

I want to render only to a graphical buffer and not send my graphics to the screen.

I will periodically grab the buffer with OpenCV and send the images over UDP to another computer (using GStreamer), where the graphics will be shown.

How can I make a buffer without it being shown on the screen of the local computer? I read the API and thought that creating an OsMesaGraphicsPipe would be the trick (it only renders to a buffer?), but I could not make it work.

Below is the code that creates the OpenCV image (just once, not periodically); I can only create it after the main loop has run (or after a few PandaFramework::do_frame calls).

I suspect that I only have to change the line WindowFramework *window = framework.open_window(); and call it with a pipe, but I don't know how to do this successfully.
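
Roughly what I have in mind is something like this (just an untested sketch of the open_window overload that takes explicit properties, flags and a pipe; the 800x600 size is a placeholder and I have not managed to make a variant of this work):

// Untested sketch: open_window() also accepts WindowProperties, creation flags
// and an explicit GraphicsPipe (requires #include "graphicsPipeSelection.h").
PT(GraphicsPipe) pipe = GraphicsPipeSelection::get_global_ptr()->make_default_pipe();
WindowFramework *window = framework.open_window(
	WindowProperties::size(800, 600),
	GraphicsPipe::BF_refuse_window,   // ask for an offscreen buffer instead of a window
	pipe,
	NULL);                            // let the framework create the GSG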

//g++ one.cxx -o one -I/usr/include/panda3d -L/usr/lib/panda3d -I/usr/include/python2.7 -lp3framework -lpanda -lpandafx -lpandaexpress -lp3dtoolconfig -lp3dtool -lp3pystub -lp3direct -lopencv_core -lopencv_highgui

#include "pandaFramework.h"
#include "pandaSystem.h"

#include "genericAsyncTask.h"
#include "asyncTaskManager.h"

#include "cIntervalManager.h"
#include "cLerpNodePathInterval.h"
#include "cMetaInterval.h"

#include "auto_bind.h"
#include "directionalLight.h"

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>

using namespace cv;

// Global stuff
PandaFramework framework;
PT(AsyncTaskManager) taskMgr = AsyncTaskManager::get_global_ptr();
PT(ClockObject) globalClock = ClockObject::get_global_clock();
NodePath camera;

AsyncTask::DoneStatus example_task(GenericAsyncTask* task, void* data){
	WindowFramework *window = (WindowFramework *) data;
	PT(DisplayRegion) displayRegion = window->get_display_region_3d();
	int width = displayRegion->get_pixel_width();
	int height = displayRegion->get_pixel_height();

	PT(Texture) tex = displayRegion->get_screenshot();
	// Copy the frame out of the texture's RAM image, in BGR byte order for OpenCV.
	CPTA_uchar ding = tex->get_ram_image_as("BGR");
	string buffer = ding.get_data();
	void *c = (void *) &buffer[0];
	Mat cv_frame = Mat(cv::Size(width, height), CV_8UC3, c);
	cv::flip(cv_frame, cv_frame, 0);   // Panda3D stores rows bottom-up; flip for OpenCV
	imshow("CV window", cv_frame);
	waitKey(2000);

    return AsyncTask::DS_done;

}


int main(int argc, char *argv[]) {
	// Open a new window framework and set the title
	framework.open_framework(argc, argv);
	framework.set_window_title("My Panda3D Window");

	// Open the window
	//OsMesaGraphicsPipe *pipe = new OsMesaGraphicsPipe();
	WindowFramework *window = framework.open_window();
	camera = window->get_camera_group(); 

	// Load the environment model
	NodePath Actor = window->load_model(framework.get_models(), "panda.egg");
	// Load the walk animation
	window->load_model(Actor, "panda-walk.egg");

	AnimControlCollection anim_collection;
	auto_bind(Actor.node(), anim_collection);
	anim_collection.pose("panda-walk", 0);

	Actor.reparent_to(window->get_render());
	camera.set_pos(0, -30, 6);

	PT(DirectionalLight) d_light;
	d_light = new DirectionalLight("my d_light");
	NodePath dlnp = window->get_render().attach_new_node(d_light);

    PT(GenericAsyncTask) task;
    //Add the task_func function with an upon_death callback
    task = new GenericAsyncTask("CV bufferke", &example_task, (void*) window);
	task->set_delay(1);    // delay so that the main loop runs a frame first
	taskMgr->add(task);

	// Run the engine.
	// The loop below is equivalent to framework.main_loop(),
	// but this way another main loop (e.g. GStreamer's) can drive the Panda frames.
	while(1) {	
		framework.do_frame(Thread::get_current_thread());  
	}

	framework.close_framework();
	return (0);
}

A regular graphics pipe will work; you probably don’t want to be using the osmesa pipe. Instead of using a WindowFramework, you just have to use GraphicsEngine::get_global_ptr()->make_output with BF_refuse_window as a flag to create the buffer, or use the higher-level make_buffer.

Some pipes don’t support offscreen buffers, though.
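
For reference, a minimal sketch of what that could look like in C++ (untested; the 400x420 size and the buffer name are just placeholders):

// Untested sketch: create an offscreen buffer directly on the GraphicsEngine,
// without going through WindowFramework.
GraphicsEngine *engine = GraphicsEngine::get_global_ptr();
PT(GraphicsPipe) pipe = GraphicsPipeSelection::get_global_ptr()->make_default_pipe();

PT(GraphicsOutput) buffer = engine->make_output(
	pipe,
	"offscreen buffer",
	0,                                        // sort order
	FrameBufferProperties::get_default(),
	WindowProperties::size(400, 420),
	GraphicsPipe::BF_refuse_window);          // request a buffer, never a window

if (buffer == NULL) {
	// Some pipes cannot create a buffer without a host window.
	cout << "Could not create an offscreen buffer" << endl;
}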

Thanks, rdb, for your reply.

I created a graphics engine and a graphics output buffer, and removed the WindowFramework.

However, a lot of things (cameras etc.) were hanging off the WindowFramework, so I had to find another way to wire my actor, light and camera together.

I took my inspiration from (pretty much copied) the post "Shadows in Showbase AND offscreen buffer". I seem to be worse off than before, because I get an error when I run the program:

OpenGL
:display:glxdisplay(warning): No suitable FBConfig contexts available; using XVisual only.
depth_bits=24 color_bits=24 alpha_bits=8 stencil_bits=8 back_buffers=1 force_hardware=1 
After buffer creation 
Segmentation fault (core dumped)

The segmentation fault occurs when I call buffer->make_display_region().
The default pipe seems to be created correctly; it is OpenGL.

Can anyone tell me why the display region cannot be made?
EDIT: Actually, every call to a method on buffer results in a segmentation fault. Clearly the buffer is not created (correctly)!
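
A small check right after the make_output call in the program below would confirm this (sketch only; most likely make_output returned NULL because the pipe could not create the requested output, and dereferencing that NULL pointer is what causes the crash):

	// Sketch: guard against make_output() returning NULL instead of crashing later.
	if (buffer == NULL) {
		cout << "make_output returned NULL: no offscreen buffer available" << endl;
		return 1;
	}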

//g++ two.cxx -o two -I/usr/include/panda3d -L/usr/lib/panda3d -I/usr/include/python2.7 -lp3framework -lpanda -lpandafx -lpandaexpress -lp3dtoolconfig -lp3dtool -lp3pystub -lp3direct -lopencv_core -lopencv_highgui

#include "graphicsPipeSelection.h"
#include "pandaFramework.h"
#include "pandaSystem.h"

#include "genericAsyncTask.h"
#include "asyncTaskManager.h"

#include "cIntervalManager.h"
#include "cLerpNodePathInterval.h"
#include "cMetaInterval.h"

#include "auto_bind.h"
#include "directionalLight.h"

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>

PandaFramework framework;
PT(AsyncTaskManager) taskMgr = AsyncTaskManager::get_global_ptr();
PT(ClockObject) globalClock = ClockObject::get_global_clock();

int width  = 400;
int height = 420; 

AsyncTask::DoneStatus example_task(GenericAsyncTask* task, void* data){
	GraphicsOutput *buffer = (GraphicsOutput *) data;
	Texture * tex = new Texture();
	buffer->add_render_texture(tex, GraphicsOutput::RTM_copy_ram, GraphicsOutput::RTP_color);
	// Note: the texture only receives pixel data after a frame has been rendered
	// with it attached, so ptr may still be NULL at this point.
	CPTA_uchar ding = tex->get_ram_image_as("BGR");
	void * ptr = (void *) ding.p();
	cv::Mat cv_frame = cv::Mat(cv::Size(width, height), CV_8UC3, ptr);
	cv::flip(cv_frame, cv_frame, 0);
	cv::imshow("CV window", cv_frame);
	cv::waitKey(2000);

	return AsyncTask::DS_done;
}

int main(int argc, char *argv[]) {

	NodePath root = NodePath("root");
	root.set_shader_auto();
 
	Loader load;
	PT(PandaNode) actor = load.load_sync("/usr/share/panda3d/models/panda.egg");
	NodePath actor_np = root.attach_new_node(actor);

	PT(DirectionalLight) d_light = new DirectionalLight("my d_light");
	NodePath dlnp = root.attach_new_node(d_light);
	root.set_light(dlnp);

	PT(Camera) camera = new Camera("camera");
	NodePath camera_np = root.attach_new_node(camera);
	camera_np.set_pos(0, -30, 6);	

	PT(GraphicsEngine) ge = GraphicsEngine::get_global_ptr();
	GraphicsPipeSelection *ps = GraphicsPipeSelection::get_global_ptr();
	PT(GraphicsPipe) pipe = ps->make_default_pipe();
	cout << pipe->get_interface_name() << endl;

	PT(GraphicsOutput) buffer = ge->make_output(
		pipe, //
		"my buffer",
		1, 
		FrameBufferProperties::get_default(),
   		WindowProperties::size(width, height), 
		GraphicsPipe::BF_refuse_window);

	cout << "After buffer creation "  << endl;

	PT(GenericAsyncTask) task = new GenericAsyncTask("CV bufferke", &example_task, (void*) buffer);
	task->set_delay(1);
	taskMgr->add(task);

	PT(DisplayRegion) dr;
	dr = buffer->make_display_region();   // <-- segmentation fault occurs here
	dr->set_camera(camera_np);

	while(1) {	
		framework.do_frame(Thread::get_current_thread());  
	}

	framework.close_framework();

	return (0);
}

People, it is a bug in Panda3D: it is not possible to create an offscreen buffer without having ANY window.
That is exactly what I want to do. I want to use Panda3D for rendering on one machine and move the image to another.
To test that I can capture the buffer, I show it via OpenCV in the program below.

The program below illustrates the problem. If you compile it as-is, it works, but if you change BF_require_window to BF_refuse_window, the program fails.

//g++ one.cxx -o one -I/usr/include/panda3d -L/usr/lib/panda3d -I/usr/include/python2.7 -lp3framework -lpanda -lpandafx -lpandaexpress -lp3dtoolconfig -lp3dtool -lp3pystub -lp3direct -lopencv_core -lopencv_highgui

#include "graphicsPipeSelection.h"
#include "pandaFramework.h"
#include "pandaSystem.h"

#include "genericAsyncTask.h"
#include "asyncTaskManager.h"

#include "cIntervalManager.h"
#include "cLerpNodePathInterval.h"
#include "cMetaInterval.h"

#include "auto_bind.h"
#include "directionalLight.h"

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>

using namespace cv;

// Global stuff
PandaFramework framework;
PT(AsyncTaskManager) taskMgr = AsyncTaskManager::get_global_ptr();
PT(ClockObject) globalClock = ClockObject::get_global_clock();

AsyncTask::DoneStatus example_task(GenericAsyncTask* task, void* data){
	WindowFramework *window = (WindowFramework *) data;
	PT(DisplayRegion) displayRegion = window->get_display_region_3d();
	int width = displayRegion->get_pixel_width();
	int height = displayRegion->get_pixel_height();

	PT(Texture) tex = displayRegion->get_screenshot();
	CPTA_uchar ding = tex->get_ram_image_as("BGR");
	void * ptr = (void *) ding.p();
	Mat cv_frame = Mat(cv::Size(width, height), CV_8UC3, ptr);
    	flip(cv_frame,cv_frame,0);
	imshow("CV window Edward", cv_frame);
	waitKey(2000);

    return AsyncTask::DS_done;
}


int main(int argc, char *argv[]) {

   // Open a new window framework and set the title
   framework.open_framework(argc, argv);
   framework.set_window_title("My Panda3D Window");

   GraphicsPipeSelection *ps = GraphicsPipeSelection::get_global_ptr();
   //ps->load_aux_modules();
   ps->print_pipe_types();

   PT(GraphicsPipe) pipe = ps->make_default_pipe();
   if (pipe == (GraphicsPipe*) NULL) {
	cout << "pipe not created succesfully  " <<endl;
	return 0;
   }
   cout << "Interface naam pipe:  " << pipe->get_interface_name() << endl;

  WindowProperties props;
  framework.get_default_window_props(props);

   PT(WindowFramework) window = framework.open_window(WindowProperties::size(400,420),
	GraphicsPipe::BF_require_window, 
	//GraphicsPipe::BF_refuse_window, // WILL RESULT IN A SEGMENTATION FAULT
	pipe, NULL);

   if (window == (WindowFramework*) NULL) {
     cout << "The buffer is not created succesfully  " <<endl;
     return 0;
   }

   NodePath camera = window->get_camera_group();
   camera.set_pos(0, -30, 6);

   Loader load;
   PT(PandaNode) actor = load.load_sync("/usr/share/panda3d/models/panda.egg");
   NodePath actor2node = window->get_render().attach_new_node(actor);

   PT(DirectionalLight) d_light = new DirectionalLight("my d_light");
   NodePath dlnp = window->get_render().attach_new_node(d_light);

   PT(GenericAsyncTask) task = new GenericAsyncTask("CV bufferke", &example_task, (void*) window);
   task->set_delay(1); 
   taskMgr->add(task);

   while(1) {   
      framework.do_frame(Thread::get_current_thread()); 
   }

   framework.close_framework();
   return (0);
}

The only way to create an offscreen buffer is to use a pbuffer, and pbuffers are deprecated and no longer well supported by many implementations. This is a sad reality we can’t get around. Nowadays, you have to create a window in order to be able to open a buffer (or just use the window in the first place).

It is theoretically possible to create a window and never show it, which is how offscreen rendering is done nowadays, but Panda3D doesn’t implement this. Of course, you will still need a working X11 screen to do this, so if the machine is used as a server, you might as well just open a window anyway.
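
In practice, that could look roughly like this (an untested sketch, assuming the make_texture_buffer signature as I remember it: keep the normal framework window as the host context and render into a texture buffer attached to it, with to_ram enabled so the image can be pulled into OpenCV):

// Untested sketch: host an offscreen buffer on the regular window's context.
WindowFramework *window = framework.open_window();
GraphicsOutput *host = window->get_graphics_output();

PT(Texture) tex = new Texture("offscreen-color");
// to_ram = true: copy the rendered image to system RAM every frame,
// which is what get_ram_image_as("BGR") needs on the OpenCV side.
PT(GraphicsOutput) buffer = host->make_texture_buffer("offscreen", 400, 420, tex, true);

PT(DisplayRegion) dr = buffer->make_display_region();
dr->set_camera(window->get_display_region_3d()->get_camera());   // reuse the window's camera

The task from the earlier posts could then read tex->get_ram_image_as("BGR") directly instead of taking a screenshot of the window's display region.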