Convert texture to OpenCV Mat

0. Background

Hello, everybody.
I am a newbie. I recently wanted to build a 3D simulator, and Panda3D was recommended on the forums. It's wonderful, but I ran into some problems that neither the forum discussions nor the documentation seem to cover.

My intended simulator has a god view, which is displayed in a window. In addition, it has several offscreen buffers representing the characters' individual sight lines. Think of a god with a global viewpoint from the sky and several people each seeing from their own eyes: the god view is displayed in the window, while the individual views are meant to be converted to OpenCV Mat format for further processing.

Below is the god view:

1. Details

I ran into a problem converting the individual views to OpenCV Mat in my C++ code. I referred to the sample project named render-to-texture (written in Python), but some API differences between Python and C++ confuse me: makeCamera in Python takes the buffer as a parameter, while make_camera in C++ doesn't. I noticed a similar question, but I was still unable to acquire the image data. Here is my code:

void Scene::setup_quadcopter_camera(std::string label) {
        std::string _name = generate_object_name(label); // a member function that just generates a name
        auto _buffer = window_framework->get_graphics_output()->make_texture_buffer(_name, 320, 240);
        auto _texture = _buffer->get_texture();
        _buffer->set_sort(-100);

        auto _camera = new Camera(_name);
        auto _lens = _camera->get_lens();
        _lens->set_fov(90, 60);
        _lens->set_film_size(0.032, 0.024);
        _lens->set_near(0.05);

        auto _camera_np = window_framework->get_render().attach_new_node(_camera);
        _camera_np.set_pos(0.0, 0.0, 0.1);

        auto _region = _buffer->make_display_region();
        _region->set_camera(_camera_np);
        _camera_np.reparent_to(window_framework->get_render());

        graphic_aeros.push_back(_buffer); // graphic_aeros is of type std::vector<GraphicsOutput*>
}

void Scene::get_quadcopter_fpv_image(std::string label, cv::Mat &image) {
        size_t index = get_camera_index_from_label(label);
        auto _buffer = graphic_aeros.at(index);
        auto _texture = _buffer->get_texture();
        CPTA_uchar _pic = _texture->get_ram_image_as("BGR");
        void *_ptr = (void*)_pic.p();
        image = cv::Mat(240, 320, CV_8UC3, _ptr, cv::Mat::AUTO_STEP); // cv::Mat takes (rows, cols) = (height, width)
        cv::flip(image, image, 0);
}

It compiles fine; however, the program runs and quits immediately without any errors or warnings, just exit code 17. I have no idea how to solve this. Did I misunderstand the API, or did I overlook something?

Thanks.

2. Information

  • Panda3D Version: 1.10
  • OS Version: macOS 10.14.6
  • OpenCV Version: 4.1.0 (I noticed Panda3D bundles internal 2.4.3 libraries)
  • Clang Version: 11.0.0

It might have something to do with the fact that you're not using reference-counting smart pointers where they're needed (such as PT(Camera) _camera = new Camera(_name); and NodePath _camera_np = ...). Excessive use of auto can easily obscure what types of references your variables are being stored as.
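For illustration, a minimal sketch (reusing the names from your code; just an assumption about how I would declare them) of what those declarations look like with explicit reference-counted types:

PT(GraphicsOutput) _buffer =
        window_framework->get_graphics_output()->make_texture_buffer(_name, 320, 240);
PT(Texture) _texture = _buffer->get_texture();

PT(Camera) _camera = new Camera(_name); // reference-counted, keeps the node alive
NodePath _camera_np = window_framework->get_render().attach_new_node(_camera);

// Storing reference-counted pointers in the container keeps the buffers referenced too:
// std::vector<PT(GraphicsOutput)> graphic_aeros;
graphic_aeros.push_back(_buffer);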

Does cv::Mat copy the data for the pointer that is passed in, or does it continue to refer to the data in _ptr?

Thanks for your reply.
I followed your instructions and added a graphics_engine->render_frame() call; finally it works :smiley:
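For reference, this is roughly what my working read path looks like now. It is only a sketch of my own code, and it assumes the buffer copies its texture to RAM each frame (e.g. via the to_ram flag of make_texture_buffer or an add_render_texture with RTM_copy_ram):

void Scene::get_quadcopter_fpv_image(std::string label, cv::Mat &image) {
        // Make sure a frame has actually been rendered into the offscreen buffer
        // before asking the texture for its RAM image.
        panda_framework->get_graphics_engine()->render_frame();

        size_t index = get_camera_index_from_label(label);
        auto _buffer = graphic_aeros.at(index);
        auto _texture = _buffer->get_texture();

        CPTA_uchar _pic = _texture->get_ram_image_as("BGR");
        // cv::Mat does not copy the data it wraps, so clone() to own the pixels after
        // _pic goes out of scope; rows = height (240), cols = width (320).
        image = cv::Mat(240, 320, CV_8UC3, (void *)_pic.p()).clone();
        cv::flip(image, image, 0); // Panda3D stores the RAM image bottom-up
}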

But now I have another problem, this time with acquiring a depth image. A similar topic was discussed and I followed its code, but unfortunately it didn't work for me. As you can see above, the depth image is incorrect, and some errors are printed as well:

Known pipe types:
  CocoaGraphicsPipe
(all display modules loaded.)
:display:gsg:glgsg(error): GL error 0x502 : invalid operation
:display:gsg:glgsg(error): An OpenGL error has occurred.  Set gl-check-errors #t in your PRC file to display more information.
:display:gsg:glgsg(error): GL error 0x502 : invalid operation
:display:gsg:glgsg(error): An OpenGL error has occurred.  Set gl-check-errors #t in your PRC file to display more information.

This is my code:

void Scene::setup_depth_camera() {
        WindowProperties _wp;
        panda_framework->get_default_window_props(_wp);
        _wp.set_title(_name);
        _wp.set_size(320, 240);

        FrameBufferProperties _fbp;
        _fbp.set_rgb_color(true);
        _fbp.set_alpha_bits(1);
        _fbp.set_depth_bits(1);

        auto _gpo = window_framework->get_graphics_output();

        auto _buffer_depth = panda_framework->get_graphics_engine()->make_output(panda_framework->get_default_pipe(), _name,
                -2, _fbp, _wp, GraphicsPipe::BF_refuse_window, _gpo->get_gsg(), _gpo);

        auto _texture_depth = new Texture();
        _buffer_depth->add_render_texture(_texture_depth, GraphicsOutput::RTM_copy_ram, GraphicsOutput::RTP_depth_stencil);
        _texture_depth->set_minfilter(Texture::FilterType::FT_shadow);
        _texture_depth->set_magfilter(Texture::FilterType::FT_shadow);

        auto _camera_depth = new Camera(_name + "_Depth");
        auto _lens_d = _camera_depth->get_lens();
        _lens_d->set_fov(90, 60);
        //_lens_d->set_focal_length(0.028);
        _lens_d->set_film_size(0.032, 0.024);
        _lens_d->set_near(0.05);
        _lens_d->set_far(100);

        auto _camera_depth_np = window_framework->get_render().attach_new_node(_camera_depth);
        _camera_depth_np.set_pos(0.05, 0.0, 0.1);
        _camera_depth_np.set_hpr(-90, 0, 0);

        auto _region_depth = _buffer_depth->make_display_region();
        _region_depth->set_camera(_camera_depth_np);
        _camera_depth_np.reparent_to(window_framework->get_render());

        auto shader = Shader::load(Shader::SL_GLSL, "glsl-simple.vert", "glsl-simple.frag");
        _camera_depth_np.set_shader(shader);
}


void Scene::get_depth_image(cv::Mat &image) {
        if(_texture_depth->might_have_ram_image()) {
            CPTA_uchar _pic = _texture_depth->get_ram_image();
            void *_ptr = (void *) _pic.p();
            image = cv::Mat(_texture_depth->get_y_size(), _texture_depth->get_x_size(),
                            CV_8UC(_texture_depth->get_num_components()), _ptr, cv::Mat::AUTO_STEP);
            cv::flip(image, image, 0);
        }
}

Firstly, which version of Panda3D are you using? Please make sure you are using 1.10.5, the latest.

Secondly, it may be useful to run your application through “apitrace trace” and send the resulting .trace file, so that I can see whether there are invalid OpenGL calls being made by Panda.

You can also try RTP_depth instead of RTP_depth_stencil to see if that has a different result.

Oh, I just noticed you are using FT_shadow. I recommend you take that out unless you are trying to do shadow mapping. Just in case that is interfering somehow.
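For example, a minimal sketch of the depth-texture setup with those changes applied (assuming the rest of your setup_depth_camera stays as it is):

PT(Texture) _texture_depth = new Texture();
_texture_depth->set_format(Texture::F_depth_component); // plain depth, no stencil
_buffer_depth->add_render_texture(_texture_depth,
        GraphicsOutput::RTM_copy_ram,
        GraphicsOutput::RTP_depth);
// Leave the min/mag filters at their defaults (no FT_shadow) and skip the
// _camera_depth_np.set_shader(...) call for a plain depth readback.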

Yeah, I changed to RTP_depth and disabled the shader, and the depth image output now looks correct :smiley: Thanks @rdb

But now I wonder how to interpret the values of the depth image, which comes out in CV_32FC1 format (one float per pixel). I used a ball to test the pixel value as a function of the distance between the camera and the ball. The result, plotted in Matlab, looks a bit strange: the relationship between pixel value and distance is not linear. A curve fit gives y = a*x^b + c with a = -0.1016, b = -0.9952, c = 1.01, and the parameters also seem to depend on the camera lens.

No, it is certainly not linear. You will need to feed the depth value through an equation that takes the near and far distances of the camera lens as parameters in order to get the linear depth. Just Google "linear depth"; there are plenty of resources on this.

The alternative is to write a shader that writes the linear depth values to the color buffer.
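For example, assuming a standard perspective projection and a depth sample d in [0, 1] read from the depth texture, the usual linearization looks roughly like this:

// Hedged sketch: convert a non-linear depth sample d in [0, 1] back to an
// eye-space distance, given the lens near and far distances.
float linearize_depth(float d, float near_d, float far_d) {
    float z_ndc = 2.0f * d - 1.0f;                       // [0, 1] -> NDC [-1, 1]
    return 2.0f * near_d * far_d /
           (far_d + near_d - z_ndc * (far_d - near_d));  // eye-space distance
}

With near = 0.05 and far = 100 as in your lens setup, this has roughly the y ≈ c + a/x shape that your curve fit found (b ≈ -1).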

Sorry for the delay.

Thanks to @rdb's help and guidance, I completed a simple simulator for a quadcopter. Following the suggestions in the Panda3D docs, I eventually ported the C++ code to Python. Amazingly, there is hardly any significant difference in speed between them, and Python is more convenient to debug.

The simple simulator, which was initially aimed at emulating rapid navigation in forests, contains a basic physical model of a quadcopter, simple terrain, simulated IMU, RGB-camera, depth-camera, and event-camera sensors, and interfaces with Matlab. The code of PyRealSim is on GitHub if anyone needs it.
