OpenCV VideoCapture to Panda3D Texture?

I want to read a frame from an OpenCV camera, which is accessed by doing

camera = cv2.VideoCapture(0)
success_state, image = camera.read()

Where “image” is a numpy ndarray object. I suppose I could convert this array into a Panda PNMImage and generate a texture from that, but I don’t think that approach is efficient and fast enough for realtime use.
Is there a better approach here?

Set up the texture with the exact format (dimensions, number of channels, etc.) using tex.setup_2d_texture(...), then call tex.set_ram_image(image) with your numpy array after receiving every frame.

Okay, the API doesn’t seem to provide any info about the formats: are they strings (eg. “BGRA”, “RGB”, …)? Exactly which values do the args of setup_2d_texture() expect?

`setup2dTexture(x_size: int, y_size: int, component_type: ComponentType, format: Format) → None`

I can’t find anything about “component_type” and “format”.

The enumerations are documented at the bottom of the page. I suppose that it should be linking them properly, not sure why it doesn’t. You probably want to use Texture.T_unsigned_byte for the component type and Texture.F_rgba for the format, if you have a 4-component 8-bpc image.

All 3-component and 4-component texture formats in Panda are in BGR or BGRA ordering. If you want to reorder the channels, you can use tex.set_ram_image_as(image, "RGBA").

Ah, my apologies, the page was rather long, so I didn’t expect an unlinked list would be at the very bottom.
I’ll just check now whether the numpy array is accepted directly by tex.set_ram_image() or whether it needs to be converted first.
After I get this working, I’ll post a code snippet for reference for others in the future.

The below code snippet seems to work, but the performance is dreadful: 8 fps with a GTX 1070 and an i7 CPU. Maybe I’m doing something extremely inefficient here.

import cv2
from panda3d.core import *
load_prc_file_data("", "show-frame-rate-meter #t")
load_prc_file_data("", "sync-video #f")
from direct.showbase.ShowBase import ShowBase
base = ShowBase()

frame = loader.load_model("smiley")

cv_camera = cv2.VideoCapture(0 + cv2.CAP_DSHOW)
camera_x = 640
camera_y = 480
cv_camera_frame_texture = Texture()
cv_camera_frame_texture.setup_2d_texture(camera_x, camera_y, Texture.T_unsigned_byte, Texture.F_rgb8)

def update_usb_camera_frame(task):
    success_state, image = cv_camera.read()
    if success_state:
        cv_camera_frame_texture.set_ram_image(image)
    frame.set_texture(cv_camera_frame_texture, 1)
    return task.again
base.task_mgr.do_method_later(60/1000, update_usb_camera_frame, "update_usb_camera_frame")

You’re running your task once every 60 milliseconds, which is about 16.7 fps (do_method_later takes its delay in seconds). Did you mean to use 1/60?

For the record, are you aware of the WebcamVideo class in Panda? This allows you to get frames from an attached camera straight into a MovieTexture, without needing to interface with OpenCV yourself.

The issue is that with USB cameras, I don’t think two separate processes can access the camera at the same time. My OpenCV code already needs the same camera for machine vision tasks, so I don’t think I can access it simultaneously with Panda; instead I need to use the frame data already available to me from my OpenCV code. Correct me if I’m wrong.

And my issue is not the fps of the texture but the whole Panda window dropping below 8 fps.

Have you timed how long read() takes? Depending on your camera/setup, it might take the whole exposure duration for that frame and delay everything. You could try continuously writing frames into an io.BytesIO() object or similar in a separate thread, and just grab the latest one when your task needs it.
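The “reader thread keeps the latest frame” idea could be sketched like this. Everything here is a stand-in: fake_read() plays the role of the blocking cv_camera.read() call, and the latest-frame slot is just a dict; the point is that the main thread never waits on the camera.

```python
import threading
import time

# Stand-in for cv2.VideoCapture.read(); swap in your real capture object.
def fake_read():
    time.sleep(0.01)  # pretend the camera blocks for the exposure time
    return True, object()  # (success_state, frame)

latest = {"frame": None}
stop = threading.Event()

def reader():
    # Continuously pull frames; only the newest one is kept.
    while not stop.is_set():
        ok, frame = fake_read()
        if ok:
            latest["frame"] = frame

t = threading.Thread(target=reader, daemon=True)
t.start()

# The Panda task just grabs whatever frame is newest, never blocking.
time.sleep(0.05)
newest = latest["frame"]
stop.set()
t.join()
```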

Does read() block waiting for the next frame? I’m no expert, but based on a cursory look at the OpenCV API reference, maybe you instead want to call grab() and if there is no new frame, return right away, before calling retrieve()?
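The grab()/retrieve() split could look something like this sketch. FakeCapture is a hypothetical stand-in mirroring cv2.VideoCapture’s two-step API, and the "again" return value stands in for task.again; the point is that the task bails out immediately when no new frame is ready.

```python
# Hypothetical stand-in mirroring cv2.VideoCapture's grab()/retrieve()
# split; replace FakeCapture with your real capture object.
class FakeCapture:
    def __init__(self):
        self.frames = ["frame0", "frame1"]
        self.pending = None

    def grab(self):
        # Cheap check/advance; returns False when no new frame is ready.
        if self.frames:
            self.pending = self.frames.pop(0)
            return True
        return False

    def retrieve(self):
        # The (potentially) expensive decode happens only after grab().
        return self.pending is not None, self.pending

def update_task(cap, upload):
    # Return right away when there is no new frame, so the task never
    # stalls the rest of the Panda frame.
    if not cap.grab():
        return "again"  # stand-in for task.again
    ok, image = cap.retrieve()
    if ok:
        upload(image)  # e.g. tex.set_ram_image(image)
    return "again"

cap = FakeCapture()
uploaded = []
update_task(cap, uploaded.append)
update_task(cap, uploaded.append)
update_task(cap, uploaded.append)  # no new frame; nothing uploaded
```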

Otherwise, @Made_Whatnow’s suggestion is a good one: do the interaction with VideoCapture in a thread. Actually, you could just create a threaded task chain and move your task into it, since set_ram_image() should be thread-safe.

You don’t need to call frame.set_texture(cv_camera_frame_texture, 1) every frame, that’s unnecessarily inefficient (though not enough to be the cause of your slowdown). You can just call it once.

I can verify that by lowering the exposure on the USB camera by shining a small flashlight at it (the camera seems to have auto exposure on), the FPS jumps from ~8 to a whopping ~2800, so the issue is likely what you described and there’s nothing slow about these Panda3D methods.

I’ll update when I get threading working properly.