Is there a more efficient way to get an image from the texture buffer? (For AI training)

Hi guys, I am training an autonomous vehicle with reinforcement learning in an environment built with Panda3D. Training succeeded with high sample efficiency when I used LiDAR point clouds as input, and now I want to use camera images as input. However, getting the image from the buffer is inefficient: it takes about 0.04 s to obtain an image, convert it to a numpy array, feed the array to the autonomous vehicle, and finish one step, while millions of experiences (steps) are required for an agent to learn a driving policy. So I'd like to know whether there is a more efficient way to get the image from the buffer.
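
To give an idea of the capture path I mean, here is a minimal sketch (assuming an offscreen buffer created with makeTextureBuffer that copies its texture to RAM each frame; the sizes and names are just placeholders, not my exact code):

```python
import numpy as np
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Texture

base = ShowBase(windowType="offscreen")

# Offscreen buffer whose texture is copied to RAM every frame (to_ram=True).
tex = Texture()
buf = base.win.makeTextureBuffer("camera_buf", 84, 84, tex, to_ram=True)
cam = base.makeCamera(buf)

def grab_frame():
    # Render one frame, then read the RAM copy of the texture.
    base.graphicsEngine.renderFrame()
    data = tex.getRamImageAs("RGB")           # raw bytes, bottom-up row order
    img = np.frombuffer(data, dtype=np.uint8)
    img = img.reshape((tex.getYSize(), tex.getXSize(), 3))
    return np.flipud(img)                     # flip to top-down orientation

frame = grab_frame()
print(frame.shape)  # (84, 84, 3)
```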

Are you concerned with latency or throughput?

The main problem is that getting data back from the GPU is a high-latency operation, since the graphics pipeline is heavily pipelined and designed primarily for one-way traffic. It is possible to do asynchronous downloads from the GPU and continue rendering the next frame while processing the previous frame once it becomes available, but that won't help to reduce latency.
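
One way to approximate that overlap is the triggered copy-to-RAM mode, where you request the copy in one frame and consume the image in the next, so the agent always works on a one-frame-old image while the next frame renders. A rough sketch (it assumes the offscreen buffer `buf` from above, and it does not remove the cost of the copy itself):

```python
import numpy as np
from panda3d.core import Texture, GraphicsOutput

# Copy to RAM only when explicitly triggered, instead of every frame.
tex = Texture()
buf.addRenderTexture(tex, GraphicsOutput.RTMTriggeredCopyRam)

def step(task):
    # Consume the frame that was copied to RAM during the previous render.
    if tex.hasRamImage():
        img = np.frombuffer(tex.getRamImageAs("RGB"), dtype=np.uint8)
        img = img.reshape((tex.getYSize(), tex.getXSize(), 3))[::-1]
        # ... feed `img` to the agent here ...
    # Request another copy; it will be performed when the next frame renders.
    buf.triggerCopy()
    return task.cont

base.taskMgr.add(step, "readback")
```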

You could also consider, if your use case allows it, doing further processing of the images on the GPU (e.g. via a compute shader) before transferring the results back to CPU memory.
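
For example, a compute shader could downsample the full-resolution camera texture to the network's input resolution on the GPU, so only the small texture has to cross the bus. A minimal sketch (the shader source and the `camera_tex` name are illustrative, not a drop-in solution):

```python
from panda3d.core import ComputeNode, Shader, Texture

# GLSL compute shader: downsample a full-resolution camera texture
# into a small output image (here 84x84) entirely on the GPU.
COMPUTE_SRC = """
#version 430
layout (local_size_x = 16, local_size_y = 16) in;
uniform sampler2D fromTex;
layout (rgba8) writeonly uniform image2D toTex;
void main() {
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv = (vec2(coord) + 0.5) / vec2(imageSize(toTex));
    imageStore(toTex, coord, texture(fromTex, uv));
}
"""

small_tex = Texture("small")
small_tex.setup_2d_texture(84, 84, Texture.T_unsigned_byte, Texture.F_rgba8)

node = ComputeNode("downsample")
node.add_dispatch(84 // 16 + 1, 84 // 16 + 1, 1)  # enough 16x16 workgroups

path = base.render.attach_new_node(node)
path.set_shader(Shader.make_compute(Shader.SL_GLSL, COMPUTE_SRC))
path.set_shader_input("fromTex", camera_tex)  # full-resolution render target
path.set_shader_input("toTex", small_tex)     # only this texture gets read back
```

The small texture can then be copied to RAM with the same copy-to-RAM mechanism as above, so only 84×84×4 bytes are transferred per step instead of the full frame.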

Thank you rdb! Asynchronous downloads don't work for my simulation, since in the next frame the vehicle has to move to a new position based on the image captured in the previous frame. The time spent downloading from the GPU, i.e. the latency, is the main cause of the low frame rate. Maybe I should try other algorithms with higher data utilization.