How to get GPU memory pointer from `Texture` Object

Recently, I found a performance issue when using `GraphicsOutput.RTMCopyRam` with 1080p images. I think it is caused by the bandwidth needed to move the data from the GPU to system RAM.
Actually, in our project we need to encode the rendered frame with H.264 and then stream it elsewhere. Since NVIDIA GPUs also support hardware encoding, I am thinking that if I use `GraphicsOutput.RTMBindOrCopy`, the rendered frame should stay on the GPU and the GPU encoder should be able to use it directly. The question is: can I get the memory pointer of the GPU buffer from the `Texture` object of Panda3D?

I don’t believe OpenGL provides a (standard) way to get the raw GPU memory pointer for a texture.

However, you can get the OpenGL texture handle from the `TextureContext` object that is asynchronously returned from a `tex.prepare()` call, via `tc.getNativeId()`.
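To illustrate the asynchronous route, here is a minimal sketch (untested; the window setup and variable names are my assumptions, not from this thread):

```python
# Sketch: obtain the OpenGL texture handle once the texture is prepared.
# Assumes a running ShowBase app with a default GSG.
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Texture, GraphicsStateGuardianBase

app = ShowBase()
tex = Texture("example")
tex.setup2dTexture(1920, 1080, Texture.TUnsignedByte, Texture.FRgba8)

gsg = GraphicsStateGuardianBase.getDefaultGsg()

# prepare() schedules the upload and returns a future that resolves to
# the TextureContext once the GSG has created the GL texture object.
future = tex.prepare(gsg.prepared_objects)

def on_prepared(fut):
    tc = fut.result()          # the TextureContext
    handle = tc.getNativeId()  # on OpenGL, the GLuint texture name
    print("GL texture handle:", handle)

future.addDoneCallback(on_prepared)
app.run()
```

As I understand it, `prepareNow()` is the synchronous variant: it creates the GL object immediately, but it has to run on the draw thread while the GSG's context is current.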

Thanks @rdb. I found some useful information here: OpenGL Interoperability with CUDA | 3D Game Engine Programming, for how to get the device memory pointer of an OpenGL texture. I am trying to test it end to end, but I am stuck on how to call `tex.prepare()`: what should I pass in for the `prepared_objects` parameter? Do you have sample code for this? Also, can I use `prepareNow()` for my case?

I am testing the code below and get an invalid resource handle error when calling `cudaGraphicsGLRegisterImage(&cgr, handleId, GL_TEXTURE_2D, cudaGraphicsMapFlagsReadOnly)` in C++.

        self.screen_texture = Texture()
        self.screen_texture.setFormat(Texture.FRgba32)
        print(f"Format is {self.screen_texture.getFormat()}")
        # self.win.addRenderTexture(self.screen_texture, GraphicsOutput.RTMCopyRam)
        self.win.addRenderTexture(self.screen_texture, GraphicsOutput.RTMBindOrCopy)

        self.gsg = GraphicsStateGuardianBase.getDefaultGsg()
        texture_context = self.screen_texture.prepareNow(0, self.gsg.prepared_objects, self.gsg)

If I create the texture myself with the code below and use it, all the APIs work fine.

    glGenTextures(1, &mTestViewGLTexture);
    glBindTexture(GL_TEXTURE_2D, mTestViewGLTexture);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1920, 1080, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindTexture(GL_TEXTURE_2D, 0);

So is the return value from `tc.getNativeId()` an OpenGL texture handle, or is it something else?

Ah, I needed to use `GraphicsOutput.RTMCopyTexture` instead of `GraphicsOutput.RTMBindOrCopy`; now it is working.
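For reference, a sketch of the mode that worked here (untested; the receiver `app.win` and the surrounding setup are my assumptions, since the original snippet was truncated):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Texture, GraphicsOutput

app = ShowBase()
screen_texture = Texture()
# RTMCopyTexture performs a GPU-side copy from the render buffer into the
# texture's own GL texture object every frame, so the handle returned by
# getNativeId() refers to a stable GL texture that CUDA can register.
app.win.addRenderTexture(screen_texture, GraphicsOutput.RTMCopyTexture)
app.run()
```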

From the doc, `GraphicsOutput.RTMCopyTexture` means it will copy from the buffer every frame. Even though the copy happens on the GPU, I still want to avoid it. I am trying to register the buffer directly; can I get the handle of the offscreen render buffer? @rdb