Allowing Torch/TensorFlow to directly access the rendered image on the GPU

Hi, rdb

Many thanks for your reply! I also checked this post: How to get GPU memory pointer from `Texture` Object - #3 by Hao-Yan. I think I roughly understand the workflow now. Since I don't want to touch any C++ at this stage, I would like to use NVIDIA's cuda-python to do the OpenGL-CUDA interop: cudart - CUDA Python 12.0.0 documentation.
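The per-frame sequence I have in mind is roughly the sketch below. It is untested and only based on my reading of the cuda.cudart docs, so the stream arguments (I just pass 0 for the default stream) may need adjusting, and gl_texture_id is a placeholder for the GL texture name I would get out of Panda3D:

from cuda import cudart

GL_TEXTURE_2D = 0x0DE1  # raw GLenum value, to avoid pulling in PyOpenGL

# one-time: register the OpenGL texture object with CUDA
err, resource = cudart.cudaGraphicsGLRegisterImage(
    gl_texture_id, GL_TEXTURE_2D,
    cudart.cudaGraphicsRegisterFlags.cudaGraphicsRegisterFlagsNone)

# per frame: map the resource and get the cudaArray backing mip level 0
err, = cudart.cudaGraphicsMapResources(1, resource, 0)
err, cuda_array = cudart.cudaGraphicsSubResourceGetMappedArray(resource, 0, 0)
# ... copy cuda_array into a Torch tensor here ...
err, = cudart.cudaGraphicsUnmapResources(1, resource, 0)

# at shutdown: release the registration
err, = cudart.cudaGraphicsUnregisterResource(resource)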

Do you think it is possible to do all of the CUDA-OpenGL steps with this library? I actually tried the registration step with the following code:

from panda3d.core import Texture, GraphicsOutput, GraphicsStateGuardianBase
from cuda.cudart import cudaGraphicsGLRegisterImage, cudaGraphicsRegisterFlags

# set up a texture that the window renders into
# (engine is the application / ShowBase object elsewhere in my program)
my_texture = Texture()
my_texture.setMinfilter(Texture.FTLinear)
my_texture.setFormat(Texture.FRgba32)
engine.win.add_render_texture(my_texture, GraphicsOutput.RTMCopyTexture)

# prepare the texture now so it has an OpenGL object, then grab its GL name
gsg = GraphicsStateGuardianBase.getDefaultGsg()
texture_context = my_texture.prepareNow(0, gsg.prepared_objects, gsg)
# texture_context = my_texture.prepare(gsg.prepared_objects)
identifier = texture_context.getNativeId()

# register the GL texture object with CUDA (the second argument is the GL target)
flag, resource = cudaGraphicsGLRegisterImage(
    identifier, 1, cudaGraphicsRegisterFlags.cudaGraphicsRegisterFlagsNone)

But the returned flag is CudaError: InvalidValue, which indicates that one of the arguments is outside the acceptable range, so the returned resource handle is invalid as well. I am fairly inexperienced with this graphics stuff, so could you share some intuition, e.g. whether this cuda-python library can work for this at all? If not, should I write a CUDA backend in C++ and provide Python bindings for my Panda3D program?
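For reference, the step I want to reach after a successful registration is copying the mapped array straight into a Torch tensor that stays on the GPU, roughly like this (again an untested sketch: I am assuming the FRgba32 texture comes out as 4 x float32 per pixel, and that cuda.cudart accepts the plain integer from tensor.data_ptr() as the destination device pointer):

import torch
from cuda import cudart

def mapped_array_to_tensor(cuda_array, width, height):
    # destination tensor on the GPU, matching an RGBA float32 texture
    out = torch.empty((height, width, 4), dtype=torch.float32, device='cuda')
    row_bytes = width * 4 * 4  # 4 channels x 4 bytes per channel
    err, = cudart.cudaMemcpy2DFromArray(
        out.data_ptr(), row_bytes,   # dst pointer, dst pitch in bytes
        cuda_array, 0, 0,            # src cudaArray and (x, y) offset
        row_bytes, height,           # copy width in bytes, height in rows
        cudart.cudaMemcpyKind.cudaMemcpyDeviceToDevice)
    return out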