Accessing OpenGL structures to mix rendering with DirectX

Hi guys,

The following question may sound weird… but please have a look

My Panda3D app, written in C++, uses the wglGraphicsPipe and quite a lot of Cg shaders.

For some (good) reason, I’m now importing DirectX 9 surfaces and would like
to write onto them with the OpenGL layer underlying Panda3D.

In order to do so, I’d need to get the OpenGL resource references (GLuint names, FBOs, …) of offscreen
buffers obtained as a result of post-processing camera views through a Cg mixing shader.

The aim is to merge these buffers with a DirectX 9 surface, and have DirectX
blit the whole thing into the display back buffer of the main window.

So the question is: how do I get descriptors of the OpenGL surfaces created and managed through the
Panda3D interface?

Thanks for your hints

Eww… really?

You’ll have to get at some private data structures within Panda for this. Since you’re working in C++, that’s possible, at least, but it may require a recompile of Panda to expose the necessary data members.

The bottom line is, you have to downcast your GraphicsBuffer pointer to its proper specific type, and then access the private data members stored there (for instance, by adding a public method to retrieve them to the appropriate class).

For instance, if your GraphicsBuffer is actually a wglGraphicsBuffer, then you need to downcast your GraphicsBuffer pointer to a wglGraphicsBuffer type, and then access the _pbuffer member, or whatever it is that you will find useful. On the other hand, if your GraphicsBuffer is a glGraphicsBuffer, then you need to downcast it to glGraphicsBuffer, and access the _fbo member.

Whether you have a wglGraphicsBuffer or a glGraphicsBuffer or something else altogether depends largely on your graphics drivers. Panda will attempt to select the interface that works with whatever the driver can provide. So it is essential that you check the actual type of your GraphicsBuffer (for instance, with is_of_type()) before downcasting it. If you don’t want to handle all of the possible buffer types, you need to have a sane fallback in case the buffer is of an unexpected type (like ParasiteBuffer or something else unhelpful).


Thanks David,

Since I’m not at all familiar with the intricacies of OpenGL/DirectX/Panda3D, here is a sketch (stuttering…) of what I’ve understood:

(1) in glGraphicsBuffer_src.h, make public:

GLuint _fbo;
PT(Texture) _tex[RTP_COUNT];

(2) then the code could look like

#include "glgsg.h"
#include "glGraphicsBuffer_src.h"

// assuming directx already on with d3ddev ptr
if( NULL == (d3d = Direct3DCreate9(D3D_SDK_VERSION))) exit(0);	// create the Direct3D interface
// panda actions:
// set up the texture buffer (ABuffer) that will be written as a result of CameraA seeing the scene
PT(GraphicsBuffer) ABuffer = 
 DCAST(GraphicsBuffer,window->get_graphics_output()->make_texture_buffer("ABuffer", wtexture, htexture)); 

PT(Texture) ABuffertexture = ABuffer->get_texture();
camA = new Camera("camA");
camA_NP = camera.attach_new_node(camA);

PT(DisplayRegion) ADr = ABuffer->make_display_region(0,1,0,1);

// will do the same for additional cameras (B,C,D)

// If the GraphicsBuffer is a GLGraphicsBuffer, get the GL descriptors
if (ABuffer->get_type().get_name() == "GLGraphicsBuffer") {
	PT(GLGraphicsBuffer) gl_ABuffer = DCAST(GLGraphicsBuffer,ABuffer);

	// Register directx device for interop with opengl
	gl_handleD3D = wglDXOpenDeviceNV(d3ddev); // interop device handle

	// Create a D3D surface (gImage)
	d3ddev->CreateOffscreenPlainSurface(gImageWidth, gImageHeight,
		D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,	// Surface is in video memory
		&gImage, NULL);

	// Register this DirectX buffer as GL texture object
	GLuint gl_name_image;
	glGenTextures(1, &gl_name_image);		// reserve a name for a texture

	// prepare this DirectX object for use by GL
	HANDLE handle_image = wglDXRegisterObjectNV(gl_handleD3D, gImage,
		gl_name_image, GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

	// Lock the render target for GL access
	wglDXLockObjectsNV(gl_handleD3D, 1, &handle_image);

	// Now transfer the OpenGL ABuffertexture into part of the DirectX gImage
	glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, gl_ABuffer->_fbo);
	glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, gl_name_image);
	glBlitFramebufferEXT(0, 0, 1024, 768, 80, 80, 1024-80, 768-80, GL_COLOR_BUFFER_BIT, GL_NEAREST);

	// Do the same for cameras B,C & D

	// Unlock gImage from openGL 
	wglDXUnlockObjectsNV(gl_handleD3D, 1, &handle_image);

	// Do some additional processing in directx on gImage

	// Get the Backbuffer then Stretch the Surface on it.
	RECT srcRect= { 0, 0, gImageWidth/2, gImageHeight/2};
	RECT dstRect= { 0, 0, gImageWidth,   gImageHeight};
	d3ddev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &gBackBuf);
	d3ddev->StretchRect(gImage, &srcRect, gBackBuf, &dstRect, D3DTEXF_NONE); 

	// then Direct3D presents the results on the screen
	d3ddev->Present(NULL, NULL, NULL, NULL);	// displays the created frame
}

I’m far from sure that this is the correct approach, and obviously this rough code is unlikely to work as such :cry:

How can I make it work? :blush:

It sounds like you have the right concepts.

I followed the exploratory path; after a few hours in the trenches… I can gladly report: no progress at all!

So, I’m running on Windows 7/64 with an NVidia GeForce GT series card.
ABuffer->get_type().get_name() reports “GLGraphicsBuffer” (BTW: shouldn’t it be reporting “wglGraphicsBuffer”?)


(1) Including:

#include "glgsg.h"
#include "glGraphicsBuffer_src.h"


1>C:\Panda3D-1.7.1\include\glGraphicsStateGuardian_src.h(636): error: identifier "PFNGLDRAWARRAYSINSTANCEDPROC" is undefined
1>    ^
1>C:\Panda3D-1.7.1\include\glGraphicsStateGuardian_src.h(637): error: identifier "PFNGLDRAWELEMENTSINSTANCEDPROC" is undefined

1>C:\Panda3D-1.7.1\include\glGraphicsBuffer_src.h(61): error: invalid redeclaration of type name "GLGraphicsBuffer" (declared at line 61)
1>  class EXPCL_GL CLP(GraphicsBuffer) : public GraphicsBuffer {

(2) On the front of using the NVidia OpenGL extensions for DirectX/OpenGL interop:


"wglDXOpenDeviceNVX", "wglDXCloseDeviceNVX", "wglDXRegisterObjectNVX", "wglDXUnregisterObjectNVX"
"wglDXLockObjectsNVX", "wglDXUnlockObjectsNVX", "wglDXObjectAccessNVX"

everything is fine up to calling wglDXRegisterObjectNVX, which crashes miserably…

Querying the driver through wglGetProcAddress, I notice there are also entry points with slightly different names (no trailing X).

Have you guys ever used these openGL extensions?

No, as I said, your buffer type is not necessarily a wglGraphicsBuffer; it might be a GLGraphicsBuffer or ParasiteBuffer or something else. Each of these buffer types uses a different interface to get an offscreen buffer, and thus stores a different kind of handle to it internally. Since you have a GLGraphicsBuffer, it means you have to downcast it to that buffer type (and not wglGraphicsBuffer) to extract the data that is specific to that class. Fortunately, that’s what you appear to be doing.

I haven’t used them myself; they sound like trouble to me. :wink: But there are many, many OpenGL extensions. I’m guessing that the trailing X adds a new revision to the original extension. Are you sure that your graphics driver actually provides the extensions that you’re trying to call? If you call an extension that isn’t provided, you will certainly crash miserably.


Hi David,

Well, I actually interrogate the graphics driver with the following handy little procedure, and all the wglDX…NVX entry points are reported present (the same, BTW, for the wglDX…NV versions).

bool check_DX_GL_interop() {
	char wgldx[7][80] = {"wglDXOpenDeviceNVX", "wglDXCloseDeviceNVX", "wglDXRegisterObjectNVX",
		"wglDXUnregisterObjectNVX", "wglDXLockObjectsNVX", "wglDXUnlockObjectsNVX",
		"wglDXObjectAccessNVX"};

	bool all_found = true;
	for (int i = 0; i < 7; i++) {
		void *pproc = (void *) wglGetProcAddress(wgldx[i]);
		if (pproc == NULL) {
			printf("%s isn't there\n", wgldx[i]);
			all_found = false;
		} else {
			printf("%s is found\n", wgldx[i]);
		}
	}
	return all_found;
}

<.> On the front of OpenGL/DX interop: “wglDXRegisterObjectNVX” is no longer complaining, since I realized that the expected DirectX object was a Texture9 and not a Surface9… so at least on this one some progress was made…

<.> On my question (1), related to:

1>C:\Panda3D-1.7.1\include\glGraphicsBuffer_src.h(61): error: invalid redeclaration of type name "GLGraphicsBuffer"

what would you suggest ?

You shouldn’t be including glGraphicsBuffer_src.h directly. Including glgsg.h should be sufficient.

In general, within the Panda source code, if “_src” appears in a filename, it means that the file is meant to be included in a very specific context with certain #defines made. In this case, glGraphicsBuffer_src.h is included indirectly by glgsg.h after it has defined the appropriate context.

I’m not certain that this is the cause of your problem, but it seems a likely guess. If fixing this doesn’t solve your problem, some more detective work is in order. The typedef PFNGLDRAWARRAYSINSTANCEDPROC in particular is defined (in my version of the code) at line 153 of glGraphicsStateGuardian_src.h, so if you don’t have that typedef by line 636 of the same file, something is certainly wrong somewhere.


Well, the reason why PFNGLDRAWARRAYSINSTANCEDPROC appears as undefined is that, within glGraphicsStateGuardian_src.h, the block

at line 50:
#ifndef __EDG__  // Protect the following from the Tau instrumentor.


at line 151:
#ifndef OPENGLES
typedef void (APIENTRYP PFNGLDRAWARRAYSINSTANCEDPROC) (GLenum mode, GLint first, GLsizei count, ...
typedef void (APIENTRYP PFNGLDRAWELEMENTSINSTANCEDPROC) (GLenum mode, GLsizei count, GLenum type, ...
#endif  // OPENGLES
#endif  // __EDG__

is skipped. Evidently __EDG__ is defined somewhere else…

so, a quick and ugly fix for the time being:

#undef __EDG__
#include "glgsg.h"
#define __EDG__

BTW. Don’t know what the Tau instrumentor may be :slight_smile:

(2) Having done so, I’m getting some annoyance from:

1>jctut.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static class TypeHandle __cdecl GLGraphicsBuffer::get_class_type(void)" (__imp_?get_class_type@GLGraphicsBuffer@@SA?AVTypeHandle@@XZ) referenced in function "class TypeHandle __cdecl _get_type_handle<class GLGraphicsBuffer>(class GLGraphicsBuffer const *)" (??$_get_type_handle@VGLGraphicsBuffer@@@@YA?AVTypeHandle@@PBVGLGraphicsBuffer@@@Z)
1>jctut.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static void __cdecl GLGraphicsBuffer::init_type(void)" (__imp_?init_type@GLGraphicsBuffer@@SAXXZ) referenced in function "void __cdecl _do_init_type<class GLGraphicsBuffer>(class GLGraphicsBuffer const *)" (??$_do_init_type@VGLGraphicsBuffer@@@@YAXPBVGLGraphicsBuffer@@@Z)

The related definitions appear as public in glGraphicsBuffer_src.h…

static TypeHandle get_class_type() {
	return _type_handle;
}
static void init_type() {
	register_type(_type_handle, CLASSPREFIX_QUOTED "GraphicsBuffer",

Is there some issue with public static?

Are you linking with libpandagl.lib?

ooops… I wasn’t !

Hi, I’m back to the subject, so :

DirectX & OpenGL are coexisting: DirectX textures are seen by OpenGL

I can generate an FBO in OpenGL, attach the DirectX textures, and
glBlitFramebufferEXT one onto the other

now, the point is that I’d like to access some textures previously built
under Panda and blit part of them into one of my DirectX structures

texture A comes from:

PT(GraphicsBuffer) ABuffer =
 DCAST(GraphicsBuffer,window->get_graphics_output()->make_texture_buffer("ABuffer", wtexture, htexture));

PT(Texture) ABuffertexture = ABuffer->get_texture();

if (ABuffer->get_type().get_name() == "GLGraphicsBuffer") {
   PT(GLGraphicsBuffer) gl_ABuffer = DCAST(GLGraphicsBuffer, ABuffer);
   fboA = gl_ABuffer->_fbo; // I've made this field accessible
}

I do the same for textures B,C,…

What I basically need to get performed is :

// assign read/write targets
glReadBuffer(GL_COLOR_ATTACHMENT1_EXT); // ie A,B,C,.. textures
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT); // ie my DirectX texture to be overwritten

glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fb_panda); // fboA,...
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fb);

glBlitFramebufferEXT(0, 0, 1024, 768, 100, 100, 1024-100, 768-100, GL_COLOR_BUFFER_BIT, GL_NEAREST); 

either using 2 different fbos, or a new one with all textures attached

Here are the questions :

(1) fboA = gl_ABuffer->_fbo, fboB, … appear to be 0.
Are FBOs reallocated, or can I rely on them after the initial set-up?

(2) As several textures are supposed to be bound to the same fbo, how can I find out who is who?

(3) The key point: how can I get the OpenGL ids of the textures A, B, C?

Thanks again for your support.

Hmm, the _fbo value won’t be filled in immediately after you call make_texture_buffer(), because the actual buffer hasn’t been created yet (only a placeholder in the Panda space).

Try calling GraphicsEngine::open_windows() after you call make_texture_buffer() and before you query _fbo, to ensure that the buffers are fully created.
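A minimal sketch of that sequence (assuming `framework` and `window` from the usual PandaFramework setup, and `_fbo` exposed as discussed above; check the calls against your Panda version):

```cpp
// Sketch only: make sure the buffer really exists before querying _fbo.
PT(GraphicsBuffer) ABuffer = DCAST(GraphicsBuffer,
    window->get_graphics_output()->make_texture_buffer("ABuffer", wtexture, htexture));

// open_windows() processes the pending window/buffer requests...
framework.get_graphics_engine()->open_windows();

// ...so only now is it meaningful to look at the GL handle.
if (ABuffer->is_of_type(GLGraphicsBuffer::get_class_type())) {
  GLuint fboA = DCAST(GLGraphicsBuffer, ABuffer)->_fbo;
}
```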




_fbo still reported at 0

int nb_windows_or_buffers = framework.get_graphics_engine()->get_num_windows();

reports 4: does this include the main window + the A, B, C textures?

Hmm, what happens if you also call GraphicsEngine::render_frame()? Maybe the FBOs don’t actually get created until the first frame is rendered to them.


OK, good, one additional step made:

int nb_windows_or_buffers = framework.get_graphics_engine()->get_num_windows(); 

nb_windows_or_buffers gives 4
fboA now reported as 2
fboB now reported as 3

Should I assume that there is one FBO per texture (attached as GL_COLOR_ATTACHMENT0_EXT)?
Then to perform my blit, is the following ok?

glBindFramebufferEXT(READ_FRAMEBUFFER_EXT, fb_panda); // fboA,...
glBindFramebufferEXT(DRAW_FRAMEBUFFER_EXT, my_fb);
glBlitFramebufferEXT(0, 0, 1024, 768, 100, 100, 1024-100, 768-100, GL_COLOR_BUFFER_BIT, GL_NEAREST); 

Or wouldn’t it be preferable to deal directly with the texture id?

I think you’d rather deal with the fbo id, as you show above. But you’re operating outside of my experience now. :slight_smile:


For whatever reason, with the fbo approach the end result of the blit produces only a grey area :cry:
Need to understand what’s going on…

WRT the possible second approach (texture-id based), assume I have generated a texture this way:
PT(Texture) MyTexture = TexturePool::load_texture("tex.bmp");

How would I get its associated openGL id?

Call texture->prepare_now(), passing the GSG in as both pointers. This will return a TextureContext, which you will need to downcast to a GLTextureContext, which has an _index member representing the OpenGL id of the texture.
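A hedged sketch of that lookup (my reading of the Panda 1.7 headers: `prepare_now()` takes a PreparedGraphicsObjects pointer plus the GSG, obtainable via `get_prepared_objects()`; verify against your version, and note `_index` needs the same kind of public exposure as `_fbo` did):

```cpp
// Sketch only: recover the OpenGL texture name behind a Panda Texture.
GraphicsStateGuardian *gsg = window->get_graphics_output()->get_gsg();

// prepare_now() uploads the texture immediately and returns its context.
TextureContext *tc = MyTexture->prepare_now(gsg->get_prepared_objects(), gsg);

if (tc != NULL && tc->is_of_type(GLTextureContext::get_class_type())) {
  GLuint gl_tex_id = DCAST(GLTextureContext, tc)->_index;  // the OpenGL texture id
}
```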


Thanks David,
I’ll have a look and try to make some progress on this tomorrow.