DirectShow Capture to Texture Issues

Hello there,

I’m trying to do something simple in concept:

  • capture a live input video feed from a BlackMagic Intensity capture card, and set it as a texture
  • apply this texture to a model

I can “sort of” get this to work. Here’s the dilemma…

My goal capture resolution/framerate (for now) is 1280x720 @ 60Hz. If I open up GraphStudio and pipe the Blackmagic Capture to a Renderer, it renders just fine at 60Hz (see http://i53.tinypic.com/15ejl2p.png).

If I try to capture to a 2D card in Panda3D at 60Hz, I get nothing but black. If I change the setting down to 50Hz, I get a gray, warped, distorted, partial image (see http://i51.tinypic.com/8xo83l.png).

And (just for grins) if I try to set GraphStudio to 50Hz, I get nothing but black.

Here is the code I’m using (FYI, I’m using WebcamVideo option 3 because it’s the one that reports 1280x720; the loop I used to list the options is just below the code)…

import sys 

from pandac.PandaModules import * 
from direct.directbase import DirectStart 

# manually select the video mode 
option = WebcamVideo.getOption(3) 
cursor = option.open() 
videoTexture = Texture('movie') 
cursor.setupTexture(videoTexture) 
# compute the texture-coordinate scale, the same way OpenCVTexture does internally 
videoTextureScale = Vec2(option.getSizeX()/float(videoTexture.getXSize()), option.getSizeY()/float(videoTexture.getYSize())) 

# under Windows the webcam texture must be updated manually every frame 
def updateVideo(task): 
  if cursor.ready(): 
    cursor.fetchIntoTexture(0, videoTexture, 0) 
  return task.cont 
taskMgr.add(updateVideo, 'updateVideo') 

# generate a card to show the texture on 
cardMaker = CardMaker('cardMaker') 
# define the card size 
#cardMaker.setFrame(-videoTextureScale[1]/2.,videoTextureScale[1]/2.,-videoTextureScale[0]/2.,videoTextureScale[0]/2.) 
cardMaker.setFrame(-videoTextureScale[1],videoTextureScale[1],-videoTextureScale[0],videoTextureScale[0]) 
# set the uv coordinates of the card, y-scale must be reversed 
cardMaker.setUvRange(Point2(videoTextureScale[0],0), Point2(0,videoTextureScale[1])) 

# create a card and attach to render 
card = render.attachNewNode(cardMaker.generate()) 
card.setTexture(videoTexture) 
card.setTwoSided(True) 

# rotate it 
#card.hprInterval(10,Point3(360,0,0)).loop() 

base.disableMouse() 
base.camera.setPos(0,-3,0) 

run() 

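For reference, listing the modes DirectShow exposes (which is how I found the 1280x720 one at index 3) looks roughly like this:

from pandac.PandaModules import WebcamVideo 

# print every capture mode DirectShow exposes so the right index can be picked 
for i in range(WebcamVideo.getNumOptions()): 
  o = WebcamVideo.getOption(i) 
  print('option %d: %dx%d @ %s fps' % (i, o.getSizeX(), o.getSizeY(), o.getFps())) 
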
And here is the output that I get (seems fine, right?)…

DirectStart: Starting the game.
Known pipe types:
  wglGraphicsPipe
(all display modules loaded.)
Added device: DirectShow: 01DD3BFC:2
Added device: DirectShow: 01DD3C84:2
Added device: DirectShow: 01DD3D24:2
Added device: DirectShow: 01DD3DAC:2
  IID_IGraphBuilder & IID_ICaptureGraphBuilder2 are established.
  IID_IMediaControl interface is acquired.
  The capture filter is acquired.
  The capture filter has been added to the graph.
  IID_IBaseFilter of CLSID_SampleGrabber is acquired.
  The media type of the sample grabber is set 24-bit RGB.
  The sample grabber has been added to the graph.
  IID_IBaseFilter of CLSID_NullRenderer is acquired.
  The Null Renderer has been added to the graph.
Connected media type 1280 x 720

Any thoughts?

Also worth noting…

I found that if I use WebcamVideo option 0 (DirectShow @ 720x576) and set the display resolution of the system being captured to 720x576 @ 60Hz, it does appear to render fine in Panda3D…

BUT, there is heavy interlacing visible in any lateral movement (and it’s far too low-res for what I need).

Still, I thought it was worth mentioning. And I’m curious… is there a way to “deinterlace” a texture in Panda3D?
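
The only idea I’ve had so far (completely untested; “texelheight” is just a name I made up) is to put a simple blend-deinterlace shader on the card, averaging each scanline with the one below it, along these lines:

deinterlaceShader = Shader.make(""" 
//Cg 
void vshader(float4 vtx_position : POSITION, 
             float2 vtx_texcoord0 : TEXCOORD0, 
             uniform float4x4 mat_modelproj, 
             out float4 l_position : POSITION, 
             out float2 l_texcoord0 : TEXCOORD0) 
{ 
    l_position = mul(mat_modelproj, vtx_position); 
    l_texcoord0 = vtx_texcoord0; 
} 

void fshader(float2 l_texcoord0 : TEXCOORD0, 
             uniform sampler2D tex_0, 
             uniform float4 k_texelheight, 
             out float4 o_color : COLOR) 
{ 
    // average each texel with the one a scanline below it, 
    // blending the two interlaced fields together 
    float4 a = tex2D(tex_0, l_texcoord0); 
    float4 b = tex2D(tex_0, l_texcoord0 + float2(0, k_texelheight.x)); 
    o_color = (a + b) * 0.5; 
} 
""") 
card.setShader(deinterlaceShader) 
# one scanline in texture coordinates (the video is padded into a power-of-two texture) 
card.setShaderInput('texelheight', Vec4(1.0 / videoTexture.getYSize(), 0, 0, 0)) 

That would trade the combing for a bit of vertical blur, which may or may not be acceptable. Is there a better or built-in way?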

Thanks!