Alpha Channel Video problem

Hello! I’ve become interested in Panda3D for its video features. I’ve been doing some simple testing using the Media Player sample and some of my own videos. I’ve been trying to get alpha channel video to play back, but the alpha channel appears to be ignored. I tried setting the transparency mode as someone on IRC suggested, but my alpha areas still show as black, even though the background color is set to white.

Someone mentioned I could use a secondary greyscale video as the alpha channel, but I’m not sure how to do this, and I worry about possible sync issues. I can render a second version of my video with just the alpha channel if that would work.

I was also thinking that perhaps it’s not blending with white because that’s just a card surface color or something? Maybe there’s nothing behind it to blend with? From what I can tell, the Media Player sample doesn’t draw anything but the card the video is placed on.

Here is the code I have so far (only a few lines differ from the original Media Player sample). It works, but again, the alpha channel of the video file (QuickTime Animation codec in a .mov) appears to have no effect. Could that perhaps be because of ffmpeg? I have also tried Motion PNG in a .mov, with the same result.

Here are 2 pictures showing the current result, and a photoshop of how it should look if the alpha was working:
Problem: pasteall.org/pic/show.php?id=20938
Desired: pasteall.org/pic/show.php?id=20939
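Just to spell out what I mean by “blending with white”, here is a quick standalone sketch of the standard straight-alpha “over” operator (pure Python, nothing Panda-specific; the function name is my own):

```python
# Standard straight-alpha "over" compositing, applied per color channel:
#   out = alpha * foreground + (1 - alpha) * background
def over(fg, alpha, bg):
    """Composite one channel of a foreground pixel over a background pixel."""
    return alpha * fg + (1.0 - alpha) * bg

# A fully transparent black pixel over a white background should come out white...
print(over(0.0, 0.0, 1.0))   # -> 1.0 (white)
# ...but if the alpha channel is ignored (treated as opaque), it stays black.
print(over(0.0, 1.0, 1.0))   # -> 0.0 (black)
```

That second case is exactly what I’m seeing: the transparent areas render as solid black.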

Thanks for any help!!

Using the 1.8.0 Nov 11th release.

from panda3d.core import *
# Tell Panda3D to use OpenAL, not FMOD
loadPrcFileData("", "textures-power-2 none")
loadPrcFileData("", "audio-library-name p3openal_audio")
from direct.showbase.DirectObject import DirectObject
from direct.gui.OnscreenText import OnscreenText
import direct.directbase.DirectStart

# The name of the media file.
MEDIAFILE="_TStv_WSW_Transition.mov"

# Function to put instructions on the screen.
def addInstructions(pos, msg):
    return OnscreenText(text=msg, style=1, fg=(0,0,0,1), mayChange=1,
                        pos=(-1.3, pos), align=TextNode.ALeft, scale = .05, shadow=(1,1,1,1), shadowOffset=(0.1,0.1))

# Function to put title on the screen.
def addTitle(text):
    return OnscreenText(text=text, style=1, fg=(0,0,0,1),
                        pos=(1.3,-0.95), align=TextNode.ARight, scale = .07, shadow=(1,1,1,1), shadowOffset=(0.05,0.05))


class World(DirectObject):

  def __init__(self):
    self.title = addTitle("Panda3D: Tutorial - Media Player")
    self.inst1 = addInstructions(0.95,"P: Play/Pause")
    self.inst2 = addInstructions(0.90,"S: Stop and Rewind")
    self.inst3 = addInstructions(0.85,"M: Slow Motion / Normal Motion toggle")
    
    base.setBackgroundColor(1,1,1,1)

    # Load the texture. We could use loader.loadTexture for this,
    # but we want to make sure we get a MovieTexture, since it
    # implements synchronizeTo.
    self.tex = MovieTexture("name")
    assert self.tex.read(MEDIAFILE), "Failed to load video!"

    # Set up a fullscreen card to set the video texture on.
    cm = CardMaker("My Fullscreen Card")
    cm.setFrameFullscreenQuad()
    cm.setUvRange(self.tex)
    card = NodePath(cm.generate())
    card.reparentTo(render2d)
    card.setTransparency(TransparencyAttrib.MAlpha)
    card.setScale(card, 0.8)
    card.setTexture(self.tex)
    card.setTexScale(TextureStage.getDefault(), self.tex.getTexScale())
    self.sound=loader.loadSfx(MEDIAFILE)
    # Synchronize the video to the sound.
    #self.tex.synchronizeTo(self.sound)
    
    

    self.accept('p', self.playpause)
    self.accept('P', self.playpause)
    self.accept('s', self.stopsound)
    self.accept('S', self.stopsound)
    self.accept('m', self.fastforward)
    self.accept('M', self.fastforward)

  def stopsound(self):
    self.sound.stop()
    self.sound.setPlayRate(1.0)

  def fastforward(self):
    print(self.sound.status())
    if (self.sound.status() == AudioSound.PLAYING):
      t = self.sound.getTime()
      self.sound.stop()
      if (self.sound.getPlayRate() == 1.0):
        self.sound.setPlayRate(0.5)
      else:
        self.sound.setPlayRate(1.0)
      self.sound.setTime(t)
      self.sound.play()

  def playpause(self):
    if (self.sound.status() == AudioSound.PLAYING):
      t = self.sound.getTime()
      self.sound.stop()
      self.sound.setTime(t)
    else:
      self.sound.play()

w = World()
run()

Panda’s video support library, ffmpeg, doesn’t support QuickTime alpha channels (to my knowledge), so Panda doesn’t either.

However, you can certainly store the alpha channel as a separate grayscale QuickTime movie, and apply it to the main movie at runtime, using the two-parameter form of loader.loadTexture() or texture.read(). There will be no sync issues; Panda guarantees that the frames are matched up by frame number. There is an additional performance overhead for doing this, but it may not be enough of an issue to trouble you.
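A sketch of what that might look like in the sample, assuming the alpha movie follows a naming convention like `_alpha` (the filenames and the helper function here are hypothetical; both `MovieTexture.read()` and `loader.loadTexture()` accept the alpha file as a second argument):

```python
# Hypothetical filenames; substitute your own color and alpha movies.
COLOR_MOVIE = "movie_rgb.mov"

def alpha_movie_name(color_path, suffix="_alpha"):
    # Derive the companion alpha movie's name from the color movie's name.
    # This naming convention is just an assumption for the example.
    stem, dot, ext = color_path.rpartition(".")
    return stem + suffix + dot + ext

try:
    from panda3d.core import MovieTexture
    tex = MovieTexture("name")
    # Two-parameter form of read(): the second file supplies the alpha channel.
    # Panda matches the two movies up frame by frame, so they stay in sync.
    tex.read(COLOR_MOVIE, alpha_movie_name(COLOR_MOVIE))
except ImportError:
    pass  # Panda3D not available here; the call above shows the intended usage.
```

The card still needs `card.setTransparency(TransparencyAttrib.MAlpha)`, as in the sample above, for the alpha to take effect.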

Also note that the current CVS trunk has significant video performance improvements over the 1.7.2 version.

David

Blender 2.6 appears to support QuickTime alpha (and 2.49 supports it fully, thanks to its QuickTime support), but 2.6 has some issues with it that I’ve just filed a bug report about. Since QuickTime support was removed, ffmpeg is used for all video, and alpha does appear to work there (again with some buggy issues that will hopefully get fixed), so I would think it should work in Panda as well. Could the issue be that it doesn’t have anything to blend with (nothing behind it)? If that is indeed the case, I’ll give the second-video-as-alpha option a try and see what I can come up with :slight_smile: