Some weird visual artifacts in panda3d video textures...


I’m seeing some weird visual artifacts when I try to render a kinect data stream using panda3d. At the moment I’m using the setRamMipmapPointerFromInt() method to pass a void * pointer holding BGRA data back into panda3d… and it works, kinda.

However, when the video changes quickly, weird glitchy graphics turn up on the texture. It’s a little difficult to explain, so I’ve put a video of it happening up here (excuse the quality; the glitches should still be clearly visible): …

By comparison this is the same video stream being rendered pixel by pixel using libSDL: …

Notice how in the second video there are no weird previous-frame-overlay issues; that’s because the raw pixel stream from the kinect is actually not glitching; somehow pushing these pixels into the panda texture is causing the problem.

The code I’m using for my texture is:

import pyar
from pandac.PandaModules import *
from direct.task import Task

class Vision(object):
  """ Vision helper """

  def __init__(self):
    self.api = pyar.Api()
    self.width = self.api.prop(self.api.AR_PROP_WIDTH)
    self.height = self.api.prop(self.api.AR_PROP_HEIGHT)
    self.depth_max = self.api.prop(self.api.AR_PROP_DEPTH_MAX)
    self.depth_none = self.api.prop(self.api.AR_PROP_DEPTH_NONE)

  def next(self):
    rgb = self.api.rgb()
    depth = self.api.depth()
    return (rgb, depth)

  def shutdown(self):
    pass

class VisionTexture(Texture):
  """ Texture using the vision class to read data """

  def __init__(self):
    Texture.__init__(self)
    self.__vision = Vision()
    taskMgr.add(self.updateTextureTask, "updateTextureTask")

  def updateTextureTask(self, t):
    (rgb, depth) = self.__vision.next()
    rgb = long(rgb)
    self.setRamMipmapPointerFromInt(rgb, 0, self.__vision.width * self.__vision.height * 3)
    return Task.cont

…and to render it:

import bootstrap
import pyar
from pandac.PandaModules import *
from direct.task import Task

if __name__ == "__main__":
  from direct.directbase.DirectStart import *
  cm = CardMaker("cm")
  cm.setFrame(-1, 1, 1, -1)
  card = render2d.attachNewNode(cm.generate())
  tex = pyar.VisionTexture()
  card.setTexture(tex)
  run()

Does anyone know if there’s some kind of locking I need to do when updating the texture data or something like that?

It seems really strange to me that I ever see a texture which has raw pixel data that is not the same as the pixel data set using setRamMipmapPointerFromInt(), which makes me think somehow… the texture is being buffered or something?

Any help much appreciated!


setRamMipmapPointerFromInt is pretty low-level, and you shouldn’t use it unless you know what you’re doing. What’s probably happening is that the image buffer behind the pointer is being destructed internally, so that Panda reads from memory that has already been freed. That memory may no longer contain the correct data, since it may have been reused for storing other data, or it may already partially contain data from another frame at that point.

Consider creating a PTAUchar holding the data and passing it to set_ram_image, or, if the image is of a different format (like RGBA), using set_ram_image_as(data, "RGBA"). This ensures that the data is copied, and that you’re not reading the data directly from the pointer that OpenCV is managing.
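A minimal sketch of that copy-first idea, using ctypes as a stand-in for the real driver pointer (snapshot_frame is my name for illustration, not a Panda API):

```python
import ctypes

def snapshot_frame(ptr, nbytes):
    # ctypes.string_at copies nbytes from the raw pointer into an immutable
    # Python bytes object, which can then be handed to set_ram_image_as();
    # the driver is free to reuse its own buffer immediately afterwards.
    return ctypes.string_at(ptr, nbytes)

# Demo: the copy survives the "driver" overwriting its buffer.
buf = ctypes.create_string_buffer(b"\x01\x02\x03\x04", 4)
copied = snapshot_frame(ctypes.addressof(buf), 4)
buf[0] = b"\xff"                       # driver reuses the buffer...
assert copied == b"\x01\x02\x03\x04"   # ...but our copy is unaffected
```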


So basically, make a copy of the data in python and pass it into setRamImage()?

huh. Seems pretty lame to have a function that lets you avoid having to create a copy of every frame but that isn’t actually usable though.

As I said in the original post, the vision class itself generates valid RGB data; I’ve rendered it and it has none of that past-frame ghosting happening, and valgrind confirms that there is no memory corruption (i.e. freed frame pointers) going on.

If setRamMipmapPointerFromInt() EVER calls free() on the pointer it’s passed, that makes the function completely useless; you’d have to reallocate a buffer every time you pass a new frame in (not to mention it contradicts the API docs).

It’s also worth noting that the example here does not allocate a new pointer each time:

Still seems to me like I’m doing something wrong in my texture class that’s making the update method get called at the same time frame rendering is going on or something…

Still, I’ll try your suggestion and see if it works.

I’ll also try dropping the frame rate of texture updates and see if I can create like a minimal test case or something I guess.


setRamMipmapPointerFromInt does not call free() on the pointer it’s passed. setRamMipmapPointerFromInt exists to let you tell Panda to read from an arbitrary buffer. In this case, that’s a bad idea, because OpenCV is free to modify the buffer after you pass the pointer to Panda.
It is important to copy the data for this reason: to ensure that you’re reading a full frame, and not whatever OpenCV puts into that buffer after it is done with it.

The blog post clearly states that Panda does not copy the data into a buffer: setRamMipmapPointerFromInt makes Panda upload the data directly to the graphics card when the graphics card needs the texture. But by the time that happens, OpenCV is already writing other data to the buffer.

That is, if I’m right about what’s happening.



This would be true (perhaps; OpenCV doesn’t randomly overwrite data buffers either, afaik) only if I were using OpenCV; I’m not. I’m using libfreenect, with a custom driver that I’ve written myself to read the kinect data.

The C driver carefully triple-buffers the video frames to handle the situation where:

  1. You have a new pixel buffer pointer waiting to pass to the texture.

  2. You’re busy using the old pixel buffer on a current texture.

  3. In that small interval, frames are rendered by a separate thread.

(It’s written this way specifically so that the rendering can be done in a separate thread safely.)

Accessing a new frame of data cycles through the buffer set so that you never encounter the situation where the buffer in (2) is being written to in the brief window between when the buffer in (1) is accessed and when it is passed into the texture; the third, unused buffer is used in that case.
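For what it’s worth, that scheme can be sketched roughly like this (my reading of the description above, not the actual driver code):

```python
import threading

class TripleBuffer:
    """Triple-buffering sketch: the writer always fills a spare buffer and
    atomically publishes it; the reader swaps the published buffer for its
    previous one, so neither side ever touches a buffer the other is using."""

    def __init__(self, make_buffer):
        self._lock = threading.Lock()
        self._write = make_buffer()   # being filled by the capture thread
        self._ready = make_buffer()   # latest complete frame, if any
        self._read = make_buffer()    # currently owned by the renderer
        self._fresh = False

    def publish(self, fill):
        fill(self._write)             # capture thread writes a full frame
        with self._lock:
            self._write, self._ready = self._ready, self._write
            self._fresh = True

    def acquire(self):
        with self._lock:
            if self._fresh:
                self._read, self._ready = self._ready, self._read
                self._fresh = False
        return self._read             # safe to use until the next acquire()
```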

For this to be an issue, the updateTextureTask() call on my texture would have to be invoked from multiple threads simultaneously, and a separate rendering thread would have to be rendering frames between these lines:

    (rgb, depth) = self.__vision.next()
    rgb = long(rgb)
    self.setRamMipmapPointerFromInt(rgb, 0, self.__vision.width * self.__vision.height * 3)

To be fair, I suppose that is a possibility; the SDL renderer that I wrote to check the data stream doesn’t update and render in separate threads at the same time. But I’d be surprised if Panda were doing this? Surely not?


Panda doesn’t upload the data to the GPU right away when you call setRamMipMapPointerFromInt, it is free to do that whenever it is needed. At the point when it does, when your updateTextureTask has long finished, your buffer may already contain other data, right?


Oh wow.

You mean to tell me that when I pass a pointer into setRamMipMapPointerFromInt(), it sits there for an arbitrary amount of time before that pointer is actually read for the texture, during which the old pointer (passed during the previous call to setRamMipMapPointerFromInt) is still assigned to the texture?

You’re right, that’s totally what’s happening.

If I push 0xff0000 into the original pointer after calling setRamMipMapPointerFromInt on a different value I totally get red flashes.
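That red-flash experiment in miniature, with ctypes standing in for the raw pointer the texture holds onto:

```python
import ctypes

# The texture keeps reading through the raw address for as long as it likes,
# so anything written into the buffer later shows up in what it reads.
buf = ctypes.create_string_buffer(b"frame-1", 8)
addr = ctypes.addressof(buf)    # what setRamMipMapPointerFromInt would hold

buf.value = b"frame-2"          # "next frame" written into the same buffer
seen = ctypes.string_at(addr, 7)
assert seen == b"frame-2"       # a later read sees the new data, not the old
```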


:confused: That’s not very intuitive behaviour.

Still, thanks for your help. Looks like I’ll be using setRamImage after all~


I wouldn’t say a “random” amount of time: it happens whenever Panda renders the geometry holding the texture, which is when it passes the data to the GPU. You might be able to force this to happen earlier using Texture::prepare(), but this is not what setRamMipMapPointerFromInt was designed for in the first place. It was created by someone who needed very low-level access, and the method should be avoided unless you’re sure you need it and can guarantee that the data will stay around for long enough.

setRamImage/modifyRamImage is usually the method to use for purposes like yours.


The issue is not that the data needs to be around for ‘long enough’; the issue is that the API doesn’t expose any means of strictly bounding how long ‘long enough’ actually is.

I maintain this is basically no better than just sleeping for a random length of time before swapping pointers.

It’s slightly vexing that the blog example (from the Panda blog, no less) of ‘how to make a webcam/movie texture’ uses this exact method; but apparently that’s not the right way of doing it after all.


You’re right, and I’m not particularly excited about how the blog post presents this method either.