Getting the byte data from PNMImage

Hi,

I am trying to send a screenshot of my window over serial to another screen.

I have done the following to get the screenshot:

self.pp = PNMImage()
if(base.win.getScreenshot(self.pp)):
    self.pp = PNMImage(320, 240, self.pp.getNumChannels())

And when I try writing this to file it is fine and looks how it should.

I am using PySerial and that works fine for sending bytes over serial to my hardware such as strings etc. but I want to send over the PNMImage.

I saw a thread, “PNMImage.write Arguments Make No Sense! (Solved)”, where drwr talks about StringStream, so I tried using that by doing this:

self.pp = PNMImage()
if(base.win.getScreenshot(self.pp)):
    self.pp = PNMImage(320, 240, self.pp.getNumChannels())
    ss = StringStream()
    self.pp.write(ss, 'jpeg')
    self.ser.write(ss.getData())

Where self.ser is my serial connection.

Bytes of data do go across, but I’m not sure they’re actually the JPEG. Is there a way to write the StringStream data to a file so I can make sure getData() contains the image?

Or is there another Stream object I can use which is better suited? Or even a way to just pull the raw byte array from PNMImage?

Thanks for your time,

poncho

self.pp = PNMImage()
if(base.win.getScreenshot(self.pp)):
    self.pp = PNMImage(320, 240, self.pp.getNumChannels())

I’m not sure how this code can work. The first line creates self.pp, an empty PNMImage. Fine. The second line stores the screenshot in self.pp, also fine. But then the third line replaces self.pp with a new, blank 320x240 PNMImage, throwing away the screenshot you had just stored there. So you end up with a blank PNMImage.

The second block of code looks fine, other than the fact that it also throws away the screenshot image and replaces it with a blank image.

David

Oh god of course!

My Panda3D window is 800x600, so I was trying to scale what I had down to 320x240. Is there a method that’ll do this?

Thanks.

self.pp = None
ppfull = PNMImage()
if(base.win.getScreenshot(ppfull)):
    self.pp = PNMImage(320, 240, ppfull.getNumChannels())
    self.pp.quickFilterFrom(ppfull)
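
(quickFilterFrom does a quick-and-dirty rescale; if the result looks too rough, there is also gaussianFilterFrom, which is slower but smoother.)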

Ah thanks very much.

Is there a way to write from StringStream to a file so I can check it has gone into the StringStream okay?

Thanks.

Well, since ss.getData() returns the data, you can just do:

open('file.jpg', 'wb').write(ss.getData())

David

Sorry, I’m pretty new to Panda3D, so thanks for answering the simple questions. :)

I’ve got it working now and the streamed images are appearing on my second screen over serial so thanks for the help!

My problem now is that the Panda3D rendering lags every time an image is sent, because it’s taking a screenshot -> scaling from 800x600 down to 320x240 -> then writing to serial.

I thought I could try running my screenshot_to_serial task on a separate thread using my own task chain, but it seems like that task chain is blocking the render or something: my task prints “screenshot taken” every 3 seconds (as it should), but the window stays black. This only happens when numThreads is 1 or greater, so as soon as the task isn’t running on the main thread, nothing gets rendered.

Here is my task chain code:

taskMgr.setupTaskChain('screen_chain', numThreads = 1, tickClock = True,
                       threadPriority = 1, frameBudget = 1,
                       frameSync = True, timeslicePriority = False)
taskMgr.add(self.screenshotTask, "Screen Shot Task", taskChain = 'screen_chain')
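
And roughly, this is what the task itself does each time it fires (simplified; the 3-second throttling is left out):

def screenshotTask(self, task):
    ppfull = PNMImage()
    if base.win.getScreenshot(ppfull):
        pp = PNMImage(320, 240, ppfull.getNumChannels())
        pp.quickFilterFrom(ppfull)       # scale 800x600 down to 320x240
        ss = StringStream()
        pp.write(ss, 'jpeg')             # encode as JPEG into the stream
        self.ser.write(ss.getData())     # push the bytes over serial
        print("screenshot taken")
    return task.cont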

I looked at the documentation and I think I have set that up properly, haven’t I? Could it be that getScreenshot() has to run on the render thread?

Sorry for all the questions, thank you for all the help so far drwr!

Poncho

After much experimentation I think it is just a bottleneck problem, so I’m just reducing the number of images sent over serial.

Thanks for the help drwr

You shouldn’t pass tickClock = True (you should almost never set tickClock = True unless you have supplanted the main render thread for some reason). That’s only going to cause subtle timing issues, though; it’s nothing to do with your bottleneck issue.

Taking a screenshot is likely to cause a visible hitch in the rendering, no matter what thread you do it in, because the graphics pipeline has to be held up while it captures the screenshot. You can bind a texture with RTMCopyRam instead, and grab the texture pixels; but you have to be careful that you don’t do this while the main thread is drawing the scene, or you’ll cause a crash. You can use base.graphicsEngine.getRenderLock() to return a Lock object that is held while the main thread is drawing the scene, so if you only extract the texture while you’re holding this lock you should be OK. But threading issues get messy real fast; you have to be careful what you’re doing.
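
For example, the setup might look something like this (just a sketch, written from memory, so treat the details, particularly the lock handling, as an approximation):

from panda3d.core import Texture, GraphicsOutput, PNMImage, StringStream

tex = Texture()
# Ask the window to copy each frame into the texture's RAM image.
base.win.addRenderTexture(tex, GraphicsOutput.RTMCopyRam)

def grabFrame(ser):
    # Hold the render lock so we don't read the texture while the
    # main thread is drawing.
    lock = base.graphicsEngine.getRenderLock()
    lock.acquire()
    try:
        pp = PNMImage()
        if not tex.hasRamImage() or not tex.store(pp):
            return
    finally:
        lock.release()
    ss = StringStream()
    pp.write(ss, 'jpeg')        # encode as JPEG, outside the lock
    ser.write(ss.getData())     # send the bytes over serial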

Note that all of this has to be done in 1.8.0 to avoid threading hitches, since in 1.7.2 threading is only simulated.

David

Ah okay, I understand. I am only on 1.7.2 at the moment but may go to 1.8.0 if it’s pretty much stable.

I was thinking: could I use the low-level render-to-texture method to grab the scene straight into a smaller buffer, say 256x256, and then pull the image data from that buffer and write it to my StringStream?

For example, this is kind of pseudo-codey but:

self.altBuffer = base.win.makeTextureBuffer("hello", 256, 256)
self.altRender = NodePath("new render")
self.altCam = base.makeCamera(self.altBuffer)
self.altCam.reparentTo(self.altRender)
self.altCam.setPos(self.cam.getPos())

# Want to take a screenshot
tex = self.altBuffer.getTexture()
self.pp = PNMImage()
tex.store(self.pp)
self.ss = StringStream()
self.pp.write(self.ss, 'jpg')
self.ser.write(self.ss.getData())

I tried doing this, but I get an AssertionError when I call store(). Is this because the texture data isn’t in system RAM but only in the graphics card’s memory, since it was rendered from the scene rather than, say, loaded from a file on disk?

I also looked at the Generalized Image Filters manual page and tried something like this:

manager = FilterManager(base.win, base.cam)
tex = Texture()
quad = manager.renderSceneInto(colortex=tex)

then tried tex.store(self.pp) there, but that didn’t work either. It also didn’t render into a hidden buffer like I thought it would; it rendered on the main window and tinted everything purple.

Would either of those routes be possible, using FilterManager or a secondary buffer?

Thanks for all the amazing help so far, drwr. I knew I chose right with Panda3D when I saw how great the support and community are.

Poncho

Any idea on that error?

I’m using the same FilterManager way of getting the “screenshot”, and adding

loadPrcFileData('', 'show-buffers 1')

shows me that it is being rendered to the buffer, but if I try tex.store() or tex.write() I get the AssertionError.

The data exists somewhere in memory, right? Or does it only exist in the graphics card’s memory, so it can’t be read back by the CPU and written to disk when I try to store it into a PNMImage?

Any ideas?

Thanks for your time,

Poncho

When you call makeTextureBuffer(), you need to pass True as the optional “to_ram” parameter in order to tell Panda to make the texture memory available in RAM to the CPU (otherwise, it exists only on the graphics card, and you can’t extract the image in code).

This is the second optional parameter, so you have to pass an empty Texture() object as the first optional parameter to reach the second one:

self.altBuffer = base.win.makeTextureBuffer("hello", 256, 256, Texture(), True)
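
With that in place, the extraction code you already wrote should work along these lines (untested sketch):

tex = self.altBuffer.getTexture()
self.pp = PNMImage()
if tex.hasRamImage() and tex.store(self.pp):
    ss = StringStream()
    self.pp.write(ss, 'jpg')        # encode as JPEG into the stream
    self.ser.write(ss.getData())    # send the bytes over serial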

David