Pixel size of contents in textureBuffer

Hi all,
I have a question regarding the number of pixels that content rendered into a textureBuffer occupies in the image written to disk; my case is actually very specific:

Say I have a flat-plane, with x and z dimensions, of 16x16, that is:

planeStartX=0
planeEndX=16
planeStartZ=0
planeEndZ=16

I create geometry from this, then parent it to a scene connected to a textureBuffer:

#Required imports for this snippet:
from panda3d.core import Texture, NodePath, Filename
#Create the hidden window (greatestX and greatestY hold the desired buffer dimensions):
tex=Texture()
mybuffer = base.win.makeTextureBuffer("My Buffer", int(greatestX), int(greatestY), tex, to_ram=True)
mybuffer.setSort(-100)
mycamera = base.makeCamera(mybuffer)
myscene = NodePath("My Scene")
mycamera.reparentTo(myscene)
#parent the flat plane to this scene:
flatGeometry=makeFlatPlane(16)#<-function that would return a flat plane, with the sent dimension size, in this case, 16.
flatGeometry.reparentTo(myscene)
#position the camera some distance behind the flat-plane:
centrePoint=flatGeometry.getBounds().getCenter()
mycamera.setPos(centrePoint)
mycamera.setY(-350)
#aspect ratio:
mycamera.node().getLens().setAspectRatio(float(greatestX) / float(greatestY))
#save a screenshot of the flat-plane:
file_name=Filename.fromOsSpecific("save_gameDat_001.png")
base.graphicsEngine.renderFrame()
mybuffer.saveScreenshot(file_name)
  1. How would I make this flat 16x16 plane occupy 16x16 pixels in the resulting saved image?
  2. How would I specify the pixel starting point for the plane in the resulting saved image? For instance, if I wanted it to start at pixel (20,40) and, being 16x16, end at (35,55).

Does anyone know how I would go about achieving that? It probably involves manipulating the camera’s fov and/or position before writing the plane out to disk as an image, but in what exact way?

Thanks in advance, if my question isn’t clear, then please ask and I’ll clarify it.

Hmm… You could perhaps render into a 16x16 buffer, read the resulting texture into a PNMImage, then use PNMImage’s “copySubImage” method to transfer that 16x16 image into the target image at the location of your choice.
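A rough sketch of that idea (the buffer and image names and sizes here are just illustrative, not taken from your code):

from panda3d.core import PNMImage, Filename

# small_buffer: a 16x16 buffer created with makeTextureBuffer, as in your snippet above
small_img = PNMImage()
small_buffer.getScreenshot(small_img)         # read the 16x16 render back into RAM
target_img = PNMImage(256, 256, 4)            # the larger destination image (example size)
target_img.copySubImage(small_img, 20, 40)    # paste the 16x16 block at pixel (20, 40)
target_img.write(Filename("save_gameDat_001.png"))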

That could work, but I’m really trying to avoid using PNMImage, since in my experience it’s rather slow, whereas simply getting a screenshot from another window is very fast.
I’ve tried manipulating the camera’s FOV and position, and while that has given me varying results, I’d need a concrete way of doing it.

Hmm… In that case, have you experimented with using the “pixel2d” scene-graph? That might provide an easier way of positioning and scaling things in pixel-coordinates.
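For example, with the default pixel2d node of the main window (just a quick sketch; for an offscreen buffer you would have to set up a similar pixel-scaled 2-D scene graph yourself):

from panda3d.core import CardMaker

cm = CardMaker("pixel_card")
cm.setFrame(0, 16, -16, 0)                        # a 16x16-pixel card, growing right and down
card = base.pixel2d.attachNewNode(cm.generate())
card.setPos(20, 0, -40)                           # top-left corner at pixel (20, 40)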

Ah, I hadn’t yet thought of experimenting with that, let me play around with it and see what it yields, I’ll report on the results later.

What I would suggest is to use an OrthographicLens and call its set_film_size method, rather than playing around with aspect ratios and FOVs. You don’t really need pixels as units either; as long as the units passed to set_film_size correspond to the size of your texture buffer, it should work fine.

Something like the following might help:

from panda3d.core import *
from direct.showbase.ShowBase import ShowBase


class MyApp(ShowBase):

    def __init__(self):

        ShowBase.__init__(self)

        tex = Texture()
        mybuffer = self.win.make_texture_buffer("my_buffer", 512., 512., tex, to_ram=True)
        mybuffer.set_sort(-100)
        mycamera = self.make_camera(mybuffer)
        lens = OrthographicLens()
        lens.set_near(-100.)
        lens.set_far(100.)
        mycamera.node().set_lens(lens)
        myscene = NodePath("my_scene")
        mycamera.reparent_to(myscene)
        #parent the flat plane to this scene:
        cm = CardMaker("flat_plane")
        cm.set_frame(-8., 8., -8., 8.)
        flatGeometry = myscene.attach_new_node(cm.generate())
        flatGeometry.set_pos(20. + 8., 0., -40. - 8.)
        lens.set_film_size(512., 512.)
        lens.set_film_offset(256., -256.)
        self.graphicsEngine.render_frame()
        #save a screenshot of the flat-plane:
        mybuffer.save_screenshot("save_gameDat_001.png")


app = MyApp()
app.run()

In the code above, you don’t really have to call lens.set_film_offset(256., -256.); you could call mycamera.set_pos(256., 0., -256.) instead.
To set the starting point for the plane geometry, you can just set it to that position in your scenegraph.
One important thing to note is that the Y-axis of the texture corresponds to the negative Z-axis of myscene, hence the -40. - 8. rather than 40. + 8.

I just figured out an alternative approach, which is probably more in line with what you’re trying to accomplish: a partial render of your scene, so that only one particular piece of geometry gets rendered to the corresponding region of pixels of the texture buffer (leaving the other pixels untouched). Is that correct?
If that’s what you want, try this:

from panda3d.core import *
from direct.showbase.ShowBase import ShowBase


class MyApp(ShowBase):

    def __init__(self):

        ShowBase.__init__(self)
        
        tex = Texture()
        self.max_x = 512
        self.max_y = 512
        mybuffer = self.win.make_texture_buffer("my_buffer",
            self.max_x, self.max_y, tex, to_ram=True)
        mybuffer.set_sort(-100)
        # prevent existing content from being cleared
        mybuffer.set_clear_color_active(False)
        mycamera = self.make_camera(mybuffer)
        # the existing lens is by default set to render into the default
        # DisplayRegion (which apparently can't be deactivated) of mybuffer,
        # so to avoid anything being rendered into that region, the existing
        # lens is deactivated instead
        mycamera.node().set_lens_active(0, False)
        # create a new lens to render specific geometry
        self.lens = lens = OrthographicLens()
        lens.set_near(-100.)
        lens.set_far(100.)
        # the new lens has index 1
        mycamera.node().set_lens(1, lens)
        # create a new scenegraph
        self.myscene = NodePath("my_scene")
        # disable culling on the new scenegraph
        self.myscene.node().set_bounds(OmniBoundingVolume())
        self.myscene.node().set_final(True)
        mycamera.reparent_to(self.myscene)
        # create a new DisplayRegion to render only specific geometry to
        self.dr = dr = mybuffer.make_display_region()
        dr.set_camera(mycamera)
        # set the new lens to render into the secondary display region
        dr.lens_index = 1
        # create and render two different planes, using start and end coordinates
        self.__render_plane(20., 36., 40., 56.)
        self.__render_plane(120., 132., 350., 378.)
        mybuffer.save_screenshot("save_gameDat_001.png")

    def __render_plane(self, start_x, end_x, start_z, end_z):

        cm = CardMaker("flat_plane")
        cm.set_frame(start_x, end_x, -end_z, -start_z)
        flatGeometry = self.myscene.attach_new_node(cm.generate())
        l = start_x / self.max_x
        r = end_x / self.max_x
        b = 1. - end_z / self.max_y
        t = 1. - start_z / self.max_y
        self.dr.set_dimensions(l, r, b, t)
        self.lens.set_film_size(end_x - start_x, end_z - start_z)
        self.lens.set_film_offset((start_x + end_x) * .5, -(start_z + end_z) * .5)
        self.graphicsEngine.render_frame()
        # apparently two frames need to be rendered (double-buffering?)
        self.graphicsEngine.render_frame()


app = MyApp()
app.run()

Although I’m generating cards whose origin is always at the (0., 0., 0.) position, I hope you will be able to adapt this code so it works with your own models as well.
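For a model of your own, a rough way to obtain those coordinates (just a sketch; it assumes the model lies in the XZ plane and follows the same convention as __render_plane, where texture Y runs along negative scene Z) could be its tight bounds:

bounds = my_model.get_tight_bounds()  # my_model: a hypothetical NodePath parented to myscene
if bounds:
    min_pt, max_pt = bounds
    start_x, end_x = min_pt.x, max_pt.x
    start_z, end_z = -max_pt.z, -min_pt.z  # pixel Y = -scene Z
    # these values can then drive the same set_dimensions / set_film_size /
    # set_film_offset calls used in __render_plane above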

Thanks to both of you; after testing all approaches, @Epihaius’s approach worked beautifully for me, specifically the first one he suggested.
@Epihaius, what I am specifically trying to do is to treat the texture buffer itself as a blank canvas that I can draw on, before saving the result as a png image. This is because procedurally generating images via PNMImage has been painfully slow, at least in my experience, so I had to look for alternatives. Using textureBuffers has been very fast.
I appreciate your help, but I have one question, if it won’t trouble you too much: is there a way to achieve the same thing using plane geometry generated without the card-maker, i.e. “standard” procedural geometry with geom nodes, primitives, vertices and so on? The reason I ask is that this would let me draw on the texture without knowing the dimensions of the buffer beforehand, i.e. I wouldn’t have to know self.max_x and self.max_y before creating the geometry. I could just draw my geometry at some arbitrary point within the future texture bounds, rather than first having to compute the texture bounds and then drawing the geometry within them. For example, I could draw one plane at point (8,8), then another at point (12,8), etc., before creating the texture buffer and writing the texture out to disk, after setting everything else up properly. So, is there a way to achieve the same using “standard” procedural geometry?

Sure, it’s doable. Here is a new version of the first approach which uses low-level geometry creation instead of CardMaker:

from panda3d.core import *
from direct.showbase.ShowBase import ShowBase
import array


class MyApp(ShowBase):

    def __init__(self):

        ShowBase.__init__(self)

        tex = Texture()
        mybuffer = self.win.make_texture_buffer("my_buffer", 512., 512., tex, to_ram=True)
        mybuffer.set_sort(-100)
        mycamera = self.make_camera(mybuffer)
        lens = OrthographicLens()
        lens.set_near(-100.)
        lens.set_far(100.)
        mycamera.node().set_lens(lens)
        myscene = NodePath("my_scene")
        mycamera.reparent_to(myscene)
        #create the flat plane node:
        plane_node = self.__create_plane(-8., 8., -8., 8.)
        #parent the flat plane to this scene:
        flatGeometry = myscene.attach_new_node(plane_node)
        flatGeometry.set_pos(20. + 8., 0., -40. - 8.)
        lens.set_film_size(512., 512.)
        lens.set_film_offset(256., -256.)
        self.graphicsEngine.render_frame()
        #save a screenshot of the flat-plane:
        mybuffer.save_screenshot("save_gameDat_001.png")

    def __create_plane(self, start_x, end_x, start_z, end_z):

        vertex_format = GeomVertexFormat.get_v3t2()
        vertex_data = GeomVertexData("plane_data", vertex_format, Geom.UH_static)
        vertex_data.unclean_set_num_rows(4)
        data = (
            start_x, 0., -end_z, 0., 0.,
            end_x, 0., -end_z, 1., 0.,
            end_x, 0., -start_z, 1., 1.,
            start_x, 0., -start_z, 0., 1.
        )
        data_array = array.array("f", data)
        data_view = memoryview(vertex_data.modify_array(0)).cast("B").cast("f")
        data_view[:] = data_array
        prim = GeomTriangles(Geom.UH_static)
        prim.add_vertices(0, 1, 2)
        prim.add_vertices(0, 2, 3)
        geom = Geom(vertex_data)
        geom.add_primitive(prim)
        geom_node = GeomNode("flat_plane")
        geom_node.add_geom(geom)

        return geom_node


app = MyApp()
app.run()

The generated geometry contains UVs but no normals (let me know if you want another vertex format).
Still, I don’t see how this makes the texture-buffer setup any easier: how can you draw on a texture if you haven’t yet created a buffer with a specific size? If you mean drawing all the planes onto it at once after creating them, determining their total size and using a single render_frame call, then that’s no problem of course, but you could do that with the cards as well.

And that’s what made me think that you gradually wanted to build up that texture by creating and rendering one plane, then creating and rendering another plane etc. over time, as needed (for example whenever a plane is added or removed by the player).
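In case that single-pass option is what you’re after, here is a rough sketch of the workflow (it would go inside __init__, before any buffer exists; plane_coords is an illustrative list of coordinate tuples, and __create_plane is the method from the listing above):

        myscene = NodePath("my_scene")
        # create all plane nodes first
        for start_x, end_x, start_z, end_z in plane_coords:
            myscene.attach_new_node(self.__create_plane(start_x, end_x, start_z, end_z))
        # derive the buffer size from the total extent of the geometry
        min_pt, max_pt = myscene.get_tight_bounds()
        size_x = int(max_pt.x - min_pt.x)
        size_y = int(max_pt.z - min_pt.z)
        # then create the buffer, camera and orthographic lens using size_x and size_y,
        # set the film size and offset to frame that same extent, call render_frame()
        # once and save the screenshot, as in the first listing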

Either way, I’m glad I could help 🙂.

Thanks; the planes I’m generating are placed in rows and columns that are not equidistant from each other. What I wanted to do was to generate the planes as faces of a single model within the main window, then parent that entire model to the hidden window before taking a screenshot of it and writing that out to disk.

The planes are generated in a loop and the position of the last plane is not known until the loop is done running. The planes are of different sizes as well. So to determine the size of the textureBuffer, I’d have to wait for the loop to finish, then use the last plane’s position (maxX,maxZ) plus some padding as the dimensions for the textureBuffer.

If I generate all the planes as faces of a single model, I could do that in the main-window, re-parent the resultant model to the hidden-window and then just save the screenshot of that model.

Otherwise, I’d have to run the loop twice, once to get the dimensions of the textureBuffer and then once more to populate the created textureBuffer with the plane geometry.

That’s why I would opt to use low-level geometry instead of geometry from CardMaker, since I believe it to be a faster approach.

As for the UV and normal data: no, those would be unnecessary. The only thing the geometry needs is colour, nothing more; of course, with 4 channels, since the transparency is important.
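For illustration, a colour-only variant of the __create_plane method might look something like this (an untested sketch using GeomVertexWriter and the standard v3c4 vertex format; it assumes the same imports as the listing above):

    def __create_colored_plane(self, start_x, end_x, start_z, end_z, color):

        # position plus per-vertex RGBA colour, no UVs or normals
        vertex_format = GeomVertexFormat.get_v3c4()
        vertex_data = GeomVertexData("plane_data", vertex_format, Geom.UH_static)
        vertex_data.set_num_rows(4)
        pos_writer = GeomVertexWriter(vertex_data, "vertex")
        col_writer = GeomVertexWriter(vertex_data, "color")
        corners = ((start_x, -end_z), (end_x, -end_z), (end_x, -start_z), (start_x, -start_z))
        for x, z in corners:
            pos_writer.add_data3(x, 0., z)
            col_writer.add_data4(*color)  # color: an (r, g, b, a) tuple
        prim = GeomTriangles(Geom.UH_static)
        prim.add_vertices(0, 1, 2)
        prim.add_vertices(0, 2, 3)
        geom = Geom(vertex_data)
        geom.add_primitive(prim)
        geom_node = GeomNode("flat_plane")
        geom_node.add_geom(geom)

        return geom_node

For the alpha channel to actually show up, the NodePath the geometry is attached to would presumably also need set_transparency(TransparencyAttrib.M_alpha).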

I hope this new information helps in conveying exactly what I’m trying to achieve, if not, please ask and I’ll clarify.

Addendum:
I did get it to do exactly what I wanted, by using low-level geometry parented directly to the newly created scene, with your much-appreciated help of course. Thank you very much! 👍
