Pointer Textures?


I found a blog post about them, saying they allow writing directly to VRAM.
That is literally ALL I found when googling: panda3d “pointer textures”.
The example file does not exist anymore.

I haven’t encountered the term anywhere else, so I’m wondering if this feature was abandoned?

How does it work?


I keep reading about setRamImage, but that can’t be it.

That way I only get access to data in standard RAM …
… while pointer textures supposedly give direct access to video RAM.

I know it’s perfectly doable, but I don’t know how it’s done in panda and I can’t seem to find any information?

I think you misunderstood pointer textures. They do not allow direct access to VRAM; they just let you manage texture memory yourself, or have a different library/program manage it.

If you described what you were trying to do we could possibly advise you further on how to achieve it.

Thank you for your reply, as usual. :slight_smile:

I want to update a texture that I use as an array in a fragment shader. I’ve tried implementing arrays,
but all the PTAs and CPTAs seem confusing and only seem to update a copy in RAM?

I want random access so I can update, say, texture[10,10]
instead of having to upload the whole texture from ram each time.

I know it’s possible, because I did so ages ago using SDL.
Iirc it enabled mapping vram to ram and gave out a pointer to it,
and the hardware/driver did the rest. Direct pixel access to buffers in vram.

Doesn’t have to be a texture, as long as I can update a sequence of bytes/floats from Python
and read them in the fragment shader.

Hope that cleared it up…

		self.array = numpy.zeros(1024 * 1024, numpy.float32)
		for x in xrange(0, 1024 * 1024):
			self.array[x] = 1.0

		self.imageTexture = Texture("image")
		self.imageTexture.setup2dTexture(1024, 1024, Texture.T_float, Texture.F_red)

		import sys
		print sys.getsizeof(self.array.tostring())

		p = PTAUchar.emptyArray(0)
		try:
			p.setData(self.array.tostring())
		except AssertionError:
			pass
		self.imageTexture.setRamImage(CPTAUchar(p))

So I managed to find this tidbit, which seems to work, at least to a small degree.
So … not really. It only works for the red component; then I get to see a big red square.
Trying it with blue or green yields black, and FRgb yields a “size does not match” error.

The size of the textures in memory never really fits according to the math,
not even with the most basic one that works only for red components.


Besides this most likely not being what I wanted (direct access), it’s the closest thing I’ve found.

There’s got to be a SANE and straightforward way to get access to texture memory?

This isn’t actually a thing. The closest you can get is for the driver to allocate a buffer in DMA-able memory, and schedule a DMA transfer to video memory. And in most cases, the driver will just allocate a new region in RAM every time you map texture memory to avoid read-write hazards.

SDL probably does something closer to what you get with modifyRamImage() in Panda, which gives a direct handle into Panda’s texture memory. You can use this with Python’s memoryview support:

import struct

self.imageTexture = Texture("image")
self.imageTexture.setup2dTexture(1024, 1024, Texture.T_float, Texture.F_red)

p = memoryview(self.imageTexture.modifyRamImage())
print len(p)

# The view is over raw bytes, so write a float as its 4-byte representation:
p[0:4] = struct.pack('f', 1.0)

Note that Panda uses BGR ordering, not RGB.

Also note that sys.getsizeof doesn’t do what you think it does. Instead, you should simply use len().
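A quick plain-Python illustration of the difference (no Panda needed). In CPython, `sys.getsizeof` reports the interpreter’s bookkeeping overhead on top of the payload, while `len()` gives the number of bytes actually stored:

```python
import sys

# 1 MiB payload, the same size as a 1024x1024 single-byte-per-texel image
data = b'\x00' * (1024 * 1024)

print(len(data))            # 1048576: the bytes you actually stored
print(sys.getsizeof(data))  # a little more: payload plus CPython's object header
```

So when comparing against the size Panda expects, `len()` is the number that matters.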

Thank you so much! I will test that!

But why F_red and not F_rgb*? Why didn’t that work at all?

There’s no reason why it wouldn’t work with F_rgb. Perhaps you can try again with the memoryview approach and F_rgb, and share your code if you encounter more issues.

Note that you have your component type set to T_float, though, so you might want to change your format to F_rgba16 or F_rgb32 accordingly (since there are no 8-bit floating-point texture formats). Alternatively, use T_unsigned_byte if you prefer 8 bits per channel.

Also note that texture formats that aren’t aligned to a 32-bit boundary will be padded by the driver. So, don’t use F_rgb or F_rgb16, but use the rgba equivalents instead for better upload performance. F_rgba32 is fine, though.
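As a sanity check for the “size does not match” errors earlier in the thread: the expected RAM-image size is simply width × height × components × bytes per component. A tiny plain-Python sketch (the helper name `expected_size` is made up for illustration):

```python
def expected_size(width, height, components, bytes_per_component):
    """Size in bytes of an unpadded RAM image."""
    return width * height * components * bytes_per_component

# 1024x1024, F_red with T_float: one 4-byte float per texel -> 4 MiB
print(expected_size(1024, 1024, 1, 4))   # 4194304
# 1024x1024, F_rgba32 with T_float: four floats per texel -> 16 MiB
print(expected_size(1024, 1024, 4, 4))   # 16777216
```

If the buffer you hand to Panda doesn’t match this arithmetic for the chosen format, you get exactly that size-mismatch error.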

Aye, will do! o7


This doesn’t work.

p = memoryview(self.imageTexture.modifyRamImage())

Responds with:

Not knowing what I am doing, I swapped it with …

p = memoryview(self.Spheres)

… which runs, but gives a really, really weird result:

That’s not even pixels. There are squares and circles, and the texture isn’t even filled as a whole;
it has a huge black region in the bottom half.

I’ve tried mixing with bytes, which produces equally corrupt results,
one of them even containing actual text: numbers that I have no idea where they came from.

		self.Spheres = numpy.zeros(1024 * 1024, numpy.uint8)
		for x in xrange(0, 1024 * 1024):
			self.Spheres[x] = 1.0

		self.imageTexture = Texture("image")
		self.imageTexture.setup2dTexture(1024, 1024, Texture.TUnsignedByte, Texture.F_rgb)

		p = memoryview(self.Spheres)
		print len(p)

Please note that it does not matter if I use uint8, float32, TUnsignedByte, TFloat, F_rgb, F_red
and as written above, memoryview on the imageTexture does not work at all.

Well … almost. Using F_red I only get random static of red and black,
with all kinds of attempts (bytes, floats, etc.), which still doesn’t fit, considering I’m filling it with 1.0.

Which version of Panda3D are you using?

I’m using 1.9.0 …

I don’t understand this. It works with F_red, at least to the extent
that I can see lots of red pixels. That’s great, but where’s the rest?

No matter what else I try, it simply keeps throwing errors at me.
These commands make no sense whatsoever.

I declare a 1024x1024 uint8 numpy array. It’s one megabyte in size. No matter what I do,
unless I declare F_red, nothing will work. And then, appropriately, my whole array
filled with 1s only fills the red pixels. And the error message doesn’t even tell what
size it expects.

Trying a 4096x4096 and rgb32 doesn’t work either. Nothing works, except the useless one.

Could you show the code you’re using? Then I could point out what’s going wrong.

There’s not much more to it. This is how it was initially.

At first it constantly complained about arrays of length-1 when I used numpy.zeros to set up a 2D array,
so I changed that to 1024*1024 instead of (1024, 1024). I think initially it was (64, 64, 3), but I don’t recall exactly anymore.

		self.arr = numpy.zeros(1024 * 1024, numpy.float32)
		for x in xrange(0, 1024 * 1024):
			self.arr[x] = 1.0

		self.imageTexture = Texture("image")
		self.imageTexture.setup2dTexture(1024, 1024, Texture.T_float, Texture.F_red)

		p = PTAUchar.emptyArray(0)
		try:
			p.setData(self.arr.tostring())
		except AssertionError:
			pass
		self.imageTexture.setRamImage(CPTAUchar(p))

I got this snippet from the web. It outputs a red texture. Swapping FRed to FBlue or FGreen yields black.
Changing anything yielded mostly errors and even if I didn’t get any errors, the texture was just messed up.

So how does it work? How can I avoid using tostring? memoryview doesn’t work either;
it complains that there’s no buffer support. Having the data copied, or being forced to
re-upload the whole texture every frame, is not an option.

I don’t know what to do with this.


Found it, it was this one: [Small issue with using numpy arrays as textures]

Doesn’t even work for me, it tells me that at p.setData(self.arr) there’s a TypeError,
that it expects a string or Unicode object, but instead it found a numpy.ndarray.

I’m not sure if it’s any help, but if you want to push a lot of data to a shader then maybe a Buffer Texture can help (opengl.org/wiki/Buffer_Texture).

I have some working code here: github.com/wezu/gpu_morph

Thanks wezu!

I’ve tried running it, but it doesn’t work. The view_morph tells me …

AttributeError: type object 'panda3d.core.ShaderAttrib' has no attribute 'F_hardware_skinning'

I’m running on a relatively modern GTX765m with newest drivers.

This part seems to be specifically what you were referring to…

def addMorph(morph_name):
    buffer = Texture("texbuffer")
    buffer.setup_buffer_texture(NUM_VERT, Texture.T_float, Texture.F_rgb32, GeomEnums.UH_static)
    with open(morph_name) as f:
        morph = ...  # parsing the file into (x, y, z) float triples is elided in this excerpt
    image = memoryview(buffer.modify_ram_image())
    for i in range(len(morph)):
        off = i * 12  # one F_rgb32 texel = three 4-byte floats
        image[off:off+12] = struct.pack('fff', morph[i][0], morph[i][1], morph[i][2])
    return buffer

I know I can look up most of it, but that won’t necessarily explain the reasoning behind it.
GeomEnums.UH_static? struct.pack?
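For what it’s worth, struct.pack is plain Python (from the standard struct module): it serializes Python values into raw bytes. The format 'fff' means three 32-bit floats, i.e. exactly the 12 bytes of one F_rgb32 texel:

```python
import struct

# Pack three floats into 12 raw bytes (three IEEE-754 32-bit floats)
data = struct.pack('fff', 1.0, 0.5, 0.25)
print(len(data))  # 12 bytes: 3 floats x 4 bytes each

# Round-trip to confirm the byte layout (these values are exact in float32):
print(struct.unpack('fff', data))  # (1.0, 0.5, 0.25)
```

That is why wezu’s loop advances by 12 bytes per vertex when writing into the buffer texture’s RAM image.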

As I can’t run it, I can’t test it … so I have to ask: is it necessary to deal with texture access that way?

Thanks for your help!

I’m wondering why we can’t enable/disable OpenGL flags directly, instead of having to wait for someone to implement them …
… just like the POINT_SIZE flag. Not complaining, just wondering. I believe having direct access to OpenGL would make things much easier to deal with …
… and would make people less dependent on those behind Panda.

Sorry, just frustrated. :slight_smile:

You need Panda 1.10 to get access to buffer textures and hardware skinning in Panda. Grab a development build from the devel download page.

You do have direct access to OpenGL. You can create a draw callback and use PyOpenGL to set the right point-size flag if you want to.
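An untested sketch of what that could look like, assuming Panda3D’s CallbackNode/PythonCallbackObject API and PyOpenGL are available, run from inside a ShowBase app (not verified here):

```python
# Untested sketch: inject raw OpenGL state from a Panda3D draw callback.
from panda3d.core import CallbackNode, PythonCallbackObject
from OpenGL.GL import glEnable, GL_PROGRAM_POINT_SIZE

def enable_point_size(cbdata):
    # Called during the draw traversal, so a GL context is current here.
    glEnable(GL_PROGRAM_POINT_SIZE)  # lets the vertex shader set gl_PointSize
    cbdata.upcall()                  # continue Panda's normal drawing

node = CallbackNode("gl-state")
node.set_draw_callback(PythonCallbackObject(enable_point_size))
render.attach_new_node(node)  # 'render' comes from your ShowBase instance
```

The callback runs every frame while the node is in the scene graph, so one-shot state changes like this are cheap.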

I don’t understand. There’s no mentioning about this anywhere … ?
I’d assume just importing PyOpenGL (never used it) and randomly hoping it works would be silly?

Why isn’t this mentioned anywhere? How can I access panda’s opengl context?

(yes, I have no idea how this would work)

Reading this now: www.panda3d.org/forums/viewtopic.php?t=5855


So I can do this once and it works? I’m looking at the example. Seems weird, but will give it a try.

I know I sound mostly clueless and I admit I am having a really hard time using panda3d, but it’s not 100% as it seems. :stuck_out_tongue:

Oh btw, silly question but this confuses me now …
Why didn’t you tell me there’s a way to enable GL_VERTEX_PROGRAM_POINT_SIZE_ARB without having to wait for you to do it ? :slight_smile: