Pointer Textures?

Aye, will do! o7


This doesn’t work.

p = memoryview(self.imageTexture.modifyRamImage())

It responds with:

Not knowing what I am doing, I swapped it with …

p = memoryview(self.Spheres)

… which runs, but gives a really, really weird result:

That’s not even pixels. There are squares and circles, and the texture isn’t filled as a whole;
it has a huge black region covering the bottom half.

I’ve tried mixing in bytes, which produces equally corrupt results;
one of them even had actual text in it, plus numbers whose origin I can’t even guess.

		self.Spheres = numpy.zeros(1024*1024, numpy.uint8)
		for x in xrange(0,1024*1024):
			self.Spheres[x] = 1.0;

		self.imageTexture = Texture("image")
		self.imageTexture.setup2dTexture(1024, 1024, Texture.TUnsignedByte , Texture.F_rgb) 

		p = memoryview(self.Spheres)
		print len(p)

Please note that it does not matter whether I use uint8, float32, TUnsignedByte, TFloat, F_rgb or F_red,
and, as written above, memoryview on the imageTexture does not work at all.

Well … almost. Using F_red I only get random static of red and black,
with all kinds of attempts (bytes, floats, etc.), which still doesn’t fit, considering I’m filling it with 1.0.
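For what it’s worth, the size bookkeeping can be checked without Panda at all; the format/type pairs each expect a specific number of bytes per texel, and my arrays only match some of them:

```python
import numpy as np

# Byte counts of the arrays from the posts above:
gray = np.zeros(1024 * 1024, np.uint8)     # 1 byte per texel
rgb = np.zeros((1024, 1024, 3), np.uint8)  # 3 bytes per texel
flt = np.zeros(1024 * 1024, np.float32)    # 4 bytes per texel

print(gray.nbytes)  # 1048576 -> only enough for a single channel (F_red)
print(rgb.nbytes)   # 3145728 -> what F_rgb + T_unsigned_byte expects
print(flt.nbytes)   # 4194304 -> one float channel (F_red + T_float)
```

So the flat 1 MB uint8 array can only ever fill a single-channel F_red texture, which would explain why that’s the one combination that “works”.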

Which version of Panda3D are you using?

I’m using 1.9.0 …

I don’t understand this. It works with F_red, at least to the extent
that I can see lots of red pixels. That’s great, but where’s the rest?

No matter what else I try, it simply keeps throwing errors at me.
These commands make no sense whatsoever.

I declare a 1024x1024 uint8 numpy array. It’s one megabyte in size. No matter what I do,
unless I declare F_red, nothing will work. And then, appropriately, my whole array
filled with 1s only fills the red pixels. And the error message doesn’t even say what
size it expects.

Trying 4096x4096 with rgb32 doesn’t work either. Nothing works, except the useless one.

Could you show the code you’re using? Then I could point out what’s going wrong.

There’s not much more to it. This is how it was initially.

At first it constantly complained about arrays of length-1 when I used numpy.zeros to set up a 2D array,
so I changed that to 1024*1024 instead of (1024, 1024). I think initially it was (64, 64, 3), but I don’t recall exactly anymore.
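(For reference, the flat and 2-D declarations describe the same bytes, so either shape can be filled, and without a per-element loop:)

```python
import numpy as np

# A (height, width, channels) array and its flattened view share one buffer:
arr3d = np.zeros((1024, 1024, 3), np.uint8)
arr3d[:] = 1              # fill every channel in one go, no Python loop
flat = arr3d.ravel()      # a view, not a copy; same 3 MB of bytes

print(flat.nbytes)        # 3145728 bytes either way
```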

		self.arr = numpy.zeros(1024*1024, numpy.float32)

		for x in xrange(0,1024*1024):
			self.arr[x] = 1.0;

		self.imageTexture = Texture("image")
		self.imageTexture.setup2dTexture(1024, 1024, Texture.TFloat , Texture.FRed) 

		p = PTAUchar.emptyArray(0)
		try:
			p.setData(self.arr)
		except AssertionError:
			pass
		self.imageTexture.setRamImage(CPTAUchar(p))

I got this snippet from the web. It outputs a red texture. Swapping FRed for FBlue or FGreen yields black.
Changing anything else mostly yielded errors, and even when it didn’t, the texture was just messed up.

So how does it work? How can I avoid using tostring? memoryview doesn’t work either;
it complains that there’s no buffer support. Having it copied, or being forced to
re-upload the whole texture every frame, is not an option.

I don’t know what to do with this.


Found it, it was this one: [Small issue with using numpy arrays as textures]

It doesn’t even work for me: at p.setData(self.arr) it raises a TypeError,
saying it expects a string or Unicode object, but instead it found a numpy.ndarray.

I’m not sure if it’s any help, but if you want to push a lot of data to a shader then maybe a Buffer Texture can help (opengl.org/wiki/Buffer_Texture).

I have some working code here: github.com/wezu/gpu_morph

Thanks wezu!

I’ve tried running it, but it doesn’t work. The view_morph tells me …

AttributeError: type object 'panda3d.core.ShaderAttrib' has no attribute 'F_hardware_skinning'

I’m running a relatively modern GTX 765M with the newest drivers.

This part seems to be specifically what you were referring to…

def addMorph(morph_name):
    buffer = Texture("texbuffer")
    buffer.setup_buffer_texture(NUM_VERT, Texture.T_float, Texture.F_rgb32, GeomEnums.UH_static)
    with open(morph_name) as f:
        morph = eval(f.read())  # assumption: the file holds a Python literal list of (x, y, z) triples
    image = memoryview(buffer.modify_ram_image())
    for i in range(len(morph)):
        off = i * 12
        image[off:off+12] = struct.pack('fff', morph[i][0], morph[i][1], morph[i][2])
    return buffer

I know I can look up most of it, but that won’t necessarily explain the sense behind it.
GeomEnums.UH_static? struct.pack?
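From poking at it in isolation, struct.pack at least is plain standard-library Python; ‘fff’ seems to mean “three C floats”, i.e. 12 bytes per vertex:

```python
import struct

# 'fff' = three 32-bit floats laid out back to back, C-style:
packed = struct.pack('fff', 1.0, 2.0, 3.0)
print(len(packed))                   # 12 bytes per vertex

# and struct.unpack reverses it:
print(struct.unpack('fff', packed))  # (1.0, 2.0, 3.0)
```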

As I can’t run it, I can’t test it … so I have to ask: is it necessary to deal with texture access that way?

Thanks for your help!

I’m wondering why we can’t enable/disable OpenGL flags directly, instead of having to wait for someone to implement it …
… just like the POINT_SIZE flag. Not complaining, just wondering. I believe having direct access to OpenGL would make things much easier to deal with …
… and would make people less dependent on those who are behind Panda.

Sorry, just frustrated. :slight_smile:

You need Panda 1.10 to get access to buffer textures and hardware skinning. Grab a development build from the devel download page.

You do have direct access to OpenGL. You can just create a draw callback and use PyOpenGL to set the right point size flag if you want to.

I don’t understand. There’s no mention of this anywhere … ?
I’d assume just importing PyOpenGL (never used it) and randomly hoping it works would be silly?

Why isn’t this mentioned anywhere? How can I access panda’s opengl context?

(yes, I have no idea how this would work)

Reading this now: www.panda3d.org/forums/viewtopic.php?t=5855


So I can do this once and it works? I’m looking at the example. Seems weird, but will give it a try.

I know I sound mostly clueless and I admit I am having a really hard time using panda3d, but it’s not 100% as it seems. :stuck_out_tongue:

Oh btw, silly question but this confuses me now …
Why didn’t you tell me there’s a way to enable GL_VERTEX_PROGRAM_POINT_SIZE_ARB without having to wait for you to do it ? :slight_smile:

Oh, I forgot that the hardware skinning was added in a dev build, not 1.9, and now that I think of it … maybe so was buffer texture support …
If you want to try a newer build yourself, look for something here:

All I know about “GeomEnums.UH_static” is that it’s a “usage hint”; I think rdb is needed to explain how it works with texture buffers.

‘struct’ is a Python module for packing data into C-style binary layouts (a C ‘struct’, hence the name, for all I know); there’s more here:
“fff” here means the next 3 arguments will be floats.

I’m not sure if it’s the only or best way to pack/send data, but it worked for me.
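Another option that seems to work the same way (assuming morph is a list of (x, y, z) float triples, as in the snippet): let numpy pack everything in one call instead of looping with struct.pack:

```python
import numpy as np

# Pack all vertex offsets at once; dtype float32 matches 'fff':
morph = [(0.0, 0.1, 0.2), (1.0, 1.1, 1.2)]   # stand-in data
blob = np.asarray(morph, dtype=np.float32).tobytes()

print(len(blob))   # 12 bytes (3 floats) per vertex -> 24 here
```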

Thanks, I’ll add 1.10 to the others and will test this then.

:display:gsg:glgsg(error): An error occurred while compiling GLSL shader morph_f
morph_f.glsl(27) : error C7616: global variable gl_LightSource is removed after version 140
morph_f.glsl(27) : error C7616: global variable position is removed after version 140
morph_f.glsl(31) : error C7616: global variable diffuse is removed after version 140
morph_f.glsl(33) : warning C7533: global variable gl_FragData is deprecated after version 120

With this error message I’m wondering how you got it to work …

Must be a driver issue. Most of the time ATI/AMD gives problems and Nvidia drivers eat all errors and just work … I found it’s exactly the opposite for me (except on Linux, where the AMD/ATI drivers are an abomination).

Just made a commit with a fix.

I’m using Nvidia. After two notebooks and a desktop ATI card, over the course of ten years, I have stopped accepting that my driver bombs without me doing anything.
And that hasn’t changed in over a decade. No change. Seriously. The only reason my Nvidia drivers ever bombed was because I had endless loops running in shaders.
The ATI drivers, on the other hand, bomb just because they can. Sheesh, I sound like I hold a grudge against them … cough

I’ll check it out later, first I’ll have to test the opengl callback stuff…


Just so you know, it still spits out error messages.

morph_f.glsl(7) : error C7616: global type gl_LightSourceParameters is removed after version 140
morph_f.glsl(29) : error C7616: global variable position is removed after version 140
morph_f.glsl(33) : error C7616: global variable diffuse is removed after version 140

Not sure if you’ve fixed it yet, but I wanted to tell you that the (for me) important part of your gpu_morph works,
and I’m already trying to abuse it somehow. Thanks, it helps me a ton! :slight_smile: