Procedurally Generating Animated 3D Models (and Textures)?

In the manual sections “How Panda3D Stores Vertices and Geometry” and “Procedurally Generating 3D Models”, the vertex columns mentioned are ‘vertex’, ‘normal’, ‘color’, and ‘texcoord’, and at the end of the section you end up with a NodePath. Is there no way to create animated Actors like this?
For that I’ll probably need additional “bone data” and a “weight” column for the vertices…

If you scroll down to the bottom of the GeomVertexFormat manual page you’ll see a reference to those columns. Adding them to a format seems easy enough, but I’m guessing there is something more complex involved in making them usable. I’m sure David can shed some light on it.

Yeah, but it’s a bit confusing. The vertices would need a weight column, but I’ll also need to specify which bone affects which vertex. I’m also not sure what the best way of creating a new bone is. I’m guessing I should make an Actor or some other related class instance and reparent the geom to it, and there would probably be a method for creating a new bone object, but that’s just part of the job and I’m not sure about this either.

Also, I know this isn’t very related, but I have another question about creating stuff from scratch, in this case a PNMImage for textures.
This code below takes around 8 seconds on my machine:

import direct.directbase.DirectStart
from pandac.PandaModules import *


# plane to apply the texture on
plane = loader.loadModel('texplane')
plane.reparentTo(aspect2d)

# the original image to copy pixel values from
myImage = PNMImage()
myImage.read(Filename("image.png")) #is 1024x1024

# second PNMImage will copy pixels from first
myEmptyImage = PNMImage(myImage.getXSize(), myImage.getYSize())

# copy pixels
for x in xrange(myImage.getXSize()):
    for y in xrange(myImage.getYSize()):
        red = myImage.getRed(x, y)
        green = myImage.getGreen(x, y)
        blue = myImage.getBlue(x, y)

        myEmptyImage.setRed(x, y, red)
        myEmptyImage.setGreen(x, y, green)
        myEmptyImage.setBlue(x, y, blue)

# create a texture object
myTexture = Texture()

# This texture now represents myImage
myTexture.load(myEmptyImage)

# apply texture on model
plane.setTexture(myTexture)


run()

In my real application, of course, I don’t just copy the pixel values from one PNMImage to another (that would be silly); I actually read them from another source. I have a few dozen of such operations, and together they take minutes to finish. Is PNMImage not meant for tasks like this? If so, could you suggest another image library for Python? Or maybe I shouldn’t be using Python for this at all?

Creating an Actor from scratch is indeed quite complicated, though it is possible. You’ll need a TransformBlendTable which is indexed from a vertex column called “transform_blend”. Each entry in the TransformBlendTable defines the weighted composition of a number of different joints, each of which is probably a JointVertexTransform object. Then you can animate your model by moving the JointVertexTransforms.
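In rough outline, the pieces connect like this (a sketch only, not a complete program; ch, bundle, skeleton, and vdata are assumed to be set up as in the full example that appears later in this thread):

joint = CharacterJoint(ch, bundle, skeleton, 'joint', Mat4.identMat())
jtrans = JointVertexTransform(joint)

tbtable = TransformBlendTable()
blend_index = tbtable.addBlend(TransformBlend(jtrans, 1.0))

# each vertex stores a blend index in its 'transform_blend' column
twriter = GeomVertexWriter(vdata, 'transform_blend')
twriter.addData1i(blend_index)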

But this is just the quick high level introduction; the full details are more complicated. Are you sure you want to be doing this? Try inspecting the C++ code in RigidBodyCombiner for inspiration; it basically does its work by creating an Actor on the fly.

As to the performance issues, yes, you’re experiencing a Python issue. PNMImage isn’t the fastest tool in the world for filling images, but Python is your bottleneck here–it’s just not a language meant for low-level per-pixel processing. You can use almost any Python image library, though; then take the raw data from the image library and feed it to the PNMImage all at once.
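Something along these lines (a sketch, assuming PIL; the BGR channel order and the vertical flip reflect Panda’s expected ram-image layout, so double-check them for your setup):

import Image  # PIL

im = Image.open('image.png').convert('RGB')
# ... do your whole-image operations in PIL here ...

# hand the finished pixels to Panda in one call instead of a per-pixel loop
tex = Texture()
tex.setup2dTexture(im.size[0], im.size[1], Texture.TUnsignedByte, Texture.FRgb)
tex.setRamImage(im.transpose(Image.FLIP_TOP_BOTTOM).tostring('raw', 'BGR'))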

David

Hey drwr, I found your last code snippet on this thread to be very helpful: [Procedural character skeleton hierarchy generation?]

I’m pretty sure.

I finally understood how it works.

So I don’t get it: should I even bother using another Python image library, like PIL (pythonware.com/products/pil/)?
Or is code like this always going to be slow in Python?

for x in xrange(1024):
    for y in xrange(1024):
        # do something
        pass

I’m very new to C++. If it is required, though, I would need to use the interrogate tool, right?

“Slow” is relative. But, generally, yeah. Code like that is slow in Python, which is why you have tools like Panda to handle the chores of sending vertices to the graphics card, instead of writing the OpenGL calls directly in Python. And you have tools like PIL to handle the chores of copying pixels from one place to another. PIL would be a fine choice, by the way.

David

So I’m confused. Python is slow for this, but using Python with PIL is OK? The PNMImage class is C++ too. Even with PIL, I would still need a for loop like this to save data from some source into pixels:

for x in xrange(1024):
    for y in xrange(1024):
        pilImage.putpixel((x, y), rgba_value)

(Pseudocode.)
So if Python is slow at this kind of operation, will it really matter which library I use?

You may be calling a function that is implemented in C++ but you’re basically bouncing back and forth between the Python layer and the C++ layer in a Python loop. To put it another way, consider these two made-up functions implemented in C++:

void frobnicate(Widget &widget);

void multifrobnicate(Widget *array_of_widgets);

If I want to frobnicate a lot of widgets, I should call multifrobnicate with an array and let it all happen at the C++ layer, rather than doing frobnicate(mywidgets[i]) one at a time in a Python loop. This of course limits you to operations that the C++ layer has already implemented.
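In PIL terms, the same principle looks something like this (an illustration; pixels is assumed to be a flat list of (r, g, b) tuples):

# one call; the loop over pixels runs in C
pilImage.putdata(pixels)

# ...instead of one Python-level call per pixel:
# for i, rgb in enumerate(pixels):
#     pilImage.putpixel((i % width, i / width), rgb)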

So I should just do it with the PNMImage class in C++ (panda3d.org/manual/index.php … tudio_2008) and generate Python wrappers with panda3d.org/manual/index.php/Interrogate?

EDIT: Actually, I could try to convert that function to compiled C++ code with Cython, as explained in the latest blog entry. What do you guys think? I already got my first Panda C++ program to run, but this sounds like less hassle and won’t require me to rewrite all this, not to mention I’m not really comfortable with C++.

The point is to avoid doing per-pixel computations in Python. If there’s some operation you’re trying to do that is already implemented in PIL, you can ask PIL to do it for the entire image, then ask PIL to give you the raw image data as a single string, and feed that data to the PNMImage as a single string. That’s only three operations in Python instead of three million, which makes a big difference.
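Concretely, that might look like this (a sketch; routing the encoded image through a StringStream is one way to do the string hand-off):

import Image, cStringIO

im = Image.open('image.png')
im = im.point(lambda v: 255 - v)  # one whole-image PIL operation

buf = cStringIO.StringIO()
im.save(buf, 'PNG')  # one call: image -> string

pnm = PNMImage()
pnm.read(StringStream(buf.getvalue()), 'image.png')  # one call: string -> PNMImage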

If the per-pixel computation you want to perform is not already implemented in PIL or in Panda or anywhere else, then you’ve got two choices:
(1) Implement it yourself in C++ somehow and call it from Python.
(2) Go ahead and implement it in Python and suck up the performance cost.

(1) has lots of different approaches, none of which are trivial.

David

Well, I’m not talking about the part that actually generates the pixel values (no problem there); I’m talking about the part where I assign those values to a PIL Image with a for loop. Actually, PIL seems pretty fast for this compared to PNMImage: it takes less than a second for a few images, compared to around 7 seconds for PNMImage in my case. But every millisecond matters, so if I could squeeze out some more without too much effort, I’d try it.
Passing all the values as a string sounds like a good idea; I’ll see if it’s possible in my case.
But since using PIL greatly affected the speed, I’m not sure calling Python’s for loop a million times is what’s slow here. And since Cython basically converts the code to C++, that should solve the problem, right?
So I compiled it with Cython to .cpp and then to a .pyd, and I can’t really notice any difference with the naked eye.

It’s not so much the Python loops themselves that are slow; it’s what you do inside the loop that makes a difference. In Python, the slowest single operation is a function call (and it’s very, very slow compared to other languages).

In your code sample:

for x in xrange(myImage.getXSize()):
    for y in xrange(myImage.getYSize()):
        red = myImage.getRed(x, y)
        green = myImage.getGreen(x, y)
        blue = myImage.getBlue(x, y)

        myEmptyImage.setRed(x, y, red)
        myEmptyImage.setGreen(x, y, green)
        myEmptyImage.setBlue(x, y, blue)

you are making six function calls for every pixel. That’s quite expensive. You could reduce that to two:

xsize = myImage.getXSize()
ysize = myImage.getYSize()
for x in range(xsize):
    for y in range(ysize):
        r, g, b = myImage.getXelVal(x, y)
        myEmptyImage.setXelVal(x, y, r, g, b)

But the call to getXelVal(), and the subsequent unpacking into a tuple, is still fairly expensive. Depending on the form in which you already have your r, g, b data, you may or may not need to pay this cost in your final solution. The call to setXelVal() (or just setXel() if you prefer scaled values) with five parameters is relatively efficient.
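(For the literal whole-image copy in your example, by the way, the loop can disappear entirely, since copyFrom does the work at the C++ layer:)

myEmptyImage.copyFrom(myImage)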

David

I forgot to update the code; I don’t assign the r, g, b, and a values separately anymore. I don’t know why, but PIL is still faster, well, ‘comparably’.

Well, this sure was the most difficult thing I’ve done so far.
But it wasn’t impossible. I have most of the stuff figured out and working; the only parts left are assigning the bones and weights to vertices, and setting the animation transforms for each frame. That part is a bit weird.
[Procedural character skeleton hierarchy generation?]
drwr posted a snippet there:

from direct.directbase.DirectStart import *
from pandac.PandaModules import *
from direct.actor.Actor import Actor

# Create a character.
ch = Character('simplechar')
bundle = ch.getBundle(0)
skeleton = PartGroup(bundle, '<skeleton>')

# Create the joint hierarchy.
root = CharacterJoint(ch, bundle, skeleton, 'root',
                      Mat4.identMat())
hjoint = CharacterJoint(ch, bundle, root, 'hjoint',
                        Mat4.translateMat(Vec3(10, 0, 0)))
vjoint = CharacterJoint(ch, bundle, hjoint, 'vjoint',
                        Mat4.translateMat(Vec3(0, 0, 10)))

# Create a TransformBlendTable, listing all the different combinations
# of joint assignments we will require for our vertices.
root_trans = JointVertexTransform(root)
hjoint_trans = JointVertexTransform(hjoint)
vjoint_trans = JointVertexTransform(vjoint)

tbtable = TransformBlendTable()
t0 = tbtable.addBlend(TransformBlend())
t1 = tbtable.addBlend(TransformBlend(root_trans, 1.0))
t2 = tbtable.addBlend(TransformBlend(hjoint_trans, 1.0))
t3 = tbtable.addBlend(TransformBlend(vjoint_trans, 1.0))
t4 = tbtable.addBlend(TransformBlend(hjoint_trans, 0.7, vjoint_trans, 0.3))

# Create a GeomVertexFormat to represent the vertices.  We can store
# the regular vertex data in the first array, but we also need a
# second array to hold the transform blend index, which associates
# each vertex with one row in the above tbtable, to give the joint
# assignments for that vertex.
array1 = GeomVertexArrayFormat()
array1.addColumn(InternalName.make('vertex'),
                 3, Geom.NTFloat32, Geom.CPoint)
array2 = GeomVertexArrayFormat()
array2.addColumn(InternalName.make('transform_blend'),
                 1, Geom.NTUint16, Geom.CIndex)
format = GeomVertexFormat()
format.addArray(array1)
format.addArray(array2)
aspec = GeomVertexAnimationSpec()
aspec.setPanda()
format.setAnimation(aspec)
format = GeomVertexFormat.registerFormat(format)

# Create a GeomVertexData and populate it with vertices.
vdata = GeomVertexData('vdata', format, Geom.UHStatic)
vdata.setTransformBlendTable(tbtable)
vwriter = GeomVertexWriter(vdata, 'vertex')
twriter = GeomVertexWriter(vdata, 'transform_blend')

vwriter.addData3f(0, 0, 0)
twriter.addData1i(t1)

vwriter.addData3f(10, 0, 0)
twriter.addData1i(t2)

vwriter.addData3f(10, 0, 10)
twriter.addData1i(t3)

vwriter.addData3f(8, 0, 2)
twriter.addData1i(t4)

# Be sure to tell the tbtable which of those vertices it will be
# animating (in this example, all of them).
tbtable.setRows(SparseArray.lowerOn(vdata.getNumRows()))

# Create a GeomTriangles to render the geometry
tris = GeomTriangles(Geom.UHStatic)
tris.addVertices(2, 3, 1)
tris.closePrimitive()
tris.addVertices(1, 3, 0)
tris.closePrimitive()

# Create a Geom and a GeomNode to store that in the scene graph.
geom = Geom(vdata)
geom.addPrimitive(tris)
gnode = GeomNode('gnode')
gnode.addGeom(geom)
ch.addChild(gnode)

# Now create the animation tables.  (We could also load just this part
# from an egg file, if we already have a compatible table ready.)
bundle = AnimBundle('simplechar', 5.0, 10)
skeleton = AnimGroup(bundle, '<skeleton>')
root = AnimChannelMatrixXfmTable(skeleton, 'root')

hjoint = AnimChannelMatrixXfmTable(root, 'hjoint')
table = [10, 11, 12, 13, 14, 15, 14, 13, 12, 11]
data = PTAFloat.emptyArray(len(table))
for i in range(len(table)):
    data.setElement(i, table[i])
hjoint.setTable(ord('x'), CPTAFloat(data))

vjoint = AnimChannelMatrixXfmTable(hjoint, 'vjoint')
table = [10, 9, 8, 7, 6, 5, 6, 7, 8, 9]
data = PTAFloat.emptyArray(len(table))
for i in range(len(table)):
    data.setElement(i, table[i])
vjoint.setTable(ord('z'), CPTAFloat(data))

wiggle = AnimBundleNode('wiggle', bundle)

# Finally, wrap the whole thing in a NodePath and pass it to the
# Actor.
np = NodePath(ch)
anim = NodePath(wiggle)
a = Actor(np, {'simplechar' : anim})
a.reparentTo(render)
a.setPos(0, 50, 0)
a.loop('simplechar')

Look at this part:

# Now create the animation tables.  (We could also load just this part
# from an egg file, if we already have a compatible table ready.)
bundle = AnimBundle('simplechar', 5.0, 10)
skeleton = AnimGroup(bundle, '<skeleton>')
root = AnimChannelMatrixXfmTable(skeleton, 'root')

hjoint = AnimChannelMatrixXfmTable(root, 'hjoint')
table = [10, 11, 12, 13, 14, 15, 14, 13, 12, 11]
data = PTAFloat.emptyArray(len(table))
for i in range(len(table)):
    data.setElement(i, table[i])
hjoint.setTable(ord('x'), CPTAFloat(data))

Now I would expect to animate the joints for each frame with a Mat4, not with a Python list converted to some kind of array object.
I would like to know what the values in the table are. I would guess scalex, scaley, scalez, roth, rotp, rotr, posx, posy, posz, but there are 10 members. What are they?
I think I completely misunderstand what those objects are.

At least I managed to make the procedural geometry, textures and bones, though.

I still don’t really understand how to assign bones and weights to vertices…
I’m just guessing these functions and classes are for that:

transformblendtable.addBlend(TransformBlend(trans1, weight1, trans2, weight2, ...)) 

Should I just create a transform (JointVertexTransform) for each bone? Is there some kind of limit to the number of bones (blends) per vertex?

bump…

The different channels that may be set on an AnimChannelMatrixXfmTable are the same as allowed in an egg file, and documented in eggSyntax.txt:

      i, j, k - scale in x, y, z directions, respectively
      a, b, c - shear in xy, xz, and yz planes, respectively
      r, p, h - rotate by roll, pitch, heading
      x, y, z - translate in x, y, z directions
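Each channel gets its own per-frame table via setTable(); the ten values in the snippet are ten frames of the ‘x’ channel, and a channel you never set keeps its default (identity) value. For example, to add a heading rotation (a sketch reusing names from the snippet above):

table = [0, 36, 72, 108, 144, 180, 216, 252, 288, 324]  # degrees, one per frame
data = PTAFloat.emptyArray(len(table))
for i in range(len(table)):
    data.setElement(i, table[i])
hjoint.setTable(ord('h'), CPTAFloat(data))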

You would use this structure (and a JointVertexTransform) only if you want to create animation tables for pre-canned animation sequences. If you’re animating your actors dynamically, you would create a UserVertexTransform instead. Both can be used interchangeably.
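With a UserVertexTransform the sketch is shorter (names here are assumed; you drive setMatrix yourself, e.g. from a task):

utrans = UserVertexTransform('my-transform')
blend = tbtable.addBlend(TransformBlend(utrans, 1.0))

# later, whenever the pose should change:
utrans.setMatrix(Mat4.translateMat(Vec3(0, 0, height)))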

Yes, you need a different transform object for each bone. There’s no hard limit to the number of bones per vertex, but there could be performance implications, of course. It just means more math per frame.
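If one blend needs more transforms than the TransformBlend constructor takes at once, they can also be added one at a time (a sketch using TransformBlend.addTransform):

blend = TransformBlend()
blend.addTransform(trans1, 0.5)
blend.addTransform(trans2, 0.3)
blend.addTransform(trans3, 0.2)
index = tbtable.addBlend(blend)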

David

OK.
The animations aren’t really dynamically generated the way the geometry is.

I’m having problems assigning weights to vertices; I’ll put together some example code and post it here.

OK, the whole code is a bit long and complex, so I’ll just show the parts where I’m not sure what is being done, or whether I’m doing it right.

The ‘Actors’ are composed of multiple parts.
Each part is turned into a Geom, then a GeomNode, then a NodePath (so I can use NodePath methods like setTexture on each part).

I create a Character object, get a handle to its ‘bundle’ (which I’m not sure what it’s for), create a PartGroup, and create a TransformBlendTable:

ch = Character('character')
bundle = ch.getBundle(0)
skeleton = PartGroup(bundle, '<skeleton>') 
tbtable = TransformBlendTable() 

I then make a NodePath from the Character object:

nodepath =  NodePath(ch)

then

for i in parts:
    i.reparentTo(nodepath)

where “parts” is a list of the NodePaths I generated previously.
Or can I only parent the nodes (GeomNodes), not NodePaths?

for i in parts:
    ch.addChild(i.node())

and finally

actor = Actor(nodepath)

to be able to play animations on it.

Joints (bones) are created like this:

joint = CharacterJoint(ch, bundle, skeleton, name, matrix)
jointtrans = JointVertexTransform(joint)
jointtranslist.append(jointtrans)

Now I’m pretty sure I assign the “weights” incorrectly, or the GeomNodes altogether, because moving the actor doesn’t move the joints and transforming a joint doesn’t affect the vertices.
So how do you assign weights?
I do this for every vertex:

table = tbtable.addBlend(TransformBlend(jointtranslist[0], weight1, jointtranslist[1], weight2, ...))

Oh and I have 2 arrays like in your sample code:

array = GeomVertexArrayFormat()  # vertex, normal, etc.
array2 = GeomVertexArrayFormat()
array2.addColumn(InternalName.make('transform_blend'), 1, Geom.NTUint16, Geom.CIndex) 

and

format = GeomVertexFormat()
format.addArray(array)
format.addArray(array2)

# This object describes how the vertex animation, if any, represented in a
# GeomVertexData is encoded. Vertex animation includes soft-skinned skeleton
# animation and morphs, and might be performed on the CPU by Panda, or passed
# down to the graphics back-end to be performed in hardware (depending on the
# hardware's advertised capabilities). Changing this setting doesn't by itself
# change the way the animation is performed; it just specifies how the
# vertices are set up to be animated.
aspec = GeomVertexAnimationSpec()
aspec.setPanda()  # specifies that vertex animation is to be performed by Panda
format.setAnimation(aspec)
	
# Finally, before you can use your new format, you must register it:
format = GeomVertexFormat.registerFormat(format)
	
# Once you have a GeomVertexFormat, registered and ready to use, you can use it to create a GeomVertexData.
vdata = GeomVertexData('vdata', format, Geom.UHStatic)

Please tell me if I’m doing something the wrong way; I’m pretty sure I am.
I think I posted the necessary code. I didn’t post how I assign vertices, normals, and such, or how I create the primitives I assign to the Geoms, because all of that works perfectly fine and I have no issues there.

The PartBundle is the root of the PartGroup hierarchy, which is to say, the root of the skeleton. That’s all it means.

A NodePath is a handle to its underlying node. Reparenting a NodePath means (almost) the same thing as reparenting its node. The difference between your two versions of reparentTo() vs. addChild() is what happens to the NodePath you created: in the former, it becomes the handle to the new path you created; in the latter, it is unchanged.

This doesn’t sound like a weighting issue to me. You’re not seeing any animation at all? It sounds like it’s not seeing the updates to your transforms.

Try using a UserVertexTransform for now, just to eliminate the complexity of the animation tables. Try setting the transform explicitly on the UserVertexTransform. It should move the character’s vertices. If it doesn’t, then something’s wrong with the TransformBlendTable or the GeomVertexData–somehow the vertices aren’t associated with your transform the way you think they should be.

Also, just use one transform per blend for now, with a weight of 1.0. That’s the simplest relationship, and it means the transform operates exactly on the vertices assigned to the blend.

Be sure you put the blend index in the appropriate vertex column; that’s how the vertices are matched to the blend.
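A minimal sanity check along those lines (a sketch; tbtable, twriter, and vdata are the objects from your snippets):

utrans = UserVertexTransform('debug')
b = tbtable.addBlend(TransformBlend(utrans, 1.0))

# write b into the 'transform_blend' column for every vertex:
# twriter.addData1i(b)

tbtable.setRows(SparseArray.lowerOn(vdata.getNumRows()))

# if the vertices are associated correctly, this should visibly move them all
utrans.setMatrix(Mat4.translateMat(Vec3(0, 0, 5)))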

David