Some transform errors...

I've finally managed to generate proper procedural actor files, but there are some minor problems with transformations: both texture transformations and 3D geometry transformations.

1. Texture transforms:

I’ll start with the texture transformations. The image below highlights the problem:


The right segment of the above image shows the texture on a cube as generated within the game under a certain transform. The left segment shows the same texture on the cube after it has been saved to an .egg file, under the same transform. As you can see, they look different. Here is the in-game code used to set the texture transform:

      nodeP.setTexOffset(TextureStage.getDefault(), self.u_offset, self.v_offset)
      nodeP.setTexRotate(TextureStage.getDefault(), self.tex_rotate)
      nodeP.setTexScale(TextureStage.getDefault(), self.u_scale, self.v_scale)

And when writing out the file to disk, here is the code used to set the texture transform data:

x_s=nodeP.getTexScale(TextureStage.getDefault()).x
y_s=nodeP.getTexScale(TextureStage.getDefault()).y
rot=nodeP.getTexRotate(TextureStage.getDefault())
x_off=nodeP.getTexOffset(TextureStage.getDefault()).x
y_off=nodeP.getTexOffset(TextureStage.getDefault()).y

The values set within the game and written to disk are: scale 2.0 and 2.0, rotation 4.0, translation 0.10000000149 and 0.10000000149. So if the texture values in-game and in the file are the same, why do they look different visually? How would I resolve this?

2. 3D Geometry Transforms:

Regarding this issue, I discovered that the translation part of the transform error could be resolved by multiplying the vector that stores the translation data by 10. The scale transform applied in-game and in the file matches, but I am not sure about the rotation transform; it looks slightly off:


The left segment of the above image shows the two models as generated within the game, whereas the right segment shows the file generated from the two models. To get the hpr, pos and scale of the models as I write them to disk, this is what I do:

transformz = EggTransform()

# rotation:
transformz.addRotx(nodeP.getP(render))
transformz.addRoty(nodeP.getR(render))
transformz.addRotz(nodeP.getH(render))

# scale:
transformz.addScale3d(VBase3D(nodeP.getSx(render), nodeP.getSy(render), nodeP.getSz(render)))

# translation:
transformz.addTranslate3d(Vec3D(nodeP.getX(render), nodeP.getY(render), nodeP.getZ(render)) * 10)

As you can see on the last line above, to fix the translation error, I multiply the Vec3D object by 10, and all is apparently well. Here is the fixed file:


So the last question: why does the transform data from the models in the game not produce the same result in the generated .egg file? For instance, regarding the translation issue, if the position [relative to the render NodePath] of one model has a value of "-1.668", why must I multiply it by 10 to get "-16.68" for the same visual result when the .egg file is loaded?

Thanks.

I'm not exactly sure without being able to compare the results for myself, but I suggest it probably has to do with the order in which you are adding the transforms. In the .egg file, the exact order in which you add the transform elements matters: if you add the translation first and the scale second, the translation will be scaled by that amount, whereas if you add them in the reverse order, the translation itself won't be scaled.

I think that to get the same effect as in Panda, you need to add the scale first, the rotation second, and the translation third. But I could be wrong; try playing around a bit with the order.
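For example, based on your geometry snippet, that would mean building the EggTransform roughly like this (just a sketch to show the ordering; I've left the factor of 10 out of it):

transformz = EggTransform()
# scale first...
transformz.addScale3d(VBase3D(nodeP.getSx(render), nodeP.getSy(render), nodeP.getSz(render)))
# ...then the rotation...
transformz.addRotx(nodeP.getP(render))
transformz.addRoty(nodeP.getR(render))
transformz.addRotz(nodeP.getH(render))
# ...and the translation last
transformz.addTranslate3d(Vec3D(nodeP.getX(render), nodeP.getY(render), nodeP.getZ(render)))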

This applies to both texture transformations and geometry transformations. However, do note that (without the use of an <Instance> tag) vertices in the .egg file are always specified in global coordinates.

And to set those global coordinates, you can use the net transform of the NodePath:

mat = nodeP.getNetTransform().getMat()
r = range(4)
net_transform = Mat4D(*[mat.getCell(i, j) for i in r for j in r])
egg_group.transform(net_transform)

As to the entry, in the eggSyntax.txt file it states:

which seems to be correct, while in the API for EggTransform it says:

which is probably a mistake then.
Instead of worrying about the order in which the component transforms need to be applied, you could just set the overall transform on the EggGroup:

mat = nodeP.getTransform().getMat()
r = range(4)
local_transform = Mat4D(*[mat.getCell(i, j) for i in r for j in r])
egg_group.setTransform3d(local_transform)

The following code sample illustrates the use of the above to export a hierarchy to an egg file:

from panda3d.core import *
from panda3d.egg import *
from direct.showbase.ShowBase import ShowBase



class Cube(object):

  def __init__(self, parent_np, name):
  
    polys = []
    normals = []
    positions = []
    uvs = []

    vertex_format = GeomVertexFormat.getV3n3cpt2()
    vertex_data = GeomVertexData("cube_data", vertex_format, Geom.UHStatic)
    tris_prim = GeomTriangles(Geom.UHStatic)

    pos_writer = GeomVertexWriter(vertex_data, "vertex")
    normal_writer = GeomVertexWriter(vertex_data, "normal")
    uv_writer = GeomVertexWriter(vertex_data, "texcoord")

    vertex_count = 0

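    # the cube's 6 faces: one per axis (i), facing in the negative or positive direction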
    for direction in (-1, 1):

      for i in range(3):
      
        normal = VBase3()
        normal[i] = direction

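        # the four corners of this face, as offsets along the two remaining axes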
        for a, b in ( (-1., -1.), (-1., 1.), (1., 1.), (1., -1.) ):

          pos = VBase3()
          pos[i] = direction
          pos[(i - direction) % 3] = a
          pos[(i - direction * 2) % 3] = b
          uv = VBase2(1. - max(0., a), max(0., b))
          
          positions.append(pos)
          uvs.append(uv)

          pos_writer.addData3f(pos)
          normal_writer.addData3f(normal)
          uv_writer.addData2f(uv)

        vertex_count += 4
        
        indices = [vertex_count - i for i in range(4, 0, -1)]

        tris_prim.addVertices(indices[0], indices[1], indices[2])
        tris_prim.addVertices(indices[2], indices[3], indices[0])
        
        poly = [(positions[i], uvs[i]) for i in indices]
        polys.append(poly)
        normals.append(normal)

    cube_geom = Geom(vertex_data)
    cube_geom.addPrimitive(tris_prim)
    cube_node = GeomNode(name)
    cube_node.addGeom(cube_geom)

    self.origin = parent_np.attachNewNode(cube_node)
    self.polys = polys
    self.normals = normals



class MyApp(ShowBase):

  def __init__(self):

    ShowBase.__init__(self)

    # set up a light source
    p_light = PointLight("point_light")
    p_light.setColor(VBase4(1., 1., 1., 1.))
    self.light = self.camera.attachNewNode(p_light)
    self.light.setPos(5., -10., 7.)
    self.render.setLight(self.light)

    do_export = True # set to False after the egg file has been generated
    egg_filename = "cubes.egg"
    
    if do_export:

      # create cube1
      cube1 = Cube(self.render, "cube1")
      origin = cube1.origin
      origin.setColor(VBase4(1., 0., 0., 1.))
      origin.setScale(1.5)

      # create cube2 as a child of cube1
      cube2 = Cube(origin, "cube2")
      origin = cube2.origin
      origin.setColor(VBase4(0., 1., 0., 1.))
      origin.setPos(10., 15., 12.)
      origin.setScale(2., 1., 1.5)
      origin.setHpr(30., 50., 10.)
      tex = self.loader.loadTexture("my_texture.png")
      tex_stage = TextureStage.getDefault()
      origin.setTexture(tex_stage, tex)
      origin.setTexRotate(tex_stage, 65.)
      origin.setTexScale(tex_stage, 1.5, .8)
      
      self.cubes = [cube1, cube2]
      
      print "Original hierarchy:\n", cube1.origin.ls()

      self.export(egg_filename)
  
    else:
    
      cubes = self.loader.loadModel(egg_filename)
      cubes.reparentTo(self.render)

      print "Loaded hierarchy:\n", [cubes.ls()]

    self.run()


  def export(self, filename):
  
    egg_data = EggData()
    egg_data.addChild(EggCoordinateSystem(CSZupRight))
    parent_grp = egg_data
    
    for i, cube in enumerate(self.cubes):

      origin = cube.origin
      name = "cube%d" % (i + 1)
      group = EggGroup(name)
      parent_grp.addChild(group)
      vertex_pool = EggVertexPool(name)
      group.addChild(vertex_pool)
      color = cube.origin.getColor()
      
      tex_stage = TextureStage.getDefault()
      
      if origin.hasTexture(tex_stage):
        tex = origin.getTexture(tex_stage)
        tex_filename = tex.getFilename()
        egg_tex = EggTexture("tex", tex_filename)
        egg_data.addChild(egg_tex)
        mat = origin.getTexTransform(tex_stage).getMat()
        r = range(3)
        tex_transform = Mat3D(*[mat.getCell(i, j) for i in r for j in r])
        egg_tex.setTransform2d(tex_transform)
      else:
        egg_tex = None
      
      for poly, normal in zip(cube.polys, cube.normals):

        egg_poly = EggPolygon()
        egg_poly.setColor(color)
        
        if egg_tex:
          egg_poly.addTexture(egg_tex)
        
        for pos, uv in poly:
          egg_vert = EggVertex()
          egg_vert.setPos(Point3D(*pos))
          egg_vert.setUv(Point2D(*uv))
          egg_vert.setNormal(Vec3D(*normal))
          vertex_pool.addVertex(egg_vert)
          egg_poly.addVertex(egg_vert)

        group.addChild(egg_poly)

      r = range(4)
      # the net transform is needed to give the vertices global coordinates
      # (i.e. coordinates relative to render/the world)
      mat = origin.getNetTransform().getMat()
      net_transform = Mat4D(*[mat.getCell(i, j) for i in r for j in r])
      group.transform(net_transform)
      # the local transform (i.e. relative to the parent NodePath) needs to be
      # set for the EggGroup, since all of these will be post multiplied in the
      # order they are encountered to produce a net transformation matrix, with
      # which the geometry assigned to this group will be inverse transformed to
      # move its vertices to the local space
      mat = origin.getTransform().getMat()
      local_transform = Mat4D(*[mat.getCell(i, j) for i in r for j in r])
      group.setTransform3d(local_transform)
      
      parent_grp = group
    
    egg_data.writeEgg(Filename.fromOsSpecific(filename))



MyApp()

EDIT:
updated above code to include texture transforms.

Yes, that seems to be correct.

NOTE:
After loading the egg file, the vertices are always at the global coordinates, no matter what transforms were set for the EggGroup. That doesn’t surprise me (e.g. if no transforms were set, it’s like baking the net transform into the vertices), but what I find funny is that the normals do seem to be affected.

Thanks for the help. One thing though: instead of using the egg library, I just redid my old code to produce what YABEE produces, and it worked, save of course for those transform errors. All the 3D geometry errors are fixed, scaling and rotation are working, and for the translation, as I said, I just multiplied the vector by 10. However, for the texture transform, only the scaling and translation work properly; the rotation still does not produce the result produced in the game. I'll shift to the egg library for this part of the process and see what happens, otherwise I'll just kill the ability to rotate textures for now.

On the same note, I also managed to produce procedural animations. Rotation and scaling work properly; however, translation yet again brings up a problem. I have attached a model, as well as three animation files: one for scaling, one for rotation, and one for translation. The model consists of two sub-models, and the error produced for the translation animation is that, while the parent model moves properly, the child model only partially mimics its movement. Play them using pview to see what I mean. You'll see that scaling and rotation work fine in this case, but translation produces that error. I hope you can offer some insight on how this might be resolved. (For both the model and the animation files, I mimicked YABEE's output, so if anyone knows how YABEE works, that'd be great.)

Model file:
153_p3d_9.0.0.egg (120 KB)

Link to scale animation:

s000.tinyupload.com/index.php?fi … 8434256183

Link to rotate animation:

s000.tinyupload.com/index.php?fi … 9339731974

Link to translate animation:

s000.tinyupload.com/index.php?fi … 9524226814

Actually, on trying to upload the animation files, I get this error:

So if anyone wants to help me and doesn't want to download the files from the provided links, please PM me and I can send you the animation files to view via pview by some other means.

Thanks.

@ Epihaius

After testing out your code snippet and applying it to my situation, it really fixes everything properly; thank you so much for this help. I see that there are two transforms involved, not one, so I guess that means starting from scratch again… :frowning: Now I just have to tackle, with help, this last animation error, and I suppose it has to do with more or less the same thing. By the way, I see that the snippet involves the use of groups; how would one set it up to use joints instead, to create an actor file?

Thanks.

Yeah, on the one hand you need the net transform to add global vertex coordinates, while on the other hand you need the local transform for the NodePath itself. Unless you don’t care about restoring the original child-to-parent transforms when loading the egg file? In that case you don’t need to set any local transform for the EggGroup; the vertices will still have the correct (global) coordinates.

Well, I only started to look into egg file animation today, so there’s not much yet that I can tell you, but an EggGroup can be used to define an Actor, while it can also define a Joint:

actor = EggGroup("actor")
actor.setDartType(EggGroup.DTDefault)
joint = EggGroup("joint")
joint.setGroupType(EggGroup.GTJoint)
actor.addChild(joint)

I see that there is EggGroup.refVertex to create the <VertexRef> entries, but I haven't gotten that far yet :stuck_out_tongue: .
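Presumably something along these lines would do it, though (untested; the membership weight of 1.0 is an assumption):

# assign a vertex from the pool to the joint with full weight;
# this should produce a <VertexRef> entry under the joint's group
joint.refVertex(egg_vert, 1.0)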

Nope, it was something entirely different in fact. Take a look at the matrices in 153_p3d_9.0.0.egg that define both joint transforms; the bottom-right entry equals 10. That is very unusual IMHO. Normally this should always be 1.
For example, the matrix for the transform of Joint_9.0.0:

 <Matrix4> { 
 1 0 0 0 
 0 1 0 0 
 0 0 1 0 
 0 0 0 10 
 }

should really look like this:

 <Matrix4> { 
 1 0 0 0 
 0 1 0 0 
 0 0 1 0 
 0 0 0 1 
 }

The same goes for the Joint_9.0.1 transform matrix.
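(For reference, that bottom-right entry is the homogeneous w component; leaving it at 10 amounts to dividing the whole transform by 10 once points are normalized. A quick way to see that:)

from panda3d.core import Mat4D, VBase4D

mat = Mat4D(1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 10)
# a point at x=5 ends up with w=10, i.e. at x=0.5 after the homogeneous divide
print(mat.xform(VBase4D(5., 0., 0., 1.)))  # -> (5, 0, 0, 10)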

Then you need to scale down the animated coordinates in 153_p3d_36-anim-translate.egg by a factor of 10 to compensate, so change this:

      <Table> Joint_9.0.0 { 
        <Xfm$Anim> xform { 
          <Scalar> order { sprht } 
          <Scalar> fps { 24 } 
          <Scalar> contents { ijkprhxyz } 
          <V> { 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 123.000288 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 246.000576 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 369.000864 0.0 
            
            ...
      <Table> Joint_9.0.1 { 
        <Xfm$Anim> xform { 
          <Scalar> order { sprht } 
          <Scalar> fps { 24 } 
          <Scalar> contents { ijkprhxyz } 
          <V> { 
            1.0 1.0 1.0 0.0 0.0 0.0 -469.838715 80.120349 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 -469.838715 80.120349 0.0 
            
            ...

to this:

      <Table> Joint_9.0.0 { 
        <Xfm$Anim> xform { 
          <Scalar> order { sprht } 
          <Scalar> fps { 24 } 
          <Scalar> contents { ijkprhxyz } 
          <V> { 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 12.3000288 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 24.6000576 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 0.0 36.9000864 0.0 
            
            ...
      <Table> Joint_9.0.1 { 
        <Xfm$Anim> xform { 
          <Scalar> order { sprht } 
          <Scalar> fps { 24 } 
          <Scalar> contents { ijkprhxyz } 
          <V> { 
            1.0 1.0 1.0 0.0 0.0 0.0 -46.9839 8.01204 0.0 
            1.0 1.0 1.0 0.0 0.0 0.0 -46.9839 8.01204 0.0 
            
            ...

It should work correctly now :slight_smile: .

Note that if you really want to scale by 10 (provided you do not apply a rotation), you could write the first matrix like this:

 <Matrix4> { 
 10 0 0 0 
 0 10 0 0 
 0 0 10 0 
 0 0 0 1 
 }

Once you apply rotations, those factors get multiplied by sine and cosine values of the angles, so it’s usually not a good idea to write a matrix by hand like that.
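For illustration, here is a quick way to see what such a combined matrix looks like (the values are arbitrary):

from panda3d.core import Mat4D, Vec3D

# a uniform scale of 10 composed with a 30-degree rotation about the Z axis
mat = Mat4D.scaleMat(10., 10., 10.) * Mat4D.rotateMat(30., Vec3D(0., 0., 1.))
print(mat)  # the 10s end up multiplied into the sin/cos terms instead of sitting cleanly on the diagonal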

Thanks, that did solve the translation error. I was intentionally multiplying the last row of the matrix by 10 before I wrote it, so that's why it was scaled up in the file. Removing that from both the part that saves the egg file and the part that saves the animation file solved it.

However, there is another error relating again to translation. If you look at the model in the attached zipped file without playing the animation, you will see the model as it is supposed to be. If you play the animation, which just changes the heading and nothing else over time, you will see that the two cubes become interlocked as the animation plays. This behavior doesn't come up with other files I've generated, just with this one. Any pointers as to why this is happening?
rot_files.zip (97.9 KB)

Just a small update: when I change the translation entries in the animation file, by multiplying each entry by 10 for instance, the animation looks a bit more like the actor. The only issue is the translation entry in the animation file, as both scaling and rotation work fine. But I don't get why, even when the last row of the transform matrix for the joint in the actor file looks like this:

 -1.20709 -0.350966 -0.859492 1

and in the animation file it looks like this:

 -1.20709 -0.350966 -0.859492

this error still occurs. However, when I multiply both entries by ten, the issue is fixed, with the exception of causing another error when a translation animation occurs. So your answer of removing the multiplication by 10, while it does solve one issue, appears to raise another… [If interested, try multiplying the translation entries in both files by 10 and then play the animation to see what I mean.] I guess if this persists, since using actor files over one's own animation system has great optimization benefits by automatically flattening nodes, I might just have to remove the ability to create translation animations.

Sorry for replying a bit late, but all this is still new to me as well so I needed some time to figure it out - which I did :slight_smile: .

It’s actually not a translation problem at all, but rather a rotation problem. But the angles are all correct, you say? Yes, they are… but they are applied in the wrong order.
Here is the offending entry in the animation file:

<Scalar> order { sprht }

If you use the egg library, you can get the correct order like this:

EggXfmSAnim.getStandardOrder()

which yields this:

order { srpht }

(i.e. roll-pitch-heading instead of pitch-roll-heading)
and that fixes everything beautifully - so you can stop multiplying stuff now :stuck_out_tongue: .

Thanks once again! I can’t believe what kept me uneasy for two days was putting one letter in the wrong position :blush: . Does that mean that this:

 ijkprhxyz 

Also changes to this:

 ijkrphxyz 

?

By the way, I wasn't using the egg library for that; I just tried to mimic YABEE's output, and the arrangement in a file exported by YABEE had that letter arrangement: "sprht". Everything does work beautifully now; there should be a way to tip people who answer you online in forums. I have one more question though, not that there appear to be any other problems: instead of setting a transform matrix on a group or vertex pool, what do you make of setting it on a vertex? From what I've done, it also works okay. Doing this:

v = EggVertex()
# positions; uv; normals:
...
# now for the transform:
transformz = EggTransform()
transformz.addRoty(40)
transformz.addScale3d(VBase3D(2, 2, 2))
transformz.addTranslate3d(Vec3D(0, 0, 1))

mat = transformz.getTransform3d()
v.transform(mat)

This does give me the scale, rotation and translation arrangement I want on the saved geometry [even though I do not set any net transforms]. With those major problems out of the way, viz. creating actor files and animations, I can now hopefully wrap this thing up! Thanks; rest assured, all your advice won't go to waste! :smiley:

Sorry to bother you, but there appears to be yet another error. I used GIFs to make it easier to show; the errors relate to changing the r, p and h over time. Here are GIFs illustrating the differences between what happens in the game and what happens when the animation file is played. First, changing the p:

This GIF shows what changing the pitch in the game looks like; this is what it should also look like in the animation file:

But this is what it looks like when the animation file is played instead:

And here is what changing the roll over time looks like:

In the game:

When the saved animation file is played:

To save data to the animation file for each nodepath, this is what I do:

r=np.getR()
self.file.write(...str(r)...)

Same goes for all the other transform attributes. I really am just adjusting the value of the “roll” for each frame, and then getting it as shown above, before writing it out to the animation file. Is that the correct way to get and store these values? [And here I thought I was out of the woods… :cry: ]

Attached is both the actor file and the rolling animation if you wish to view them.
rot_2.zip (28.8 KB)

That’s OK, just send me a box of mint chocolates. Just kidding :smiley: . Seriously though, this motivated me to look into procedural animation myself, which I dreaded at first due to its complexity, but which could benefit my own project, so in fact I thank you :wink: .

Lol I felt the same when, at long last, I almost accidentally noticed that important difference between your code and mine. It’s all in the details :slight_smile: .
The order of the contents ("ijkprhxyz") should simply match the order of the corresponding values in the <Xfm$Anim> table; for the rest, it doesn't really matter what that order is.
Since the API mentions that this is an older syntax of egg anim table, I'm using <Xfm$Anim_S$> entries instead, which do not contain a "contents" entry.
Speaking of older syntax, perhaps "sprht" was the standard transform order in older Panda versions? Anyway, even if you don't use the egg library for anything else, I guess it's always safest to call EggXfmSAnim.getStandardOrder(), just to be sure.
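For what it's worth, here is a rough sketch of how such an animation table might be built with the egg library (untested; the table names and the animated values are made up):

from panda3d.core import Filename, Mat4D
from panda3d.egg import EggData, EggTable, EggXfmSAnim

egg_data = EggData()
bundle = EggTable("my_actor")              # top-level table; becomes the animation bundle
bundle.setTableType(EggTable.TTBundle)
skeleton = EggTable("<skeleton>")
joint_table = EggTable("Joint_9.0.0")      # one table per joint, named after the joint
anim = EggXfmSAnim("xform")
anim.setOrder(EggXfmSAnim.getStandardOrder())   # "srpht"
anim.setFps(24)

# one matrix per frame; EggXfmSAnim decomposes it into <Xfm$Anim_S$> sub-tables
for frame in range(4):
    anim.addData(Mat4D.translateMat(0., frame * 12.3, 0.))

joint_table.addChild(anim)
skeleton.addChild(joint_table)
bundle.addChild(skeleton)
egg_data.addChild(bundle)
egg_data.writeEgg(Filename("my_anim.egg"))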

Yes, as long as you make sure it’s the net transform and not the local transform you’re setting on the vertices, then you don’t need to set it on the group or vertex pool. But do be mindful of the order in which you apply the component transforms; it should be:

        transformz.addScale3d(VBase3D(2,2,2))
        transformz.addRoty(40)
        transformz.addRotx(10)
        transformz.addRotz(30)
        transformz.addTranslate3d(Vec3D(0,0,1))

As you can see, I added all three rotation angles because, as it turns out (not surprisingly), their order matters as well.

Good luck with your project :slight_smile: !

EDIT:

The angle values appear to be correct; it’s the joint’s origin that is in the wrong place, so this looks to be a local transform issue.
First off, have a look at the translation part of the transform set on Joint_11.0.0 in 161_p3d_24.0.0.egg (it’s the bottom line of the matrix) and the translation in 161_p3d_44-anim.egg for that same joint: they’re different, while they shouldn’t be.
The local transform of a joint (which is the default pose of that joint) should always be identical to the transform values set in the corresponding animation table (except for the values you’re actually animating, of course).
Secondly, when I multiply the translation values in 161_p3d_44-anim.egg by 10 and copy them to 161_p3d_24.0.0.egg, it looks quite close to what you want; it seems the value of -45.3542 for the X-coordinate places the joint’s origin exactly on the left side of its geometry, which is what appears to be the case in the gif you posted as well. Not sure about the other coordinates, though.
So on the one hand, it could be that you're still multiplying by 10 somewhere, while on the other hand there appears to be a mistake in the way the local transform of a joint is calculated (which would at least explain why it ends up being different in the two files).
Perhaps it’s again related to the fact that the code you’re basing yours on relies on the transform order being sprht instead of srpht? That’s all I can think of right now.

Thanks for all the luck, and I also wish you ten times more luck on your project! :smiley:

That’s unexpected, welcome!! :slight_smile:

Thank you once more for pointing out that the order in which I set that data in the transform matters. I was doing it like this: xyz (the rotations), then scale, and then translate, so shifting it per your suggestion actually made the data for the actor I was making match the game data.

Yes, I just noticed that; it is strange, but even after I make the values in the animation file equivalent to the values in the actor file, the error still persists. Similarly, I am not multiplying anything by 10; I triple-checked. [So the new values in the animation file, after making them equivalent to what is in the actor file, look like this]:

            2.0 2.0 2.0 118.0 0.0 0.0 -4.53542 -0.598684 -6 
            2.0 2.0 2.0 109.525 0.0 0.0 -4.53542 -0.598684 -6 
            2.0 2.0 2.0 101.05 0.0 0.0 -4.53542 -0.598684 -6 
            2.0 2.0 2.0 92.575 0.0 0.0 -4.53542 -0.598684 -6 
            2.0 2.0 2.0 84.1 0.0 0.0 -4.53542 -0.598684 -6 
            2.0 2.0 2.0 75.625 0.0 0.0 -4.53542 -0.598684 -6 
...

Even with that change the error still persists.

This is how I store the transform matrix for the actor file:

hom_mat=sentNp.getMat(render)
#then write this matrix data for its respective joint into the file row by row.
...

And this is how I store the animation data:

Some additional information: to play an animation in the game [I don't use intervals or sequences…], I have a series of frames, say two frames. Every NodePath in the scene has transform values set at those frames; for instance, if there are 2 NodePaths, a and b, then a has all its transform data at frame 1 as well as at frame 2, and so does b. I then associate a slider with each animation, which moves from one frame to another at a certain rate. Each time the slider position changes, I calculate the transform data for that new slider position for each NodePath, and then set the data to whatever result the calculation yields. Here is how I set the transform data:

#scale:
np.setScale(calc_S)

#h, p and r:
np.setH(calc_h)
np.setP(calc_p)
np.setR(calc_r)

#translation:
np.setX(calc_x)
np.setY(calc_y)
np.setZ(calc_z)

That is the order in which I set the NodePath's transform data. It is also the order in which I write data out to the animation file: I set the NodePath data for that frame, in that order (first scale, then h, p and r, and finally x, y and z), before I write it out to the file.

So if it has to do with the joint's origin, that is how I write its transform data to the actor file, and also how I write it to the animation file. I don't know if telling you that helps…

(By the way, since storing the actor file works properly, and the performance benefit of flattening nodes is therefore already gained, what about just using that and then procedurally manipulating the joints of the loaded actor in the game? Of course there's the issue of resetting the NodePaths back to their original transforms after playing the animation, and also of dealing with a scenario where both the user and the game are moving the player: say the user moves the player forward or turns the player left or right, the code to do that would be something like player.setX(…), player.setH(…), and as you have seen, the animation also works by accessing the xyz etc. data of the NodePath and changing it, so I suppose if both access it at the same time, the geometry on screen would start shaking… anyway, just thinking out loud.)

Thanks.

Ah, but that’s an error: you are getting the transform relative to render, which is the net transform (yes, it’s equivalent to calling NodePath.getNetTransform()), while you need the local transform (relative to the parent NodePath, which NodePath.getMat() gives you if you don’t pass in a reference NodePath). And now I see it: when you subtract the Joint_12.0.0 translation from the Joint_11.0.0 translation, you end up with:

-4.0 -1.0 -12.0

which was therefore written correctly to the animation file after all. Well, that’s one mystery solved :smiley: .
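(Just to make the difference explicit, using your variable name:)

local_mat = sentNp.getMat()      # relative to the parent NodePath; this is the local transform
net_mat = sentNp.getMat(render)  # relative to render; equivalent to sentNp.getNetTransform().getMat()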

What I’d like to see is the local position of the rotated NodePath (the one exported as Joint_11.0.0) in-game, before you start rotating it, using np.getPos(). Is it really x=-4.0, y=-1.0, z=-12.0?
What I also don't know is exactly how you calculate that position before you write it to file; maybe something goes wrong there?

Hmmm, that depends on how the user will be able to interact with the player character I guess. If the user can only manipulate the character NodePath, then playing its animation shouldn’t be a problem. But I don’t know if that would still work when controlling a joint procedurally…
At any rate, yeah, if the animation moves joints in a way opposite of what the user is doing, it will probably look weird.

Okay, so after removing render and just doing hom_mat=sentNp.getMat(), the translation values of the transform for Joint_11.0.0 are indeed -4.0 -1.0 -12.0, and they match what is written to the animation file. Even so, when I save and play the animation, the error still persists and the rolling rotation looks the same.

Yes, the output really is x=-4.0, y=-1.0, z=-12.0. There is one thing I forgot to tell you about the whole multiplying-by-ten thing; this is how I get the transform data that I then set on each vertex that I write out:

transformz = EggTransform()
# scale:
transformz.addScale3d(VBase3D(self.vertex_pool_data[ixi].getSx(render), self.vertex_pool_data[ixi].getSy(render), self.vertex_pool_data[ixi].getSz(render)))
# rotation:
transformz.addRoty(self.vertex_pool_data[ixi].getR(render))
transformz.addRotx(self.vertex_pool_data[ixi].getP(render))
transformz.addRotz(self.vertex_pool_data[ixi].getH(render))
# translation:
transformz.addTranslate3d(Vec3D(self.vertex_pool_data[ixi].getX(render), self.vertex_pool_data[ixi].getY(render), self.vertex_pool_data[ixi].getZ(render)) * 10)
mat = transformz.getTransform3d()

# setting on a vertex:
v = EggVertex()
v.setPos(Point3D(x, y, z))
v.transform(mat)

"self.vertex_pool_data[ixi]" is just a NodePath. I'm getting the transform data relative to the world (render), as you told me to. But for the translation data, as you can see, I am multiplying it by ten; if I don't, the actor gets exported in a jumbled-up way. However, I did remove the multiplication by ten and exported the actor file, and then exported the animation. The roll rotation looks the way I want it to, even though the actor is jumbled up:


That is exactly the motion that is made within the game. The problem is that things are jumbled up because in this case, I didn’t multiply the translation by 10. In this case, I get the local transform for the joint, i.e., hom_mat=sentNp.getMat() and also, the output for np.getPos() is x=-4.0, y=-1.0, z=-12.0.

To calculate the position before I write out to file I just do this:

maxd=sort_buf[i+1].frame_numm
crnt=sort_buf[i].frame_numm
chng=maxd-crnt
chng_2=crnt-sort_buf[i].frame_numm
posi_x=sort_buf[i].posi_x+((chng_2*(sort_buf[i+1].posi_x-sort_buf[i].posi_x))/chng)
sort_buf[i].nodp.setX(posi_x)
#same for all other transform data. 
#sort_buf  is a list of "animation objects", each object has a frame number, "frame_numm", and a value for that frame number
#the values are just xyz,hpr,scale, e.g. animation_object.posi_x is just the x value for that particular frame number. "crnt" is the slider I mentioned, that is moved from one frame number to another for each
#animation object. So as we move the slider, we calculate what the value of "x" should be for its position, then we set that value on the nodepath associated with that animation object. 

Attached is the “jumbled up” actor file and the roll animation.

Well, I suppose that to move a joint, it would be parented to a dummy NodePath, and the transform data of that dummy NodePath would then be manipulated over time, using the technique I describe above. There would be a series of dummy NodePaths corresponding to all the joints of the actor, and this series would have to mimic the joint hierarchy too, so we would have a series of parent-child NodePaths whose hierarchy is similar to that of the joints in the actor file. Then we would just manipulate, as I said, the scale, hpr and xyz values of these NodePaths as per the animation; I think that would work. The user would interact with the player character by just accessing the actor's NodePath, I suppose, and not any of the actor's joints. So in player.setX() etc., "player" would be the NodePath for the actor; but to control joints, we would be controlling the other dummy NodePaths that the joints themselves are parented to. Would doing that be roughly what Panda does when playing an animation?

Anyways, I think the error in this case might have something to do with multiplying the translation entry by 10, but if I don’t, the actor looks jumbled up…
rot_3.zip (28.9 KB)

Alright, but is it the same one that you use to get the local transform from, as in:

If not, then you should check that they have identical (local) transforms (specifically the same translation). In case they’re different, then the net transform will also be wrong, since this is just a multiplication of all the local transforms, starting at the root NodePath.

If that’s not it, perhaps it would be easiest if you could provide an egg file of the original hierarchy, converted from a bam file. So, when everything is ready to be animated, call writeBamFile(“hierarchy.bam”) on the root NodePath of your hierarchy and then type “bam2egg hierarchy.bam hierarchy.egg” in the command console to convert it to an egg file. Then I can check if/how it differs from the exported actor file in terms of transforms.

Well it looks like Panda already provides such a way to manipulate joints, by calling controlJoint on the Actor.
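Something along these lines should do it (a quick sketch; the filenames are placeholders, and "Joint_11.0.0" is just taken from your files):

from direct.actor.Actor import Actor

actor = Actor("my_actor.egg", {"roll": "my_anim.egg"})  # placeholder filenames
actor.reparentTo(render)
# controlJoint returns a NodePath; moving it drives the joint directly every frame
joint_np = actor.controlJoint(None, "modelRoot", "Joint_11.0.0")
joint_np.setR(45.)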

Yes, it is the same nodepath.

Okay I created the file and attached it for you to view.

But will moving a joint via controlJoint just result in this same problem? I mean, for instance, after loading the actor file and changing the roll of "Joint_11.0.0", will the motion still be flawed or will it work as desired? I guess I'll check it out…
hierarchy_file.zip (31.1 KB)

Thank you. Ah yes, now it’s immediately obvious where your need to scale the translations by 10 comes from! You are using an extra NodePath parented to the one that represents a joint (e.g. “11.0.0”), and it’s the one that actually contains the geometry. That would be “prototype11” for joint “11.0.0”, for example. And guess what: all of these extra NodePaths have an additional transformation… a scale of 0.1 :unamused: . And that, right there, has been the cause of your translation problems from the start.

If you really want to keep that scaling (or any other transform you decide to put there), then you need to get the net transform of that extra NodePath “prototype11”, not the one of the “11.0.0” NodePath. And that’s the only change you need to make; the local transform should still be obtained through the “11.0.0” NodePath. The animation file is written correctly as well.
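In other words, something like this (a rough sketch; "joint_np" and "geom_np" are just placeholder names for the two NodePaths involved):

joint_np = render.find("**/11.0.0")       # the NodePath that represents the joint
geom_np = joint_np.find("prototype11")    # the extra child that actually holds the geometry (and the 0.1 scale)

# use the net transform of the geometry NodePath to transform the vertices
net_mat = geom_np.getNetTransform().getMat()

# the local transform for the joint itself is still obtained from the joint NodePath
local_mat = joint_np.getMat()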

NOTE: don’t try to solve the problem by adding the scale to the transform of the joint NodePath (e.g. “12.0.0”), because any child joints would inherit that scale as well, which is not what you want.

So, hopefully this will finally blow that nasty bug to smithereens :slight_smile: .

The problem would most likely remain, yes, but we don’t need to worry about that anymore :wink: .

Hahaha! You must be some kind of esoterist! https://www.youtube.com/watch?v=Xgiw9Y2sqV0

Yes, it completely nuked that bug. The roll animation works properly and I removed all multiplications by 10. To generate those actors, I was simply bringing together a series of generated models and then forming a hierarchy. To transfer the original models to the actors, I copied them using the .copyTo() method. However, before I copied them, I scaled the original model down to 0.1 [I can't remember why; maybe at the time I was thinking they were too big?..]. And that is why I had to multiply everything by 10 at a later stage, without knowing why I had to multiply by 10… You pointed that out without even looking at my code :astonished: . Thanks very much! I hope that this thread helps someone out in the future with a similar problem.
Thank you once again! :smiley: :smiley: :smiley: :smiley: