Contribution ideas - more GAME samples!

Let me share a work-in-progress look at the character-model that I’ve been working on!

This model is primarily intended to be a stand-in character for devs to use–a bit like the Unreal Engine robot-figure, but a little fancier. However, it’s also intended to be available should it prove useful toward the samples or the showcase game, whether as protagonist or NPC.

At present the model is modelled and rigged, with a small set of animations. UV-mapping and texturing are still underway, and normal-mapping has yet to be started. The final model might end up slightly lower-poly, as some of the corner-sharpening may be moved into the normal-maps.

The rough-looking sections (on the lower arms and legs, and at the back of the head) are intended to resemble crystal in the final texturing.

So then, this is PAnDA–(P)anda-(An)thropomorph (D)evelopment (A)ssistant!
(Screenshots and gifs simply recorded from PView.)


The other animations that have been made at the time of writing are these: a stun-loop, an idle, and a static looping “stand”.


Seamless animations, and rather well optimized I imagine! I wonder if we could demonstrate different levels of polygon complexity on this model, with different triangle counts.

Thank you! :slight_smile:

I honestly don’t know how well-optimised it is–I’m somewhat self-taught in this, and so don’t know what I don’t know about the craft! ^^;

Still, I am trying to apply what I do know, and thus to make it at least somewhat optimised!

Hmm… What did you have in mind? Just different levels of simplification, with the normal-maps taking on more or less of the work of detailing it?

Something like that might work, but I’m not sure that this model is ideal for it: with such a large, smooth body, there isn’t as much room for lots of detail in a high-poly version as I’d like.

Well, Panda does have a LOD system, as far as I recall. And we could also do something more custom, like using a single animation armature for a player character and then applying that armature with weights to our various polygon models, to fit the different specs we’re targeting. I know it’s a little unusual, but it’s totally possible with Panda3D to target both an integrated laptop card from six years ago and a recent RTX-series card, for instance.
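For reference, Panda’s LOD system is exposed via LODNode; a minimal sketch of switching between two versions of a character by distance (the model filenames and switch distances below are placeholders):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import LODNode, NodePath

base = ShowBase()

lod = LODNode('character-lod')
lod_np = NodePath(lod)
lod_np.reparentTo(base.render)

# Hypothetical high- and low-poly exports of the same character.
high = base.loader.loadModel('character_high')
low = base.loader.loadModel('character_low')

# Each switch is (far distance, near distance); children are matched to
# switches in the order they are parented beneath the LODNode.
lod.addSwitch(50.0, 0.0)      # high detail from 0 to 50 units away
lod.addSwitch(1000.0, 50.0)   # low detail from 50 to 1000 units away
high.reparentTo(lod_np)
low.reparentTo(lod_np)

base.run()
```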

Hmm… note that the whole thing right now takes up about 4000 vertices, I believe–it’s not a hugely high-poly model, I feel!

The main thing for less-powerful machines might be to reduce the texture-size; at the moment I’m planning to use 4096x4096 textures.
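As an aside, if memory serves Panda has a `texture-scale` PRC variable that down-scales textures as they are loaded, which could let lower-end machines share the same 4096x4096 source maps. A rough sketch (the 0.5 factor is arbitrary):

```python
from panda3d.core import loadPrcFileData

# Halve the resolution of textures as they are loaded; a 4096x4096 map
# would then arrive on the card as 2048x2048.
loadPrcFileData('', 'texture-scale 0.5')

from direct.showbase.ShowBase import ShowBase
base = ShowBase()
```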

But I don’t think that it automatically generates LODs–unless I’m mistaken? We’d still be talking about separate models to be made.

And as I said, I’m not convinced that this specific model is apt for demonstrating such LOD differences: its largest (and thus most visible) section is pretty much smooth.

I have the source code to David Rose’s Tagger sample lying around somewhere. There is also this, made by @wolf:

And these, which will likely make it into 1.11:

It stood for Disney’s Interactive Real-Time Environment Construction Tools. It was one of several libraries (one other being PANDA—also an acronym!) that they used together. When Panda3D was open-sourced, they were packaged into the same distribution.

Generally, DIRECT contains more high-level code that is implemented in Python. We generally try to be more inviting to C++ users nowadays, so we try to implement things in C++, except where there is a clearly established precedent.

For now, I wouldn’t worry about it and simply use direct as needed in your samples.
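To illustrate “use direct as needed”: ShowBase, the task manager, and the GUI classes all live in the direct tree, even though the core engine is C++. A minimal sketch (the spinning camera is just placeholder behaviour):

```python
from direct.showbase.ShowBase import ShowBase
from direct.gui.OnscreenText import OnscreenText

class Demo(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.disableMouse()  # take manual control of the camera
        OnscreenText(text='Hello from DIRECT', pos=(0, 0.9), scale=0.07)
        self.taskMgr.add(self.spin_camera, 'spin-camera')

    def spin_camera(self, task):
        # Slowly rotate the camera about the vertical axis.
        self.camera.setH(task.time * 10.0)
        return task.cont

Demo().run()
```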

This is fine. I think it is too early to think about further reducing the polygon count. This is a fairly simple model so it might make more sense to demonstrate LOD using a more complex demo piece.

A great way to make a multi-LOD model in Blender, by the way, is to use the Multiresolution modifier, where you start with a coarse shape and subdivide it over time to add more detail. Blender keeps the coarser subdivision levels around, so you can use those for the more distant LOD levels.
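A rough sketch of that workflow through Blender’s Python API, assuming the character mesh is the active object (the modifier name and level counts are arbitrary):

```python
import bpy

obj = bpy.context.object  # the character mesh

# Add a Multiresolution modifier and subdivide a few times; Blender keeps
# every coarser level around, not just the finest one.
mod = obj.modifiers.new(name='Multires', type='MULTIRES')
for _ in range(3):
    bpy.ops.object.multires_subdivide(modifier=mod.name)

# To export a more distant LOD, simply lower the displayed level before
# applying the modifier or exporting.
mod.levels = 1
```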

I do agree, I believe.

Funnily enough, I intend to use this feature to do some sculpting for the model, which I intend to then bake into a normal-map.

Using it as a source of LOD levels is a good idea, I do think! :slight_smile:

@svf if you need a “sample” game using the distributed networking, you could also look at my game Multiplayer Boardgame, which features a lot of what the distributed networking system has to offer. It probably also goes beyond some of the things I’ve done in the to-be-added samples that @rdb already mentioned.
I can also provide further specific help on the networking system as needed.
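For anyone who just wants the shape of it, here is a very rough sketch of the two halves of a distributed-networking setup; the port number, the “sample.dc” file, and the callback names are placeholder assumptions, so treat the actual samples linked above as the reference:

```python
from direct.showbase.ShowBase import ShowBase
from direct.distributed.ServerRepository import ServerRepository
from direct.distributed.ClientRepository import ClientRepository
from panda3d.core import URLSpec


class GameServer(ShowBase):
    def __init__(self):
        # No window is needed on the server side.
        ShowBase.__init__(self, windowType='none')
        # The .dc files declare which distributed classes exist.
        self.air = ServerRepository(4400, dcFileNames=['direct.dc', 'sample.dc'])


class GameClient(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.cr = ClientRepository(dcFileNames=['direct.dc', 'sample.dc'])
        self.cr.connect([URLSpec('http://127.0.0.1:4400')],
                        successCallback=self.on_connect,
                        failureCallback=self.on_failure)

    def on_connect(self):
        print('Connected to the server.')

    def on_failure(self, *args):
        print('Connection failed:', args)


# Run one or the other:
#   GameServer().run()
#   GameClient().run()
```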

Wow! I should have searched a bit more; this is great. I looked at the code, and it also seems well documented. Very nice job putting these together!

I’ll definitely try to pull the code and see if I can make things work as well, and maybe see if I can simplify things a bit for a mini-game sample. But as-is, much of what’s been done here already seems fitting to be posted as official Panda networking samples, I feel. @wolf thanks for sending these!!

Nice work @Thaumaturge ! I think we’ll see lots of use for this! It’s easy to underestimate how much work it is to do this; I really appreciate your doing it.

One idea (I’ve been looking to try this myself) is that if the armature is named/split properly, one can actually upload the model to mixamo.com and download/apply any of their animations; they work on biped-type models like the one you have here. I wonder how much work it would be? We could grab hundreds of animations from that site and apply them instantly if the armature is laid out accordingly. If someone were able to do this, I’d be really, really interested in hearing the details of how difficult it was!

I’ve updated the top post with all the great info here, thanks all for contributing! Great to see all the cool things coming together!


Thank you! I appreciate that in turn! :slight_smile:

Hmm… I’ve never tried to do that, and so don’t know what their armatures look like.

I’m rather hesitant to make my current armature more complex than it already is–and I’ve made a point of keeping it simple, for the most part.

That said, I won’t entirely dismiss the idea, especially as you remind me that I still have yet to go back and rename my armature’s bones to something more sensible than “Bone.000”, etc… ^^;

I’m not sure what you mean by “splitting” the armature, and “biped” is a very large group of possible armatures. It’s not too hard to rig an armature from scratch in Blender, but organizing hundreds of animations (armature actions) is not trivial, and it would surprise me if a website converter could do something that general robustly.
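For illustration, rigging from scratch in Blender really is only a handful of calls; a minimal sketch that builds a two-bone chain via the Python API (bone names and lengths are arbitrary):

```python
import bpy

# Create an armature object and drop straight into Edit Mode.
bpy.ops.object.armature_add(enter_editmode=True)
arm = bpy.context.object
ebones = arm.data.edit_bones

# Rename the default bone and add a connected child above it.
root = ebones[0]
root.name = 'hips'

spine = ebones.new('spine')
spine.head = root.tail
spine.tail = (0.0, 0.0, root.tail[2] + 0.5)
spine.parent = root
spine.use_connect = True

bpy.ops.object.mode_set(mode='OBJECT')
```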

On a totally unrelated note, I found this website with a thousand free PBR materials: https://cc0textures.com/

An armature is a set of bones. How many bones to use, and how to divide the armature up, is usually up to the artist. A biped is a type of armature with two legs, etc.; many 3D modelling packages can do things automatically with ‘bipedal armatures’ if the bones are laid out properly. Let me know if that makes sense at all. For example, footstep/walk tools are very common in packages like Maya/Max/etc., but the biped needs to be set up properly.

I did a test with Mixamo / Blender / Panda and it was pretty impressive. I was able to get an animated mesh into Panda, and every animation from Mixamo that I tried did work. Here’s the output:

So it does indeed work for a properly rigged mesh. But when I uploaded a ‘custom’ mesh to the site, it /almost/ worked: Mixamo auto-rigged the entire model, but some vertices in the feet were missed. So I know it’s possible; I just haven’t had time to read up on how to get ‘custom’ meshes fully compatible. My assumption is that I need to fiddle with the foot bones, but I haven’t confirmed that yet.

It would be great to be able to tap into all those free animations; some are very high quality.
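In case it helps anyone repeating the test, loading the converted character in Panda is essentially one Actor call; the filenames below are hypothetical stand-ins for the exported mesh and animation files:

```python
from direct.showbase.ShowBase import ShowBase
from direct.actor.Actor import Actor

base = ShowBase()

# The mesh and each Mixamo clip exported as separate files.
actor = Actor('mixamo_character',
              {'walk': 'mixamo_character-walk',
               'idle': 'mixamo_character-idle'})
actor.reparentTo(base.render)
actor.loop('walk')

base.run()
```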

This is great!


If you can post information (a graph, picture, etc) regarding the armature standards I’d need to meet, I would probably upload an armature here that somebody could use. But, if the armature needs to be provided, why couldn’t somebody just download all the prebuilt “biped animations” they already have in .blend format? Sorry if this sounds a little dense.

My process is to: 1) build the armature, 2) bind a mesh to it, 3) animate the whole thing.

I do wonder what kind of “tricks” are involved in this website converter process that could lead to weird bugs and bone scaling issues. I totally understand why a team or a particular person might like these prebuilt animations, but it’s not really something I’m interested in for my own work. I’d rather not scale nodes.

And sorry to be a little nitpicky, but the animation you have shown does not appear seamless. It would be better to have a fluid motion for an effective advertisement of a website converter.

The lanky bald guy with a beer belly in a wetsuit that I recently made in Blender:

You can see the animation dope sheet there.

The GIFs don’t do it justice; it’s much nicer and smoother in real time.

Most of the 26,000 or so animations that I tried at random seemed to work in Panda. Note that some of these were created using pretty expensive mocap hardware, so the quality is high. Not all are great, though; there’s a mix.

Not all anims work on all of their meshes; some have minor deformation.

Great questions. No idea; the extent of my Mixamo experience is: 1) export FBX, 2) import into Blender, 3) export to Panda.

I have uploaded a custom mesh; as I said previously, it almost worked. It got very close. I’m not sure what’s needed beyond that. (I haven’t spent much time on this.)

But I really do think that a process for using all these professional anims in Panda would be a very cool thing.
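Once a pile of those clips is loaded onto an Actor, they can also be cross-blended rather than just played one at a time; a small sketch, with hypothetical clip names and weights:

```python
from direct.showbase.ShowBase import ShowBase
from direct.actor.Actor import Actor

base = ShowBase()

actor = Actor('mixamo_character',
              {'walk': 'mixamo_character-walk',
               'idle': 'mixamo_character-idle'})
actor.reparentTo(base.render)

# Blend two of the downloaded clips together: 70% walk, 30% idle.
actor.enableBlend()
actor.loop('walk')
actor.loop('idle')
actor.setControlEffect('walk', 0.7)
actor.setControlEffect('idle', 0.3)

base.run()
```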

I’ve given the question of Mixamo and the PAnDA model a little more thought. I think that I’m not going to aim to support Mixamo, although nor do I intend to work against such support.

Simply put, I’m making this model in spare time, and there’s a fair bit of work yet to be done as it is–I haven’t even started on normal-mapping, let alone the bit of sculpting that I intend, or a gloss-map, or a glow-map!

So I rather don’t want to add to my workload by making sure that the armature conforms to Mixamo’s requirements.

Totally reasonable.

I’ll get to this myself at some point, when the samples need a bit more advanced animation logic. Ideally, after I figure it out, I’ll provide a step-by-step tutorial on how to make it work, as I think it’ll be a major boon for Panda users to be able to tap into these more easily.

I did a bit more research, and it looks like my original assumption might have been almost completely wrong. I don’t think Mixamo uses the armature from custom meshes that are uploaded; it actually regenerates one directly from the mesh, via the upload UI! Which is pretty insane, that it can auto-rig meshes like that. There were some notes about putting the model in a T-pose at upload, centering the mesh at (0, 0, 0), and ensuring the feet are touching the ground at zero. That may have been why my custom mesh didn’t work. With some trial and error I think I can work the last few kinks out. I’ll definitely post a tutorial when I get there.
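If the placement requirements turn out to be the culprit, they are easy to script in Blender before export; a rough sketch, assuming the character mesh is the active (and selected) object:

```python
import bpy

obj = bpy.context.object  # the character mesh, in Object Mode

# Center the object on the world origin.
bpy.ops.object.origin_set(type='ORIGIN_CENTER_OF_MASS')
obj.location = (0.0, 0.0, 0.0)
bpy.context.view_layer.update()

# Drop the mesh so its lowest vertex (the feet) sits exactly at Z = 0.
lowest_z = min((obj.matrix_world @ v.co).z for v in obj.data.vertices)
obj.location.z -= lowest_z

# Apply the transforms so the exported FBX carries the corrected placement.
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
```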

This seems to have good info on the process:


Indeed, impressive tech! This makes quite a lot more sense than requiring a mesh with an intricately laid-out armature that fits their analysis tools.

I think I understand what you’re implying with regard to model positioning, but if we centered the mesh at (0, 0, 0), the feet would be below the ground.

Yeah, that seems like quite a lot of work, and I’m 99% sure it does not allow full control over character self-clipping. But, as an automation procedure, cool! Good luck with that if you are committed to the task.


Per the tutorial video, it looks like you might not need an armature at all, just a mesh. A couple of minutes in, they show a little gizmo to help their tool auto-rig. If that’s the case, it should be a huge time-saver: take any humanoid model and let their tool do the anim work.

(I need to look more closely at their licensing… just to make sure the time invested here is really justified for Panda sample-dev purposes.) But I’ve seen a few non-Panda community projects use their stuff recently, which piqued my interest.


Maybe it can do rigging automatically for humanoid characters only. I admittedly watched the tutorial without sound, so I just saw the large bone list and assumed. :slight_smile: