An application for automated animation using NLP (and other video features)

Hello all,

This is honestly my first animation project, so please be kind :slight_smile:
I have been planning to use NLP for tasks that are otherwise considered "difficult", and as Covid struck, we thought to create a tool where users can easily make animations for their families without touching the underlying technology directly. Thus the idea of this Meme'er was born.

What is this:
1-line summary: The application uses NLP to create animation. No coding. Plus some video/image processing capability.

Code at GitHub (MIT license)

This application has two sections. The first is story telling, where the user can write a simple English sentence (like "lady was walking", "lady is walking", or "lady walked") and the NLU will interpret it, provided there is a model for "lady" (that is, one of a model's synonyms is "lady") and an animation "lady__walk" exists (that is, one of the synonyms of "walk" matches). If there are two or more "lady" objects, it will also check adjectives (like tall lady vs. young lady); the same goes for adverbs on walk (as in "walked fast" or "walked stylishly"). Models can also be named (for example, the lady is named Susan), and in all following lines that actor can be referred to by this proper noun.
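To picture the matching step, here is a minimal sketch using hand-rolled synonym tables and naive tokenization instead of a real NLU pipeline; all names and synonym lists below are hypothetical, not the project's actual data:

```python
# Toy sketch of the noun/verb matching idea -- NOT the real NLU code.
# Synonym tables here are hypothetical examples.
MODEL_SYNONYMS = {"lady": {"lady", "woman", "girl"}}
ANIMATION_SYNONYMS = {"walk": {"walk", "walks", "walked", "walking"}}

def resolve(sentence):
    """Return (model, animation name) if both a known model and a known
    action appear in the sentence, else None."""
    words = sentence.lower().split()
    model = anim = None
    for w in words:
        for name, syns in MODEL_SYNONYMS.items():
            if w in syns:
                model = name
        for name, syns in ANIMATION_SYNONYMS.items():
            if w in syns:
                anim = name
    if model and anim:
        # Animations follow the "<model>__<action>" naming convention.
        return model, f"{model}__{anim}"
    return None

print(resolve("lady was walking"))   # ('lady', 'lady__walk')
print(resolve("lady over pavement")) # None -- no action verb matched
```

A real implementation would of course use a proper POS tagger and a synonym source such as WordNet rather than flat lookups, but the asset-resolution logic is the same shape.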
Though this covers the easy part (noun-verb combos), for things involving three or more items (like "lady walked over the pavement"), logicals (under the logical tab) can be used. Again, NLP is used to obtain a valid logical (for example, "lady was walking over pavement" is valid here, while "lady over pavement" is not, since not all three relations match). In a logical, we can set which model needs to perform which action at what location, and so on.
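One way to think about the validity check above: a logical only resolves when all three relations (actor, action, location) match known assets. A hypothetical sketch of that rule, with made-up asset tables:

```python
# Hypothetical sketch of validating a (model, action, location) logical.
# The tables are illustrative stand-ins for the tool's asset registry.
KNOWN_MODELS = {"lady"}
KNOWN_ACTIONS = {"lady": {"walk"}}   # actions each model can perform
KNOWN_LOCATIONS = {"pavement"}

def valid_logical(model, action, location):
    """All three relations must resolve, or the logical is rejected."""
    return (model in KNOWN_MODELS
            and action in KNOWN_ACTIONS.get(model, set())
            and location in KNOWN_LOCATIONS)

print(valid_logical("lady", "walk", "pavement"))  # True
print(valid_logical("lady", None, "pavement"))    # False: "lady over pavement" lacks an action
```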
If you wish to check, under Documentation → hour long challenge there are a few examples.

The second section offers some standard video/image processing (under the processing tab). It all uses standard Python code (mostly Pillow and OpenCV) and ffmpeg. The whole list is at Documentation – Meme'er.
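For a flavour of what these steps look like, here is a generic Pillow snippet (not the tool's actual code; the function and parameters are illustrative) that does a typical grayscale-and-resize pass:

```python
from PIL import Image

def to_grayscale_thumb(img, size=(64, 64)):
    """Convert an image to grayscale ("L" mode) and resize it -- a
    typical Pillow-based processing step; parameters are illustrative."""
    return img.convert("L").resize(size)

# Demo on an in-memory image rather than a file on disk.
frame = Image.new("RGB", (256, 256), "red")
thumb = to_grayscale_thumb(frame)
print(thumb.mode, thumb.size)  # L (64, 64)
```

Video-level operations work the same way, with ffmpeg handling the encode/decode around per-frame Pillow/OpenCV calls.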

What more to do
To be frank, the codebase is very basic, as I was learning Panda3D/animation on the job, so lots of improvements can be made (and if any of you have suggestions, I will be very, very thankful).
In Panda3D: the most glaring issue with story telling so far is coordinates (if you look at this story, it has more numeric characters than A-Z characters, which might be a bummer for non-geeks). We are working on ways to simplify this, but so far the other options look even more complex. Beyond that, only a few of the P3D capabilities are used, which might be expanded later.

I can go on (as the developer :wink: ), including about the website itself, but I am more interested in your opinions. Do tell us how you think this should progress: a direction to proceed in, any new feature, changes or improvements to existing ones; literally anything is welcome!

Thanks for your time.


Interesting work indeed, especially the demo video of the character doing a generated action with a picture on the wall.

Do you have any long-term goals for this work? It seems you are poking at story generation from a developer-writer’s perspective, something along the lines of speaking story commands to a holodeck but in a current-gen game engine.

Asset labeling, which may eventually be solved by something akin to deep learning, would seem to be a precursor for fully fledged story generation in 3+1 space representations. Though, if you have any ideas on how this possibility space might be reduced, that would be intriguing.