Animata, according to the Web site, is “an open source real-time animation software, designed to create animations, interactive background projections for concerts, theatre and dance performances.”
And other things, of course. It’s free to download, and the source code is available as well.
It’s amazing software. It provides both an “IDE” of sorts to create animation scenes as well as a renderer for showing them. It also has built-in support for Open Sound Control.
However, despite all that’s good, I’ve had a few issues running it. Segfaults were common for me on Ubuntu, and crashes would happen every so often on Windows. There’s a very good chance that these things happen because there’s something goofy about how my machines are set up, but happen they do. Animata is written in C++, and I haven’t the skills to dig in and try to see why things go awry for me.
I was happy to learn that there was a Processing library that would render Animata scenes. It was in the rendering that I had my problems, and I’ve been doing assorted coding in Processing anyway, so the idea of being able to mix in Animata scenes with Processing sketches intrigued me.
Turns out that this library was not quite complete, and some of the things it did do, it did incorrectly. That is, its core behavior differed from what Animata would do with the same scene. However, the key business of mapping images to a mesh of vertices, and in turn associating that mesh with joints and bones, seemed complete.
Since this library was written in Java I was in a much better position to fix things. I forked the repository and started poking around. Before long I was seeing behavior that matched that of Animata itself. I also started adding some features to expand what I could do.
At this point my code is more than a set of bug fixes; it’s an expansion of behavior. I’ve now renamed my version to AnimataP5-ng.
About the best thing you can do is experiment. Look at the examples that come with Animata, poke around, try making your own.
The Animata “IDE” has no undo. In some cases this is no big deal. If you add a joint or bone someplace and then quickly realize you made a mistake you can use the IDE to delete it. However, if you have set up your mesh and bone association, and start moving things, you may discover you’ve not made the right associations, or, worse, see that you’ve missed associating a mesh vertex with a bone. In these cases your carefully arranged work will become unpleasantly distorted; restoring it by hand will be tedious at best.
You should either make use of a revision control tool, such as git, ensuring that you can roll back to a more stable version, or be in the habit of saving off your work (possibly with versioned names) so that you can dump whatever mess you’ve made and reload an earlier version.
An Animata scene consists of one or more layers that contain some number of texture-mapped meshes that have been associated with some sort of joint and bone structure.
When you create a new scene you get a default layer. You can add an image to that layer; technically it will be stored as that layer’s texture. As best I can tell a layer can have only one texture.
Your scene can have multiple layers, and layers can have their own child layers. The IDE lets you add and delete layers; you need to be sure you have selected the proper parent layer before adding a new one.
Once you have a texture image loaded you need to place some vertices over it. You can then create triangles by connecting these vertices. Make sure you cover all of the image you want to have visible. Then you click the “texturize” button to map the image to the triangles.
You then add joints and bones to the textured image. One thing I’ve learned the hard way is to try to place your vertices in such a way that you do not end up covering them with your joints and bones. After you’ve created your bone structure you need to associate the texture vertices with bones, and this is very hard to do if you have bones obscuring vertices.
I have tried to make fairly dense vertex meshes but found it way too tedious to map them to bones. I’m sure there is some optimum degree of meshiness but I’ve yet to discover it. I suggest starting simple in order to get a usable end result, and then seeing if you can go back and adjust it should it not quite move as you like.
Before long you should have an image mapped to some bone structure such that when you move a bone or joint the image shifts or moves in some happy way. Animata gives you two ways to make bones move. The first is to set the “tempo” for one or more bones. What this does is automate the expansion and contraction of the bone. Something to note is that bones are not visible. This means you can use bones simply for this push/pull effect. Look at the arm example that comes with Animata.
This type of animation has a few settings to control the speed and range of the motion. Things can get quite complex if you like. Quite often you will get some unexpected, and possibly unwanted, results. The bouncing of bones can cause your entire structure to shift, rotate, and shimmy across the screen. I’ve had objects simply spin around on me after upping the tempo on a bone.
One way to reduce unwanted movement is to set one or more joints as “fixed”. I had shied away from this at first because I wanted to be able to move my structures around. Lately, though, I’ve been using some fixed joints to eliminate unwanted shape distortion, and moving the entire layer. I cannot over-emphasize the need to experiment, especially with small structures, to develop a feel for what works and how well.
Once you have your layers, textures, and bone/mesh mappings you can view your finished scene. This is where you can employ the other way Animata allows you to move things: OSC, or Open Sound Control.
Animata has a built-in OSC server, and responds to a somewhat limited set of address patterns for controlling the contents of a scene.
You can move joints, change the length of bones, toggle layer visibility, change the transparency of layers, and (if you built the program from the source code) change layer location. You need to have assigned names to your joints and bones to make them available via OSC commands.
It is with OSC that Animata becomes a real-time live-action animation tool. You can, for example, use the Xbox 360 Kinect to send OSC messages based on skeleton tracking of real people. You could create a custom screen for Control or TouchOSC and manipulate the scene from your smart phone or tablet.
I somehow came across the AnimataP5 library on GitHub.
It hasn’t been updated in some years. I tried it out and it failed because of changes in Processing 2.0. I forked it with the goal of getting it to run on Processing 2.0. That was fairly simple, but I ran into a few places where it was not giving the same results as I would get from regular Animata, so I started fixing those, too.
As I was doing this I was beginning to hook in my Leap Motion controller. While the Leap code was directly controlling the scene behavior, I wrote it in a way that would let me swap out direct API calls for OSC.
I did not, however, add OSC handling to the AnimataP5ng library. Since this was Processing, adding OSC to a sketch is almost trivial. I created an example scene that responded to some of the default Animata OSC commands. However, aside from their limited scope, the Animata OSC commands use a somewhat clunky address pattern that did not lend itself to expansion.
Expand I did as I kept wanting to have my animation do more.
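To sketch what routing incoming messages in a sketch can look like, here is a minimal dispatcher that maps OSC-style address strings to handlers. The address string used here (`/scene/joint`) is made up for illustration; it is not one of Animata’s actual patterns.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// A minimal router mapping OSC-style address patterns to handlers.
// The address strings are hypothetical, not Animata's actual patterns.
public class OscRouter {
    private final Map<String, Consumer<float[]>> handlers = new HashMap<>();

    public void on(String address, Consumer<float[]> handler) {
        handlers.put(address, handler);
    }

    // Returns false for unrecognized addresses so the caller can ignore them.
    public boolean dispatch(String address, float... args) {
        Consumer<float[]> h = handlers.get(address);
        if (h == null) return false;
        h.accept(args);
        return true;
    }
}
```

In a real sketch, the handler bodies would call into the rendering library, and the dispatch call would sit inside whatever OSC-receive callback your OSC library provides.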
AnimataP5ng loads scenes from the data/ folder of your sketch. All images used by the sketch need to be in that same folder. This is a hold-over from Animata, which, no matter what you do, always saves a project with zero path information and expects to load images from the same folder as the nmt file.
One thing that sits in the back of my mind while hacking on AnimataP5ng is how, and to what extent, I might have it work with nmt files while deviating from what Animata allows. My library allows for things that Animata does not do, but it does not expect you to use a non-standard scene file. You still have to use Animata to create and revise scenes, and when you use Animata to save your work it will save it back out as it sees fit. This is how I learned about the file location thing. I edited an nmt file so that a texture would load its image from a specific directory path. It seemed to work, but once I saved my project I lost all that path info, and all that remained was the name of the image.
My first additions were to expand the public API so that a Processing sketch (including any OSC) could have more control. I then added some new behavior. Whenever I’ve given presentations about Animata I’ve been asked: can you dynamically change the images? In Animata, you cannot. The best you can do is set up, in advance, multiple layers that differ only in the image used, then swap around their visibility. In AnimataP5ng you can replace the image used by any texture, while rendering.
Once I had this in place I wanted more dynamic images. As far as I can tell, Processing will not load and render animated GIFs. Since the API allows for changing the texture image I created a sprite effect by defining a list of images and, on each call to draw(), updating the texture for the main layer.
I needed to make a few adjustments to this. First, changing the image on every frame didn’t look so good, so I added some counter code so that it would update every 20 or 30 frames or so. Next, I changed the AnimataP5ng API so that I could update a texture image by passing in a PImage instance instead of a file name. The reason was that constantly creating PImage objects when updating the texture image file name was causing memory problems. Since I was always reusing the same five or so images for my sprite there was no reason to reload the image files and re-instantiate the objects. This allowed me to use an array of loaded images in place of a list of image file names.
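The counter logic itself is simple. Here is a stripped-down version, with plain strings standing in for the preloaded PImage objects, and a swap interval you’d tune to taste:

```java
// Cycle through a fixed set of preloaded images, advancing the sprite
// frame only every `framesPerSwap` draw calls instead of every frame.
public class SpriteCycler {
    private final String[] frames;    // stands in for a preloaded PImage[]
    private final int framesPerSwap;  // e.g. 20-30 for a calmer effect
    private int drawCount = 0;
    private int index = 0;

    public SpriteCycler(String[] frames, int framesPerSwap) {
        this.frames = frames;
        this.framesPerSwap = framesPerSwap;
    }

    // Call once per draw(); returns the image to hand to the texture.
    public String nextFrame() {
        if (++drawCount % framesPerSwap == 0) {
            index = (index + 1) % frames.length;  // wrap around, reuse objects
        }
        return frames[index];
    }
}
```

Because the same few objects are reused forever, nothing new is allocated per frame, which is exactly what avoids the memory churn described above.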
At first I just used a list as part of my main sketch code, and this was fine so long as I was using only one scene in my sketch. The Animata renderer will only show one scene at a time. With Processing, though, you can render multiple scenes.
I had been using HashMap variables to associate scene names with assorted data. It became unwieldy to have multiple such maps tracking different data for a single scene. Each scene could have multiple images, and the scenes needed to be rendered in some specific order.
I decided to add some properties to the AnimataP5ng class itself. I added one to track rendering order, and another to hold a list of images. That list of images is as simple as it gets: a HashMap public property. Since a scene can have multiple layers, and each of those could plausibly have multiple texture images (for the sprite effect), the proper place for this would be on the Layer class. The thing is, when I thought of this I wasn’t sure that it would work well, or how best to implement it. I decided to do something quick and easy so I could start using it before I got caught up in implementation details.
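The gain from hanging a render-order property on the scene object itself, instead of keeping a separate map, is that drawing in order becomes a one-line sort. A rough sketch of the idea (the class and field names here are my own illustration, not the library’s actual API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: scene objects that carry their own render order,
// instead of that order being tracked in a separate HashMap.
public class SceneSorter {
    static class Scene {
        final String name;
        final int renderOrder;  // lower numbers are drawn first
        Scene(String name, int renderOrder) {
            this.name = name;
            this.renderOrder = renderOrder;
        }
    }

    // Return scenes in the order they should be drawn each frame.
    static List<Scene> inDrawOrder(List<Scene> scenes) {
        List<Scene> sorted = new ArrayList<>(scenes);
        sorted.sort(Comparator.comparingInt(s -> s.renderOrder));
        return sorted;
    }
}
```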
There’s a technical argument for having the texture image list be part of the Layer (or Texture, though a Layer can have only one Texture instance). There’s also a practical argument for keeping it at the top-level AnimataP5ng class.
Animata does not let you run multiple scenes, let alone have sprite effects using multiple texture images. The single-scene limitation means you have to think through how you can get all of what you want into that one scene, and often this means having multiple layers.
With Processing, however, your scenes can be much, much simpler because you can layer them. While there are still reasons to have multiple layers in a scene rendered using AnimataP5ng (for example, animating eyes or hair while coupling the location and scaling of all layers in the scene) you can likely get away with doing much less in each scene.
Further, this list of images is just that: a list. The code for swapping these images around is, so far, something you have to write. You could, for example, use a naming convention for your images as a way to associate images with layers, and in your draw() method use that to update specific layers.
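One possible convention (entirely my own suggestion, nothing the library enforces) is to encode the layer name and frame number in each file name, say `hair_03.png`, and parse it out before deciding which layer to update:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Parse image file names like "hair_03.png" into a layer name ("hair")
// and a frame number (3). This naming scheme is a suggested convention,
// not anything AnimataP5ng itself requires.
public class SpriteName {
    private static final Pattern NAME =
        Pattern.compile("([A-Za-z]+)_(\\d+)\\.png");

    final String layer;
    final int frame;

    private SpriteName(String layer, int frame) {
        this.layer = layer;
        this.frame = frame;
    }

    static SpriteName parse(String fileName) {
        Matcher m = NAME.matcher(fileName);
        if (!m.matches()) return null;  // not part of the convention
        return new SpriteName(m.group(1), Integer.parseInt(m.group(2)));
    }
}
```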
I have to see how I actually use this sprite thing. That will drive code changes. I can see adding, at some point, an API that makes it easier to load and manipulate images at the layer level. I’ve been pondering auto-loading images based on file name; my code so far has a fair amount of image-loading code that could be simplified if I made some assumptions about file locations and names. Or I may start using a configuration file for that, a sort of nmt extension. The code for this may end up as part of the AnimataP5ng class. Just writing about this is giving me ideas I’d like to try, and having things as part of the core classes would make sketch code a lot cleaner.
Another option is to create a helper library to add this additional behavior while not having to make the core AnimataP5ng classes more complex than they need be. This might have the added benefit of making that core API more stable while I hack around with extensions. If I do that I might remove that image list property. Or move it to the Layer class. Or something.
Finally, you’ve no doubt noticed that I’ve not included a single line of code here. There are some examples that come with the library but they do not show off the more recent additions. I’m still shaking out the API and trying out ideas to see what works well. When I have a suitable demo I’ll write that up in a future post.
Please give AnimataP5-ng a shot and let me know how it works for you.