Interview with Mikros Image on Astérix: Le Domaine des Dieux
The famous comics written by Goscinny and illustrated by Uderzo, starring the Gallic heroes Astérix and Obélix, were created in 1959. Astérix: The Land of the Gods is the first of the albums to be adapted into 3D animation, by directors Louis Clichy & Alexandre Astier. For this M6 production, the French studio Mikros Image was appointed executive producer of the film, overseeing the overall production of the 85-minute film, delivered in both 2D and stereoscopic 3D, over a period of less than 24 months, including supervision of the work of two studios in Belgium. Released in France at the end of November 2014, the film has already exceeded 3 million admissions. Julien Meesters (CCO), Laurent Clavier (TD), Simon Thomas (CG supervisor) and Nicolas Trout (EP) shared with us some of the technical choices they faced throughout production.
Can you tell us a bit about yourself and the history behind Mikros Image?
[JM] Mikros Image was created in 1985 in Paris, pioneering digital imagery with early Quantel machines. We steadily grew our activities across advertising, institutional projects and short films. CGI started at Mikros in the early 90s, and in 1998 we added an extra market, moving into feature film VFX work. The company was doing well in VFX, commercials, feature film and post, but we were missing feature animation projects. That is why we got involved in Logorama, directed by H5, which went on to win the Academy Award for Best Animated Short Film in 2010. This marked the starting point of our development into animation. Today, we operate in France, Canada, Belgium, the USA and the UK (in partnership with post house EightVFX).
I started my career at Mikros in 1997 as a Flame operator. I then joined Digital Domain as DFX supervisor and Flame operator, overseeing digital visual effects on various projects. I developed my skills working with highly creative directors such as Joe Pytka (Gatorade / Jordan vs Jordan) and Mark Romanek (Linkin Park), and honed a comprehensive approach to VFX supervision on particularly ambitious projects mixing 3D, 2D and practical techniques. I came back to Mikros in 2003 to take over the Creative Direction of the studio. I'm now Chief Creative Officer and Deputy Managing Director of the group.
How did you end up doing an Asterix animated feature film?
[JM] Since its Oscar win for Logorama in 2010, Mikros has been pushing towards animated feature film projects, a strong part of our strategy alongside VFX and post-production. In this context, we had the chance to meet Nathalie Altman (Asterix Executive Producer), Philippe Bony (M6/Producer), Louis Clichy (Director) and Patrick Delage (Animation Director). We were thrilled: the project and the team behind it were exactly in sync with our ambitions! We spent time discussing the strategy of the film and what makes it specific. Then we had the chance to take part in a test, along with two other studios. We gave it our best and were chosen by the producers and creatives to be part of this outstanding adventure!
How many studios worked on it and what was the level of collaboration among them?
[JM] Mikros was executive producer and lead studio, with around 200 people in France working on Asterix, including 140 artists. Grid and Nozon were our two partner studios in Belgium. Grid participated with sets modeling and animation for half the film, and Nozon produced lighting/compositing for one third of the film. We were responsible for their work and all the technology choices.
Which modelling, animation and texturing packages were used?
[JM] Our CG workflow was Maya > Katana > Arnold > Nuke. For texturing we used Mudbox and Photoshop.
Why did you use Arnold for this project?
[JM] Mikros has a long and tightly knit history with Arnold; we have been at the forefront of studios using it in production since 2001. The first use was to render occlusion passes for the Ferris wheel in the French movie Le boulet. Over the years, and according to our needs, we used the Arnold API to enhance an early Maya exporter called mArny, developed shaders and procedurals, and described the base specifications of MtoA (the official Maya to Arnold plugin). Mikros developed and used its own version of MtoA internally from 2008, and switched to the Solid Angle version only last year. The Asterix project started before that switch, so we used the Mikros plugin.
A large part of Mikros' reputation is based on the quality of our final images. In the end, the final image is always assembled in compositing, but we believe that pushing 3D renders to their best can greatly simplify our pipeline, reduce the amount of data and comp time, and enhance the final result. We have shared our vision of what a renderer should be with Marcos Fajardo (CEO of Solid Angle) since the beginning of our collaboration, and we still believe things are going the right way. The support we get, from Marcos himself at first and later from the whole Solid Angle team, has always been both an important factor in our choice and a precious help. When schedules are short, it's good to know we can rely on a strong support team.
The look of the Arnold tests that Mikros delivered to compete for the Asterix project was one of the points that convinced M6 SND to work with us.
What was the biggest challenge that you had to overcome in this film?
[JM] The biggest challenges were all related to the fact that it was Mikros', and our partner studios', first time making an animated feature film of this scale. Particularly challenging were:
- Bringing in a new breed of talent with a wealth of experience in animated features.
- Creating a workflow from scratch, based on the team’s previous experience in different French, European or US studios.
- Working on 4 different sites with 2 other studios, sharing the work across 5 departments.
- Adapting, dismissing or redesigning from scratch the VFX technologies and methodologies available within Mikros around the specificities of animated features.
- Using The Foundry's Katana combined with Arnold to manage the huge number of shots and assets in an efficient way.
Was it difficult to render scenes with so many different characters?
[LC] In some shots, you have more than 50 non-crowd, fully-rigged characters. A Maya scene containing all the characters was not an option, so we used references and geometry caches inside Maya for the animators (in GPU mode), and also to transfer animated characters and props to Katana using Alembic. Since Katana doesn't keep everything in memory and just fetches geometry on demand, it has no problem dealing with those heavy scenes. The Katana project loads quickly, you can keep multiple shots in it and share your edits between them; it's pretty neat. Render scene construction takes longer, but it works.
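That on-demand behavior can be illustrated with a small, purely hypothetical Python sketch (the class, the fake loader and the paths are ours for illustration, not Mikros' or Katana's actual code): the scene description is built immediately, but heavy caches are only read the first time something actually touches them.

```python
from dataclasses import dataclass, field

@dataclass
class DeferredGeo:
    """Stands in for a heavy Alembic cache: the file path is recorded
    at scene-build time, but vertex data is only loaded on first access
    (hypothetical loader, for illustration only)."""
    path: str
    _cache: list = field(default=None, repr=False)

    def load(self):
        # A real implementation would parse the .abc file; we fake it
        # so the example is self-contained.
        return [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]

    @property
    def points(self):
        if self._cache is None:      # first access triggers the load
            self._cache = self.load()
        return self._cache

# Building the "scene" is cheap: nothing is read from disk yet.
scene = [DeferredGeo(f"/shots/sq10/char_{i}.abc") for i in range(50)]
loaded = sum(1 for g in scene if g._cache is not None)
print(loaded)       # 0 — no geometry in memory before rendering
scene[0].points     # "rendering" character 0 pulls only its cache
```

Fifty fully-rigged characters stay as lightweight path records until the renderer asks for them, which is why the project opens quickly even on the heaviest shots.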
We certainly underestimated the amount of memory our render stations needed. There were shots with many characters with heavy hair and SSS, plus grass and trees, and we knew rendering wasn't very efficient in these cases. We tried to avoid semi-transparent objects as much as possible, even though they gave better-looking renders. During production we discovered that we couldn't optimize every single shot the way it deserved; that would have taken too much time. So we dedicated one person full time to the most expensive shots and upgraded the farm for the rest. We also split the heaviest shots into more render layers. And we benefited from Solid Angle's work: we began with Arnold 4.0 and finished with the much faster 4.2.
How did you match the groom of the original characters so well?
[LC] We used QUIFF, a homemade technology that generates hair within 3D "envelopes" that define the orientations and boundaries of the haircuts, moustaches, etc. There were many advantages to this approach:
- A very simple and controllable way to design the characters' hair.
- No need to run overkill simulations for each shot: not only would it have been an extremely costly process, but simulations give results that may be physically correct yet completely different from the animation style of the film.
- As we would only animate the hair envelopes, the animation team had complete control over the animation and shape of the haircuts.
- This was also a very efficient technique to deal with hair collisions.
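The envelope idea can be sketched in a few lines of Python (a deliberately simplified illustration with invented names and math, not the QUIFF code): hairs are grown along the envelope's orientation and bounded by its height, so animating the envelope reshapes every hair at once.

```python
import math

def hair_curve(root, direction, length, cvs=5, curl=0.3):
    """Grow one hair from a scalp point inside an 'envelope' defined by
    an orientation and a maximum length. Hypothetical sketch: the real
    QUIFF primitive is far richer."""
    dx, dy, dz = direction
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm
    points = []
    for i in range(cvs):
        t = i / (cvs - 1)          # 0 at the root, 1 at the envelope tip
        bend = curl * t * t        # quadratic droop toward the tip
        points.append((root[0] + dx * length * t + bend,
                       root[1] + dy * length * t,
                       root[2] + dz * length * t))
    return points

curve = hair_curve(root=(0, 1, 0), direction=(0, 1, 0), length=0.5)
print(curve[0], curve[-1])  # root stays on the scalp, tip at envelope height
```

Because the animators key the envelope rather than the individual curves, one control moves a whole moustache or haircut, and hairs can never pile up outside the envelope's boundary.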
Did you have to write any custom shaders to achieve a particular effect?
[LC] Not that much, in fact. The artists were quite comfortable with the set of shaders already available in the Mikros libraries and MtoA before Asterix started. The most important were a skin shader with multiple SSS layers, a blackhole/matte shader, a texture reader with advanced tag resolution, and an AOV collector. We still had to write some additional utilities to comply with the way Katana connects shading networks, as the version we were locked to didn't handle array parameters. For the QUIFF suite, we wrote two specific shaders and a camera, which let us suppress annoying interpenetrating hairs, and bake and render a mesh using the same shader as the one used for the curves.
Gaël Honorez at Nozon gave us a stereo camera shader that allowed us to reduce total render time: we compute the left eye and recreate the right eye in Nuke using information stored in a custom AOV. This worked well for background layers, where the disparity between the two viewpoints is low.
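The reprojection idea can be sketched in illustrative Python (one scanline, integer disparities, our own function names; the actual Nuke setup driven by the custom AOV is more sophisticated, and the shift's sign depends on the camera rig): each left-eye pixel is pushed horizontally by its stored disparity to approximate the right-eye view.

```python
def rebuild_right_eye(left_row, disparity_row):
    """Shift left-eye pixels by their per-pixel disparity to fake the
    right-eye view. Simplified sketch: real disparities are fractional
    and disocclusions need smarter filling."""
    width = len(left_row)
    right = [None] * width
    for x, (value, disp) in enumerate(zip(left_row, disparity_row)):
        xr = x + disp              # integer pixel shift, for simplicity
        if 0 <= xr < width:
            right[xr] = value      # later (nearer) writes win
    # Fill disocclusions (pixels nothing landed on) from the neighbour:
    for x in range(width):
        if right[x] is None:
            right[x] = right[x - 1] if x > 0 else left_row[0]
    return right

left = [10, 20, 30, 40]
disp = [0, 1, 1, 0]               # mid-ground pixels shift by one
print(rebuild_right_eye(left, disp))   # [10, 10, 20, 40]
```

The gaps and overwrites in this toy example show why the trick is reserved for background layers: where disparity is low, disocclusions are small enough to fill invisibly, so skipping the second eye's render is essentially free.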
We wrote some procedurals too: for the QUIFF hair primitive and for Shave and a Haircut cache files, to deal with fluids and some .ass piping within Katana, and to instantiate objects on a Katana point cloud. We also modified the Alembic procedural to render our specific primitives, so we could use Arnold to check and conform shots after the animation step and before lighting.
Which character was the most difficult to shade?
[LC] Asterix was certainly the most difficult one because of his feathers; it took some time to find the right look: white objects are always difficult to shade! So was the dog, Idefix. Oursemplus (the centurion) also took some time; he was used as a starting point to find the proper look for metals. We used a lot of SSS for skin and for a few edible props: cake, grapes, etc. A large number of characters are lightly dressed, so in the end we computed a lot of skin pixels. Obelix alone is a big piece of SSS cake!
Can you talk a bit about the shading and look for the volumes?
[LC] When the project started in January 2012, there was no volumetric primitive available in Arnold. We began looking at alternative solutions: Mantra and Houdini, mental ray inside Maya, and compositing to integrate things back into the Arnold renders. Finally, Arnold 4.0.12 was released with a volume primitive, and we integrated it much later than initially planned. We adapted the Solid Angle code to our in-house MtoA and built the tools to pass volumes to Katana and edit their shaders there. The tools were ready for the FX team in September 2013; Maya 2D fluids followed at the end of 2013.
The animation and render of the fluids is very cartoony in some shots, to match the comic books. The FX team shaded the volumetrics inside Maya with MtoA, and they could launch Katana renders that integrated their work in progress into the shot, so they could see how their settings looked in the shot's lighting context. We didn't have time to implement OpenVDB and benefit from the Houdini tools, though. The most common problems we faced were due to the scaling applied once volumes were placed on set, and to the fact that we worked with linear inputs in Katana but sRGB inside Maya. The Maya fluids lacked detail, so we backlit them, and the shadows propagating through the fluids brought back some fine detail.
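The linear-vs-sRGB mismatch mentioned here is a classic pitfall. As a reference point, the standard sRGB transfer functions look like this (these are the generic IEC formulas, not Mikros' pipeline or OCIO code):

```python
def srgb_to_linear(c):
    """Inverse sRGB transfer function: what a value authored under an
    sRGB view (as in the Maya sessions described above) needs before
    it can feed a linear workflow like Katana's."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Forward sRGB transfer function, for display."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A mid-grey authored as 0.5 under an sRGB view is only ~0.214 in
# linear light; passing it to the renderer untransformed makes a
# volume's density or emission far too bright.
mid = srgb_to_linear(0.5)
print(round(mid, 3))   # 0.214
```

Values tuned by eye in one space and consumed raw in the other are off by roughly a power of 2.4, which is exactly the kind of surprise the team describes hitting when volumes moved from Maya to Katana.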
Which types of lights did you use most and why?
[LC] As the lighting base, each shot had a skydome light with an environment defined by the layout department. On some shots, when shadows were part of the action, layout also defined a spot light, and we retrieved that info inside Katana. The lighting team mostly used quad lights, because they give a softer look in line with the artistic direction, plus a few directional lights.
How important is 3D motion blur in a project like this?
[LC] We've got Gallic junkies on magic potion. They move and act fast; when they run, you can see the wheel formed by their feet, as in a Tex Avery cartoon. The animation director wanted that look in the movie: it's something he missed, and requested, when he saw the first motion-blurred renders. We knew we'd have to export a lot of subframe keys to achieve nice motion trails, because only mesh deformation applies in this case. We also did tests and saw that too long a blur made our objects simply disappear. Arnold had just added a new feature, the custom motion blur shutter curve, so we took the time to improve the pipeline to be able to use any blur range, number of keys or custom curve on demand. We finally found a curve that did the job, giving a nice trail effect while accentuating the frame key. The rolling feet effect still needed a trick, though: to get it directly out of Arnold, we would have had to break energy conservation by artificially boosting the weights of some samples.
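One way to picture such a shutter curve, as a purely hypothetical sketch (the shape, the `peak` and `power` parameters and the function name are ours; the production curve is not published): samples early in the shutter interval get little weight, so they leave a fading trail, while samples near the key frame dominate and keep the pose crisp.

```python
def trail_shutter(t, peak=0.9, power=3.0):
    """Illustrative shutter weighting over the normalized shutter
    interval t in [0, 1]: a long soft ramp-in produces the trail, and
    a sharp falloff after `peak` accentuates the frame key."""
    if t <= peak:
        return (t / peak) ** power                        # faint trail
    return max(0.0, 1.0 - (t - peak) / (1.0 - peak))      # crisp key

# Sampling the curve across the shutter: early samples barely
# contribute, the sample at t=0.9 dominates.
weights = [round(trail_shutter(i / 10), 3) for i in range(11)]
print(weights)
```

A flat (box) shutter would weight all eleven samples equally and smear the pose into near-invisibility at long blur lengths, which matches the "objects simply disappear" problem described above; the ramped curve keeps the key readable while preserving the Tex Avery trail.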
How big was your render farm for this project and what were your render times?
[NT] In France we used 4,200 cores for rendering: 3,860 dedicated render farm cores and 340 workstation cores (night time only). Most (>80%) of the French render farm consisted of 2.6 GHz CPUs with 32 GB of RAM; about 8% was 2.6 GHz / 64 GB, and another 8% was 2.8 GHz / 128 GB. In Belgium there were 2,400 cores: 2,300 render farm cores and 100 workstation cores. For lighting alone, render time averaged about 1 hour per layer, with about 4 or 5 layers per frame.
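For a sense of scale, here is a rough back-of-the-envelope on the lighting renders alone. Both key assumptions are ours, not the interview's: we read "1 hour per layer" as one render-node-hour, and we guess 16 cores per node.

```python
# Back-of-the-envelope from the figures quoted above. Assumptions
# (hedged, not stated in the interview): one layer = one node-hour,
# and 16 cores per render node.
frames = 85 * 60 * 24            # 85-minute film at 24 fps
layers = 4.5                     # "about 4 or 5 layers per frame"
node_hours = frames * layers     # lighting renders only
cores_per_node = 16              # assumption
nodes_france = 4200 // cores_per_node
days_nonstop = node_hours / nodes_france / 24

print(frames)                    # 122400 frames
print(int(node_hours))           # 550800 node-hours of lighting
print(round(days_nonstop, 1))    # ~87.6 days of continuous rendering
```

Even under these generous assumptions, lighting alone amounts to roughly three months of the French farm running flat out, before animation checks, FX, stereo or re-renders, which puts the sub-24-month schedule in perspective.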