
CG Lead Artist at The Mill Alex Hammond talks about how Arnold was used on the SSE ‘Maya’ spot

Collaborating with adam&eveDDB and Academy director Frederic Planchon, The Mill's VFX team created a 100% CG orangutan, seamlessly composited into a live action environment for renewable energy company SSE. The photo-real great ape has won three VES awards for Outstanding Compositing, Outstanding Animation and Outstanding VFX. We talked to Alex Hammond about his experience using Arnold for this beautiful project.

Can you tell us a bit about yourself?

I am a senior CG lead artist at The Mill, where I have worked for 10 years. I work across all areas of CG, particularly in the development of The Mill's creature work. I grew up constantly drawing and designing characters, and I knew I wanted to do VFX the first time I saw Jurassic Park (I admit it unashamedly!). I chose a degree in Computer Animation and, shortly after graduating, managed to get a job as a runner at The Mill, where I have since worked my way up and am now focusing my efforts on the generation of photo-real animals.

The SSE spot looks gorgeous. Did you feel from the start that you were working on something special?

Yes, we knew from the beginning it was going to be special. After receiving Frederic Planchon's treatment, we realised that he was as conscious about making the ad look good as we were. All the lighting on the shoot was natural, but he had an amazing way of recognising what would look good on camera and how to really get the most out of a shot compositionally.

I think that people engage with apes and monkeys more than with any other animal because of our close evolutionary link, so you immediately have a character with a high intensity of emotion, which helps sell the performance.

The Mill is getting really, really good at doing CG animals. Is there a dedicated group working on them?

We don't have a specifically dedicated team of artists who work solely on animals and creatures, but the high level of work we have produced lately puts us in a stronger position to pitch for more creature work... or not even have to pitch at all, because a lot of directors and agencies want to work with us more than ever!

The driving force comes from an ever-improving pipeline that runs from modelling to animation through to rendering. We have a strong core of Mill talent referred to as Lead CG ops, as well as some very high-end specialists within each field whom we call discipline supervisors. Each supervisor ensures we are on top of new technologies and techniques, whilst the leads usually drive the current jobs in the building.

What was the size of the team that worked on this spot?

About 12 in total; however, we had a very concentrated team early on to accommodate the initial build of Maya (the orangutan). Throughout the job we stayed flexible, expanding and contracting the team to accommodate workloads.

Which modeling, animation and texturing packages did you use?

We used ZBrush and Maya to model, Maya for muscle simulation, Softimage to generate and simulate hair and Mari to texture.

Why did you use Arnold for this project?

The Mill has a good relationship with Solid Angle, and Arnold is a fantastic physically based render engine. In particular, it is very fast at dealing with large amounts of data, which we needed for creating the hair on our CG orangutan. It also takes the guesswork out of realistically matching the lighting on set, because we can place lights in our 3D scene and get accurate intensities from them, driven by the values in our HDRIs and Arnold's GI.
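As a rough illustration of that idea (not The Mill's actual setup), this is what an HDRI-driven skydome light looks like when building an Arnold scene through its Python bindings; the node names and file path here are made up:

```python
from arnold import *

AiBegin()

# Image node that reads the on-set panoramic HDRI (hypothetical file name).
hdri = AiNode("image")
AiNodeSetStr(hdri, "name", "set_hdri")
AiNodeSetStr(hdri, "filename", "set_panorama.exr")

# Skydome light whose colour is driven directly by the HDRI, so the light's
# intensity comes from the measured values in the panorama rather than guesswork.
sky = AiNode("skydome_light")
AiNodeSetStr(sky, "name", "env_light")
AiNodeSetFlt(sky, "intensity", 1.0)
AiNodeLink(hdri, "color", sky)

# ... geometry, camera and output setup would follow before rendering ...

AiEnd()
```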

What was the biggest challenge that you had to overcome in this project?

Really the biggest challenge was ensuring the orangutan was as realistic as possible. We cast an orangutan called Rubi, who lived in a zoo in Albuquerque. Rubi ticked all the right boxes with both SSE and the director. She was of an age whereby she had a certain maturity, whilst still having a sensitive curiosity about her surroundings. Plus she had a well-groomed hairstyle!

Basing our model on a single real orangutan meant that we eliminated any design process and had a clear goal for what we had to achieve. It also meant that we could obtain perfect reference and take out any guesswork.

Did you reuse techniques from last year's PETA spot?

Peter Agg was the rigging artist, and Jimmy Gas helped us out with some of the simulation; they both worked on PETA: 98% Human. So yes, we were able to reuse some of the ICE technology from PETA from a simulation point of view. We ended up using a lot of ZBrush to get initial groom states for various parts of the body, and then re-interpreted these back in Softimage using ICE.

Can you talk a bit about the shading and lighting of the fur?

It was important to get a good dynamic range of lighting to ensure the correct specularity was achieved in both skin and the hair shading. The orangutan's hair was shaded using proprietary hair shaders written at The Mill and this meant we could control details such as transmission, which is the effect of a rim-light or back-light, and how much light was reflected off the hair.

Our beauty pass of the orangutan contained all the relevant AOVs, or passes, that the shading was made up of, so we could control individual aspects in compositing. Often we would look-dev the sub-surface lighting of the skin by adding in a skin cavity pass and blending this into the shallow scatter of the skin in order to enhance wrinkles. Extra specular reflections of the skin and hair were rendered, including a glint pass, which helped achieve nice pings of highlights over the hair.
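For readers unfamiliar with AOVs: in Arnold they are declared as extra outputs on the options node. The sketch below is a generic, hypothetical example of wiring a beauty pass plus a couple of extra passes to an EXR driver, not the actual pass list used on the spot:

```python
from arnold import *

AiBegin()

# One EXR driver and one filter shared by all the outputs.
driver = AiNode("driver_exr")
AiNodeSetStr(driver, "name", "beauty_driver")
AiNodeSetStr(driver, "filename", "orangutan_beauty.exr")

filt = AiNode("gaussian_filter")
AiNodeSetStr(filt, "name", "gauss")

# Each output string is "<AOV name> <type> <filter> <driver>".
options = AiUniverseGetOptions()
outputs = AiArrayAllocate(3, 1, AI_TYPE_STRING)
AiArraySetStr(outputs, 0, "RGBA RGBA gauss beauty_driver")            # beauty
AiArraySetStr(outputs, 1, "direct_specular RGB gauss beauty_driver")  # example extra pass
AiArraySetStr(outputs, 2, "sss RGB gauss beauty_driver")              # example extra pass
AiNodeSetArray(options, "outputs", outputs)

AiEnd()
```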

Which types of lights did you use most and why?

Skydome and quad lights with HDRIs. We basically created our panoramic HDRIs and then extracted the direct light sources so we could put these onto individual quad lights in Softimage.
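One simple way to picture that "extract the direct light sources" step: find the hottest region of the panorama and read off its average colour and peak intensity, which can then be transferred to a quad light by hand. The NumPy sketch below assumes the lat-long HDRI is already loaded as a float array, and is only an approximation of the workflow described, not The Mill's tool:

```python
import numpy as np

def extract_key_light(hdri, percentile=99.9):
    """Average colour and peak luminance of the brightest pixels in a lat-long HDRI."""
    luminance = hdri[..., :3] @ np.array([0.2126, 0.7152, 0.0722])
    threshold = np.percentile(luminance, percentile)
    mask = luminance >= threshold
    key_colour = hdri[mask].mean(axis=0)   # RGB to copy onto the quad light's colour
    peak = float(luminance.max())          # useful when choosing the light's intensity
    return key_colour, peak

# Synthetic data standing in for a real panorama:
fake_hdri = np.random.rand(512, 1024, 3).astype(np.float32)
colour, peak = extract_key_light(fake_hdri)
print(colour, peak)
```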

How many hair strands did the orangutan have?

Each hair was built as a strand in Softimage ICE, made up of an array of 10 points. To maintain the hair groom and keep the simulation data down, we would simulate every other point, leaving the remaining points to interpolate and follow the simulated ones. There were around 2 million hairs at most, and Arnold was a massive benefit in handling this data. It really does prove its speed and stability when it comes to rendering hair.
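The actual interpolation was done inside ICE, but the idea of simulating every other point and rebuilding the rest can be sketched in a few lines of NumPy; the halfway and tip-extrapolation rules here are assumptions, not The Mill's exact setup:

```python
import numpy as np

POINTS_PER_STRAND = 10

def rebuild_strand(guide_points):
    """guide_points: (5, 3) array of simulated points at strand indices 0, 2, 4, 6, 8."""
    strand = np.zeros((POINTS_PER_STRAND, 3))
    strand[0::2] = guide_points
    # Interior odd points sit halfway between their simulated neighbours.
    strand[1:-1:2] = 0.5 * (guide_points[:-1] + guide_points[1:])
    # The tip (index 9) simply extends past the last simulated point.
    strand[-1] = guide_points[-1] + 0.5 * (guide_points[-1] - guide_points[-2])
    return strand

# Example: a straight guide strand along the Y axis.
guides = np.column_stack([np.zeros(5), np.linspace(0.0, 1.0, 5), np.zeros(5)])
print(rebuild_strand(guides))
```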

Can you tell us a bit about the texturing workflow?

Most of the texturing was done in Mari, with multiple UVs, and then processed through a piece of software we have written called Texman. This allowed us to ensure all the high-resolution textures were optimised in size, contained only the relevant channels and were in the right file format (i.e. EXR-based .tx files).
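Texman itself is proprietary, but the final conversion step it handles, producing tiled, mip-mapped .tx textures for Arnold, is normally done with OpenImageIO's maketx utility. The small wrapper and folder path below are hypothetical:

```python
import subprocess
from pathlib import Path

def convert_to_tx(source_dir):
    """Convert every EXR in a folder to a tiled, mip-mapped .tx texture."""
    for exr in Path(source_dir).glob("*.exr"):
        tx_path = exr.with_suffix(".tx")
        subprocess.run(["maketx", str(exr), "-o", str(tx_path)], check=True)

convert_to_tx("textures/orangutan")   # hypothetical texture folder
```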

How important is 3D motion blur in a project like this, and how practical is it in Arnold?

3D motion blur is essential for shots where the orangutan is performing fast, dynamic moves. We would often use 2D motion blur for shots with little movement from the camera or the orangutan, but it would not hold up in the faster-moving shots. This is where Arnold proved itself, giving good results at a very small (sometimes unnoticeable) cost to our render times.
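For context, enabling 3D motion blur in an Arnold scene comes down to giving the camera a shutter interval and supplying more than one motion key for transforms and geometry. The Python sketch below is a generic illustration with made-up values, not the production setup:

```python
from arnold import *

AiBegin()

# The camera's shutter interval defines how much of the frame interval is blurred.
cam = AiNode("persp_camera")
AiNodeSetStr(cam, "name", "render_cam")
AiNodeSetFlt(cam, "shutter_start", 0.0)   # shutter opens at the frame
AiNodeSetFlt(cam, "shutter_end", 0.5)     # and closes halfway through the interval

# Objects carry more than one motion key so Arnold can blur between them.
obj = AiNode("polymesh")
AiNodeSetStr(obj, "name", "blurred_mesh")
matrices = AiArrayAllocate(1, 2, AI_TYPE_MATRIX)   # 1 element, 2 motion keys
# ... fill the two keys with the transforms at shutter open/close, then:
# AiNodeSetArray(obj, "matrix", matrices)

AiEnd()
```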

How big was your render farm for this project and what were your render times?

We have a render farm of about 280 machines. The specs vary, but a typical machine (and these are not new or quick) is an Intel Xeon E5620 @ 2.40GHz with 8 physical cores and 48GB of memory. Render times ranged from about 15 minutes a frame right up to 2 hours if we were rendering a close-up or the mother and baby orangutan in the same frame. We used Arnold version 4.1.3.3 and we are now in the process of upgrading to 4.2.x.
