Digital Domain & The Many Layers of Maleficent
COW Library : TV & Movie Appreciation : Tim Wilson : Digital Domain & The Many Layers of Maleficent
Digital Domain has been a visual effects success story since its founding in 1993, for both artistry and science.
In fact, the company has split its seven Academy Awards between the arts and the sciences: Best Visual Effects Oscars for Titanic, What Dreams May Come and The Curious Case of Benjamin Button; and four Scientific and Technical Achievement awards for proprietary technology for tracking, compositing (aka, Nuke! Yep, that Nuke), volumetric rendering, and fluid simulation.
Digital Domain has in fact done over 100 features (as well as advertising and games), and you own a good number of 'em, from early successes like Apollo 13, The Fifth Element and True Lies, to recent hits like Iron Man 3, X-Men: Days of Future Past and, now, Maleficent.
Kelly came on to the project in July 2012, as live-action shooting began. Maleficent's first motion capture shoot followed three months later. As many as 500 members of Digital Domain's teams worked on parts of the film, with a peak of 300 or so through much of production.
The early days of the project, Kelly told us, were particularly devoted to research and development. While Digital Domain had two primary responsibilities, they were two especially complicated tasks that pushed the team's expertise to entirely new places. The first sounds easy enough: transforming three human actors into pixified, yet realistic, versions of themselves. Not so easy if you're determined to set new benchmarks for realism.
Kelly's team started by analyzing the actors' skin under 300 light positions, enabling them to examine the actors' individual layers of skin. Digital Domain evolved their skin shaders to be able to replicate those underlying layers, and not just the surface.
They applied the same rigor to the movement of blood through the actors' faces as they spoke. "We analyzed how the blood flow changed in the actors' faces in various extreme facial expressions, and we derived maps for all these dynamic facial color changes," Kelly says. "These were integrated into our facial rigs and were automatically driven, based on the character's skin tension and facial expression.
"The blood-flow system also took the amount of time an expression was held into account, so that the longer a character held a tense expression, the redder their faces would get -- and the longer it would take them to return to normal."
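The behavior Kelly describes -- redness building the longer a tense expression is held, then returning to normal more slowly than it rose -- can be sketched as a simple time-accumulating state. This is purely illustrative; the function name, rates, and the idea of a single scalar "redness" are assumptions, not Digital Domain's proprietary rig:

```python
# Hypothetical sketch of a time-dependent blood-flow ("redness") state.
# tension is a 0..1 value derived from the facial rig's skin tension;
# the rise/fall rates are illustrative, not Digital Domain's actual values.

def step_redness(redness, tension, dt, rise_rate=0.8, fall_rate=0.3):
    """Advance the redness state by one frame of dt seconds."""
    if tension > redness:
        # A held tense expression pushes redness up over time...
        redness += (tension - redness) * rise_rate * dt
    else:
        # ...and it decays back to normal more slowly than it rose.
        redness += (tension - redness) * fall_rate * dt
    return max(0.0, min(1.0, redness))

# A tense expression held for 2 seconds at 24 fps, then released:
r = 0.0
for _ in range(48):
    r = step_redness(r, 1.0, 1.0 / 24)
peak = r
for _ in range(48):
    r = step_redness(r, 0.0, 1.0 / 24)
assert 0.0 < r < peak  # redness is decaying, but hasn't fully returned yet
```

Because the rise and fall rates differ, a longer hold produces a higher peak, and the face takes correspondingly longer to return to normal -- the behavior described above.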
Getting the pixie wardrobes to move realistically, as if they were made of flowers, while still appearing free-flowing and complementary to the pixies' figures, proved to be a considerable challenge. L to R: Thistlewit (Juno Temple), Knotgrass (Imelda Staunton), Flittle (Lesley Manville) in Disney's MALEFICENT. ©Disney 2014
Digital Domain's other major area of expertise was Maleficent herself. They created a digital double for Angelina Jolie, as well as a spectacularly complex and expressive set of wings that play a major role in the story.
"It's all a process," says Kelly. "From the beginning of a film like Maleficent, you have to start by building teams, and finding the right people for the job.
"We specifically needed to build on the years of work we had already done for photorealistic humans and facial animation. All of the people that we got on this team in both our Los Angeles and Vancouver offices were experts at that particular craft, and the kind of people who can continue to build upon that expertise."
Angelina Jolie (Maleficent) on set. Photo by Frank Connor. ©Disney Enterprises, Inc. All Rights Reserved.
Digital Domain did a lot of work around Maleficent herself. Can you talk about some of the projects that were involved with that specific character?
We were responsible for Maleficent at three ages: a young version, a teen version and an adult version -- all of which have different costumes. The adult Maleficent has several different costumes, as well. For our digital double versions, the costumes needed to match the look and the movement of the practical costumes identically. So initially, we recommended some simpler costumes, just for ease on the technical side, but of course, they went for something as complicated as you can get. [chuckle]
Things like semi-transparent fabric with many, flowing layers and lots of different materials -- plus feathers, jewelry, and hair blown by the wind. All of these things made the character effects a bit more difficult and challenging, but in the end, it really looked great.
Even in extreme closeup, the CG Angelina and CG wardrobe shots had to be indistinguishable from the practical ones. ©Disney 2014
Then, of course, her wings were very complex. Every individual feather was modeled, and individually rigged. The rigs for that were relatively difficult because of how the wing needs to fold and pose in different positions, without the feathers ever appearing to intersect.
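One way to picture the intersection constraint Kelly mentions is to spread feather angles along the wing and enforce a minimum angular gap as the wing folds. This is a hypothetical sketch of the idea, not Digital Domain's rig; all names and values here are assumptions:

```python
# Hypothetical sketch: distribute feather angles along a folding wing so
# that adjacent feathers never cross, no matter how far the wing closes.

def feather_angles(n_feathers, fold, min_gap_deg=1.0, open_span_deg=60.0):
    """fold: 0.0 = fully open, 1.0 = fully folded. Returns one angle per feather."""
    # The fan's angular span shrinks as the wing folds, but is clamped so it
    # never drops below the total gap needed to keep feathers from intersecting.
    span = max(open_span_deg * (1.0 - fold), min_gap_deg * (n_feathers - 1))
    return [i * span / (n_feathers - 1) for i in range(n_feathers)]

open_pose = feather_angles(10, fold=0.0)
folded_pose = feather_angles(10, fold=1.0)
gaps = [b - a for a, b in zip(folded_pose, folded_pose[1:])]
assert all(g >= 1.0 for g in gaps)  # even fully folded, feathers keep a gap
```

A production rig would drive this per wing segment, with feathers following the underlying bone chain, but the clamp-the-span idea is the same: the pose can collapse only as far as the no-intersection constraint allows.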
Can you tell me a little bit more about the wings? That sounds particularly arduous.
The wings are such a key component of her character and her look that they went through a pretty significant design process. They were a part of her, reflecting and complementing her internal emotions, but at the same time, they were also meant to be slightly independent from her, and almost have a character of their own.
Think of them as an extension of her own body, like another set of arms, if you will. It's really just expressing another layer of body language. If she's feeling sad or threatened, for example, you can imagine what your arms may be doing in a situation like that, or perhaps your shoulders. The wings would do something similar in those circumstances, so they're just an extension, another layer of body language.
Maleficent (Angelina Jolie). Each feather on her wings was individually modeled and rigged.
We had her body modeled for all this, so we knew where her torso was, and we knew where the wings connected. The wings were rigged in slightly different proportions, but with essentially the same bone structure as a bird's wing.
One of the issues when she was flying is that sometimes we needed to offset her photographic plate slightly to tie in to the up and down motion of the wings. Obviously, when you're flying, your weight distribution is slightly different as the wings move up and down. These are the things that we had to just eyeball, and see what looked right to make it feel realistic. Mixing up the weight distribution, and getting a little bit more active with the camera, helped a lot.
What were some of the other challenges in the movie that struck you as needing a particular finesse?
The biggest challenge for us was definitely the pixies, because of the complexity of their assets. The costumes were made up of hundreds of little individual leaves and flowers and twigs and grasses, dandelions, seed puffs -- so you're dealing with issues of transparency, multi-layers, many different kinds of materials that have different characteristics in terms of how they interact with each other. And these are all combined with the relatively complex hair grooms.
The CG pixies' performances were driven by real actors - on the left, Knotgrass (Imelda Staunton) and Flittle (Lesley Manville). These CG versions of the actors were detailed down to the finest wrinkle and pore. Digital Domain also did extensive analysis on the actors' skin under 300 different light positions and ensured that their CG actor skin matched at every level. ©Disney 2014
This was especially true on Juno Temple's character, Thistlewit. She had long, very curly blonde hair. Her hat was sort of interwoven within her hair with fine little thistles and individual flowers, all of which flung around and interacted with her body as well. The body and facial hair, the fine little peach fuzz, all had to be groomed.
We paid extreme attention to eye detail and photorealism around the eyes, modeling in muscle and connective tissue -- even eye water. We did extreme closeup tests on the eyes, way closer than we ever got in the film, to make sure that it held up. Each one of these three pixies was that complicated!
We wanted to make big improvements to our facial transfer tools, but also big improvements to the interface for the animators: making it faster, and taking advantage of the GPU on the workstations by building CG effect shaders that actually show the animators the wrinkles and fine details pretty much in real time. You're actually seeing something close to the final result in terms of placement of fine, detailed wrinkles, which especially helps when you're doing dialogue and expressive close-ups.
One of the things that's difficult is the balance between making faces seem magical and making them recognizable. How did that factor into your approach to the pixies?
That was actually a very critical component of this project, because they weren't just pixies the whole time. In the first act of the film, they're small pixies. Later, they are assigned tasks with the responsibility of looking after Baby Aurora until she turns 16. At that point, they transform into full, live-action versions of themselves.
So it was even more critical than usual that the pixie versions had a strong resemblance to the live action actor. Throughout the design process for the pixie versions of each actor, we looked very closely at the proportions of the eyes, and the relationship of the eyes and the nose and the mouth.
In general, we tried to make them look a little bit cuter. [chuckle] Bigger heads and bigger eyes, smaller noses, things like that. But that was a time-consuming process. We had to find the balance: capturing the essence of a particular actor without just making a small version of that actor, which we could have settled for -- a specifically pixie version of the actor.
We also made an upfront decision to replicate the actors as fully realized in CG. This required a very complex network of face shapes -- up to 3000 shapes for a single face -- that could be accessed and translated through the animation interface. Using our proprietary transfer tool, we could actually transform muscle shapes, bone proportions and things like that, that we could then transfer back onto any revised pixie bodies or faces. We could also keep up with any changes in dialogue very quickly.
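The transfer step described above -- carrying an expression's deformation from the actor's face onto the stylized pixie face -- can be illustrated with a minimal blendshape delta transfer. This is a generic technique sketched under the assumption of shared mesh topology; the names and the simple one-to-one delta mapping are illustrative, not Digital Domain's proprietary tool:

```python
import numpy as np

# Minimal blendshape delta transfer (generic technique, illustrative only).
# Each face is an (N, 3) array of vertex positions; the actor and pixie
# meshes are assumed to share topology, so deltas map vertex-to-vertex.

def transfer_shape(actor_neutral, actor_shape, pixie_neutral, scale=1.0):
    """Carry an expression's deformation onto the pixie's neutral face."""
    delta = actor_shape - actor_neutral   # how the expression moves each vertex
    return pixie_neutral + scale * delta  # re-apply on the stylized face

# Toy example: a 3-vertex "face" where one shape raises a brow vertex.
actor_neutral = np.zeros((3, 3))
actor_smile = actor_neutral.copy()
actor_smile[0, 1] = 1.0                   # vertex 0 moves up by 1 unit

pixie_neutral = np.full((3, 3), 2.0)      # bigger, differently proportioned head
pixie_smile = transfer_shape(actor_neutral, actor_smile, pixie_neutral)
assert pixie_smile[0, 1] == 3.0           # neutral 2.0 + transferred delta 1.0
```

With a transfer like this in place, revising the pixie's neutral face or proportions only requires re-running the transfer over the shape library -- which is what makes keeping up with up to 3000 shapes per face, and late dialogue changes, tractable.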
Digital Domain used a proprietary facial transfer toolkit, which took into account the differences in bone structure, skin -- even blood flow in expressions of anger, fear and delight -- from the actor/character and rebuilt each of the face shapes -- up to 3000 -- one at a time, translating them from a natural look on the actor's face to her "pixified" form.
How did the live-action motion capture connect to the animation process?
For the capture shoot, we had two cameramen for each of the actors: one getting reference for close-ups, and one for a full shot. The actors also wore full-body capture suits, and a helmet with four cameras that captured detailed facial tracking from 200 facial markers. Our custom head-camera system also operated at a higher resolution and frame rate for greater accuracy.
Editorial would take that footage, and cut it together with picture-in-picture inserts. Once they made the selects from the takes that they liked and we had a rough cut together, we would do very simple projections of the actors' faces onto the CG characters that we had pre-vis'd and roughly blocked out.
Then they'd do another editorial pass. Once they signed off on that, that information would then go back to our virtual production department, and they would do all the facial tracking and clean up the motion capture data, and feed that to our main production line.
At that point, it gets into the animators' hands. They continue to refine the blocking and start on the animation.
Every once in a while, there are some dialogue changes that happen, and we get new audio for that, and new camera reference shots, and have to adjust for that. Sometimes that wasn't even captured, so we would have to do it as keyframe animation.
At some point, you go through these iterations with me and the director and then you basically nail that down. Once that animation gets signed off on, you send that off to lighting and lighting does its own iterations, as does compositing.
Sometimes these different departments overlap, of course. We decided early on that we wanted to build comps with a huge number of layers, providing control of virtually every aspect of the models -- the pixie wardrobes, their faces, their bodies -- as well as the lighting and material properties. That meant that any small changes could be handled in 2D by the compositors and lighters, rather than needing to go back to be rendered in 3D from the beginning.
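The payoff of the many-layer approach is that a note like "more rim light" becomes a 2D re-weighting rather than a 3D re-render. Here is a minimal sketch of that idea as an additive light-pass comp; the layer names, gains, and additive-only merge are assumptions for illustration, not Digital Domain's actual comp setup:

```python
import numpy as np

# Hypothetical sketch of a multi-layer 2D comp: each render pass is kept as
# a separate image so its contribution can be re-weighted without a 3D
# re-render. Real comps also carry mattes, AOVs, etc.; this is additive only.

def comp(layers, gains):
    """Sum render passes, each scaled by a per-layer gain set in 2D."""
    out = np.zeros_like(next(iter(layers.values())))
    for name, image in layers.items():
        out += gains.get(name, 1.0) * image
    return out

h, w = 4, 4  # tiny stand-in images
layers = {
    "wardrobe_diffuse": np.full((h, w, 3), 0.2),
    "face_specular":    np.full((h, w, 3), 0.1),
    "rim_light":        np.full((h, w, 3), 0.05),
}

# A note asking for more rim light is handled in 2D, with no re-render:
before = comp(layers, {})
after = comp(layers, {"rim_light": 2.0})
assert np.allclose(after - before, 0.05)  # only the rim contribution changed
```

The design trade-off is storage and comp complexity for iteration speed: every pass rendered as its own layer is one more thing compositors can adjust without going back to 3D.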
With all of this happening in near-real time, we were able to spend less time integrating CG into the plates and more time on beauty -- while also shortening the time it took for shots to go from our hands into theaters.