Posted by insidereel
Following on from pre-vis, seeing things in advance is very helpful when planning, or in some cases adding material to, a project, because it lets filmmakers accomplish more with what is available. On the second day of the 2009 Produced By Conference at Sony Studios, the focus turned to Mo-Cap (or Motion Capture), which is being leveraged by the top people in the business, from Bob Zemeckis to David Fincher to James Cameron.
Performance/Motion Capture Production Technology

The performance basis of digital creatures is ultimately motivated by camera angle. In the past, mo-cap was personified by the dotted faces of actors used to create the computer grids. To illustrate the point, footage was shown of Bill Nighy playing the slithery captain in “Pirates Of The Caribbean: Dead Man’s Chest”. Nighy called his getup for playing the creature his “funky computer pajamas.” The catch is that mo-cap still can’t capture directly around the eye sockets and the actual eyes. Steve Sullivan, Senior Technology Officer at ILM, broke down the different perspectives on motion capture. “Facial Mo-Cap” involves a “zillion” dots on the face and uses the same fundamental techniques as “Body Mo-Cap”. Makeup can sometimes be used to make the dots; it works in some cases, but it depends on the lighting. Facial re-targeting, where you actually align a new face with a different performance, becomes a subjective artistic endeavor, but these re-targeting exercises can now be done in real time. Sullivan then showed a live low-res test running off an Xbox 360 console rather than one of ILM’s supercomputers. The result was still fairly convincing.
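The core of the re-targeting idea the panel describes can be sketched in a few lines: motion captured on one face, expressed as deltas from that actor's neutral pose, is re-applied to a different face's neutral mesh. This toy version assumes the two meshes already share vertex order; real pipelines add correspondence solves and the artist cleanup Sullivan alludes to.

```python
import numpy as np

def retarget(source_frame, source_neutral, target_neutral):
    """Transfer per-vertex motion deltas from a source face to a target face."""
    delta = np.asarray(source_frame, dtype=float) - np.asarray(source_neutral, dtype=float)
    return np.asarray(target_neutral, dtype=float) + delta

# Toy 2-vertex "faces" (illustrative data, not from the panel).
src_neutral = np.array([[0.0, 0.0], [1.0, 0.0]])
src_frame   = np.array([[0.0, 0.2], [1.0, 0.1]])   # a captured expression
tgt_neutral = np.array([[0.0, 0.5], [2.0, 0.0]])   # a different face
tgt_frame = retarget(src_frame, src_neutral, tgt_neutral)
```

Because this is just per-vertex addition, it is cheap enough to run per frame, which is consistent with the real-time demo described above.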
The realization is that once you do the actual mo-cap and get the data, you can then do camera coverage; after that, you just go back and key out any problematic frames. Sullivan then showed an “Indiana Jones” video game engine running mo-cap, a demo assembled in two hours. It wasn’t done to show the possibilities of FX but more to see how the actual story could play out. This kind of low-tech Mo-Cap with facial re-targeting is, in his mind, effective for actor freedom and director control. It also allows on-demand capture for quick experimentation and real-time documentation of timing.
Rob Readow and Debbie Denise from Sony Imageworks got the ball rolling on Mo-Cap back when director Bob Zemeckis first came in with “Polar Express”. They ran different tests; the second one they did for “Polar” at Imageworks was green screen with full digital environments, like “300”. They suggested to Zemeckis the InMotion Mo-Cap system they had been working on. Zemeckis was hesitant, but it was Tom Hanks pointing out that it could allow him to play both the boy and the conductor in the same movie that seemingly swayed the director. Despite what anyone thinks, according to Readow and Denise, these movies are still mounted as big pictures. The capture volume they first worked in for “Polar” was only 10 feet by 10 feet, which was the most the computers could handle at the time. On “Monster House” they had 200 cameras shooting at once, and by the time “Beowulf” came around, they could capture horses running across the soundstage.
The important aspect to remember is that it is still a live action shoot of sorts; the actors just don’t have to hit their marks. The “Beowulf” shoot itself took 38-40 days. Denise says Hanks called it the hardest he has ever worked, because you never go back to your trailer. Readow follows up Denise’s observations by talking about using the Sony DXC-M3A camera as a virtual camera rig, where you use its systems to shoot in a virtual world; that is how they got the realistic camera movements in “Surf’s Up”. By comparison, when they worked on some of the elements of “Watchmen”, specifically Dr. Manhattan with Billy Crudup, the scanning was done low-impact on set. After the initial scan done in live action with the dot structure, the body fabrication was scanned off a bodybuilder, with the head and partial torso of Crudup melded together. The simulation software even worked to make the muscles ripple. However, the camera still tracked Crudup in 2D space. Even in the small sequence where Manhattan is forming, the dynamic simulation of the circulatory system was done on set with Mo-Cap, with the final VFX adding the other elements such as heat, static and luminescence.
Greg LaSalle, founder of Mova, works at the balance between real and photoreal. He worked on “The Curious Case Of Benjamin Button” using aspects of their proprietary Mo-Cap system, but left it to Steve Preeg, an Oscar winner for “Button”, to talk about that show. LaSalle discusses Mova in a different light, saying that his company works with effects companies but is not actually one itself. Their system is called Contour Digital Capture. It records the precise motion needed to make something photoreal, like the head replacement for the older Brad Pitt in “Benjamin Button”. The information output from their system is like a raw reconstruction: a lot of reference video that becomes what is known as a Data Driven Mesh (DDM). From this information you can create FACS-style face shapes of the kind standard facial computer systems use, but ones that operate in real time. This way the expressions are more natural, and you are able to get subtle and accurate skin, bone and muscle positions. They have also recently integrated invisible makeup, which lets the pores and ridges of the face shine through on the scan. Mova is a subsidiary of OnLive, a new video game avatar system that will be able to be leveraged in the future. The test we saw has possibilities in many different sectors: this is not a rendered scan operation, but scans of people filtered through an engine running in true real time.
From there, Steve Preeg of Digital Domain, last year’s Oscar winner as an animation supervisor for “Benjamin Button”, talked about what Mo-Cap allowed them to do on that specific movie as a workshop example. In “Benjamin Button”, they had to do 325 head replacements onto a body double across 325 shots spanning 52 minutes. The big obstacles and goals were locking Pitt’s performance to the body double’s head motion, making sure the body motion adhered to the dialogue timing, and getting the eye line right. They tended to use a blendshape route, which allows for linear transformations between individual face shapes. They originally thought about doing an emotional study and creating some elements à la Andy Serkis in “King Kong”, but that was beyond the budget. What they did was strap Pitt into the Mova system. Unlike the dot system, Pitt’s face was covered with a green paste, which allowed a far denser scan of his skin; Preeg said this let much more pore detail come through. The problem can be stabilization, because nothing at any point in this process is stable. They actually had to build a plug-in for Maya to deal with this issue.
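The "blendshape route" Preeg mentions reduces to simple linear algebra: a final face is the neutral mesh plus a weighted sum of shape deltas, which is exactly the "linear transformations between individual shapes" idea. A minimal sketch, with hypothetical toy data rather than anything from Digital Domain's pipeline:

```python
import numpy as np

# Toy 3-vertex "face"; each blendshape is stored as a delta from neutral.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])

shapes = {
    "smile":      np.array([[0.0, 0.1, 0.0],
                            [0.0, 0.1, 0.0],
                            [0.0, 0.0, 0.0]]),
    "brow_raise": np.array([[0.0, 0.0, 0.1],
                            [0.0, 0.0, 0.0],
                            [0.0, 0.2, 0.0]]),
}

def blend(weights):
    """Linearly combine weighted shape deltas onto the neutral mesh."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * shapes[name]
    return out

face = blend({"smile": 0.5, "brow_raise": 1.0})
```

Animating the face then means animating only the weights over time, which is part of why the approach scales to hundreds of shots.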
The fear was that at times the wrong part would move in simulation, or that Pitt couldn’t hit a particular point. The other concern is that there is no data captured right around the eyes; they had to create an eye rig to match that. The hardest part of the eye to recreate, according to Preeg, was the goopy part where there is an angle of reflection. Add these elements to the on-set data capture, where you have to track 3-4 cameras per shot. You shoot the material on set in layers to get positional information, but then you have to track the head markers through the other elements. The on-set work was the first part; several months later, you need to get Pitt’s take on Ben. Pitt was strapped into the HD Viper Cam rig, where he basically had to angle into the audio keys (what Preeg calls almost “visual ADR”). Add to that equation “Image Analysis”, which brings the timings closer together. The thing is, no matter what you do, the computer still can’t get across intent. This is a creative endeavor, and there always needs to be an artist behind the notion; this technology will never replace that kind of talent in terms of the actor. A big point was made on this.
Patrick Runyon, Product Specialist at Xsens, brought a real-world example of Mo-Cap: a portable system used in the industry, with Third Floor, which was represented on the pre-vis panel, specifically cited as a user. Their system, called MVN, follows the continuing trend of flexible capture. It doesn’t require cameras but instead uses motion trackers wired inside a suit, combining inertial sensors with magnetometers. A biomechanical model then supplies the precise measurements. You just need a laptop, the case with the hardware and the suit to make it work. The onscreen motion, set within a 3D grid, was real time and showed the practical application of scanning real-world movements on the fly in a virtual setting.
In wrapping up, several questions were posed. Rob Readow of Imageworks spoke about future technologies, specifically “passive digital” capture, where the scan doesn’t need line of sight. He said that the aerospace arena is leveraging that kind of data, but that optical capture currently still offers the highest fidelity. Readow was also asked about adding stereoscopic elements in post, in terms of both Mo-Cap and animation. He says it is fairly easy to add an extra eye for Stereo 3D as long as the data from the original eye is complete. He brings up the point that “Polar Express” was originally not in 3D; they were told three or four months ahead of release that this was a new angle. They got it done, but that was not the plan from the beginning.
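Readow's "extra eye" point follows from the fact that a CG scene already contains a full virtual camera: the second eye is just the same camera translated along its local right axis by the interocular distance, then re-rendered. A hedged sketch of that offset (names and the 65 mm default are illustrative assumptions, not from the panel):

```python
import numpy as np

def second_eye(position, right_axis, interocular=0.065):
    """Offset a virtual camera along its normalized right vector.

    65 mm is a commonly quoted average human interocular distance;
    productions tune this per shot.
    """
    right = np.asarray(right_axis, dtype=float)
    right /= np.linalg.norm(right)
    return np.asarray(position, dtype=float) + interocular * right

left_eye = [0.0, 1.7, 5.0]    # original camera position (meters)
right_vec = [1.0, 0.0, 0.0]   # camera's local right axis
right_eye = second_eye(left_eye, right_vec)
```

This is why the conversion is feasible in post for CG material, while the same trick is unavailable for footage shot with a single physical camera.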
Denise from Imageworks was also asked about a rig supposedly used on Spielberg’s upcoming “Tintin” movie, which apparently incorporates mounted head cams with the body suits instead of dots. She says that it is just another way to capture all the facial tracking markers; it is simple image recognition. James Cameron’s “Avatar” is using this kind of rig as well, simply with a different marker set.
Mo-Cap, like pre-vis, offers more complicated and infinitely fulfilling ways to realize possibilities that couldn’t be achieved before, while also making the realization process a little smoother by allowing virtual worlds to come alive in a more organic way.
For more info on Mo-Cap, visit The Motion Capture Society.
Posted in Entertainment Industry Coverage
Tags: 3D, Animation Supervisor, Beowulf, Billy Crudup, Body Mo Cap, Brad Pitt, Contour Digital Capture, Data Driven Mesh, David Fincher, DDM, Debbie Denise, Digital Domain, Dr. Manhattan, Entertainment Industry Coverage, Facial Mo Cap, FACS, Game Engine, Greg LaSalle, HD Viper Cam, ILM, Indiana Jones, InMotion, inside reel, James Cameron, Mo Cap, Monster House, Motion Capture, Motion Capture Society, Mova, MVN, On Live, Oscar, Oscar Winner, Passive Digital, Patrick Runyon, Pirates Of The Caribbean, Pirates Of The Caribbean: Dead Man's Chest, Polar Express, Produced By Conference, Producers, Rob Readow, Robert Zemeckis, Scanning, Sony DXC-M3A, Sony Imageworks, Soundstage, Stereo, Stereoscopic, Steve Preeg, Steve Sullivan, Steven Spielberg, Surf's Up, The Curious Case Of Benjamin Button, tim wassberg, tom hanks, Watchmen, XBOX 360, Xsens