First Look: THE LOSERS – WB

Warner Brothers just provided IR with this still from “The Losers”, an action film starring Jeffrey Dean Morgan (“Watchmen”) and Zoe Saldana (“Avatar”), as part of their 2010 preview. The movie slams into theaters on April 9th, 2010.

Capping The Image: 2009 Produced By Conference – Los Angeles – Part II

Following the discussion of pre-vis, seeing things in advance is very helpful, especially when planning or, in some cases, adding material to a project, because it allows more to be accomplished with what is available. On the second day of the 2009 Produced By Conference at Sony Studios, motion capture (Mo-Cap), which is being optimized by the top people in the business from Bob Zemeckis to David Fincher to James Cameron, was brought into focus.

Performance/Motion Capture Production Technology

The performance basis of creatures is motivated ultimately by camera angle. Past examples of Mo-Cap were always personified by the dotted faces of actors used to create the computer grids. To illustrate this point, footage is shown of Bill Nighy playing the slithery captain in “Pirates Of The Caribbean: Dead Man’s Chest”. Nighy called his getup when he was playing the creature his “funky computer pajamas.” The thing is that Mo-Cap still can’t capture directly around the eye sockets and the actual eyes. Steve Sullivan, Sr. Technology Officer at ILM, breaks down the different perspectives of motion capture. “Facial Mo-Cap” involves a “zillion” dots on the face and uses the same fundamental techniques as “Body Mo-Cap”. Makeup can sometimes be used to make the dots; sometimes it works, but it depends on the lighting. When you start working on facial re-targeting, where you actually align a new face with a different performance, it becomes a subjective artistic endeavor. These re-targeting exercises, however, can now be done in real time. Sullivan then showed a live low-res test running off an Xbox 360 console rather than one of their supercomputers. The result is still fairly comprehensive.
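
To make the re-targeting idea concrete, here is a minimal sketch of one common delta-transfer approach (my own illustration with made-up marker data, not ILM’s actual pipeline): the per-frame offsets of the captured actor’s markers are simply re-applied to a different character’s neutral face.

```python
import numpy as np

def retarget(source_neutral, source_frame, target_neutral, scale=1.0):
    """Apply the captured per-marker offsets of one face to another face."""
    deltas = source_frame - source_neutral       # how far each marker moved this frame
    return target_neutral + scale * deltas       # same motion re-applied to the new face

# Toy data: three facial markers (x, y, z) for the actor and for the CG character.
source_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
source_frame   = source_neutral + [[0.0, 0.10, 0.0], [0.0, 0.10, 0.0], [0.0, -0.05, 0.0]]
target_neutral = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.6, 1.1, 0.0]])

print(retarget(source_neutral, source_frame, target_neutral))
```

The subjective, artistic part Sullivan mentions lives in choices like that scale factor and in how the marker correspondence between the two faces is set up.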

The realization is that once you do the actual Mo-Cap and get the data, then you can do camera coverage. After that, you just go back and key out any problematic frames. Sullivan then showed an “Indiana Jones” video game engine optimizing Mo-Cap which was put together in two hours. It wasn’t done to show the possibilities of FX but more to see how the actual story could play out. This kind of low-tech Mo-Cap with facial re-targeting is, in his mind, effective for actor freedom and director control. It also allows on-demand capture for quick experimentation and real-time documenting for timing.

Rob Bredow and Debbie Denise from Sony Imageworks got the ball rolling on Mo-Cap back when director Bob Zemeckis first came in with “Polar Express”. They went through different tests. The second one they did for “Polar” at Imageworks was green screen with full digital environments like “300”. They suggested the InMotion Mo-Cap system they had been working on to Zemeckis. Zemeckis was hesitant, but it was Tom Hanks who said that it could allow him to play both the boy and the conductor in the same movie, which ultimately seems to have swayed the director. Despite what anyone thinks, according to Bredow and Denise, these movies are still mounted as big pictures. The box that they first worked in for “Polar” was only 10 feet by 10 feet, which was the most the computer could handle at the time. On “Monster House” they had the ability to run 200 cameras shooting at once. And by the time “Beowulf” came around, they could capture horses running across the soundstage.

The important aspect to remember is that it is still a live-action shoot of sorts. The actors just don’t have to hit their marks. The “Beowulf” shoot itself took 38-40 days. Denise says Hanks called it the hardest he has ever worked because you never go back to your trailer. Bredow follows up Denise’s observations by talking about using the Sony DXC-M3A camera as a virtual camera rig, where you are using its systems to shoot in a virtual world. That is how they got the realistic camera movements in “Surf’s Up”. By comparison, when they worked on some of the elements of “Watchmen”, specifically Dr. Manhattan with Billy Crudup, the scanning was done low impact on set. After the initial scan done in live action with the dot structure, the body fabrication was scanned off a bodybuilder and melded together with the head and partial torso of Crudup. The simulation software even worked to cause the muscles to ripple. However, the camera still tracked Crudup in 2D space. Even in the small sequence where Manhattan is forming, the dynamic simulation of the circulatory system was done on set with Mo-Cap, with the final VFX adding the other elements such as heat, static and luminescence.

Greg LaSalle, founder of Mova, works in the balance between real and photoreal. He worked on “The Curious Case Of Benjamin Button” using aspects of their proprietary Mo-Cap system but wanted to let Steve Preeg, an Oscar winner for “Button”, talk about that show. Greg talks about Mova in a different light, saying that his company works with effects companies but is not actually one itself. Their specific system is called Contour Digital Capture. It records the precise motion that is needed to make something photoreal, like Brad Pitt’s older head replacement in “Benjamin Button”. When this information is output from their system, it is like a raw reconstruction. The result is a lot of reference video. It becomes what is known as a Data Driven Mesh (DDM). From this information you can create FACS shapes, normal face-shape computer setups but ones that operate in real time. This way the expressions are more natural and you are able to get subtle and accurate skin, bone and muscle positions. They have also recently integrated invisible makeup which lets the pores and ridges of the face shine through on the scan. Mova is a subsidiary of OnLive, a new video game avatar system that will be able to be optimized in the future. The test we saw has diverse possibilities in many different sectors. This is not a rendered scan operation; instead it would have scans of people filtered through an engine running in literal real time.

From that, Steve Preeg of Digital Domain, who won an Oscar last year as an animation supervisor for “Benjamin Button”, talked about what Mo-Cap allowed them to do on that specific movie as a workshop example. In “Benjamin Button”, they had to do 325 body-double head replacements in 325 shots over 52 minutes. The big obstacles and goals were locking Pitt’s performance to the body double’s head motion, making sure the body motion adhered to the dialogue timing and then making sure the eye line was right. They tended to use a blendshape route, which allows for linear transformations between individual shapes. They originally thought about doing an emotional study and creating some elements a la Andy Serkis in “King Kong”, but that was beyond the budget. What they did was strap Pitt into the Mova system. Unlike the dot system, Pitt’s face was covered with a green paste, which allowed a much denser scan of his skin. Preeg said this allowed much more pore detail to come through. The problem can be stabilization, because nothing at any time in this process is stable. They actually had to build a plug-in for Maya to deal with this issue.
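
As a rough illustration of the blendshape route Preeg describes (a generic sketch with toy numbers, not Digital Domain’s actual rig), the animated face is just the neutral mesh plus a weighted, linear mix of sculpted target shapes:

```python
import numpy as np

def blend(neutral, targets, weights):
    """neutral: (V, 3) vertices; targets: list of (V, 3) shapes; weights: one float per target."""
    result = neutral.astype(float)               # working copy of the neutral face
    for shape, weight in zip(targets, weights):
        result += weight * (shape - neutral)     # each weight slides vertices toward its target
    return result

neutral = np.zeros((4, 3))                            # toy four-vertex "face"
smile   = neutral + np.array([0.0, 0.1, 0.0])         # target shape: vertices nudged up
jaw     = neutral + np.array([0.0, -0.2, 0.1])        # target shape: vertices dropped forward
print(blend(neutral, [smile, jaw], [0.5, 0.25]))      # a half-smile with the jaw slightly open
```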

The fear was that at times the wrong part would move in simulation or that Pitt couldn’t hit a particular point. The other concern is that there is no data captured right around the eyes. They had to create an eye rig to match that. The hardest part of the eye to recreate, according to Preeg, was the goopy part where there is an angle of reflection. Add these elements to the on-set data capture, where you have to track 3-4 cameras per shot. You shoot the material on set in layers to get positional information, but then you have to track the head markers through the other elements. The on-set work was the first part. Then, several months later, you need to get Pitt’s take on Ben. Pitt was strapped into the HD Viper Cam rig. He basically had to play into the audio keys (what Preeg calls almost “visual ADR”). Add to that equation “Image Analysis”, which brings the timings closer together. The thing is, no matter what you do, the computer still can’t get across intent. This is a creative endeavor, but there always needs to be an artist behind the notion. This technology will never replace that kind of talent in terms of the actor. A big point was made on this.

Patrick Runyon, Product Specialist at Xsens, brought a real-world example of Mo-Cap with a portable system that is used in the industry. Specifically cited was Third Floor, who was represented in the pre-vis panel. Their system, called MVN, continues the trend of flexible capture. It doesn’t require cameras but uses motion trackers and wireframes inside a suit along with gyroscopes and magnetic sensors. Biomechanics eventually comes into play for the precise measurements. You just need a laptop, the case with the hardware and the suit to make it work. The on-screen motion set within a 3D grid was real time and showed the practical application of scanning real-world movements on the fly in a virtual setting.
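
For a sense of what “no cameras” means in practice, here is a minimal sketch of the core idea behind inertial capture (a generic illustration, not Xsens’ actual MVN algorithms): each sensor’s gyroscope reports angular velocity, and integrating those readings keeps a running estimate of a body segment’s orientation.

```python
import numpy as np

def integrate_gyro(R, angular_velocity, dt):
    """Advance a segment's rotation matrix R by one gyro reading (rad/s) over dt seconds."""
    wx, wy, wz = angular_velocity
    omega = np.array([[0.0, -wz,  wy],
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])          # skew-symmetric form of the reading
    # First-order update; real suits also fuse accelerometer and magnetometer data to fight drift.
    return R @ (np.eye(3) + omega * dt)

R = np.eye(3)                                    # the segment starts un-rotated
for reading in [(0.0, 0.0, 0.5)] * 100:          # one second of a slow turn about the vertical axis
    R = integrate_gyro(R, reading, dt=0.01)
print(R)                                         # roughly 28.6 degrees of accumulated yaw
```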

In the essence of wrap-up, several questions were posed. Rob Bredow at Imageworks spoke about future technologies, specifically the aspect of “passive digital”, where the scan doesn’t need to be in “line of sight”. He said that the aerospace arena is leveraging the data but that currently optical is still the highest fidelity. Bredow was also asked about adding stereoscopic elements in post in terms of Mo-Cap and also animation. He says that it is fairly easy to add an extra eye for Stereo 3D as long as they have the data from the other eye complete. He brings up the point that “Polar Express” was originally not in 3D. They were told 3 or 4 months ahead of release that this was a new angle. They got it done, but that was not the plan from the beginning.
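
Bredow’s point about the extra eye is easier to see with a toy example (my own sketch, not Imageworks’ pipeline): once a scene exists in CG, the second view is essentially the same camera offset along its own horizontal axis by an interaxial distance, then re-rendered.

```python
import numpy as np

def right_eye_position(left_position, right_axis, interaxial=0.063):
    """Offset the left-eye camera along its right vector (about 63 mm, a typical human eye spacing)."""
    return np.asarray(left_position, dtype=float) + interaxial * np.asarray(right_axis, dtype=float)

# Left camera 1.7 m up and 5 m back, with +X as its right vector.
print(right_eye_position([0.0, 1.7, 5.0], [1.0, 0.0, 0.0]))   # -> [0.063, 1.7, 5.0]
```

All the hard work, the animation, lighting and geometry data he refers to, is already there; the render essentially just runs a second time from the offset camera.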

Denise from Imageworks was also asked about a rig that was supposedly used on Spielberg’s upcoming “Tintin” movie, which apparently incorporated head-mounted cams with the body suits instead of dots. She says that it is just another way to capture all the facial tracking markers. It is simple image recognition. James Cameron’s “Avatar” is using this kind of rig as well. It simply uses a different marker set.

Mo-Cap, like pre-vis, offers more complicated and infinitely fulfilling ways to realize possibilities that couldn’t be achieved before, while also making the realization process a little smoother by allowing virtual worlds to come alive in a more organic way.

For more info on Mo-Cap, visit The Motion Capture Society.

Prepping The Image: 2009 Produced By Conference – Los Angeles – Part I

The Produced By Conference, held for the first time at Sony Studios, gives aspiring producers a look at where the new technology is going and how to get there. There was a diversity of angles to see, from the film office to new high-end HD taps to camera packages. One of the more interesting developments has been the use of computers not just to do the visual effects but to actually bring the title to pitch. Now more than ever, studios and financiers want to see a visual representation of their investment and where it will go. Two specific sessions spoke to this possibility as well as revealed some interesting tidbits regarding recent developments.

The Collaborative Process of Visual Effects: From Previs To Post

Previsualization is becoming an increasingly prevalent element in current film production techniques, especially for large productions, but there is a chain of command in what each level might do. According to the panel, there are six different iterations: pre-vis, pitch-vis, technical pre-vis, on-set pre-vis, post-vis and d-vis. David Morin, a technologist at Autodesk, first approaches the history of pre-vis. First it is sheer numbers: 49% of all box office receipts come from films with some sort of visual effects elements created in the computer, a percentage up 5% from 2007. The timeline of pre-vis shows where it has come from. It was first used with action figures in a low-tech version to plan the bike chase for “Return Of The Jedi” in 1983. In 1986, vector graphics were used to pre-vis “The Boy Who Could Fly”. The next jump was in 1992 and involved more motion animation done by Frank Foster at Sony Imageworks for the car chase in “Striking Distance”. The flying sequence for “Judge Dredd” only two years later combined animation and the increasing capability of vector graphics. “Starship Troopers” in 1997 was the first pre-vis situation that was able to integrate camera movement. The big leap forward happened in 1999 with David Dozoretz designing the pod race as a pre-vis animatic for “Star Wars: The Phantom Menace”. “Fight Club”, the same year, had the airline destruction sequence pre-vised by Colin Green at Pixel Liberation Front, with colors coding set extensions and layers. Then, 3 years later, virtual camera work for invisible effects was again integrated by Colin Green in pre-vising “Panic Room”, again for director David Fincher. This is where the aspects of what is possible with this process begin.

Gale Anne Hurd, producer of “Terminator” and “Aliens”, then discussed “pitch-vis”. The key, she says, is that in the current marketplace the reality is that you still need to pitch your product and set it up. Pitch-vis is especially good if you are working with first-time filmmakers. You can show the financiers and the studio, if need be, the filmmaker’s sensibility through these animations and show how he/she can move the camera, albeit in a virtual environment. She uses the example that she is currently working with a first-time filmmaker, a graphic novelist looking to make a debut as a feature director. They are working with a company called Image Engine, which did work on “Fantastic Four 2”, to create pre-vis visuals to use as a pitch for a project entitled “The Hunted”.

Ron Frankel, President of Proof, next approached the basis of technical pre-vis. The definition they had come to involves a collaborative process that generates preliminary versions of shots or sequences. The aspect of cost is always of interest to filmmakers, and it depends how much needs to be done. A whole film with a budget of 15 million dollars will cost about $50,000 to pre-vis. Frankel uses the example of the film “21”, which only needed certain things done and cost about $4,000 in pre-vis work. His prevalent example was the technical pre-vis work he did on “World Trade Center” with director Oliver Stone. One of the first things he did was create a 3D model of the Towers as they fell, textured with photographs. It was not supposed to be referenced as a shot. It instead gave context to where the rescue workers within the film were at any given time. The moving tech-vis pulls up and away from a moving target to show how these workers ended up, in essence, where they were trapped. A majority of the work was still frames of the environment to give perspective. There was a real effort for historical accuracy, which acted as quite a reality check according to Frankel. He shows a shot that Oliver had him work on while they were on set in Marina del Rey, where the camera pulls out over the island. The completed Double Negative VFX looked very similar to the actual tech-vis from the day. Oliver got to see it on the day, so he knew what he was getting.

Alex McDowell, production designer most recently on “Watchmen”, discussed the essence of “d-vis”, which by common sense incorporates design elements. Design visualization tests practical and virtual locations in relation to camera. He first worked with pre-vis with director David Fincher on “Fight Club”. Pre-vis was brought in initially to get more control over the visual effects. He used a grid to show the balance of different departments working and how the visualization allows them to function independently. The concept elements of art flow into d-vis, which is congruent with set and 3D design. From here the pipeline follows into set construction and decoration. For his most recent project, “Watchmen”, the art department started off with concept paintings. They derived this perspective from the initial pre-vis as well as distance elements from Google Earth. This helped with the initial physics of The Comedian’s apartment and how much needed to be constructed versus the amount of CGI facade that extended beyond and, more specifically, downwards. This d-vis, also using colors, allows one to see how much of the practical location will work and the actual cutoff where digital extensions begin. D-vis allows precise placement in terms of actual measurements. They built three city blocks in Vancouver to stand in for New York. The painting in pre-vis uses color coding which can be broken down for the crew. Another example of pre-vis use in “Watchmen” was the Owl Ship, even though it was actually built full size (I actually did a stand-up in it at the “Watchmen” junket). The CG model was done in d-vis to get director approval. The key, especially when the Owl Ship was integrated into the hangar at Nite Owl II’s home base, was that the concept art had to be data accurate.

Chris Edwards, CEO of The Third Floor pre-vis studio, explored technical pre-vis as well. Technical pre-vis incorporates generated and accurate camera, lighting, design and scene layout. The first example he cites that they worked on was “Valkyrie”, where everything was scaled in a real-world environment. Instead of using the actual template, they planned out the move in the animation, incorporating the perspective of the soundstage using green screen breakdown as well as camera placement and movement. The key is to place a diagramming tool that measures distance from both the top and side views. This measurement also takes into account the velocity of camera and actors at any given point. Usually this kind of tech pre-vis can be displayed in the matte box of the camera (precisely, in the heads-up display). Different layers of compositing can also be integrated to show the different elements at play.
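
The distance-and-velocity measurement Edwards describes reduces to very simple math; here is a small sketch (hypothetical numbers, not The Third Floor’s actual tool) that reports per-frame camera speed from keyframed positions:

```python
import numpy as np

def speed_per_frame(positions, fps=24.0):
    """Speed in units/second of a camera path sampled once per frame."""
    positions = np.asarray(positions, dtype=float)
    step_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)   # distance covered each frame
    return step_lengths * fps

# A four-frame dolly move, positions in meters.
path = [[0.00, 0.0, 0.00],
        [0.05, 0.0, 0.00],
        [0.11, 0.0, 0.01],
        [0.18, 0.0, 0.02]]
print(speed_per_frame(path))    # roughly 1.2, 1.5 and 1.7 m/s between successive frames
```

The same calculation applied to actor positions gives the actor-velocity readouts he mentions.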

Another example Edwards showed was an overhead tech pre-vis showing the swath of a camera and what it sees while moving down the street in the trailer tease elements for “Cloverfield”. Edwards also addresses post-vis, which combines digital elements and production photography to validate and aid in the footage selection process. An example he offers in this segue is from “Prince Caspian” for the Walt Disney Company. The River God sequence in that movie was going to be cut. The director had to find a way to somehow justify the sequence to the studio. What ended up being done is that the animation from the pre-vis was comped into the sequence with the live action, which bolstered the studio’s confidence. In this instance and others, Edwards says that post-vis helps strengthen a sequence before the final FX. It also helps extensively when showing to test audiences before the final FX are done. VFX producers can also use post-vis as a bidding tool and to focus efforts.

Dan Gregoire, CEO of Halon Entertainment, focused his perceptions on on-set pre-vis. The definition of on-set pre-vis is to create a real-time visualization on location to help the director evaluate captured imagery. The first interaction he had was when they were working on “THX-1138” for the DVD Director’s Cut. They were able to do 54 setups in one day; pre-vis made it happen. When he was working on “Revenge Of The Sith”, they had a guest director named Steven Spielberg for a couple of sequences, and he changed the mindset of how they could work.

Dan went to work for Spielberg on “War Of The Worlds”. Spielberg has said that he could not have made the release date for “War” if he had not done pre-vis. On set, Dan said, there are certain things you need on mobile call: laptop, Maya, Adobe, a table, laser measure and a GPS locator integrated with a digital camera, Google Maps and a roving internet connection. He also said you need one extra of everything. He jokes that on “War Of The Worlds” sometimes he had to jump an internet connection from nearby churches. Dan would travel with 1st Unit. The pre-vis on the day allowed Spielberg to make the decision to blow up the bridge, which was a big set piece. Before, it was simply going to be the destruction of a gas station. Pre-vis made this possible. For the final takedown of the pods, Steven could not go to all the locations, so they had to do virtual site scouts using pre-vis. Dan went up in a chopper and worked the scanned footage/pictures into the sequence.

Dan next worked doing on-set pre-vis for “Indiana Jones and The Kingdom Of The Crystal Skull”. Spielberg found out on this show that the key at times was to control the message to all department heads, especially if you are shooting at a breakneck pace. Sometimes Spielberg would come up with an idea and people would go right to work on it. He didn’t want people to spend money unless he was firm on the idea. Using this kind of pre-vis allowed him to disseminate information to specific people.

Roger Guyett, VFX supervisor on “Star Trek” for ILM, talked about the importance of pre-vis, aided by David Dozoretz, who showed the pre-vis of the planet dive sequence. It is better to do these pre-vis passes to see whether an actual sequence should be in a movie or not. For “Star Trek”, Guyett says that pre-vis took close to a year. But, for him, looking at pre-vis, the shooting criteria were different and had to be maintained. He wanted to try to do as much in camera as possible. He wanted to be able to shoot in real light and create a natural realism. During this he says that the gimbal was always locking up on set, so they had to keep replacing the hot heads. To add to this, he also wanted to use minimal green screen and use the sky whenever possible.

The rub was that the only way they could make it happen was to realize two settings on two sets at one location. He had to figure out a way to do the weapon platform and the ice planet at the same location. The way he ended up doing it was shooting on a wide swath of the parking lot at Dodger Stadium where a clear horizon could be seen. JJ Abrams thought he was insane, but they made it work. It was just a matter of angling the structure with the pre-vis and the green screen just right.

The sneak peek at the panel was from Marc Weigert, co-producer and VFX supervisor on “2012”, Roland Emmerich’s new action picture about the Mayan prophecy of the end of the world. He brought pictures showing the extensive green screen that was constructed in Vancouver for the shoot. The idea was to have a flow of green screen on either side of a moving car for the respective chase sequence. The scene he was building up to show involves a 10.5 earthquake hitting Los Angeles. The story setup for the scene is that John Cusack’s limo driver goes to save his kids and his ex-wife and, by extension, her new husband. The pre-vis on the effects that he shows involves the question of how you create the aspect of such an earthquake. Roland and Marc’s perception seemed to be a big rolling wave swallowing up everything in its path. The scene itself, which was rough and seemingly had not been shown publicly before, shows Cusack running into his former house and getting his ex-wife (played by Amanda Peet) and their kids into the car just as the earthquake consumes their house. He is driving ahead of the rolling destruction, but just barely, as you see houses simply swallowed by the earth. In a short piece of Emmerich’s humor, Cusack gets stuck behind two old ladies in a car who can barely see over the steering wheel. He eventually drives around them, but the old ladies’ car goes headfirst into a big piece of rock. The car heads on, but as they turn down a street, cars from a parking garage are being thrown out into their way and right ahead of them the freeway begins to topple over, sending more cars careening. Cusack accelerates as he has to clear under one part of the freeway before it completely collapses. He does so, but a high-rise begins to fall in front of them. There is no way around. I guess he is going in. After applause, Marc says they have 7 or so more weeks of FX work to do on the film. “2012” is being released in November.

The aspect of pre-vis speaks to all different types of production. With a high-end panel like this, with past, present and definitive future experience, the real-world applications for producers on this front are quite specific.

Part 2 of our coverage of the 2009 Produced By Conference will explore the intricacies of motion capture and its integration into such systems as pre-vis.
