In 2007, I worked as one of the match move/layout supervisors on the first digital stereoscopic feature film, Journey To The Center Of The Earth 3D. I would like to share my experiences from this production, especially concerning the use of 3D Equalizer and Maya for stereoscopic work.
JCE (Journey to the Center of the Earth) was produced by Walden Media, directed by Eric Brevig, photographed by Chuck Shuman and stars Brendan Fraser. Principal photography and visual effects work was primarily produced in Montreal. The overall vfx supervisor was Chris Townsend and the vfx supervisor for Meteor Studios was Bret St. Clair. Other important vfx studios also did considerable work on JCE.
Previously, I had worked on other stereoscopic films, including the 65mm productions The Ant Bully - 3D (Imax) and T2-3D (5-perf). Ant Bully was pure CGI animation, so its stereoscopic problems were very different from those of a live action visual effects film like JCE.
JCE was photographed with the Pace stereo camera, the design of which was commissioned by James Cameron. The Pace design utilizes two Sony HiDef 24P cameras looking into a beam splitter. These video cameras are relatively compact, since they consist only of a lens/image sensor and no tape, disc or other recording device. A standard camcorder contains an integral recorder, but in our application, only the camera was on stage and the MPEG4 recorders were far away, in a high tech video village. Traditional 35mm or 65mm stereo cameras using an outboard beam splitter can be very large, especially in the case of the twin 65mm Showscan/Panavision cameras that we used for Cameron's T2-3D. By using compact "film look" video cameras, the stereoscopic rig was brought down to a relatively small size. Although beam splitter stereo rigs are somewhat silly looking and "Rube Goldbergish", they are often considered the most artistically flexible rig type.
The Pace stereo camera is typically fitted with matching, synchronized zoom lenses on the two stereo camera heads. On any stereo show, the critical stereoscopic settings are interocular and convergence, which can be changed dynamically during actual photography. The animated focal length, convergence and interocular values are recorded every frame and then embedded in the 1920x1080 video image. Various "single system" and "double system" techniques can be used for synchronizing this stereo meta data to the image. A separate ASCII file can be used, or in our case, stereo meta data was placed in the .dpx header and in the EXIF section of our .jpeg proxy images.
On JCE, the Pace meta data for i/o and convergence was floating point and the focal length was truncated to an integer. The rounding down of the focal length value (no fractions, just whole numbers) caused minor problems and probably will be fixed in later versions of the Pace stereo encoding software.

Interocular (i/o) is the distance between the left and right eyes. In humans, this distance is typically 2.5 inches, but artistic and eyestrain considerations mean that the photographed i/o may be set at many unusual values and might even animate during the shot. The greater the (tx) distance between the two eyes, the stronger the stereo effect. But if the effect is too strong, the human eye will not be able to fuse the binocular images together and painful eyestrain will result. The stereo effect would be lost and the viewer's eyes would physically hurt.
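A minimal sketch of reading this kind of per-frame stereo meta data. The whitespace-delimited ASCII layout and the field names are my assumptions for illustration (the real Pace data lived in the .dpx header and in .jpeg EXIF), but it shows the practical consequence of the encoder truncating focal length to an integer:

```python
# Hypothetical per-frame stereo metadata parser. The 'frame io conv focal'
# column layout is an assumed example format, not the actual Pace encoding.

def parse_stereo_metadata(lines):
    """Parse lines of 'frame interocular convergence focal' into a dict."""
    frames = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        frame, io, conv, focal = line.split()
        frames[int(frame)] = {
            "interocular": float(io),    # floating point, as on JCE
            "convergence": float(conv),  # floating point, as on JCE
            # On JCE the focal length was truncated to a whole number,
            # which caused minor problems with zoom lenses:
            "focal": int(float(focal)),
        }
    return frames

data = parse_stereo_metadata([
    "# frame  io    conv  focal",
    "1001     2.25  1.40  27.6",
])
```

Note how the fractional part of the focal length (27.6) is simply lost, which is exactly the rounding problem described above.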
Convergence is the pan angle between the two stereo cameras. Imax films typically maintain the left and right cameras parallel to one another, but many other stereo systems "toe in" the cameras. This is a controversial and subjective process, with different artistic camps. What is important to the stereo matchmover is that the convergence, i/o and focal length of the original photography must be determined so that when visual effects are added to the plates, the stereo depth of the CGI elements closely matches the live action elements.
The stereo meta data from the Pace camera is very useful, but because it uses mechanical encoders in the chaotic, real life field conditions of Hollywood production, the data will not be perfect. Typically, it needs to be trimmed in the matchmove/layout process. Much more on that later.

In the world of matchmove, it has become well known that lens distortion must be compensated for in demanding shots. This is especially true for anamorphic lenses and zooms. Anamorphic lenses typically display heavy barrel distortion (where the corners of the frame bow in), which in 3DE would be a positive distortion value. JCE was primarily shot with zoom lenses. At wide angles, the JCE zoom had heavy pin cushion distortion (where the corners of the image bowed out), but as the lens was zoomed to longer focal lengths, the distortion reduced and became more neutral. Because of the extensive use of the Technocrane on JCE, none of our shots involved actual zooming and the zoom lenses were merely used as variable primes. 3DE does possess strong tools for calculating zooming shots, but these situations are often very challenging, especially if the camera also translates.

On large productions, it is common to photograph lens distortion grid charts at multiple focal lengths. 3DE can attempt to automatically determine lens distortion, but distortion testing with grids can also be very helpful.
In addition, zoom lenses often display "mustache" distortion, where one part of the image frame bows in and another region bows out (barrel and pin cushion). These distortions are more difficult to correct.

When barrel distortion is corrected for in a matchmove system, many of the pixels at the edge of frame may be pushed outside of the frame, truncated and lost. There are different methods of dealing with this. Some processes will make the undistorted frame larger (past 2K) and other systems will keep the pixel positions at the edges of the frame the same and offset the more central pixels. On JCE, we added a 25% border to every plate before we matchmoved. Later, the lighting department redistorted their renders and the composite department then cropped back to 1920x1080 in the final stages.
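The resolution arithmetic for the padded-plate workflow can be sketched as follows. Whether the 25% was split per side or applied overall is not stated, so this sketch assumes 25% overall; the helper name is mine:

```python
# One reading of the "25% border" workflow: the plate grows before
# undistortion so edge pixels pushed outward survive, and the comp crops
# back to delivery resolution at the end. Assumes the 25% is overall,
# not per side -- an interpretation, not a documented JCE setting.

def padded_resolution(width, height, border=0.25):
    """Resolution of the plate after padding for lens distortion removal."""
    return (round(width * (1 + border)), round(height * (1 + border)))

padded = padded_resolution(1920, 1080)   # matchmove and lighting work here
delivery = (1920, 1080)                  # comp crops back in the final stages
```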
Unified Solve vs. Meta Data
When matchmoving stereo images, both the left and right eyes must be tracked and they must be placed in the same stereo space. Typically, the analog encoder stereo meta data from the camera rig is not accurate enough for demanding shots such as set extensions. This is not a criticism of the Pace camera system. After spending many years operating motion control systems, it became obvious to me that mechanically measuring the exact sub-pixel position of cameras and optics (outside of laboratory conditions) is almost impossible on a film set. The stereo meta data from the Pace camera will get you close to an accurate stereo reading, but only stereoscopic matchmove will provide an exact result.

If you attend a stereoscopic film (such as Beowulf), you can do an experiment that will illustrate some 3D principles.
First, remove the 3D glasses from your face. Observe that the stereo effect is caused by the fact that closer objects will display more left/right separation on the screen and distant objects will be more "converged". In Imax films, objects at infinity will typically have no divergence and closer objects will diverge. But in other systems, the stereographer will often pan the left and right cameras towards one another minutely, changing the convergence point by using camera toe-in (ry). In this case, both near and far stereo objects will diverge and only a mid point will converge. A small adjustment of the pan angle between the two cameras has a large visual effect on the audience. On a 2K image, the near objects can have no more than 80 pixels of stereo shift and the distant objects can have no more than about 30 pixels of negative shift (wall-eye). The human eye will tolerate more stereo separation for near objects (80 pixels) than for distant objects (30 pixels). This is because the human eye muscles are built to pan towards one another, but not to pan away from one another (wall-eye). These pixel separation limits (80 hither, 30 yon) are subjective, approximate and depend on projection techniques, choreography and editorial style.
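The asymmetric limits above (roughly 80 pixels for near objects, 30 for far) make a natural sanity check. A minimal sketch, using the article's subjective numbers and an assumed sign convention (positive disparity = in front of the screen plane):

```python
# Rough disparity sanity check at 2K, using the article's approximate
# limits: near (hither) objects up to ~80 px of positive shift, far
# (yonder) objects up to ~30 px of negative wall-eye shift. The sign
# convention is an assumption for illustration.

NEAR_LIMIT_PX = 80
FAR_LIMIT_PX = 30

def disparity_ok(left_x, right_x):
    """left_x / right_x: screen x of the same feature in the two eyes."""
    disparity = left_x - right_x   # positive = in front of the screen plane
    if disparity >= 0:
        return disparity <= NEAR_LIMIT_PX      # eyes converge: more tolerant
    return -disparity <= FAR_LIMIT_PX          # eyes diverge: less tolerant
```

The asymmetry in the two limits is the code-level restatement of the point about eye muscles: convergence is cheap, divergence hurts.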
A surprisingly large amount of vibration between the left and right cameras can exist and the stereoscopic images can still easily be fused by the human eye. For example, on T2-3D, our twin 65mm stereo cameras were placed on a camera car driving on rough terrain. Because of the cantilever design, the two cameras would shake against one another. If you removed your stereo polarizing glasses in the theater, the vibration between the two images was disturbing. But when you put your stereo glasses back on, your eye fuses the images perfectly and the unsteadiness between the stereo images disappears. But if we are talking about a matchmove vfx shot, then the vibration between the two eyes will not be accurately recorded in the meta data and Unified Solve may be necessary to converge the left and right matchmoves properly. Even though the Pace stereo camera is a low vibration design, mechanical and optical inconsistencies between the left/right optics can show up as the lens is rack focused, etc.

The job of the stereo matchmover is to figure out what the convergence and interocular of the original camera were set at. As mentioned, for critical shots like set extensions, the encoder data will not be good enough and Unified Solve (to be defined shortly) is needed.
FYI, the human eye can easily see a half pixel shift in stereo placement. I will give an example from Ant Bully - 3D. A typical shot would be of an ant walking on the ground, with the ant and ground elements rendered in different passes and then combined in Nuke. But the uneven terrain was rendered with displacement mapping, which meant that the original smooth, flat ground plane geometry that the character animator originally walked her ant over was now bumpy. The rendered ground then had variegated height that the character animator could not anticipate. In monoscopic, this is not a problem, but in stereoscopic, the CGI ant would often appear to either float above the dirt or to have her feet buried in the dirt. It is generally only practical to fine tune this fix in the final 2D comp, not in the earlier Houdini or Renderman stage. And we found that the eye could sense a stereo mismatch of as little as half a pixel (at 2K).
The important point is that for critical shots, the left and right eyes must be matchmoved in depth precisely to one another. Many of the JCE shots involved actors floating in air, so their feet would not actually touch the CG set. In this less demanding situation, the stereo meta data from the Pace camera was good enough, after a simple trim in the Maya camera. But when the live action feet are touching the CGI or there is a set extension, then the more accurate Unified Solve is used.
When using meta data, only the right eye is matchmoved and the left eye transform is sent to Maya from the camera encoders. But in Unified Solve, both eyes are matchmoved together and the left and right camera solves "talk" to one another inside of 3DE.
Unified Solve is basically bringing both the left and right plates into 3D Equalizer and tracking a percentage of identical features for both eyes. Unified Solve is especially easy with blue screen markers, since 3DE Marker mode finds the exact center of the dot pretty accurately for both eyes. Finding the same center of the marker for both eyes is important, so that in stereo the resulting matchmove of the blue screen doesn't float in front of or behind the correct stereo depth.
On JCE, wind machines often blew the blue screen markers around, so the 3DE translation smoothing value was increased to avoid a noisy motion solve.

In other JCE shots, the actors would walk on rocks. These features require Pattern Tracking (not Marker Tracking), and it takes more human intervention to ensure that the feature is tracked at exactly the same spot in the left and right eyes.
Because earlier versions of 3DE already supported using multiple plates and cameras, Unified Stereo Solve has always been a standard feature in 3DE. Unified Stereo Solve is just a nickname for an already existing capability. Nevertheless, special 3DE stereo constraint features were created by Rolf and Uwe to enhance Unified Solve.
When using Unified Stereo Solve, you can combine the Autotracker with manually created tracks. A certain number of common left/right points will need to be tracked with user intervention. This will ensure that the feature is the exact same spot on the set for the left and right eyes. The remainder of the tracks do not need to be common between the left/right eyes and you could optionally use the autotracker.

As mentioned, Unified Stereo Solve is almost always necessary on stereo set extensions, since meta data will rarely be perfect enough. You can trim the encoder meta data in Maya, but you can almost never get all of the features in the eyes lined up without Unified Solve. You could use a "least square fit" (LSF) surveyed solver like rasTrack to improve the meta data for the secondary eye (left, usually), but the 3DE Unified Stereo Solve is ultimately the easy and elegant method.
It is well known that surveyless matchmove does not always create plausible solves. Sometimes you end up with a calculation that looks like an M.C. Escher painting, lovely in 2D, but ludicrously impossible in 3D space. Typically a 3DE user will use Reference Frames to add parallax and solve this problem. Similarly, Unified Stereo Solve may not always provide plausible stereo solves. For this reason, 3DE added stereo constraints. Theoretically, the left and right cameras should be exactly left/right of one another and not at different heights (local camera space) or weird skew angles.
The 3DE stereo constraints ensure that only the i/o (tx) and convergence (ry) between the cameras can be different between the two eyes. All other values (ty, tz, rx, rz) will be constrained to zero.

On JCE, only the convergence was animated on set, but other shows (like T2-3D and Avatar) will also have animated i/o. 3DE will support this technique in later releases. Animated i/o will be read into 3DE from an ASCII file.
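What the constraint enforces can be sketched in a deliberately simplified pose model (translation plus a single pan angle; real cameras carry full rotations). All function and field names here are mine, not 3DE's:

```python
import math

# Sketch of the 3DE-style stereo constraint: in the right camera's local
# space, the left camera may differ only in tx (interocular) and ry
# (convergence toe-in); ty, tz, rx and rz are pinned to zero.
# Simplified pose model for illustration only.

def left_camera_from_right(right_pose, interocular, convergence_deg):
    """right_pose: {'t': world translation, 'ry': world pan in degrees}."""
    th = math.radians(right_pose["ry"])
    # Right camera's local +x axis in world space (y-up, pan about y):
    local_x = (math.cos(th), 0.0, -math.sin(th))
    tx, ty, tz = right_pose["t"]
    return {
        # Offset to the left along the right camera's local x by the i/o;
        # ty and tz are untouched (constrained to zero difference):
        "t": (tx - interocular * local_x[0], ty, tz - interocular * local_x[2]),
        # Only ry may differ, by the convergence angle:
        "ry": right_pose["ry"] + convergence_deg,
    }

left = left_camera_from_right({"t": (0.0, 0.0, 0.0), "ry": 0.0},
                              interocular=2.5, convergence_deg=1.0)
```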
Trimming meta data
Many matchmoves cannot be solved with Unified Solve, so meta data must be used and then trimmed. In this mode, the user matchmoves the master (usually right) eye and then trims the left eye's meta data. Typically there will be large errors to correct side to side (convergence/pan/ry) and left/right interocular (tx). But there will also be slight up-down stereo errors (rx, ty), which will also need to be trimmed. Usually the trim is done in Maya, but occasionally the trim is done in 2D in Shake. The trim may even need to be animated on tricky shots.
Again, the important point here is that the eye will tolerate fairly large stereo convergence errors between the eyes, but when you are matchmoving and compositing between two eyes, then stereo errors must be fixed so that the layered elements sit properly in stereo depth.

In our system, as soon as the meta data was read into Maya from the EXIF/JPEG file, the meta data was baked. This is because the rgbA premult image does not support EXIF meta data (because it is TIFF, SGI, etc., not jpeg) and we didn't want our meta data to disappear when we switched from the blue screen plate to the extracted blue screen premult plate.
When trimming meta data, you typically trim the convergence first and the interocular later. If you can find a point in the image where the cameras converged, trim the secondary (left) eye so that the tracking marks line up in both eyes (at the mid distance convergence point). Then, trim the interocular. Typically during the i/o trim, the near and far points for the left eye will "swing and pivot". The far points (yonder of convergence) will swing one way and the near points (hither) will swing the other, all pivoting around the convergence point. If you don't trim the convergence first, you may need to use more of a confusing trial and error process to trim the meta data. Even if the convergence is at infinity, it is probably easier to start trimming with the convergence, before i/o.
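For a sense of scale, it can help to estimate how much pan (ry) trim a given pixel offset at the convergence point implies. This is a back-of-envelope pinhole sketch; the 24.576 mm horizontal aperture is an assumed example filmback, not a measured JCE camera value, and distortion is ignored:

```python
import math

# Approximate ry trim that shifts the secondary eye's image by a given
# number of pixels at the convergence point. Pinhole model; filmback
# value is an assumed example.

def convergence_trim_degrees(pixel_offset, focal_mm,
                             h_aperture_mm=24.576, width_px=1920):
    mm_per_px = h_aperture_mm / width_px
    return math.degrees(math.atan2(pixel_offset * mm_per_px, focal_mm))

# Even a whole field-chart box (80 px) of offset at a 30 mm focal length:
trim = convergence_trim_degrees(80, focal_mm=30.0)
```

Under these assumptions, the full 80 pixel foreground budget corresponds to only about two degrees of pan, which illustrates why a tiny toe-in adjustment has such a large visual effect and why encoder data gets close but tracking is needed for the last fraction of a pixel.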
In Houdini and Maya, the view port will optionally display an overlaid 12 field chart. Since there are 24 horizontal grid boxes, each grid box coincidentally occupies 80 pixels on HD (1920/24=80). This is extremely convenient, since 80 pixels is the exact maximum suggested value for the foreground stereo divergence between the left and right eyes. If you toggle the Maya/Houdini/Shake view port between the left and right eyes, you can easily see whether the fg image is shifting more than one grid's worth of offset (in Shake, we just took a bitmap image of the field chart from Maya and added that rendered grid to the Shake composite tree).
Many shots will need minor stereo convergence fixes in the 2D stage (i.e. Shake). In this case, the compositor may need to zoom in slightly on the image, since HD theoretically has no spare pixels on the left or right and the 2D shift convergence fix will reveal missing picture. This may not even be a problem on a blue screen shot where the actors don't touch the left or right frame line.

Some isolated shots may have so much stereo eyestrain erroneously baked into them by the original photography that the shot is unfixable. Cameron suggests showing such a shot flat in monoscopic, using the right eye image for the left as well. Another possible solution is to take the right monoscopic image and convert it to stereoscopic. There are several companies that specialize in the stereoscopic conversion process and their proprietary technology and patents vary widely from one another.

Certain stereo plates may look good as far as eyestrain considerations go, until the layout process reveals unanticipated problems. Let's say that we have an actor in a blue screen shot and he reaches his hand out towards the camera. We know that the yonder objects usually should have no more than -20 pixels of divergence and the hither objects should have no more than +80 pixels of divergence. But what if these conditions are seemingly met, until a CGI background greater in depth than the blue screen is added in layout? And what if foreground particle systems (dust, debris, rain, etc.) are added to the composite or the actor is reaching out to a down stage CGI element? Then we may experience serious stereo eyestrain in the composite, even though the original photography was apparently fine. So we see that the original photography will often need stand-in objects ("stuffies") on set to help judge the final stereo effect.
3DE proxy image system
As you probably know, 3DE has a terrific proxy system for image sequences. Typically, the F5 button is user assigned to bring up full res, the F6 button half res and the F7 button quarter res. Great feature.
Object tracking in 3DE Stereo
Often a Unified Stereo Solve will work well without 3DE stereo constraints, or sometimes will even be superior. Please experiment; matchmove is not an exact science. Warning: Object tracks in 3DE may not work properly without the 3DE stereo constraints enabled and the object may locate in different space left/right. This is not a problem for camera solves and you may even wish to solve a stereo object as a camera track and then convert to object motion in Maya.

Surveyless Object tracking is always ambiguous when it comes to scale. In 3DE, you can track multiple moving Objects with cameras. On JCE, we object tracked mine cars. But were they miniature mine cars, close to the camera, or giant mine cars, far from the camera? The monoscopic layout artist can make any subjective decision about scale that she likes, but not so in stereoscopic. Since there are stereo eyes triangulating on the depth of the Object track, the scale of the Object track becomes more "objective". Once the i/o and the convergence of a stereo camera are calculated, the scale of an object must be at a certain value. The vanishing points and epipolars of stereo cameras demand that the object have a certain scale.

By using the stereo constraints in 3DE (instead of regular Unified Solve), the Object track will appear correct in both the left and right Maya eyes.
Maya considerations for stereo layout
Many stereo tools were created for JCE in 3DE, Shake and Maya. For example, the 3DE warp distortion can be performed completely in Shake, using a Shake node created by Mark Visser of Meteor Studios (available Open Source on the 3DE website). A stereo camera in Maya was originally designed by Eric Gervais-Despres. www.geo-z.com/ericgd
Eric now makes a commercial version available to the public. Multiple stereo cameras and projectors were used, with suitable naming conventions so that the stereo tools would act on all of the cameras and projectors in the Maya scene.

A great feature in Shake and other comping software is a hot key to toggle between two images that need to be compared. In Shake, the hot key is "1". So in Maya, we reassigned the "1" key so that the view port would also toggle between the left and right stereo eyes. Very useful for rapid A/B stereo comparisons.
(Download Python script for Maya "1" button hotkey) michaelkarp.net/lr.zip
In 3DE and Maya Live, there is an Autocenter tool that will keep the 2D or 3D feature centered in the viewport. With a MEL script, it is also possible to autocenter anything in Maya. So we set it up so that when we zoomed in the viewport, using the over scan, the left/right autocenter and over scan were always synchronized.

Maya supports rgbA premult plates in the image planes used in Playblasts. For stereo blue screen work, I would suggest the use of an image plane with the alpha channel enabled. It can be very distracting to judge the stereo effect on a move test with the blue screen not extracted. The visual depth cues are...weird. So on JCE, we had three pairs of image planes prewired: the rgb (with the plate unaltered), rgbA premult (blue screen extracted) and Maya Live rotoPlane for subpixel accuracy and image caching. We could easily toggle between the three image planes. Typically, matchmove/layout artists would submit two playblasts for dailies, one optimized for tracking and the other for layout. The tracking test would show the original blue screen and markers and the layout test would have the blue screen removed by multiplication/extraction and would be more "artistic".

Surprisingly, almost all matchmove tests for Hollywood 2K feature films are rendered at half res 1K, which is quite adequate for most shots and allows major efficiency increases over full res 2K tests.

It is often useful to adjust the Maya image plane depth, but it is mandatory for 2.5D "projector/card" shots. A premult image plane depth will generally be near the camera and regular blue screen image planes will be set for a distant depth. For projector/card shots where the camera translates away from the projector, the image plane depth should be animated so that the image is near the position of the critical action. The image plane depth for projector shots will always be a subjective compromise.
For mine car shots, the image plane depth of the actors was placed at the front of the mine car, but in other shots, the depth was placed on an important actor.
Headlight shots in stereo
There are many shots in JCE of characters carrying flash lights. Often times the beam was not bright enough in the plate or there wasn't enough smoke on the set for the beam to illuminate, so a volumetric light pass would need to be rendered. Beams require smoke or particulates in the air to be visible, but that smoke can interfere with the photography and extraction of a blue screen. So many JCE shots required beams to be created in post and of course to be matchmoved in stereoscopic.

We created special rigs in Maya for the headlights. Using the plate for the right eye (master), the user animates a locator in Maya that matches up with the position of the light. It is very useful if the headlight locator is rigged as follows:
- A null called Scalar is created and made a child of the right (master) camera.
- Scalar trans/rot should be locked at zero.
- Scalar uniform scale (XYZ with all the same value) will be animated later. Start with a value such as one (do not use zero for scale).
- A headlight locator is a child of Scalar.
- Aim constrain the headlight to the right camera.
- Looking through the right camera, animate just the (local) tx and ty of the headlight locator; you can lock the tz at zero.
Notice that any adjustments to the left eye will not change the right eye at all. This is a huge time saving technique.

You can also 2D track in 3DE and export the track to Maya with Export_Single_MM_PointsV1.3detcl. This script creates a camera in Maya with a locator that follows the 2D track from 3DE. You can then constrain the 2D point Maya camera Scene node to the right Maya camera. Then, reparent (bake) the 2D locator so that it is a child of Scalar. The 3DE 2D tracker is not perfect, but overall is the best 2D tracker GUI and engine available in any 2D or 3D package. So it is very convenient to 2D track in 3DE and then export to Shake, Maya, etc.

By Aim constraining the headlight locator to the right camera, you keep the locator "square" to the right eye.

After you animate the headlight translations for the right eye and set the depth for the left eye, then you can animate the rotations. Create another locator, HeadlightRotate, that is Point constrained to the headlight locator and then animate only the rotations of the HeadlightRotate node. You can also constrain a cylinder to the HeadlightRotate node, for visual reference of the quality of the animation. Obviously we created a script to automate the creation of each headlight rig.

On one occasion, we needed a different stereo camera for the headlight track than we did for the set extension for the scene. Because the moving headlight was on the actor's head and was distant from the stationary set, the matchmove for the set was only suitable for the set extension. In this situation, we duplicated the main matchmove camera and gave it special trims so that the headlights would track properly. Since the headlights are much closer to the camera than the set, any minute translation errors on the main matchmove were magnified when tracking the headlights.
- After the right eye headlight animation looks good, look through the left Maya camera. The depth of the locator will be wrong in the left eye, so animate the uniform scale (XYZ) of Scalar so that the depth from the left eye is correct.
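The geometric reason the Scalar rig works can be shown with a toy pinhole model: because Scalar sits at the right camera's position, uniformly scaling it slides the locator along the right eye's viewing ray, so the right-eye screen position never moves and only the left eye sees the depth change. All numbers here are made up, and the locator is given a nonzero local depth purely for the demonstration:

```python
# Toy pinhole demonstration of the Scalar depth trick. Not Maya code --
# just the projection math that makes the rig work.

def project(cam_x, point, focal=1.0):
    """Screen x for a camera at (cam_x, 0, 0) looking down -z."""
    x, y, z = point
    return focal * (x - cam_x) / -z

def scaled(point, s):
    """Uniform scale about the Scalar pivot (the right camera position)."""
    return tuple(c * s for c in point)

right_x, left_x = 0.0, -2.5          # left eye 2.5 units to the left
locator = (1.0, 0.0, -10.0)          # local position under Scalar

right_before = project(right_x, locator)
right_after  = project(right_x, scaled(locator, 2.0))  # Scalar scale = 2
left_before  = project(left_x, locator)
left_after   = project(left_x, scaled(locator, 2.0))
```

The right eye's projection is identical before and after the scale, which is why the right-eye animation never needs retouching while the left-eye depth is dialed in.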
It is common in matchmove to take a Maya node and give it a new parent. For example, an Object track could be converted to a camera track, or vice versa. A Maya node could be baked to a new parent, so that the scene is cleaned up for publishing to other departments. Maya constraints and baking are used for this, but we used an automated Reparent script to greatly simplify this process, available here. Be sure to hide the viewports when running the script. It will speed up the bake, since the image plane is not needlessly read in:
Variable speed shots
Several shots on JCE needed speed changes and time warps. There are two basic approaches.
- The time warp can be rendered into the plate in Shake, Combustion, etc. using Twixtor, etc., and the rerendered plate can be matchmoved.
Tracking a time warped plate is problematic. Time warping creates motion artifacts that may be acceptable to the audience, but confusing to matchmove software. The same goes for 2D repos of plates for matchmove. So we tracked clean plates and then applied the time warp in Maya, using a custom script. This way, the time warp could be modified at any time and we weren't locked into the original time warp decisions. The time warp in Maya was baked back into all of the Maya animation and the final time warp (if different) could be exported back to Shake in a simple ASCII file.
- The time warp can be calculated in Shake, Combustion, etc., but the actual plate that is matchmoved is not altered. Rather, the speed change is exported to Maya as an ASCII file and Maya simulates the speed change.
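Simulating the speed change on the 3D side can be sketched as resampling the baked animation through a frame mapping, the kind of data that would travel in the ASCII handoff. The dict-based format and function name are my assumptions, not the show's pipeline:

```python
# Resample a baked per-frame animation channel through a time warp given
# as a mapping from output frame to source (plate) frame, with linear
# interpolation between source keys. Format is assumed for illustration.

def retime(channel, frame_map):
    """channel: {source_frame: value}; frame_map: {out_frame: source_frame}."""
    out = {}
    for out_frame, src in frame_map.items():
        lo = int(src)
        t = src - lo                       # fractional position between keys
        a = channel[lo]
        b = channel.get(lo + 1, a)         # hold the last key at the end
        out[out_frame] = a * (1 - t) + b * t
    return out

camera_tx = {1: 0.0, 2: 10.0, 3: 20.0}     # baked per-frame values
half_speed = {1: 1.0, 2: 1.5, 3: 2.0}      # output frame -> plate frame
slowed = retime(camera_tx, half_speed)
```

Because the warp lives as data rather than being burned into the plate, it can be changed at any time and re-exported, which is the whole point of this approach.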
Projector shots and corner pins
We had a large number of stereo "projector shots". These were two and a half D "card" shots.
- The 3DE matchmove was imported into Maya.
- The camera was duplicated and called projector.
The farther the camera translates from the projector, the more obvious the "cheat" becomes. You may need to aim the projector image plane back at the camera, so the image doesn't get too squished or keystoned. The convergence may need to be trimmed on these fancy projector shots, so that the stereo depth stays correct with the cheated perspective.

When doing a projector shot repo, the plate needs to be rerendered. Either you can rerender the plate in Renderman or you can export a corner pin to Shake. Adapting a corner pin script from HighEnd3D, we exported the four corners of the image planes into a Shake tracker script. Automatic compensation was made for lens distortion, time warps, image padding, etc.
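The core of that corner-pin export is just projecting the four corners of the image plane into the render camera. A minimal sketch with a toy pinhole camera; the filmback and focal values are made-up examples, and the production script additionally compensated for distortion, time warps and padding:

```python
# Project the four corners of an image-plane card into screen space --
# the 2D pin positions that would be written into a Shake tracker script.
# Toy pinhole camera at the origin, looking down -z; numbers are invented.

def project(point, focal=35.0, width=1920, height=1080, h_aperture=24.576):
    x, y, z = point
    px_per_mm = width / h_aperture          # pixels per mm of filmback
    sx = width / 2 + focal * x / -z * px_per_mm
    sy = height / 2 + focal * y / -z * px_per_mm
    return (sx, sy)

# An image-plane card 50 units in front of the camera, 16:9 proportions:
card = [(-5.0, -2.8125, -50.0), (5.0, -2.8125, -50.0),
        (5.0, 2.8125, -50.0), (-5.0, 2.8125, -50.0)]
pins = [project(c) for c in card]
```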
- Many elegant schemes were used to offset the camera from the projector, both for nodal pans on the projector and other shots where the camera translates away from the projector.
"Stabilizing" projector shots
A cool stabilization trick is this: Let's say a crane or hand held shot has too much vibration. If you matchmove the shot, you can take that solve in Maya and make that camera into a projector. Make another stationary Maya camera that looks at the projector camera and the stationary camera will see a completely smooth, vibration free image. Since the 3D solved locators from 3DE are stationary in world space, a stationary camera will see a perfectly stable image. You can also duplicate the projector camera to a render camera, smooth the render camera's animation and then you will retain the original motion of the plate, with the objectionable bumps removed.

A deeper discussion of matchmove stabilization is here: michaelkarp.net/6dof_crbn_nl.mov
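The principle behind the trick can be demonstrated in a few lines: a 3DE-solved locator is fixed in world space, so the jittery shot camera sees it wander frame to frame, while a stationary camera viewing the same world sees it rock solid. Numbers are invented for the demonstration:

```python
# Toy version of the stabilization trick: compare a solved world point as
# seen by a vibrating camera versus a stationary one.

def screen_x(cam_x, point, focal=1.0):
    """Toy pinhole at (cam_x, 0, 0) looking down -z; returns screen x."""
    x, y, z = point
    return focal * (x - cam_x) / -z

world_point = (1.0, 0.0, -10.0)          # stationary solved locator
jitter = [0.00, 0.04, -0.03, 0.02]       # per-frame shot-camera tx vibration

shaky_view = [screen_x(j, world_point) for j in jitter]   # drifts each frame
stable_view = [screen_x(0.0, world_point) for _ in jitter]  # identical each frame
```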
Many JCE shots had actors walking on treadmills. We would typically matchmove the camera in 3DE and hand track the treadmill Object motion in Maya. Next we would reparent the camera to be a child of the treadmill. Reparenting is completely different than parenting, because the child node animation is baked, so that its position in world space does not change. The final step is to Mute the animation on the treadmill Object tracking. By muting instead of deleting the treadmill object tracking, the artist can easily undo changes if necessary.

By this simple procedure, an object track is easily converted to a camera track.
Triangulation scripts in 3DE
There are two powerful new triangulation tcl scripts from 3DE. These allow 3DE points that won't solve Passive to still be passively calc'd in 3D space. One script is intended for monoscopic and the other is for stereoscopic.
Animating ik characters in stereo
There are many shots in JCE where ik human characters were matchmoved over plates of live actors, in stereo. It is typical in character matchmove to carefully pose the ik model for the first frame lineup. This is especially important in stereo, because the character must be correctly posed for both left and right eyes. The most important point is that the i/o and convergence for the camera should be set so that they work well with the posed ik character. The scale and posing of an ik character in stereo must be precise and all other layout and world space decisions must follow downstream from this first and demanding step.
Mine car sequence
We had many scenes with actors in a mine car chase. Typically, the three actors (and personal mine cars) would be shot in their own blue screen plates and then the three plates would be choreographed and combined in Maya. The actors would be standing on a mine car on a motion base, but the bottom chassis of the mine car would be missing and had to be added as a set extension matchmove. Set extension in stereo can be complicated, since as soon as you translate the projector image, the nature of the 2.5D cheat becomes apparent. So one of the three mine cars (the most difficult to set extend) became the master and the other mine cars' set extensions could be less exact.
The stereo trim of the camera is optimized so that the most demanding mine car projector looks correct in stereo. The stereo mine car sequence was the most complicated that we worked on, and there are many vital subtleties that I am leaving out.

Many object tracks were done in 3DE of a rectangular mine car "chariot". These 3DE solves were very good, but the point cloud wasn't always perfectly perpendicular. Normally we would have put lattices on the mine car in Maya, but this would have broken the kinematics of the pump and wheel linkages of the mine car. So our brilliant rigger Marc-Andre published standard blend shapes, so that we could deform the model without breaking the fk of the chariot wheels.

Also, the animated motion of the mine car was used as a path to procedurally extrude the mine car rails, bridges and trestle.
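The path-extrusion idea can be sketched in a few lines (an illustration of the concept, not the Maya procedure from the show; all names and values are mine): sample the animated path in the ground plane and offset each sample perpendicular to the direction of travel by half the rail gauge to get the two rails.

```python
import math

def rail_offsets(path, gauge):
    """Offset a 2D ground-plane path sideways to get left/right rail points."""
    left, right = [], []
    for a, b in zip(path, path[1:]):
        dx, dz = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dz)
        px, pz = -dz / length, dx / length  # unit perpendicular to travel
        h = gauge / 2.0
        left.append((a[0] + px * h, a[1] + pz * h))
        right.append((a[0] - px * h, a[1] - pz * h))
    return left, right

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # straight run along X
left, right = rail_offsets(path, gauge=1.0)
print(left, right)  # rails offset +/-0.5 in Z
```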
One problem in stereoscopic vfx is creating roto mattes that have the proper matching stereo depth between the eyes. One possible solution to this problem is to place a texture card in the Maya scene and 3D paint the rotoscope split line on the texture card. Instead of rotoscoping the bezier splines twice for the two eyes, you rotoscope the split only once and use the left/right Maya cameras to view the single texture card from different perspectives. If the texture card is placed at a suitable depth in the Maya scene, then the stereo rotoscoping naturally blends smoothly between the left/right images.
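A quick way to see why this works (a hedged sketch with simple parallel pinhole cameras and hypothetical values): a split-line point painted once on a card at depth Z projects into the left and right cameras with exactly the disparity that depth implies, so the roto matches between eyes for free.

```python
def project_x(point_x, point_z, cam_x, focal):
    """Image-plane x of a point for a camera at (cam_x, 0) looking down +Z."""
    return focal * (point_x - cam_x) / point_z

focal, interocular = 35.0, 6.5
card_x, card_z = 10.0, 500.0  # split-line point painted on the card

xl = project_x(card_x, card_z, -interocular / 2.0, focal)
xr = project_x(card_x, card_z, +interocular / 2.0, focal)
print(xl - xr)  # disparity = focal * interocular / card_z
```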
There are several methods of viewing in stereo. In the vfx facility theater, dual projectors with polarizers will be used, or a RealD system can be installed. At the workstation, three general methods are used:
Anaglyph uses red/green glasses and doesn't look very good, but it has the advantage of working with both CRT and LCD monitors. 3DE has a built in anaglyph function, as does the Eric Gervais-Despres Maya stereo camera.

Shuttered glasses only work with CRT workstation monitors, because of the lag of LCDs. Shuttered glasses are supported by Framecycler. I didn't like the brand of shuttered glasses that we used on JCE (too dark and flickery), but I do like the Crystal Eyes glasses that I'm now using. An infrared transmitter sends the sync information to the glasses. Framecycler automatically loads stereo image sequences which have the proper naming and padding conventions for the left/right eyes.

My favorite method (although not everyone's) is the mirror. A front surface mirror (Edmund Scientific) is mounted in front of the monitor, the glass almost sideways to the viewer. Special renders of the playblast were produced where the left eye was mirror-flipped in X (scale x = -1) and placed to the left of the right playblast frame. The silvered part of the mirror points to the left. The artist puts her eye right up to the glass, views the right image with the naked right eye, and views the left (flopped) image with the left eye by looking at the mirror.

This mirror technique can be hard to get used to, but once you train your eye, the quality of the image is superb. On Ant Bully, final composites for Imax were all judged with this method. It works with CRT and LCD, and with single or dual monitors. One problem with the mirror is that the user can artificially fix slight convergence problems, since the mirror can be rotated by hand. On the other hand, polarizer and shuttered glasses have projection geometries that can't be cheated by the viewer, so one always knows if the convergence of the matchmove/comp is correct.

In the past, I have worked with photographers who were color blind, although they succeeded professionally anyway.
Similarly, on JCE, we actually had a couple of artists who could not see stereoscopically. At the beginning, we kept this a deep dark secret from management, but in reality, even a one-eyed matchmove/layout artist can do stereo work with little problem. They can easily understand the problem intellectually and produce great work. Stereo viewing is always subjective, and a certain supervisor or director with the "reference eyes" will be the ultimate judge of the stereo effect.
Thanks to the Meteor JCE stereo matchmove/layout team: