Manual Facial Motion Capture

Minor Project- Media Design School

 Model/Actor: Aniket Ujjainkar


Facial Motion Capture:

What?

Facial Motion Capture is the process of electronically converting the movements of a person’s face into digital data using cameras or laser scanners, which is then refined in 3D software for a realistic output.

Why?

It simplifies the animation process and is an advancement over traditional animation methods.

How?

Shoot good footage of a dialogue with some facial gestures.

A variety of techniques are used for motion capture, depending on the quality of the footage you have shot. Of the many mediums used for motion capture, 2D and 3D tracking are the two main types.

3D: In 3D tracking, more than one camera is used to track the 3D coordinates of the tracking markers. For this we can use motion capture suits and 3D markers, which capture the motion of the entire skeleton accurately.

e.g. the motion capture used on Avatar, which tracks the human actor’s movement; dedicated trackers/sensors are used to record the actual coordinates of each sensor.

Advantages/Disadvantages: Mocap suits give accurate 3D positions from the trackers, and the capture can be played back in real time. However, they can be very expensive and may need special capture software.

2D:

head gear

Only one camera is used for 2D tracking, rigged to the face with a setup that follows the head movement. Data can only be captured in X and Y coordinates; no Z depth is involved. It is very important to have stabilized footage.

About the head gear: a friend of Aniket Ujjainkar (actor/model) has a tool room in his basement, so I took permission and went there to build this headgear. I was well prepared, as I wanted this headgear to be perfect: I had a blueprint for it, took some tools, nails and clamps, and was able to build the structure in three hours. I also prepared a polystyrene case for my iPhone so it could be fixed onto the headgear.

Everything worked as planned.

“Thanks to Steve Jobs for inventing the iPhone.”

This allowed me to shoot the video at 30 fps without any blur and with good color contrast.

Advantages/Disadvantages:

2D tracking is cheaper, but the tracking has to be done manually in tracking software (e.g. Nuke, PFTrack). The disadvantage is that you don’t get the output in real time.

This setup requires a lightweight camera mounted on a helmet or a cap attached to the head.

If the camera is wired, it will have to be connected to the system. The camera should have a sustainable frame rate (30 fps at 640x480). The character’s head should be very stable and aligned with the camera; if it moves, the track will add unnecessary movement to the character’s animation.

Markers:

Markers are the spots which indicate the exact locations of the joints/muscles that need to be placed and moved.

IMG_7524               IMG_7523

The footage above is an example of bad markers. This was the first attempt, where the markers were drawn very big and placed on the face without understanding the anatomy.

07_right marker

The footage above is an example of good markers. After a few attempts at marking and shooting, I finalized these markers, as they are the proper size and are placed properly with an understanding of the facial movement.

LightTest          LightTest_01  LightTest_02 LightTest_03     LightTest_04

The lighting should be even, neither too bright nor too dark. Avoid highlights and grain on the tracking markers.

Decide the number of markers that need to be placed for the character’s movement; the more markers, the better the quality of the animation we get from the tracking data.

Fewer markers are fine for game-style animation.

The placement of the markers should be according to the bone setup of the character.

Production Workflow

Production Timeline (weeks)


Analyzed the video I had shot.

(If the head moves a little more than expected, the video needs to be stabilized using software like After Effects, Nuke, etc.) Thankfully, because of the headgear I made, the video was much more stable and less blurry, and the markers were sharp enough to track.

Modeling :

Aniket_Face_Side_for_ModelingAniket_Face_Front_for_Modeling

Went to a photo studio and took hi-res front and side images of the model.

This was my first attempt at creating a face model in ZBrush. I was inspired by various models I had seen on the web. Learning new tools in ZBrush and using them to make the model look real was a challenge; thanks to my friends and tutors who helped me.

Placed the front and side images in ZBrush and started sculpting. With the help of Maya’s modeling tools, I finalized the face and kept it ready for texturing.

Zbrush_Model  Zbrush_Model_02

UVs & Texturing :

Uvs

Used Maya 2014 to unwrap the UVs, then exported the model to ZBrush for texturing. ZBrush has a Spotlight option for texturing, which allows you to paint the actual texture from the reference images.

Rigging:

Rig

Rigged the character as per the markers placed on his face. Skinning was the easiest part, as there were many joints on the face which shared the weights by themselves; I only had to adjust a few unwanted weights.

Modeled, textured and skinned the entire face in the first week and kept the rigged model ready for tracking. By this time, 25% of the work was done.

Tracking :

Interfce_PFTrack

The interface of PFTrack is node based, easy, and user friendly.

As the footage was well shot, it saved some production time, because I didn’t have to use another piece of software to stabilize it.

To start with, I did some research on a few tracking packages: MatchMover, Boujou, SynthEyes and PFTrack.

PFTrack was the only software that allowed me to export the tracking data as joints for Maya, and my model and animation are in Maya.

Face_PF_Track

Imported the footage into PFTrack and set the timeline and frames per second.

Camera_PFTrack

Imported the base mesh of the model that needed to be tracked into PFTrack. PFTrack has geometry tracking, which tracks parts of the geometry with relevance to the footage. The imported geometry then needs to be placed properly over the footage, making sure it matches the parts of the face. Once the mesh is imported, start creating the deformable groups and specify the areas that need to be tracked. Make sure to place the pivot point on the marker, and check it in the 3D view as well. Colorize the groups for better segregation, and specify the axes in which the geometry is to be tracked (PFTrack has options for rotate, scale and translate).

Refer to the image below for the data.

GeometryTrack_PFTrack

Once all the trackers are placed on the specific points, use the tracking option from the tracking section; PFTrack allows both forward and backward tracking.

TrackingOption_PFTrack

The tracked data needed further tweaking for refined animation; PFTrack gives the option to key the tracking markers. Once keyed, I re-tracked it forwards and backwards to refine the path.

Timeline_Pftrack

The graph is used to remove jerks in the animation, i.e. to refine it, but it is not very user friendly.

Pivot and Graph_PFTrack

Once everything is done, create an export node and export the track and tracking data using the FBX (Maya) option. This option allows the track to be exported as joints into Maya, which are used for the animation. It is very important to place the pivots properly, as the joints will have the same pivots when exported to Maya.

export_PFTrack

Once the track is imported into Maya, place the tracking group in alignment with the face rig and scale both rigs to the same size. Parent-constrain all the corresponding joints to the face rig, making sure the joints don’t have any values before applying the constraints.

Track_Tramsfer
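For illustration, here is a minimal Maya Python sketch of that transfer step. The joint names and the mapping between tracked joints and rig joints are hypothetical placeholders for whatever naming your own scene uses.

```python
# Sketch: constrain face-rig joints to the joints exported from PFTrack.
# All joint names below are assumptions, not the names used in the actual project.
import maya.cmds as cmds

# Hypothetical pairs: (tracked joint from PFTrack, corresponding face-rig joint).
joint_pairs = [
    ('track_browL_JNT', 'rig_browL_JNT'),
    ('track_browR_JNT', 'rig_browR_JNT'),
    ('track_lipCornerL_JNT', 'rig_lipCornerL_JNT'),
    ('track_lipCornerR_JNT', 'rig_lipCornerR_JNT'),
]

for track_jnt, rig_jnt in joint_pairs:
    # The rig joints should be zeroed out before constraining (see note above).
    values = cmds.getAttr(rig_jnt + '.translate')[0] + cmds.getAttr(rig_jnt + '.rotate')[0]
    if any(abs(v) > 1e-4 for v in values):
        cmds.warning(rig_jnt + ' has non-zero values; zero it before constraining.')
    # Drive the rig joint with the tracked joint.
    cmds.parentConstraint(track_jnt, rig_jnt, maintainOffset=True)
```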

Refining Animation :

Once the locators are placed and constrained, bake the constraints/joints with

Edit > Keys > Bake Simulation

and then delete all the constraints to avoid problems.
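The same bake-and-clean-up step can also be scripted. Below is a small maya.cmds sketch, assuming a hypothetical joint naming convention and the current playback range.

```python
# Sketch: bake the constrained motion onto the joints, then delete the constraints.
import maya.cmds as cmds

face_joints = cmds.ls('rig_*_JNT', type='joint')  # hypothetical naming convention
start = cmds.playbackOptions(q=True, minTime=True)
end = cmds.playbackOptions(q=True, maxTime=True)

# Equivalent of Edit > Keys > Bake Simulation on translate/rotate.
cmds.bakeResults(face_joints,
                 simulation=True,
                 time=(start, end),
                 attribute=['tx', 'ty', 'tz', 'rx', 'ry', 'rz'],
                 preserveOutsideKeys=True)

# Remove the parent constraints now that the motion is baked onto the joints.
cmds.delete(face_joints, constraints=True)
```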

Re-tweaking the joints was necessary, as PFTrack did not give precise movement. I created animation layers for selected lip joints, which gives an entirely new layer for keying the joints wherever needed.
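A small sketch of that animation-layer setup in maya.cmds, with placeholder lip-joint names, might look like this:

```python
# Sketch: put selected lip joints on a separate animation layer for tweaking,
# so keys sit on top of the baked track without touching the original data.
import maya.cmds as cmds

lip_joints = ['rig_lipCornerL_JNT', 'rig_lipCornerR_JNT',
              'rig_upperLip_JNT', 'rig_lowerLip_JNT']  # hypothetical names

# Create the layer and add the selected joints to it.
layer = cmds.animLayer('lipTweaks_LYR')
cmds.select(lip_joints)
cmds.animLayer(layer, edit=True, addSelectedObjects=True)

# Keys set with the animLayer flag land on the tweak layer, not the base animation.
cmds.setKeyframe(lip_joints, animLayer=layer,
                 attribute=['tx', 'ty', 'tz'], time=1)
```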

Re-tweaking takes up most of the production time.

To improve the quality of the animation, blend shapes were added. For the pose “Hiii!!” I created a blend shape where the eyebrows go up and there is a little deformation on the face. For the last frame I created another blend shape, where his eyes squeeze a little, a nice dimple appears, and there is a bit more detail in the lip area.
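As a rough illustration, the blend shapes could be created and keyed like this in maya.cmds; the mesh names, target names and frame numbers below are all hypothetical.

```python
# Sketch: add two sculpted blend-shape targets to the animated face and key their weights.
import maya.cmds as cmds

# 'face_GEO' is the animated face; the targets are sculpted copies (names assumed).
bs = cmds.blendShape('face_hiPose_TGT', 'face_squintPose_TGT', 'face_GEO',
                     name='face_BS')[0]

# Key the "Hiii!!" pose on and off around its (assumed) frame range.
cmds.setKeyframe(bs, attribute='face_hiPose_TGT', time=10, value=0.0)
cmds.setKeyframe(bs, attribute='face_hiPose_TGT', time=20, value=1.0)
cmds.setKeyframe(bs, attribute='face_hiPose_TGT', time=35, value=0.0)

# Key the eye-squint/dimple pose on for the last frame (frame number assumed).
cmds.setKeyframe(bs, attribute='face_squintPose_TGT', time=110, value=1.0)
```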

While learning 3D rigging I learned this behaviour: when the corners of the lips come close together, the lips roll out, and when the corners of the lips spread apart (a smile pose), the lips roll in or flatten. I used this technique when the character says “Motion”.

Script to Roll Lips

1. Create two locators, place them at the corners of the lips where the corner joints are, and parent-constrain each locator to its corner joint.

2. Create a Distance tool and snap it to the two locators; it gives you the start and end points and measures the distance between the lip corners.

3. Create a Set Range utility in the Hypershade and connect the distance to its Value X. Manually enter the Old Min as the closest distance between the two locators and the Old Max as the maximum distance between them. The Min value of the Set Range utility controls how far you want the lips to push forward; I used 0.15, which is sufficient for this character. The Max value can be set for the lip flatten.

4. Create two plusMinusAverage utilities, one for the upper lip and one for the lower lip, and connect the Out Value of the Set Range node to Input 3D[0] of each.

5. Add a new item in each node and enter the normal (rest) value of Translate Z / Rotate Z, making sure the operation is set to Sum.

6. Connect the upper node to the upper lip joint and the lower node to the lower lip joint. Enjoy the result!
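The same network can be built with a short maya.cmds script. The sketch below follows the steps above; the joint and locator names, the rest distances and the roll-in value are placeholder assumptions, while 0.15 is the roll-out value mentioned above and Translate Z is used as the driven channel.

```python
# Sketch: distance-driven lip roll (locators -> distanceBetween -> setRange -> plusMinusAverage).
import maya.cmds as cmds

def build_lip_roll(corner_l='L_lipCorner_JNT', corner_r='R_lipCorner_JNT',
                   upper_jnt='upperLip_JNT', lower_jnt='lowerLip_JNT',
                   closest=2.0, widest=4.0, roll_out=0.15, roll_in=-0.05):
    # Steps 1-2: locators constrained to the lip-corner joints, and a distanceBetween
    # node (what the Distance tool creates) measuring the gap between them.
    loc_l = cmds.spaceLocator(name='L_lipCorner_LOC')[0]
    loc_r = cmds.spaceLocator(name='R_lipCorner_LOC')[0]
    cmds.parentConstraint(corner_l, loc_l)
    cmds.parentConstraint(corner_r, loc_r)
    shape_l = cmds.listRelatives(loc_l, shapes=True)[0]
    shape_r = cmds.listRelatives(loc_r, shapes=True)[0]
    dist = cmds.shadingNode('distanceBetween', asUtility=True, name='lipCorner_DIST')
    cmds.connectAttr(shape_l + '.worldPosition[0]', dist + '.point1')
    cmds.connectAttr(shape_r + '.worldPosition[0]', dist + '.point2')

    # Step 3: Set Range remaps the corner distance (Old Min/Max) to the roll offset (Min/Max).
    rng = cmds.shadingNode('setRange', asUtility=True, name='lipRoll_RANGE')
    cmds.connectAttr(dist + '.distance', rng + '.valueX')
    cmds.setAttr(rng + '.oldMinX', closest)   # closest distance between the locators
    cmds.setAttr(rng + '.oldMaxX', widest)    # maximum distance between the locators
    cmds.setAttr(rng + '.minX', roll_out)     # how far the lips push forward (0.15 in the post)
    cmds.setAttr(rng + '.maxX', roll_in)      # value used for the lip flatten

    # Steps 4-6: one plusMinusAverage per lip, summing the offset with the joint's
    # rest value and driving the lip joint's Translate Z.
    for jnt in (upper_jnt, lower_jnt):
        pma = cmds.shadingNode('plusMinusAverage', asUtility=True, name=jnt + '_PMA')
        cmds.setAttr(pma + '.operation', 1)   # SUM
        cmds.connectAttr(rng + '.outValueX', pma + '.input3D[0].input3Dx')
        cmds.setAttr(pma + '.input3D[1].input3Dx', cmds.getAttr(jnt + '.translateZ'))
        cmds.connectAttr(pma + '.output3Dx', jnt + '.translateZ')
```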

Lighting & Rendering :

V-Ray was another thing that motivated me to do this project. Learning V-Ray was new to me, and without the help of Clint Rodrigues and Saumitra Kabra this would not have been achievable.

V-Ray is a little different from mental ray; it gives more control over the shaders.

I created a V-Ray HDRI setup, and even without any other lights the renders were excellent. I played with some texture maps of the character’s face, used a V-Ray material and plugged in a diffuse map, bump map, subsurface scatter map and spec map, and got a realistic output.
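For reference, a heavily simplified maya.cmds sketch of such a shader setup is shown below. It only covers the connections I am reasonably sure of (diffuse color and bump on a VRayMtl); the spec and subsurface-scatter connections depend on the V-Ray version, and all file paths and node names are placeholders.

```python
# Sketch: a VRayMtl with diffuse and bump file textures, assigned to a (hypothetical) face mesh.
# Requires V-Ray for Maya; the plugin name may differ by platform/version.
import maya.cmds as cmds

cmds.loadPlugin('vrayformaya', quiet=True)

mtl = cmds.shadingNode('VRayMtl', asShader=True, name='face_VRayMtl')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='face_VRayMtlSG')
cmds.connectAttr(mtl + '.outColor', sg + '.surfaceShader')

def file_texture(path, name):
    # Plain file node; the usual place2dTexture hookup is omitted for brevity.
    node = cmds.shadingNode('file', asTexture=True, name=name)
    cmds.setAttr(node + '.fileTextureName', path, type='string')
    return node

diffuse = file_texture('sourceimages/face_diffuse.tif', 'face_diffuse_file')
bump = file_texture('sourceimages/face_bump.tif', 'face_bump_file')

cmds.connectAttr(diffuse + '.outColor', mtl + '.color')   # diffuse map
cmds.connectAttr(bump + '.outColor', mtl + '.bumpMap')    # bump map

cmds.sets('face_GEO', edit=True, forceElement=sg)         # assign to the face mesh (name assumed)
```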

passes_for_presentation

Rendered out with V-Ray in multiple passes.

The final output looks like this:

FinalRender

Thank you for reading.

Special Thanks to

Eddie Wong,

Arien Hielkema,

Chevy McGoram,

Aniket Ujjainkar,

Clint Rodrigues,

Saumitra Kabra,

Bhushan Purohit,

Maya,

Nuke,

PF Track,

Zbrush,

Photoshop,

and for all your support.
