I had my first taste of how the motion capture pipeline works in a previous topic last year, so I had a little understanding of the process. It was good to have a refresher on all the steps though, as it definitely is a technical process and I find it hard to remember it all!
Motion capture - or mocap for short - is a technology that records the movement of people or objects using multiple infrared cameras spaced out around the room. The subjects are 'marked up' with reflective dots attached to their motion capture suits. The animation data is then mapped to a 3D model, which performs the actions that were captured. The three main stages of motion capture are:
1. Sensing the motion
2. Processing the sensor data
3. Storing the processed data
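The three stages above can be sketched as a toy pipeline. This is purely illustrative (it is not Vicon's or Shogun's actual API); the marker labels and positions are made up for the example:

```python
# Toy sketch of the three mocap stages: sense, process, store.
# None of this reflects a real camera SDK - it just shows the shape
# of the pipeline described above.
import json

def sense():
    # Stand-in for camera input: one frame of raw 3D marker positions.
    return [(0.0, 1.7, 0.0), (0.2, 1.4, 0.1), (-0.2, 1.4, 0.1)]

def process(raw_markers):
    # Label each marker so a skeleton can later be built from the pattern.
    labels = ["head", "r_shoulder", "l_shoulder"]
    return {label: pos for label, pos in zip(labels, raw_markers)}

def store(frame, path="frame.json"):
    # Persist the processed frame for later retargeting/animation.
    with open(path, "w") as f:
        json.dump(frame, f)

frame = process(sense())
store(frame)
```

In a real system the sensing step runs at a high frame rate across many cameras at once, and the processing step solves for a full skeleton rather than just labelling points, but the sense-process-store shape is the same.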
The software that processes this sensor data is called Shogun; once it recognises the programmed pattern of markers, it identifies the subject as a human.
This technology is used in a number of different industries, such as film and television, video games, health and sport, and the military.
Here are some behind-the-scenes images from the first week, captured as a reference for the process:
The outcome of the first class: students animating robot characters in real time in Unreal Engine 5
"Painting" with the wand to collect spacial data for all the cameras
Image of the collaborative session, "painting" with the wand
These are reference pictures found on Vicon's website that we used as a guide for the positioning of the markers.
This is called 'marking up', where the actors have velcro reflective dots placed at particular positions on the body. The reflective material of these dots is picked up by the infrared cameras and then translated into a 3D animated skeleton.
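One small piece of that translation from markers to skeleton can be sketched in code. This is a simplified assumption about how a solver might estimate a joint centre from two surface markers (the marker names and coordinates here are invented for the example, not from Shogun):

```python
# Illustrative only: estimating a joint centre from two surface markers
# placed on either side of the elbow - a stand-in for part of what the
# skeleton solver does with the full marker set.

def midpoint(a, b):
    # Average each coordinate of the two marker positions.
    return tuple((x + y) / 2 for x, y in zip(a, b))

# Hypothetical marker positions in metres.
elbow_outer = (0.45, 1.10, 0.05)
elbow_inner = (0.39, 1.10, 0.01)
elbow_joint = midpoint(elbow_outer, elbow_inner)
# elbow_joint is roughly (0.42, 1.10, 0.03)
```

A real solver fits a whole labelled marker cloud to a calibrated skeleton rather than averaging pairs, but the idea of inferring internal joints from surface dots is the same.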
Inside the Shogun software with the 'marked up' actors registered in the 3D virtual space
Live streaming motion capture animation of robots in Unreal Engine 5