Researchers at Microsoft have changed the way we look at videos shot from a first-person perspective. Using a newly developed algorithm, they can transform long, boring, and sometimes nauseating footage from helmet cams or Google Glass into a silky-smooth hyperlapse that is far more enjoyable to watch.
If the source video were a walk down the street, it would be jittery and full of monotonous shots: waiting at the crosswalk for the light to change, or looking down at your feet. The algorithm renders footage with Steadicam-like smoothness.
How the process works could be described as computer wizardry. It begins by removing redundant frames from the source video, such as shots spent waiting at a crosswalk. The algorithm then reconstructs the trajectory of the camera as a 3D depth map using structure from motion.
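To illustrate the first step, redundant frames can be detected by thresholding the difference between consecutive frames. This is a minimal sketch, not the researchers' actual method; the function name, threshold value, and toy frames are all made up for illustration.

```python
import numpy as np

def drop_redundant_frames(frames, threshold=1.0):
    """Keep a frame only if it differs enough from the last kept frame.

    frames: list of HxW (or HxWx3) pixel arrays.
    threshold: mean absolute pixel difference below which a frame is
    considered redundant (e.g. standing still at a crosswalk).
    """
    kept = [frames[0]]
    for frame in frames[1:]:
        diff = np.mean(np.abs(frame.astype(float) - kept[-1].astype(float)))
        if diff >= threshold:
            kept.append(frame)
    return kept

# Toy example: three identical "standing still" frames collapse to one.
still = np.zeros((4, 4))
moving = np.full((4, 4), 10.0)
result = drop_redundant_frames([still, still, still, moving])
# result holds 2 frames: the first still frame and the moving one.
```

A real pipeline would compare frames after alignment rather than raw pixels, but the idea of pruning near-duplicate shots is the same.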
Each frame is then analyzed for similar geometry to help create a path for the camera to follow. The path is chosen on a per-pixel basis to remain smooth and produce a consistent render. Shots that interrupt the flow are identified and removed.
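The paper's per-pixel path optimization is far more involved, but the basic idea of replacing a jittery trajectory with a smooth one, and flagging frames that sit far from it, can be sketched with a simple moving average. Everything here (function names, window size, distance threshold) is a hypothetical stand-in, not the actual algorithm.

```python
import numpy as np

def smooth_path(positions, window=5):
    """Moving-average smoothing of a camera trajectory.

    positions: (N, 3) array of per-frame camera positions, as might be
    recovered by structure from motion. Returns an (N, 3) smoothed path.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )

def flag_interruptions(positions, smoothed, max_dist=0.5):
    """Mark frames whose camera sits far from the smooth path."""
    dist = np.linalg.norm(positions - smoothed, axis=1)
    return dist > max_dist

rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(0, 0.1, size=(50, 3)), axis=0)  # jittery walk
smoothed = smooth_path(path)
outliers = flag_interruptions(path, smoothed, max_dist=1.0)
```

The smoothed trajectory has much smaller frame-to-frame jumps than the raw one, which is the property that makes the final render feel stabilized.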
The frames for the final rendering are composed of several neighboring frames from the source video, stitched together and blended to match white balance and exposure. When comparing the resulting render to a stabilized time-lapse of the same source, there is no contest.
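The blending step can be sketched as a weighted average of neighboring frames after normalizing each frame's overall brightness, a crude stand-in for the exposure and white-balance matching the researchers describe. The function and the toy exposure values below are illustrative assumptions, not the published method.

```python
import numpy as np

def blend_frames(neighbors, weights=None):
    """Blend several neighboring source frames into one output frame.

    Each frame's mean brightness is scaled to match the first frame
    before a weighted average, so differently exposed shots of the
    same scene blend without visible seams in brightness.
    """
    neighbors = [f.astype(float) for f in neighbors]
    target = neighbors[0].mean()
    matched = [f * (target / f.mean()) for f in neighbors]
    if weights is None:
        weights = np.ones(len(matched)) / len(matched)
    out = np.zeros_like(matched[0])
    for w, f in zip(weights, matched):
        out += w * f
    return out

# Two frames of the same scene shot at different exposures.
dark = np.full((4, 4), 50.0)
bright = np.full((4, 4), 100.0)
blended = blend_frames([dark, bright])
# After gain matching, both frames have mean 50, so the blend does too.
```

A production renderer would also warp each neighbor onto the chosen camera pose before blending; this sketch only shows the photometric matching.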
The researchers' video gives a technical breakdown of the algorithm and what it does to achieve these results.
Think of all the GoPro videos out in the world that could be condensed into something more watchable. Let's just hope this project doesn't suffer the same fate as Photosynth.
There is no timeline for when this hyperlapse algorithm will be available, but its creators, Johannes Kopf, Michael Cohen, and Richard Szeliski, hope to turn it into a Windows app soon.