I completed my M.Sc. at the Faculty of Electrical Engineering at the Technion – Israel Institute of Technology. The subject of my thesis [1] was Foveated Video Extrapolation.

Video extrapolation is the task of extending a video beyond its original field of view. Extrapolating video in a manner that is consistent with the original footage and visually pleasing is difficult. My thesis aimed at very wide video extrapolation, which increases the complexity of the task. We introduced a multi-scale method that combines a coarse-to-fine approach with foveated video extrapolation. Foveation reduces the effective number of pixels that need to be extrapolated, making the process faster and less prone to artifacts, while the coarse-to-fine approach preserves the structure of the scene and retains finer details near the domain of the input video. The combined method improves both visual quality and processing time.
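To make the coarse-to-fine idea concrete, here is a minimal numpy sketch of the pyramid scheme described above. It is not the thesis implementation: the patch-based synthesis used in the actual method is replaced here by simple edge replication, and the foveated refinement is reduced to re-imposing the known pixels at each finer level. Sizes are assumed divisible by the pyramid's downsampling factor.

```python
import numpy as np

def build_pyramid(img, levels):
    # Simple 2x box-filter pyramid; assumes even dimensions at each level.
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        small = pyr[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyr.append(small)
    return pyr

def extrapolate_coarse_to_fine(img, pad, levels=3):
    """Extend `img` by `pad` pixels on each side, coarse to fine.

    The margin is synthesized once at the coarsest level (here a stand-in:
    edge replication instead of the thesis' patch-based extrapolation),
    then upsampled; at each finer level the known input pixels are
    re-imposed at the center, so fine detail is only kept near the fovea.
    """
    pyr = build_pyramid(img, levels)
    scale = 2 ** (levels - 1)
    # Coarsest level: extrapolate the (much smaller) margin.
    out = np.pad(pyr[-1], pad // scale, mode='edge')
    for lvl in range(levels - 2, -1, -1):
        out = np.kron(out, np.ones((2, 2)))  # naive 2x upsample
        p = pad // (2 ** lvl)
        h, w = pyr[lvl].shape
        out[p:p + h, p:p + w] = pyr[lvl]     # restore known region
    return out
```

Because the extrapolated margin is only ever synthesized at coarse resolution, the number of "new" pixels that must actually be generated shrinks by the square of the downsampling factor, which is the source of the speedup the foveated approach provides.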

The following clip demonstrates some results of my video extrapolation algorithm (original clips taken from the “Human Actions and Scenes Dataset”):

Here is a video of my talk at the IEEE International Conference on Computational Photography (ICCP) 2011:

[1] Amit Aides, Tamar Avraham, and Yoav Y. Schechner, “Multiscale ultrawide foveated video extrapolation,” Proc. IEEE ICCP (2011).