A virtual cinematography system that learns from examples.

Supervisor : Remi Ronfard

Contact : ronfard at inria dot fr

Description : Developers of interactive 3D applications, such as computer games, are expending increasing effort on the challenge of creating ever more realistic experiences in virtual environments. In recent years, there has been a clear move in games towards a more cinematographic experience, recreating and reusing the narrative devices of conventional cinema (the prominence of cut-scenes, continuity editing between shots).

As a result, there is a pressing need to automate cinematography and to develop camera control techniques that can operate within complex, dynamic and interactive environments. Such camera control algorithms should be capable of computing optimal viewpoints and of performing appropriate editing (or montage) when switching between viewpoints. While a substantial amount of work has been devoted to optimal camera placement, the research community has made little effort towards expressive computational models for automated editing.
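As a rough illustration of what "computing optimal viewpoints" can mean in practice, the sketch below ranks candidate camera positions with a hand-made cost function. Everything here is a simplified assumption for illustration (2D positions, the `viewpoint_cost` function, the weights and ideal values), not the system described in this proposal.

```python
import math

# Hypothetical example: score candidate camera positions by how close they
# come to a desired shot distance and a desired viewing angle on the subject.
# The cost function and its weights are illustrative assumptions.

def viewpoint_cost(cam, subject, ideal_dist=3.0, ideal_angle_deg=30.0):
    dx, dy = cam[0] - subject[0], cam[1] - subject[1]
    dist = math.hypot(dx, dy)                      # camera-to-subject distance
    angle = math.degrees(math.atan2(dy, dx)) % 360 # viewing angle on subject
    # Penalise deviation from the ideal distance and angle.
    return abs(dist - ideal_dist) + 0.05 * abs(angle - ideal_angle_deg)

subject = (0.0, 0.0)
candidates = [(3.0, 1.7), (10.0, 0.0), (0.5, 0.5)]
# Pick the candidate viewpoint with the lowest cost.
best = min(candidates, key=lambda c: viewpoint_cost(c, subject))
```

A real system would of course optimise over continuous camera parameters and account for occlusion and motion, but the structure (candidate viewpoints scored against composition goals) is the same.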

In this master internship, we propose to learn from real examples of carefully chosen movie scenes in order to drive automated editing in virtual worlds. These real examples will be augmented with an annotation scheme consisting of (1) the identification of classical shot types such as close-up, long shot, apex and parallel, (2) a detailed description of screen composition (positions and orientations of all actors and objects), and (3) a narrative summary of the action taking place in the scene. The annotated examples will be used to train statistical models that predict how well a given sequence of shots conveys an action and how well the editing will perform.
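The three-part annotation scheme above could be represented along the following lines. This is only a sketch under assumed names (`ShotType`, `ActorPlacement`, `AnnotatedShot` and their fields are hypothetical, not the project's actual schema):

```python
from dataclasses import dataclass
from enum import Enum

class ShotType(Enum):
    # (1) classical shot categories named in the proposal
    CLOSE_UP = "close-up"
    LONG_SHOT = "long shot"
    APEX = "apex"
    PARALLEL = "parallel"

@dataclass
class ActorPlacement:
    name: str
    position: tuple          # assumed (x, y) screen position, normalised to [0, 1]
    orientation_deg: float   # assumed facing direction in degrees

@dataclass
class AnnotatedShot:
    shot_type: ShotType           # (1) shot-type label
    composition: list             # (2) ActorPlacement for each actor/object on screen
    action_summary: str           # (3) narrative summary of the action

# Example annotation for a single shot of a dialogue scene.
shot = AnnotatedShot(
    shot_type=ShotType.CLOSE_UP,
    composition=[ActorPlacement("Alice", (0.4, 0.5), 15.0)],
    action_summary="Alice reacts to the news",
)
```

A corpus of such records, one per shot and grouped by scene, is the kind of training data the statistical models would consume.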

The main difficulties consist in:

  1. identifying and extracting a set of meaningful parameters from real movies
  2. building an expressive editing model that learns from this set of parameters
  3. evaluating the quality of the model in its application to interactive virtual environments
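To make difficulty (2) concrete, an editing model could start as simply as a first-order Markov model over annotated shot types: transition probabilities are estimated from example scenes, and a candidate shot sequence is scored by its log-likelihood. This is a minimal baseline sketch, not the expressive model the internship aims for; all names and the smoothing floor are assumptions.

```python
from collections import Counter, defaultdict
import math

def train_transitions(sequences):
    """Estimate shot-type transition probabilities from example scenes."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        probs[prev] = {shot: c / total for shot, c in nxts.items()}
    return probs

def log_score(seq, probs, floor=1e-6):
    """Log-likelihood of a shot sequence; unseen transitions get a small floor."""
    return sum(math.log(probs.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

# Toy "annotated scenes" as sequences of shot-type labels.
examples = [
    ["long shot", "close-up", "close-up", "long shot"],
    ["long shot", "close-up", "long shot"],
]
model = train_transitions(examples)
# An editing pattern seen in the examples scores higher than an unseen one.
good = log_score(["long shot", "close-up"], model)
bad = log_score(["close-up", "apex"], model)
```

An expressive model would condition transitions on the narrative summary and screen composition as well, but the scoring interface (sequence in, quality score out) would be similar.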

The work will take place within the LEAR research team (INRIA Grenoble) in close collaboration with the BUNRAKU team at INRIA Rennes.

This research is expected to shed light on the theory of film editing, and to be directly applicable to automatic cinematic replay in video games, machinima and automated video editing of home movies.

Bibliography

  1. R. Ronfard and G. Taubin (eds.). Image and Geometry Processing for 3D Cinematography. Springer, 2010.
  2. C. Lino, M. Christie, F. Lamarche, G. Schofield, P. Olivier. A Real-time Cinematography System for Virtual 3D Environments. In Proceedings of the 2010 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, Madrid, Spain, 2010.
  3. M. Christie, P. Olivier, J.-M. Normand. Camera control in computer graphics. Computer Graphics Forum 27, 8, 2008.
  4. N. Chambers and D. Jurafsky. Unsupervised Learning of Narrative Event Chains. In Proceedings of ACL/HLT 2008.
  5. D. Elson and M. Riedl. A Lightweight Intelligent Virtual Cinematography System for Machinima Generation. In Proceedings of the 3rd Annual Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE '07), Palo Alto, California, 2007.