Imagine Internships
A virtual cinematography system that learns from examples
Advisors
Rémi Ronfard, IMAGINE team
Marc Christie, BUNRAKU/MIMETIC team (INRIA Rennes)
Contact: remi.ronfard@inria.fr (04 76 61 53 03)
Context
Developers of interactive 3D applications, such as computer games, are devoting increasing effort to creating ever more realistic experiences in virtual environments. In recent years, there has been a clear move in games towards a more cinematographic experience, recreating and reusing the narrative devices of conventional movies (cut-scenes, continuity editing between shots) to emphasize specific dimensions such as tension or fear.
As a result, there is a pressing need to automate cinematography and to develop camera control techniques that can operate within complex, dynamic and interactive environments. Such camera control algorithms should be capable of (i) enforcing low-level geometric constraints, such as estimating the full or partial visibility of key subjects; (ii) efficiently planning camera paths in complex and dynamic environments; and (iii) encoding cinematographic rules related to continuity editing when switching between viewpoints.
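To make constraint (i) concrete, here is a minimal sketch of how partial visibility of a subject might be estimated: sample points on the subject and test the line of sight from the camera to each sample against a set of occluders, here idealized as spheres. All names and the spherical-occluder simplification are illustrative assumptions, not part of the project specification.

```python
# Hypothetical sketch of constraint (i): estimate the visible fraction of a
# subject by casting segments from the camera to sample points on the subject
# and testing each segment against spherical occluders.
from dataclasses import dataclass
import math

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float

def segment_hits_sphere(p, q, s: Sphere) -> bool:
    """True if the segment from p to q intersects the sphere s."""
    px, py, pz = p
    qx, qy, qz = q
    dx, dy, dz = qx - px, qy - py, qz - pz
    fx, fy, fz = px - s.cx, py - s.cy, pz - s.cz
    # Solve |p + t*d - center|^2 = r^2 for t in [0, 1].
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (fx * dx + fy * dy + fz * dz)
    c = fx * fx + fy * fy + fz * fz - s.r * s.r
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    sq = math.sqrt(disc)
    t1, t2 = (-b - sq) / (2 * a), (-b + sq) / (2 * a)
    return (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0)

def visibility(camera, samples, occluders) -> float:
    """Fraction of sample points on the subject visible from the camera."""
    seen = sum(
        1 for pt in samples
        if not any(segment_hits_sphere(camera, pt, o) for o in occluders)
    )
    return seen / len(samples)
```

A value of 1.0 then means the subject is fully visible, 0.0 fully occluded; a real system would use the scene's actual geometry and an acceleration structure rather than analytic spheres.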
Objectives
In this master internship, we propose to learn from real examples of carefully chosen movie scenes in order to drive automated editing in virtual worlds. These examples, augmented with an annotation scheme consisting of (1) the identification of classical shots (panning shot, dolly shot, crane shot, etc.), (2) a detailed description of screen composition (positions and orientations of all actors and objects) and (3) a narrative summary of the action taking place in the scene, will be used to train statistical models. Such models should predict how well a given sequence of shots conveys an action and how well the editing performs.
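The three-part annotation scheme above could be represented as a simple data structure, sketched below. The shot vocabulary, field names, and normalized screen coordinates are assumptions for illustration; defining the actual scheme is part of the internship.

```python
# A minimal sketch of the proposed annotation scheme as Python dataclasses.
from dataclasses import dataclass

# Illustrative shot vocabulary; the real scheme would be defined during the project.
SHOT_TYPES = {"static", "pan", "dolly", "crane", "zoom"}

@dataclass
class ActorPlacement:
    name: str
    x: float          # horizontal screen position, normalized to [0, 1]
    y: float          # vertical screen position, normalized to [0, 1]
    orientation: str  # e.g. "facing-camera", "profile-left"

@dataclass
class ShotAnnotation:
    shot_type: str                      # (1) classical shot identification
    composition: list[ActorPlacement]   # (2) screen composition
    action: str                         # (3) narrative summary of the action

    def __post_init__(self):
        if self.shot_type not in SHOT_TYPES:
            raise ValueError(f"unknown shot type: {self.shot_type}")
```

An annotated scene would then be a list of `ShotAnnotation` records, which is the input format the statistical models would consume.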
The main difficulties consist in:
- identifying and extracting a set of meaningful parameters from real movies
- building an expressive editing model that learns from this set of parameters
- evaluating the quality of the model in its application to interactive virtual environments
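As one possible starting point for the editing model, shot sequences extracted from annotated scenes could be scored with a first-order Markov model over shot types, trained by counting transitions in the examples. The add-alpha smoothing and the scoring function below are assumptions, not the model the internship commits to.

```python
# Sketch of a first-order Markov editing model learned from annotated examples.
from collections import Counter, defaultdict
import math

def train_transitions(scenes):
    """scenes: list of shot-type sequences, e.g. [["pan", "static", ...], ...]."""
    counts = defaultdict(Counter)
    for seq in scenes:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def sequence_log_prob(counts, seq, alpha=1.0, vocab_size=5):
    """Add-alpha smoothed log-probability of a candidate edit (shot sequence)."""
    lp = 0.0
    for a, b in zip(seq, seq[1:]):
        total = sum(counts[a].values())
        lp += math.log((counts[a][b] + alpha) / (total + alpha * vocab_size))
    return lp
```

Transitions frequent in the training films score higher than unseen ones, giving a crude measure of how "film-like" a candidate edit is; a model expressive enough for the internship would also condition on composition and on the annotated action.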
This research is expected to shed light on the theory of film editing, and to be directly applicable to automatic cinematic replay in video games, machinima and automated video editing of home movies.
References
R. Ronfard and G. Taubin (eds.). Image and Geometry Processing for 3D Cinematography. Springer, 2010.
C. Lino, M. Christie, F. Lamarche, G. Schofield, P. Olivier. A Real-time Cinematography System for Virtual 3D Environments. In Proceedings of the 2010 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, Madrid, Spain, 2010.
C. Lino, M. Chollet, M. Christie, R. Ronfard. Automated Camera Planner for Film Editing Using Key Shots. Posters Track. In Proceedings of the 2011 ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2011, Vancouver, Canada.
C. Lino, M. Chollet, M. Christie, R. Ronfard. Computational Model of Film Editing for Interactive Storytelling. Short Papers Track. In Proceedings of the 2011 International Conference on Interactive Digital Storytelling, 2011, Vancouver, Canada.
M. Christie, P. Olivier, J.-M. Normand. Camera control in computer graphics. Computer Graphics Forum 27, 8, 2008.
N. Chambers and D. Jurafsky. Unsupervised Learning of Narrative Event Chains. In Proceedings of ACL/HLT 2008.
D. Elson, M. Riedl. A Lightweight Intelligent Virtual Cinematography System for Machinima Generation. In Proceedings of the 3rd Annual Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE '07), Palo Alto, California, 2007.