The film industry isn’t new to mixed reality -- the visual effects industry has been mixing real and computer-generated (CG) elements for decades, and the complexity of those effects has grown tremendously.
As an agency that specializes in developing mixed reality applications that enable creative work, we’re planning to prototype a tool to assist filmmakers.
Here’s our concept: We’d like filmmakers to be able to look through their phone’s camera and see virtual elements composited in real time. They should be able to block out elements in their scene, frame shots with a virtual lens, then record takes for reference. We think that by blocking on location before the shoot -- possibly during a walk-through -- we can cut down on setup time on the day of the shoot.
Of course, it’s easier to move virtual items than real ones.
We’re still nailing down what would go into our very first prototype, but I have a few ideas and a few questions, which is a good start:
On placing virtual elements throughout the scene: What types of objects should filmmakers be able to place in our MVP?
When setting the virtual lens: Should we offer presets for camera models and focal lengths, or something more open-ended, like letting users specify sensor size and focal length directly?
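Whichever way we expose it, the underlying math is the same: for a rectilinear lens, the angle of view falls out of sensor size and focal length. A minimal sketch of that relationship (the function name and unit choices are our own, not part of any camera API):

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view for a rectilinear lens (thin-lens approximation)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A full-frame 36 mm sensor with a 50 mm lens gives roughly a 39.6 degree
# horizontal angle of view.
print(round(horizontal_fov_deg(36.0, 50.0), 1))  # → 39.6
```

So a camera-model preset could simply map to a sensor width, and the open-ended option would expose the two numbers themselves.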
Recording a take and saving it somewhere: What does that look like?
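At minimum, a saved take probably needs the recording itself plus enough metadata to reproduce the framing later. One possible shape for that record (every field name here is a guess on our part, not a settled design):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Take:
    """One recorded reference take. All fields are hypothetical placeholders."""
    scene: str                # which scene this take belongs to
    take_number: int
    video_path: str           # screen recording of the composited camera view
    focal_length_mm: float    # virtual lens settings at record time
    sensor_width_mm: float
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: the third reference take of a scene, shot with a virtual 50 mm lens.
take = Take("warehouse-int", 3, "takes/warehouse-int_t3.mp4", 50.0, 36.0)
```

Capturing the lens settings alongside the video is what would let a filmmaker recreate the same framing with a real camera later.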
We’re working with Jyothi Kalyan Sura, an MFA student at the University of Southern California, to build a proof of concept and get it into the hands of filmmakers. Luckily, our home base of Los Angeles puts filmmakers all around us, so they can test the application in its earliest stages.