Sep 2, 2013
Amid the throng of the film set, camera operators have to determine the camera angle, the aperture, and the depth of field. In the future, they will be able to change these parameters even in post-production, thanks to a new camera technology.
And – action! The set resembles an anthill. Actors, actresses, extras, cameras – and in the middle of it all, the director calling out instructions. The camera operator has to get the settings right, follow the flow of the scene, and direct the camera assistants. Which camera angle should be assigned to which camera? Which part of the image should be sharp, and which should recede into a soft, out-of-focus blur? Once the recordings are “in the can”, as they say in the movie business, these parameters can no longer be corrected. At least, not until now.

An algorithm combined with a new type of camera array (an arrangement of several cameras) should make it possible to change these settings retroactively, allowing for more creativity in post-production. Filmmakers will still be able to decide afterwards which area of the scene should be rendered sharply, or to move around within a scene virtually, as in the film The Matrix: the actor is frozen in the scene, hanging motionless in the air, while the camera moves around, capturing the scene from all sides.
Many perspectives instead of just one
Researchers at the Fraunhofer Institute for Integrated Circuits IIS in Erlangen, Germany, have developed a camera array that makes this feasible, and they will be exhibiting it at this year’s International Broadcasting Convention (IBC) in Amsterdam. “The array consists of 16 cameras in total, arranged in four rows and four columns,” explains Frederik Zilly, Group Manager at IIS. Instead of a single camera recording the scene from just one position, the 16 cameras collect light rays at various points across the plane over which they are distributed. The researchers describe this as capturing part of the scene’s light field, rather than just one specific perspective. Although the array holds 16 cameras, its cross section measures only 30 cm by 30 cm (12” x 12”), so it can be employed conveniently and easily on the set and in the studio.
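To picture this sampling, here is a minimal sketch (not Fraunhofer’s software; the variable names and the assumption of evenly spaced cameras are purely illustrative) of 16 camera centres laid out as a 4 x 4 grid on a 30 cm x 30 cm plane. Each position records the scene from a slightly shifted viewpoint, which is what makes the array a sample of the light field rather than a single perspective.

```python
# Illustrative sketch only: the 4x4 array as sample points on a
# 30 cm x 30 cm plane. A camera at plane position (u, v) records rays
# arriving there, so together the cameras sample the scene's light field.
import numpy as np

ROWS, COLS = 4, 4   # 16 cameras in four rows and four columns
SIDE_M = 0.30       # 30 cm edge length of the array's cross section

# Camera centres, assumed evenly spaced across the plane (in metres).
u = np.linspace(0.0, SIDE_M, COLS)
v = np.linspace(0.0, SIDE_M, ROWS)
camera_positions = [(ui, vi) for vi in v for ui in u]

for idx, (ui, vi) in enumerate(camera_positions):
    print(f"camera {idx:2d} at u={ui:.2f} m, v={vi:.2f} m")
```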
But how does it work, being able to edit the recording so much more flexibly after the fact? “The software estimates a depth value for every pixel recorded by the cameras. It thereby determines how far the portrayed object is located from the camera array. From this depth information, intermediate images can be calculated in post-production, so that we have virtual data not just from four columns and four rows of cameras, but from 100 x 100 cameras.” As the camera operator films the subject, each of the outer cameras can see slightly around the subject – it has a different angle of view than the cameras in the middle of the array. After the recording is made, the filmmakers can virtually move around a person or an object and change the camera angle and depth of field.
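The general technique behind such intermediate images is depth-image-based rendering: each pixel is shifted by a disparity proportional to the virtual camera’s offset divided by the pixel’s depth. The sketch below illustrates that idea under simple pinhole-camera assumptions; it is not the IIS software itself, and the function and parameter names are invented for the example.

```python
# Minimal sketch of depth-image-based view interpolation (illustrative,
# not the IIS implementation). Given one camera's image and a per-pixel
# depth map, pixels are shifted by disparity = focal_px * baseline_m / depth
# to render a virtual camera displaced horizontally on the array plane.
import numpy as np

def synthesize_view(image, depth, baseline_m, focal_px):
    """Forward-warp `image` to a virtual camera shifted by `baseline_m`.

    `depth` holds per-pixel distances in metres (all > 0);
    `focal_px` is the focal length in pixels (pinhole model).
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (focal_px * baseline_m / depth).round().astype(int)
    for y in range(h):
        for x in range(w):
            xt = x - disparity[y, x]      # target column in the virtual view
            if 0 <= xt < w:
                # Last write wins here; a real renderer would resolve
                # occlusions with a z-buffer (nearest depth wins).
                out[y, xt] = image[y, x]
    return out
```

A production system would additionally fill disocclusions (regions visible only from neighbouring cameras) and blend contributions from several of the 16 real views, which is what makes a dense virtual grid of 100 x 100 cameras usable in practice.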
The researchers have already developed the software for processing the recordings from the camera array, and the graphical user interface for recording on set is also ready. They are still working on the user interface for post-production editing, which should be finished in about six months. The scientists then plan to produce a stop-motion film, a format particularly well suited to a test run of the software. “Later, we would like to use it as a demo film,” discloses Zilly. “Then we can show interested parties the kind of possibilities and opportunities offered by employing a camera array.”