The algorithms will prevent mobile phone images from being “flat” and impart a realistic 3D feel.
New Delhi, NFAPost:
Researchers at the Indian Institute of Technology (IIT), Madras and the US-based Northwestern University have developed deep learning algorithms that can greatly enhance the depth perception and 3D effects in videos shot using smartphone cameras.
According to officials, such algorithms will prevent mobile phone images from being “flat” and impart a realistic 3D feel. A crucial advantage of the algorithm developed is that it eliminates the need for fancy equipment or an array of lenses to capture videos with depth.
“It is a common complaint, especially among amateur and professional photographers, that photographs and videos shot using smartphone cameras have a flat, two-dimensional look. Apart from the flat look, some 3D features such as the ‘Bokeh Effect’ – the aesthetic blurring of the background – that are easy with the DSLR camera, are challenging in smartphone cameras,” said Kaushik Mitra, assistant professor, Department of Electrical Engineering, IIT Madras.
“While a few mid and high-end smartphone cameras are now programmed to incorporate such effects in still photographs, especially in portrait mode, it is not yet possible to render them in videos captured using smartphones,” he added.
Mitra explained that advanced professional cameras capture information about both the intensity and the direction of light in a scene, known as the Light Field (LF), to give the perception of depth.
“The LF capture is achieved through the use of an array of micro lenses that are inserted between the main lens of the camera and the camera sensor. Multiple micro lenses cannot be placed on mobile phones because of space constraints. Instead, algorithms that can post-process the image captured by the existing mobile cameras are being developed.
“Artificial Intelligence and machine learning techniques are used for such image manipulation. Our team looked into this issue and has built a deep learning algorithm that converts the stereo images captured using a smartphone into LF images,” he said.
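A light field of the kind described is commonly stored as a 4D grid of views. The sketch below is illustrative only; the array layout, sizes, and variable names are assumptions, not details from the team's paper.

```python
import numpy as np

# Illustrative only: a light field is often stored as a 4D array of views,
# indexed by (view_row, view_col, pixel_row, pixel_col).
views_v, views_u = 7, 7          # 7x7 grid of viewpoints, as in the article
height, width = 480, 640         # assumed spatial resolution of each view

light_field = np.zeros((views_v, views_u, height, width, 3), dtype=np.float32)

# The central view corresponds to the image a single camera would capture.
central_view = light_field[views_v // 2, views_u // 2]
print(central_view.shape)  # (480, 640, 3)
```

Each of the 49 views sees the scene from a slightly shifted position, which is what encodes depth.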
The research has been published in the ‘Proceedings of International Conference on Computer Vision (ICCV), 2021’.
“The algorithm first captures two videos (called stereo pair) simultaneously using the two adjacent cameras that are present in many smartphones these days. These stereo pairs go through a sequence of steps involving deep learning models. The stereo pairs are converted into a 7×7 grid of images, mimicking a 7×7 array of cameras, thereby producing the LF image,” Mitra explained.
“A crucial advantage of the algorithm developed by our team is that it eliminates the need for fancy equipment or an array of lenses to capture videos with depth. The Bokeh and other such aesthetic 3D effects can be achieved with a smartphone that is equipped with a dual-camera system.
“In addition to providing depth, our algorithm enables us to view the same video from not just one point of view but from any of the 7×7 grid of viewpoints,” he said.
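One standard way a 7×7 grid of views enables bokeh-style effects is shift-and-sum refocusing: each view is shifted in proportion to its offset from the centre and the views are averaged, so objects off the chosen focal plane blur. The article does not say this is the team's rendering method; the sketch below only illustrates the general technique.

```python
import numpy as np

def refocus(light_field, shift):
    """Shift-and-sum refocusing over a (V, U, H, W, C) light field.
    Each view is shifted in proportion to its offset from the central
    view, then all views are averaged; `shift` selects the focal
    plane. Scene points off that plane blur, giving a bokeh-like
    effect."""
    V, U, H, W, C = light_field.shape
    cv, cu = V // 2, U // 2
    out = np.zeros((H, W, C), dtype=np.float64)
    for v in range(V):
        for u in range(U):
            dy = int(round((v - cv) * shift))
            dx = int(round((u - cu) * shift))
            out += np.roll(light_field[v, u], (dy, dx), axis=(0, 1))
    return out / (V * U)

lf = np.random.rand(7, 7, 48, 64, 3)   # toy 7x7 light field
img = refocus(lf, shift=1.0)
print(img.shape)  # (48, 64, 3)
```

Viewing the scene from a different viewpoint, as Mitra describes, amounts to simply selecting a different `light_field[v, u]` slice from the grid.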