February 16, 2020

Categories: Dual Lens Camera


What exactly is a dual camera? What features does it bring? Why is it better than a single camera? First, let us be clear that the dual camera we are discussing is not simply two cameras mounted on a phone and working independently. A dual camera combines the different images captured by the two cameras to achieve results that go far beyond what a single camera can deliver.

dual camera module for smartphone

The features of current mainstream dual cameras can be divided into two main categories:

1. Use the two cameras as a stereo vision system to obtain the depth of the scene, and build on that depth information for functions such as 3D modeling, depth-based image processing, object segmentation, object recognition and tracking, and focus assist.

2. Fuse the different information in the left and right pictures in the hope of obtaining higher resolution, better color, wider dynamic range, and better overall image quality.

The two types of dual camera functions place different requirements on the camera hardware. The former needs the two cameras to produce as large a disparity as possible, so that depth can be estimated more accurately; the hardware therefore favors a larger distance between the two cameras. The latter wants the images from the two cameras to be as close as possible in both space and time, so the hardware design places the two cameras closer together. That way, when the two images are fused, the parallax between them introduces fewer errors.

However, the two cameras can never be made exactly identical. So whichever of the two types of functions is used, the algorithm needs as much information as possible about the actual state of the hardware, such as the pose difference and lens distortion between the two cameras. This information has to be measured with a calibration procedure that both the platform algorithm and the module and phone production lines can use conveniently. Much of it depends on the characteristics of the individual hardware and cannot be obtained by theoretical calculation alone; we will return to this topic in the future. In short, in a dual camera the algorithm and the hardware are tightly coupled and cannot be separated, so the requirements for combining software and hardware are much higher than for a single camera.

When it comes to a dual camera, then, the first thing to focus on is what kind of user experience it can bring. When we see a phone with dual cameras, we can usually tell from its hardware design which type of feature it is built for. The figure below shows the pose difference between two typical cameras, expressed as the translation and rotation of the 3D coordinate axes X, Y, and Z. In the following sections, we introduce the functions that dual cameras can implement.

3D coordinate axes illustrating the translation and rotation between the two cameras of a dual lens camera
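To make this concrete, below is a minimal sketch of how the pose difference (rotation R and translation T) and the lens distortion between the two cameras could be measured with OpenCV's stereo calibration, assuming a checkerboard target and synchronized image pairs. The pattern size, square size, and function names are illustrative, not taken from any real production line.

```python
# Illustrative factory-style stereo calibration sketch (OpenCV), under assumed inputs.
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners of the assumed checkerboard target
SQUARE_SIZE_MM = 25.0     # physical size of one checkerboard square

# 3D coordinates of the checkerboard corners in the board's own plane (Z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE_MM

def calibrate_pair(left_images, right_images, image_size):
    """left_images/right_images: synchronized grayscale frames of the target."""
    obj_pts, left_pts, right_pts = [], [], []
    for left, right in zip(left_images, right_images):
        ok_l, c_l = cv2.findChessboardCorners(left, PATTERN)
        ok_r, c_r = cv2.findChessboardCorners(right, PATTERN)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(c_l)
            right_pts.append(c_r)

    # First estimate each camera's intrinsics and lens distortion individually.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)

    # R and T are exactly the rotation and translation between the two cameras
    # described above; each unit's values would be stored at production time.
    _, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```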

A. First Type of Function

The first type of function first needs to obtain a depth map of the current scene. The basic principle is triangulation; if you are interested, you can refer to Chapter 12, "Projection and 3D Vision," of "Learning OpenCV".

3D vision for dual cameras introduction
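As a rough illustration of the triangulation step, here is a sketch that computes a disparity map from an already rectified image pair with OpenCV's semi-global matcher and converts it to metric depth. The matcher parameters are generic defaults, not values from the article.

```python
# Sketch: disparity and depth from a rectified stereo pair (illustrative parameters).
import cv2
import numpy as np

def depth_from_stereo(rect_left, rect_right, focal_px, baseline_m):
    """rect_left/rect_right: rectified 8-bit grayscale images."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=96,          # must be a multiple of 16
        blockSize=7,
        P1=8 * 7 * 7,
        P2=32 * 7 * 7,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2)
    # SGBM returns fixed-point disparity scaled by 16
    disparity = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan      # invalid / occluded pixels

    # Triangulation: depth Z = f * B / d
    depth_m = focal_px * baseline_m / disparity
    return disparity, depth_m
```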

A1. Background Blur

The most typical feature of the first type is background blur. Based on the depth map, objects at different distances are blurred by different amounts, with the goal of approximating the shallow depth of field of an SLR shooting at a large aperture. This feature was common in earlier dual camera phones; for example, HTC's rear dual camera and the front dual camera on Lenovo's S1 both offer similar functions. It is relatively mature at present and the results are fairly good; although the depth estimate is sometimes wrong, I believe it will keep improving.

depth of field dual lens camera function
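A simplified sketch of how such a blur could be applied from a depth map is shown below: pixels far from a chosen focus distance are blended toward a blurred copy of the image. The focus distance, depth-of-field width, and kernel size are illustrative; shipping implementations use variable, aperture-shaped kernels and much more careful edge handling.

```python
# Sketch: depth-driven background blur (synthetic bokeh), assuming a dense depth map.
import cv2
import numpy as np

def bokeh(image_bgr, depth_m, focus_m, depth_of_field_m=0.5, max_kernel=31):
    """depth_m: dense metric depth map aligned with image_bgr (no holes)."""
    blurred = cv2.GaussianBlur(image_bgr, (max_kernel, max_kernel), 0)
    # Blend weight: 0 near the focus plane (sharp), 1 far from it (fully blurred)
    weight = np.clip(np.abs(depth_m - focus_m) / depth_of_field_m - 1.0, 0.0, 1.0)
    weight = cv2.GaussianBlur(weight.astype(np.float32), (15, 15), 0)[..., None]
    out = image_bgr * (1.0 - weight) + blurred * weight
    return out.astype(np.uint8)
```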

A2. Object Segmentation

At present, such functions are mainly used on phones for picture cropping and background replacement. This is closer to post-processing in Photoshop: with depth information, objects at different depths can be separated much more cleanly.

depth of field
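For illustration, a naive depth-based background replacement could look like the sketch below: pixels closer than a cut-off depth are kept as foreground and composited over a new background. The cut-off value and morphological clean-up are assumptions; real products refine the mask with color and edge cues.

```python
# Sketch: foreground extraction and background replacement from a depth map.
import cv2
import numpy as np

def replace_background(image_bgr, depth_m, new_bg_bgr, cutoff_m=1.5):
    mask = (depth_m < cutoff_m).astype(np.uint8) * 255
    # Remove small holes and speckles in the depth-derived mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask3 = cv2.merge([mask, mask, mask]).astype(np.float32) / 255.0
    return (image_bgr * mask3 + new_bg_bgr * (1.0 - mask3)).astype(np.uint8)
```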

A3. 3D Scanning and Modeling

These functions build a 3D model from depth maps captured at different angles, which places higher demands on both the depth maps and the algorithms. With the limited distance between the two cameras on a phone, the achievable depth accuracy is often not good enough, and the depth processing algorithms take a long time to run on phone hardware, so this function is essentially absent from phones today. However, as 3D printing matures and the algorithms and hardware improve, it should appear later.

3D scanning dual lens camera module
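As a sketch of the first step of such a pipeline, a disparity map can be back-projected into a colored 3D point cloud; the 4x4 reprojection matrix Q is assumed to come from stereo rectification during calibration.

```python
# Sketch: disparity map -> colored point cloud, the raw input for 3D modeling.
import cv2
import numpy as np

def disparity_to_point_cloud(disparity, image_bgr, Q):
    points = cv2.reprojectImageTo3D(disparity, Q)   # HxWx3 array of XYZ coordinates
    valid = disparity > 0
    xyz = points[valid]
    colors = image_bgr[valid]
    # One 3D point plus color per valid pixel; scans from many angles are
    # merged later to build an actual model.
    return xyz, colors
```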

A4. Target Object Distance Calculation and Fast Focus

Target object distance calculation and fast focus are the simplest applications of triangulation: the disparity measured for the target directly gives its distance, which can then be used to assist focusing.

testing object distance dual lens camera
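For a rectified pair, the distance computation reduces to a single triangulation formula, Z = f · B / d. The snippet below illustrates it with made-up numbers, not measurements from any specific phone.

```python
# Sketch: object distance from focal length (pixels), baseline, and disparity.
def object_distance_m(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# Example: 1400 px focal length, 10 mm baseline, 20 px disparity -> 0.7 m
print(object_distance_m(1400.0, 0.010, 20.0))   # 0.7
```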

A5. 3D Movies

Producing 3D videos and photos on a phone is different from shooting an ordinary 3D movie. During capture, the two cameras on the phone cannot produce enough disparity because their separation differs from that of the human eyes, so algorithms are usually needed to enhance the effect and give viewers a clearer 3D impression.

A6. AR Enhancement and Motion Recognition

These functions mainly use the two cameras to recognize gestures or body motion. Leap Motion and Microsoft's Kinect are similar products already on the market. The Amazon Fire Phone tried to implement similar functions on a phone, but in the end the phone's power budget and physical space did not allow a good user experience. With real-time depth computation behind it, however, I believe this type of function will become more and more common.

AR and motion recognition

B. Second Type of Function

The second type of function fuses different information from the two pictures into a single picture, so that the combined result looks better. Accordingly, the hardware design for this type focuses on how the two cameras can provide complementary information. There are many kinds of image fusion algorithms; perhaps we can later invite someone who works on post-processing algorithms to cover that knowledge, so I will not embarrass myself here. Let us first introduce what kinds of functions there are.

B1. Super Resolution

Super resolution mainly uses the different high-frequency details contained in multiple pictures to generate one sharper picture. A dual camera can use the differing information in its two photos for this kind of enhancement, but traditional algorithms need more input pictures, and the difference between only two pictures is too small. For example, Huawei's Mate6 plus claims that two 8M images can be synthesized into a 13M image, yet the resolving power of the actual captured result is still only at the 8M level; only the pixel count has increased, which is no different from enlarging an image on a computer. Personally, I feel this is misleading.

super-resolution dual lens camera
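To illustrate the basic fusion idea (not any vendor's actual algorithm), the sketch below aligns one grayscale frame to the other with a feature-based homography and averages them on an upsampled grid. Genuine super-resolution pipelines are considerably more sophisticated than this.

```python
# Sketch: two-frame alignment and fusion on an upsampled grid (illustrative only).
import cv2
import numpy as np

def fuse_pair(img_a, img_b, scale=1.6):
    """img_a/img_b: 8-bit grayscale frames of the same scene."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_b, des_a)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # warp b onto a

    h, w = img_a.shape[:2]
    size = (int(w * scale), int(h * scale))
    up = np.diag([scale, scale, 1.0])                       # upsampling as a 3x3 transform
    a_up = cv2.warpPerspective(img_a, up, size, flags=cv2.INTER_CUBIC)
    b_up = cv2.warpPerspective(img_b, up @ H, size, flags=cv2.INTER_CUBIC)
    return ((a_up.astype(np.float32) + b_up.astype(np.float32)) / 2).astype(np.uint8)
```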

B2. HDR

HDR uses the two cameras to obtain images with different exposures. Previously, this function required a single camera to change its exposure time between shots to get pictures under different exposure conditions, which takes longer; this not only worsens the user experience but also causes ghosting if objects in the scene move or the camera moves. A dual camera avoids these problems by capturing both exposures at the same time. However, most HDR fusion algorithms concentrate on brightness information, and most of them introduce some color distortion.

HDR dual cameras
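A minimal sketch of fusing two differently exposed, already aligned frames with OpenCV's Mertens exposure fusion is shown below; the file names are placeholders.

```python
# Sketch: exposure fusion of a short and a long exposure (assumed pre-aligned frames).
import cv2

short_exp = cv2.imread("exposure_short.jpg")
long_exp = cv2.imread("exposure_long.jpg")

merge = cv2.createMergeMertens()
fused = merge.process([short_exp, long_exp])     # float image roughly in [0, 1]
cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```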

B3. Black-and-White Camera Low-Light Brightening and Denoising

Low-light brightening and denoising is basically the same as HDR in terms of the algorithm. The main difference is that one of the two cameras is a black-and-white (monochrome) camera, which responds better in low light and produces less noise. The advantage is good suppression of chroma noise, and in general the SNR can be improved by about 3 dB. In most cases, however, the improvement in real shots is limited, and there is no obvious advantage over a single-camera system that has been carefully processed and tuned for low light. The Huawei P9 is built mainly around this camera design; judging from most of the online reviews we have seen, its low-light results are improved but not amazing. Qiku used a similar hardware design to implement this function before the Huawei P9, but frankly the results Qiku achieved are not worth evaluating. This function may still have potential to be tapped; at present, the improvement for users is very limited.

black and white tuning and remove noise camera
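The core fusion idea can be sketched very simply, assuming the monochrome and color frames are already registered: take luminance from the black-and-white camera (which has better low-light SNR) and chrominance from the color camera. Real pipelines add careful alignment and noise-aware blending.

```python
# Sketch: mono + color fusion via luma replacement (frames assumed registered).
import cv2

def fuse_mono_color(mono_gray, color_bgr):
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = mono_gray        # replace luma with the cleaner mono frame
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```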

B4. Optical Zoom Effect

A better dual camera function seen recently is to pair a normal-FOV camera module with a telephoto module to achieve an optical zoom effect. Judging from Apple's acquisition of the Israeli company LinX and from Apple's latest patents, this feature is likely to be the main selling point of Apple's dual camera. A telephoto lens has a much narrower field of view than a normal lens, but it resolves far more detail at the same distance. That extra resolution can be used in two ways: in normal shooting, a fusion algorithm uses it to improve the resolution of the central area; when zooming, the telephoto image improves the resolution of the zoomed picture. Judging from the results of the CP algorithm of another Israeli company seen earlier, if this is implemented well on a phone, the central area may exceed the maximum resolution of existing phone cameras, with results not far from true optical zoom. Although this function has the hardware design problem that the telephoto module is too tall, it is still the best function seen so far in the second type.

optical zoom dual lens cameras
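As a rough sketch of the idea (not Apple's or anyone's actual pipeline), at a moderate zoom factor one can crop and upscale the wide frame and then substitute the registered telephoto frame in the central region where it provides genuine detail; all parameters below are illustrative, and the telephoto frame is assumed to be warped into the output coordinates elsewhere.

```python
# Sketch: wide + telephoto fusion for a zoom effect (telephoto assumed pre-registered).
import cv2

def zoom_fusion(wide_bgr, tele_registered_bgr, zoom=2.0):
    h, w = wide_bgr.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    # Digital zoom from the wide camera: crop the centre and upscale
    digital = cv2.resize(wide_bgr[y0:y0 + ch, x0:x0 + cw], (w, h),
                         interpolation=cv2.INTER_CUBIC)
    if tele_registered_bgr is not None:
        # Use genuine telephoto detail in the centre instead of interpolated pixels
        th, tw = tele_registered_bgr.shape[:2]
        oy, ox = (h - th) // 2, (w - tw) // 2
        digital[oy:oy + th, ox:ox + tw] = tele_registered_bgr
    return digital
```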

The two types of dual camera effects require different camera modules: the former wants a larger distance between the modules, the latter a smaller one. Of course, however small you make it, the physical size of the modules means there must always be some distance between the two cameras; the distance between the two cameras of the Huawei P9, for example, is already less than 1 cm. Making it smaller still may require breakthrough designs.

Although the two types of functions place different requirements on the module, this does not mean that a dual camera module designed for one type cannot implement the other. Forcing it to do so, however, increases the complexity of the algorithm and, more importantly, has a large impact on the quality of the final result. The Huawei P9 also implements a depth-based blur function, but it cannot be compared with a dual camera designed specifically for depth. In other words, both the hardware and the software algorithms exist for the user's final experience; if these trade-offs are not considered at the start of the design, the resulting product will often be unremarkable.

The dual camera is only at the beginning and has not yet really made users feel its appeal. Both types of features have their advantages, and I believe there will be markets for them. With faster processors and the development of DSPs, dual camera applications will surely bring us new experiences in many fields. Later, we will introduce some of the problems encountered in implementing dual cameras.

Written by Zhang Eric