TOF Camera Introduction
The basic principle of a time-of-flight (TOF) face-recognition camera is to transmit light pulses (usually invisible light) toward the observed object; the sensor then receives the light returned from the object, and the target distance is obtained by measuring the round-trip flight time of the light pulse. The principle is similar to that of a 3D laser sensor, except that a laser sensor scans point by point while a TOF camera obtains depth information for the whole image at once. Like an ordinary camera, a TOF camera consists of a light source, optical components, a sensor, a control circuit, and a processing circuit. Compared with a binocular 3D camera, however, the fundamental imaging mechanism is very different: binocular stereo measurement matches the left and right stereo images and then recovers depth by triangulation, whereas a TOF camera obtains the target distance by detecting the emitted and reflected light.
TOF technology uses an active optical detection mode. Unlike general-purpose lighting, the purpose of the TOF illumination unit is not to light the scene but to measure distance from the change between the emitted and reflected light signals. The illumination unit therefore emits light after high-frequency modulation; for example, pulsed light emitted by an LED or laser diode can be modulated at up to 100 MHz.

Like an ordinary camera, the TOF camera chip needs a lens to collect light. Unlike an ordinary optical lens, however, it requires a band-pass filter so that only light of the same wavelength as the illumination source can enter. In addition, because the optical imaging system has a perspective effect, surfaces at the same measured distance lie on concentric spheres of different radii rather than on parallel planes, so a subsequent processing unit must correct this error in actual use.

As the core of the TOF camera, each pixel of the TOF chip records the phase of the light traveling between the camera and the object. The sensor structure is similar to that of an ordinary image sensor but more complex: it contains two or more shutters that sample the reflected light at different times. For this reason, the pixels of a TOF chip are much larger than those of a typical image sensor, roughly 100 µm.

Both the illumination unit and the TOF sensor require high-speed signal control to achieve high measurement accuracy. For example, a 10 ps shift of the synchronization signal between the illumination light and the TOF sensor corresponds to a displacement of 1.5 mm. A current 3 GHz CPU has a clock period of 300 ps, which corresponds to a depth resolution of 45 mm. The processing unit mainly performs data correction and calculation.
The distance information can be obtained by calculating the relative phase shift between the emitted light and the reflected light.
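As a rough sanity check on the timing figures above, here is a minimal Python sketch (not from the original article; the function name is my own) that converts a round-trip timing shift into a one-way depth error:

```python
# The light travels out to the object and back, so a round-trip timing
# shift dt corresponds to a one-way distance error of c * dt / 2.
C = 3.0e8  # speed of light, m/s

def depth_error(dt_seconds):
    """One-way depth error caused by a round-trip timing shift dt."""
    return C * dt_seconds / 2

print(depth_error(10e-12))   # 10 ps sync shift -> 0.0015 m (1.5 mm)
print(depth_error(300e-12))  # 300 ps (3 GHz clock period) -> 0.045 m (45 mm)
```

These match the 1.5 mm and 45 mm figures quoted above.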
Advantages of TOF cameras: compared with a stereo camera or a triangulation system, a TOF camera is compact, almost the size of an ordinary camera, which suits applications that require a light, small-volume camera. A TOF camera can calculate depth information quickly and in real time, reaching tens to about 100 fps, whereas a binocular stereo camera needs a complex matching algorithm and processes slowly. TOF depth calculation is not affected by the gray level or surface features of the object, so 3D detection can be very accurate; a binocular stereo camera, by contrast, requires the target to have good feature variation, otherwise depth cannot be computed. The depth accuracy of TOF does not change with distance and remains basically stable at the centimeter level, which is of great significance for applications involving large-scale motion.
TOF sensor classification
According to the modulation method, TOF sensors can be divided into two types: pulse modulation and continuous-wave modulation.
Schematic diagram of the basic principle of time of flight depth measurement
(1) Pulse modulation
The principle of the pulse modulation scheme is relatively simple, as shown in the figure below: it measures distance directly from the time difference between pulse transmission and reception.
Schematic diagram of the working principle of optical pulse method
The illumination source of the pulse modulation scheme generally uses square-wave pulses, because they are relatively easy to generate with digital circuits. Each pixel at the receiving end consists of a photosensitive unit (such as a photodiode) that converts incident light into current. The photosensitive unit is connected to multiple high-frequency switches (G0 and G1 in the figure below), which steer the current into different charge-storing capacitors (S0 and S1 below).
The control unit on the camera turns the light source on and then off, emitting a light pulse. At the same time, it opens and closes the electronic shutter at the receiving end, and the charge S0 received during this shutter window is stored.

The control unit then switches the light source on and off a second time. This time the shutter opens later, at the moment the light source turns off. The newly received charge S1 is likewise stored.
Because the duration of a single light pulse is very short, the process repeats thousands of times until the exposure time is reached. The values in the sensor are then read out and the actual distance can be calculated from these values.
Let c be the speed of light, TP the duration of the light pulse, S0 the charge collected by the earlier shutter, and S1 the charge collected by the delayed shutter. The distance d can then be calculated by the following formula:

d = 0.5 · c · TP · S1 / (S0 + S1)
The minimum measurable distance corresponds to all charge being collected in S0 during the earlier shutter window and none in S1 during the delayed window, i.e. S1 = 0. Substituting into the formula gives the minimum measurable distance d = 0.
The maximum measurable distance corresponds to all charge being collected in S1 and none at all in S0. The formula then gives d = 0.5 · c · TP, so the maximum measurable distance is determined by the light pulse width. For example, with TP = 50 ns, substituting into the equation gives a maximum measurable distance of d = 7.5 m.
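The two-shutter distance formula and its limiting cases can be sketched in a few lines of Python (an illustration; the function name and charge values are assumed for the example):

```python
C = 3.0e8  # speed of light, m/s

def pulse_tof_distance(s0, s1, tp):
    """Two-shutter pulsed TOF distance: d = 0.5 * c * tp * S1 / (S0 + S1).
    s0: charge from the earlier shutter, s1: charge from the delayed shutter,
    tp: light pulse duration in seconds."""
    return 0.5 * C * tp * s1 / (s0 + s1)

TP = 50e-9  # 50 ns pulse, as in the example above
print(pulse_tof_distance(s0=1.0, s1=0.0, tp=TP))  # all charge in S0 -> 0.0 m
print(pulse_tof_distance(s0=0.0, s1=1.0, tp=TP))  # all charge in S1 -> 7.5 m
print(pulse_tof_distance(s0=0.5, s1=0.5, tp=TP))  # split evenly -> 3.75 m
```

The first two calls reproduce the minimum and maximum distances derived above.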
Advantages of pulse modulation:
1. The measurement method is simple and the response is fast.
2. Because the transmitted energy is high, interference from background light is reduced to a certain extent.
Disadvantages of pulse modulation:
1. The transmitter must generate high-frequency, high-intensity pulses, which places high demands on the physical devices.
2. High-precision time measurement is required.
3. Ambient scattered light has a certain influence on the measurement results.
(2) Continuous-wave modulation
In practical applications, sine-wave modulation is usually used. Because the phase offset of the sine wave between transmitter and receiver is directly proportional to the distance from the object to the camera, this phase offset can be used to measure the distance.
Schematic diagram of continuous-wave modulation
The measurement principle of continuous-wave modulation is more complex than that of pulse modulation. We use the most commonly used continuous sine wave modulation to deduce the measurement principle.
Schematic diagram of continuous sine wave modulation measurement method
The detailed derivation process is as follows; numbers 1-9 correspond to formulas 1-9 in the figure below.

1. Suppose the transmitted sinusoidal signal s(t) has amplitude a and modulation frequency f: s(t) = a · sin(2πft).
2. After a delay Δt, the received signal r(t) is an attenuated and shifted copy of s(t): r(t) = A · sin(2πf(t − Δt)) + B, where A is the attenuated amplitude and B is the intensity offset (caused by ambient light).
3. The four sampling instants are equally spaced, each T/4 apart (T = 1/f), i.e. at phases 0°, 90°, 180°, and 270° of the modulation period.
4. Sampling at these instants yields four equations for the sampled values r0, r1, r2, r3.
5. From these, the phase offset Δφ between the transmitted and received sinusoids can be calculated: Δφ = arctan((r2 − r0) / (r1 − r3)).
6. From formula (6), the distance d between the object and the depth camera follows: d = c · Δφ / (4πf).
7. The attenuated amplitude of the received signal is A = sqrt((r2 − r0)² + (r1 − r3)²) / 2.
8. The intensity offset of the received signal, which reflects the ambient light, is B = (r0 + r1 + r2 + r3) / 4.
9. The values of A and B indirectly reflect the depth measurement accuracy; the depth measurement variance can be approximately expressed by formula 9.
Derivation of continuous sine wave modulation formula
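The four-sample demodulation described above can be sketched in Python. This is an illustrative round-trip check, not code from the article; the 20 MHz modulation frequency and the test values are assumptions chosen for the example.

```python
import math

C = 3.0e8  # speed of light, m/s

def demodulate(r0, r1, r2, r3, f):
    """Recover phase offset, distance, amplitude, and intensity offset from
    four samples taken at phases 0, 90, 180, 270 degrees of the modulation."""
    dphi = math.atan2(r2 - r0, r1 - r3)                # formula (5)
    d = C * dphi / (4 * math.pi * f)                   # formula (6)
    A = math.sqrt((r2 - r0)**2 + (r1 - r3)**2) / 2     # formula (7)
    B = (r0 + r1 + r2 + r3) / 4                        # formula (8)
    return dphi, d, A, B

# Synthesize the four samples for a known phase shift, then demodulate.
f = 20e6                                # assumed modulation frequency, 20 MHz
A_true, B_true, dphi_true = 1.0, 0.5, 1.2
r = [A_true * math.sin(i * math.pi / 2 - dphi_true) + B_true for i in range(4)]
dphi, d, A, B = demodulate(*r, f)
print(dphi, A, B)  # recovers roughly 1.2, 1.0, 0.5
```

The `atan2` form handles the quadrant of Δφ correctly, which a plain `arctan` of the ratio would not.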
Advantages of continuous-wave modulation:
1. The differences (r2 − r0) and (r1 − r3) in the phase-offset formula (formula 5) cancel the fixed offset caused by the measuring device or by ambient light, an advantage relative to the pulse modulation method.
2. The accuracy (variance) of the depth measurement can be indirectly estimated from the amplitude A and intensity offset B of the received signal.
3. The light source is not required to be a short, high-intensity pulse; different types of light sources and different modulation methods can be used.
Disadvantages of continuous-wave modulation:
1. Multiple sampling integrations are needed, so the measurement time is long, which limits the frame rate of the camera.
2. The multiple sampling integrations may also produce motion blur when measuring moving objects.
At present, the main consumer TOF depth cameras are Microsoft's Kinect 2, MESA Imaging's SR4000, the PMD Technologies TOF depth camera used in Google's Project Tango, and so on. These products have been widely used in body-motion recognition, gesture recognition, environment modeling, and similar applications. The most typical is the Microsoft Kinect 2.
A TOF depth camera requires high-precision time measurement; even with the highest-precision electronic components, millimeter-level accuracy is difficult to achieve. Therefore, in near-range measurement, especially within 1 m, the accuracy of a TOF depth camera still falls far short of other depth cameras, which limits its application in close-range, high-precision scenarios.
However, from the principles above it is not difficult to see that a TOF depth camera can change its measurement range by adjusting the frequency of the transmitted pulses. Unlike depth cameras based on the feature-matching principle, its measurement accuracy does not decrease as the measurement distance increases, and its measurement error is basically constant over the whole measurement range; its anti-interference ability is also stronger. Therefore, the TOF depth camera has obvious advantages when the measurement distance is large (such as in driverless vehicles).
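For continuous-wave modulation, the trade-off between modulation frequency and measurement range follows directly from formula (6): the phase wraps at 2π, so the maximum unambiguous range is d_max = c / (2f). A small Python sketch (illustrative, not from the article):

```python
C = 3.0e8  # speed of light, m/s

def unambiguous_range(f):
    """Maximum distance before the measured phase wraps: d_max = c / (2f)."""
    return C / (2 * f)

for f in (100e6, 20e6, 10e6):
    print(f / 1e6, "MHz ->", unambiguous_range(f), "m")
# roughly: 100 MHz -> 1.5 m, 20 MHz -> 7.5 m, 10 MHz -> 15 m
```

Lowering the modulation frequency extends the range, at the cost of phase (and hence depth) resolution, which is why long-range use favors lower frequencies.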