Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection and ranging systems

Roman Burkard,a,* Manuel Ligges,b André Merten,c Thilo Sandner,c Reinhard Viga,a and Anton Grabmaiera,b

aUniversity of Duisburg-Essen, Department of Electrical Engineering and Information Technology, Faculty of Engineering, Duisburg, Germany
bFraunhofer Institute for Microelectronic Circuits and Systems, Duisburg, Germany
cFraunhofer Institute for Photonic Microsystems, Dresden, Germany

Abstract. In many applications, there is a great demand for reliable, small, and low-cost three-dimensional imaging systems. Promising systems for applications such as automotive applications as well as safe human-robot collaboration are light detection and ranging (lidar) systems based on the direct time-of-flight principle. Especially for covering a large field of view or for long-range capabilities, the previously used polygon scanners are being replaced by microelectromechanical systems (MEMS) scanners. A more recent development is to replace the typically used avalanche photodiodes with single-photon avalanche diodes (SPADs). The combination of both technologies into a MEMS-based SPAD lidar system promises a significant performance increase and cost reduction compared with other approaches. To distinguish between signal and background/noise photons, SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. In this article, a signal and data processing method is proposed that considers the time-dependent scanning trajectory of the MEMS-scanner during the histogram formation. Based on known reconstruction processes used in stereo vision setups, an estimate for an accumulated time-resolved measurement is derived, which allows it to be classified as signal or noise.
In addition to the theoretical derivation of the signal and data processing, an implementation is experimentally verified in a proof-of-concept MEMS-based SPAD lidar system. © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JOM.2.1.011005]

Keywords: light detection and ranging; single-photon avalanche diode; scanning; MEMS; histogram; noise reduction.

Paper 21020SS received Jul. 29, 2021; accepted for publication Jan. 6, 2022; published online Jan. 31, 2022.

1 Introduction

For the realization of reliable, small, and low-cost three-dimensional (3D) imaging systems, light detection and ranging (lidar) systems based on the direct time-of-flight (dtof) principle are considered to be one of the most promising technologies. Especially for automotive applications as well as safe human-robot collaboration, many proof-of-concept systems are currently being built and tested extensively. A new trend in scanning lidar systems is the replacement of the bulky and expensive polygon scanners by microelectromechanical systems (MEMS) scanners. These MEMS-scanners have the advantage that they can be fabricated using standard CMOS processes, and their incorporation offers the opportunity to greatly reduce the overall size and cost of the system. Furthermore, many systems are now testing the replacement of the typically used avalanche photodiodes with single-photon avalanche diodes (SPADs). Major advantages of SPADs are that they can also be fabricated using standard CMOS processes and that they can be integrated into large photodetector matrices, which significantly increases the spatial resolution of the lidar system.
Especially in challenging applications that require long-range capabilities or a large field of view (FOV) to be covered, the combination of a MEMS-scanner and a SPAD detector promises a significant performance increase. One of the most recent examples of a proof-of-concept MEMS-based SPAD lidar system was reported by Sony in Ref. 1, with a range of up to 300 m.

*Address all correspondence to Roman Burkard, roman.burkard@uni-due.de

Journal of Optical Microsystems 011005-1 Jan–Mar 2022 Vol. 2(1)

Even though MEMS-based SPAD lidar systems are becoming more and more prominent, the authors are not aware of any prior publication that connects the statistical detection process necessary for the working principle of SPAD-based detectors with the time-dependent scanning trajectory of the MEMS-scanner. On the one hand, these systems often utilize a pointwise illumination of the FOV, especially for long-range applications. On the other hand, SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. Since a SPAD cannot distinguish between a signal and a noise/background photon, further distinguishing criteria based on the time-dependent scanning trajectory must be considered during the formation of a histogram.

In Sec. 2, the acquisition statistics of a system utilizing a MEMS-scanner driven in resonance are formulated. Furthermore, the concepts used in the reconstruction process of triangulation-based sensors are briefly summarized, which will be used to derive an analogous concept for MEMS-based SPAD lidar systems. Based on the results of Sec. 2, Sec.
3 describes a proof-of-concept MEMS-based SPAD lidar system and proposes an implementation of a signal and data processing chain for the formation of a histogram that exploits the biaxial system configuration and utilizes the time-dependent scanning trajectory of the MEMS-scanner to further discriminate signal and background photons. Section 4 applies the proposed signal and data processing chain to measured values. For the validation, two different experiments with varying lighting conditions are conducted. Section 5 summarizes the results and provides an outlook for further improvements.

2 Model and Method

The following extends the statistical detection process of SPADs to consider the time-dependent scanning trajectory of the MEMS-scanner. For simplicity, the geometry is reduced to a two-dimensional (2D) problem, but the same arguments and correspondences hold for the 3D case. Furthermore, the imaging optic is assumed to be distortion-free. After a brief overview of a reconstruction method used in triangulation-based stereo vision setups, an analogous concept is derived for the detection process of MEMS-scanner systems. This method combines the spatial and timing information of a SPAD with the time-dependent scanning trajectory of the MEMS-scanner. To provide a further distinguishing feature between signal and noise/background photons, this information is checked for consistency.

2.1 Acquisition Statistics Considering the Time-Dependent Scanning Trajectory of MEMS-Scanners

SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. The statistical detection method for SPAD-based detectors is extensively covered in recent publications, see, e.g., Refs. 2 and 3.
In contrast to SPAD-based flash lidar systems, where the number of accumulations per histogram in every pixel is simply given by the laser pulse repetition frequency f_rep and the frame rate, in a system utilizing a scanning illumination it also depends on the scan trajectory, the FOV, and the required spatial resolution. The definition of and correspondences between the mechanical scan angle θ_mech(t), an initial rotation angle θ_0, and the resulting scan angle θ(t) of a MEMS-scanner are shown in Fig. 1. In addition, the normalized direction of the laser emission d_laser, the normal vector n_MEMS of the MEMS-scanner, and the resulting scan direction d_scan are given.

The determination of the mean number of accumulations for a MEMS-scanner driven in resonance can be considered as a sampling problem. The sampling points of the mechanical scan angle θ_mech, which can be represented in the time domain by a periodic cosinusoidal oscillation, are given by the reciprocal of the laser pulse repetition frequency f_rep and may be expressed as

θ_mech,k = θ_mech,max · cos(2π · f_mech · k/f_rep + φ_0),   (1)

Fig. 1 Definition of and correspondences between the mechanical scan angle θ_mech(t), an initial rotation angle θ_0, and the resulting scan angle θ(t) of a MEMS-scanner. In addition, the normal vector of the MEMS-scanner n_MEMS(θ(t)), the direction of the laser emission d_laser, and the optical scan direction d_scan are given.

Fig. 2 Exemplary distribution of the number of measurements considering the scan trajectory of a MEMS-scanner driven in resonance. The distribution is determined for a frequency ratio of f_mech/f_rep = 0.0785.

where k is an integer, φ_0 represents an arbitrary constant phase, and θ_mech,max is the maximum mechanical scan angle.
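The sampling picture behind Eq. (1) can be sketched numerically. The frequency ratio of 0.0785 and the 400 consecutive accumulations follow the example of Fig. 2; the number of angular bins (here 50) is an illustrative assumption rather than a system parameter:

```python
import numpy as np

# Sample the cosinusoidal mechanical scan angle of Eq. (1) at the laser
# pulse repetition rate, then bin the sampled angles into angular pixels.
# The counts per bin are the accumulations each histogram receives.
f_ratio = 0.0785          # f_mech / f_rep, as in Fig. 2
theta_max = 11.9          # maximum mechanical scan angle in degrees
n_pulses = 400            # consecutive accumulations
k = np.arange(n_pulses)
theta_k = theta_max * np.cos(2 * np.pi * f_ratio * k)   # Eq. (1), phi_0 = 0

counts, edges = np.histogram(theta_k, bins=50, range=(-theta_max, theta_max))
print(counts.mean())      # -> 8.0 (mean accumulations per angular bin, 400/50)
```

As the resonant mirror dwells longest near its turning points, the bins at the edges of the scan range collect far more accumulations than the central bins, which is exactly the shape shown in Fig. 2.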
In the case of an electrostatically driven scanner, the mechanical scan angle is a function of the geometry of the scanner, its driving voltage, and its scan frequency f_mech. Using the sampled points, a distribution of the number of measurements may be stated or, by weighting it with the number of measurements, a mean number of accumulations per histogram and pixel may be obtained. For a one-dimensional oscillation, a frequency ratio of f_mech/f_rep = 0.0785, and 400 consecutive accumulations, both representations are shown exemplarily in Fig. 2.

2.2 3D Reconstruction in Triangulation Sensors

The following gives a brief summary of the reconstruction of 3D points using a stereo camera setup. The reconstruction based on a purely geometric solution is used here as an example because of its simplicity and to illustrate the basic concepts for the subsequent discussion. The basic geometric relations and notations required for the geometric solution are shown in Fig. 3. Assume a 3D point x is observed from two different camera locations, where the projection centers O′ and O″ are separated by the baseline b. This point corresponds to two image points u′ = P′(x) and u″ = P″(x) in the image planes, where the correspondence is given by the camera-specific projection matrices P′ and P″. The line equations l′ and l″ in 3D space, given in Eqs. (2) and (3), can be constructed. Their origins are the respective image points, and their direction vectors d′ and d″ are determined using the respective projection centers.

Fig. 3 Schematic representation of the setup for the 3D reconstruction process used in triangulation sensors based on the stereo matching principle.

If the
epipolar constraint l′ · (b × l″) = 0 is fulfilled, both lines l′ and l″ intersect in 3D space and an exact solution for the scalars k_1 and k_2 exists. If the epipolar constraint is violated, the geometrical solution solves the system of linear equations given in Eq. (4) and estimates the 3D point x as the midpoint of the shortest line segment that joins both lines l′ and l″ (Ref. 4). More sophisticated estimates for the point x take into account uncertainties in the imaging process and can be shown to be statistically optimal. Examples of these estimators can be found in Refs. 4 and 5. Apart from the estimation of the intersection point, the epipolar constraint can be used to reduce the search space for correspondences in the image pairs from a 2D space to a one-dimensional (1D) space:

l′ = u′ + k_1 · d′,   (2)

l″ = u″ + k_2 · d″,   (3)

(u′ + k_1 · d′ − u″ − k_2 · d″) · d′ = 0,
(u′ + k_1 · d′ − u″ − k_2 · d″) · d″ = 0.   (4)

2.3 3D Reconstruction in the dtof Measurement

In the following, a reconstruction method for MEMS-based scanning lidar systems utilizing the dtof method is outlined. The reconstruction process may be formulated analogously to the reconstruction presented in the previous subsection, where one of the projection centers is replaced by the MEMS-scanner. The top view of this geometry is shown in Fig. 4.

As shown in Fig. 4, the global coordinate frame is fixed to the center of the sensor and the optical axis of the receiving optics coincides with the z axis. The optical axis of the transmitter is defined by the MEMS-scanner at a mechanical scan angle θ_mech of zero. The rotation angle θ_0 of the MEMS-scanner is chosen such that the optical axes of the transmitter and receiver intersect at half the maximum distance, defined here as the working distance W.
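The geometric midpoint solution of Sec. 2.2 can be sketched in a few lines: the 2×2 linear system of Eq. (4) is solved for k_1 and k_2, and x is taken as the midpoint of the connecting segment. The function name and the line parameters below are illustrative, not calibration data:

```python
import numpy as np

def triangulate_midpoint(u1, d1, u2, d2):
    """Midpoint estimate for lines l1 = u1 + k1*d1 and l2 = u2 + k2*d2.

    Eq. (4): the segment joining the closest points on both lines is
    orthogonal to both direction vectors d1 and d2.
    """
    A = np.array([[d1 @ d1, -(d2 @ d1)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(u2 - u1) @ d1, (u2 - u1) @ d2])
    k1, k2 = np.linalg.solve(A, b)          # scalars k1, k2 of Eqs. (2)-(3)
    p1, p2 = u1 + k1 * d1, u2 + k2 * d2     # closest points on both lines
    return 0.5 * (p1 + p2)                  # midpoint estimate of x

# Two lines that intersect exactly at (1, 1, 5): the estimate recovers it.
x = triangulate_midpoint(np.array([0., 0., 0.]), np.array([1., 1., 5.]),
                         np.array([2., 0., 0.]), np.array([-1., 1., 5.]))
print(x)   # -> [1. 1. 5.]
```

When the epipolar constraint is violated, the two lines are skew and the same code returns the midpoint of the shortest connecting segment instead of an exact intersection.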
If the MEMS-scanner and the projection center are separated by a baseline b = [x_M0, 0, 0]⊺, the rotation angle θ_0 may be expressed as

θ_0 = π/4 − arctan(x_M0/(2W)).   (5)

Using the time-dependent deflection angle θ(t), the normal vector n_MEMS = [cos(θ(t)), sin(θ(t))]⊺ may be defined. The normalized reflection direction d_scan follows from the vector reflection law and is determined as

d_scan = d_laser − 2 · (n_MEMS · d_laser) · n_MEMS,   (6)

where d_laser is the normalized direction of the laser emission.

Fig. 4 Schematic representation of the proof-of-concept lidar system and necessary geometric definitions.

Using the approximation of an ideal projection with a thin lens, the line l′ may be expressed analogously to Eq. (2). In the 1D pointwise scanning case with an active illumination and a single line sensor, the epipolar constraint must be fulfilled and may be stated as d_scan · (b × l′) = 0. The importance of the epipolar constraint becomes obvious when applying it to the case of a 2D pointwise scanning system with an active illumination and a 2D array detector. Since the current scan angle and therefore the scan direction d_scan is known, only the pixels fulfilling this constraint must be read out, which greatly reduces the amount of data that needs to be transferred, stored, and processed.

In the absence of background radiation and noise in the sensor, invoking the epipolar constraint would be sufficient to uniquely specify the point x. Since this is usually not the case, a further criterion needs to be specified. A measurement utilizing the dtof method contains timing information for every pixel. Considering the geometry shown in Fig.
4, the time of flight t_TOF must satisfy

c · t_TOF = |x − x_scanner| + |O − x|,   (7)

where c is the speed of light in the medium and x_scanner is the position of the scanner. Without any prior knowledge of the scene, an analytical solution for the distance Z can be derived for a given pixel position u′, a measured time of flight t_TOF, and scan angle θ. For the 2D geometry, this solution is given in Eq. (8). Combining the distance Z with the imaging equation, given in Eq. (9), the 2D point x = [X, Z]⊺ can be determined:

Z = [((c · t_TOF)² − x_M²) · (c · t_TOF/√(1 + tan²θ) − x_M · tanθ · (1 − sin²θ))] / [2 · ((c · t_TOF)² − (x_M · sinθ)²)],   (8)

X = u′ · (f − Z)/f.   (9)

2.4 Spatial Uncertainties in the dtof Measurement

Spatial uncertainties in the dtof measurement can arise from either spatial or temporal uncertainties. Spatial uncertainties arise from the uncertainty involved in the determination of the current scan angle and the finite pixel size. Temporal uncertainties arise from the pulse-to-pulse timing jitter between subsequent laser pulse emissions and the minimum resolvable time given by the time-to-digital converter of the detector. The current scan angle is monitored with a sampling frequency much higher than the oscillation frequency. Therefore, this uncertainty is neglected in the following. In addition, we consider the case where the pulse-to-pulse timing jitter between the laser pulse emissions is less than the minimum resolvable time, so this is also neglected.

The finite pixel size gives rise to a spatial uncertainty in the x direction of the received photon.
Under the assumption of a laterally uniform pixel response, this yields a uniform distribution over the active pixel area. The mean μ_pix is equal to the center of the pixel and can be expressed, for the sensor considered later, as

μ_pix = (96.5 − u) · w_pix,   (10)

where u is the pixel coordinate and w_pix is the spacing between two pixels as given schematically in Fig. 6. Its variance σ²_pix is given as

σ²_pix = d²_SPAD/12,   (11)

where d_SPAD is the diameter of a SPAD. Usually, this assumption is not valid for conventional avalanche photodiodes, but for SPADs the uniformity of the response to photons impinging at different positions in the active area is a key parameter. Through careful design of the device, a uniform pixel response, in terms of the photon detection efficiency and the detection delay, may be achieved.

The minimum resolvable time, represented by the bin width t_bin, gives rise to a spatial uncertainty in the z direction of the received photon. The discretized time of flight t_TOF is equal to the bin number N_bin multiplied by the minimum resolvable time t_bin of the time-to-digital converter. This discretization causes a quantization error. In a time-to-digital converter, and with only minor assumptions about the underlying statistics of the photon detection, the time of arrival in a bin is uniformly distributed. A necessary and sufficient condition for this may be found in Ref. 8, and its application to a commonly used time-to-digital converter architecture may be found in Ref. 9.
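The uniform-in-bin statement fixes the moments used in the derivation that follows: a uniform arrival time within one bin has its mean at the bin center and a variance of t_bin²/12. A quick Monte Carlo draw (the sample size is an arbitrary choice) confirms this for the 312.5-ps bin of the sensor used later:

```python
import numpy as np

# Draw arrival times uniformly within one TDC bin and compare the sample
# moments with the analytic values of a uniform distribution of width t_bin.
rng = np.random.default_rng(0)
t_bin = 312.5e-12                              # TDC bin width
t = rng.uniform(0.0, t_bin, 1_000_000)         # arrival times within one bin

print(np.isclose(t.mean(), t_bin / 2, rtol=1e-2))      # mean = bin center
print(np.isclose(t.var(), t_bin**2 / 12, rtol=1e-2))   # variance = t_bin^2 / 12
```

The same width²/12 argument applied to the SPAD diameter yields the pixel variance of Eq. (11).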
Therefore, the mean μ_TOF is the center of the bin, given as

μ_TOF = (N_bin − 1/2) · t_bin,   (12)

and its variance σ²_TOF may be expressed as

σ²_TOF = t²_bin/12.   (13)

In the following, the first-order second-moment method is used to propagate the uncertainties of the measured pixel coordinate u and its time of flight t_TOF from the image into the object space. To achieve this, the point x = [X, Z]⊺ is first expressed in polar coordinates using the known correspondences. The mean of the 2D point in (r, φ) space is then

μ_r = √(X(μ_pix, μ_TOF)² + Z(μ_TOF)²),   (14)

and

μ_φ = atan2[Z(μ_TOF), X(μ_pix, μ_TOF)],   (15)

where atan2 is the two-argument arctangent function. Under the assumption of negligible covariance between μ_pix and μ_TOF, which is the case if the pixel response is assumed to be uniform in terms of the photon detection efficiency and the detection delay, applying the first-order second-moment method to the point in (r, φ) space results in an uncertainty in the r direction of

Fig. 5 Exemplary uncertainty bounds in Cartesian coordinates. In (a), these are given for different pixel numbers u and bin numbers N_bin. The uncertainty bound for the pixel number u = 120 and bin number N_bin = 16 is given in (b).
σ²_r = (dr/du)² · σ²_pix + (dr/dt_TOF)² · σ²_TOF,   (16)

and in the φ direction of

σ²_φ = (dφ/du)² · σ²_pix + (dφ/dt_TOF)² · σ²_TOF.   (17)

As an example, resulting uncertainty bounds in Cartesian coordinates for a bin width t_bin of 312.5 ps and different pixel numbers u and distances Z are shown in Fig. 5.

3 System Description and Implementation of the Signal and Data Processing

After a brief description of a proof-of-concept MEMS-based SPAD lidar system, the implementation of a signal and data processing chain based on the results of the previous section follows.

3.1 Sensor and System Description

The sensor used here is the SPADEye2 from the Fraunhofer Institute for Microelectronic Circuits and Systems. It is a 2 × 192 pixel SPAD-based dtof line sensor, where only one of these lines is actively illuminated. For timing measurements, a time-to-digital converter with a resolution of t_bin = 312.5 ps and a full range of 1.28 μs, which corresponds to a total dtof detection range of 192 m, is implemented in each pixel. As schematically depicted in Fig. 6, each pixel consists of four vertically arranged SPADs with a diameter d_SPAD of 12 μm. The height h_pix of a pixel

Fig. 6 Schematic representation of the SPAD-based dtof line sensor, its dimensions, and the placement of the origin of the reference coordinate frame.

Table 1 Summary of the components and their parameters used in the proof-of-concept system.
Component            Parameter                        Symbol        Value
Laser source         Pulse repetition frequency       f_rep         20 kHz
                     Wavelength                       λ             659 nm
                     Peak optical power (pulsed)      —             80 mW
                     Temporal pulse width             —             14.9 ns
Detector             Pixels                           —             2 × 192 pixels
SPADEye2             SPAD diameter                    d_SPAD        12 μm
                     Pixel width                      w_pix         40.56 μm
                     Pixel height                     h_pix         209.6 μm
                     Bin width                        t_bin         312.5 ps
                     TDC full range                   —             1.28 μs
MEMS-scanner         Mirror aperture                  —             3.3 mm × 3.6 mm
Fovea3D sending      Oscillation frequency            f_mech        1570 Hz
mirror and SiMEDri   Mech. torsion amplitude          θ_mech,max    11.9°
driving electronics  Reflectivity at 659 nm           —             81.25%
Receiver optics      Focal length                     f             8 mm
                     f-number                         —             2
                     Optical bandpass filter (FWHM)   —             10 nm

is 209.6 μm and its width w_pix is 40.56 μm. The coordinate system for the following discussion is fixed to the center of the active sensor area. The relation between the Cartesian coordinates (x, y)⊺ and the pixel units (u, v)⊺ is given as

[x, y]⊺ = [(96.5 − u) · w_pix, 0]⊺,   (18)

where u is the pixel number in the range of 1 to 192. Since only the line in the center of the sensor is actively illuminated here, the y- or v-component of the vector is zero.

To illuminate the scene, a collimated laser beam is deflected in the horizontal direction by a single-axis resonant MEMS-scanner with an electrostatic drive. The scanner is the Fovea3D sending mirror, and its driving and monitoring electronics SiMEDri are fabricated at the Fraunhofer Institute for Photonic Microsystems (Refs. 12 and 13). The laser source is a pulsed laser diode with a center wavelength of 659 nm, an optical peak power of 80 mW, and a temporal pulse width of about 15 ns. Although this center wavelength is not commonly used in lidar applications, it greatly simplifies the alignment procedure, and the discussion is valid for arbitrary optical wavelengths. The overall lidar system is a biaxial arrangement, and the distance between the center of the sensor and the center of the MEMS-scanner is 12.5 cm.
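The mapping of Eq. (18) is small enough to sketch directly; the pixel numbering and the 40.56-μm pitch follow Table 1, while the helper name is illustrative:

```python
# Pixel-to-coordinate mapping of Eq. (18) for the 2 x 192 pixel line sensor,
# with the origin fixed to the center of the active sensor area.
w_pix = 40.56e-6                     # pixel pitch in meters

def pixel_to_x(u):
    """x coordinate (in meters) of the center of pixel u = 1..192; y is zero."""
    return (96.5 - u) * w_pix

# Pixels 96 and 97 straddle the origin symmetrically.
print(pixel_to_x(96) + pixel_to_x(97))   # -> 0.0
```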
A list summarizing the component parameters is given in Table 1.

3.2 Signal and Data Processing

A block diagram of the proposed signal and data processing chain is shown in Fig. 7. For every measurement, the sensor outputs a vector N_bin that contains the measured bin values of every pixel and a measurement timestamp T_meas, which is used to determine the current scan angle θ. The zero crossings of the mechanical scan angle θ_mech are monitored using a piezoresistive sensor placed on the torsional bar of the scanner. For the determination of the current scan angle θ, a vector ZC_mech containing these zero crossings is used. As outlined in Sec. 2.4, the determined scan angle θ and the vector of measured bin values N_bin are used to estimate the mean distances μ_r and angles μ_φ as well as their respective variances σ²_r and σ²_φ for every pixel. These estimates are further tested for consistency using the estimated distance Z, the known position of the scanner x_scanner, and its current scan angle θ. If the determined point lies within the uncertainty bound, the measured value is classified as signal S. Otherwise, the measured value is stored in a vector N and labeled as noise, which may further be used, for example, to estimate the background radiation impinging on the sensor.

Fig. 7 Block diagram showing the signal and data processing chain.

Fig. 8 (a) Scene and lighting conditions, (b) raw measurement data of 800 consecutive accumulations, (c) measurement data classified as signal, and (d) measurement data classified as noise.
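The consistency test at the end of the chain can be sketched end-to-end. The reconstruction below intersects the pixel ray (simplified to a pinhole model) with the path-length condition of Eq. (7) by bisection rather than the closed form of Eq. (8), and the ±0.2-deg angular tolerance stands in for the uncertainty bounds of Sec. 2.4; both simplifications, and the function names, are assumptions made for illustration, while the bin width, pixel pitch, focal length, and baseline follow Table 1:

```python
import numpy as np

c = 299_792_458.0                                  # speed of light in m/s
t_bin, w_pix, f_len, x_M = 312.5e-12, 40.56e-6, 8e-3, 0.125

def reconstruct(u, n_bin):
    """Point (X, Z) from pixel u and bin number n_bin via Eq. (7)."""
    R = c * (n_bin - 0.5) * t_bin                  # Eq. (12): bin-center path length
    u_m = (96.5 - u) * w_pix                       # Eq. (18): metric pixel coordinate
    # Total path scanner -> point -> receiver minus R; monotone in Z > 0.
    path = lambda Z: (np.hypot(u_m * Z / f_len - x_M, Z)
                      + np.hypot(u_m * Z / f_len, Z) - R)
    lo, hi = 1e-6, R                               # Z is bounded by the path length
    for _ in range(200):                           # bisection on the monotone path
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if path(mid) < 0 else (lo, mid)
    Z = 0.5 * (lo + hi)
    return u_m * Z / f_len, Z

def is_signal(u, n_bin, theta, tol=np.deg2rad(0.2)):
    """Signal S if the direction scanner -> point matches the scan angle."""
    X, Z = reconstruct(u, n_bin)
    return abs(np.arctan2(X - x_M, Z) - theta) < tol

# A bin consistent with the beam at 3 deg is kept; a stray count is rejected.
print(is_signal(81, 107, np.deg2rad(3.0)))   # -> True
print(is_signal(81, 150, np.deg2rad(3.0)))   # -> False
```

In the real chain, the tolerance would be derived per pixel from σ_r and σ_φ of Eqs. (16) and (17) instead of a fixed angle.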
4 Experimental Verification

To verify the signal and data processing, different experiments were conducted, which are described in the following. A first proof-of-concept measurement with room lighting as background radiation was conducted. The scene with annotated distances to the objects and the lighting conditions are shown in Fig. 8(a). For better visibility in the picture, the laser source was set to a constant optical output power of 10 mW. The raw measurement data of 800 consecutive accumulations are shown in Fig. 8(b). For better visibility, only the first 256 bins are displayed. As expected under room lighting conditions, the three different objects are clearly visible even without any further processing or classification. Applying the signal and data processing as outlined in Sec. 3 yields the histogram of the measured values classified as signal S shown in Fig. 8(c). The measurements labeled as noise N are shown in Fig. 8(d). Comparing Figs. 8(b) and 8(c), it can be seen that most of the noise is removed and only some spurious outliers that randomly satisfy the conditions of the uncertainty bounds remain. Furthermore, the total number of counts in the raw measurement data of 2737 is reduced to 879, which corresponds closely to the number of accumulations, while the peak counts, for example those encountered in pixels 14 and 81, are not altered.

Fig. 9 Same scene as before, but additionally a 1-kW halogen floodlight was used to artificially increase the background illumination. (a) Raw measurement data of 3200 consecutive accumulations. (b) Measurement data classified as signal. (c) Thresholding and peak detection applied to (a). (d) Thresholding and peak detection applied to (b).

A second measurement was conducted with the same scene as shown in Fig.
8(a), but additionally a 1-kW halogen floodlight was used to artificially increase the background radiation. Using a maximum likelihood estimator, the average background rate generated by the additional illumination was estimated to be around 6 MHz per pixel. (For comparison, without the additional illumination, a background rate of about 100 kHz per pixel was estimated.) For better visibility, 3200 consecutive accumulations were used; the raw measurement data are shown in Fig. 9(a). As can be seen, the scene is barely visible and dominated by the background noise. Figure 9(b) shows the histogram of the measured values classified as signal S. Before further processing both histograms with a simple peak detector, the same moving-average filter and thresholding were applied to both histograms. The resulting filtered outputs are shown in Figs. 9(c) and 9(d), where the processed histogram using the data processing as outlined in Sec. 3 closely resembles the measurement without any background radiation shown in Fig. 8(c).

5 Conclusion and Outlook

In conclusion, an extension of the statistical detection process of SPAD-based detectors considering the time-dependent scanning trajectory of MEMS-scanners was derived. Based on this, a signal and data processing strategy was presented that uses principles known from the 3D reconstruction process of triangulation sensors to distinguish between signal and noise/background photons. The signal and data processing strategy was implemented in a proof-of-concept MEMS-based SPAD lidar system, and its functionality was verified experimentally. Furthermore, it was shown that the influence of strong background radiation is largely attenuated and an evaluation of the data is still possible. Utilizing a SPAD-based 2D array detector and a MEMS-scanner with a 2D scan trajectory, the presented method may easily be extended to distinguish between signal and noise/background photons in the 3D case. As briefly mentioned in Sec.
2.3, by checking the epipolar constraint, the amount of data to be transmitted, stored, and processed can be greatly reduced.

Acknowledgment

The authors declare no conflict of interest.

References

1. O. Kumagai et al., “7.3 A 189 × 600 back-illuminated stacked SPAD direct time-of-flight depth sensor for automotive LiDAR systems,” in IEEE Int. Solid-State Circuits Conf. (ISSCC), IEEE, pp. 110–112 (2021).
2. B. E. A. Saleh, M. C. Teich, and J. W. Goodman, Fundamentals of Photonics, Wiley Series in Pure and Applied Optics, 2nd ed., John Wiley & Sons, Chichester (2013).
3. M. Fox, Quantum Optics: An Introduction, Oxford Master Series in Physics: Atomic, Optical, and Laser Physics, Vol. 15, Oxford University Press, Oxford and New York (2007).
4. W. Förstner and B. Wrobel, Photogrammetric Computer Vision: Statistics, Geometry, Orientation, and Reconstruction, Geometry and Computing, Vol. 11, Springer, Cham (2016).
5. R. I. Hartley and P. Sturm, “Triangulation,” Comput. Vision Image Understanding 68(2), 146–157 (1997).
6. P. C. D. Hobbs, Building Electro-Optical Systems: Making It All Work, Wiley Series in Pure and Applied Optics, 1st ed., Wiley-Interscience (2000).
7. I. Prochazka et al., “Photon counting timing uniformity – unique feature of the silicon avalanche photodiodes K14,” J. Mod. Opt. 54(2–3), 141–149 (2007).
8. A. Sripad and D. Snyder, “A necessary and sufficient condition for quantization errors to be uniform and white,” IEEE Trans. Acoust. Speech Signal Process. 25(5), 442–448 (1977).
9. T. Maeda and T. Tokairin, “Analytical expression of quantization noise in time-to-digital converter based on the Fourier series analysis,” IEEE Trans. Circuits Syst. I: Regular Pap. 57(7), 1538–1548 (2010).
10.
Fraunhofer IMS Duisburg, “SPADEYE2 – CMOS lidar sensor,” 2018, https://www.ims.fraunhofer.de/content/dam/ims/de/documents/Downloads/SPADeye2.pdf
11. M. Beer et al., “SPAD-based flash LiDAR sensor with high ambient light rejection for automotive applications,” Proc. SPIE 10540, 105402G (2018).
12. T. Sandner et al., “Hybrid assembled micro scanner array with large aperture and their system integration for a 3D ToF laser camera,” Proc. SPIE 9375, 937505 (2015).
13. Fraunhofer IPMS Dresden, “Driving electronics for the evaluation of 1D and 2D resonant MEMS scanners,” 2017, https://www.ipms.fraunhofer.de/content/dam/ipms/common/products/AMS/simedri-e.pdf

Roman Burkard received his BSc and MSc degrees in electrical engineering and information technology from the University of Duisburg-Essen in 2016 and 2018, respectively, where he is currently working toward his PhD in electrical engineering at the Chair of Electronic Components and Circuits. His research focuses on the development of light detection and ranging systems and on the challenges posed by the combination of single-photon avalanche diodes and a scanning illumination.

Manuel Ligges received his diploma and doctorate in physics from the University of Duisburg-Essen. Until 2019, he worked as a research assistant and assistant professor in the field of solid-state physics. Currently, he leads the group of optical systems at the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg.

Thilo Sandner studied electrical engineering at the Technical University of Dresden, Germany, where he received his doctorate in 2003. Since 2003, he has been working as a scientist at the Fraunhofer IPMS, where he headed the R&D group for MEMS scanning mirrors for more than 10 years. Currently, he works as a project manager and key researcher for the development of innovative MOEMS components, system design, and new applications of photonic microsystems such as MEMS-based LiDAR.
Reinhard Viga received his diploma degree in electrical engineering and the Dr.-Ing. from Gerhard Mercator University of Duisburg in 1990 and 2003, respectively. Since 1990, he was with the Chair of Electromechanical System Design working on medical sensor system typol- ogies and application aspects. Currently, he is the group manager in the Chair of Electronic Components and Ciruits of the University of Duisburg-Essen. Besides sensor technology, his research interests cover the design of embedded systems for medical diagnostics and medical image processing. Anton Grabmaier studied physics at the University of Stuttgart and specialized in semicon- ductor physics and measurement technology. His dissertation was focused on laser diodes. Since 2006, he has been a professor at the University of Duisburg-Essen and is working as the director of the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg. André Merten: Biography is not available. Journal of Optical Microsystems 011005-12 Jan–Mar 2022 Vol. 2(1) http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Journal of Optical Microsystems SPIE

Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection and ranging systems


Publisher
SPIE
Copyright
© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
ISSN
2708-5260
eISSN
2708-5260
DOI
10.1117/1.jom.2.1.011005

Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection and ranging systems

Roman Burkard,a,* Manuel Ligges,b André Merten,c Thilo Sandner,c Reinhard Viga,a and Anton Grabmaiera,b

aUniversity of Duisburg-Essen, Department of Electrical Engineering and Information Technology, Faculty of Engineering, Duisburg, Germany
bFraunhofer Institute for Microelectronic Circuits and Systems, Duisburg, Germany
cFraunhofer Institute for Photonic Microsystems, Dresden, Germany

Abstract. In many applications, there is a great demand for reliable, small, and low-cost three-dimensional imaging systems. Promising candidates for applications such as automotive sensing as well as safe human-robot collaboration are light detection and ranging (lidar) systems based on the direct time-of-flight principle. Especially for covering a large field of view or for long-range capabilities, the previously used polygon scanners are being replaced by microelectromechanical systems (MEMS) scanners. A more recent development is to replace the typically used avalanche photodiodes with single-photon avalanche diodes (SPADs). The combination of both technologies into a MEMS-based SPAD lidar system promises a significant performance increase and cost reduction compared with other approaches. To distinguish between signal and background/noise photons, SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. In this article, a signal and data processing method is proposed that considers the time-dependent scanning trajectory of the MEMS-scanner during the histogram formation. Based on known reconstruction processes used in stereo vision setups, an estimate for an accumulated time-resolved measurement is derived, which allows it to be classified as signal or noise. In addition to the theoretical derivation of the signal and data processing, an implementation is experimentally verified in a proof-of-concept MEMS-based SPAD lidar system.
© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JOM.2.1.011005]

Keywords: light detection and ranging; single-photon avalanche diode; scanning; MEMS; histogram; noise reduction.

Paper 21020SS received Jul. 29, 2021; accepted for publication Jan. 6, 2022; published online Jan. 31, 2022.

1 Introduction

For the realization of reliable, small, and low-cost three-dimensional (3D) imaging systems, light detection and ranging (lidar) systems based on the direct time-of-flight (dtof) principle are considered to be one of the most promising technologies. Especially for automotive applications as well as safe human-robot collaboration, many proof-of-concept systems are currently being built and tested extensively. A recent trend in scanning lidar systems is the replacement of the bulky and expensive polygon scanners by microelectromechanical systems (MEMS) scanners. These MEMS-scanners have the advantage that they can be fabricated using standard CMOS processes, and their incorporation offers the opportunity to greatly reduce the overall size and cost of the system. Furthermore, many systems are now testing the replacement of the typically used avalanche photodiodes with single-photon avalanche diodes (SPADs). A major advantage of SPADs is that they can also be fabricated using standard CMOS processes and that they can be integrated into large photodetector matrices, which significantly increases the spatial resolution of the lidar system. Especially in challenging applications that require long-range capabilities or a large field of view (FOV) to be covered, the combination of a MEMS-scanner and a SPAD detector promises a significant performance increase.
One of the most recent examples of a proof-of-concept MEMS-based SPAD lidar system was reported by Sony in Ref. 1, with a range of up to 300 m.

*Address all correspondence to Roman Burkard, roman.burkard@uni-due.de

Journal of Optical Microsystems, Vol. 2(1), 011005 (Jan–Mar 2022)

Even though MEMS-based SPAD lidar systems are becoming more and more prominent, the authors are not aware of any prior publication that connects the statistical detection process necessary for the working principle of SPAD-based detectors with the time-dependent scanning trajectory of the MEMS-scanner. On the one hand, these systems often utilize a pointwise illumination of the FOV, especially for long-range applications. On the other hand, SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. Since a SPAD cannot distinguish between a signal photon and a noise/background photon, further distinguishing criteria based on the time-dependent scanning trajectory must be considered during the formation of a histogram.

In Sec. 2, the acquisition statistics of a system utilizing a MEMS-scanner driven in resonance are formulated. Furthermore, the concepts used in the reconstruction process of triangulation-based sensors are briefly summarized; these will be used to derive an analogous concept for MEMS-based SPAD lidar systems. Based on the results of Sec. 2, Sec. 3 describes a proof-of-concept MEMS-based SPAD lidar system and proposes an implementation of a signal and data processing chain for the formation of a histogram that exploits the biaxial system configuration and utilizes the time-dependent scanning trajectory of the MEMS-scanner to further discriminate signal and background photons. Section 4 applies the proposed signal and data processing chain to measured values.
For the validation, two different experiments with varying lighting conditions are conducted. Section 5 summarizes the results and provides an outlook on further improvements.

2 Model and Method

The following extends the statistical detection process of SPADs to consider the time-dependent scanning trajectory of the MEMS-scanner. For simplicity, the geometry is reduced to a two-dimensional (2D) problem, but the same arguments and correspondences hold for the 3D case. Furthermore, the imaging optic is assumed to be distortion-free. After a brief overview of a reconstruction method used in triangulation-based stereo vision setups, an analogous concept is derived for the detection process of MEMS-scanner systems. This method combines the spatial and timing information of a SPAD with the time-dependent scanning trajectory of the MEMS-scanner. To provide a further distinguishing feature between signal and noise/background photons, this information is checked for consistency.

2.1 Acquisition Statistics Considering the Time-Dependent Scanning Trajectory of MEMS-Scanners

SPAD-based detectors have to form a histogram by accumulating multiple time-resolved measurements. The statistical detection method for SPAD-based detectors is extensively covered in recent publications, see, e.g., Refs. 2 and 3. In contrast to SPAD-based flash lidar systems, where the number of accumulations per histogram in every pixel is simply given by the laser pulse repetition frequency f_rep and the frame rate, in a system utilizing a scanning illumination it also depends on the scan trajectory, the FOV, and the required spatial resolution. The definition of and correspondences between the mechanical scan angle θ_mech(t), an initial rotation angle θ_0, and the resulting scan angle θ(t) of a MEMS-scanner are shown in Fig. 1. In addition, the normalized direction of the laser emission d_laser, the normal vector n_MEMS of the MEMS-scanner, and the resulting scan direction d_scan are given.
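The mapping from the momentary mirror orientation to the scan direction sketched in Fig. 1 is a plain vector reflection (stated as Eq. (6) in Sec. 2.3). A minimal numerical sketch of this relation for the 2D case — written for illustration, not taken from the authors' implementation, with an arbitrary example angle — is:

```python
import numpy as np

# Vector reflection law, Eq. (6): the scan direction d_scan is the laser
# direction d_laser mirrored at the momentary MEMS normal
# n_MEMS = [cos(theta(t)), sin(theta(t))]^T. The example angle below is
# illustrative and not a value from the paper's setup.

def scan_direction(theta, d_laser):
    """Return d_scan = d_laser - 2 (n_MEMS . d_laser) n_MEMS in 2D."""
    n = np.array([np.cos(theta), np.sin(theta)])   # mirror normal n_MEMS
    d = np.asarray(d_laser, dtype=float)
    return d - 2.0 * (n @ d) * n

# A mirror normal at 45 deg folds a beam travelling along +x into -y:
d_scan = scan_direction(np.pi / 4, [1.0, 0.0])
```

Since the reflection only flips the component along the normal, |d_scan| = |d_laser|, so a normalized laser direction stays normalized.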
MEMS scan The determination of the mean number of accumulations for a MEMS-scanner driven in res- onance can be considered as a sampling problem. The sampling points of the mechanical scan angle θ , which can be represented in the time domain by a periodic cosinusoidal oscillation, mech are given by the reciprocal of the laser pulse repetition frequency f and may be expressed as rep EQ-TARGET;temp:intralink-;e001;116;101θ ¼ θ · cos 2πf · þ φ ; (1) mech;k mech;max mech 0 rep Journal of Optical Microsystems 011005-2 Jan–Mar 2022 Vol. 2(1) Burkard et al.: Histogram formation and noise reduction in biaxial MEMS-based SPAD light detection. . . Fig. 1 Definition and correspondences between the mechanical scan angle θ ðtÞ, an initial mech rotation angle θ , and the resulting scan angle θðtÞ of an MEMS-scanner. In addition, the normal vector of the MEMS-scanner n ðθðtÞÞ, the direction of the laser emission d , and the optical MEMS laser scan direction d are given. scan Fig. 2 Exemplary distribution of the amount of measurements considering the scan trajectory of an MEMS-scanner driven in resonance. The distribution is determined for a frequency ratio of f ∕f ¼ 0.0785. mech rep where k is an integer, φ represents an arbitrary constant phase, and θ is the maximum 0 mech;max mechanical scan angle. In the case of an electrostatic driven scanner, the mechanical scan angle is a function of the geometry of the scanner, its driving voltage, and its scan frequency f . Using mech the sampled points, a distribution of the number of measurements may be stated or, by weighting it with the number of measurements, a mean number of accumulations per histogram and pixel may be obtained. For a one-dimensional oscillation, a frequency ratio of f ∕f ¼ 0.0785 and mech rep 400 consecutive accumulations both representations are exemplarily shown in Fig. 2. 
2.2 3D Reconstruction in Triangulation Sensors

The following gives a brief summary of the reconstruction of 3D points using a stereo camera setup. The reconstruction based on a purely geometric solution is used here as an example because of its simplicity and to illustrate the basic concepts for the subsequent discussion. The basic geometric relations and notations required for the geometric solution are shown in Fig. 3. Assume a 3D point x is observed from two different camera locations, where the projection centers O′ and O″ are separated by the baseline b. This point corresponds to two image points u′ = P′(x) and u″ = P″(x) in the image planes, where the correspondence is given by the camera-specific projection matrices P′ and P″. The line equations l′ and l″, given in Eqs. (2) and (3), can then be constructed in 3D space. Their origins are the respective image points, and their direction vectors d′ and d″ are determined using the respective projection centers. If the epipolar constraint l′ · (b × l″) = 0 is fulfilled, both lines l′ and l″ intersect in 3D space and an exact solution for the scalars k_1 and k_2 exists. If the epipolar constraint is violated, the geometrical solution solves the system of linear equations given in Eq. (4) and estimates the 3D point x as the midpoint of the shortest line segment that joins both lines l′ and l″ (Ref. 4). More sophisticated estimates for the point x take into account uncertainties in the imaging process and can be shown to be statistically optimal. Examples of these estimators can be found in Refs. 4 and 5.

Fig. 3 Schematic representation of the setup for the 3D reconstruction process used in triangulation sensors based on the stereo matching principle.
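The purely geometric solution described above can be condensed into a short sketch (written from Eqs. (2)–(4); the example rays are arbitrary, and the code is illustrative rather than one of the statistically optimal estimators of Refs. 4 and 5):

```python
import numpy as np

# Midpoint triangulation from Eqs. (2)-(4): build both viewing rays
# l' = u' + k1*d' and l'' = u'' + k2*d'', solve the 2x2 linear system of
# Eq. (4) for k1 and k2, and return the midpoint of the shortest segment
# joining the two rays.

def triangulate_midpoint(u1, d1, u2, d2):
    u1, d1 = np.asarray(u1, float), np.asarray(d1, float)
    u2, d2 = np.asarray(u2, float), np.asarray(d2, float)
    # Eq. (4): (u1 + k1*d1 - u2 - k2*d2) . d1 = 0, and the same with d2.
    A = np.array([[d1 @ d1, -(d2 @ d1)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(u2 - u1) @ d1, (u2 - u1) @ d2])
    k1, k2 = np.linalg.solve(A, b)
    p1 = u1 + k1 * d1          # closest point on the first ray
    p2 = u2 + k2 * d2          # closest point on the second ray
    return 0.5 * (p1 + p2)     # midpoint estimate of x

# Two rays constructed to intersect exactly in x = (1, 2, 5):
x_hat = triangulate_midpoint([0, 0, 0], [1, 2, 5], [3, 0, 0], [-2, 2, 5])
```

When the epipolar constraint is satisfied, the two closest points coincide and the midpoint is the exact intersection; under noise, it becomes the geometric compromise described in the text.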
l′ = u′ + k_1 · d′,   (2)

l″ = u″ + k_2 · d″,   (3)

(u′ + k_1 · d′ − u″ − k_2 · d″) · d′ = 0 and (u′ + k_1 · d′ − u″ − k_2 · d″) · d″ = 0.   (4)

Apart from the estimation of the intersection point, the epipolar constraint can be used to reduce the search space for correspondences in the image pairs from a 2D space to a one-dimensional (1D) space.

2.3 3D Reconstruction in dtof Measurement

In the following, a reconstruction method for MEMS-based scanning lidar systems utilizing the dtof method is outlined. The reconstruction process may be formulated analogously to the reconstruction presented in the previous subsection, where one of the projection centers is replaced by the MEMS-scanner. The top view of this geometry is shown in Fig. 4. The global coordinate frame is fixed to the center of the sensor, and the optical axis of the receiving optics coincides with the z axis. The optical axis of the transmitter is defined by the MEMS-scanner at a mechanical scan angle θ_mech of zero. The rotation angle θ_0 of the MEMS-scanner is chosen such that the optical axes of the transmitter and receiver intersect at half the maximum distance, defined here as the working distance W. If the MEMS-scanner and the projection center are separated by a baseline b = [x_M0, 0, 0]^T, the rotation angle θ_0 may be expressed as

θ_0 = π/4 − arctan(x_M0 / (2W)).   (5)

Using the time-dependent deflection angle θ(t), the normal vector n_MEMS = [cos(θ(t)), sin(θ(t))]^T may be defined. The normalized reflection direction d_scan follows from the vector reflection law and is determined as

d_scan = d_laser − 2 · (n_MEMS · d_laser) · n_MEMS,   (6)

where d_laser is the normalized direction of the laser emission.

Fig. 4 Schematic representation of the proof-of-concept lidar system and necessary geometric definitions.

Using the approximation of an ideal projection with a thin lens, the line l′ may be expressed analogously to Eq. (2). In the 1D pointwise scanning case with an active illumination and a single line sensor, the epipolar constraint must be fulfilled and may be stated as d_scan · (b × l′) = 0. The importance of the epipolar constraint becomes obvious when applying it to the case of a 2D pointwise scanning system with an active illumination and a 2D array detector. Since the current scan angle, and therefore the scan direction d_scan, is known, only the pixels fulfilling this constraint must be read out, which greatly reduces the amount of data that needs to be transferred, stored, and processed.

In the absence of background radiation and noise in the sensor, invoking the epipolar constraint would be sufficient to uniquely specify the point x. Since this is usually not the case, a further criterion needs to be specified. A measurement utilizing the dtof method contains timing information for every pixel. Considering the geometry shown in Fig. 4, the time of flight t_TOF must satisfy

c · t_TOF = |x − x_scanner| + |O − x|,   (7)

where c is the speed of light through the medium and x_scanner is the position of the scanner. Without any prior knowledge of the scene, an analytical solution for the distance Z can be derived for a given pixel position u_x, a measured time of flight t_TOF, and a scan angle θ. For the 2D geometry, this solution is given in Eq. (8). Combining the distance Z with the imaging equation, given in Eq. (9), the 2D point x = [X, Z]^T can be determined:

Z = [((c · t_TOF)² − x_M²) · (c · t_TOF · √(1 + tan²θ) − x_M · tanθ) · (1 − sin²θ)] / (2 · [(c · t_TOF)² − (x_M · sinθ)²]),   (8)

X = u_x · (f − Z) / f.   (9)

2.4 Spatial Uncertainties in the dtof Measurement

Spatial uncertainties in the dtof measurement can arise from both spatial and temporal uncertainties of the underlying measurement. Spatial uncertainties arise from the uncertainty involved in the determination of the current scan angle and from the finite pixel size. Temporal uncertainties arise from the pulse-to-pulse timing jitter between subsequent laser pulse emissions and from the minimum resolvable time given by the time-to-digital converter of the detector. The current scan angle is monitored with a sampling frequency much higher than the oscillation frequency; therefore, this uncertainty is neglected in the following. In addition, we consider the case where the pulse-to-pulse timing jitter between the laser pulse emissions is less than the minimum resolvable time, so this is also neglected.

The finite pixel size gives rise to a spatial uncertainty in the x direction of the received photon. Under the assumption of a laterally uniform pixel response, this yields a uniform distribution over the active pixel area. The mean μ_pix is equal to the center of the pixel and can be expressed, for the sensor considered later, as

μ_pix = (96.5 − u) · w_pix,   (10)

where u is the pixel coordinate and w_pix is the spacing between two pixels, as given schematically in Fig. 6. Its variance σ²_pix is given as

σ²_pix = d²_SPAD / 12,   (11)

where d_SPAD is the diameter of a SPAD. Usually, the assumption of a uniform response is not valid for conventional avalanche photodiodes, but for SPADs the uniformity of the response to photons impinging at different positions in the active area is a key parameter. Through careful design of the device, a uniform pixel response, in terms of the photon detection efficiency and the detection delay, may be achieved.

The minimum resolvable time, represented by the bin width t_bin, gives rise to a spatial uncertainty in the z direction of the received photon. The discretized time of flight t_TOF is equal to the bin number N_bin multiplied by the minimum resolvable time t_bin of the time-to-digital converter. This discretization causes a quantization error. In a time-to-digital converter, and with only minor assumptions about the underlying statistics of the photon detection, the time of arrival within a bin is uniformly distributed. A necessary and sufficient condition for this may be found in Ref. 8, and its application to a commonly used time-to-digital converter architecture may be found in Ref. 9. Therefore, the mean μ_TOF is the center of the bin, given as

μ_TOF = (N_bin − 1/2) · t_bin,   (12)

and its variance σ²_TOF may be expressed as

σ²_TOF = t²_bin / 12.   (13)

In the following, the first-order second-moment method is used to propagate the uncertainties of the measured pixel coordinate u and its time of flight t_TOF from the image space into the object space. To achieve this, the point x = [X, Z]^T is first expressed in polar coordinates using the known correspondences. The mean of the 2D point in (r, φ) space is then

μ_r = √( X(μ_pix, μ_TOF)² + Z(μ_TOF)² ),   (14)

and

μ_φ = atan2( Z(μ_TOF), X(μ_pix, μ_TOF) ),   (15)

where atan2 is the two-argument arctangent function. Under the assumption of negligible covariance between μ_pix and μ_TOF, which is the case if the pixel response is assumed to be uniform in terms of the photon detection efficiency and the detection delay, applying the first-order second-moment method to the point in (r, φ) space results in an uncertainty in the r direction of

σ²_r = (dr/du)² · σ²_pix + (dr/dt_TOF)² · σ²_TOF,   (16)

and in the φ direction of

σ²_φ = (dφ/du)² · σ²_pix + (dφ/dt_TOF)² · σ²_TOF.   (17)

As an example, resulting uncertainty bounds in Cartesian coordinates for a bin width t_bin of 312.5 ps and different pixel numbers u and distances Z are shown in Fig. 5.

Fig. 5 Exemplary uncertainty bounds in Cartesian coordinates. In (a), these are given for different pixel numbers u and bin numbers N_bin. The uncertainty bound for the pixel number u = 120 and bin number N_bin = 16 is given in (b).

3 System Description and Implementation of the Signal and Data Processing

After a brief description of a proof-of-concept MEMS-based SPAD lidar system, the implementation of a signal and data processing chain based on the results of the previous section follows.

3.1 Sensor and System Description

The sensor used here is the SPADEye2 from the Fraunhofer Institute for Microelectronic Circuits and Systems.
It is a 2 × 192 pixel SPAD-based dtof line sensor, where only one of these lines is actively illuminated. For timing measurements, a time-to-digital converter with a resolution of t_bin = 312.5 ps and a full range of 1.28 μs, which corresponds to a total dtof detection range of 192 m, is implemented in each pixel. As schematically depicted in Fig. 6, each pixel consists of four vertically arranged SPADs with a diameter d_SPAD of 12 μm. The height h_pix of a pixel is 209.6 μm and its width w_pix is 40.56 μm. The coordinate system for the following discussion is fixed to the center of the active sensor area. The relation between the Cartesian coordinates (x, y)^T and the pixel units (u, v)^T is given as

[x, y]^T = [(96.5 − u) · w_pix, 0]^T,   (18)

where u is the pixel number in the range of 1 to 192.

Fig. 6 Schematic representation of the SPAD-based dtof line sensor, its dimensions, and the placement of the origin of the reference coordinate frame.

Table 1 Summary of the components and their parameters used in the proof-of-concept system.

  Component                      Parameter                        Symbol        Value
  Laser source                   Pulse repetition frequency       f_rep         20 kHz
                                 Wavelength                       λ             659 nm
                                 Peak optical power (pulsed)      —             80 mW
                                 Temporal pulse width             —             14.9 ns
  Detector SPADEye2              Pixels                           —             2 × 192 pixels
                                 SPAD diameter                    d_SPAD        12 μm
                                 Pixel width                      w_pix         40.56 μm
                                 Pixel height                     h_pix         209.6 μm
                                 Bin width                        t_bin         312.5 ps
                                 TDC full range                   —             1.28 μs
  MEMS-scanner (Fovea3D          Mirror aperture                  —             3.3 mm × 3.6 mm
  sending mirror with SiMEDri    Oscillation frequency            f_mech        1570 Hz
  driving electronics)           Mech. torsion amplitude          θ_mech,max    11.9°
                                 Reflectivity at 659 nm           —             81.25%
  Receiver optics                Focal length                     f             8 mm
                                 f-number                         —             2
                                 Optical bandpass filter (FWHM)   —             10 nm
Since only the line in the center of the sensor is actively illuminated here, the y- or v-component of the vector is zero.

To illuminate the scene, a collimated laser beam is deflected in the horizontal direction by a single-axis resonant MEMS-scanner with electrostatic drive. The scanner is the Fovea3D sending mirror, and its driving and monitoring electronics SiMEDri are fabricated at the Fraunhofer Institute for Photonic Microsystems (Refs. 12 and 13). The laser source is a pulsed laser diode with a center wavelength of 659 nm, an optical peak power of 80 mW, and a temporal pulse width of 15 ns. Although this center wavelength is not commonly used in lidar applications, it greatly simplifies the alignment procedure, and the discussion is valid for arbitrary optical wavelengths. The overall lidar system is a biaxial arrangement, and the distance between the center of the sensor and the center of the MEMS-scanner is 12.5 cm. A list summarizing the component parameters is given in Table 1.

3.2 Signal and Data Processing

A block diagram of the proposed signal and data processing chain is shown in Fig. 7. For every measurement, the sensor outputs a vector N_bin that contains the measured bin values of every pixel and a measurement timestamp T_meas, which is used to determine the current scan angle θ. The zero crossings of the mechanical scan angle θ_mech are monitored using a piezoresistive sensor placed on the torsional bar of the scanner. For the determination of the current scan angle θ, a vector ZC_mech containing these zero crossings is used. As outlined in Sec. 2.4, the determined scan angle θ and the vector of measured bin values N_bin are used to estimate the mean distances μ_r and angles μ_φ as well as their respective variances σ²_r and σ²_φ for every pixel. These estimates are further tested for consistency using the estimated distance Z, the known position of the scanner x_scanner, and its current scan angle θ. If the determined point lies within the uncertainty bound, the measured value is classified as signal S. Otherwise, the measured value is stored in a vector N and labeled as noise, which may further be used, for example, to estimate the background radiation impinging on the sensor.

Fig. 7 Block diagram showing the signal and data processing chain.

4 Experimental Verification

To verify the signal and data processing, different experiments were conducted, which are described in the following. A first proof-of-concept measurement with room lighting as background radiation was conducted. The scene with annotated distances to the objects and the lighting conditions are shown in Fig. 8(a). For better visibility in the picture, the laser source was set to a constant optical output power of 10 mW. The raw measurement data of 800 consecutive accumulations are shown in Fig. 8(b); for better visibility, only the first 256 bins are displayed. As expected under room lighting conditions, the three different objects are clearly visible even without any further processing or classification. Applying the signal and data processing as outlined in Sec. 3 yields the histogram of the measured values classified as signal S shown in Fig. 8(c). The measurements labeled as noise N are shown in Fig. 8(d).

Fig. 8 (a) Scene and lighting conditions, (b) raw measurement data of 800 consecutive accumulations, (c) measurement data classified as signal, and (d) measurement data classified as noise.

Comparing Figs. 8(b) and 8(c), it can be seen that most of the noise is removed and only some spurious outliers that randomly satisfy the conditions of the uncertainty bounds remain. Furthermore, the total number of counts in the raw measurement data is reduced from 2737 to 879, which corresponds closely to the number of accumulations, while the peak counts, for example those encountered in pixels 14 and 81, are not altered.

A second measurement was conducted with the same scene as shown in Fig. 8(a), but additionally a 1-kW halogen floodlight was used to artificially increase the background radiation. Using a maximum likelihood estimator, the average background rate generated by the additional illumination was estimated to be around 6 MHz per pixel. (For comparison, without the additional illumination a background rate of about 100 kHz per pixel was estimated.) For better visibility, 3200 consecutive accumulations were used; the raw measurement data are shown in Fig. 9(a). As can be seen, the scene is barely visible and dominated by the background noise. Figure 9(b) shows the histogram of the measured values classified as signal S. Before further processing both histograms with a simple peak detector, the same moving average filter and thresholding were applied to both histograms. The resulting filtered outputs are shown in Figs. 9(c) and 9(d), where the processed histogram using the data processing as outlined in Sec. 3 closely resembles the measurement without any background radiation shown in Fig. 8(c).

Fig. 9 Same scene as before, but additionally a 1-kW halogen floodlight was used to artificially increase the background illumination. (a) Raw measurement data of 3200 consecutive accumulations. (b) Measurement data classified as signal. (c) Thresholding and peak detection applied to (a). (d) Thresholding and peak detection applied to (b).
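The classification applied in this section can be condensed into a minimal numerical sketch. It is not the authors' implementation: the 3σ gate, the pixel-center construction of the synthetic event, and the sign conventions of the 2D geometry are assumptions for illustration, while Eqs. (7)–(12) and the parameter values of Table 1 and Sec. 3.1 are taken from the text.

```python
import math

# Sketch of the signal/noise test of Sec. 3.2 for the 2D geometry of
# Fig. 4: reconstruct the point from pixel and bin (Eqs. (8)-(12)) and
# accept it as signal only if it lies close to the illuminated direction.

C = 299_792_458.0      # speed of light, m/s
T_BIN = 312.5e-12      # TDC bin width (Table 1)
W_PIX = 40.56e-6       # pixel pitch (Table 1)
D_SPAD = 12e-6         # SPAD diameter (Table 1)
F = 8e-3               # focal length of the receiver optics (Table 1)
X_M = 0.125            # baseline between sensor and scanner, m (Sec. 3.1)

def depth(t_tof, theta):
    """Eq. (8): closed-form distance Z from time of flight and scan angle."""
    ct = C * t_tof
    num = ((ct**2 - X_M**2)
           * (ct * math.sqrt(1.0 + math.tan(theta)**2) - X_M * math.tan(theta))
           * (1.0 - math.sin(theta)**2))
    return num / (2.0 * (ct**2 - (X_M * math.sin(theta))**2))

def classify(u, n_bin, theta):
    """Label one accumulated (pixel, bin) event as 'signal' or 'noise'."""
    mu_pix = (96.5 - u) * W_PIX            # Eq. (10): pixel center
    mu_tof = (n_bin - 0.5) * T_BIN         # Eq. (12): bin center
    z = depth(mu_tof, theta)
    x_cam = mu_pix * (F - z) / F           # Eq. (9): point seen by the pixel
    x_beam = X_M + z * math.tan(theta)     # point illuminated at this scan angle
    # Pixel-size uncertainty projected into the scene, cf. Eq. (11):
    sigma_x = abs((F - z) / F) * D_SPAD / math.sqrt(12.0)
    return "signal" if abs(x_cam - x_beam) <= 3.0 * sigma_x else "noise"

# Synthetic event: a target at Z = 10 m on the center ray of pixel u = 40.
u, z_true = 40, 10.0
x_true = (96.5 - u) * W_PIX * (F - z_true) / F    # invert Eq. (9)
theta = math.atan((x_true - X_M) / z_true)        # scan angle hitting the target
t_true = (z_true / math.cos(theta) + math.hypot(x_true, z_true)) / C  # Eq. (7)
n_bin = math.ceil(t_true / T_BIN)                 # bin in which the echo lands
```

For this event, classify(u, n_bin, theta) yields "signal", while an echo arriving, e.g., 80 bins late is rejected as "noise". Because the baseline is small compared with the range, events only a few bins off the expectation still pass the gate, which is consistent with the spurious outliers visible in Fig. 8(c).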
5 Conclusion and Outlook

In this article, an extension of the statistical detection process of SPAD-based detectors considering the time-dependent scanning trajectory of MEMS-scanners was derived. Based on this, a signal and data processing strategy was presented that uses principles known from the 3D reconstruction process of triangulation sensors to distinguish between signal and noise/background photons. The signal and data processing strategy was implemented in a proof-of-concept MEMS-based SPAD lidar system, and its functionality was verified experimentally. Furthermore, it was shown that the influence of strong background radiation is largely attenuated and an evaluation of the data remains possible.

Utilizing a SPAD-based 2D array detector and a MEMS-scanner with a 2D scan trajectory, the presented method may easily be extended to distinguish between signal and noise/background photons in the 3D case. As briefly mentioned in Sec. 2.3, by checking the epipolar constraint, the amount of data to be transmitted, stored, and processed can be greatly reduced.

Acknowledgment

The authors declare no conflict of interest.

References

1. O. Kumagai et al., "7.3 A 189 × 600 back-illuminated stacked SPAD direct time-of-flight depth sensor for automotive LiDAR systems," in IEEE Int. Solid-State Circuits Conf. (ISSCC), IEEE, pp. 110–112 (2021).
2. B. E. A. Saleh, M. C. Teich, and J. W. Goodman, Fundamentals of Photonics, Wiley Series in Pure and Applied Optics, 2nd ed., John Wiley & Sons, Chichester (2013).
3. M. Fox, Quantum Optics: An Introduction, Oxford Master Series in Physics: Atomic, Optical, and Laser Physics, Vol. 15, Oxford University Press, Oxford and New York (2007).
4. W. Förstner and B. Wrobel, Photogrammetric Computer Vision: Statistics, Geometry, Orientation, and Reconstruction, Geometry and Computing, Vol. 11, Springer, Cham (2016).
5. R. I. Hartley and P. Sturm, "Triangulation," Comput. Vision Image Understanding 68(2), 146–157 (1997).
6. P. C. D. Hobbs, Building Electro-Optical Systems: Making It All Work, Wiley Series in Pure and Applied Optics, 1st ed., Wiley-Interscience (2000).
7. I. Prochazka et al., "Photon counting timing uniformity – unique feature of the silicon avalanche photodiodes K14," J. Mod. Opt. 54(2–3), 141–149 (2007).
8. A. Sripad and D. Snyder, "A necessary and sufficient condition for quantization errors to be uniform and white," IEEE Trans. Acoust. Speech Signal Process. 25(5), 442–448 (1977).
9. T. Maeda and T. Tokairin, "Analytical expression of quantization noise in time-to-digital converter based on the Fourier series analysis," IEEE Trans. Circuits Syst. I: Regular Pap. 57(7), 1538–1548 (2010).
10. Fraunhofer IMS Duisburg, "SPADEYE2 – CMOS lidar sensor," 2018, https://www.ims.fraunhofer.de/content/dam/ims/de/documents/Downloads/SPADeye2.pdf
11. M. Beer et al., "SPAD-based flash LiDAR sensor with high ambient light rejection for automotive applications," Proc. SPIE 10540, 105402G (2018).
12. T. Sandner et al., "Hybrid assembled micro scanner array with large aperture and their system integration for a 3D ToF laser camera," Proc. SPIE 9375, 937505 (2015).
13. Fraunhofer IPMS Dresden, "Driving electronics for the evaluation of 1D and 2D resonant MEMS scanners," 2017, https://www.ipms.fraunhofer.de/content/dam/ipms/common/products/AMS/simedri-e.pdf

Roman Burkard received his BSc and MSc degrees in electrical engineering and information technology from the University of Duisburg-Essen in 2016 and 2018, respectively, where he is currently working toward his PhD in electrical engineering at the Chair of Electronic Components and Circuits.
His research focuses on the development of light detection and ranging systems and on the challenges posed by the combination of single-photon avalanche diodes and a scanning illumination.

Manuel Ligges received his diploma and doctorate in physics from the University of Duisburg-Essen. Until 2019, he worked as a research assistant and assistant professor in the field of solid-state physics. Currently, he leads the Optical Systems group at the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg.

Thilo Sandner studied electrical engineering at the Technical University of Dresden, Germany, where he received his doctorate in 2003. Since 2003, he has been working as a scientist at the Fraunhofer IPMS, where he headed the R&D group for MEMS scanning mirrors for more than 10 years. Currently, he works as a project manager and key researcher for the development of innovative MOEMS components, system design, and new applications of photonic microsystems such as MEMS-based LiDAR.

Reinhard Viga received his diploma degree in electrical engineering and his Dr.-Ing. from Gerhard Mercator University of Duisburg in 1990 and 2003, respectively. Since 1990, he has been with the Chair of Electromechanical System Design, working on medical sensor system typologies and application aspects. Currently, he is the group manager in the Chair of Electronic Components and Circuits of the University of Duisburg-Essen. Besides sensor technology, his research interests cover the design of embedded systems for medical diagnostics and medical image processing.

Anton Grabmaier studied physics at the University of Stuttgart and specialized in semiconductor physics and measurement technology. His dissertation focused on laser diodes. Since 2006, he has been a professor at the University of Duisburg-Essen and works as the director of the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in Duisburg.

André Merten: Biography is not available.


Journal of Optical Microsystems, SPIE

Published: Jan 1, 2022

Keywords: light detection and ranging; single-photon avalanche diode; scanning; MEMS; histogram; noise reduction
