Open Access | 10 January 2023
Technical concepts of automotive LiDAR sensors: a review
Hanno Holzhüter, Jörn Bödewadt, Shima Bayesteh, Andreas Aschinger, Holger Blume
Abstract

Automotive LiDAR sensors are seen by many as the enabling technology for higher-level autonomous driving functionalities. Different concepts to design such a sensor can be found in the industry. Some have already been integrated into consumer cars while many others promise to be in mass production soon to become cost-effective enough for broad deployment. However, automotive LiDAR sensors are still evolving and a variety of sensor designs are pursued by different companies. Here, we construct the automotive LiDAR design space to visually depict system design options for these sensors. Subsequently, we exemplify the concepts with drawings that can be found in published patent applications (focusing on scanning mechanisms and scan patterns) before discussing their advantages and challenges.

1.

Introduction

Over the past years, a variety of different automotive light detection and ranging (LiDAR) concepts and sensors have emerged. All aim at fulfilling the challenging requirements of car makers outlined in Table 1. This work concentrates on system design choices with the intention to give an overview of available options when building an automotive LiDAR sensor. The “automotive LiDAR design space” of Sec. 2 depicts these options visually. In the subsequent Secs. 3–5, we examine concepts from drawings of published patent applications. These drawings are sometimes difficult to grasp. We therefore illustrate our understanding with a number of accompanying figures. The procedure is shown in Fig. 1 with the help of a notional LiDAR concept. For demonstration purposes, the drawing in Fig. 1(a) from published patent application Ref. 2 was taken out of context. We pretend it shows a LiDAR sensor mounted in the grill of a car. The cone indicates its field of view (FOV).

Table 1

Automotive LiDAR sensor requirements (based on Ref. 1).

Parameter | Symbol | Short range | Long range
FOV | FOV_hor × FOV_vert | 120 deg × 30 deg | 40 deg × 20 deg
Hor. and vert. resolution | Δφ, Δϑ | 1 deg | 0.1 deg to 0.15 deg
Range (10% reflectivity target) | d_10% | 60 m | 200 m
Range resolution | Δd | <5 cm | 5 cm
Frame rate | f | 25 Hz | 25 Hz
Temperature range | — | AEC-Q100 grade 2 (−40°C to 105°C) or better | AEC-Q100 grade 2 (−40°C to 105°C) or better
Reliability | — | AEC-Q100 | AEC-Q100
Laser safety | — | EN 60825-1 Class 1 | EN 60825-1 Class 1
Size | — | 100 to 200 cm³ | 100 to 200 cm³
Power consumption | — | <10 W | <10 W
System cost (in USD) | — | <$50 | $100 to $200

Fig. 1

Illustration of an imaginary LiDAR concept. (a) Drawing of a sensor concept from published patent application Ref. 2. Labels (numbers 2, 4, 6, …) will be annotated when helpful for the explanations but otherwise left blank. (b) Overlap plot in the far field with receiver spot in turquoise and emitter spot in red. (c) Image of the concept basics showing only a minimal number of components to highlight important aspects of patent application drawings. Black arrows indicate how and which parts are moving. (d) Visualization of the setup to explain how data acquisition patterns as in Fig. 1(e) are derived. (e) Depiction of a data acquisition pattern on a perpendicular wall. The sequence in time is illustrated by shades of red. t_i indicates the current data acquisition point, t_{i-1} the previous, and t_{i+1} the next point. Here, a scan from left to right is shown. The entire FOV is conceptually represented by all black circles.


To visualize the overlap O between emitter (red) and receiver (turquoise) beams, we present an overlap plot, as in Fig. 1(b), where useful (mostly for systems with separated emitters and receivers as introduced later in Sec. 2.4). We also provide a basic principles illustration to highlight the aspects of a sensor concept that are covered in this work [see Fig. 1(c)]. The setup to visualize the temporal data acquisition/scan sequence is shown in Fig. 1(d). It shows a sensor pointing toward a perpendicular wall to illustrate how we derive the scan pattern plotted in Fig. 1(e) (here, the scanning direction is from left to right).

Excluded from this work are extensive discussions about individual components of LiDAR sensors, such as emitters, receivers, or optics, and details on achievable distance resolution, signal-to-noise ratio (SNR) derivations, and so on. For more on these topics, see Refs. 3–6. For general introductions to the field of automotive LiDAR sensors, see Refs. 7–14.

2.

LiDAR Design Space

Figure 2 shows the automotive LiDAR design space with three axes that represent the main aspects of a LiDAR, namely, FOV coverage, measurement principle, and wavelength. Even though theoretically possible, not all combinations in Fig. 2 are technically viable. Every option along the three axes of the design space is briefly introduced in Secs. 2.1–2.3. Later on, the FOV coverage axis will serve as a guideline for the discussion of different LiDAR sensor concepts in Secs. 3–5.

Fig. 2

Automotive LiDAR design space.


2.1.

Field of View Coverage

FOV coverage describes the possibilities to cover a two-dimensional (2D) FOV, namely, FOV_hor × FOV_vert with resolution Δφ, Δϑ from Table 1. The options are separated into three categories: classical/mechanical scanning, semi solid-state scanning, and solid-state approaches.

2.1.1.

Classical/mechanical scanning

The term classical scanning in this work describes every sensor that utilizes rotating/moving parts, which are driven by direct current (DC) or stepper motors and, consequently, need to cope with friction and abrasion. The benefit of using macroscopic scanning mechanisms, such as rotating mirrors or a spinning sensor, is the possibility to cover wide FOVs with relative ease. Sensor concepts utilizing motors are discussed in Sec. 3.

2.1.2.

Semi solid-state scanning

A system categorized as semi solid-state typically has two characteristics: it is operated in the elastic regime of its material's stress–strain behavior, where there is no wear/aging (hence, no need for lubrication), and it oscillates in resonance, which makes it robust against shocks. Representatives of semi solid-state scanning systems are, for example, micro-electro-mechanical system (MEMS) mirrors. However, the boundaries of the semi solid-state category are not well defined. Systems that do not oscillate in resonance are sometimes still referred to as semi solid-state; see Sec. 4.

2.1.3.

Solid-state approaches

Solid-state sensors have no moving parts. They either do not scan or use a mechanism that changes parameters, such as phase or wavelength (e.g., electronically), allowing for the emission and detection of light from different angles of their FOV (Sec. 5). Representatives of solid-state scanning mechanisms in Fig. 2 are flash (no scanning), spectral deflection, as well as optical phased array (OPA).

2.2.

Measurement Principle

The term “measurement principle” refers to the concept of how the sensor gathers range (and additional) information by emitting light that gets reflected off an object and is subsequently received by the detector. We distinguish between time-of-flight (TOF) with analog and digital processing of the received signal as well as frequency modulated continuous wave (FMCW). An overview including electrical circuits for each principle is provided in Ref. 15. Amplitude modulated continuous wave applications are not covered in this work. An introduction to them can be found in Ref. 16.

2.2.1.

Time-of-flight analog

Analog TOF LiDAR sensors measure the round trip time Δt of a single pulse (which can be converted to distance with d = Δt · c/2, where c is the speed of light). Figure 3 shows a TOF measurement cycle. To determine the received optical power P_rec, the LiDAR equation17–20 from Eq. (1) can be used. It relates P_rec to the emitted power P_em, target reflectivity ρ, optical system efficiency η_sys, one-way atmospheric absorption τ_atm, area of the detector entrance pupil A_de, and distance d. For Eq. (1) to be applicable, we assume an emitter beam with uniform beam profile and target dimensions far bigger than the emitter's and receiver's spot sizes. In addition, we expect Lambertian reflection properties of the target material and a complete overlap21,22 of emitter and receiver spots on the target as shown in Fig. 4.
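
Equation (1) itself is not reproduced in this excerpt. As a hedged reconstruction from the quantities listed above, the commonly used form of the LiDAR equation for a fully illuminated Lambertian target reads as follows (the exact grouping of factors in the original Eq. (1) may differ):

```latex
% Hedged reconstruction of Eq. (1); tau_atm enters squared for the round trip.
P_{\mathrm{rec}} = P_{\mathrm{em}} \cdot \frac{\rho}{\pi} \cdot \eta_{\mathrm{sys}}
                   \cdot \tau_{\mathrm{atm}}^{2} \cdot \frac{A_{\mathrm{de}}}{d^{2}}
```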

Fig. 3

Artist illustration of the TOF measurement principle. (a) Pulse emission and start of time measurement. (b) Pulse before reflection. (c) Reflected pulse on its way back to sensor. (d) Detection of pulse, determination of Δt.


Fig. 4

Illustration of assumptions for Eq. (1).


The distinction between “analog” and “digital” TOF describes how the received signal is processed. Analog TOF refers to sensors whose detector output (the photocurrent) has a linear relation to the impinging optical energy. Such detectors are, e.g., avalanche photodiodes (APDs) used in conjunction with a transimpedance amplifier, a comparator, and a time-to-digital converter (TDC) as outlined in Ref. 23. The processed photocurrent is a voltage signal similar to the black bold line shown in Fig. 5. It is set from low to high whenever the photocurrent rises above the detection threshold and vice versa. TDCs24 are used to determine when the threshold crossing happened. A distance resolution of Δd = 5 cm (Table 1) requires TDC clock frequencies of f_TDC = c/(2Δd) = 1/(333 ps) ≈ 3 GHz.
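
As a quick numeric check of the relation above, the following minimal Python sketch (variable and function names are ours, not from the cited references) converts a required range resolution into the corresponding TDC clock period and frequency:

```python
C = 299_792_458.0  # speed of light in m/s

def tdc_clock_for_range_resolution(delta_d_m: float) -> tuple[float, float]:
    """Return (clock period in s, clock frequency in Hz) so that one TDC tick
    corresponds to a round-trip distance step of delta_d_m."""
    period = 2.0 * delta_d_m / C      # round trip: light travels 2*delta_d per tick
    return period, 1.0 / period

period, freq = tdc_clock_for_range_resolution(0.05)   # delta_d = 5 cm from Table 1
print(f"{period * 1e12:.0f} ps -> {freq / 1e9:.1f} GHz")  # ~333 ps -> ~3 GHz
```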

Fig. 5

Plot of analog TOF measurement data.


Unfortunately, abstracting the photocurrent with a box-like voltage signal leads to a loss of information. Using analog-to-digital converters (ADCs) to digitize the photocurrent requires them to have a sampling rate of at least 1 to 10 GHz to be able to sample nanosecond pulses and provide the required distance resolution. Such fast ADCs tend to be too expensive for automotive LiDAR applications, cf., system cost from Table 1. However, there is an alternative way to obtain a digital signal from TOF measurements.

2.2.2.

Time-of-flight digital

Digital TOF refers to sensors that acquire a digitized representation of the photocurrent by accumulating multiple (up to hundreds of) emitted pulses with single-photon avalanche diodes (SPADs).25 SPADs are binary detecting devices, meaning there is no linear relation between their output signal and the impinging optical energy. Instead, they indicate the arrival of a single photon at a given moment in time. The information about how many photons arrived is lost. After being triggered, SPADs need to be quenched and recharged before they are ready for another detection. The quench and recharge duration is called dead time. Each detection event is stored in a histogram whose shape approaches the shape of the photocurrent (Fig. 5) with a sufficiently high number of accumulated pulses. The technique is called time-correlated single photon counting (TCSPC).26–30 Figure 6 shows a direct comparison between an analog and a digital TOF measurement signal. Quantization steps in t of the digital signal in Fig. 6(b) depend (as in analog TOF) on the clock frequency of the TDCs in use. Digital TOF signals are naturally processed with digital signal processing (DSP) techniques. Hence, parameters such as noise level, peak position, height of the peak, its skewness, or center of gravity can be determined more easily. However, since the accumulation of multiple pulses is required for TCSPC, a digital TOF measurement requires more time than measuring the round trip time of a single pulse in analog TOF.
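
The following Python sketch illustrates the TCSPC idea in simplified form (a toy simulation with assumed signal and noise probabilities, ignoring dead time; it is not taken from the cited references): single-photon detections from many pulses are accumulated in a histogram whose peak bin approaches the true round-trip time.

```python
import random

C = 299_792_458.0          # speed of light in m/s
BIN_WIDTH = 333e-12        # TDC bin width in s (~5 cm range resolution)

def tcspc_histogram(true_distance_m, n_pulses=500, n_bins=2000,
                    signal_prob=0.3, noise_rate=0.02):
    """Toy TCSPC: accumulate single-photon arrival bins over many pulses."""
    hist = [0] * n_bins
    true_bin = int(2 * true_distance_m / C / BIN_WIDTH)
    for _ in range(n_pulses):
        if random.random() < signal_prob:           # signal photon detected
            hist[true_bin] += 1
        if random.random() < noise_rate:            # uncorrelated background count
            hist[random.randrange(n_bins)] += 1
    return hist

hist = tcspc_histogram(60.0)                        # 60 m target
peak_bin = max(range(len(hist)), key=hist.__getitem__)
print(f"estimated distance: {peak_bin * BIN_WIDTH * C / 2:.2f} m")
```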

Fig. 6

Graphical comparison of TOF analog and digital signals. (a) Processed photocurrent in analog TOF. (b) Processed photocurrent in digital TOF.


2.2.3.

Frequency modulated continuous wave

FMCW measurements differ from TOF measurements in various aspects besides pulsed versus continuous light emission. Here, the quantity of interest is a beat frequency f_beat obtained by chirping the emitter's frequency f_out and subsequently mixing (optical interference) a portion of the outgoing with the incoming light. This method is called heterodyne optical mixing.31 Figure 7(a), top, schematically shows outgoing (in red) and incoming light shifted in frequency and time (in turquoise) on a compressed time scale to visualize frequencies on the order of ν ≈ 300 THz. The bottom graph of Fig. 7(a) shows the mixed signal in gray as well as its envelope, i.e., the beat signal, in black. The beat signal's frequency f_beat can be determined with a fast Fourier transform applied to the output signal of, e.g., a biased photo detector and is given by f_beat = (f_out − f_in)/2. It contains information not only about Δt but also about the Doppler shift. Hence, the target distance d as well as the relative, radial velocity can be determined32,33 by applying signal processing techniques known from radio detection and ranging (RaDAR)34,35 sensors.
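
As a hedged illustration of the RaDAR-style processing mentioned above, the following Python sketch recovers distance and radial velocity from the up- and down-chirp beat frequencies of a triangular FMCW modulation (the triangular-chirp assumption, the sign convention, and all parameter values are ours, not taken from the text):

```python
C = 299_792_458.0  # speed of light in m/s

def fmcw_range_velocity(f_up_hz, f_down_hz, bandwidth_hz, ramp_time_s, wavelength_m):
    """Distance and radial velocity from triangular-chirp FMCW beat frequencies.
    Sign convention: positive velocity means the target approaches the sensor."""
    slope = bandwidth_hz / ramp_time_s            # chirp slope in Hz/s
    f_range = (f_up_hz + f_down_hz) / 2.0         # range-induced beat component
    f_doppler = (f_down_hz - f_up_hz) / 2.0       # Doppler-induced component
    distance = C * f_range / (2.0 * slope)
    velocity = f_doppler * wavelength_m / 2.0
    return distance, velocity

# Illustrative numbers: 1 GHz chirp over 5 us at 1550 nm; beat frequencies
# correspond to a target at roughly 100 m approaching at roughly 30 m/s.
d, v = fmcw_range_velocity(94.6e6, 172.0e6, 1e9, 5e-6, 1550e-9)
print(f"d = {d:.1f} m, v = {v:.1f} m/s")
```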

Fig. 7

Plots of FMCW signals with received signal shifted in time and frequency. (a) Visualization of FMCW signals in the time domain: outgoing (red) and incoming (turquoise) light, both mixed in gray with envelope in black. (b) Signal plots in the frequency domain: outgoing f_out (red) and incoming f_in (turquoise) as well as envelope frequency f_beat (black).


2.3.

Wavelength

The operating wavelength of a LiDAR sensor has direct implications not only on the sensor’s emitter but also on its receiver. Automotive LiDARs are generally operated with a wavelength that is above the spectral sensitivity range of the human eye.

Near infrared (NIR) operating wavelengths typically range from 850 to 940 nm. It is silicon as detector material that makes this wavelength range attractive. Silicon can be processed with highly optimized and cost-efficient semiconductor manufacturing techniques. Its peak sensitivity is around 900 to 1000 nm with a steep drop-off at 1100 nm,36 where silicon becomes transparent. The major downside of NIR wavelengths are the eye safety limitations that prevail in this regime (close proximity to the visible range of the human eye) and restrict the optical energy output.

Short-wave infrared (SWIR) operating wavelengths (above 1100 nm up to 1550 nm) represent all non-silicon detectors with, e.g., indium gallium arsenide (InGaAs) as detector material. The significant benefit of these wavelengths is a higher level of permitted optical output power compared to the NIR regime. Emitted pulses on the order of nanoseconds, for example, can contain roughly five orders of magnitude more optical power at 1550 nm compared to 905 nm and still stay within the eye safety limitations for laser class 1.37 This is a compelling argument for longer wavelengths, since the amount of emitted energy is related to the detection range d in the LiDAR equation from Eq. (1). Photons with longer wavelengths (λ > 1100 nm), however, are less energetic than photons from the NIR regime. The band gap of III-V materials (such as InGaAs) needed to detect these photons is narrower compared to silicon. Unwanted excitations by, e.g., thermal phonons are, consequently, more likely,38 which makes these detectors more prone to noise.

Many publications about the advantages and disadvantages of NIR (utilizing mature, cost-efficient silicon) or SWIR (non-silicon detectors but more optical output power) operating wavelengths can be found in the literature.39–41 The effects of solar background flux versus water absorption in both wavelength regimes have been discussed in, e.g., Refs. 42–44.

2.4.

Co- and Biaxial Systems

There is another design parameter, a fourth axis that is not included in the design space shown in Fig. 2, namely, type of measurement channel (coaxial or biaxial). In most cases, choosing between the two is not an option because the channel type is dictated by the choice of, e.g., measurement principle or used components. Figure 8 shows both channel variants schematically.

Fig. 8

Visualization of coaxial and biaxial measurement channel configurations. (a) Illustration of a coaxial measurement channel. (b) Overlap plot coaxial channel. (c) Illustration of a biaxial measurement channel. (d) Overlap plot biaxial channel.


Coaxial measurement channels have a shared optical path for emitter (red) and receiver (turquoise). This is achieved by a path splitting element (a splitter can be a mirror with a hole) that overlays both beams. The overlap O of emitter and receiver in this configuration is O = 1 for all distances [Figs. 8(a) and 8(b)]. The downside of a shared optical path is that coaxial channels are optically shorted (there is a direct path from emitter to receiver within the channel). The result is a signal on the receiver during every pulse emission, which makes object detection in close proximity challenging.

Biaxial measurement channels are channels where emitter and receiver have separated optical paths. The separation lowers the risk of optical shorts, but aligning emitter to receiver to have good overlap45 for all measurement distances can be demanding. Figure 8(c) shows how O = 1 [as displayed for the far field in Fig. 8(d)] cannot be realized at d = 0 m.

Analog TOF sensors can be found with co- as well as biaxial measurement channels (Sec. 3.1) while digital TOF sensors (Sec. 3.2) exclusively use biaxial configurations. SPADs of a hypothetical coaxial measurement channel in a digital TOF sensor would trigger during the emission of a pulse and, therefore, be blinded (dead time) for the detection of close objects. FMCW measurements require a combination of co- and biaxial channel layout, since outgoing and incoming light need to interfere on the detector when realizing heterodyne optical mixing as shown in Fig. 9.

Fig. 9

Concept view on FMCW measurement channel with heterodyne optical mixing.


3.

Classical Scanning Sensors

The maturity of classical scanners has led to a variety of sensor concepts that utilize mechanical scanning methods. There are many spinning sensors as well as sensors with rotating mirrors, but galvano scanners or rotating Risley prisms driven by motors can also be found. We use f_rot to describe the rotation frequency with, e.g., f_rot = 1 Hz for one rotation per second. Additionally, we use t_mc to indicate the minimum time required for a measurement cycle. For TOF measurements, t_mc = N_pulses · 2 d_max/c (with the number of pulses N_pulses), where we neglect additional time for data processing since it is usually done in parallel. In analog TOF the number of pulses is N_pulses = 1 (cf., Sec. 2.2.1) and for digital TOF N_pulses can reach values in the range of hundreds, see Sec. 2.2.2. During an FMCW measurement cycle (t_mc,fmcw is sometimes called dwell time), no pulses but a continuous wave is emitted. Interference of outgoing and incoming light on the detector is required to obtain f_beat. An unambiguous analysis of offsets in time and frequency typically requires one period of double ramp modulation (Sec. 2.2.3), which is on the order of t_mc,fmcw ≈ 10 μs. Subsequent sections illustrate how various classical scanning options can be combined with different measurement principles and wavelengths.

3.1.

Classical Scanning with Analog Time-of-Flight

Many of the classical scanning LiDAR systems rely on analog TOF measurements with either NIR or SWIR operating wavelengths. The most established ones are spinning sensors and sensors with rotating mirrors. Other variants use galvano scanners, rotating Risley prisms, or combinations of the aforementioned components.

3.1.1.

Rotating sensor and mirror

Figure 10 shows a sensor concept utilizing mechanical scanning (sensor rotation) and analog TOF with an operating wavelength in the NIR regime. The drawings of published patent application Ref. 46 in Figs. 10(a) and 10(b) show front and back side of the sensor. It has several tens of biaxial channels stacked vertically and a lens pair (receiver and emitter) for beam forming purposes. The whole setup is rotated on a spindle. Scan pattern and basic conceptual view are shown in Figs. 10(c) and 10(d), respectively. The conceptual view only shows four vertically stacked channels with receivers on the left and emitters on the right. A rotation direction is indicated by the black arrow. For the sake of simplicity, we disregard proper visualization of emitter and receiver alignment/lenses by displaying parallel beams only (receiver beam in turquoise, emitter beam in red). In practice, emitter and receiver within a biaxial measurement channel do, of course, overlap while all channels fan out to form a vertical line.

Fig. 10

Illustration of a spinning sensor concept with drawings from Ref. 46. (a) Front side view on opened sensor. (b) Back side view. (c) Display of a conceptual scan pattern for a rotating sensor or mirror. All vertical channels measure simultaneously. The opening angle between channels causes a characteristic cushion shape with lines that expand with increasing distance to the surface normal. (d) Concept basics drawing of a spinning sensor.


The vertical angular resolution, Δϑ, is determined by the channel spacing as well as the optics used, which affect the divergence of each channel. The horizontal angular resolution is equal to the angle the rotating sensor passes between two received pulses. It is given by Δφ = (f_rot/f_rep) · 360 deg with the pulse repetition frequency f_rep. Realizing N_points,vert = FOV_vert,long/Δϑ = 20 deg/0.1 deg = 200 (from Table 1) requires stacking and aligning 200 individual measurement channels. A horizontal FOV of 360 deg can be covered by mounting the sensor, e.g., on the car's roof.
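
A minimal Python sketch of the resolution bookkeeping described above (FOV values taken from Table 1; the pulse repetition frequency is an assumed example, and variable names are ours):

```python
def spinning_lidar_resolution(f_rot_hz, f_rep_hz, fov_vert_deg, delta_theta_deg):
    """Horizontal resolution of a spinning sensor and the number of vertically
    stacked channels needed to meet a given vertical resolution."""
    delta_phi_deg = 360.0 * f_rot_hz / f_rep_hz      # angle passed between two pulses
    n_channels_vert = fov_vert_deg / delta_theta_deg
    return delta_phi_deg, n_channels_vert

# Example: 25 Hz rotation, 90 kHz pulse repetition, long-range FOV from Table 1
d_phi, n_vert = spinning_lidar_resolution(25.0, 90_000.0, 20.0, 0.1)
print(f"delta_phi = {d_phi:.2f} deg, vertical channels = {n_vert:.0f}")  # 0.10 deg, 200
```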

A similar concept (analog TOF and NIR wavelength) with a mirror instead of a rotating sensor is shown in Fig. 11 with drawings from published patent application Ref. 47. The beams of both receiver and emitter stacks (here, positioned above each other) are simultaneously scanned by the rotating mirror. The plate separator visible in Fig. 11(a) and also shown in the sensor's concept view of Fig. 11(b) minimizes crosstalk (optical shorts, cf., Sec. 2.4). Reaching N_points,vert on the order of hundreds is challenging due to hardware/assembly limitations, but the compact design shown in Fig. 11(c) allows for a seamless integration into the body of a car. However, covering a horizontal FOV wider than roughly 120 deg to 140 deg is basically impossible for sensors that shall not stick out of the car. The scan pattern of a sensor with rotating mirror is similar to the one of a rotating sensor [cf., Fig. 10(c)].

Fig. 11

Illustration of a rotating mirror implementation that scans biaxial stacks of emitters and receivers (Ref. 47). (a) Drawing of a sensor with scanning mirror. (b) Illustration of sensor concept basics. (c) Sensor sketch with closed cover.


Figure 12 shows a LiDAR concept from published patent application Ref. 48 that utilizes a polygon mirror to scan in the horizontal direction. Vertical scanning is realized with a motor that controls a mirror via pulleys and a belt. This is a point-wise scanning concept that scans a single, coaxial measurement channel (indicated by the “emitter,” “receiver,” and “splitter” annotations). High frame rates of f = 25 Hz for the FOV, Δφ, and Δϑ from Table 1 and a maximum measurement range of d_max = 300 m are challenging to achieve, as the calculation of the acquisition time for a single frame, t_frame,ac, in Eq. (2) shows as follows:

Eq. (2)

t_frame,ac = (FOV_hor/Δφ) · (FOV_vert/Δϑ) · t_mc = (40 deg/0.1 deg) · (20 deg/0.1 deg) · (2 d_max/c) ≈ 160 ms > 40 ms = 1/f.
For f = 25 Hz, the acquisition time is required to be four times shorter. The concept realizes a large receiver aperture A_de, which is beneficial for the maximum measurement range since it increases P_rec in Eq. (1). To further improve d_10% (cf., Table 1), this sensor is operated with a SWIR wavelength that allows for higher optical output power as outlined in Sec. 2.3.
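
The budget from Eq. (2) can be reproduced with a short Python sketch (a simplified check that, as in the text, neglects data processing time; function and variable names are ours):

```python
C = 299_792_458.0  # speed of light in m/s

def frame_acquisition_time(fov_hor_deg, fov_vert_deg, d_phi_deg, d_theta_deg,
                           d_max_m, n_pulses=1):
    """Acquisition time for one frame of a point-wise scanning TOF sensor."""
    n_points = (fov_hor_deg / d_phi_deg) * (fov_vert_deg / d_theta_deg)
    t_mc = n_pulses * 2.0 * d_max_m / C      # minimum time per measurement cycle
    return n_points * t_mc

t_frame = frame_acquisition_time(40.0, 20.0, 0.1, 0.1, 300.0)
print(f"t_frame = {t_frame * 1e3:.0f} ms (budget at 25 Hz: 40 ms)")  # ~160 ms
```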

Fig. 12

Sketch of a sensor concept with polygon mirror and coaxial measurement channel from published patent application Ref. 48.


3.1.2.

Galvano scanner

Another motor-driven scanning mechanism utilizes two galvano scanners. A possible coaxial system is conceptually shown in Fig. 13. The use of galvano scanners enables basically arbitrary scan patterns (within the limits of the motors used) while the patterns of rotating sensors or mirrors are dictated by rotation axes and number of channels. A check pattern, for example, is possible [shown in Fig. 13(c)] to ease data analysis with perception algorithms originating from image processing. A single coaxial measurement channel with a double galvano scanner has the same limitations that were previously derived in Eq. (2).

Fig. 13

Illustration of a sensor concept with double galvano scanner and coaxial measurement channel. (a) Concept view and illustration of the galvano mirror movements. (b) Display of emitter and receiver beams in galvano scanner concept. (c) Plot of a possible scan pattern realized with a double galvano scanner.


A biaxial, double galvano scanner sensor concept is visualized in Fig. 14. The drawing from published patent application Ref. 49 in Fig. 14(a) shows one scanner on the emitter and the other one on the receiver side. Separated emitter and receiver paths are beneficial when it comes to avoiding optical shorts, but signal can only be generated where emitter and receiver lines cross each other, see Fig. 14(b). Hence, this concept scans point-wise, with the challenge of achieving f = 25 Hz while satisfying the FOV and Δφ, Δϑ requirements from Table 1. A line-wise detector receives not only signal photons from the crossing point of the emitter and receiver lines but also additional background light from outside the crossing point. Likewise, a line emitter consumes more (optical) power to emit photons along a whole line instead of a point. Nonetheless, two galvano scanners allow for flexible scan patterns that can be tailored to specific use-cases.

Fig. 14

Representation of a sensor concept with biaxial galvano scanner. (a) Drawing of a LiDAR sensor with biaxial galvano mirrors scanning an emitter and a receiver line (from Ref. 49). (b) Far field overlap of emitter and receiver.


3.1.3.

Risley prism scanner

Rotating Risley prisms present an alternative mechanical scanning mechanism. These systems consist of two consecutive wedged prisms that can be rotated with different rotation speeds.50 A sensor implementation of such a scanner with a coaxial, analog TOF measurement channel and point-wise scanning is shown in Fig. 15. It includes drawings from published patent application Refs. 51 and 52. Figure 15(a) shows the coaxial measurement channel configuration, and a cross section of a Risley prism scanner is illustrated in Fig. 15(b). The concept basics are shown in Figs. 15(c) and 15(d).

Fig. 15

Depiction of a sensor concept with Risley prism scanner and coaxial measurement channel. (a) Coaxial measurement channel with Risley prism scanner taken from published patent application Ref. 51. (b) Risley prism scanner with bearings and middle shaft from Ref. 52. (c) Rotation visualization of a scanner with Risley prisms. (d) Coaxial emitter and receiver path of a Risley prism scanner.


The pattern of a Risley prism scanner can be plotted with hypotrochoids.53 Figure 16(a) shows such a rosette-like scan pattern. Hypotrochoids are derived from the rotation of a circle [gray in Fig. 16(b)] on the inside of a bigger circle (black). The drawing point is located at the end of a handle (red) that is fixated on the inner, rotating circle. A classical frame acquisition with a Risley prism scanner is not possible. A constant f_rep leads to a scan pattern as shown in Fig. 16(a). The point density is high in the center as well as in the outer regions but sparser in between. Frames also do not have a given number of horizontal and vertical points but rather build up over time.
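
A hedged Python sketch of how such a rosette-like pattern can be generated as a hypotrochoid following the construction of Fig. 16(b) (the radii, handle length, and number of revolutions are illustrative, not taken from the cited applications):

```python
import math

def hypotrochoid_pattern(R=5.0, r=3.0, h=5.0, n_points=2000, revolutions=6):
    """Scan points of a hypotrochoid: a circle of radius r rolls inside a circle
    of radius R; the drawing point sits at distance h from the rolling circle's
    center (the 'handle' in Fig. 16(b))."""
    pts = []
    for i in range(n_points):
        t = 2.0 * math.pi * revolutions * i / n_points
        x = (R - r) * math.cos(t) + h * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - h * math.sin((R - r) / r * t)
        pts.append((x, y))
    return pts

pattern = hypotrochoid_pattern()
print(len(pattern), "acquisition points, e.g., first:", pattern[0])
```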

Fig. 16

Plots of a data acquisition pattern for a sensor with a Risley prisms scanner. (a) Hypotrochoid scan pattern from a Risley prism scanner. (b) Illustration of the construction of hypotrochoids.


3.2.

Classical Scanning with Digital Time-of-Flight

The scanning mechanisms discussed in Sec. 3.1 achieve continuous rotation: a sensor keeps on spinning while measuring (i.e., emitting and receiving light). The influence of the sensor's rotation on data acquisition in analog TOF measurements can be neglected since the ‘sweep through’ angle α_st is small. It is given by α_st = t_mc · f_rot · FOV_hor and results in α_st,analog = 0.018 deg, with a measurement cycle time of t_mc ≈ 2 μs for N_pulses = 1 and d_max = 300 m. In digital TOF measurements, that is, TCSPC (cf., Sec. 2.2.2), the data acquisition requires up to multiple hundred emitted pulses. Hence, t_mc becomes significantly longer, which leads to a bigger α_st,digital. A longer acquisition time causes motion blur since the sensor keeps on rotating instead of pointing at a fixed solid angle during the measurement time. To avoid this effect, one can either work with a lower f_rot (linked to f) or implement a digital counter rotation to compensate for the rotation of the sensor.
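
A small Python sketch comparing the sweep-through angle for the numbers given above (f_rot = 25 Hz and a digital pulse count of 500 are assumed example values, not from the text):

```python
C = 299_792_458.0  # speed of light in m/s

def sweep_through_angle_deg(d_max_m, n_pulses, f_rot_hz, fov_hor_deg=360.0):
    """Angle the sensor rotates through during one measurement cycle."""
    t_mc = n_pulses * 2.0 * d_max_m / C
    return t_mc * f_rot_hz * fov_hor_deg

print(f"analog : {sweep_through_angle_deg(300.0, 1, 25.0):.3f} deg")    # ~0.018 deg
print(f"digital: {sweep_through_angle_deg(300.0, 500, 25.0):.1f} deg")  # ~9 deg
```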

In the remaining part of this section, we introduce two spinning sensors [their scan patterns are comparable to the one shown in Fig. 10(c)] utilizing digital TOF (TCSPC). The idea of a digital counter rotation54 is to electronically switch, against the rotation direction, from one vertical stack of measurement channels to a neighboring stack during the acquisition of data. If the switching speed matches f_rot, the measurement angle can be kept constant so that (almost) no motion blur occurs. It is effectively an increase of t_mc and, therefore, N_pulses. Drawings from published patent applications Refs. 55 and 56, showing concepts that combine classical sensor rotation with digital TOF, are reproduced in Fig. 17.

Fig. 17

Drawings of spinning LiDAR concepts with biaxial, digital TOF measurement channels operating in the NIR and SWIR wavelength regimes. (a) Transmitter unit drawing of a digital TOF sensor concept from Ref. 55. (b) Schematic drawing of the mechanical scanning mechanism (Ref. 55). (c) Concept view of digital counter rotation. (d) Sketch of a digital TOF sensor operating with SWIR wavelength from Ref. 56.


Figure 17(a) shows a TCSPC module that is integrated into the spinning sensor of Fig. 17(b). It employs a digital counter rotation method that is visually depicted in Fig. 17(c), where the temporal activation of stacks (indicated by the fading color) runs against the sensor's rotation direction (black arrow). Conceptually, both sensors shown in Figs. 17(b) and 17(d) are similar. Their mechanical platforms are comparable to the one shown in Fig. 10, but their operating wavelengths differ. The sensor shown in Fig. 17(b) utilizes an NIR wavelength whereas the sensor from Fig. 17(d) operates in the SWIR regime.

3.3.

Classical Scanning with Frequency Modulated Continuous Wave

Different FMCW sensors make use of polygon mirrors for horizontal scanning, which were introduced in Fig. 12. The concept of Fig. 18, with a drawing from published patent application Ref. 57 in Fig. 18(a), uses a scanning approach based on a polygon mirror that scans three parallel FMCW measurement channels. The polygon mirror has five facets to scan horizontally while vertical scanning can, for example, be achieved with a galvano scanner. The concept basics for a single FMCW measurement channel are shown in Figs. 18(b) and 18(c). Each facet of the polygon can cover up to FOV_hor = 2 × 360 deg/5 = 144 deg (incident angle equals reflection angle). The vertical FOV depends on the size of the facet mirrors and the mechanical arrangement of the scanner.

Fig. 18

Illustration of a sensor concept that uses a polygon mirror to scan multiple FMCW measurement channels in parallel. (a) Sensor concept diagram of an FMCW LiDAR with mechanical scanning mechanism from published patent application Ref. 57. (b) Visualization of the basic movements of a scanner consisting of a polygon and galvano mirror. (c) Beam path of an FMCW measurement channel with scanner.


Another FMCW concept is shown in Fig. 19. It does not have an additional galvano scanner but instead uses tilted facets of a polygon lens to scan vertically as depicted in the drawing of Fig. 19(a) from published patent application Ref. 58. Here, the light is deflected while passing through a rotating polygon lens instead of being reflected from a polygon mirror. An exemplary scan pattern for both polygon concepts is shown in Fig. 19(b). Ideas how to achieve a polygon scan pattern that is comparable to the one of rotating sensors [cf., Fig. 10(c)] are provided in Ref. 59.

Fig. 19

Illustration of a sensor concept with an FMCW measurement channel that is scanned by a rotating polygon lens. (a) Sketch of an FMCW LiDAR concept using a polygon lens with tilted facets from Ref. 58. (b) Scan pattern visualization of a polygon mirror either with tilted facets or an additional galvano scanner.


4.

Semi Solid-State Sensors

Semi solid-state scanning refers to systems that do have moving parts but are robust against shocks and vibrations. Additionally, they are operated in the elastic regime of their stress-strain behavior where Hooke’s law60 is applicable. In this section, we distinguish between sensors incorporating MEMS mirrors, oscillating carriers and binary MEMS mirror arrays.

4.1.

Micro-Electro-Mechanical System Mirrors

MEMS mirrors are small mirrors whose movement is controlled via electromagnetic, electrothermal, or electrostatic actuation. Piezoelectric actuators are also used.61–63 During operation, these mirrors oscillate in one or two directions64,65 either in resonance or in a more controlled, quasi-static mode. We divide MEMS mirrors into mirrors with dimensions on a millimeter scale and mirrors on a centimeter scale.

4.1.1.

Millimeter-scale mirrors

Mirrors with a diameter of a couple of millimeters that oscillate in resonance are often referred to as solid-state. If they are operated at their natural frequency, they tend to be robust against shocks, as the following calculation of the g-force shows. Assuming a round MEMS mirror of size D_mems = 3 mm, a maximum scan angle of θ_max = 15 deg, and a natural frequency f_mems = 2 kHz, we can calculate the tangential acceleration a_t as

a_t = r · θ_max · ω² · sin(ωt) = (D_mems/2) · θ_max · (2π f_mems)² · sin(2π f_mems t).
The above equation reaches its maximum, a_t,max, at sin(2π f_mems t) = ±1. Hence, we can derive the maximum g-force by dividing a_t,max by the gravitational acceleration g:
a_t,max/g = (D_mems/2) · θ_max · (2π f_mems)² · (1/g) ≈ 6000.
Such a g-force at the edges of the mirror makes it practically immune to vibrations in an automotive context, where the forces of most shocks do not exceed a couple of hundred g. The natural frequency of resonating MEMS mirrors depends on their mass and spring constant. The bigger the mirror (more mass), the smaller its natural frequency, which results in less resistance against shocks (Sec. 4.1.2). Hence, for systems that scan a coaxial measurement channel there is a trade-off between the size of the mirror [equal to A_de from Eq. (1) and directly linked to P_rec] and robustness (which increases with smaller D_mems, i.e., higher f_mems).
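
The g-force estimate above can be reproduced with a few lines of Python (same parameter values as in the text; the small-angle treatment of θ_max follows the equation above):

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def mems_edge_g_force(d_mems_m, theta_max_deg, f_mems_hz):
    """Peak tangential acceleration at the mirror edge, expressed in g."""
    theta_max = math.radians(theta_max_deg)
    a_t_max = (d_mems_m / 2.0) * theta_max * (2.0 * math.pi * f_mems_hz) ** 2
    return a_t_max / G

print(f"{mems_edge_g_force(3e-3, 15.0, 2000.0):.0f} g")  # on the order of 6000 g
```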

A sensor concept with two resonant mirrors is shown in Fig. 20 with drawings from published patent applications Refs. 66 and 67. Figure 20(a) shows a scanning emitter that incorporates the mirrors shown in Fig. 20(b) to form a biaxial sensor with a non-scanning detector as shown in Figs. 20(c) and 20(d). Since the FOV of the detector is significantly larger than the emitter spot [see Fig. 20(e)], there is a need to optimize SNR by operating at a SWIR wavelength allowing for more optical output power.

Fig. 20

Depiction of a sensor concept utilizing two resonant MEMS mirrors to scan the emitter. (a) Schematic drawing of a scanning emitter with the help of two resonating MEMS mirrors (published patent application Ref. 66). (b) Design of two MEMS mirrors for scanning horizontally and vertically (published patent application Ref. 67). (c) Visualization of the mirror movements. (d) Illustration of emitter and receiver path for a biaxial sensor concept with two MEMS mirrors. (e) Overlap in the far field for this biaxial sensor concept.


Scan patterns generated by such a scanning mechanism can be described with Lissajous68 figures. Examples of these figures are shown in Fig. 21. If pulses are emitted with a constant f_rep, the point density is high at the turning points of one or the other MEMS mirror and low where the maximum oscillation speed is reached [Fig. 21(a)]. However, one can synchronize pulse emissions with one of the two mirrors to have evenly spaced points in one of the scanning directions as shown in Fig. 21(b). Ideas on how to derive an application-oriented scan pattern with two oscillating mirrors are described in Ref. 69.
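
The constant-f_rep case of Fig. 21(a) can be sketched with a few lines of Python (the mirror frequencies, amplitudes, and pulse rate are illustrative values, not taken from the cited applications):

```python
import math

def lissajous_pattern(f_x_hz, f_y_hz, amp_x_deg, amp_y_deg, f_rep_hz, duration_s):
    """Scan angles (x, y) of a two-mirror resonant scanner with a constant
    pulse repetition rate f_rep; the result is a Lissajous figure whose point
    density is highest near the turning points of either mirror."""
    n_pulses = int(duration_s * f_rep_hz)
    pts = []
    for i in range(n_pulses):
        t = i / f_rep_hz
        x = amp_x_deg * math.sin(2.0 * math.pi * f_x_hz * t)
        y = amp_y_deg * math.sin(2.0 * math.pi * f_y_hz * t)
        pts.append((x, y))
    return pts

# Illustrative values: 2 kHz fast axis, 37 Hz slow axis, 100 kHz pulse rate
pattern = lissajous_pattern(2000.0, 37.0, 20.0, 10.0, 100_000.0, 0.04)
print(len(pattern), "points in one 40 ms frame")
```

Synchronizing the emission to the fast mirror instead of using a fixed f_rep, as in Fig. 21(b), amounts to choosing the emission times t such that x(t) lands on an equidistant angle grid.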

Fig. 21

Plot of a possible Lissajous pattern from a scanner that utilizes two oscillating MEMS mirrors. (a) Lissajous scan pattern with constant f_rep. (b) Evenly spaced (in the horizontal direction) scan pattern on a Lissajous curve.


Since a point-wise scanning mechanism has its limitations when optimizing for a higher frame rate f, there are other sensor concepts that parallelize multiple measurement channels. Some make use of bigger mirrors with one or two slower scanning axes and, consequently, smaller natural frequencies f_mems. However, a smaller f_mems shifts the scanning mechanism away from being considered solid-state since it becomes more prone to vibrations in this mode of operation.

Figure 22, with drawings from published patent application Ref. 70, illustrates a sensor concept incorporating four coaxial measurement channels, all scanned by a single 2D MEMS mirror [see Fig. 22(a)]. The mirror has a slow vertical and a fast horizontal axis. A corresponding scan pattern can be seen in Fig. 22(b). The rows of this scan pattern are curved since the optical paths of each coaxial measurement channel are off axis with respect to the mirror's scanning axis. For demonstration purposes, we assumed instant switching from one row to the next, which, in reality, is dependent on the slow scanning axis. Hence, the rows are bent up and down toward the horizontal turning points. Unlike in the concept of Fig. 20, here A_de [from Eq. (1)] is equal to the mirror size as depicted in the concept basics illustrations of Figs. 22(c) and 22(d).

Fig. 22

Display of a sensor concept with 2D MEMS mirror scanner. (a) Sketch of a LiDAR concept with a single MEMS mirror scanner and four parallel coaxial, analog TOF channels (from published patent application Ref. 70). (b) Scan pattern illustration of a 2D MEMS mirror with four measurement channels. (c) Visualization of the MEMS mirror movement. (d) Illustration of four coaxial emitter and receiver paths scanned by a single 2D MEMS.


4.1.2.

Centimeter-scale mirrors

Sensor concepts with larger MEMS mirrors, allowing for an increased A_de, can also be found in the automotive LiDAR industry. One such concept is shown in Fig. 23. Its mirrors have a smaller resonance frequency due to their size and scan a coaxial, analog TOF measurement channel, as shown in Fig. 23(a). The scanner design differs from other MEMS mirror concepts in the sense that the mirror itself is not embedded in its peripherals [see, e.g., Fig. 20(b)] but rather mounted on a spring-like arm, as can be seen in Fig. 23(b). Since both mirrors can have different scanning frequencies, a combination of slow and fast scanning axes is possible to generate a scan pattern as displayed in Fig. 23(c). A conceptual representation of the sensor is visualized in Fig. 23(d).

Fig. 23

Visualization of a sensor concept with a scanner utilizing centimeter-scale MEMS mirrors. (a) Illustration of a coaxial measurement channel from Ref. 71. (b) Drawing of scanner with two centimeter-sized MEMS mirrors from published patent application Ref. 72. (c) Scan pattern of two independent MEMS mirrors. (d) Conceptual display of a sensor with coaxial measurement channel.


Another large MEMS mirror design is depicted in Fig. 24 with drawings from published patent application Ref. 73. The size of the mirrors allows for multiple (e.g., four) measurement channels to be scanned at once, effectively increasing the frame rate. An illustration of the emitter side is shown in Fig. 24(a) with a close-up of one of the mirrors in Fig. 24(b). Note how this concept utilizes biaxial measurement channels where both emitter and receiver beams are scanned by separate mirrors as shown in Figs. 24(c) and 24(d). Such a scanner requires accurate synchronization of the mirror oscillations to guarantee the overlap of emitter and receiver FOVs at every scan angle. Due to individually controllable horizontal and vertical oscillations, an oval scan pattern as shown in Fig. 23(c) can be realized.

Fig. 24

Visualization of a sensor concept with two synchronized mirrors. (a) Emitter side sketch of a biaxial MEMS mirror LiDAR concept with four measurement channels (receiver paths are scanned similarly), from Ref. 73. (b) Drawing of a MEMS mirror from Ref. 73. (c) Illustration of the MEMS mirror movements. (d) Depiction of biaxial emitter and receiver paths scanned by four 1D MEMS mirrors.


Similar types of mirrors can also be seen in conjunction with FMCW measurement channels. An example is shown in Fig. 25 with drawings from published patent application Ref. 74. Figure 25(a) shows a conceptual view of a system combining large MEMS mirrors with FMCW measurement channels. The higher t_mc,fmcw and the ability to realize precise control favor an implementation of larger, 2D quasi-static mirrors [see Fig. 25(b)] over smaller MEMS mirrors with high scanning frequency f_mems.

Fig. 25

Illustration of an FMCW measurement channel scanned by a combination of MEMS mirror and ball lens. (a) Display of FMCW sensor concept with MEMS scanner from Ref. 74. (b) Drawing of scanner with 2D MEMS mirror (Ref. 74).


4.2.

Oscillating Carrier

In the previous sections, we introduced sensor concepts that utilize additional scanning components to deflect their optical paths. The oscillating carrier principle is based on relative movement between optics and their emitters/receivers as shown in Fig. 26. The drawings of published patent application Ref. 75 in Figs. 26(a) and 26(b) reveal mounts with springs that are operated in the elastic regime of their stress-strain curve and can be pulled toward or pushed away from each other. The movement is indicated by the arrows in Fig. 26(a) as well as Fig. 26(c). The concept utilizes biaxial, analog TOF measurement channels. Unlike in the concept of Fig. 24, where a synchronization of emitter and receiver FOVs was realized by controlling both MEMS mirrors, here the synchronization is achieved mechanically by mounting emitter and receiver on a rigid plate that moves both of them together. The resulting scan pattern consists of parallelized Lissajous curves,76 drawn (as an example for four measurement channels) in Fig. 27. A constant pulse repetition rate yields the scan pattern shown in Fig. 27(a), whereas a synchronized emission of pulses can lead to evenly spaced measurements, e.g., in the horizontal direction as indicated in Fig. 27(b).

Fig. 26

Sensor concept based on oscillating optics and transceiver plate. (a) Display of spring-like mounts of optics and emitter plus receiver carrier board from Ref. 75. (b) Illustration of carrier board with array of emitters and receivers (Ref. 75). (c) Visualization of the basic movements. (d) Conceptual display of four emitter and receiver channels.


Fig. 27

Scan pattern depiction of a sensor concept with an oscillating carrier. (a) Scan pattern plotted with constant f_rep. (b) Evenly spaced scan pattern (horizontal).


4.3.

Binary MEMS Mirror Arrays

As a last semi solid-state concept, we want to introduce a sensor design that utilizes small binary MEMS mirror arrays. The mirrors are called binary because they have only two states: voltage applied or not. They can act as light selectors, e.g., in the receiver path of the concept from published patent application Ref. 77 shown in Fig. 28. As can be seen in Fig. 28(a), the MEMS mirror array is positioned as a focal plane array (FPA) (more on FPAs in Sec. 5.1.1) so that specific angles can be selected by applying a voltage to the corresponding binary mirror. On the emitter side, a 2D MEMS mirror is used to scan the outgoing pulses into a selected angle as outlined in Fig. 28(b). The scan pattern is linked to the control of the MEMS mirror array in combination with the 2D MEMS mirror on the emitter side. It can, in principle, be chosen arbitrarily, e.g., as shown in Fig. 13(c). A different implementation of a MEMS mirror array in a TOF LiDAR is described in Ref. 78.

Fig. 28

Display of a biaxial sensor concept with 2D MEMS mirror and binary MEMS mirror array. (a) Illustration of a sensor concept with binary MEMS mirror array in the receiver path (from Ref. 77). (b) A visualization of the basics from Fig. 28(a).


5.

Solid-State Sensors

Solid-state sensors do not have any moving parts. This results in more robustness and the potential to highly integrate components, but also imposes the challenge of FOV coverage without rotation, actuation, or oscillation. We divide the concepts into flashing (Sec. 5.1) and solid-state scanning LiDAR concepts (Sec. 5.2).

5.1.

Flash Illumination

A sensor is called a flash LiDAR when it has static beam paths, meaning there is no component or device in the paths that deflects light into varying directions (cf., Fig. 8). In this work, we distinguish between full and sequential flash LiDARs.

5.1.1.

Full

Full flash LiDARs emit a single flash of light to illuminate their entire FOV at once. These solid-state concepts require highly energetic pulses to not suffer from photon starvation, i.e., short measurement ranges. Therefore, the concept shown in Fig. 29 is operated in the SWIR wavelength regime (allowing for more optical output power cf., Sec. 2.3). The drawing of published patent application Ref. 79 in Fig. 29(a) shows a flash emitter, namely the solid-state laser, which is pumped by multiple pump lasers. The corresponding concept basics are illustrated in Fig. 29(b) with the flash emitter and an FPA detector. Figure 29(c) displays a close-up of the FPA detector with its mount, whereas Fig. 29(d) shows the overlap in the far field.

Fig. 29

Illustration of a flash LiDAR concept from Ref. 79 with SWIR operating wavelength and analog TOF measurement channels. (a) Drawings of a flash LiDAR concept. Top and side view from Ref. 79. (b) Illustration of the concept basics from a full flash LiDAR (only four detector paths drawn). (c) Drawing of a detector FPA (Ref. 79). (d) Display of the overlap for a flash LiDAR in the far field.


An FPA is located in the focal point of its lens with individual receivers that are offset from the optical axis. There are physical limits to the FOV an FPA can cover, which we want to outline here (following Ref. 80). Figure 30 shows a schematic view of an FPA with size S_detector, its lens of diameter D_lens, and focal length f. The angle α is equal to half the FOV that can be covered, i.e., 2α = FOV. It is given as

Eq. (3)

tan(α) = S_detector/(2f),
and poses a constraint on the ratio between S_detector and f. Another constraint, the f-number F, originates from lens design and is defined as f divided by D_lens. There is a theoretical lower limit of F = f/D_lens ≥ 0.5. Manufacturable lenses typically have an f-number not smaller than F ≈ 1. This implies that D_lens is at most equal to f, i.e., f/D_lens ≥ 1 (in many cases f is longer than D_lens and F > 1). If we insert f ≈ D_lens into Eq. (3), we find that FOV = 2α and D_lens are inversely proportional to each other: tan(α) = S_detector/(2 D_lens). Hence, for a given FPA detector array of size S_detector one can only increase the FOV by reducing D_lens. However, D_lens determines A_de from Eq. (1), which in turn is directly proportional to the received power P_rec. The result is a pair of contradicting requirements. On the one hand, one would like to enlarge D_lens (A_de) for long measurement ranges (which comes at the cost of reducing FOV = 2α); on the other hand, the FOV is desired to be as large as possible (requiring a smaller D_lens). For a sensor utilizing a detector FPA, a trade-off between the size of D_lens and FOV coverage has to be made.
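
The trade-off can be made explicit with a short Python sketch based on Eq. (3), assuming an f-number of F = 1 and an illustrative 10 mm detector array (both values are our assumptions):

```python
import math

def fpa_fov_deg(s_detector_m, d_lens_m, f_number=1.0):
    """Full FOV (= 2*alpha) of a focal plane array behind a lens with the
    given diameter and f-number (focal length f = F * D_lens)."""
    focal_length = f_number * d_lens_m
    return 2.0 * math.degrees(math.atan(s_detector_m / (2.0 * focal_length)))

# Example: 10 mm wide detector array behind lenses of different diameters
for d_lens_mm in (5.0, 10.0, 20.0):
    fov = fpa_fov_deg(10e-3, d_lens_mm * 1e-3)
    print(f"D_lens = {d_lens_mm:4.1f} mm -> FOV = {fov:5.1f} deg")
```

The printed values show the inverse relation between D_lens and the covered FOV discussed above: doubling the lens diameter (and with it A_de) roughly halves the usable FOV.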

Fig. 30

Sketch of an FPA with focal length f and lens diameter D_lens.


A flash LiDAR can achieve a high frame rate f since all measurement channels are active in parallel. However, data processing can become challenging for the same reason.

5.1.2.

Sequential

The sequential flash LiDAR concept (shown in Fig. 31) has a one-to-one correlation of emitters and receivers. Both emitter and receiver FPAs have identical physical dimensions and are optically aligned to each other to form multiple digital TOF measurement channels. The parallel measurement channels are shown in Fig. 31(a) with a drawing from published patent application Ref. 81 and additionally visualized in the conceptual view of Fig. 31(c). Figure 31(b) shows a compact integration of all components into a LiDAR module. Comparing the overlap indication from Fig. 31(d) to the overlap of a full flash LiDAR in Fig. 29(d), one can see that a one-to-one correlation between emitters and receivers has the potential to reduce the number of photons lost in the gaps between receiver FOVs. This concept is not called a full flash but a “sequential flash” LiDAR since scanning is achieved by electronically activating lines in both arrays one after the other, see Fig. 31(e). The use of digital TOF (cf., Sec. 2.2.2) measurement channels imposes challenging time constraints but also allows for the implementation of several thousand of them.

Fig. 31

Display of a sequential flash LiDAR concept. Drawings taken from Refs. 81 and 82. (a) Top view of sequential flash LiDAR from Ref. 81. (b) Design of a sequential flash LiDAR module (Ref. 82). (c) Visualization of LiDAR concept with emitter and receiver FPAs (only four measurement paths drawn). (d) Plot of overlap for a sequential flash LiDAR in the far field. (e) Illustration of a line-wise scan pattern.


5.2.

Scanning Mechanisms

In this section, we introduce various solid-state scanning mechanisms. Some of them are part of automotive LiDAR sensor concepts while others are still in an early research state. Common to all ideas is to manipulate light (its wavelength, phase, intensity, polarization, etc.) in such a way that the interfering waves can be steered into different angles. Many focus on the emitter side of measurement channels since the light wave as well as the properties of the photons can be controlled more easily during emission. An implementation on the receiver side, where the collection of back-scattered light with undefined polarization states requires large apertures, is much more demanding.

5.2.1.

Optical phased arrays

A solid-state scanning mechanism can be realized using OPAs.83,84 They are typically built out of a number of waveguides, each capable of introducing a phase delay to the light wave that passes through it. Figure 32 shows schematically how interfering light from waveguides with varying phase can be used to steer a beam. Published patent application Ref. 85 describes a sensor concept utilizing OPAs. Some of the drawings are shown in Fig. 33. Figure 33(a) shows a sensor design next to the concept view of Fig. 33(b). Figures 33(c) and 33(d) show OPA structures.
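
As a hedged illustration of the beam steering principle sketched in Fig. 32, the following Python snippet evaluates the standard far-field relation for a uniform 1D phased array; the emitter pitch, wavelength, and phase step are illustrative values and are not taken from Ref. 85.

```python
import math

def opa_steering_angle_deg(phase_step_rad, pitch_m, wavelength_m):
    """Far-field steering angle of a uniform 1D optical phased array with a
    constant phase increment phase_step_rad between neighboring emitters."""
    sin_theta = phase_step_rad * wavelength_m / (2.0 * math.pi * pitch_m)
    return math.degrees(math.asin(sin_theta))

# Example: 2 um emitter pitch at 1550 nm, phase step of pi/4 per waveguide
print(f"{opa_steering_angle_deg(math.pi / 4, 2e-6, 1550e-9):.1f} deg")  # ~5.6 deg
```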

Fig. 32

Illustration of how to steer a beam by manipulating the phase in waveguides. There is a constant shift from one waveguide to the other which results in a beam deflection toward the bottom.


Fig. 33

Illustration of a solid-state sensor concept with an emitter scanned by an OPA and an FPA detector. (a) Drawing of a sensor design incorporating an OPA (from Ref. 86). (b) Concept view explaining the sensor drawing of Fig. 33(a). (c) Depiction of an OPA setup.85 (d) Schematic illustration of an OPA structure.85


There are three main ways of manipulating the light's phase in waveguides: by applying temperature changes,87–91 voltage,92 or structural changes.93,94 In principle, arbitrary scan patterns can be chosen with an OPA scanner, but the FOV coverage of the FPA detector and its control have to be taken into consideration. Other 2D scanning demonstrators can be found in Refs. 95–99. Further ideas on how to realize an OPA scanning mechanism are mentioned in Sec. 6.

5.2.2.

Liquid crystals

Liquid crystals (LCs) have the ability to change their optical properties, e.g., when a voltage is applied. They have been in use for decades as spatial light modulators (SLMs) integrated in, for example, overhead projectors. However, SLMs tend to have low resolution and switching speeds.100 Modern LCs circumvent these shortcomings,101 which makes them more attractive for automotive use-cases, as demonstrated by the following two scanning concepts.

Published patent application Ref. 102 introduces a sensor concept (depicted in Fig. 34) that integrates a scanner with LC structures operated in conjunction with polarized light. Figures 34(a) and 34(b) show multiple biaxial measurement channels that are scanned together by a liquid crystal polarization grating (LCPG) beam steering element. On a finer scale, a detector line is used to enhance the number of vertical points while all emitters are also scanned by a one-dimensional (1D) MEMS mirror. A more detailed view of an LC polarization grating (PG) element is shown in Fig. 34(c).

Fig. 34

Illustration of a sensor concept utilizing LCs in combination with polarized light. (a) Block diagram of a sensor concept from Ref. 102 utilizing LCs. (b) Sketch of a sensor concept that extends its FOV by utilizing an LCPG beam steering element (Ref. 102). (c) Model of a scanner combining LCs and PGs.102


LCs can also be used in combination with metasurfaces103–105 that consist of arrays of two-dimensional, quasi-periodic, sub-wavelength-scale unit elements, so-called meta-atoms (metallic or dielectric). By changing the meta-atoms geometrically in size, shape, or orientation across the surface, one can locally modify the phase of the incoming light to shape the wavefront. There are many different options to steer light with metamaterials, as outlined in Ref. 106. An example concept with scanning emitter and receiver that combines copper rails with LC layers in between can be found in published patent application Ref. 107 and is shown in Fig. 35. Here, two independent scanners are foreseen for a biaxial emitter and receiver configuration. It is a point-wise scanning sensor, as indicated in Figs. 35(a) and 35(b), allowing for basically arbitrary scan patterns. Close-ups of the so-called metasurface scanner can be found in Fig. 35(c) (the entire scanner) as well as of its structure in Fig. 35(d).

Fig. 35

Display of a sensor concept that includes a metasurface scanning mechanism. (a) Depiction of the sensor's schematics (Ref. 107). (b) Conceptual view of a sensor design with metasurfaces. (c) Illustration of the metasurface working principle from Ref. 107. (d) Close-up of the metasurface's structure.107


5.2.3.

Spectral deflection

Last but not least, we briefly introduce spectral deflection, which is based solely on the wavelength-dependent refraction angles known from prisms. A LiDAR concept employing spectral deflection is described in Ref. 108 and shown in Fig. 36. The idea of realizing different scanning angles by tuning the emitter wavelength is outlined in the system view of Fig. 36(a). It allows for a one-parameter, i.e., 1D, scanning mechanism without any moving parts in, e.g., the horizontal direction.

Fig. 36

Spectral deflection scanning with lens arrangement and scan pattern. (a) Conceptual diagram for a spectral deflection LiDAR (Ref. 108). (b) Drawing of lens arrangement for 2D spectral deflection scanning.108 (c) Sensor concept view utilizing prism scanning. (d) Achievable scan pattern sketch with lens arrangement from Ref. 108.


To also scan in a second (vertical) direction, one can either add a galvano scanner [indicated in Fig. 36(c)] or, as Ref. 108 describes, use the sophisticated lens arrangement from Fig. 36(b) to enable scanning in 2D by only tuning the wavelength. The resulting scan pattern is drawn in Fig. 36(d). Although the receiver's FOV is opened by the angularly dispersive element, only photons with matching wavelengths (signal and background) are received, since photons from a different angle (and with a different wavelength) are not deflected back onto the detector.

6.

Discussion

In this section, we want to revisit the introduced concepts (grouped by section and subsection headlines) and discuss their advantages and challenges.

The FOV coverage axis of the automotive LiDAR design space from Fig. 2 served as a guideline for the introduction of sensor concepts. The chosen order from classical/mechanical scanning systems over MEMS-based solutions to solid-state approaches also represents (in first approximation) their technology readiness level (TRL). Scanning solutions based on well-established motors generally have the highest maturity. They are relatively simple to use and enable wide FOV coverage. However, car maker requirements regarding durability and cost efficiency (mass-producibility) raise the need for alternative scanning approaches. One of the biggest challenges for these new approaches is providing enough FOV coverage while being (semi) solid-state. MEMS-based concepts described in Sec. 4 try to avoid the negative aspects of mechanical scanning systems, such as friction and abrasion, while keeping the benefit of moving parts for scanning. Aspects like their long-term durability and performance stability over the automotive temperature range (cf., Table 1) are fields of ongoing work. Although solid-state sensors have the potential to be highly integrated, with fewer components in the bill of materials and automatic assembly lines, they often, as of today, lack performance when compared with classical (i.e., mechanical) or MEMS scanning LiDARs.

The spinning LiDAR concepts from Figs. 10 and 17 are probably the best-known representatives of automotive LiDAR sensors. Their unique capability of covering FOVhor=360  deg makes them especially interesting for applications where complete surround perception is of predominant importance. However, they cannot be integrated as seamlessly into, e.g., the bumper of a car as the rotating mirror concept from Fig. 11. We divided TOF measurement channels in Sec. 3 into mechanical scanning concepts with analog TOF (Sec. 3.1) and digital TOF (Sec. 3.2), utilizing either NIR or SWIR operating wavelengths. An analog TOF channel in the NIR typically consists of an edge-emitting laser (EEL) in combination with an APD (for explanations of the working principle of these components see, e.g., Refs. 3 and 31). Both are mature, cost-efficient components that have been in extensive use for decades. Higher optical output power within eye-safety limits (cf., Sec. 2.3) motivates a switch to SWIR wavelengths (concept from Fig. 12). Emitters that generate such high optical output power typically utilize fiber lasers. The choice between NIR EELs and SWIR fiber lasers can be summarized, in a simplified way, as a trade-off between cost efficiency and performance. Moving away from silicon as detector material to, e.g., InGaAs (for SWIR wavelengths) comes at the cost of higher component prices as well as noisier detectors (cf., Sec. 2.3). For analog TOF with APD detectors, this results in a tolerable increase in dark current, but in digital TOF, noisier detectors usually necessitate a change in the TCSPC measurement procedure. InGaAs SPADs tend to have higher dark count rates and afterpulsing, which cause unwanted triggers during the acquisition of histograms. One possible countermeasure to mitigate these negative effects is the use of gating schemes, as presented in Ref. 109 for the concept from Fig. 17(d).
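To make the idea of gating more tangible, the following minimal sketch accumulates a TCSPC histogram only from photon timestamps that fall inside a time gate. It is an illustrative example with hypothetical parameter names and values; it does not reproduce the specific scheme of Ref. 109.

```python
import numpy as np

def gated_tcspc_histogram(shots, gate_start, gate_width, bin_width, n_bins):
    """Accumulate a range histogram from per-shot photon timestamps (seconds),
    counting only events inside the gate to suppress part of the dark-count
    and afterpulsing background of noisy (e.g., InGaAs) SPADs."""
    hist = np.zeros(n_bins, dtype=np.uint32)
    for shot in shots:                                    # one laser shot at a time
        t = np.asarray(shot)
        t = t[(t >= gate_start) & (t < gate_start + gate_width)]  # apply the gate
        bins = ((t - gate_start) / bin_width).astype(int)
        np.add.at(hist, bins[bins < n_bins], 1)           # histogram gated events
    return hist

# Toy usage: 200 shots, a return at ~400 ns plus uniform dark counts up to 2 us.
rng = np.random.default_rng(0)
shots = [np.concatenate([rng.normal(400e-9, 1e-9, rng.poisson(0.3)),
                         rng.uniform(0.0, 2e-6, rng.poisson(5))])
         for _ in range(200)]
hist = gated_tcspc_histogram(shots, gate_start=350e-9, gate_width=200e-9,
                             bin_width=1e-9, n_bins=200)
print("peak bin:", hist.argmax())  # expected near bin 50, i.e., ~400 ns
```

Widening, narrowing, or sliding such a gate over successive shots trades background suppression against the range interval covered per acquisition.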

FMCW measurement channels with their ability to measure relative velocity provide an additional feature that can be of important help when it comes to the segmentation of point clouds with perception algorithms. Another benefit is the possible optical signal amplification obtained by enhancing the power of the portion of the outgoing light that is optically mixed with the incoming light (cf., Fig. 9). It is, however, challenging to parallelize multiple FMCW measurement channels, which makes wide FOV coverage at a frame rate of f=25  Hz and a high angular resolution difficult. Many of the sensor concepts currently available utilize polygon scanners, as illustrated in Figs. 18 and 19. FMCW channels are also more complex (e.g., compare the number of components in Fig. 9 to Fig. 8) and prone to misalignment as well as to phase noise and shot noise on the emitter side. The use of photonic integrated circuits (PICs) has the potential to overcome these challenges110,111 but additional work on, e.g., compact integration is required. We, therefore, see many research activities in this field.
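For reference, the standard triangular-chirp FMCW relations (textbook results in one common sign convention, not tied to any specific concept shown here): with chirp bandwidth B and chirp duration T, the up- and down-chirp beat frequencies fup and fdown yield distance and radial velocity simultaneously,

\[
d = \frac{c\,T}{4B}\left(f_{\mathrm{up}} + f_{\mathrm{down}}\right), \qquad
v_r = \frac{\lambda}{4}\left(f_{\mathrm{down}} - f_{\mathrm{up}}\right),
\]

which also illustrates why every additional channel requires its own coherent detection and beat-frequency processing chain.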

The summary table displayed in Table 2 rates aspects like cost, FOV coverage, size/power, TRL, and durability of the concept groups against the requirements listed in Table 1. Qualitative ratings range from “+” through “○” to “−” and indicate how we see the agreement between the merits of a concept group and the desired specifications. In short, classical/mechanical scanners provide mature options for flexible scan patterns (adaptive scan angles, e.g., Fig. 13, or rotation speeds, e.g., Fig. 15) with large apertures [Ade in Eq. (1)]. Durability, size, and cost efficiency (mass-producibility) are challenges that push the development of alternative scanning mechanisms.

Table 2

Summary of merits and disadvantages for different groups of concepts.

Concept group | Cost | FOV coverage | Size/power | TRL | Durability
Classical scanning++
MEMS mirror (mm-scale)++
MEMS mirror (cm-scale)+
Flash illumination+++
Solid-state scanning++

MEMS concepts serve as a replacement for classical scanning mechanisms with the benefits of no friction as well as semiconductor manufacturing processes (for smaller mirrors). A combination with digital TOF (longer tmc) is challenging due to their high oscillation speeds, which is why we see most MEMS concepts utilizing analog TOF measurement channels. The robustness and stability over temperature of MEMS mirrors are subjects of ongoing improvements, especially for concepts with larger mirrors as introduced in Sec. 4.1.2. However, these larger mirrors allow for longer measurement ranges [again linked to Ade in Eq. (1)] and more control over scan patterns.
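A hedged back-of-the-envelope example of this timing conflict (all numbers are assumptions for illustration only): a resonant mirror oscillating at fres = 1 kHz traces one line in 1/(2 fres) = 500 μs; dividing a 40-deg line into 200 angular positions leaves a dwell time per position of roughly

\[
t_{\mathrm{dwell}} \approx \frac{1}{2\,f_{\mathrm{res}}\,N_{\mathrm{points}}} = \frac{1}{2 \cdot 1\,\mathrm{kHz} \cdot 200} = 2.5\,\mu\mathrm{s},
\]

enough for only one or two pulse round trips to 200 m (≈1.3 μs each) and thus far too short to accumulate a TCSPC histogram over hundreds of laser shots.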

Looking at solid-state approaches, we have to differentiate the statement from above that their TRL is generally low. Flash LiDAR concepts are already in use and have the capability to bring down cost by highly integrating all of their components. But challenges linked to the emitted optical power prevail (full flash requires high-power optical pulses, and sequential flash utilizes an array of individually addressable vertical-cavity surface-emitting lasers with limited optical output power). Furthermore, both concepts need to tackle the tasks of finding a trade-off between FOV coverage and measurement range (cf., Sec. 5.1.1) and of processing high data rates from their detector FPAs. In full flash concepts, the data of an entire frame needs to be stored and processed. In the sequential flash concept, TCSPC histogramming (digital TOF) for multiple channels in parallel leads to data rates of a few tens of GB/s, which become challenging to handle.
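The following sketch makes the data-rate statement concrete; the parameters are assumptions loosely derived from the long-range requirements in Table 1, not figures from a specific concept.

```python
# Hedged back-of-the-envelope estimate of the histogram read-out rate for a
# sequential flash sensor with per-pixel TCSPC histograms (digital TOF).
points_per_frame = 400 * 200          # 40 deg x 20 deg at 0.1 deg resolution
bins_per_histogram = int(200 / 0.05)  # 200 m range at 5 cm bins -> 4000 bins
bytes_per_bin = 2                     # 16-bit counters
frame_rate = 25                       # Hz

rate = points_per_frame * bins_per_histogram * bytes_per_bin * frame_rate
print(f"{rate / 1e9:.0f} GB/s")       # -> 16 GB/s before any overhead
```

Approaches such as gating or on-chip peak extraction can reduce such raw rates to what an automotive interface and processor can handle.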

The solid-state scanning approaches provide elegant ways of scanning without any rotating parts. Goals for the design of OPAs comprise high-volume/low-loss laser-to-PIC coupling methods and effective side lobe suppression (e.g., Ref. 112), while challenges such as a high number of electrical contacts for PIC integration need to be addressed. Nevertheless, ideas on how to achieve a wide FOV113 or 2D scanning114–116 can already be found in the literature, where other scanning approaches based on Bragg waveguides117 or photonic crystals118 are also discussed.
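The side-lobe challenge can be made concrete with the standard uniform-array relation (a textbook phased-array result, not taken from Ref. 112): for an emitter pitch d and a main beam steered to θ0, grating lobes appear at angles θm with

\[
\sin\theta_m = \sin\theta_0 + m\,\frac{\lambda}{d}, \qquad m \in \mathbb{Z}\setminus\{0\},
\]

so lobe-free steering over a wide FOV pushes the pitch toward λ/2 (well below 1 μm at NIR wavelengths), which conflicts with low-crosstalk waveguide routing and motivates sparse or aperiodic array layouts (e.g., Ref. 89).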

This section is written to the best knowledge of the authors. We do not claim this list of aspects to be indisputable or complete.

7.

Conclusion

We presented a visual depiction of the automotive LiDAR design space followed by an introduction to each of the possible options. Different LiDAR concepts were outlined with drawings from published patent applications, which we explained in accompanying figures. We covered many mechanical scanning techniques that became flagships during the first deployments of automotive LiDAR sensors. With the push toward higher-level autonomous driving functionalities, new alternative scanning methods emerged. Concepts for automotive solid-state LiDAR promise to fulfill carmakers' requirements while being cost-efficient, robust, and compact enough to be integrated into consumer cars; this remains a field of many ongoing research activities. We presented an overview of existing and future automotive LiDAR scanning concepts and concluded with a discussion of their merits and disadvantages. This work provides orientation to the reader and serves as a starting point for further research in the field of automotive LiDAR sensors.

References

1. 

M. E. Warren, “Automotive LiDAR technology,” in Symp. VLSI Circ., 254 –255 (2019). Google Scholar

2. 

R. Stettner et al., “Ladar enabled impact mitigation system,” (2020). Google Scholar

3. 

P. F. McManamon, LiDAR Technologies and Systems, SPIE, Bellingham, Washington (2019). Google Scholar

4. 

M.-C. Amann et al., “Laser ranging: a critical review of unusual techniques for distance measurement,” Opt. Eng., 40 10 –19 https://doi.org/10.1117/1.1330700 (2001). Google Scholar

5. 

S. Royo and M. Ballesta-Garcia, “An overview of lidar imaging systems for autonomous vehicles,” Appl. Sci., 9 (19), 4093 https://doi.org/10.3390/app9194093 (2019). Google Scholar

6. 

Z. Dai et al., “LiDARs for vehicles: from the requirements to the technical evaluation,” https://www.repo.uni-hannover.de/handle/123456789/11439 (2021). Google Scholar

7. 

C. Rablau, “LIDAR: a new self-driving vehicle for introducing optics to broader engineering and non-engineering audiences,” Proc. SPIE, 11143 111430C https://doi.org/10.1117/12.2523863 (2019). Google Scholar

8. 

Y. Li and J. Ibanez-Guzman, “LiDAR for autonomous driving: the principles, challenges, and trends for automotive lidar and perception systems,” IEEE Signal Process. Mag., 37 (4), 50 –61 https://doi.org/10.1109/MSP.2020.2973615 ISPRE6 1053-5888 (2020). Google Scholar

9. 

R. Thakur, “Scanning lidar in advanced driver assistance systems and beyond: building a road map for next-generation LiDAR technology,” IEEE Consum. Electron. Mag., 5 48 –54 https://doi.org/10.1109/MCE.2016.2556878 (2016). Google Scholar

10. 

J. Hecht, “LiDAR for self-driving cars,” Opt. Photonics News, 29 26 –33 https://doi.org/10.1364/OPN.29.1.000026 OPPHEL 1047-6938 (2018). Google Scholar

11. 

H. Gotzig and G. O. Geduld, LIDAR-Sensorik: Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort, 317 –334 Springer Fachmedien Wiesbaden, Wiesbaden (2015). Google Scholar

12. 

D. Bastos et al., “An overview of LiDAR requirements and techniques for autonomous driving,” in Telecoms Conf. (ConfTELE), 1 –6 (2021). Google Scholar

13. 

J. Lambert et al., “Performance analysis of 10 models of 3D LiDARs for automated driving,” IEEE Access, 8 131699 –131722 https://doi.org/10.1109/ACCESS.2020.3009680 (2020). Google Scholar

14. 

J. Liu et al., “TOF LiDAR development in autonomous vehicle,” in IEEE 3rd Optoelectron. Global Conf. (OGC), 185 –190 (2018). https://doi.org/10.1109/OGC.2018.8529992 Google Scholar

15. 

B. Behroozpour et al., “LiDAR system architectures and circuits,” IEEE Commun. Mag., 55 135 –142 https://doi.org/10.1109/MCOM.2017.1700030 ICOMD9 0163-6804 (2017). Google Scholar

16. 

M. Hansard et al., Characterization of Time-of-Flight Data: Principles, Methods and Applications, 1 –28 Springer, London (2013). Google Scholar

17. 

D. L. Shumaker, J. S. Accetta, The Infrared and Electro-Optical Systems Handbook, Infrared Information Analysis Center; SPIE Optical Engineering Press, Ann Arbor, MI; Bellingham, WA (1993). Google Scholar

18. 

R. D. Richmond and S. C. Cain, Direct-Detection LADAR Systems, SPIE Press, Bellingham, WA (2010). Google Scholar

19. 

P. McManamon, “Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng., 51 060901 https://doi.org/10.1117/1.OE.51.6.060901 (2012). Google Scholar

20. 

W. Wagner et al., “Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner,” ISPRS J. Photogramm. Remote Sens., 60 100 –112 https://doi.org/10.1016/j.isprsjprs.2005.12.001 IRSEE9 0924-2716 (2006). Google Scholar

21. 

T. Halldórsson and J. Langerholc, “Geometrical form factors for the LiDAR function,” Appl. Opt., 17 240 –244 https://doi.org/10.1364/AO.17.000240 APOPAI 0003-6935 (1978). Google Scholar

22. 

J. Harms, “LiDAR return signals for coaxial and noncoaxial systems with central obstruction,” Appl. Opt., 18 1559 –1566 https://doi.org/10.1364/AO.18.001559 APOPAI 0003-6935 (1979). Google Scholar

23. 

T. Fersch, R. Weigel and A. Koelpin, “Challenges in miniaturized automotive long-range LiDAR system design,” Proc. SPIE, 10219 102190T https://doi.org/10.1117/12.2260894 (2017). Google Scholar

24. 

R. Machado, J. Cabral and F. S. Alves, “Recent developments and challenges in FPGA-based time-to-digital converters,” IEEE Trans. Instrum. Meas., 68 4205 –4221 https://doi.org/10.1109/TIM.2019.2938436 IEIMAO 0018-9456 (2019). Google Scholar

25. 

M.-J. Lee, “Single-photon avalanche diodes in CMOS technology: towards low-cost and compact solid-state lidar sensors,” in Opt. Sens. and Sens. Congr., ETu3E.2 (2020). https://doi.org/10.1364/ES.2020.ETu3E.2 Google Scholar

26. 

E. Charbon, “Introduction to time-of-flight imaging,” in Sensors, 2014 IEEE, 610 –613 (2014). https://doi.org/10.1109/ICSENS.2014.6985072 Google Scholar

27. 

W. Becker, Advanced Time-Correlated Single Photon Counting Techniques, Springer, Berlin, Heidelberg (2005). Google Scholar

28. 

K. A. Zachariasse, “Einzelphotonenzählung: time-correlated single photon counting. von d. v. O’Connor und d. Phillips. Academic Press, London—New York 1984. VIII, 288 s.,” Nachrich. Chem. Tech. Lab., 33 (10), 896 –896 https://doi.org/10.1002/nadc.19850331013 NCTLDI 0341-5163 (1985). Google Scholar

29. 

S. W. Hutchings et al., “A reconfigurable 3-D-stacked SPAD imager with in-pixel histogramming for flash LiDAR or high-speed time-of-flight imaging,” IEEE J. Solid-State Circ., 54 2947 –2956 https://doi.org/10.1109/JSSC.2019.2939083 IJSCBC 0018-9200 (2019). Google Scholar

30. 

J. Massa et al., “Optical design and evaluation of a three-dimensional imaging and ranging system-based on time-correlated single-photon counting,” Appl. Opt., 41 1063 –1070 https://doi.org/10.1364/AO.41.001063 APOPAI 0003-6935 (2002). Google Scholar

31. 

G. Rieke, Detection of Light: From the Ultraviolet to the Submillimeter, 2nd ed., Cambridge University Press, Cambridge (2002). Google Scholar

32. 

D. Pierrottet et al., “Linear FMCW laser radar for precision range and vector velocity measurements,” MRS Proc., 1076 10760406 https://doi.org/10.1557/PROC-1076-K04-06 (2008). Google Scholar

33. 

D. Uttam and B. Culshaw, “Precision time domain reflectometry in optical fiber systems using a frequency modulated continuous wave ranging technique,” J. Lightwave Technol., 3 971 –977 https://doi.org/10.1109/JLT.1985.1074315 JLTEDG 0733-8724 (1985). Google Scholar

34. 

A. J. Hymans and J. Lait, “Analysis of a frequency-modulated continuous-wave ranging system,” Proc. IEE - Part B: Electron. Commun. Eng., 107 365 –372 https://doi.org/10.1049/pi-b-2.1960.0130 (1960). Google Scholar

35. 

M. Kronauge and H. Rohling, “New chirp sequence radar waveform,” IEEE Trans. Aerosp. Electron. Syst., 50 2870 –2877 https://doi.org/10.1109/TAES.2014.120813 IEARAX 0018-9251 (2014). Google Scholar

36. 

M. Vollmer, K.-P. Möllmann and J. A. Shaw, “The optics and physics of near infrared imaging,” Proc. SPIE, 9793 97930Z https://doi.org/10.1117/12.2223094 (2015). Google Scholar

37. 

“Sicherheit von Lasereinrichtungen—Teil 1: Klassifizierung von Anlagen und Anforderungen (IEC 60825-1:2014),” (2014). Google Scholar

38. 

I. S. Amiri et al., “Temperature effects on characteristics and performance of near-infrared wide bandwidth for different avalanche photodiodes structures,” Results Phys., 14 102399 https://doi.org/10.1016/j.rinp.2019.102399 (2019). Google Scholar

39. 

G. M. Williams, “Optimization of eyesafe avalanche photodiode lidar for automobile safety and autonomous navigation systems,” Opt. Eng., 56 031224 https://doi.org/10.1117/1.OE.56.3.031224 (2017). Google Scholar

40. 

R. H. Rasshofer, M. Spies and H. Spies, “Influences of weather phenomena on automotive laser radar systems,” Adv. Radio Sci., 9 49 –60 https://doi.org/10.5194/ars-9-49-2011 (2011). Google Scholar

41. 

M. Kutila et al., “Automotive LiDAR performance verification in fog and rain,” in 21st Int. Conf. Intell. Transp. Syst. (ITSC), 1695 –1701 (2018). Google Scholar

42. 

C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles, Wiley, New York (1998). Google Scholar

43. 

E. J. K. Isaac, I. Kim and B. McArthur, “Comparison of laser beam propagation at 785 nm and 1550 nm in fog and haze for optical wireless communications,” Proc. SPIE, 4214 26 –37 https://doi.org/10.1117/12.417512 PSISDG 0277-786X (2001). Google Scholar

44. 

J. Wojtanowski et al., “Comparison of 905 nm and 1550 nm semiconductor laser rangefinders’ performance deterioration due to adverse environmental conditions,” Opto-Electron. Rev., 22 183 –190 https://doi.org/10.2478/s11772-014-0190-2 OELREM 1230-3402 (2014). Google Scholar

45. 

K. Sassen and G. C. Dodd, “LiDAR crossover function and misalignment effects,” Appl. Opt., 21 3162 –3165 https://doi.org/10.1364/AO.21.003162 APOPAI 0003-6935 (1982). Google Scholar

46. 

D. Hall, “High definition LiDAR system,” (2011). Google Scholar

47. 

S. Suzuki, “Lidarvorrichtung, fahrassistenzsystem und fahrzeug,” (2020). Google Scholar

48. 

J. E. McWhirter, “Manufacturing a balanced polygon mirror,” (2020). Google Scholar

49. 

H. Kikuchi, “Optical scanning radar system,” (2005). Google Scholar

50. 

W. L. Wolfe, “Introduction to Infrared System Design,” SPIE Optical Engineering Press, Bellingham, WA (1996). Google Scholar

51. 

X. Hong et al., “System and method for supporting LiDAR applications,” (2018). Google Scholar

52. 

J. Wu et al., “Small bearings for multi-element optical scanning devices, and associated systems and methods,” (2021). Google Scholar

53. 

G. F. Marshall, “Risley prism scan patterns,” Proc. SPIE, 3787 74 –86 https://doi.org/10.1117/12.351658 (1999). Google Scholar

54. 

A. Pacala et al., “Optical system for collecting distance information within a field,” (2018). Google Scholar

55. 

A. Pacala and M. Frichtl, “Multispectral ranging/imaging sensor arrays and systems,” (2021). Google Scholar

56. 

G. Kamerman, C. Trowbridge and V. Negoita, “Polarization filtering in LiDAR system,” (2021). Google Scholar

57. 

B. J. Roxworthy, P. Srinivasan and A. Samarao, “FMCW lidar using array waveguide receivers and optical frequency shifting,” (2022). Google Scholar

58. 

E. J. Angus and R. M. Galloway, “LiDAR apparatus with rotatable polygon deflector having refractive facets,” (2020). Google Scholar

59. 

R. M. Galloway, E. Angus and Z. W. Barber, “LiDAR system including multifaceted deflector,” (2020). Google Scholar

60. 

S. P. Timoshenko and J. N. Goodier, Theory of Elasticity, McGraw-Hill, New York (1982). Google Scholar

61. 

P. R. Patterson et al., “Scanning micromirrors: an overview,” Proc. SPIE, 5604 195 –207 https://doi.org/10.1117/12.582849 PSISDG 0277-786X (2004). Google Scholar

62. 

H. W. Yoo et al., “MEMS-based lidar for autonomous driving,” E&I Elektrotech. Informationstech., 135 408 –415 https://doi.org/10.1007/s00502-018-0635-2 (2018). Google Scholar

63. 

D. Wang, C. Watkins and H. Xie, “MEMS mirrors for LiDAR: a review,” Micromachines, 11 (5), 456 https://doi.org/10.3390/mi11050456 (2020). Google Scholar

64. 

U. Hofmann et al., “Biaxial resonant 7 mm-MEMS mirror for automotive lidar application,” in Int. Conf. Opt. MEMS and Nanophotonics, 150 –151 (2012). Google Scholar

65. 

L. Ye et al., “A 2D resonant MEMS scanner with an ultra-compact wedge-like multiplied angle amplification for miniature lidar application,” in IEEE Sens., 1 –3 (2016). https://doi.org/10.1109/ICSENS.2016.7808932 Google Scholar

66. 

L. C. Dussan, “Methods and systems for ladar transmission,” (2018). Google Scholar

67. 

L. C. Dussan, “Ladar transmitter with feedback control of dynamic scan patterns,” (2019). Google Scholar

68. 

T. B. Greenslade, “All about Lissajous figures,” Phys. Teach., 31 (6), 364 –370 https://doi.org/10.1119/1.2343802 PHTEAH 0031-921X (1993). Google Scholar

69. 

D. Cook et al., “Ladar transmitter with ellipsoidal reimager,” (2020). Google Scholar

70. 

M. Shani et al., “LiDAR systems and methods,” (2019). Google Scholar

71. 

M. Müller and M. Schardt, “Coaxial optical system of a frictionless scan system for light detection and ranging, LiDAR, measurements,” (2019). Google Scholar

72. 

M. Müller, “Aligning a resonant scanning system,” (2019). Google Scholar

73. 

E. Matthew, “Scanning mirror system with attached coil,” (2021). Google Scholar

74. 

S. Singer, “Lichtführung in einem lidarsystem mit einer monozentrischen Linse,” (2019). Google Scholar

75. 

J. Pei et al., “Scanning LiDAR system,” (2021). Google Scholar

76. 

J. Pei et al., “Methods and apparatuses for scanning a LiDAR system in two dimensions,” (2019). Google Scholar

77. 

S. Royo et al., “A vision system and a vision method for a vehicle,” (2019). Google Scholar

78. 

Y. Takashima et al., “MEMS-based imaging LiDAR,” in Light, Energy and the Environ. 2018 (E2, FTS, HISE, SOLAR, SSL), ET4A.1 (2018). Google Scholar

79. 

R. Stettner, P. Gilliland and A. Duerner, “Automotive auxiliary ladar sensor,” (2020). Google Scholar

80. 

W. J. Smith, Modern Optical Engineering – The Design of Optical Systems, 3rd ed., McGraw-Hill, New York (2000). Google Scholar

81. 

R. Beuschel and M. Kiehn, “Lidar receiving unit,” (2019). Google Scholar

82. 

S. Frick et al., “Lidar messsystem und verfahren zur montage eines lidar messsystems,” (2019). Google Scholar

83. 

P. F. McManamon et al., “Optical phased array technology,” Proc. IEEE, 84 268 –298 https://doi.org/10.1109/5.482231 IEEPAD 0018-9219 (1996). Google Scholar

84. 

C.-P. Hsu et al., “A review and perspective on optical phased array for automotive lidar,” IEEE J. Sel. Top. Quantum Electron., 27 1 –16 https://doi.org/10.1109/JSTQE.2020.3022948 IJSQEN 1077-260X (2021). Google Scholar

85. 

L. Eldada, “Planar beam forming and steering optical phased array chip and method of using same,” (2018). Google Scholar

86. 

L. Eldada, T. Yu and A. Pacala, “Optical phased array LiDAR system and method of using same,” (2016). Google Scholar

87. 

S. Chung, H. Abediasl and H. Hashemi, “A monolithically integrated large-scale optical phased array in silicon-on-insulator CMOS,” IEEE J. Solid-State Circ., 53 275 –296 https://doi.org/10.1109/JSSC.2017.2757009 IJSCBC 0018-9200 (2018). Google Scholar

88. 

S. A. Miller et al., “Large-scale optical phased array using a low-power multi-pass silicon photonic platform,” Optica, 7 3 –6 https://doi.org/10.1364/OPTICA.7.000003 (2020). Google Scholar

89. 

R. Fatemi, A. Khachaturian and A. Hajimiri, “A nonuniform sparse 2-D large-FOV optical phased array with a low-power PWM drive,” IEEE J. Solid-State Circ., 54 1200 –1215 https://doi.org/10.1109/JSSC.2019.2896767 IJSCBC 0018-9200 (2019). Google Scholar

90. 

K. V. Acoleyen et al., “Off-chip beam steering with a one-dimensional optical phased array on silicon-on-insulator,” Opt. Lett., 34 1477 –1479 https://doi.org/10.1364/OL.34.001477 OPLEDP 0146-9592 (2009). Google Scholar

91. 

D. N. Hutchison et al., “High-resolution aliasing-free optical beam steering,” Optica, 3 887 –890 https://doi.org/10.1364/OPTICA.3.000887 (2016). Google Scholar

92. 

C. V. Poulton et al., “Long-range lidar and free-space data communication with high-performance optical phased arrays,” IEEE J. Sel. Top. Quantum Electron., 25 1 –8 https://doi.org/10.1109/JSTQE.2019.2908555 IJSQEN 1077-260X (2019). Google Scholar

93. 

C. Errando-Herranz et al., “MEMS for photonic integrated circuits,” IEEE J. Sel. Top. Quantum Electron., 26 1 –16 https://doi.org/10.1109/JSTQE.2019.2943384 IJSQEN 1077-260X (2020). Google Scholar

94. 

Y. Wang et al., “2D broadband beamsteering with large-scale MEMS optical phased array,” Optica, 6 557 –562 https://doi.org/10.1364/OPTICA.6.000557 (2019). Google Scholar

95. 

J. K. Doylend et al., “Two-dimensional free-space beam steering with an optical phased array on silicon-on-insulator,” Opt. Express, 19 21595 –21604 https://doi.org/10.1364/OE.19.021595 OPEXFF 1094-4087 (2011). Google Scholar

96. 

D. Kwong et al., “On-chip silicon optical phased array for two-dimensional beam steering,” Opt. Lett., 39 941 –944 https://doi.org/10.1364/OL.39.000941 OPLEDP 0146-9592 (2014). Google Scholar

97. 

T. Kim et al., “A single-chip optical phased array in a wafer-scale silicon photonics/CMOS 3D-integration platform,” IEEE J. Solid-State Circ., 54 3061 –3074 https://doi.org/10.1109/JSSC.2019.2934601 IJSCBC 0018-9200 (2019). Google Scholar

98. 

K. Nakamura et al., “Liquid crystal-tunable optical phased array for lidar applications,” Proc. SPIE, 11690 116900W https://doi.org/10.1117/12.2591230 (2021). Google Scholar

99. 

R. Jansen et al., “Integrated calibration-free scannable structured light for fast high-resolution lidar,” in OSA Adv. Photonics Congr. (AP) 2020 (IPR, NP, NOMA, Networks, PVLED, PSC, SPPCom, SOF), Th2H.5 (2020). https://doi.org/10.1364/IPRSN.2020.ITh2H.5 Google Scholar

100. 

P. A. Blanche et al., “Holographic three-dimensional telepresence using large-area photorefractive polymer,” Nature, 468 80 –83 https://doi.org/10.1038/nature09521 (2010). Google Scholar

101. 

G. Thalhammer et al., “Speeding up liquid crystal slms using overdrive with phase change reduction,” Opt. Express, 21 1779 –1797 https://doi.org/10.1364/OE.21.001779 OPEXFF 1094-4087 (2013). Google Scholar

102. 

R. Baribault and P. Olivier, “Beam-steering devices and methods for lidar applications,” (2022). Google Scholar

103. 

M. Khorasaninejad and F. Capasso, “Metalenses: versatile multifunctional photonic components,” Science, 358 (6367), eaam8100 https://doi.org/10.1126/science.aam8100 SCIEAS 0036-8075 (2017). Google Scholar

104. 

J. Engelberg and U. Levy, “The advantages of metalenses over diffractive lenses,” Nat. Commun., 11 1991 https://doi.org/10.1038/s41467-020-15972-9 NCAOBW 2041-1723 (2020). Google Scholar

105. 

L. Zhang et al., “Advances in full control of electromagnetic waves with metasurfaces,” Adv. Opt. Mater., 4 (6), 818 –833 https://doi.org/10.1002/adom.201500690 2195-1071 (2016). Google Scholar

106. 

J. Kim et al., “Tunable metasurfaces towards versatile metalenses and metaholograms: a review,” Adv. Photonics, 4 (2), 024001 https://doi.org/10.1117/1.AP.4.2.024001 (2022). Google Scholar

107. 

G. M. Akselrod, P. Bowen and Y. Yang, “Tunable liquid crystal metasurfaces,” (2020). Google Scholar

108. 

F. Collarte Bondy et al., “An optical beam director,” (2022). Google Scholar

109. 

K. E. Yuryevich et al., “Lidar system comprising a single-photon detector,” (2015). Google Scholar

110. 

P. J. Suni et al., “Photonic integrated circuit FMCW LiDAR on a chip,” in 19th Coherent Laser Radar Conf., (2018). Google Scholar

111. 

J. Riemensberger et al., “Massively parallel coherent laser ranging using a soliton microcomb,” Nature, 581 164 –170 https://doi.org/10.1038/s41586-020-2239-3 (2020). Google Scholar

112. 

G. M. Akselrod and P. Padmanabha Iyer, “Sidelobe suppression in metasurface devices,” (2020). Google Scholar

113. 

Y. Liu and H. Hu, “Silicon optical phased array with a 180-degree field of view for 2D optical beam steering,” Optica, 9 903 –907 https://doi.org/10.1364/OPTICA.458642 (2022). Google Scholar

114. 

N. A. Tyler et al., “SiN integrated optical phased arrays for two-dimensional beam steering at a single near-infrared wavelength,” Opt. Express, 27 5851 –5858 https://doi.org/10.1364/OE.27.005851 OPEXFF 1094-4087 (2019). Google Scholar

115. 

J. Sun et al., “Two-dimensional apodized silicon photonic phased arrays,” Opt. Lett., 39 367 –370 https://doi.org/10.1364/OL.39.000367 OPLEDP 0146-9592 (2014). Google Scholar

116. 

W. S. Rabinovich et al., “Two-dimensional beam steering using a thermo-optic silicon photonic optical phased array,” Opt. Eng., 55 (11), 111603 https://doi.org/10.1117/1.OE.55.11.111603 (2016). Google Scholar

117. 

F. Koyama and X. Gu, “Beam steering, beam shaping, and intensity modulation based on VCSEL photonics,” IEEE J. Sel. Top. Quantum Electron., 19 1701510 https://doi.org/10.1109/JSTQE.2013.2247980 IJSQEN 1077-260X (2013). Google Scholar

118. 

T. Raj et al., “A survey on lidar scanning mechanisms,” Electronics, 9 (5), 741 https://doi.org/10.3390/electronics9050741 ELECAD 0013-5070 (2020). Google Scholar

Biography

Hanno Holzhüter works as a research project manager at Ibeo Automotive Systems and is also a PhD student with a focus on DSP in LiDAR sensors at the Institute for Microelectronic Systems (IMS), Leibniz University Hannover, and Ibeo AS. Before joining Ibeo in 2016, he worked as a scientific assistant in the engineering education research group at the Technical University of Hamburg after finishing his master's degree in physics at the Georg-August-University Göttingen in 2015.

Jörn Bödewadt has been working at Ibeo Automotive Systems since 2018 as an optical design engineer. His tasks range from the investigation of optical effects in LiDAR sensors, through simulation and experiments, to the specification, testing, and analysis of optical components. He has a background in accelerator and free-electron laser physics, in which he received his PhD in 2011 from the University of Hamburg. After his PhD, he joined Deutsches Elektronen-Synchrotron to work on fundamental research on improving the coherence properties of free-electron lasers.

Shima Bayesteh received her BSc degree in physics in 2006 from the University of Isfahan and her MSc degree in astrophysics in 2008 from the University of Zanjan in Iran. She completed her PhD research in accelerator physics at Deutsches Elektronen-Synchrotron (DESY) and received her degree in 2014 from the University of Hamburg. She joined Ibeo in 2016 as an optics development engineer. She is currently working as a LiDAR R&D engineer, dealing with novel technological solutions for LiDAR systems.

Andreas Aschinger received his diploma in physics in 2008 and completed his PhD in plasma physics in 2012 at the Ruhr-University of Bochum. The topic of the PhD thesis was Dynamic Light Scattering on Complex Plasmas. After his PhD, he worked at Leopold Kostal GmbH & Co. KG in the field of driver assistance cameras. Since 2019, he has been dedicated to the development of future LiDAR sensors at Ibeo Automotive Systems GmbH.

Holger Blume received his Dipl-Ing and PhD degrees in electrical engineering from the University of Dortmund in 1992 and 1997, respectively. Until 2008, he worked as a senior engineer at RWTH Aachen University, where he finished his habilitation in 2008. Since then, he has been a professor of architectures and systems at Leibniz University Hannover. His research interests are in design space exploration for algorithms and architectures for DSP with applications in biomedical and automotive systems.

© 2023 Society of Photo-Optical Instrumentation Engineers (SPIE)
Hanno Holzhüter, Jörn Bödewadt, Shima Bayesteh, Andreas Aschinger, and Holger Blume "Technical concepts of automotive LiDAR sensors: a review," Optical Engineering 62(3), 031213 (10 January 2023). https://doi.org/10.1117/1.OE.62.3.031213
Received: 20 September 2022; Accepted: 8 December 2022; Published: 10 January 2023
KEYWORDS: Sensors, Mirrors, LIDAR, Microelectromechanical systems, Receivers, Scanners, Patents
