1550-nm Photonics Promise Eye-Safe, Cost-Effective Autonomous Machines

Sensors Insights by Mehdi Asghari

While LiDAR is needed to provide high-resolution, detailed 3D vision for autonomous vehicles and other applications, the technology is bulky and expensive. Newer approaches focus on coherent detection techniques, which extract significant additional information from the returning photons rather than just changes in light intensity. Traditionally, these approaches have been expensive and difficult to realize. However, with recent advances in silicon photonics, these solutions can be shrunk to chip level, reducing power consumption and cost at the same time.

Frequency modulated continuous wave (FMCW) LiDAR sensing delivers additional dimensions such as depth and velocity and improves achievable accuracy by orders of magnitude over existing technologies. The 1550-nm wavelength, with its roughly 40x higher eye-safe power limit, lends itself perfectly to this approach and has the potential to detect low-reflectivity targets, such as a tire on the roadway, 300 m out.





Autonomous machines are poised to change our world, freeing us from mundane tasks like commuter driving and giving us more time to be productive and do the things we enjoy most. However, while accidents are accepted with human drivers, machines are expected to perform their tasks error-free. To achieve this, machines need to be equipped with sensors that are multi-dimensional and significantly exceed human capabilities.

At this point in the evolution of autonomy, it's clear that the sensors and radar we use today are not enough to support this vision of a safe and accurate autonomous world. Cameras, one of the most common "vision" approaches, are not sufficient on their own, as they can be blinded by a low-standing sun and have poor dynamic range in twilight conditions. And while radar can see through rain at ranges of hundreds of meters, its resolution is so poor that it is only reliable at identifying large objects that have a velocity vector associated with them.


905 nm's Achilles' Heel: Eye Damage, Accuracy, and Interference

The dirty little secret is that many algorithms ignore non-moving radar reflections to avoid too many false positives. A third sensor is needed that can provide high-resolution images even in foggy and rainy conditions. A car needs to be able to detect a 10-cm-high object, like a tire on the road, as far as 300 m out, to determine whether to 1) run over it, 2) change lanes, or 3) perform an emergency brake. This sensor is called LiDAR (Light Detection and Ranging), and it is needed to provide high-resolution, detailed 3D vision for autonomous vehicles.
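To put that detection requirement in numbers, here is a rough sketch using the article's own figures (10 cm object, 300 m range); the small-angle arithmetic is mine, not a specification:

```python
# Rough angular-resolution requirement from the article's numbers:
# a 10-cm-high object at 300 m range (small-angle approximation).
object_height_m = 0.10
range_m = 300.0

angular_size_mrad = (object_height_m / range_m) * 1e3
print(f"{angular_size_mrad:.2f} mrad")  # → 0.33 mrad
```

A sensor must resolve roughly a third of a milliradian to register that tire as more than a single ambiguous return.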

1550-nm eye-safe spectrum

Well over $200 million worth of LiDAR units were sold in 2018, but the average cost exceeded $50,000 per unit, and that cost has been the industry's primary barrier to growth. While a fair number of startups are focused on reducing the cost and size of these systems, most new and established players are working on pulsed laser systems in the 905 nm near-infrared region. These work by sending a powerful laser pulse (up to 130 W) and recording the time it takes for a reflection to return, then repeating the process up to a million times per second while scanning the laser across the desired field of view to build a complete 3D image. The problem with these systems is twofold.
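The pulsed time-of-flight principle described above reduces to a one-line calculation; the 2-microsecond example below is illustrative, not taken from any particular product:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Range from a pulse's round-trip time: the light travels out and
    back, so the one-way distance is half the total path."""
    return C * round_trip_s / 2.0

# A return arriving 2 microseconds after the pulse corresponds to ~300 m.
print(tof_range_m(2e-6))  # → 299.792458
```

Repeating this measurement while steering the beam is what builds the point cloud, which is why pulse rate and scan pattern dominate a pulsed system's frame rate.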

First, 905 nm is close to the visible spectrum, so at the power levels used there is a risk of human eye damage. Even at these unhealthy power levels, 905 nm LiDARs have trouble seeing an object of interest, such as a tire on the highway, 200 m away. Second, pulsed LiDARs are prone to interference with one another, so as these systems become more widespread, more problems will occur.


A Single Chip to Enable Them All

Hence, the latest trend in LiDAR is a move toward a much longer wavelength, such as 1550 nm, which lies far from the visible spectrum our eyes absorb and allows roughly 40x higher eye-safe power levels than 905 nm. Rather than relying on simple pulses, companies like ours use frequency chirps similar to those already used in radar systems. The technique is called FMCW (frequency modulated continuous wave) and allows much higher detection sensitivity and accuracy.

LiDAR systems incorporating 1550 nm FMCW can readily detect a tire or similar low-reflectivity object more than 300 m ahead of a car on the highway, giving the vehicle time to stop or change lanes at 70 mph in any weather condition. The FMCW technique also enables instantaneous measurement of an object's speed through the Doppler effect. This is important not only for predictive analysis and response, but also for object identification, helping to separate and recognize distinct objects in the raw point-cloud data.
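The simultaneous range-and-velocity recovery described above can be sketched with the standard triangular-chirp FMCW relations. This is an illustrative Python model, not SiLC's implementation; the chirp slope and target values are hypothetical:

```python
C = 299_792_458.0      # speed of light, m/s
WAVELENGTH = 1.55e-6   # 1550 nm carrier, m

def fmcw_range_velocity(f_up_hz, f_down_hz, chirp_slope_hz_per_s):
    """Recover range and radial velocity from the beat frequencies of a
    triangular (up/down) chirp: the range term is the average of the two
    beats, and the Doppler shift is half their difference."""
    f_range = (f_up_hz + f_down_hz) / 2.0
    f_doppler = (f_down_hz - f_up_hz) / 2.0
    range_m = C * f_range / (2.0 * chirp_slope_hz_per_s)
    velocity_mps = WAVELENGTH * f_doppler / 2.0
    return range_m, velocity_mps

# Forward-simulate a target at 150 m closing at 30 m/s, then invert.
# Hypothetical chirp: 1 GHz swept in 10 us.
slope = 1e9 / 10e-6           # chirp slope, Hz/s
f_r = slope * 2 * 150.0 / C   # range beat frequency
f_d = 2 * 30.0 / WAVELENGTH   # Doppler shift
r, v = fmcw_range_velocity(f_r - f_d, f_r + f_d, slope)
print(r, v)  # recovers ~150 m and ~30 m/s
```

Because the Doppler term shifts the up-chirp and down-chirp beats in opposite directions, a single triangular sweep separates range from velocity per point, which is the "4th dimension" the article refers to.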

Detection of tire at 25 cm

Longer-wavelength LiDAR systems with a coherent detection approach like FMCW have been hard to realize until now due to the lack of inexpensive components. Drawing on its silicon photonics heritage, the team at SiLC Technologies is working on a single-chip approach that integrates the laser, the coherent optical signal processing, and the SiGe detectors into one IC package. The technology is well established and enjoys a mature manufacturing ecosystem based on silicon's proven, standard equipment and processes. The platform has already proven itself with qualified products and volume manufacturing in telecom and data-center optics applications.

SiLC's 4D vision sensor chip enables a range-extended, eye-safe integrated LiDAR.


Photonics are the Key to the Vision Systems of the Future

The key challenge in materializing an FMCW solution has been low-cost, high-volume manufacturing of the high-performance components needed. The coherent approach requires lasers with long coherence lengths (narrow linewidths) so that the approach can work for spans of up to 300m. It also requires coherent processing of the light to extract additional information carried by photons.

This means very accurate, low-noise Optical Signal Processing (OSP) circuits forming a coherent receiver. Polarization also plays a role here, since coherent beating only works for photons of the same polarization. On top of this, the wavelength stability and linearity of the laser source are critical over the measurement period; otherwise the signal-to-noise ratio can degrade significantly.
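As a back-of-the-envelope check on the linewidth requirement mentioned above (my numbers, not SiLC's): a laser with a Lorentzian linewidth has a coherence length of roughly c/(π·Δν), and a 300 m target implies about 600 m of round-trip path that must stay within it.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coherence_length_m(linewidth_hz):
    """Coherence length of a laser with Lorentzian linewidth:
    L = c / (pi * delta_nu)."""
    return C / (math.pi * linewidth_hz)

# A 300 m target means ~600 m of round-trip path; a 100 kHz-linewidth
# laser offers ~954 m of coherence length, comfortably covering it.
print(coherence_length_m(100e3))  # ≈ 954 m
```

This is why FMCW pushes laser requirements toward the ~100 kHz linewidth class rather than the multi-MHz linewidths typical of simple diode lasers.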

Creating such a stable, robust, and accurately defined optical system out of discrete components is very hard and expensive. The solution SiLC offers integrates all the needed optical functionality into a single silicon chip, using the same semiconductor manufacturing processes used to make electronic ICs. The approach that allowed very complex electronic circuits to be integrated into small silicon chips, powering household consumer products at very low cost, can now be deployed to build highly complex optical circuits and do the same for photonics applications.

SiLC's proprietary approach offers the manufacturing control and accuracy these circuits need to achieve superior performance and the functionality required for a robust, cost-effective, high-quality product. SiLC's extremely low-noise, low-loss, polarization-independent technology enables very high signal-to-noise ratios, so even lower optical powers can be used. This keeps SiLC's solutions eye safe for detecting faraway objects even when humans are very close to the sensor. The same performance also enables extreme accuracy: measuring an object a few centimeters in dimension at 300 m is not a problem, and the technology can achieve precision in the millimeter-to-micrometer range for biometric, industrial, or healthcare applications.

Given the small size, power and cost of these solutions, they can also be widely deployed in drones and robotics applications. The technology only needs milliwatts of output optical power, which makes the overall solution very power-efficient. This can be very important for battery operated machines.



The vision systems of the future will see in more than two dimensions. Depth is a straightforward third dimension; velocity is a very useful fourth. But this is only the beginning. This technology can be used to sense much more information about the world around us, helping to bring to reality not only autonomous cars, trucks, and drones, but also precision industrial robotics and seamless biometric authentication. This in turn will enable far smarter, safer systems and machines, and far more efficient human societies.


About the author

Mehdi Asghari is the Founder and CEO of SiLC Technologies, Inc. An accomplished serial entrepreneur with 20 years of experience in the silicon photonics sector, Mehdi founded SiLC as his third silicon photonics company, following the IPO of Bookham, Inc. and the commercial sale of Kotura, Inc. He is an EE graduate of Cambridge University in the UK, holds a Master's in Optoelectronics from Heriot-Watt and St Andrews Universities, and earned a Ph.D. in III/V integrated photonics from the University of Bath in the UK. Mehdi has authored and co-authored over 150 journal and conference publications and holds more than 40 active patents in the fields of silicon photonics and optoelectronics.