Mar 08, 2023

Lidar Business: Beam Wars

Back in 2007, the world was introduced to the now legendary Velodyne HDL-64E. Quite distinctive in its size and general bulkiness, it was soon nicknamed the "KFC bucket". Cars popped up all over the place with this spinning unit on their roofs: bulky, but perfectly suited to the early days of Autonomous Vehicle (AV) research.

[Image: Lexus car with Google sticker]

Several years later, Velodyne released a number of more practical units. First came the Puck, a 16-beam miniaturized Lidar unit that really kick-started the Lidar revolution. It was followed soon after by its bigger brother, the VLP-32, a 32-beam version that would become the gold standard for both automotive and non-automotive use for many years.

[Image: Velodyne LiDAR unit]

As word spread about these new, somewhat affordable Lidar units, the great Lidar hype began. Companies popped up everywhere and had to find ways to break into the market. One of the things that kept Lidar from getting much attention outside of industry insiders was its relatively low resolution compared to cameras.


For example, the image below will not impress anyone.

[Image: LiDAR tracking graphic]

But, if you just keep adding more and more points, it starts to look something like this:

[Image: LiDAR tracking beams]

Now, this grabs people's attention at shows like CES!


More and more beams were packed into smaller and smaller units. 32 beams became 64 beams, which became 128 beams! Single-return became double-return became triple-return. More companies joined in! The somewhat practical network output of the 'gold standard' Puck went from a reasonable 300,000 points per second (pps) to a whopping 5.2M pps. There were rumors of a 256-beam Lidar being developed, and I even heard someone speak of a 1024-beam unit being considered.


The Beam Wars were in full swing.


And here the theoretical world of the lab and the practical world once again collide. Instead of thinking about what was best for customers, Lidar companies looked to one-up each other by adding more and more resolution to their units, with no evidence that this would actually improve the end product. What we ended up with is a number of Lidar sensors with bad range noise, severe electrical and mechanical vibration, heat management problems and reliability issues... but they sure looked pretty when they worked!


Just because the images look prettier, does that actually make the end product better? A computer sees (processes) the world very differently from a human, and it does not care how pretty an image looks. It looks for patterns and requires accurate data.


The case against resolution


If you wonder why more resolution is not always a good thing, you only have to ask yourself one simple question:


"If more resolution is always better, why isn't every security camera in the world an 8K resolution camera?"


The answer is quite simple: all that data has to go somewhere. It really is that simple. The camera industry understands this because it has been around for decades and has learned from its customers. The most common security camera sold is a 720p or 1080p model, not 2K/4K/8K.


The average bitrate for a common 1080p camera at 30 fps is around 2.5 Mbps. Compare this to a standard 32-beam Lidar unit (640 Kpps), whose stream can be 25 Mbps at only 10 fps (if optimized). That is ten times higher than the camera. A 128-beam Lidar (~5.2 Mpps) produces nearly 100 times more data per sensor than a camera (254 Mbps vs 2.5 Mbps). This puts enormous pressure on the network infrastructure as well as the processing systems.
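To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The bytes-per-point figure is my own assumption (roughly what a Velodyne-style packet works out to per return, including overhead); the exact number varies by sensor and packet format.

```python
# Rough bandwidth comparison: Lidar point rate vs. a compressed camera stream.
# Assumption: ~6 bytes on the wire per Lidar return (range, intensity, plus a
# share of azimuth/timestamp/header overhead) -- adjust for your sensor.

BYTES_PER_POINT = 6  # assumed, not a published spec

def lidar_mbps(points_per_second: float) -> float:
    """Approximate network load of a raw Lidar stream in Mbps."""
    return points_per_second * BYTES_PER_POINT * 8 / 1e6

camera_mbps = 2.5                       # typical compressed 1080p30 stream
puck32_mbps = lidar_mbps(640_000)       # 32-beam unit, ~640 Kpps
beam128_mbps = lidar_mbps(5_200_000)    # 128-beam unit, ~5.2 Mpps

print(f"32-beam Lidar : {puck32_mbps:6.1f} Mbps (~{puck32_mbps / camera_mbps:.0f}x camera)")
print(f"128-beam Lidar: {beam128_mbps:6.1f} Mbps (~{beam128_mbps / camera_mbps:.0f}x camera)")
```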


Cameras have sophisticated chip-driven compression algorithms. In a best-case scenario, these can reduce the data rate by up to 2000:1, which is actually quite common for fixed security cameras because the scene is static and compression algorithms love static backgrounds. These compression algorithms and mechanisms simply do not exist (yet) for Lidar, so the network bandwidth needed to transport Lidar data remains many times larger than for camera data.


As I mentioned in previous articles, this is a lab vs. real-world issue. In a lab environment, when you are developing 3D perception software in a small space with a powerful development desktop, more resolution will give you better results most of the time.


But now we're entering my world (speaking for customers): the practical world. Take security, for example. A medium-sized high-security facility may require 50 sensors. 50 Lidar units at 254 Mbps equals 12.7 Gbps of data that needs to be processed every second. That fancy fiber-ring system you installed for your cameras, the one you thought had plenty of headroom? Sorry, you can't use it any more: it typically carries only 1 Gbps. Upgrading all those modems is expensive. Even your existing 10G switches are no longer going to be enough. Your servers become a bottleneck because they only have four Ethernet ports, and you need two for your client network, which leaves only two to push all the sensor data into the server. You need larger CPUs (and possibly GPUs) to parse and process all that data, most of it completely useless because half the beams are pointing up or in the wrong direction.
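A rough sizing check is all it takes to see the problem; the link capacities below are just the common tiers mentioned above, and the 80% utilization cap is a generic rule of thumb, not a hard spec.

```python
# Site-level sizing check for the 50-sensor example above.

SENSORS = 50
PER_SENSOR_MBPS = 254  # raw 128-beam stream from the earlier comparison
LINKS_GBPS = {"1G fiber ring": 1, "10G switch uplink": 10, "25G uplink": 25}

aggregate_gbps = SENSORS * PER_SENSOR_MBPS / 1000
print(f"Aggregate Lidar traffic: {aggregate_gbps:.1f} Gbps")

for name, capacity in LINKS_GBPS.items():
    fits = aggregate_gbps < capacity * 0.8  # leave ~20% headroom
    print(f"  {name} ({capacity} Gbps): {'fits' if fits else 'does NOT fit'}")
```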


What's becoming even more important is that (non-security) sites are moving more and more towards wireless solutions. Take intersections again as an example: the ideal solution is to transmit the raw sensor data to a central Road-Side Unit (RSU) for processing. At 254 Mbps, you are no longer able to reliably use 2.4 GHz or even the most advanced 5 GHz WiFi standards.


I have worked with companies on point-to-point WiFi for several years, and 32 beams is about the upper limit of what a wireless system can reliably handle. It would not stand a chance with 128 beams.
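The same arithmetic applies per sensor on a wireless hop. A minimal sketch, with usable-throughput figures that are my own rough assumptions for real-world point-to-point links rather than measured numbers:

```python
# Can a raw Lidar stream fit over a point-to-point wireless link?
# Usable throughput values are rough real-world assumptions, not lab maxima.

USABLE_WIRELESS_MBPS = {"2.4 GHz link": 30, "5 GHz link": 150}
LIDAR_STREAMS_MBPS = {"32-beam Lidar": 25, "128-beam Lidar": 254}

for link, capacity in USABLE_WIRELESS_MBPS.items():
    for sensor, rate in LIDAR_STREAMS_MBPS.items():
        if rate > capacity:
            verdict = "does not fit"
        elif rate > 0.7 * capacity:
            verdict = "marginal"
        else:
            verdict = "fits"
        print(f"{sensor} over {link}: {verdict} ({rate} vs ~{capacity} Mbps usable)")
```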


The saddest part about all of this? It is completely unnecessary!


I know from practical experience that I can take a 128-beam Lidar, disable 64 beams (at least) in my recommended non-uniform configuration (not the skip-a-beam configuration), and see no discernible degradation in object detection and tracking performance. It might actually improve! Sadly, most Lidar units won't give you this option, or worse, will give you the option but then fill the data stream with zeros and still fire the lasers anyway.
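If the sensor will not drop beams for you, you can at least thin the stream at the edge before it hits the network. Below is a minimal sketch assuming the driver tags each point with a ring (beam) index; the particular non-uniform subset kept here is purely illustrative, not my actual recommended pattern.

```python
import numpy as np

# Illustrative non-uniform ring subset for a 128-beam unit: keep every ring in
# a band around the horizon, and only every fourth ring above and below it.
# Keeps ~62 of 128 rings -- roughly the "disable 64 beams" scenario above.
KEEP_RINGS = np.array(
    list(range(0, 44, 4)) +     # upper rings, 1 in 4
    list(range(44, 84)) +       # horizon band, keep all
    list(range(84, 128, 4))     # lower rings, 1 in 4
)

def thin_cloud(points: np.ndarray, rings: np.ndarray) -> np.ndarray:
    """Keep only points whose ring index is in KEEP_RINGS.

    points: (N, 4) array of x, y, z, intensity
    rings:  (N,)   array of per-point ring indices
    """
    return points[np.isin(rings, KEEP_RINGS)]
```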

In fact, I can guarantee you that I can get better performance with 4x MQ-8 sensors (24 beams total) than with a single 128-beam sensor. It won't even be close in frame-by-frame accuracy.

Most importantly, I can save thousands of dollars in network infrastructure and computing power this way. 


The other significant problem that I, and others, have been dealing with is that unrealistically high resolutions make for very pretty pictures and tend to 'wow' people who have little understanding of how perception systems actually function. This leads to the "we want this unit!" phenomenon, where a specific Lidar sensor is selected by the end customer before the system is even designed. It forces the designers and installers to work with sub-optimal units, which leads to the following scenario:


In Proof-of-Concept situations, the unit will often perform quite nicely and pass the PoC success criteria. Everyone is happy. However, when deployments are designed and priced, the cost of the unit(s), and especially the added cost of the network infrastructure needed to move all this excessive data around, drives the cost of the system through the roof. Sticker shock sets in, and the person who has to sign the check is often not the same person who said "we want this unit!" earlier. The designers and installers are then forced to redesign their system, and when they finally manage to convince the end customer that the selected unit was not the right one, the whole PoC process has to start all over again.


I have seen a number of projects fail to get beyond a very successful PoC stage for this very reason. Business-wise, it is this kind of short-term thinking that is killing long-term revenue and seriously hurting the industry.


A plea to the Lidar industry


Lidar companies, please: stop peddling the highest-resolution Lidar for every project and shooting yourselves in the foot long term. Leave it to the experts to decide which unit is best.

[Image: man with beanie cap]

My recommendation is that Lidar companies worry less about adding more resolution to their units and instead shift their focus to making the units behave more like cameras: built-in hardware-accelerated compression, and an optimized payload that minimizes the bit count and the number of empty fields being transmitted and moves any non-critical information to lower-frequency packets.
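As a rough illustration of what "minimizing the bit count" could look like, here is a sketch that packs one return into a fixed 4-byte field layout; the field widths are assumptions chosen for the example, not any vendor's actual packet format.

```python
import struct

# Hypothetical compact per-return encoding (illustrative field widths):
#   range     : uint16, 4 mm ticks  -> 2 bytes
#   intensity : uint8               -> 1 byte
#   ring      : uint8               -> 1 byte
# Azimuth, timestamps and sensor status would travel once per packet, or in
# separate lower-frequency packets, rather than once per point.

def pack_return(range_m: float, intensity: int, ring: int) -> bytes:
    """Pack a single return into 4 bytes."""
    range_ticks = min(int(range_m / 0.004), 0xFFFF)  # cap at ~262 m
    return struct.pack("<HBB", range_ticks, intensity, ring)

payload = b"".join(pack_return(23.456, 87, r) for r in range(32))
print(len(payload), "bytes for one firing of 32 returns")  # 128 bytes
```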


Work with the experts to define optimal beam patterns. Learn and understand field-of-view requirements. Learn what the real-world bandwidth limitations are. Work WITH the customers to define a better and more efficient API. All of this will reduce system cost and make Lidar a much more viable option against cameras and radar.


Think about the system, not just the sensor.


Some positive developments


There is already some good news in this regard. While it does not apply to spinning sensors, MOEMS and galvo-mirror based Lidar units have started to offer advanced dynamic-resolution capabilities.


Last year I collaborated with two companies to define new scan patterns and optimized fields-of-view. Some of this was demonstrated at CES earlier this year.


I hope to be able to do a deep dive into this technology soon, because I believe that, once it is ready for prime time, it will be a game changer for the industry and will shake up who the main players in the market are.


Stay tuned.
