In 1886, Karl Benz unveiled the first automobile in history. In 1908, Henry Ford sold the first Model T. Today, the $9 trillion global automotive industry sells nearly 100 million vehicles a year. Despite the incredible mechanical and manufacturing breakthroughs we’ve witnessed in the past century, one thing has remained constant: the driver. But this is about to change.

In light of the countless casualties caused by distracted and careless drivers, technology companies like Tesla, Google, Uber, and Apple have pioneered new technologies that are forcing the entire automotive industry to reconsider what is possible. Self-driving, or autonomous, vehicles purport not only to solve the glaring problem of human error and car accidents, but also to reimagine the driving experience as relaxing and social rather than vigilant and dangerous.

The obvious question is: how will we get there? How will the industry evolve from current, manually driven vehicles to a fully autonomous system? Below, the National Highway Traffic Safety Administration (NHTSA) outlines five levels of autonomy necessary to turn this promise into a reality.



In the last year, we’ve witnessed a broad transition from level 2 to level 3 autonomy, with Tesla launching autopilot capabilities, GM acquiring Cruise Automation for $1b, Waymo spinning out of Google’s self-driving car project, and Uber purchasing Otto for $680m.

The promise of a fully autonomous vehicle, however, requires the vehicle to perform better than a human driver. Human drivers undergo a three-part process that allows them to navigate the roads.

1) Perception of the environment

2) Decision making based on the perceived surroundings

3) Timely execution of each decision

To illustrate, imagine a driver who sees an obstacle on the road, decides to avoid it by swerving right, and physically turns the steering wheel right, re-centering it once the obstacle is cleared. A fully autonomous vehicle must go through this same process. Perception is handled largely by sensors such as cameras, radar, and lidar; decision making is handled by algorithms and processing; and the manipulation of the vehicle based on those decisions is carried out by actuators. The following paragraphs outline the perception function and the technologies being developed in the space.
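The three-part process above can be sketched as a simple control loop. This is a minimal, hypothetical illustration, not how any production stack is written: the `Obstacle` type, the threshold values, and the stubbed `perceive` and `actuate` functions are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    lateral_offset_m: float  # offset from the lane center, meters (hypothetical convention)
    distance_m: float        # distance ahead of the vehicle, meters

def perceive(sensor_frame) -> list[Obstacle]:
    """Perception: fuse camera/radar/lidar readings into a list of obstacles.
    Stubbed out here -- real systems run detection and tracking models."""
    return sensor_frame  # assume the frame already arrives as a list of Obstacle

def decide(obstacles: list[Obstacle]) -> float:
    """Decision: return a steering command in degrees.
    Swerve right (positive angle) if an obstacle blocks the lane ahead."""
    for obs in obstacles:
        if abs(obs.lateral_offset_m) < 1.0 and obs.distance_m < 30.0:
            return 15.0
    return 0.0  # re-center the wheel once the path is clear

def actuate(steering_angle_deg: float) -> None:
    """Execution: command the steering actuator (stubbed as a print)."""
    print(f"steering -> {steering_angle_deg:+.1f} deg")

# One iteration of the loop: an obstacle 20 m ahead in our lane
frame = [Obstacle(lateral_offset_m=0.2, distance_m=20.0)]
actuate(decide(perceive(frame)))
```

Even in this toy form, the structure makes the division of labor clear: sensors feed `perceive`, processing lives in `decide`, and actuators sit behind `actuate`.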

Perception systems can further be broken down into two categories: exteroceptive sensors, which measure the external environment, and proprioceptive sensors, which measure the vehicle’s own internal state.

The exteroceptive sensors are of particular importance for autonomous capabilities, as they are tasked with dealing with the external environment. It is their job to spot all important things on or near the road like other vehicles, pedestrians, debris and, in some cases, road features like signs and lane markings. In addition to detecting these obstacles, these systems need to identify the obstacles, measure their speed and direction, and predict where they are going. Below is an image outlining the key components of the exteroceptive sensor family:
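The tracking and prediction step described above can be made concrete with a minimal sketch. Assuming an exteroceptive sensor reports an obstacle’s position at fixed intervals, velocity can be estimated from two successive fixes and extrapolated forward; the function names and the constant-velocity assumption are simplifications for illustration (production stacks use far more robust estimators, such as Kalman filters).

```python
def estimate_velocity(p_prev, p_curr, dt):
    """Estimate (vx, vy) in m/s from two (x, y) position fixes dt seconds apart."""
    return ((p_curr[0] - p_prev[0]) / dt, (p_curr[1] - p_prev[1]) / dt)

def predict_position(p_curr, v, horizon):
    """Constant-velocity prediction: where the obstacle will be in `horizon` seconds."""
    return (p_curr[0] + v[0] * horizon, p_curr[1] + v[1] * horizon)

# A pedestrian seen at (10, 0), then at (10, 1) one second later,
# is moving at 1 m/s across the road ...
v = estimate_velocity((10.0, 0.0), (10.0, 1.0), dt=1.0)   # (0.0, 1.0)
# ... and is predicted to reach (10, 3) two seconds from now.
print(predict_position((10.0, 1.0), v, horizon=2.0))       # (10.0, 3.0)
```

This is exactly the "measure their speed and direction, and predict where they are going" step, reduced to two lines of arithmetic per obstacle.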



The table below outlines the three most significant sensors (cameras, radar, and lidar), discussing each one’s pros and cons as well as its main function.

While the industry has yet to reach consensus on which sensor will capture the lion’s share of the perception function in autonomous vehicles, many industry experts believe all of the above sensors will play a role. We see an investment opportunity both in the development of new, more capable sensors and in software that improves the perception of each sensor.

In January 2017, we were excited to announce our first investment in the autonomous enablement category – Arbe Robotics. The company, led by a serial entrepreneur and a team of radar and signal processing experts, develops proprietary radars designed for autonomous vehicles, with 4D mapping and sensor fusion capabilities.