One of the chief obstacles in the development of autonomous vehicles is the inability of navigation systems that rely on visible light to handle foggy driving conditions.

To help make this crucial step in the development of self-driving cars more attainable, researchers at MIT have developed a system that can produce images of objects shrouded in fog too thick for human vision to penetrate. The system can also gauge the objects' distance.

MIT tested the system using a small tank of water with the vibrating motor from a humidifier immersed in it. The fog was so dense that human vision could penetrate only 36 centimeters, while the MIT system was able to resolve images of objects and gauge their depth at a range of 57 centimeters.

MIT concedes that 57 centimeters is not a great distance, but the fog created for the study was far denser than any fog a human driver would encounter on the road. More important, the researchers say, is that the system outperformed human vision, whereas other imaging systems performed far worse.

"We're dealing with realistic fog, which is dense, dynamic, and heterogeneous,” says Guy Satat, a graduate student at MIT involved in the research. “It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios."

How It Works

The MIT system was able to resolve images of objects and gauge their depth at a range of 57 centimeters. Source: MIT

The system uses a time-of-flight camera, which fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return.

On clear days, the return time faithfully indicates the distances of the objects that reflected the light. Fog, however, scatters the light, causing it to bounce around in random ways. Most of the light that reaches the camera's sensor in foggy weather will have been reflected by airborne water droplets, not by the kinds of objects that autonomous vehicles need to avoid. And even the light that does reflect from potential obstacles arrives at different times, having been deflected by water droplets on both the way out and the way back.
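In clear air, the time-of-flight relation the article describes is simply distance = (speed of light × round-trip time) / 2, since the pulse travels out and back. A minimal sketch (the nanosecond figure is an illustrative assumption, not from the article):

```python
# Basic time-of-flight ranging in clear air: the laser pulse travels to the
# object and back, so the one-way distance is half the round trip.

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a reflector, given the measured round-trip flight time."""
    return C * round_trip_s / 2.0

# A reflection returning after ~3.8 nanoseconds corresponds to ~0.57 m,
# roughly the range reported in the MIT experiment.
print(tof_distance_m(3.8e-9))
```

Fog breaks this simple picture because most detected photons took scattered, indirect paths, which is what the statistical filtering below addresses.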

The MIT system gets around the problem through statistics. Light penetrates less deeply into a thick fog than into a light fog, but the researchers were able to show that no matter how thick the fog, the arrival times of the reflected light adhere to a statistical pattern known as a gamma distribution.

Unlike Gaussian distributions, which always yield a bell curve, gamma distributions can be asymmetrical and take on a wider variety of shapes. The MIT system estimates the values of the distribution's two parameters on the fly and uses the resulting distributions to filter fog reflections out of the light signal that reaches the time-of-flight camera's sensor.
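A gamma distribution's two parameters (shape and scale) can be estimated quickly from a sample's mean and variance, since the mean equals shape × scale and the variance equals shape × scale². The article does not specify the estimator MIT uses; this is a minimal method-of-moments sketch on simulated arrival times, with the true parameter values chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated photon arrival times scattered by fog; the shape/scale values
# here are assumptions for illustration, not measurements.
arrivals = rng.gamma(shape=2.0, scale=1.5, size=100_000)

# Method of moments: mean = k * theta, variance = k * theta**2
mean, var = arrivals.mean(), arrivals.var()
theta_hat = var / mean    # scale estimate
k_hat = mean / theta_hat  # shape estimate

print(k_hat, theta_hat)   # close to the true values 2.0 and 1.5
```

Because this fit needs only a running mean and variance, it is cheap enough to redo "on the fly" as the fog changes.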

The system calculates a separate gamma distribution for each of the 1,024 pixels in the sensor. This lets it handle the variations in fog density that hampered earlier systems, even in circumstances where each pixel sees a different type of fog.

The camera counts the number of light particles, or photons, that reach it every 56 picoseconds, and the system uses those raw counts to produce a histogram: a bar graph in which the height of each bar indicates the photon count for that interval. It then finds the gamma distribution that best fits the shape of the histogram and subtracts the associated photon counts from the measured totals. What remains are slight spikes at the distances that correlate with physical objects.
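The steps above can be sketched end to end on synthetic data: bin photon arrival times into 56-picosecond intervals, fit a gamma distribution to the fog background, subtract the expected fog counts, and read off the residual spike. All numeric values below (fog parameters, object timing, photon counts) are invented for illustration, and the moment-based fit is an assumption about the method:

```python
from math import gamma as gamma_fn

import numpy as np

rng = np.random.default_rng(1)
BIN_PS = 56    # histogram bin width from the article, in picoseconds
N_BINS = 200

# Synthetic photon arrival times: fog-scattered photons follow a gamma
# distribution, while a small fraction bounce off a real object and
# cluster near one return time (~3800 ps here).
fog_times = rng.gamma(shape=3.0, scale=800.0, size=50_000)     # picoseconds
object_times = rng.normal(loc=3800.0, scale=40.0, size=2_000)  # faint object
times = np.concatenate([fog_times, object_times])

counts, edges = np.histogram(times, bins=N_BINS, range=(0, N_BINS * BIN_PS))

# Fit the fog background with a gamma distribution via method of moments.
mean, var = times.mean(), times.var()
theta = var / mean
k = mean / theta

# Expected fog counts per bin from the fitted gamma density, then subtract.
centers = (edges[:-1] + edges[1:]) / 2
pdf = centers ** (k - 1) * np.exp(-centers / theta) / (gamma_fn(k) * theta ** k)
residual = counts - len(times) * pdf * BIN_PS

# The largest residual spike marks the object's arrival time (and hence depth).
peak_bin = int(np.argmax(residual))
peak_time_ps = (edges[peak_bin] + edges[peak_bin + 1]) / 2
print(peak_time_ps)  # near 3800 ps, the simulated object's return time
```

The real system does this per pixel, so each of the 1,024 sensor pixels gets its own background fit.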

MIT will present its findings at the upcoming International Conference on Computational Photography in May.
