Autonomous vehicles relying on light-based image sensors often struggle to see through blinding
conditions, such as fog. But MIT researchers have developed a
sub-terahertz-radiation receiving system that could help steer driverless cars
when traditional methods fail.
Sub-terahertz wavelengths, which are between microwave and infrared radiation on the
electromagnetic spectrum, can be detected through fog and dust clouds with
ease, whereas the infrared-based LiDAR imaging systems used in autonomous
vehicles struggle. To detect objects, a sub-terahertz imaging system sends an
initial signal through a transmitter; a receiver then measures the absorption
and reflection of the rebounding sub-terahertz wavelengths. That sends a signal
to a processor that recreates an image of the object.
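As a rough sketch of that measurement chain (the grid of directions and the power values below are purely hypothetical, not data from the researchers’ system), a processor could form a crude image by recording how much of the transmitted power comes back from each direction the beam is pointed:

```python
import numpy as np

# Hypothetical reflected power measured by the receiver as the beam is steered
# across a 4 x 4 grid of directions (arbitrary units); the transmitter sent
# unit power in every direction.
tx_power = 1.0
rx_power = np.array([
    [0.02, 0.03, 0.02, 0.02],
    [0.02, 0.35, 0.40, 0.03],
    [0.03, 0.38, 0.42, 0.02],
    [0.02, 0.02, 0.03, 0.02],
])

# The "image" is simply the fraction of power reflected back per direction;
# strong returns mark where an object sits, weak returns are background.
image = rx_power / tx_power
print((image > 0.1).astype(int))   # crude object mask
```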
But implementing sub-terahertz sensors into driverless cars is challenging. Sensitive, accurate
object-recognition requires a strong output baseband signal from receiver to
processor. Traditional systems, made of discrete components that produce such
signals, are large and expensive. Smaller, on-chip sensor arrays exist, but
they produce weak signals.
In a paper published
online on Feb. 8 by the IEEE Journal of Solid-State Circuits, the researchers
describe a two-dimensional, sub-terahertz receiving array on a chip that’s
orders of magnitude more sensitive, meaning it can better capture and interpret
sub-terahertz wavelengths in the presence of a lot of signal noise.
To achieve this, they
implemented a scheme of independent signal-mixing pixels—called
“heterodyne detectors”—that are usually very difficult to densely
integrate into chips. The researchers drastically shrank the size of the
heterodyne detectors so that many of them can fit into a chip. The trick was to
create a compact, multipurpose component that can simultaneously down-mix input
signals, synchronize the pixel array, and produce strong output baseband signals.
The researchers built a
prototype, which has a 32-pixel array integrated on a 1.2-square-millimeter
device. The pixels are approximately 4,300 times more sensitive than the pixels
in today’s best on-chip sub-terahertz array sensors. With a little more
development, the chip could potentially be used in driverless cars and autonomous robots.
“A big motivation
for this work is having better ‘electric eyes’ for autonomous vehicles and
drones,” says co-author Ruonan Han, an associate professor of electrical
engineering and computer science, and director of the Terahertz Integrated
Electronics Group in the MIT Microsystems Technology Laboratories (MTL).
“Our low-cost, on-chip sub-terahertz sensors will play a complementary
role to LiDAR for when the environment is rough.”
Joining Han on the
paper are first author Zhi Hu and co-author Cheng Wang, both Ph.D. students in
the Department of Electrical Engineering and Computer Science working in
Han’s research group.
The key to the design
is what the researchers call “decentralization.” In this design, a
single pixel—called a “heterodyne” pixel—generates the frequency beat
(the frequency difference between two incoming sub-terahertz signals) and the
“local oscillation,” an electrical signal used to shift the frequency
of the incoming signal. This “down-mixing” process produces a signal
in the megahertz range that can be easily interpreted by a baseband processor.
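To make the down-mixing step concrete, here is a minimal numerical sketch; the 240 GHz and 239.99 GHz tones, the sample rate, and the crude filter are illustrative choices rather than values from the paper. Multiplying the incoming signal by the local oscillation produces a beat at their difference frequency, and a low-pass filter keeps only that megahertz-range component for the baseband processor.

```python
import numpy as np

fs = 2e12                      # simulation sample rate (hypothetical)
t = np.arange(0, 1e-6, 1 / fs)
f_rf = 240.00e9                # incoming sub-terahertz tone (illustrative)
f_lo = 239.99e9                # pixel's local oscillation tone (illustrative)

rf = np.cos(2 * np.pi * f_rf * t)
lo = np.cos(2 * np.pi * f_lo * t)

# Mixing creates a sum tone (~480 GHz) and a difference tone (10 MHz);
# a crude moving-average low-pass filter suppresses the sum tone.
mixed = rf * lo
window = int(fs / 50e9)        # ~40-sample average
baseband = np.convolve(mixed, np.ones(window) / window, mode="same")

# The surviving beat sits in the megahertz range, easy for a baseband processor.
spectrum = np.abs(np.fft.rfft(baseband))
freqs = np.fft.rfftfreq(len(baseband), 1 / fs)
print(f"beat frequency ~ {freqs[spectrum.argmax()] / 1e6:.0f} MHz")   # ~10 MHz
```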
The output signal can
be used to calculate the distance of objects, similar to how LiDAR calculates
the time it takes a laser to hit an object and rebound. In addition, combining
the output signals of an array of pixels, and steering the pixels in a certain
direction, can enable high-resolution images of a scene. This allows for not
only the detection but also the recognition of objects, which is critical in
autonomous vehicles and robots.
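The article compares this to LiDAR-style time-of-flight ranging without giving the math, but the underlying relation is simple: distance is the round-trip travel time multiplied by the speed of light, divided by two. A minimal sketch (the 0.4-microsecond delay is just an example):

```python
C = 299_792_458.0   # speed of light in m/s

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance to a reflector given the measured round-trip delay."""
    return C * round_trip_delay_s / 2.0

# Example: a 0.4-microsecond round trip corresponds to roughly 60 meters.
print(f"{range_from_delay(0.4e-6):.1f} m")
```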
Heterodyne pixel arrays
work only when the local oscillation signals from all pixels are synchronized,
meaning that a signal-synchronizing technique is needed. Centralized designs
include a single hub that distributes local oscillation signals to all pixels.
These designs are
typically used in lower-frequency receivers, but can cause issues at
sub-terahertz frequency bands, where generating a high-power signal from a
single hub is notoriously difficult. As the array scales up, the power shared
by each pixel decreases, reducing the output baseband signal strength, which is
highly dependent on the power of the local oscillation signal. As a result, a
signal generated by each pixel can be very weak, leading to low sensitivity.
Some on-chip sensors have started using this design, but are limited to eight pixels.
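The scaling problem can be seen with a back-of-the-envelope calculation; the 0 dBm hub power and the array sizes below are illustrative only. If a central hub’s local-oscillation power is split evenly, each pixel’s share drops by 10·log10(N) decibels as the array grows, and the output baseband signal weakens with it:

```python
import math

def lo_power_per_pixel_dbm(hub_power_dbm: float, n_pixels: int) -> float:
    """Ideal even split of a central hub's LO power across n_pixels,
    ignoring any extra loss in the distribution network."""
    return hub_power_dbm - 10 * math.log10(n_pixels)

for n in (8, 32, 128):
    print(f"{n:>3} pixels -> {lo_power_per_pixel_dbm(0.0, n):.1f} dBm per pixel")
```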
The researchers’ decentralized design tackles this scale-sensitivity trade-off. Each pixel
generates its own local oscillation signal, used for receiving and down-mixing
the incoming signal. In addition, an integrated coupler synchronizes its local
oscillation signal with that of its neighbor. This gives each pixel more output
power, since the local oscillation signal does not flow from a global hub.
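A toy phase model gives a feel for this neighbor-to-neighbor synchronization; it is purely an illustration of mutual coupling, not the coupler circuit described in the paper. Each of 32 oscillators repeatedly nudges its phase toward its immediate neighbors until the whole chain runs in lockstep:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 32
phase = rng.uniform(0, 2 * np.pi, n_pixels)   # initially unsynchronized LO phases
coupling = 0.3                                # coupling strength (arbitrary units)

def order(p):
    """1.0 when all phases agree, near 0 when they are scattered."""
    return np.abs(np.mean(np.exp(1j * p)))

print(f"before coupling: order = {order(phase):.2f}")

for _ in range(5000):
    nudge = np.zeros(n_pixels)
    nudge[:-1] += np.sin(phase[1:] - phase[:-1])   # pull toward right neighbor
    nudge[1:] += np.sin(phase[:-1] - phase[1:])    # pull toward left neighbor
    phase += coupling * nudge

print(f"after coupling:  order = {order(phase):.2f}")   # approaches 1.0
```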
A good analogy for the
new decentralized design is an irrigation system, Han says. A traditional
irrigation system has one pump that directs a powerful stream of water through
a pipeline network that distributes water to many sprinkler sites. Each
sprinkler spits out water that has a much weaker flow than the initial flow from
the pump. If you want the sprinklers to pulse at the exact same rate, that
would require another control system.
The researchers’ design, on the other hand, gives each site its own water pump, eliminating the
need for connecting pipelines, and gives each sprinkler its own powerful water
output. Each sprinkler also communicates with its neighbor to synchronize their
pulse rates. “With our design, there’s essentially no boundary for
scalability,” Han says. “You can have as many sites as you want, and
each site still pumps out the same amount of water … and all pumps pulse together.”
The new architecture,
however, potentially makes the footprint of each pixel much larger, which poses
a great challenge for large-scale, high-density integration into an array.
In their design, the researchers combined various functions of four
traditionally separate components—antenna, downmixer, oscillator, and
coupler—into a single “multitasking” component given to each pixel.
This allows for a decentralized design of 32 pixels.
“We designed a
multifunctional component for a [decentralized] design on a chip and combine a
few discrete structures to shrink the size of each pixel,” Hu says.
“Even though each pixel performs complicated operations, it keeps its
compactness, so we can still have a large-scale dense array.”
Guided by frequencies
In order for the system
to gauge an object’s distance, the frequency of the local oscillation signal
must be stable.
To that end, the
researchers incorporated into their chip a component called a phase-locked
loop that locks the sub-terahertz frequency of all 32 local oscillation
signals to a stable, low-frequency reference. Because the pixels are coupled,
their local oscillation signals all share identical, high-stability phase and
frequency. This ensures that meaningful information can be extracted from the
output baseband signals. This entire architecture minimizes signal loss and maximizes control.
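A textbook discrete-time model conveys the idea of a phase-locked loop; the reference frequency, divide ratio, and loop gains below are toy values, and this is not the chip’s actual circuit. The loop compares the divided-down oscillator phase against the low-frequency reference and keeps adjusting the oscillator until the phase error settles to zero:

```python
import numpy as np

fs = 1e6                  # simulation rate (toy numbers throughout)
f_ref = 10e3              # stable low-frequency reference: 10 kHz
N = 24                    # divide ratio: the oscillator should lock at N * f_ref
f0 = 24.4 * f_ref         # free-running oscillator frequency, slightly off target
kp, ki = 0.2, 0.01        # proportional / integral gains of the loop filter

ref_phase = osc_phase = integrator = 0.0
f_osc = f0
for _ in range(20000):
    ref_phase += 2 * np.pi * f_ref / fs
    osc_phase += 2 * np.pi * f_osc / fs
    # Phase detector: compare the reference against the divided-down oscillator
    err = np.angle(np.exp(1j * (ref_phase - osc_phase / N)))
    # Loop filter steers the oscillator until the phase error settles to zero
    integrator += ki * err
    f_osc = f0 + N * (kp * err + integrator) * fs / (2 * np.pi)

print(f"locked at {f_osc / 1e3:.2f} kHz (target N * f_ref = {N * f_ref / 1e3:.2f} kHz)")
```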
“In summary, we
achieve a coherent array, at the same time with very high local oscillation
power for each pixel, so each pixel achieves high sensitivity,” Hu says.