Did you know that the technology that powers more than a billion smartphone cameras worldwide was developed at Columbia?
Professor Shree Nayar's work has changed the way visual information is captured and used by both machines and humans. Nayar is the T. C. Chang Professor of Computer Science in Columbia’s School of Engineering and Applied Science.
“The real world is stunning in detail. The human eye captures an approximation of it, but there are animals that do even better than we do,” Nayar, who leads the Columbia Imaging and Vision Laboratory, said. “We wanted to explore whether we could design cameras that get closer to capturing all the details of the real visual world than traditional cameras, and better even than our eyes.” The answer, they found, was yes.
Nayar’s invention is called single-shot High Dynamic Range, or HDR, imaging.
Since the advent of photography, photographers have had to contend with the problem of under- and overexposure: parts of an image come out clear, while other areas are too dark or too light to see.
Nayar’s technology addresses this problem with an image sensor built from what are called “assorted pixels”: the pixels in any local area of the sensor are exposed differently to light. When a given pixel is over- or underexposed, an algorithm examines the pixels around it (often referred to as the pixels in its “neighborhood”) to estimate its true color and brightness. The end result is a clear, legible final image, rich in detail, that recovers information from both under- and overexposed areas.
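To make the idea concrete, here is a minimal Python sketch of how an image from such a spatially-varying-exposure sensor might be reconstructed. It is an illustration under simplifying assumptions, not the algorithm in Sony’s chips: the repeating 2×2 exposure mosaic, the function name reconstruct_sve, and the neighborhood-mean fill-in are all hypothetical.

```python
import numpy as np

def reconstruct_sve(raw, gains, low=0.05, high=0.95):
    """Toy reconstruction for a spatially-varying-exposure sensor.

    raw   : 2D array with values in [0, 1], captured through a
            repeating 2x2 mosaic of exposure gains
    gains : 2x2 array of relative exposures for the mosaic pattern
    """
    h, w = raw.shape
    # Divide out each pixel's exposure gain to estimate scene radiance.
    gain_map = np.tile(gains, (h // 2 + 1, w // 2 + 1))[:h, :w]
    radiance = raw / gain_map
    # A pixel is trustworthy if it is neither under- nor overexposed.
    valid = (raw > low) & (raw < high)
    out = np.where(valid, radiance, np.nan)
    # Fill untrustworthy pixels from valid neighbors. A simple mean over
    # the 3x3 neighborhood stands in for more careful interpolation.
    for y, x in zip(*np.where(~valid)):
        nb = out[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        vals = nb[~np.isnan(nb)]
        out[y, x] = vals.mean() if vals.size else radiance[y, x]
    return out
```

For instance, gains = np.array([[4.0, 2.0], [1.0, 0.5]]) would model a mosaic whose neighboring pixels span an 8:1 range of exposures, so at least one pixel in each 2×2 tile is likely to be well exposed across a wide range of scene brightnesses.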
The technology largely replaced an earlier digital approach that addressed under- and overexposure by taking a series of differently exposed images in quick succession and creating a composite from the best-exposed areas of each. The problem was the passage of time: a bird flew, someone shifted, a bike went by, and the composite came out blurred or riddled with ghost-like artifacts. Nayar’s “single-shot” technology avoids this by building the final image from information that is all captured simultaneously, in a single image.
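For comparison, a naive merge of such an exposure-bracketed burst might look like the sketch below; fuse_brackets and its parameters are invented for this illustration. Because each frame is captured at a different moment, motion between frames makes the per-frame radiance estimates disagree at the same pixel, and averaging them produces exactly the blur and ghosting described above.

```python
import numpy as np

def fuse_brackets(frames, exposures, low=0.05, high=0.95):
    """Naive merge of an exposure-bracketed burst (illustrative only).

    frames    : list of 2D arrays with values in [0, 1]
    exposures : relative exposure time of each frame
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposures):
        # Trust only pixels that are neither clipped dark nor clipped bright.
        w = ((img > low) & (img < high)).astype(float)
        num += w * (img / t)  # divide out exposure to estimate radiance
        den += w
    # Pixels valid in no frame fall back to zero; acceptable for a toy.
    return num / np.maximum(den, 1e-6)
```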
Nayar developed the technology in the late 1990s with the researcher Tomoo Mitsunaga, who was visiting his lab from Sony. Sony later commercialized it, incorporating it into the image-sensing chips it produces, which are now in well over a billion smartphones globally, including the iPhone and the Google Pixel. The technology is also used in tablets and security cameras.
More recently, Nayar and his team have generalized the idea of assorted pixels to design image sensors that produce images revealing richer color information at each pixel, and that can even determine what material the corresponding point in the scene is made of, such as plastic or metal. He expects these extensions of the assorted-pixels idea to make their way into consumer products in the coming years.
“As academics, it’s very hard for us to get our ideas out of the lab and into the hands of everyday users, and that's what I'm proud of,” Nayar said.
Single-shot HDR is an example of computational imaging, in which an optically coded image is captured and then computationally decoded to produce an image that is richer, more immersive, and even interactive. Nayar is a pioneer in the field and has also used this approach to develop 360-degree (omnidirectional) and depth (3D) cameras. His inventions are widely used today in fields ranging from factory automation and robotics to special effects and AR/VR.
In 2017, Popular Photography magazine published a profile of Nayar crediting him with “transforming the camera in your pocket.” For his contributions to imaging and computer vision, he was elected to the National Academy of Engineering in 2008 and the American Academy of Arts and Sciences in 2011. In 2023, he received the prestigious Okawa Prize for “the invention of innovative imaging techniques and their widespread use in digital photography and computer vision.”
Nayar is also highly acclaimed as an educator, having received the Columbia Great Teacher Award in 2006. He developed an experiential camera called “Bigshot” to inspire students in underserved communities to learn science and engineering concepts as well as the art of photography. Bigshot has been used by more than 100,000 children around the world. In 2021, he released his lecture series “First Principles of Computer Vision” on YouTube, where it has received millions of views from students around the globe.
Because computational cameras capture light in ways that our eyes cannot, they allow us to become explorers: “We can walk into worlds and see things we've never seen before,” Nayar said.
This story was originally published by Columbia University on October 20, 2025.