Rice Computational Imaging Lab

Our lab focuses on solving challenging problems in imaging and vision by co-designing sensors, optics, electronics, signal processing, and machine learning algorithms. This emerging area of research is called computational imaging or, more generally, computational sensing. Our group is largely application-agnostic and focuses on developing foundational theories, tools, techniques, and systems.

Research

Publications

  • Foveated Thermal Computational Imaging in the Wild Using All-Silicon Meta-Optics

    Optica, 2024

    We propose a computational foveated imaging system that leverages the ability of a meta-optical frontend to discriminate between different polarization states. A computational backend reconstructs both a large field-of-view video and a zoomed-in, high-resolution central field simultaneously. We demonstrate a first-of-its-kind prototype system that performs real-time thermal, foveated image and video capture in the wild at 28 frames per second.
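A toy sketch of the foveated-compositing idea: a wide, low-resolution channel and a high-resolution central "fovea" are merged into one frame. The function name, the nearest-neighbour upsampling, and the paste-in-place merge are illustrative assumptions; the paper's actual backend is a learned reconstruction, not a simple paste.

```python
import numpy as np

def composite_foveated(wide, fovea, scale):
    """Hypothetical foveated compositing: upsample a wide, low-resolution
    frame and place a high-resolution central crop (the 'fovea') at its
    center. Illustrates merging the two polarization-separated channels."""
    # Nearest-neighbour upsample of the wide-FOV frame.
    up = np.kron(wide, np.ones((scale, scale)))
    H, W = up.shape
    h, w = fovea.shape
    r0, c0 = (H - h) // 2, (W - w) // 2
    out = up.copy()
    out[r0:r0 + h, c0:c0 + w] = fovea   # high-resolution central field
    return out

wide = np.zeros((8, 8))    # low-resolution, large field of view
fovea = np.ones((8, 8))    # high-resolution central field
frame = composite_foveated(wide, fovea, scale=4)   # 32x32 composite
```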

  • Mesoscopic calcium imaging in a head-unrestrained male non-human primate using a lensless microscope

    Nature Communications, 2024

    Current systems for imaging calcium dynamics in the brains of non-human primates require the animal’s movement to be restricted. Here, the authors demonstrate a mesoscale calcium imaging device in a freely moving non-human primate which features a 20 mm^2 field of view.

  • NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media

    Science Advances, 2023

    Wavefront shaping (WS) is an advanced technique for imaging through scattering media. We approach it as a maximum likelihood estimation problem, using neural signal representations to estimate the scattering and recover the distorted object. NeuWS offers an unprecedented set of capabilities: it is non-invasive, guidestar-free, high-resolution, and wide-field, and requires no illumination control. We experimentally demonstrate diffraction-limited imaging of extended, nonsparse, and dynamic scenes through severe time-varying optical aberrations.

  • CoIR: Compressive Implicit Radar

    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023

    CoIR is an analysis-by-synthesis method that leverages the implicit bias of untrained convolutional decoders together with compressed sensing to perform high-accuracy radar imaging. We introduce a sparse array design that allows for a 5.5x reduction in the number of antenna elements compared to conventional MIMO array designs. We demonstrate our system’s improved imaging performance over standard mmWave radars and other competitive untrained methods on both simulated and experimental mmWave radar data.
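To illustrate the compressed-sensing side of such a system, here is a minimal sketch that recovers a sparse "scene" from fewer measurements than unknowns using classical iterative shrinkage-thresholding (ISTA). This is a stand-in for intuition only: CoIR itself regularizes with an untrained convolutional decoder rather than an l1 penalty, and all dimensions below are made up.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=300):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (a classical compressed-sensing solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((25, 50)) / np.sqrt(25)   # 2x fewer measurements than unknowns
x_true = np.zeros(50)
x_true[[4, 17, 33]] = [1.0, -1.5, 0.8]            # sparse toy scene
y = A @ x_true
x_hat = ista(A, y)                                 # sparse reconstruction
```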

  • First-Arrival Differential Counting for SPAD Array Design

    Sensors, 2023

    SPAD-LiDAR arrays have suffered from low resolution due to high data bandwidth and the bulky timing circuits needed to extract the time of flight (TOF) of photons. We propose a novel, lightweight in-pixel computing architecture that we term first-arrival differential (FAD) LiDAR: instead of recording quantized time-of-arrival information at individual pixels, we record a temporal differential measurement between pairs of pixels. FAD-LiDAR holds promise for realizing large-scale SPAD arrays, and in the paper we demonstrate how the information recorded by the FAD units can be processed directly into useful 3D features for inference and imaging without relying on conventional timing circuits.
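A minimal sketch of the pairwise measurement idea, assuming a simplified synchronous model: each FAD unit reports only the signed time difference between the first photon arrivals at two pixels, never the per-pixel timestamps. The function name and the list-of-floats representation are illustrative; real FAD units operate on asynchronous detector pulses.

```python
def fad(times_a, times_b):
    """First-arrival differential for one SPAD pixel pair: the signed time
    difference between the first photon at pixel B and the first photon at
    pixel A. Only this differential is read out, not absolute timestamps."""
    return min(times_b) - min(times_a)

# Photon arrival times (ns) at two neighbouring pixels.
a = [5.2, 7.9, 11.3]
b = [6.0, 6.4]
d = fad(a, b)   # positive: the first arrival at A precedes the one at B
```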

  • PS2F: Polarized Spiral Point Spread Function for Single-Shot 3D Sensing

    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022

    We propose a compact snapshot monocular depth estimation technique that relies on an engineered point spread function (PSF). Traditional approaches used in microscopic super-resolution imaging, such as the Double-Helix PSF (DHPSF), are ill-suited for scenes more complex than a sparse set of point light sources. We show, using the Cramér-Rao lower bound, that separating the two lobes of the DHPSF and thereby capturing two separate images leads to a dramatic increase in depth accuracy.
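The Cramér-Rao argument can be sketched numerically with a toy 1-D model: depth shifts two PSF lobes in opposite directions, and we compare the Fisher information when the lobes are summed in one image (DHPSF-like) against polarization-separating them into two images (PS2F-like). The Gaussian lobe shape, widths, and noise model below are illustrative assumptions, not the paper's exact PSF.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
w, theta, sigma = 1.0, 0.3, 1.0     # lobe width, depth shift, noise std

def lobe(c):
    """Gaussian lobe centred at c (toy stand-in for one PSF lobe)."""
    return np.exp(-(x - c) ** 2 / (2 * w ** 2))

def dlobe_dtheta(c, sign):
    """Derivative of lobe(c + sign*theta) with respect to theta."""
    return sign * (x - c - sign * theta) / w ** 2 * lobe(c + sign * theta)

dA = dlobe_dtheta(0.0, +1)   # lobe A moves right with increasing depth
dB = dlobe_dtheta(0.0, -1)   # lobe B moves left with increasing depth

# Fisher information under i.i.d. Gaussian noise: I = ||ds/dtheta||^2 / sigma^2.
I_combined  = np.sum((dA + dB) ** 2) / sigma ** 2                # one summed image
I_separated = (np.sum(dA ** 2) + np.sum(dB ** 2)) / sigma ** 2   # two images

crlb_combined, crlb_separated = 1 / I_combined, 1 / I_separated
```

Because the two overlapping lobes move in opposite directions, their derivatives largely cancel in the summed image, so the separated capture has strictly higher Fisher information and a lower CRLB on depth.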

  • In Vivo Lensless Microscopy  

    Nature Biomedical Engineering, 2022

    We show that lensless imaging of tissue in vivo can be achieved via an optical phase mask designed to create a point spread function consisting of high-contrast contours with a broad spectrum of spatial frequencies. We built a prototype lensless microscope incorporating the ‘contour’ phase mask and used it to image calcium dynamics in the cortex of live mice (over a field of view of about 16 mm^2) and in freely moving Hydra vulgaris, as well as microvasculature in the oral mucosa of volunteers.

  • Deep Learning Extended Depth-of-field Microscope for Fast and Slide-free Histology

    Proceedings of the National Academy of Sciences, 2020

    We present DeepDOF, a computational microscope that breaks free from the fixed depth-of-field (DOF) constraint of conventional high-resolution objectives, achieving a >5× larger DOF while retaining cellular-resolution imaging. This obviates the need for z-scanning and significantly reduces the time needed for imaging. DeepDOF offers an inexpensive means for fast and slide-free histology, suited to improving tissue sampling during intraoperative assessment and in resource-constrained settings.

Datasets

Team

Code