Rice Computational Imaging Lab

Our lab focuses on solving challenging problems in imaging and vision by co-designing sensors, optics, electronics, signal processing, and machine learning algorithms. This emerging area of research is called computational imaging or, more generally, computational sensing. Our group is largely application-agnostic and focuses on developing foundational theories, tools, techniques, and systems.



  • High Resolution Wavefront Sensor

    Light: Science & Applications, 2019


    We present a novel computational-imaging-based technique, the Wavefront Imaging Sensor with High resolution (WISH). We replace the microlens array of the Shack–Hartmann wavefront sensor (SHWFS) with a spatial light modulator (SLM) and use a computational phase-retrieval algorithm to recover the incident wavefront. The resulting sensor can measure highly varying optical fields at more than 10-megapixel resolution with fine phase estimation.
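
    The recovery step can be sketched as a multi-shot phase-retrieval iteration. This is a minimal illustration, not the published WISH algorithm: it assumes Fraunhofer propagation (modeled by a single FFT) and uses a simple averaged alternating-projection update; the function name and update rule are our own.

    ```python
    import numpy as np

    def wish_phase_retrieval(intensities, slm_phases, n_iters=100):
        """Recover a complex field from intensity images captured under
        different SLM phase patterns (alternating-projection sketch).

        intensities : list of (H, W) sensor intensity images
        slm_phases  : list of (H, W) SLM phase patterns (radians)
        """
        H, W = intensities[0].shape
        field = np.ones((H, W), dtype=complex)  # flat initial guess
        for _ in range(n_iters):
            estimates = []
            for I, phi in zip(intensities, slm_phases):
                # forward model: modulate by the SLM, then propagate
                # (a single FFT stands in for free-space propagation)
                sensor = np.fft.fft2(field * np.exp(1j * phi))
                # enforce the measured magnitude, keep the estimated phase
                sensor = np.sqrt(I) * np.exp(1j * np.angle(sensor))
                # backward: inverse-propagate and remove the SLM modulation
                estimates.append(np.fft.ifft2(sensor) * np.exp(-1j * phi))
            field = np.mean(estimates, axis=0)  # consensus across all shots
        return field
    ```

    Multiple SLM patterns make the problem well-posed: each pattern scrambles the field differently, so only the true wavefront is consistent with every measurement.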

  • Synthetic Aperture Long Range Imaging

    Science Advances, 2017

    We propose to use macroscopic Fourier ptychography (FP) as a practical means of creating a synthetic aperture for visible imaging to achieve subdiffraction-limited resolution. We demonstrate the first working prototype for macroscopic FP in a reflection imaging geometry that is capable of imaging optically rough objects. 
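
    The core idea of Fourier ptychographic stitching can be sketched as below. This is a simplified block model, not the paper's algorithm: real FP extracts a circular pupil around shifted centers and jointly recovers pupil aberrations, whereas here each capture simply sees a rectangular patch of the object spectrum.

    ```python
    import numpy as np

    def fourier_ptychography(images, offsets, hi_shape, n_iters=30):
        """Stitch low-resolution intensity images, each observing a shifted
        sub-aperture of the object spectrum, into one high-resolution field.

        images  : list of (h, w) low-res intensity images
        offsets : list of (row, col) top-left corners of each sub-aperture
                  within the high-resolution Fourier spectrum
        """
        h, w = images[0].shape
        spectrum = np.zeros(hi_shape, dtype=complex)
        r0, c0 = offsets[0]
        # crude initialization from the first capture
        spectrum[r0:r0 + h, c0:c0 + w] = np.fft.fft2(np.sqrt(images[0]))
        for _ in range(n_iters):
            for I, (r, c) in zip(images, offsets):
                sub = spectrum[r:r + h, c:c + w]
                low = np.fft.ifft2(sub)                        # low-res field estimate
                low = np.sqrt(I) * np.exp(1j * np.angle(low))  # enforce measured magnitude
                spectrum[r:r + h, c:c + w] = np.fft.fft2(low)  # write the patch back
        return np.fft.ifft2(spectrum)  # high-resolution complex field
    ```

    Overlap between neighboring sub-apertures is what lets the iteration stitch phase consistently across the synthetic aperture.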

  • Ultraminiature Lensless Microscope

    Science Advances, 2017

    To break the fundamental tradeoff between device size and performance, we present a new concept for 3D fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred microns above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope.
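
    A mask-based lensless camera can be modeled, to first order, as the scene convolved with the mask's point spread function, which makes reconstruction a deconvolution problem. The sketch below uses a Tikhonov-regularized Fourier inverse filter as a stand-in for FlatScope's calibrated 3D reconstruction algorithm; the function name and the circular-convolution assumption are ours.

    ```python
    import numpy as np

    def lensless_reconstruct(capture, psf, reg=1e-2):
        """Recover a scene from a lensless capture modeled as a 2-D
        (circular) convolution of the scene with the mask's PSF,
        via Tikhonov-regularized deconvolution in the Fourier domain.
        """
        H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF assumed centered
        G = np.fft.fft2(capture)
        # Wiener-style inverse filter: reg suppresses near-zero frequencies
        est = np.conj(H) * G / (np.abs(H) ** 2 + reg)
        return np.real(np.fft.ifft2(est))
    ```

    A pseudorandom amplitude mask keeps the PSF's spectrum well conditioned, which is why a single regularized inverse recovers the scene so well.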

  • Camera-based non-contact vital signs monitoring 

    Biomedical Optics Express, 2015

    In this work we propose distancePPG, a new camera-based vital sign estimation algorithm. DistancePPG combines skin-color-change signals from different tracked regions of the face using a weighted average, where each region's weight depends on its blood perfusion and incident light intensity, to improve the signal-to-noise ratio (SNR) of the camera-based estimate.
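
    The combination step can be sketched as follows. This is a minimal illustration: in the paper the per-region weights are estimated from blood perfusion and incident light intensity, which we abstract here as given SNR values.

    ```python
    import numpy as np

    def distance_ppg(region_signals, region_snrs):
        """Combine per-region skin-color-change signals into one pulse
        signal by SNR-weighted averaging (the core idea of distancePPG).

        region_signals : (n_regions, n_samples) zero-mean color-change signals
        region_snrs    : (n_regions,) estimated SNR of each region
        """
        w = np.asarray(region_snrs, dtype=float)
        w = w / w.sum()                         # normalize weights to sum to 1
        return w @ np.asarray(region_signals)   # weighted average over regions
    ```

    Regions with strong perfusion and good lighting dominate the average, while poorly lit or weakly perfused regions contribute little noise.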

  • Looking around the corners

    Nature Communications, 2012

    Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection of light from walls. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision.
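
    The reconstruction idea can be sketched as elliptical backprojection: each photon's arrival time constrains the hidden point to lie on a surface of constant path length, and accumulating these constraints over many wall points localizes the hidden scene. The confocal simplification below (illumination and detection share each wall point) is our own illustration, not the paper's reconstruction pipeline.

    ```python
    import numpy as np

    def nlos_backprojection(transients, wall_pts, voxel_pts, bin_len):
        """Backproject time-resolved wall measurements into a hidden volume.

        transients : (n_wall, n_bins) photon counts per wall point and time bin
        wall_pts   : (n_wall, 3) wall point coordinates
        voxel_pts  : (n_vox, 3) candidate hidden-scene point coordinates
        bin_len    : distance light travels in one time bin
        """
        voxel_pts = np.asarray(voxel_pts, dtype=float)
        n_wall, n_bins = transients.shape
        vol = np.zeros(len(voxel_pts))
        for p, hist in zip(np.asarray(wall_pts, dtype=float), transients):
            # round-trip distance wall -> hidden point -> wall, in time bins
            d = 2.0 * np.linalg.norm(voxel_pts - p, axis=1)
            bins = np.round(d / bin_len).astype(int)
            valid = bins < n_bins
            vol[valid] += hist[bins[valid]]  # smear arrivals onto matching voxels
        return vol
    ```

    Voxels consistent with the arrival times at every wall point accumulate the most counts, so the hidden geometry appears as peaks in the backprojected volume.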

  • Optically computing the first layer of CNNs  

    CVPR, 2016

    In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similarly to the Gabor filter banks of the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and CNN FLOPs.
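
    In software terms, the sensor amounts to a fixed (non-learned) bank of oriented band-pass filters applied before the rest of the network. The sketch below emulates that with hand-built Gabor kernels; the parameter values and function names are illustrative, not the ASP hardware's actual transfer functions.

    ```python
    import numpy as np

    def gabor_kernel(size, theta, wavelength, sigma):
        """One oriented Gabor filter, the kind of edge response ASPs
        produce optically (parameter choices here are illustrative)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))     # Gaussian envelope
        return env * np.cos(2 * np.pi * xr / wavelength)  # oriented carrier

    def asp_first_layer(image, n_orientations=4, size=7, wavelength=4.0, sigma=2.0):
        """Emulate the optical first layer: a fixed bank of oriented
        edge filters whose outputs feed the remaining CNN layers."""
        thetas = np.arange(n_orientations) * np.pi / n_orientations
        H, W = image.shape
        maps = []
        for t in thetas:
            k = gabor_kernel(size, t, wavelength, sigma)
            h, w = k.shape
            # 'valid' 2-D cross-correlation via an explicit sliding window
            out = np.zeros((H - h + 1, W - w + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + h, j:j + w] * k)
            maps.append(out)
        return np.stack(maps)  # (n_orientations, H-size+1, W-size+1)
    ```

    Because this layer is fixed, it costs the digital pipeline nothing when computed in optics: the sensor hands the network pre-filtered feature maps instead of raw pixels.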

  • Camera-based blood perfusion imaging  

    Scientific Reports, 2020

    In this paper, we develop PulseCam, a new camera-based, motion-robust, and highly sensitive blood perfusion imaging modality with 1 mm spatial resolution and 1 frame-per-second temporal resolution. PulseCam can detect subtle changes in blood perfusion below the skin with at least two times better sensitivity and three times better response time, and is significantly cheaper compared to infrared thermography. PulseCam can also detect venous or partial blood flow occlusion that is difficult to identify using existing modalities such as the perfusion index measured using a pulse oximeter.