Computational imaging systems couple optical design with algorithmic design. Instead of optimizing these two components separately and sequentially, we treat the entire system as a single neural network and develop an end-to-end optimization framework. Specifically, the first layer of the network corresponds to the physical optical elements, and all subsequent layers represent the computational algorithm. All parameters are learned jointly by minimizing a task-specific loss over a large dataset. Such a learning-based framework can potentially go beyond the limits imposed by model-based methods and yield better sensor systems.
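As a toy illustration of this joint optimization (a minimal sketch, not the actual system described above), the "optical layer" can be stood in for by a learnable element-wise coded mask and the "algorithm" by a learnable linear reconstruction; both are updated together by gradient descent on a reconstruction loss. All names and the specific model here are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                # scene dimension (toy size)

# "Optical layer": a learnable coded mask applied to the scene (hypothetical stand-in
# for a physical optical element such as a coded aperture or phase mask).
mask = rng.uniform(0.1, 1.0, n)

# "Algorithmic layers": a learnable linear reconstruction (stand-in for the
# subsequent network layers).
W = rng.normal(0.0, 0.1, (n, n))

lr = 0.05
losses = []
for step in range(500):
    x = rng.normal(size=n)           # random training scene
    y = mask * x                     # optical encoding: coded measurement
    x_hat = W @ y                    # computational reconstruction
    r = x_hat - x                    # residual
    losses.append(float(r @ r))      # task-specific loss: squared reconstruction error

    # Analytic gradients of the loss, flowing through BOTH the algorithm (W)
    # and the optics (mask) -- the essence of end-to-end optimization.
    grad_W = 2.0 * np.outer(r, y)
    grad_mask = 2.0 * (W.T @ r) * x
    W -= lr * grad_W
    mask -= lr * grad_mask

print(losses[0], losses[-1])         # loss should shrink as optics and algorithm co-adapt
```

Because the loss gradient reaches the mask as well as the reconstruction weights, the two components co-adapt rather than being designed in isolation; in a real system, the optical layer's parameters would be constrained to physically realizable values.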