
Packet Compressed Sensing Imaging (PCSI)

This page provides details and theoretical background on PCSI.

For general info on PCSI and the computer program used to transmit PCSI images, return to the main page.

You can find the PDP specification used in PCSI here.

Author: KD9PDP

License: GPL-3.0

What is PCSI?

PCSI is a method for transmitting image data over connectionless networks in which each receiving station may receive a different random subset of packets (due to noise corruption or blocked signals), yet each station can independently reconstruct the entire original image with high fidelity using only the packets it received. High-quality, full-frame images can be reconstructed from as little as 10% of the original data. Even a receiver that joins the broadcast mid-transmission can reconstruct the full image.

PCSI Reference Application

The reference application is a remote amateur radio station on a high-altitude balloon or satellite transmitting images back to ground stations. The remote station is assumed to have little computing power, so minimal processing occurs on the remote station. The link is connectionless (i.e., one-directional broadcast), which leads to corrupted/lost packets.

How does it work? (summary)

Using compressed sensing imaging, a full image can be reconstructed from a random selection of its pixels. PCSI therefore transmits a random selection of pixels in each packet, so that every packet contains information from the entire image, and combining multiple packets improves the fidelity (e.g., resolution) of the received image. A good introduction is here: http://www.pyrunner.com/weblog/2016/05/26/compressed-sensing-python/.
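As an illustration, packing a random selection of pixels into each packet might look like the following sketch. The field layout, function names, and packet format here are assumptions for demonstration, not the PCSI packet specification:

```python
import random
import struct

def make_packet(image_id, pixels, k, rng):
    """Build one packet carrying k randomly chosen pixels.

    pixels: list of (Y, Cb, Cr) tuples in row-major pixel order.
    Indices are sampled without replacement, so a single packet
    never repeats a pixel; each packet samples the whole image.
    """
    indices = rng.sample(range(len(pixels)), k)
    payload = bytearray(struct.pack(">HH", image_id, k))
    for i in indices:
        y, cb, cr = pixels[i]
        # 4-byte pixel index followed by the three color samples
        payload += struct.pack(">IBBB", i, y, cb, cr)
    return bytes(payload)

def parse_packet(packet):
    """Recover (image_id, {pixel_index: (Y, Cb, Cr)}) from a packet."""
    image_id, k = struct.unpack_from(">HH", packet, 0)
    recovered = {}
    offset = 4
    for _ in range(k):
        i, y, cb, cr = struct.unpack_from(">IBBB", packet, offset)
        recovered[i] = (y, cb, cr)
        offset += 7
    return image_id, recovered
```

Because every packet is an independent random sample of the whole image, any subset of packets a station happens to receive still covers the image roughly uniformly.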

What are current methods for sending images from balloons/remote control vehicles/rockets/etc.?

What does PCSI do that is special?

Reconstructing PCSI Images

There is no specification or standard for how to reconstruct the images; users can experiment with different methods and find what works best. The reference implementation follows these steps (based on http://www.pyrunner.com/weblog/2016/05/26/compressed-sensing-python/):

  1. Decode all the pixel values and pixel numbers from as many packets as have been successfully received.
  2. For each color channel (Y, Cb, Cr), use OWL-QN for basis pursuit (https://en.wikipedia.org/wiki/Limited-memory_BFGS#OWL-QN) to find the discrete cosine transform (DCT) coefficients that best fit the received data while minimizing the L1 norm. This is the key to compressed sensing! The reference Python implementation uses https://bitbucket.org/rtaylor/pylbfgs/src/master/
  3. After finding the DCT coefficients, use the inverse DCT to generate the color channels for the image.
  4. Convert from YCbCr to RGB, and save the image!
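The core of step 2 can be sketched for a single channel in plain NumPy. To stay self-contained, this substitutes a simple iterative soft-thresholding (ISTA) loop for the OWL-QN solver used by the reference implementation; the function names and parameters are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix D, so dct2(x) = D @ x @ D.T."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[0, :] *= np.sqrt(0.5)
    return D

def reconstruct_channel(samples, mask, n, lam=0.01, iters=500):
    """Recover an n-by-n color channel from sparse pixel samples.

    samples: n-by-n array holding received pixel values (0 elsewhere)
    mask:    boolean n-by-n array, True where a pixel was received
    Minimizes ||mask * idct2(C) - samples||^2 + lam * ||C||_1 over the
    DCT coefficients C via ISTA (a stand-in for OWL-QN basis pursuit).
    """
    D = dct_matrix(n)
    idct2 = lambda C: D.T @ C @ D
    dct2 = lambda x: D @ x @ D.T
    C = np.zeros((n, n))
    for _ in range(iters):
        resid = mask * (idct2(C) - samples)            # error at known pixels
        C = C - dct2(resid)                            # gradient step (step size 1)
        C = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)  # L1 soft-threshold
    return idct2(C)                                    # step 3: inverse DCT
```

The soft-threshold drives most DCT coefficients to exactly zero, which is what makes the recovered channel agree with the received pixels while remaining sparse in the DCT basis.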