Since we've now seen both compressive sensing (Lecture 11) and linear programming (Lecture 17), let's do a cute little example of sparse signal recovery.
import numpy as np
from cvxopt import matrix, solvers
import matplotlib.pyplot as plt
%matplotlib notebook
Let's recall the $s$-sparse signal recovery problem that we saw in Lecture 11, and the algorithm for it. There is an unknown vector $x \in \mathbb{R}^n$ that we are trying to recover. All we can do are linear measurements: given a sensing vector $a \in \mathbb{R}^n$, we get back the measurement which is the inner product $\langle a,x\rangle$.
Ok, let's get started: here's an $s$-sparse vector $x$:
s = 15 # sparsity
n = 1000 # dimensions
k = 120 # number of measurements, i.e., rows of sensing matrix
# take the signal
z1 = np.ones(s)+np.random.random(s)
#z1 = np.random.random(s)
# pad it with noise/zeroes
#z2 = np.random.random(n-s)*0.10
z2 = np.zeros(n-s)
# and mix it all up to get the hidden vector
z = np.random.permutation(np.concatenate((z1,z2), axis=None))
And let's see what this looks like:
plt.subplot(2, 1, 1)
# plot the hidden vector
y = np.linspace(1,n,n)
plt.plot(y,z, '+-')
plt.tight_layout()
We perform $k$ measurements, with vectors $a_1, a_2, \ldots, a_k$ (each in $\mathbb{R}^n$), and get back the measurements $b_1, \ldots, b_k$ (where $b_i = \langle a_i,x\rangle$). We want to figure out $x$. Clearly, we could set $a_i$ to be $1$ in the $i^{th}$ location and zero otherwise, which would tell us the $n$ coordinates of $x$ with $k=n$ measurements. How can we do fewer measurements, if we know that $x$ is $s$-sparse?
A little notation to make life easier: let $A$ be the $k \times n$ *sensing matrix* whose rows are the $a_i$s. Hence $b = Ax$ is the vector of $k$ measurements. We construct $A$, find out $b$ by doing the measurements, and want to infer $x$. (Clearly $x$ is a solution to $Ax=b$, but since $k < n$, this system of linear equations is under-constrained and has many solutions. How do we find a sparse one?)
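Before answering that, here is a quick self-contained sketch (with toy dimensions of my own choosing, separate from the main experiment) of why the obvious answer fails: the minimum $\ell_2$-norm solution, which `np.linalg.lstsq` returns for an under-determined system, typically spreads its mass over almost all coordinates, so it is nothing like sparse.
# toy example: the minimum l2-norm solution of an under-determined system
# is generally dense, even when the true signal is sparse
rng = np.random.default_rng(0)
n_toy, k_toy, s_toy = 50, 20, 3
x_true = np.zeros(n_toy)
x_true[rng.choice(n_toy, s_toy, replace=False)] = 1.0
A_toy = rng.normal(size=(k_toy, n_toy))
b_toy = A_toy @ x_true
# for under-determined systems, lstsq returns the minimum-norm solution
x_l2, *_ = np.linalg.lstsq(A_toy, b_toy, rcond=None)
print("nonzero coordinates in the true signal:   ", np.count_nonzero(x_true))
print("coordinates of the l2 solution above 0.01:", int(np.sum(np.abs(x_l2) > 0.01)))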
The theorem mentioned in Lecture 11 was this one (due to Candes, Romberg and Tao, and Donoho): for $k = O(s \log (n/s))$ there exists a sensing matrix $A \in \mathbb{R}^{k \times n}$ so that for any $x$ that is $s$-sparse, we can find $x$ efficiently from $b = Ax$.
How to construct such a good sensing matrix $A$? They show that if we choose a Gaussian matrix (where each entry is an independent standard normal random variable, i.e., $\sim N(0,1)$) with these dimensions, then it satisfies the theorem with high probability!
# do the sensing
mu, sigma = 0, 1
A = np.random.normal(mu, sigma, (k, n))
b = A.dot(z)
Great. We've now got the measurements in $b$. How do we recover $x$? The algorithm is also simple: just find the solution to the linear system $Ax = b$ whose $\ell_1$ norm is the smallest, i.e., the solution minimizing $\sum_i |x_i|$.
This is not an LP as stated (the objective function is not linear, because of all those absolute value signs). But it can be converted into one by introducing auxiliary variables $z_1, \ldots, z_n$ (not to be confused with the code variable `z`), using the observation that $z_i \geq |x_i|$ if and only if $z_i \geq x_i$ and $z_i \geq -x_i$.
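Written out, the LP that the code below sets up is:
$$ \min \sum_{i=1}^n z_i \quad \text{subject to} \quad Ax = b, \qquad x_i - z_i \le 0 \ \text{ and } \ -x_i - z_i \le 0 \ \text{ for all } i. $$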
# and convert the absolute values into linear constraints: solve for
# [ I -I ][x] <= [0]
# [ -I -I ][z] [0]
Id = np.eye(n)
I1 = np.concatenate((Id,-Id), axis=1)
I2 = np.concatenate((-Id,-Id), axis=1)
Aprime = np.concatenate((I1,I2), axis=0)
bprime = np.zeros(2*n)
# pad A with zero columns: the equality constraints Ax = b only involve x, not the auxiliary variables
Aext = np.concatenate((A,np.zeros((k,n))),axis=1)
# objective: minimize the sum of the auxiliary variables, i.e., the l1 norm of x
c = np.concatenate((np.zeros(n), np.ones(n)), axis=None)
# we've used numpy arrays, but now want to use cvxopt
# so convert to cvxopt format
mA = matrix(Aext)
mb = matrix(b)
mAprime = matrix(Aprime)
mbprime = matrix(bprime)
mc = matrix(c)
# solve the LP: min <c,u> s.t. Aprime u <= bprime, Aext u = b, where u stacks x and the auxiliary variables
sol= solvers.lp(mc, mAprime, mbprime, mA, mb)
# and get the solution out
x = np.array(sol['x'])
Great! The first $n$ coordinates of `x` now hold the LP solution; let's see how it did.
plt.subplot(2, 1, 1)
# and plot both the solution and original vector
y = np.linspace(1,n,n)
plt.plot(y,x[:n], 'o-')
plt.plot(y,z, '+-')
plt.tight_layout()
Bingo! The LP solution (the circles) has perfectly recovered the original vector (the + signs) from just $k = 120$ measurements, far fewer than $n = 1000$!
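To go beyond eyeballing the plot, here is a quick sanity check (using the variables defined above); if the recovery is exact, this should be tiny, up to solver tolerance.
# maximum coordinate-wise deviation between the LP solution and the hidden vector
recovered = x[:n].flatten()
print("max |recovered - z| =", np.max(np.abs(recovered - z)))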
What if the vectors were not perfectly $s$-sparse, but had most of their mass in $s$ coordinates? The results of Candes, Romberg, and Tao (and of Donoho) also handle this noisy setting. (In fact, for a purely $s$-sparse signal one can make do with only $2s$ measurements, but we'll defer that discussion for another day.)
Let's see if this holds up, by running the sensing matrix on a noisy vector.
# take the signal
z1 = np.ones(s)+np.random.random(s)
#z1 = np.random.random(s)
# pad it with noise this time
z2 = np.random.random(n-s)*0.10
# and mix it all up
z = np.random.permutation(np.concatenate((z1,z2), axis=None))
# do the sensing
b = A.dot(z)
Aext = np.concatenate((A,np.zeros((k,n))),axis=1)
c = np.concatenate((np.zeros(n), np.ones(n)), axis=None)
# convert to cvxopt format
mA = matrix(Aext)
mb = matrix(b)
mAprime = matrix(Aprime)
mbprime = matrix(bprime)
mc = matrix(c)
# solve the LP: min <c,u> s.t. Aprime u <= bprime, Aext u = b, where u stacks x and the auxiliary variables
sol= solvers.lp(mc, mAprime, mbprime, mA, mb)
# and get the solution out
x = np.array(sol['x'])
# eps = 1e-5
# x[np.abs(x) < eps] = 0
plt.subplot(2, 1, 1)
# and plot both the solution and original vector
y = np.linspace(1,n,n)
plt.plot(y,x[:n], 'o-')
plt.plot(y,z, '+-')
plt.tight_layout()
Pretty good, huh? The orange crosses are the original signal, and the blue dots are the LP solution we found. We've done pretty well at identifying the large signal coordinates, though of course the small noise coordinates are scattered in arbitrary locations and can't be pinned down exactly.
We could try to trim away the small coordinates of the LP solution to get something that looks even better.
eps = 0.5
x[np.abs(x) < eps] = 0
plt.subplot(2, 1, 1)
# and plot both the solution and original vector
y = np.linspace(1,n,n)
plt.plot(y,x[:n], 'o-')
plt.plot(y,z, '+-')
plt.tight_layout()
Not perfect, but not bad. (And we can do better if we increase $k$.)
That's it for today.
If you play with the numbers (specifically, keeping $n$ and $s$ fixed and reducing $k$), you will see that the recovery remains pretty good down to a certain point, below which its quality plummets sharply.
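If you want to automate that exploration, here is a rough sketch of how such a sweep could look. The helper `l1_recovery` and the smaller dimensions `n_sweep` and `s_sweep` are my own choices (to keep the runtime modest), not anything from the lecture.
# rough sketch: fix n and s, vary k, and record how well the l1-minimizing
# LP solution recovers a hidden s-sparse vector (smaller dimensions for speed)
solvers.options['show_progress'] = False   # silence cvxopt's per-iteration output

def l1_recovery(A, b):
    """Return argmin ||x||_1 subject to Ax = b, via the same LP as above."""
    k, n = A.shape
    Id = np.eye(n)
    G = np.concatenate((np.concatenate((Id, -Id), axis=1),
                        np.concatenate((-Id, -Id), axis=1)), axis=0)
    h = np.zeros(2 * n)
    Aeq = np.concatenate((A, np.zeros((k, n))), axis=1)
    c = np.concatenate((np.zeros(n), np.ones(n)))
    sol = solvers.lp(matrix(c), matrix(G), matrix(h), matrix(Aeq), matrix(b))
    return np.array(sol['x'])[:n].flatten()

n_sweep, s_sweep = 200, 5
z_sweep = np.zeros(n_sweep)
z_sweep[np.random.choice(n_sweep, s_sweep, replace=False)] = 1 + np.random.random(s_sweep)

for k_sweep in [10, 20, 30, 40, 50, 60]:
    A_sweep = np.random.normal(0, 1, (k_sweep, n_sweep))
    x_rec = l1_recovery(A_sweep, A_sweep.dot(z_sweep))
    print("k =", k_sweep, " max error =", np.max(np.abs(x_rec - z_sweep)))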