We talked about why sparsity plays an important role in many of the inverse problems that we encounter in engineering. To actually find the sparse solutions to these problems, we add ‘sparsity-promoting’ terms to our optimization problems; the machine learning community calls this approach regularization.
Regularization
An optimization method that was popularized in the 1990s and 2000s is the LASSO, also called $\ell_1$ norm regularization, which solves problems of the following form:

$$\min_{x} \; f(x) + \lambda \|x\|_1.$$
Usually, $f(x)$ corresponds to an error/loss term. It can also be the negative of something we wish to maximize. The claim is that the additional term $\lambda \|x\|_1$ promotes the sparsity of the solution $x$, i.e., it attempts to set one or more elements of $x$ to $0$. Similarly, in ridge regression (also called Tikhonov regularization or $\ell_2$ norm regularization), we add a term $\lambda \|x\|_2^2$ to the objective. This is known to shrink the solution towards the origin, but it does not necessarily make the solution sparse.¹
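To make the contrast concrete, here is a small numerical sketch (a toy example of my own; the data, the value of $\lambda$, and the solver choices are illustrative, not prescribed by anything above). It fits the same least-squares loss once with an $\ell_1$ penalty, solved by proximal gradient descent (ISTA), and once with an $\ell_2$ penalty, which admits a closed-form ridge solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = A @ x_true + noise, where x_true has only two nonzero entries.
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [3.0, -2.0]
y = A @ x_true + 0.1 * rng.standard_normal(50)

lam = 5.0  # regularization weight, chosen by hand for this toy example

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# l1-regularized least squares: min_x 0.5*||Ax - y||^2 + lam*||x||_1,
# solved with proximal gradient descent (ISTA).
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the smooth part
x_l1 = np.zeros(10)
for _ in range(2000):
    grad = A.T @ (A @ x_l1 - y)
    x_l1 = soft_threshold(x_l1 - step * grad, step * lam)

# l2-regularized least squares (ridge): min_x 0.5*||Ax - y||^2 + lam*||x||_2^2
# has the closed-form solution below.
x_l2 = np.linalg.solve(A.T @ A + 2.0 * lam * np.eye(10), A.T @ y)

print("l1-regularized:", np.round(x_l1, 3))  # expect mostly exact zeros here
print("l2-regularized:", np.round(x_l2, 3))  # entries are shrunk, but generically nonzero
```

On data like this, the $\ell_1$-regularized solution typically contains exact zeros, while the ridge solution is merely shrunk towards the origin.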
What about $\|x\|_2$ (as opposed to $\|x\|_2^2$)? What would that do? How do we reason about an arbitrary ‘regularization term’ and interpret what it does? If you have encountered this question before, then you’ve likely seen explanations such as this one. 👈🏽 While that’s a great, conversational explainer on sparsity, I want to give it a slightly more formal treatment for anyone interested.
Sub-Gradient Descent
I expect that the reader is familiar with gradient descent and convex functions. I will offer a brief introduction to sub-gradient descent, which extends gradient descent to the case where the objective function is non-differentiable, but still convex.
A non-differentiable function is one that does not have a well-defined gradient at one or more points of its domain. But if the function is convex (i.e., bowl-shaped), then it has the next best thing: a sub-gradient of $f$ at $x$ is a vector $g$ such that the inequality

$$f(y) \geq f(x) + g^\top (y - x)$$

holds for all $y$ in the domain of $f$. The sub-gradient is not unique in general. However, if $f$ is differentiable at $x$, then $g$ takes the unique value of $\nabla f(x)$. A convex function has at least one sub-gradient at every point of its domain; we can prove that fact using this theorem. Observe that sub-gradients can be thought of as hyperplanes that touch or support the function from below, similar to how the tangent plane defined by the gradient of a differentiable convex function touches it from below.
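For concreteness, take $f(x) = |x|$ on $\mathbb{R}$. A scalar $g$ is a sub-gradient at $x = 0$ exactly when $|y| \geq g\,y$ holds for all $y$, which is the case if and only if $g \in [-1, 1]$; at any $x \neq 0$ the function is differentiable, so the only sub-gradient there is $\operatorname{sign}(x)$. This one-dimensional example does most of the work in the LASSO discussion below.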
Since the sub-gradient is non-unique, we define the sub-differential of $f$ at $x$, denoted as $\partial f(x)$, as the set of all sub-gradients of $f$ at $x$. We can now do gradient descent, but instead of the gradient, we pick a sub-gradient direction to descend along. This procedure of sub-gradient descent is motivated by the following fact: $x^\star$ is a global minimizer of $f$ if and only if $0 \in \partial f(x^\star)$. For differentiable functions, sub-gradient descent reduces to gradient descent.
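Here is a bare-bones sub-gradient descent sketch (my own illustration; the objective, the starting point, and the diminishing step-size rule $1/k$ are choices made for this example, and all variable names are made up). It minimizes $h(x) = \tfrac{1}{2}\|x - c\|_2^2 + \|x\|_1$, using $\operatorname{sign}(x)$ as one valid sub-gradient of $\|x\|_1$:

```python
import numpy as np

# Objective: h(x) = 0.5 * ||x - c||^2 + ||x||_1  (convex, non-differentiable where x_i = 0).
c = np.array([2.0, 0.3, -1.5])

def subgrad_h(x):
    # (x - c) is the gradient of the smooth part; np.sign(x) is one valid
    # sub-gradient of ||x||_1 (it returns 0 at x_i = 0, which lies in [-1, 1]).
    return (x - c) + np.sign(x)

x = np.zeros_like(c)
for k in range(1, 5001):
    x = x - (1.0 / k) * subgrad_h(x)  # diminishing step size, a standard choice

print(np.round(x, 3))
# The minimizer is the soft-thresholding of c: sign(c_i) * max(|c_i| - 1, 0),
# i.e. roughly [1.0, 0.0, -0.5]; the middle coordinate is pulled to (nearly) zero.
```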
Similar to how, for differentiable functions,

$$\nabla \left( f_1 + f_2 \right)(x) = \nabla f_1(x) + \nabla f_2(x),$$

we have

$$\partial \left( f_1 + f_2 \right)(x) = \partial f_1(x) + \partial f_2(x).$$

However, we are dealing with sets and not vectors in the non-differentiable case. The ‘$+$’ in the preceding equation refers to the Minkowski sum; for sets $A$ and $B$,

$$A + B = \{\, a + b : a \in A,\ b \in B \,\}.$$
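For instance, take $h(x) = \tfrac{1}{2}\bigl(x - \tfrac{1}{2}\bigr)^2 + |x|$ on $\mathbb{R}$. Using the sub-differential of $|x|$ worked out earlier,

$$\partial h(0) = \left\{ -\tfrac{1}{2} \right\} + [-1, 1] = \left[ -\tfrac{3}{2}, \tfrac{1}{2} \right],$$

and since $0 \in \left[ -\tfrac{3}{2}, \tfrac{1}{2} \right]$, the point $x = 0$ is a global minimizer of $h$, even though the gradient of the quadratic term alone does not vanish there.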
Revisiting the LASSO
With this, let’s look at the LASSO-type problem,

$$\min_{x \in \mathbb{R}} \; f(x) + \lambda\,|x|,$$

[Figure: $f(x)$ and $\lambda|x|$ plotted against $x$, with green lines marking sub-gradients of each at $x = 0$.]
where the green lines show the sub-gradients of the two functions at $x = 0$. This function is minimized whenever we can pick sub-gradients from $\partial f(x)$ and $\partial(\lambda|x|)$ such that they ‘cancel each other out’, i.e., sum to $0$.

At the differentiable points ($x \neq 0$), neither function has much freedom in picking a sub-gradient. But at $x = 0$, $\lambda|x|$ has a whole range of sub-gradients to pick from, namely $[-\lambda, \lambda]$; it can choose one that ‘cancels out’ the corresponding (sub-)gradient of $f$ at $x = 0$. This is why a convex, non-differentiable regularization term is likely to pull the solution towards its non-differentiable points!
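A quick numerical way to see this cancellation (again a toy sketch of my own, with $f(x) = \tfrac{1}{2}(x - c)^2$ so that $f'(0) = -c$): the sub-differential of the full objective at $x = 0$ is $\{-c\} + [-\lambda, \lambda]$, so $x = 0$ is a minimizer exactly when $|c| \leq \lambda$. A brute-force grid search agrees:

```python
import numpy as np

lam = 1.0
xs = np.linspace(-5.0, 5.0, 200001)  # fine brute-force grid around x = 0

for c in [0.4, 0.9, 1.1, 3.0]:
    # 1-D LASSO-type objective with f(x) = 0.5 * (x - c)^2.
    h = 0.5 * (xs - c) ** 2 + lam * np.abs(xs)
    x_min = xs[np.argmin(h)]
    zero_is_optimal = abs(c) <= lam  # 0 lies in {-c} + [-lam, lam]
    print(f"c = {c:3.1f} | grid minimizer ~ {x_min:6.3f} | 0 in sub-differential at 0: {zero_is_optimal}")
```

For $|c| \leq \lambda$ the grid minimizer sits at (essentially) $0$; once $|c|$ exceeds $\lambda$, the sub-gradients can no longer cancel and the minimizer moves off $0$.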
Choosing a ‘Regularization Term’
Suppose $x \in \mathbb{R}^2$. The function $\|x\|_1$ has the following shape:

[Figure: the surface of $\|x\|_1$ over the $(x_1, x_2)$ plane, with a green plane corresponding to a sub-gradient at the origin.]
where the green plane corresponds to a sub-gradient at the origin. Since $\|x\|_1$ is non-differentiable along the axes, it tries to snap the minima towards the axes. Note that the axes of $\mathbb{R}^2$ are exactly where the sparse vectors are. What about when $x \in \mathbb{R}^3$? At what points is $\|x\|_1$ non-differentiable then? (Hint: it’s not just the axes!)
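If you want to check your answer to that hint: the one-dimensional sub-differential of $|x|$ from earlier extends coordinate-wise,

$$\partial \|x\|_1 = \bigl\{\, g : g_i = \operatorname{sign}(x_i) \ \text{if } x_i \neq 0, \ \ g_i \in [-1, 1] \ \text{if } x_i = 0 \,\bigr\},$$

so $\|x\|_1$ fails to be differentiable wherever at least one coordinate is zero. In $\mathbb{R}^3$, that is the whole set of coordinate planes, not just the axes.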
The function $\|x\|_2$ looks like an ice-cream cone:

Since it’s only non-differentiable at the origin, it tries to snap the solution towards the origin. This is different from ridge regression, which instead uses $\|x\|_2^2$. The function $\|x\|_2^2$ is differentiable everywhere; it is ‘bowl-shaped’. It pulls the solution towards the origin, but does not particularly demand that the solution be exactly $0$. So is there a use for $\|x\|_2$? Yes! It can be used to promote the block-sparsity of a vector, where the $0$’s of the vector appear in blocks. Consider
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix},$$

where $x \in \mathbb{R}^n$, and each block $x_i \in \mathbb{R}^{n_i}$ with $\sum_{i=1}^m n_i = n$. Suppose we know that the sparsity of $x$ occurs in blocks, i.e., some of the $x_i$ are full of zeros. Then, the regularization term $\sum_{i=1}^m \|x_i\|_2$ is what we want to use, since it sets some of the $x_i$ to $0$ but does not promote sparsity within each block. (I used this fact to solve an engineering problem in my PhD dissertation.)
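As a sketch of how this sum-of-$\ell_2$-norms penalty acts (my own toy example; the function name, block sizes, values, and threshold are all made up for illustration), here is its proximal operator, which shrinks each block as a whole and returns an exactly-zero block whenever that block’s norm falls below the threshold:

```python
import numpy as np

def block_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2: shrink the whole block v toward 0,
    and return exactly 0 when ||v||_2 <= t."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

# A vector made of three blocks; the middle block is "small".
blocks = [np.array([3.0, -2.0]), np.array([0.2, -0.1, 0.3]), np.array([1.5, 1.5])]
lam = 0.8

shrunk = [block_soft_threshold(b, lam) for b in blocks]
for b, s in zip(blocks, shrunk):
    print(b, "->", np.round(s, 3))
# The small middle block is set entirely to zero, while the entries inside the
# surviving blocks are shrunk but stay dense: block-sparsity, not element-wise sparsity.
```

This is the block-wise analogue of the element-wise soft-thresholding that the $\ell_1$ norm induces.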
Closing Note
There are many different ways to think about sparsity. For instance, one could imagine trying to balance a tennis ball that is resting on one of the surfaces we showed above, by holding the surface from below and tilting it. The ball is likely to settle at one of the non-differentiable points of the surface, thereby minimizing its potential energy. I like the sub-gradient interpretation because it works irrespective of the dimension. We can test for differentiability of arbitrary functions even if we cannot visualize them.
¹ As an aside, LASSO and ridge regression can be studied using the theory of proximal operators.