Solving least squares problems with sparse regularization

API references:
class doatools.optim.l1lsq.L1RegularizedLeastSquaresProblem(m, k, formulation='penalizedl1', nonnegative=False)

Bases: object
Creates a reusable \(l_1\)-regularized least squares problem.
Let \(\mathbf{A}\) be an \(M \times K\) real dictionary matrix, \(\mathbf{b}\) be an \(M \times 1\) observation vector, \(\mathbf{x}\) be a \(K \times 1\) sparse vector, and \(l\) be a nonnegative regularization parameter. Let \(c\) be 0 if \(\mathbf{x}\) must be nonnegative and \(-\infty\) if \(\mathbf{x}\) can be any real number.
The default formulation, 'penalizedl1', is given by
\[\begin{split}\begin{aligned} \min_{\mathbf{x}}& \frac{1}{2} \| \mathbf{A}\mathbf{x} - \mathbf{b} \|_2^2 + l \| \mathbf{x} \|_1,\\ \text{s.t. }& \mathbf{x} \geq c. \end{aligned}\end{split}\]
This formulation can be solved efficiently with QP or FISTA.
One common variant, the 'constrainedl1' formulation, is given by
\[\begin{split}\begin{aligned} \min_{\mathbf{x}}& \| \mathbf{A}\mathbf{x} - \mathbf{b} \|_2^2,\\ \text{s.t. }& \| \mathbf{x} \|_1 \leq l,\ \mathbf{x} \geq c. \end{aligned}\end{split}\]
This formulation can be solved efficiently with QP.
A less common variant, the 'constrainedl2' formulation, is given by
\[\begin{split}\begin{aligned} \min_{\mathbf{x}}& \| \mathbf{x} \|_1,\\ \text{s.t. }& \| \mathbf{A}\mathbf{x} - \mathbf{b} \|_2 \leq l,\ \mathbf{x} \geq c. \end{aligned}\end{split}\]
Note that the \(l_2\) error is upper bounded by \(l\). If \(l\) is too small, this problem may be infeasible. This formulation can be reformulated as an SOCP.
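For concreteness, the default 'penalizedl1' formulation can be written out directly with a general-purpose convex modeling tool. The snippet below is a minimal, illustrative sketch using CVXPY (independent of this class); the names A, b, and lam, and the random data, are placeholders chosen purely for illustration.

    import cvxpy as cp
    import numpy as np

    m, k = 20, 50                      # dimensions of b and x
    rng = np.random.default_rng(0)
    A = rng.standard_normal((m, k))    # dictionary matrix (M x K)
    b = rng.standard_normal(m)         # observation vector (M x 1)
    lam = 0.1                          # regularization parameter l

    # 'penalizedl1' with the nonnegativity constraint (c = 0).
    x = cp.Variable(k, nonneg=True)
    objective = 0.5 * cp.sum_squares(A @ x - b) + lam * cp.norm1(x)
    cp.Problem(cp.Minimize(objective)).solve()
    print(x.value)  # estimated sparse vector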
Parameters:
- m (int) – Dimension of the observation vector \(\mathbf{b}\).
- k (int) – Dimension of the sparse vector \(\mathbf{x}\) (or the number of columns of the dictionary matrix, \(\mathbf{A}\)).
- formulation (str) – 'penalizedl1', 'constrainedl1', or 'constrainedl2'. Default value is 'penalizedl1'.
- nonnegative (bool) – Specifies whether \(\mathbf{x}\) must be nonnegative. Default value is False.
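Because the problem object is reusable, a typical pattern is to construct it once for fixed dimensions and then solve it repeatedly with new data. The sketch below assumes the class exposes a solve(A, b, l) method returning the estimated sparse vector; the dimensions and random data are placeholders, so treat this as illustrative rather than a verbatim API reference.

    import numpy as np
    from doatools.optim.l1lsq import L1RegularizedLeastSquaresProblem

    m, k = 20, 50
    rng = np.random.default_rng(42)
    A = rng.standard_normal((m, k))    # dictionary matrix (M x K)
    b = rng.standard_normal(m)         # observation vector (M x 1)

    # Reusable problem object: build once, solve many times with new data.
    problem = L1RegularizedLeastSquaresProblem(m, k, formulation='penalizedl1',
                                               nonnegative=True)

    # Assumed solve(A, b, l) signature -- verify against the method documentation.
    x_hat = problem.solve(A, b, 0.1)
    print(x_hat.shape)  # expected: (k,)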
class doatools.optim.l1lsq.L21RegularizedLeastSquaresProblem(m, k, n, complex=False)

Bases: object
Creates an \(l_{2,1}\)-norm regularized least squares problem.
The \(l_{2,1}\)-norm of a matrix variable \(\mathbf{X} \in \mathbb{C}^{K \times L}\) is given by
\[\| \mathbf{X} \|_{2,1} = \sum_{i=1}^K \left(\sum_{j=1}^L |X_{ij}|^2\right)^{\frac{1}{2}}.\]
The \(l_{2,1}\)-norm regularized least squares problem is given by
\[\min_{\mathbf{X}} \frac{1}{2} \| \mathbf{A}\mathbf{X} - \mathbf{B} \|_F^2 + l \| \mathbf{X} \|_{2,1},\]
where \(\mathbf{A}\) is \(M \times K\), \(\mathbf{X}\) is \(K \times L\), \(\mathbf{B}\) is \(M \times L\), and \(l\) is the regularization parameter. Usually \(\mathbf{A}\) is the dictionary matrix, \(\mathbf{X}\) is the sparse signal to be reconstructed, and \(\mathbf{B}\) is the observation matrix, where each column of \(\mathbf{B}\) represents a single observation.
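To make the definition concrete, the \(l_{2,1}\)-norm is simply the sum of the \(l_2\) norms of the rows of \(\mathbf{X}\). The following is a minimal NumPy sketch with a made-up matrix; the variable name X is purely illustrative.

    import numpy as np

    # Hypothetical 4 x 3 matrix used only to illustrate the definition.
    X = np.array([[1.0, 2.0, 2.0],
                  [0.0, 0.0, 0.0],
                  [3.0, 4.0, 0.0],
                  [1.0, 0.0, 0.0]])

    # l_{2,1} norm: sum of the l_2 norms of the rows of X.
    l21 = np.sum(np.linalg.norm(X, axis=1))
    print(l21)  # 3.0 + 0.0 + 5.0 + 1.0 = 9.0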
Parameters:
- m (int) – Number of rows of the dictionary matrix \(\mathbf{A}\).
- k (int) – Number of rows of \(\mathbf{X}\) (or the number of columns of the dictionary matrix, \(\mathbf{A}\)).
- n (int) – Number of observations (the number of columns of the observation matrix, \(\mathbf{B}\); \(L\) in the formulas above).
- complex (bool) – Specifies whether all matrices are complex. Default value is False.
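As with the \(l_1\) problem above, a common pattern is to build the problem once and reuse it across batches of observations. The sketch below assumes a solve(A, B, l) method returning the estimated \(K \times N\) matrix; the dimensions and random data are placeholders chosen for illustration, so verify the actual method signature against the class documentation.

    import numpy as np
    from doatools.optim.l1lsq import L21RegularizedLeastSquaresProblem

    m, k, n = 20, 50, 10
    rng = np.random.default_rng(7)
    A = rng.standard_normal((m, k))    # dictionary matrix (M x K)
    B = rng.standard_normal((m, n))    # observation matrix (M x N), one column per observation

    # Reusable l_{2,1}-regularized problem for real-valued data.
    problem = L21RegularizedLeastSquaresProblem(m, k, n, complex=False)

    # Assumed solve(A, B, l) signature -- verify against the method documentation.
    X_hat = problem.solve(A, B, 0.1)
    print(X_hat.shape)  # expected: (k, n)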