## Abstract

The l_{1}-regularized least squares problem arises in diverse fields; however, finding its solution is challenging because the objective function is not differentiable. In this paper, we propose a new one-layer neural network for finding the optimal solution of the l_{1}-regularized least squares problem. We first convert the problem into a smooth quadratic minimization by splitting the desired variable into its positive and negative parts. A novel neural network is then proposed to solve the resulting problem, and it is guaranteed to converge to the solution. Furthermore, the rate of convergence depends on a scaling parameter rather than on the size of the dataset. The proposed neural network is further adapted to handle total variation regularization. Extensive experiments on l_{1}- and total-variation-regularized problems illustrate the sound performance of the proposed neural network.
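The splitting idea described in the abstract can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper's exact network dynamics): writing x = u − v with u, v ≥ 0 turns the nonsmooth term ||x||₁ into the linear term 1ᵀ(u + v), giving a smooth quadratic program over the nonnegative orthant, which is then solved here by a simple projected-gradient iteration (an Euler discretization of projected-gradient dynamics). The matrix A, vector b, regularization weight lam, and step size eta are all illustrative choices.

```python
import numpy as np

# Sketch of the splitting reformulation (illustrative, not the paper's method):
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# With x = u - v, u >= 0, v >= 0, this becomes the smooth problem
#   min_{u,v >= 0} 0.5*||A(u - v) - b||^2 + lam*1^T(u + v),
# solved below by projected gradient descent.

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:5] = rng.standard_normal(5)          # sparse ground truth
b = A @ x_true
lam = 0.1                                    # regularization weight (illustrative)

u = np.zeros(n)
v = np.zeros(n)
eta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm of A

for _ in range(5000):
    g = A.T @ (A @ (u - v) - b)              # gradient of the quadratic data term
    u = np.maximum(u - eta * (g + lam), 0.0)   # projection keeps u >= 0
    v = np.maximum(v - eta * (-g + lam), 0.0)  # projection keeps v >= 0

x = u - v                                    # recombine to get the l1 solution
```

The projections onto the nonnegative orthant replace the nonsmooth soft-thresholding step of methods such as ISTA, which is precisely what makes the reformulated objective differentiable.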

| Original language | English |
|---|---|
| Journal | Neurocomputing |
| Publication status | Published - 2018 |

## Keywords

- Convex
- l_{1}-regularization
- Least squares
- Lyapunov
- Recurrent neural network
- Total variation