### Abstract

The l_{1}-regularized least squares problem arises in diverse fields. However, finding its solution is challenging because the objective function is not differentiable. In this paper, we propose a new one-layer neural network to find the optimal solution of the l_{1}-regularized least squares problem. We first convert the problem into a smooth quadratic minimization by splitting the variable into its positive and negative parts. We then propose a novel neural network to solve the resulting problem, which is guaranteed to converge to the solution of the original problem. Furthermore, the rate of convergence depends on a scaling parameter, not on the size of the dataset. The proposed neural network is further adapted to handle total variation regularization. Extensive experiments on l_{1}- and total-variation-regularized problems illustrate the strong performance of the proposed neural network.
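The splitting described above can be sketched in a few lines. Writing x = u − v with u, v ≥ 0 turns the non-smooth term λ‖x‖₁ into the linear term λ·1ᵀ(u + v) over the non-negative orthant, so a simple projected-gradient iteration (a discrete-time analogue of projected dynamics, not the paper's exact network; all names and the step-size rule below are illustrative assumptions) already solves the problem:

```python
import numpy as np

def l1_ls_split(A, b, lam, alpha=None, steps=5000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 via the split x = u - v,
    u, v >= 0. The l1 term becomes linear, lam * 1^T (u + v), and the
    constraint set is the non-negative orthant, so projection is just
    clipping at zero. This is an illustrative sketch, not the paper's
    recurrent network."""
    m, n = A.shape
    if alpha is None:
        # step size from the gradient Lipschitz constant ||A||_2^2
        alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(steps):
        r = A @ (u - v) - b      # residual A x - b
        g = A.T @ r              # gradient of the smooth part w.r.t. x
        # gradients w.r.t. u and v pick up +lam from the linear l1 term;
        # np.maximum(0, .) projects back onto the non-negative orthant
        u = np.maximum(0.0, u - alpha * (g + lam))
        v = np.maximum(0.0, v - alpha * (-g + lam))
    return u - v
```

With A equal to the identity this reduces to soft-thresholding of b at level lam, which gives a quick sanity check of the iteration.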

| Original language | English |
|---|---|
| Journal | Neurocomputing |
| DOIs | https://doi.org/10.1016/j.neucom.2018.07.007 |
| Publication status | E-pub ahead of print - Jan 2018 |

### Keywords

- Convex
- l<sub>1</sub>-regularization
- Least squares
- Lyapunov
- Recurrent neural network
- Total variation

## Fingerprint

Dive into the research topics of 'A novel one-layer recurrent neural network for the l<sub>1</sub>-regularized least square problem'. Together they form a unique fingerprint.

## Cite this

A novel one-layer recurrent neural network for the l<sub>1</sub>-regularized least square problem. *Neurocomputing*. https://doi.org/10.1016/j.neucom.2018.07.007