## Abstract

The l_{1}-regularized least squares problem arises in diverse fields. However, finding its solution is challenging because the objective function is not differentiable. In this paper, we propose a new one-layer neural network to find the optimal solution of the l_{1}-regularized least squares problem. We first convert the problem into a smooth quadratic minimization by splitting the desired variable into its positive and negative parts. We then propose a novel neural network to solve the resulting problem, which is guaranteed to converge to the solution of the original problem. Furthermore, the rate of convergence depends on a scaling parameter, not on the size of the dataset. The proposed neural network is further adjusted to encompass total variation regularization. Extensive experiments on l_{1}- and total-variation-regularized problems illustrate the reasonable performance of the proposed neural network.
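The variable-splitting step described above can be sketched numerically. The following is a minimal NumPy illustration, not the paper's exact network: it writes x = u − v with u, v ≥ 0, which turns the nondifferentiable l_{1} term into a smooth linear term over the nonnegative orthant, and then runs a generic Euler-discretized projection-type recurrent dynamics on the resulting quadratic program. The problem sizes, step sizes, and dynamics here are illustrative assumptions.

```python
import numpy as np

# Sketch: min ||A x - b||^2 + lam * ||x||_1 via the split x = u - v
# (u, v >= 0), giving the smooth QP
#   min ||A(u - v) - b||^2 + lam * 1^T (u + v)  s.t.  u, v >= 0.
# The dynamics below are a generic projection neural network
# z' = -z + max(0, z - eta * grad f(z)), Euler-discretized;
# the paper's one-layer network may differ in form.

rng = np.random.default_rng(0)
m, n = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth (illustrative)
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.5

def grad(z):
    """Gradient of the smooth split objective at z = [u; v]."""
    u, v = z[:n], z[n:]
    g = 2.0 * A.T @ (A @ (u - v) - b)
    return np.concatenate([g + lam, -g + lam])

z = np.zeros(2 * n)
eta = 1.0 / (4.0 * np.linalg.norm(A, 2) ** 2)  # step below 1/Lipschitz
dt = 0.5                                        # Euler step for the ODE
for _ in range(5000):
    # z' = -z + P_{>=0}(z - eta * grad f(z)); fixed points satisfy the KKT
    # conditions of the QP, hence solve the original l_1 problem.
    z = z + dt * (np.maximum(z - eta * grad(z), 0.0) - z)

x = z[:n] - z[n:]   # recover the signed solution from its two parts
```

At a fixed point, the recovered x satisfies the lasso optimality condition max_j |2 a_j^T (A x − b)| ≤ λ, which gives a simple convergence check independent of the ground truth.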

| Original language | English |
|---|---|
| Journal | Neurocomputing |
| Publication status | Published - 2018 |

### Bibliographical note

Green Open Access added to TU Delft Institutional Repository. 'You share, we take care!' – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses the Dutch legislation to make this work public.

## Keywords

- Convex
- l_{1}-regularization
- Least squares
- Lyapunov
- Recurrent neural network
- Total variation
