State convergence is essential in many scientific areas, e.g., multi-agent consensus/disagreement, distributed optimization, computational game theory, and multi-agent learning over networks. In this paper, we study for the first time the state convergence problem in uncertain linear systems. As a preliminary step, we characterize state convergence in linear systems via equivalent linear matrix inequalities. In the presence of uncertainty, we complement the canonical definition of (weak) convergence with a stronger notion, which requires the existence of a common kernel among the generator matrices of the difference/differential inclusion (strong convergence). We investigate under which conditions the two definitions are equivalent. Then, we characterize strong and weak convergence via Lyapunov arguments, (linear) matrix inequalities, and separability of the eigenvalues of the generator matrices. Finally, we show that, unlike asymptotic stability, state convergence does not enjoy a duality property.