Transient non-stationarity and generalisation in deep reinforcement learning

Maximilian Igl, Gregory Farquhar, Jelena Luketina, Wendelin Böhmer, Shimon Whiteson

Research output: Contribution to conference › Paper › peer-review



Non-stationarity can arise in Reinforcement Learning (RL) even in stationary environments. For example, most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Due to the transience of this non-stationarity, it is often not explicitly addressed in deep RL and a single neural network is continually updated. However, we find evidence that neural networks exhibit a memory effect, where these transient non-stationarities can permanently impact the latent representation and adversely affect generalisation performance. Consequently, to improve generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeated knowledge transfer of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom.
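The core mechanism of ITER is periodically distilling the current policy into a freshly initialised network, so the new network never sees the early, highly non-stationary data. The sketch below illustrates only that distillation step, using a toy linear softmax policy in place of a deep network; all names and hyperparameters here are illustrative, and the actual method additionally matches value estimates and interleaves distillation with continued RL updates.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


class LinearPolicy:
    """Toy softmax policy; stands in for the deep network in the paper."""

    def __init__(self, n_features, n_actions):
        self.W = rng.normal(scale=0.1, size=(n_features, n_actions))

    def probs(self, states):
        return softmax(states @ self.W)


def distill(teacher, student, states, lr=1.0, steps=500):
    """Fit the fresh `student` to the `teacher`'s action distribution on
    recently visited states (a simplified stand-in for ITER's transfer phase)."""
    p_teacher = teacher.probs(states)
    for _ in range(steps):
        p_student = student.probs(states)
        # Gradient of the mean cross-entropy between teacher and student.
        grad = states.T @ (p_student - p_teacher) / len(states)
        student.W -= lr * grad
    return student


# "Recent experience": states the behaviour policy visited.
states = rng.normal(size=(256, 8))
teacher = LinearPolicy(8, 4)           # network trained so far
student = distill(teacher, LinearPolicy(8, 4), states)  # fresh network

# After transfer, the fresh network closely matches the teacher's policy,
# and training would then continue from the student alone.
err = np.abs(student.probs(states) - teacher.probs(states)).max()
```

In the full algorithm this transfer is repeated several times over training, each time discarding the old network, so the network used at the end has experienced far less of the transient non-stationarity than one trained continuously from scratch.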
Original language: English
Number of pages: 16
Publication status: Published - 2021
Event: 9th International Conference on Learning Representations - Virtual Conference
Duration: 3 May 2021 – 7 May 2021
Conference number: 9


Conference: 9th International Conference on Learning Representations
Abbreviated title: ICLR 2021


  • Reinforcement Learning
  • Generalization


