End-to-End learning of decision trees and forests

Thomas M. Hehn*, Julian F.P. Kooij, Fred A. Hamprecht

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review



Conventional decision trees have a number of favorable properties, including a small computational footprint, interpretability, and the ability to learn from little training data. However, they lack a key quality that has helped fuel the deep learning revolution: being end-to-end trainable. Kontschieder et al. (ICCV, 2015) have addressed this deficit, but at the cost of losing a main attractive trait of decision trees: that each sample is routed along only a small subset of tree nodes. Here we present an end-to-end learning scheme for deterministic decision trees and decision forests. Thanks to a new model and an expectation–maximization training scheme, the trees are fully probabilistic at train time but, after an annealing process, become deterministic at test time. In experiments we explore the effect of annealing visually and quantitatively, and find that our method performs on par with or superior to standard learning algorithms for oblique decision trees and forests. We further demonstrate on image datasets that our approach can learn more complex split functions than common oblique ones, and facilitates interpretability through spatial regularization.
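The core idea of training probabilistically and annealing toward deterministic routing can be illustrated with a minimal sketch. This is not the authors' implementation; the oblique split parameters `w`, `b` and the inverse-temperature `beta` are hypothetical names, and the sigmoid routing with a growing steepness parameter is one common way to realize such annealing:

```python
import numpy as np

def soft_split(x, w, b, beta):
    """Probability of routing sample x to the left child of an oblique
    split w·x + b; beta is an inverse-temperature (annealing) parameter."""
    return 1.0 / (1.0 + np.exp(-beta * (x @ w + b)))

# hypothetical 2-D sample and split parameters, for illustration only
x = np.array([0.5, -1.0])
w = np.array([1.0, 2.0])
b = 0.3

# at low beta the routing is soft, i.e. probabilistic (train time)
p_soft = soft_split(x, w, b, beta=1.0)

# as beta grows during annealing, the routing approaches a hard
# 0/1 decision, recovering a deterministic tree (test time)
p_hard = soft_split(x, w, b, beta=100.0)
```

As `beta` increases, each sample is routed almost entirely to one child, so at test time only a single root-to-leaf path needs to be evaluated, which is the efficiency trait the paper aims to preserve.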

Original language: English
Pages (from-to): 997-1011
Journal: International Journal of Computer Vision
Volume: 128 (2020)
Publication status: Published - 2019


Keywords:

  • Decision forests
  • Efficient inference
  • End-to-end learning
  • Interpretability
