Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical. It has succeeded in a wide range of problems, but typically relies on manually generated demonstrations or specially deployed sensors, and has not generally been able to leverage the copious demonstrations available in the wild: those capturing behaviour that was occurring anyway, using sensors already deployed for another purpose, e.g., traffic camera footage of the natural behaviour of vehicles, cyclists, and pedestrians. We propose Video to Behaviour (ViBe), a new approach that learns models of behaviour from unlabelled raw video of a traffic scene collected by a single, monocular, initially uncalibrated camera of ordinary resolution. Our approach calibrates the camera, detects relevant objects, tracks them through time, and uses the resulting trajectories to perform LfD, yielding models of naturalistic behaviour. We apply ViBe to raw videos of a traffic intersection and show that it can learn purely from video, without additional expert knowledge.
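The abstract describes a four-stage pipeline: camera calibration, object detection, tracking, and LfD on the recovered trajectories. The sketch below shows how such stages might compose; every function name, signature, and data structure in it is a hypothetical illustration under that reading, not the authors' implementation.

# Minimal sketch of a ViBe-style pipeline, assuming four stages:
# calibration -> detection -> tracking -> LfD.
# All names and signatures below are hypothetical illustrations,
# not the authors' actual code.

from dataclasses import dataclass, field

@dataclass
class Track:
    """One agent's trajectory in ground-plane coordinates."""
    agent_id: int
    positions: list = field(default_factory=list)  # [(x, y), ...] per frame

def calibrate_camera(frames):
    """Hypothetical: estimate an image-to-ground-plane homography
    from the raw footage (the camera starts uncalibrated)."""
    raise NotImplementedError

def detect_objects(frame):
    """Hypothetical: return detections (bounding boxes) for vehicles,
    cyclists, and pedestrians in one frame."""
    raise NotImplementedError

def associate(tracks, detections, homography):
    """Hypothetical: project detections to the ground plane and
    extend existing tracks or start new ones."""
    raise NotImplementedError

def learn_behaviour_model(trajectories):
    """Hypothetical: fit a behaviour model to the trajectories via LfD,
    e.g., some form of imitation learning."""
    raise NotImplementedError

def vibe(video_frames):
    """Compose the stages named in the abstract."""
    homography = calibrate_camera(video_frames)
    tracks = {}  # agent_id -> Track
    for frame in video_frames:
        detections = detect_objects(frame)
        associate(tracks, detections, homography)
    trajectories = [t.positions for t in tracks.values()]
    return learn_behaviour_model(trajectories)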
Title of host publication: 2019 International Conference on Robotics and Automation, ICRA 2019
Number of pages: 7
Publication status: Published - 1 May 2019
Event: 2019 International Conference on Robotics and Automation, ICRA 2019, Montreal, Canada, 20–24 May 2019