A hybrid spatial–temporal deep learning architecture for lane detection

Yongqi Dong, Sandeep Patil, Bart van Arem, Haneen Farah*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Accurate and reliable lane detection is vital for the safe performance of lane-keeping assistance and lane departure warning systems. However, under certain challenging circumstances, it is difficult to accurately detect lanes from a single image, as is mostly done in the current literature. Since lane markings are continuous lines, lanes that are difficult to detect accurately in the current single frame can potentially be better inferred if information from previous frames is incorporated. This study proposes a novel hybrid spatial–temporal (ST) sequence-to-one deep learning architecture, which makes full use of the ST information in multiple continuous image frames to detect the lane markings in the last frame. Specifically, the hybrid model integrates the following aspects: (a) a single-image feature extraction module equipped with a spatial convolutional neural network; (b) an ST feature integration module constructed by an ST recurrent neural network; (c) an encoder–decoder structure, which casts this image segmentation problem in an end-to-end supervised learning format. Extensive experiments reveal that the proposed model architecture can effectively handle challenging driving scenes and outperforms available state-of-the-art methods.
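The sequence-to-one flow described in the abstract (per-frame spatial feature extraction, recurrent ST fusion across frames, then decoding a mask for only the last frame) can be illustrated with a minimal, shapes-only sketch. This is not the paper's implementation: the `encode`, `recurrent_step`, and `decode` functions below are illustrative stand-ins (mean-pooling for the CNN, an exponential moving average for the recurrent cell, nearest-neighbor upsampling plus thresholding for the decoder).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame):
    # Stand-in for the per-frame spatial CNN: downsample 2x by
    # mean-pooling 2x2 blocks into a coarse feature map.
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def recurrent_step(state, feat, alpha=0.5):
    # Stand-in for the ST recurrent cell: an exponential moving average
    # that fuses the current frame's features with accumulated history.
    return alpha * state + (1 - alpha) * feat

def decode(state):
    # Stand-in for the decoder: upsample back to input resolution and
    # threshold into a binary lane mask.
    up = state.repeat(2, axis=0).repeat(2, axis=1)
    return (up > up.mean()).astype(np.uint8)

# Five consecutive 8x8 grayscale "frames"; only the last frame is labeled,
# so the whole sequence is folded into one prediction (sequence-to-one).
frames = rng.random((5, 8, 8))
state = np.zeros((4, 4))
for frame in frames:
    state = recurrent_step(state, encode(frame))
mask = decode(state)  # segmentation mask for the last frame, shape (8, 8)
```

In the actual architecture the recurrent fusion would be a learned ST RNN operating on CNN feature maps inside an encoder–decoder, but the data flow (many frames in, one mask out) is the same as above.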

Original language: English
Pages (from-to): 1-20
Number of pages: 20
Journal: Computer-Aided Civil and Infrastructure Engineering
DOIs
Publication status: Published - 2022
