Lane Detection in Presence of Occlusion using Deep Neural Network

Document Type: Original Article

Authors

Electronic Engineering Department, Faculty of Engineering, Shahed University

Abstract

Lane detection is a key component in the development of autonomous vehicles, enabling real-time identification of the driving path and compliance with traffic regulations. Although current models perform well in controlled environments, they often struggle in real-world scenarios where lane visibility is occluded by snow, dust, or traffic, or where lane markings are absent. This study presents a new approach to lane detection that leverages the spatiotemporal attributes of video frames by combining Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) networks to enhance performance in the presence of occlusions. A CNN extracts high-level spatial features from each video frame, while an LSTM aggregates these features over time to model temporal dependencies and infer occluded segments when visual cues are absent. By framing lane-marking detection as a sequential learning problem, the combined CNN-LSTM network effectively extracts spatiotemporal features. This dual architecture integrates spatial and temporal information, increasing robustness to occlusions and varying lighting conditions. The proposed model was evaluated under two conditions, low and high occlusion, using a separate dataset for each, and was compared with a baseline architecture. The results confirm the effectiveness of the proposed approach: in low-occlusion conditions, the model achieves an F1 score of about 96%, on par with the baseline method, whereas in high-occlusion scenarios the baseline suffers a marked performance drop while the proposed model remains robust, again achieving an F1 score of about 96%.
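The abstract describes a per-frame CNN encoder whose features are aggregated over time by an LSTM. A minimal PyTorch sketch of that pattern is given below; the layer sizes, sequence length, and the coordinate-regression head are illustrative assumptions, not details taken from the paper.

```python
# Minimal CNN-LSTM sketch for lane detection over video clips.
# All architecture details (channel counts, hidden size, output head)
# are illustrative assumptions, not the authors' actual configuration.
import torch
import torch.nn as nn


class CNNLSTMLaneNet(nn.Module):
    def __init__(self, hidden_size=128, num_lane_points=56):
        super().__init__()
        # CNN encoder: extracts a spatial feature vector per frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 64, 1, 1)
        )
        # LSTM aggregates frame features over time, letting the model
        # infer lane position in frames where markings are occluded.
        self.lstm = nn.LSTM(64, hidden_size, batch_first=True)
        # Head: regresses (x, y) lane-point coordinates for the last frame.
        self.head = nn.Linear(hidden_size, num_lane_points * 2)

    def forward(self, clips):
        # clips: (B, T, 3, H, W) -- a short sequence of video frames
        b, t, c, h, w = clips.shape
        feats = self.encoder(clips.reshape(b * t, c, h, w)).reshape(b, t, 64)
        out, _ = self.lstm(feats)     # (B, T, hidden_size)
        return self.head(out[:, -1])  # predict from the final time step


model = CNNLSTMLaneNet()
clip = torch.randn(2, 5, 3, 128, 256)  # batch of 2 five-frame clips
pred = model(clip)
print(pred.shape)  # torch.Size([2, 112]) -- 56 points x 2 coordinates
```

Because the LSTM carries a hidden state across frames, the prediction for an occluded frame can draw on lane positions seen in earlier frames of the clip, which is the mechanism the abstract credits for robustness under high occlusion.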
