Road Detection without Lane Markings for Autonomous Vehicles

Conference proceedings article


Publication Details

Author list: Patcharapol Ratchatapaiboon, Nattapong Wattanasit, Sittiporn Sangatit, Benjamas Panomruttanarug, Werapon Chiracharit and Kittipong Warasup

Publication year: 2025

Title of series: The 2025 SICE Festival with Annual Conference (SICE FES 2025)

Number in series: 24th

Start page: 1

End page: 6

Number of pages: 6

Languages: English-United States (EN-US)


Abstract

This paper presents a simplified training strategy for drivable-area detection, focusing on road environments where lane markings are missing or unclear. Based on the YOLOP architecture, the proposed method removes the lane detection branch and adopts a two-stage training process. The model is first trained on object detection so that the encoder learns useful visual features. In the second stage, segmentation is introduced and both tasks are trained together in a multitask setup, allowing the model to maintain detection performance while learning to identify drivable areas accurately. This design reduces task interference and helps the model better adapt to unstructured roads. The approach aims to improve accuracy and robustness while maintaining architectural efficiency for real-world applications. Evaluation on a custom dataset featuring unstructured road conditions showed that the proposed model achieved notable segmentation accuracy (92.60% mIoU) and reliable object detection (91.30% recall, 68.10% mAP@0.5).
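
The two-stage schedule described in the abstract can be illustrated with a short PyTorch-style sketch. This is a minimal, hypothetical sketch, not the authors' implementation: the module names, layer sizes, data-loader format, and segmentation loss weight are all illustrative assumptions. The point is only that stage 1 optimizes the shared encoder through the detection loss alone, and stage 2 switches on the drivable-area segmentation loss so both branches are trained jointly.

# Hypothetical sketch of the two-stage training schedule (not the paper's code).
# Stage 1: detection-only training warms up the shared encoder.
# Stage 2: the drivable-area segmentation loss is added for joint multitask training.
import torch
import torch.nn as nn

class DrivableAreaNet(nn.Module):
    """YOLOP-style network with the lane-line branch removed (illustrative layers)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                       # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, num_classes + 5, 1)   # object detection branch
        self.seg_head = nn.Sequential(                       # drivable-area branch
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.det_head(feats), self.seg_head(feats)

def train_stage(model, loader, det_loss_fn, seg_loss_fn, seg_weight, epochs, lr=1e-3):
    """One routine for both stages: seg_weight=0 gives detection-only training."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, det_targets, seg_masks in loader:       # assumed loader format
            det_out, seg_out = model(images)
            loss = det_loss_fn(det_out, det_targets)
            if seg_weight > 0:
                loss = loss + seg_weight * seg_loss_fn(seg_out, seg_masks)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Usage sketch (loss functions and data loader omitted):
# model = DrivableAreaNet()
# train_stage(model, loader, det_loss, seg_loss, seg_weight=0.0, epochs=50)  # stage 1
# train_stage(model, loader, det_loss, seg_loss, seg_weight=1.0, epochs=50)  # stage 2

Under these assumptions, the encoder weights learned in stage 1 are reused directly in stage 2, which is what lets the segmentation head be introduced without degrading detection performance.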


Keywords

Autonomous Vehicle, lane detection, road boundary markings

