Road Detection without Lane Markings for Autonomous Vehicles
Conference proceedings article
Strategic Research Themes
- Automation (Innovative Materials, Manufacturing and Construction)
- Computational Science and Engineering (Digital Transformation)
- Digital Transformation (Strategic Research Themes)
- Information Technology (Digital Transformation)
- Machine Learning (Big Data Analytics)
- Smart Services and Products (Digital Transformation)
Publication Details
Author list: Patcharapol Ratchatapaiboon, Nattapong Wattanasit, Sittiporn Sangatit, Benjamas Panomruttanarug, Werapon Chiracharit and Kittipong Warasup
Publication year: 2025
Title of series: The 2025 SICE Festival with Annual Conference (SICE FES 2025)
Number in series: 24th
Start page: 1
End page: 6
Number of pages: 6
Languages: English-United States (EN-US)
Abstract
This paper presents a simplified training strategy for drivable area detection, focusing on road environments where lane markings are missing or unclear. Based on the YOLOP architecture, the proposed method removes the lane detection branch and adopts a two-stage training process. The model is first trained on object detection so that the encoder learns useful visual features. In the second stage, segmentation is introduced, and both tasks are trained together in a multitask setup, allowing the model to maintain detection performance while learning to identify drivable areas accurately. This design reduces task interference and helps the model adapt to unstructured roads. The approach aims to improve accuracy and robustness while maintaining architectural efficiency for real-world applications. Evaluation on a custom dataset featuring unstructured road conditions demonstrated that the proposed model achieved notable segmentation accuracy (92.60% mIoU) and reliable object detection (91.30% recall, 68.10% mAP@0.5).
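The two-stage setup described above can be illustrated with a short sketch. The code below is a minimal, runnable illustration under assumed choices (a toy encoder and heads, cross-entropy losses, and synthetic data); it is not the authors' YOLOP-based implementation, only the staging logic: detection-only training first, then joint detection and drivable-area segmentation.

```python
# Minimal sketch of the two-stage strategy from the abstract (not the authors' code).
# Stage 1 trains only the detection branch so the shared encoder learns visual
# features; stage 2 optimizes detection and drivable-area segmentation jointly.
# The tiny network, losses, and synthetic batches are illustrative assumptions.
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Shared encoder (stands in for the YOLOP backbone/neck).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: per-cell class logits (simplified stand-in).
        self.det_head = nn.Conv2d(32, num_classes, 1)
        # Segmentation head: per-pixel drivable / not-drivable logits.
        self.seg_head = nn.Sequential(
            nn.Conv2d(32, 2, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.det_head(feats), self.seg_head(feats)

def run_stage(model, optimizer, steps, use_seg, seg_weight=1.0):
    det_loss_fn = nn.CrossEntropyLoss()
    seg_loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        # Synthetic batch: images, coarse detection labels, drivable-area masks.
        images = torch.randn(2, 3, 64, 64)
        det_targets = torch.randint(0, 3, (2, 16, 16))
        seg_masks = torch.randint(0, 2, (2, 64, 64))

        det_out, seg_out = model(images)
        loss = det_loss_fn(det_out, det_targets)
        if use_seg:
            loss = loss + seg_weight * seg_loss_fn(seg_out, seg_masks)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

model = TinyMultiTaskNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

# Stage 1: detection only, so the encoder learns useful features first.
run_stage(model, optimizer, steps=10, use_seg=False)

# Stage 2: joint multitask training of detection and drivable-area segmentation.
run_stage(model, optimizer, steps=10, use_seg=True)
```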
Keywords
Autonomous Vehicle, lane detection, road boundary markings