Virtual analysis of urban road visibility using mobile laser scanning data and deep learning

Yang Ma*, Yubing Zheng, Said Easa, Yiik Diew Wong, Karim El-Basyouny

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

This study proposes a new computer-aided framework for virtually analyzing urban road visibility using mobile laser scanning (MLS) data. The proposed framework comprises three main parts: 1) based on a data reorganization procedure, the 3D U-net is introduced to tackle the issue of non-stationary noise that significantly hinders accurate detection of stationary sight obstructions; 2) a multi-step procedure is developed to extract the road areas to be assessed and to fill data gaps caused by occlusions; and 3) a virtual scanning method (VSM) is proposed to achieve fast and accurate visibility assessment of the extracted road areas. The proposed VSM also facilitates the application of deep neural networks in the automated driving domain to classify sight obstacles. By enabling multiple outputs, the proposed virtual framework provides a comprehensive understanding of urban road visibility, which can help road administrators detect and understand poor-visibility locations on urban streets.
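To make the visibility-assessment idea concrete, the sketch below shows a minimal line-of-sight check over point-cloud data: rays are cast from a viewpoint to candidate road points, and a point is flagged as occluded if any obstruction point lies close to the sight line. This is an illustrative simplification, not the paper's VSM; all names (check_visibility, the tolerance parameter, the synthetic data) are hypothetical.

```python
# Minimal line-of-sight sketch on point clouds (illustrative only; not the paper's VSM).
import numpy as np

def check_visibility(viewpoint, targets, obstacles, tolerance=0.2):
    """Return a boolean per target point: True if no obstacle point lies
    within `tolerance` metres of the sight line from viewpoint to target."""
    viewpoint = np.asarray(viewpoint, dtype=float)
    targets = np.asarray(targets, dtype=float)      # (M, 3) road points to assess
    obstacles = np.asarray(obstacles, dtype=float)  # (N, 3) candidate sight obstructions
    visible = np.ones(len(targets), dtype=bool)
    for i, t in enumerate(targets):
        d = t - viewpoint
        length = np.linalg.norm(d)
        if length == 0:
            continue
        d /= length
        rel = obstacles - viewpoint
        proj = rel @ d                               # projection of obstacles onto the ray
        between = (proj > 0) & (proj < length)       # only points between viewer and target can block
        if not np.any(between):
            continue
        # Perpendicular distance from blocking candidates to the sight line.
        perp = np.linalg.norm(rel[between] - np.outer(proj[between], d), axis=1)
        visible[i] = np.all(perp > tolerance)
    return visible

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    view = [0.0, 0.0, 1.2]                           # assumed driver eye height (m)
    road = np.column_stack([rng.uniform(5, 60, 50),  # synthetic road-surface points
                            rng.uniform(-3, 3, 50),
                            np.zeros(50)])
    wall = np.column_stack([np.full(200, 20.0),      # synthetic vertical obstruction at x = 20 m
                            rng.uniform(-3, 3, 200),
                            rng.uniform(0, 2, 200)])
    print(f"{check_visibility(view, road, wall).sum()} of {len(road)} road points visible")
```

A per-target loop is kept for clarity; a practical implementation over MLS-scale point clouds would use spatial indexing or voxelization rather than brute-force distance tests.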

Original language: English
Article number: 104014
Journal: Automation in Construction
Volume: 133
DOIs
Publication status: Published - Jan 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2021 Elsevier B.V.

ASJC Scopus Subject Areas

  • Control and Systems Engineering
  • Civil and Structural Engineering
  • Building and Construction

Keywords

  • Algorithms
  • Deep learning
  • Mobile Lidar
  • Road visibility
  • Sight distance
  • Virtual scanning

