2D Representation Learning of 3D Point Clouds in Segmentation and Localization for Autonomous Driving

Recently, light detection and ranging (LiDAR) sensors have attracted increasing attention in autonomous driving for collecting 3D geometric points surrounding a vehicle. Such collections of points, namely point clouds, provide useful complementary information to other sensors such as cameras, since they are produced independently of lighting conditions. The unstructured nature of point clouds, however, introduces many challenges in learning representations for various applications. For instance, the continuity and sparsity of point clouds, inherited from the 3D coordinates of our physical world, can lead to significant computational inefficiency in capturing the local geometric structures of objects that are crucial for recognition and segmentation. To address these challenges in point cloud representation, this dissertation proposes and comprehensively studies three different 2D representations (in terms of intrinsic dimensions), namely (1) spherical, (2) ellipsoid, and (3) graph-to-image, for point cloud segmentation and vehicle localization. In contrast to prior work, these 2D representations make full use of well-designed 2D convolutional neural networks (CNNs), achieving not only superior accuracy but also high computational efficiency. In summary, this dissertation demonstrates the great potential of solving 3D applications with well-known 2D (image-based) techniques, toward real-time processing of large-scale point clouds for autonomous driving.
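The dissertation's exact projection formulations are not reproduced on this page. As an illustration of the general idea behind the first (spherical) representation, a common spherical range-image projection of a LiDAR point cloud can be sketched as follows; the image resolution and vertical field-of-view values here are assumptions for the sketch, not parameters taken from the dissertation:

```python
import numpy as np

def spherical_projection(points, height=64, width=1024,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud onto a 2D range image using a
    spherical (azimuth/elevation) parameterization.

    All parameter defaults are illustrative assumptions."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range of each point
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Normalize angles to [0, 1], then scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * width
    v = (1.0 - (pitch - fov_down) / fov) * height
    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    # Fill the range image; when several points fall into the same cell,
    # writing in order of decreasing range keeps the nearest point.
    image = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(r)[::-1]
    image[v[order], u[order]] = r[order]
    return image
```

The resulting 2D range image can then be fed to an ordinary 2D CNN, which is the key efficiency argument the abstract makes for such representations.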

Identifier
  • etd-42751
Year
  • 2021
Date created
  • 2021-12-11
Last modified
  • 2023-09-20

Permanent link to this page: https://digital.wpi.edu/show/vd66w317d