Student Work

Automatizing Inspection and Defect Detection for Metal Additive Manufacturing Using Deep Learning Techniques


Metal 3D Printing, or Additive Manufacturing (AM), is a layer-by-layer manufacturing process that builds a structure by adding material rather than removing it. In contrast to conventional manufacturing techniques, AM allows for much faster, lower-cost prototyping, the construction of parts with complex geometries, and an overall reduction in material waste. However, AM is a complex process governed by many parameters, which are often interdependent and affected by the manufacturing environment. Optimizing these parameters is crucial to ensuring the high quality of the manufactured parts. Manual quality inspection of these parts is not only time-consuming but also depends on the skill of the quality control inspector. This project aims to automate defect detection and quality assessment for aluminum Al-10Si-Mg parts manufactured via laser powder bed fusion by applying Deep Learning techniques, specifically Convolutional Neural Networks, to identify, localize, and classify defects in cross-sectional image scans of those parts. To localize and classify defects in these image scans, we used Detectron2, Facebook's implementation of Mask R-CNN, a robust Region-Based Convolutional Neural Network designed to detect objects in high-dimensional images. Image scans for use in the model were generated and annotated on a rolling basis, and as the data became available, several data cleaning steps were necessary to properly train Mask R-CNN on these image scans. These steps included generating pixel-perfect masks corresponding to each annotated defect, thresholding out defects considered too small to classify, splitting classes into sub-classes to keep the data balanced, and, in some cases, stitching entire samples together and re-splitting them along custom boundaries so that the geometries of extreme defects were preserved. After several iterations, the final model was trained on 1510 images containing 3366 defect instances and achieved an overall classification accuracy of 98.73%, a segmentation Average Precision of 75.25, and a bounding-box Average Precision of 71.43 on the validation set.
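The abstract does not reproduce the project's training configuration; purely as an illustration, the sketch below shows one common way to fine-tune a Mask R-CNN model on COCO-format defect annotations with Detectron2. The dataset names, file paths, number of defect classes, and solver settings are placeholders and not the project's actual values.

```python
# Minimal sketch: fine-tuning Detectron2's Mask R-CNN on COCO-format
# defect annotations. All names, paths, and hyperparameters below are
# illustrative placeholders, not the configuration used in the report.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register training and validation image scans annotated in COCO instance format.
register_coco_instances("am_defects_train", {}, "annotations/train.json", "images/train")
register_coco_instances("am_defects_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
# Start from a Mask R-CNN (ResNet-50 + FPN) model pre-trained on COCO.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")

cfg.DATASETS.TRAIN = ("am_defects_train",)
cfg.DATASETS.TEST = ("am_defects_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4    # placeholder number of defect classes
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 5000             # placeholder training schedule

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

Validation metrics such as the bounding-box and segmentation Average Precision reported above can then be computed with Detectron2's built-in COCOEvaluator.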

  • This report represents the work of one or more WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement. WPI routinely publishes these reports on its website without editorial or peer review.
Identifier
  • 64691
  • E-project-042722-195528
Year
  • 2022
Date created
  • 2022-04-27

Permanent link to this page: https://digital.wpi.edu/show/gx41mn00x