Author Information

Thomas Pasfield

Is this project an undergraduate, graduate, or faculty project?

Undergraduate

Project Type

Individual

Campus

Daytona Beach

Authors' Class Standing

Thomas Pasfield, Senior

Lead Presenter's Name

Thomas Pasfield

Lead Presenter's College

DB College of Arts and Sciences

Faculty Mentor Name

Mihhail Berezovski

Abstract

We introduce a novel framework for synthesizing industrial CT-like images directly from 3D printer G-code to train a volumetric segmentation model. In this approach, the G-code (FDM toolpath instructions) is converted into a dense 3D volume using a custom anti-aliased line rendering algorithm, yielding synthetic CT images where voxel brightness corresponds to printed material density. To further mimic real CT scanning artifacts and improve the robustness of the dataset, we employ a Radon transform projection-reconstruction technique, creating more realistic synthetic data. Each generated volume is paired with a ground-truth label volume (distinguishing plastics from air), providing a ready-made training dataset for a 3D U-Net segmentation network. We enhance this dataset with augmentations, including noise injection, optical blur, and artificial void defects, to increase diversity and realism. Using HPC resources, we train and validate the 3D U-Net on these synthetic volumes, focusing on segmenting thermoplastic material within dense printed structures (≥70% infill). A qualitative evaluation against a real CT-scanned print (with manual segmentation labels) shows that the model correctly identifies material regions, demonstrating the feasibility of the synthetic training approach. This work provides a proof of concept that G-code-derived synthetic CT data can effectively train 3D segmentation models, offering a promising solution when real labeled CT datasets are scarce. Future work will expand real-world validation and explore integrating G-code data as sparse annotations in advanced segmentation techniques.
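
As a minimal sketch of the pipeline described above (not the project's actual implementation): the code below assumes scikit-image (>= 0.19) and uses its line_aa, radon, and iradon routines as simplified stand-ins for the custom anti-aliased renderer and the projection-reconstruction step; the helper names and parameters are illustrative only.

    # Sketch: rasterize extruding G-code moves into a 2D density slice, then
    # run a Radon projection/reconstruction round trip to mimic CT artifacts.
    import re
    import numpy as np
    from skimage.draw import line_aa              # anti-aliased 2D line rasterization
    from skimage.transform import radon, iradon   # CT projection / reconstruction

    def parse_gcode_moves(lines):
        """Yield ((x0, y0, z0), (x1, y1, z1)) for G1 moves that extrude material."""
        x = y = z = e = 0.0
        for ln in lines:
            if not ln.startswith(("G0 ", "G1 ")):
                continue
            vals = {k: float(v) for k, v in re.findall(r"([XYZE])([-+]?\d*\.?\d+)", ln)}
            nx, ny, nz, ne = vals.get("X", x), vals.get("Y", y), vals.get("Z", z), vals.get("E", e)
            if ln.startswith("G1 ") and ne > e:    # extrusion increased -> material deposited
                yield (x, y, z), (nx, ny, nz)
            x, y, z, e = nx, ny, nz, ne

    def rasterize_layer(segments, shape=(256, 256), scale=1.0):
        """Render one layer's extrusion segments into a density image (0..1)."""
        img = np.zeros(shape, dtype=np.float32)
        for (x0, y0, _), (x1, y1, _) in segments:
            rr, cc, val = line_aa(int(round(y0 * scale)), int(round(x0 * scale)),
                                  int(round(y1 * scale)), int(round(x1 * scale)))
            keep = (rr >= 0) & (rr < shape[0]) & (cc >= 0) & (cc < shape[1])
            img[rr[keep], cc[keep]] = np.maximum(img[rr[keep], cc[keep]], val[keep])
        return img

    def simulate_ct_slice(density_slice, n_angles=180):
        """Project and reconstruct one slice to introduce CT-like reconstruction artifacts."""
        theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        sinogram = radon(density_slice, theta=theta)
        recon = iradon(sinogram, theta=theta, filter_name="ramp")
        return np.clip(recon, 0.0, 1.0)

    # Example usage with a hypothetical file and layer height:
    # with open("part.gcode") as f:
    #     segs = list(parse_gcode_moves(f))
    # layer = rasterize_layer([s for s in segs if abs(s[0][2] - 0.2) < 1e-6], scale=2.0)
    # ct_like = simulate_ct_slice(layer)

In the actual framework, the per-layer slices would be stacked into a 3D volume, paired with binary label volumes, and augmented (noise, blur, synthetic voids) before training the segmentation network.
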

Did this research project receive funding support (Spark, SURF, Research Abroad, Student Internal Grants, Collaborative, Climbing, or Ignite Grants) from the Office of Undergraduate Research?

No


Producing Synthetic CT Imagery to Train 3D V-Net Segmentation Models
