Artificial intelligence (AI) and machine learning (ML) are advancing at a rapid pace and having an ever-larger impact on our daily lives. Equally, AI and ML are increasingly changing the way materials are researched and developed. Data-driven ML models are being used to establish complex processing-structure-property (PSP) relationships and, paired with optimization, to design and discover better materials faster than ever before. Computer vision models are being used to automatically quantify images of materials, allowing for unbiased, high-throughput analysis and enabling in situ process monitoring. Advanced ML models are being implemented as the brains of robotic labs that automatically determine optimal experiments and work tirelessly to discover new materials without human input. These models are also integrating with and empowering existing integrated computational materials engineering (ICME) tools to further accelerate materials design. This article provides a brief overview of the many ways that AI and ML are being used in materials and manufacturing research and highlights a few early applications of the technology in industry. The specific cases discussed here are illustrative examples, and their inclusion in this article in no way indicates that they are better or more advanced than the many unmentioned examples of AI and ML in the field.

AUTOMATIC IMAGE ANALYSIS
Establishing processing-structure-property relationships is the fundamental framework for developing materials. However, quantifying microstructure is difficult, time-consuming, and prone to bias. It is therefore no surprise that the ML-based image analysis used to enable self-driving cars and to analyze satellite and medical images is being applied to automatic microstructure analysis and process monitoring.

Foundational ML models that can be applied to a variety of microstructure quantification tasks were created by training convolutional neural network (CNN) encoders on a large dataset of over 100,000 microscopy images called MicroNet[1]. CNNs have two parts: (1) an encoder, which identifies the features contained within the image, and (2) a task-specific decoder, which uses the extracted features to perform a desired analysis task such as classification, property prediction (regression), segmentation, and many more. Traditionally, encoders are trained on millions of pictures of everyday life. By seeing these images, the encoders learn to detect simple features like edges and textures as well as high-level features such as dog ears and human faces. Then, through transfer learning, the pre-trained encoder is copied into larger, more complex models for other tasks, providing improved performance. This process is illustrated in Fig. 1. But learning to detect high-level features like dog ears isn't useful for microscopy analysis. By pre-training on a large microscopy dataset, MicroNet models learn to detect high-level features like grain boundaries and precipitates and perform much better in downstream microstructure analysis tasks.

MicroNet models have been used by NASA for the segmentation and subsequent analysis of Ni-base superalloys[2] and environmental barrier coatings[3], for direct strength prediction of Artemis core stage welds, and, in a collaboration with the University of Pittsburgh, embedded within an instance segmentation model to analyze the morphology of individual laser path microbeads in additive manufacturing (AM).
Clemex is integrating MicroNet models into Clemex Studio to produce superior segmentation models with sparse

Fig. 1 — Illustration of convolutional neural network models and transfer learning from MicroNet. By initially pre-training a classification model on a massive microscopy dataset from various materials and microscopes, the encoder learns to identify relevant microscopy features. Then, through a process called transfer learning, the encoder can be reused to perform more complex tasks, such as segmentation, with higher accuracy. Figure adapted from Ref. 1.
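To make the encoder/decoder and transfer-learning workflow described above (and shown in Fig. 1) concrete, the following is a minimal sketch in PyTorch. It is illustrative only: a torchvision ResNet with ImageNet weights stands in for a MicroNet-pre-trained encoder, and the simple upsampling decoder is a placeholder for the more sophisticated segmentation heads used in practice; the actual MicroNet architectures and training details are not reproduced here.

```python
# Sketch of reusing a pre-trained CNN encoder with a new task-specific decoder.
# Assumptions: PyTorch and torchvision are installed; ImageNet weights stand in
# for MicroNet pre-training; the decoder is a deliberately simple placeholder.
import torch
import torch.nn as nn
from torchvision import models

# 1) Start from an encoder pre-trained on a large image dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool + fc head

# Optionally freeze the transferred encoder so only the new decoder is trained.
for p in encoder.parameters():
    p.requires_grad = False

# 2) Attach a task-specific decoder; here, a tiny segmentation head that maps
#    encoder features to per-pixel class scores and upsamples to input size.
class SegmentationModel(nn.Module):
    def __init__(self, encoder, num_classes=2):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),  # 512 = ResNet18 feature depth
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        features = self.encoder(x)       # transferred image features
        logits = self.decoder(features)  # low-resolution per-pixel class scores
        return nn.functional.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False
        )

model = SegmentationModel(encoder, num_classes=2)  # e.g., matrix vs. precipitate
x = torch.randn(1, 3, 224, 224)                    # one micrograph-sized RGB tensor
print(model(x).shape)                              # torch.Size([1, 2, 224, 224])
```

Swapping the decoder for a classification or regression head follows the same pattern, which is what allows a single pre-trained encoder to serve many downstream microstructure analysis tasks.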