edfas.org ELECTRONIC DEVICE FAILURE ANALYSIS | VOLUME 26 NO. 3 8

removed to enhance visualization. Note that the background has been removed in Fig. 4a for better visualization; the input data to the model retains the original background. Figure 4b shows the overlay, visualized in 3D, of the solder voids segmented by the U-Net 2D model with Dice loss. The model is applied to each slice individually in 2D, and the resulting predicted slices are stacked to generate a 3D visualization of the data in the Dragonfly software. This overlay provides a comprehensive representation of the voids detected within the solder balls, offering insight into their spatial distribution and morphology. Furthermore, Fig. 4b overlays only the predicted voids on the original input image; to enhance the visualization of the voids, the model's predictions for the background and solder-ball classes are excluded from the overlay, and the color intensity is adjusted for a clearer representation of the voids within the solder balls. This visualization aids qualitative assessment of the model's performance, facilitating the identification of voids and their characteristics within the XRM scans of solder balls. Overall, Fig. 4 provides detailed insight into the segmentation results achieved by the U-Net 2D model, highlighting its efficacy for void detection in electronic assemblies.

One critical factor affecting performance is the volume of training data. Because each 3D scan is converted into over 1000 individual 2D slices, each treated as a 2D image, the resulting 2D training dataset offers far more training images than the original 3D dataset. This larger dataset gives the 2D models enough examples to learn the characteristics of voids.
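The slice-wise pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `toy_predict` is a hypothetical stand-in (a simple intensity threshold) for the trained U-Net 2D model, and the array shapes are invented for the example.

```python
import numpy as np

def segment_volume_slicewise(volume, predict_slice):
    """Apply a 2D segmentation model slice by slice, then stack
    the predicted masks back into a 3D volume."""
    masks = [predict_slice(volume[i]) for i in range(volume.shape[0])]
    return np.stack(masks, axis=0)

# Hypothetical stand-in for the trained U-Net 2D model (the article's
# model is a U-Net trained with Dice loss): a simple intensity threshold.
def toy_predict(slice_2d, threshold=0.5):
    return (slice_2d > threshold).astype(np.uint8)

rng = np.random.default_rng(0)
scan = rng.random((8, 64, 64))  # toy stand-in for an XRM scan: (slices, H, W)
segmentation = segment_volume_slicewise(scan, toy_predict)
```

The stacked `segmentation` array has the same shape as the input scan, so it can be loaded into a 3D viewer such as Dragonfly as an overlay volume.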
By contrast, the dataset for the 3D model consists of only one 3D scan, which may restrict the model's learning capacity. Additionally, the complexity of 3D data may create challenges in learning common features of voids. These factors could impact model performance and explain the lower score of the 3D model.

The graph in Fig. 5 compares the number of voids and the distribution of voids by volume as determined by the deep learning model and by a human operator.

Fig. 5 Comparison of the volume distribution of voids (3D) and the number of detected voids, by the deep learning model (U-Net 2D with Dice loss) and a human operator. The x-axis shows the volume range in mm³ (in multiples of 10⁻⁴) and the y-axis shows the number of detected voids in that volume range.
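A per-void volume tally like the one plotted in Fig. 5 can be derived from a binary segmentation mask by connected-component labeling. The sketch below is one plausible approach, not the authors' method; the function name, voxel volume, and bin edges are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def void_volume_distribution(mask, voxel_volume_mm3, bin_edges):
    """Label connected voids in a binary 3D mask, convert each void's
    voxel count to a physical volume, and histogram the volumes."""
    labeled, n_voids = ndimage.label(mask)
    # Voxel count per labeled void (labels run from 1 to n_voids).
    sizes = ndimage.sum(mask, labeled, index=range(1, n_voids + 1))
    volumes = np.atleast_1d(sizes) * voxel_volume_mm3
    counts, _ = np.histogram(volumes, bins=bin_edges)
    return n_voids, counts

# Toy mask with two voids: a single voxel and a 2x2x2 block.
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[0, 0, 0] = 1
mask[2:4, 2:4, 2:4] = 1
n, counts = void_volume_distribution(mask, voxel_volume_mm3=1e-4,
                                     bin_edges=[0, 5e-4, 1e-3])
```

With the assumed voxel volume of 10⁻⁴ mm³, the two toy voids fall into separate bins, mirroring how the counts per volume range in Fig. 5 would be produced.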