February 2026_EDFA_Digital

edfas.org ELECTRONIC DEVICE FAILURE ANALYSIS | VOLUME 28 NO. 1

Nevertheless, the method should be robust against deviations of the sample from the CAD layout, yet still sensitive to deviations caused by defects, without extensive data pre-alignment. To this end, a deep learning-based method was explored: Method D) using a pre-trained feature extractor to find differences in higher-level image features. The key challenge is selecting features that are invariant to deviations between the actual structures and the reference design while remaining sensitive to defects. Method D did not yield results different from the conventional approaches A and B: deviations between actual structures and reference design are also evident in such higher-level image features. The feature extractor would need to be fine-tuned with supervision indicating both expected deviations and defects. Another possible approach would be to apply a feature detector or image arithmetic to synthetic data generated from the CAD by a 2D image-to-image GAN[8] that has learned to generate ghost structures not present in the CAD data. This approach is reserved for future work.

Finally, yet another method was investigated: Method E) using a deep learning-based segmentation model to analyze geometric relationships between different segments. The idea is that features such as the remaining segments of an open interconnect should be discernible from intact interconnects in the same layer by inspecting properties such as segment aspect ratio and volume relations. This would allow reference-free defect detection if the pristine distribution of the analyzed segment properties were known. As this was not the case here, a reference was used by segmenting the corresponding layout image stack as well and then comparing the segment properties from both datasets.
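As a concrete illustration of the segment-property comparison behind Method E, the sketch below labels connected metal segments in a binary 3D stack and computes the per-segment volume and bounding-box aspect ratio that could then be compared between the experimental and layout stacks. This is a minimal sketch, not the authors' implementation; the helper name `segment_properties` and the use of bounding-box edges as the aspect-ratio measure are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_properties(binary_stack):
    # Hypothetical helper, not the article's code: label 3D connected
    # components (metal segments) in a binary image stack.
    labels, n = ndimage.label(binary_stack)
    # Per-segment volume in voxels.
    volumes = ndimage.sum(binary_stack, labels, index=np.arange(1, n + 1))
    # Aspect ratio approximated as longest / shortest bounding-box edge.
    aspects = []
    for sl in ndimage.find_objects(labels):
        edges = sorted(s.stop - s.start for s in sl)
        aspects.append(edges[-1] / max(edges[0], 1))
    return np.asarray(volumes, dtype=float), np.asarray(aspects, dtype=float)
```

Applied to both the experimental and the segmented layout stack, the two resulting (volume, aspect ratio) distributions can then be compared segment by segment.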
Training a segmentation model on the binary CAD images, annotated for the same ten classes as the experimental dataset, resulted in a lower IoU of 0.7 (25 training images, 1,200 epochs). Segmenting the full CAD image stack with this model produced segmentation artifacts, some too large to be removed by filtering. Improving this would require adding more annotated images to the training data, which is reserved for future work.

To explore method E with the available data, the segmentation classes were reduced to only two: all metal layers and background. Training with these reduced annotations resulted in an IoU of 0.96 (experimental) and 0.99 (CAD). Segmentation of the full CAD dataset with this simplified model was artifact-free, providing 138 M1 segments. Segmentation of the experimental dataset with its simplified model provided a very good result with only small erroneous segments, which were filtered out, likewise resulting in 138 M1 segments. Such post-segmentation cleanup can be avoided in future work by appropriate noise filtering before segmentation.

Method E showed potential for automatic defect detection. Figure 11 shows a scatter plot of M1 segment aspect ratios against volume, both normalized to their maximum values, each dot corresponding to a segment. Two outliers can be identified. Figure 12 presents a view of the M1 segmentation of the CAD data, and Fig. 13 shows the experimental dataset; here the segments corresponding to the outliers are marked: the two remaining halves of the open interconnect. Outliers in the distributions of experimental segment properties that are not present in the layout data indicate defects, at least in this case, where the defect type (open interconnect) was known from the initial logic analysis. The distributions in Fig. 11 differ in detail due to the discrepancies between layout and sample. Therefore, finding incomplete interconnects by visual inspection of the scatter plots is difficult. However, with improved

Fig. 12 3D rendering of the segmented M1 layer of the layout, colored by segment volume.

Fig. 13 3D rendering of the segmented M1 layer of the experimental dataset, colored by segment volume. The open interconnect halves corresponding to the outliers in Fig. 11 are highlighted.
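The outlier screening that Fig. 11 performs visually could be automated along the following lines. This is a hedged sketch, not the article's method: the nearest-neighbour distance criterion in the normalized property space and the threshold value are assumptions introduced here for illustration.

```python
import numpy as np

def flag_outliers(exp_props, cad_props, threshold=0.1):
    # exp_props, cad_props: (N, 2) arrays of (aspect_ratio, volume)
    # per segment. Normalize each property to its maximum, as in the
    # article's scatter plot, then flag experimental segments that lie
    # far from every segment of the layout (CAD) reference.
    exp = exp_props / exp_props.max(axis=0)
    cad = cad_props / cad_props.max(axis=0)
    # Distance from each experimental point to its nearest CAD point.
    d = np.linalg.norm(exp[:, None, :] - cad[None, :, :], axis=2).min(axis=1)
    return np.flatnonzero(d > threshold)
```

Segments returned by such a screen, like the two halves of the open interconnect here, would then be candidates for closer inspection.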
