
strain mapping, and defect detection. Because the field of 4D-STEM is continually evolving, we include some directions that are still in development. A follow-up article covering the topics of mapping crystallographic orientation, phase, and electromagnetic fields is in preparation.

IMAGING

STEM images—be they bright field (BF), annular bright field (ABF), or high-angle annular dark field (HAADF)—are traditionally generated by single-channel integrating detectors with either an annular or circular geometry. The detector in the far field subtends a solid angle established by the physical dimensions of the detector and the post-specimen optics of the microscope. Electrons scattered into that solid angle generate a signal, which is subsequently processed and digitized; a numerical grayscale value is then assigned to the image pixel corresponding to each position of the scan.

Alternatively, this process can be mimicked with a 4D-STEM dataset and summation. For each diffraction pattern within the 4D dataset, a subset of pixels can be summed, likewise producing a numerical value at each point in the raster and yielding what is sometimes referred to as a virtual[14] or synthetic image.[15] This process has been successfully demonstrated for BF and ADF images,[10] as well as for reconstructing images with atomic resolution.[16] Quantitative agreement has been found between images reconstructed from 4D-STEM datasets and simulations,[17] indicating that virtual images can replicate the fidelity of images acquired using traditional detectors.

The flexibility of generating images from a 4D dataset is what differentiates it from a traditional single-channel STEM detector. On a microscope, there are typically one to three STEM imaging detectors. Their geometries are rudimentary (albeit practical and chosen with good theoretical underpinning), the number of collection solid angles available is limited by the combination of detector dimensions and discrete predefined camera lengths (although this can be extended through various modifications of the post-specimen optics, e.g., inserting a post-specimen aperture to limit the outer collection angle[18]), and only one detector can collect a given angular range. By comparison, when using the 4D-STEM dataset, a limitless set of virtual detectors can be applied, and the collection angles do not need to be unique, since virtual detectors are not constrained by physically subtending one another. In Fig. 2, a set of four reconstructed images from GaN [12̄10] is shown, including a complex detector geometry and the reuse of the same collection angles.

Moreover, it is convenient to apply virtual detectors that highlight the presence of specific features. For instance, superlattice and Bragg reflections associated with a particular phase can be selected to generate position maps of minority phases within a matrix.[14,19] This may sound similar to the process of forming a dark field TEM image. Indeed, the similarity can be understood through the principle of reciprocity in TEM and STEM, which can be crudely summarized as follows: in a microscope, if a signal is detected at point A when a source is placed at point B, then the same signal would be detected at B if the source were placed at A.[20,21] The collection angle defined by the objective aperture in TEM and the convergence angle defined by the probe-forming aperture in STEM are analogs in this situation.
Yet, the flexibility of the probe-forming lens system allows for a wide range of convergence semi-angles, including very small (< 1 mrad) values, an angular range that may not be accessible with the objective apertures commonly installed in TEMs, and which ensures the separation of Bragg disks.

Fig. 2 A demonstration of atomic-resolution virtual imaging of GaN oriented to the [12̄10] zone axis. (a) An individual diffraction pattern from the 4D-STEM dataset and the corresponding ball model of GaN. (b) Reconstructed images with their corresponding masks. The color bar is the fraction of the incident beam intensity (unitless). All images were generated from the same dataset.
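To make the virtual-detector summation concrete, the following is a minimal sketch in Python/NumPy of how virtual BF, ADF, and feature-specific images can be computed from a 4D dataset. The file name, detector calibration, collection angles, and reflection position are illustrative assumptions, not values from the experiment described above.

```python
import numpy as np

# A minimal sketch, assuming the 4D-STEM data have been loaded as a NumPy array
# of shape (scan_y, scan_x, k_y, k_x): one diffraction pattern per probe position.
data = np.load("gan_4dstem.npy")            # hypothetical dataset
scan_y, scan_x, k_y, k_x = data.shape

mrad_per_pixel = 0.5                        # assumed detector calibration
center = (k_y / 2.0, k_x / 2.0)             # assumed center of the unscattered beam

# Radial coordinate of every detector pixel, in mrad
ky, kx = np.indices((k_y, k_x))
radius_mrad = np.hypot(ky - center[0], kx - center[1]) * mrad_per_pixel

def virtual_image(data, mask):
    """Sum the masked detector pixels of every diffraction pattern."""
    return np.tensordot(data, mask.astype(float), axes=([2, 3], [0, 1]))

# Virtual bright field: integrate everything inside the direct (BF) disk
bf_mask = radius_mrad <= 10.0               # example BF collection semi-angle
vbf = virtual_image(data, bf_mask)

# Virtual annular dark field: an annulus defined purely in software; its angles
# may overlap those of any other virtual detector applied to the same dataset
adf_mask = (radius_mrad >= 35.0) & (radius_mrad <= 150.0)
vadf = virtual_image(data, adf_mask)

# A feature-specific detector: a small disk around one (hypothetical) Bragg or
# superlattice reflection, mapping where that reflection is excited in the scan
g_pos = (center[0] + 40, center[1])         # assumed reflection position (pixels)
g_mask = np.hypot(ky - g_pos[0], kx - g_pos[1]) * mrad_per_pixel <= 3.0
phase_map = virtual_image(data, g_mask)
```

Because the masks are ordinary Boolean arrays, arbitrarily shaped or overlapping virtual detectors (such as those shown in Fig. 2) can be constructed and applied to the same dataset after acquisition.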
