layout, reducing the effort behind massive-scale SEM image acquisition. Generative adversarial networks (GANs) [8] hold the current state of the art in image synthesis. The high-resolution image synthesis task here follows the conditional GAN formulation described by Wang et al. [9] The architecture comprises a coarse-to-fine generator, a multiscale discriminator, and a robust adversarial learning objective.

Coarse-to-fine Generator. The generator network consists of two sub-networks, G1 and G2, where G1 is the global generator and G2 is the local enhancer. The full generator is then given by G = {G1, G2}, as shown in Fig. 2. The global generator works at an image resolution of 512 x 512, while each local enhancer outputs images at twice the resolution of the previous stage. By incorporating more local enhancer networks, images of higher resolution can be synthesized.

The global generator (G1) is built on the architecture proposed by Johnson et al. [10], which has been applied successfully to neural style transfer on images up to 512 x 512 resolution. G1 is composed of three separate parts: a convolutional frontend G1(F), a set of residual blocks G1(R), and a convolutional backend G1(B). The 512 x 512 layout is passed through the three components sequentially to produce an output image of 512 x 512.

The local enhancer network also comprises three parts: a convolutional frontend G2(F), a set of residual blocks G2(R), and a transposed-convolutional backend G2(B). The layout fed to G2 has a resolution of 1024 x 1024. The input to G2(R) is the element-wise sum of two feature maps: the output of G2(F) and the output of the backend of the global generator, G1(B). This helps translate information from G1 to G2. During training, the global generator G1 was trained first, then the local enhancer G2, and finally both networks were fine-tuned together. A minimal sketch of this two-stage generator is given at the end of this section.

Multiscale Discriminator. Following Wang et al. [9], three discriminators have been used. All the discriminators have an identical network architecture, but they operate at three different image scales; they are referred to here as D1, D2, and D3. First, the real and synthetic images are downsampled by factors of two and four to create an image pyramid of three scales. The discriminators D1, D2, and D3 are then trained to differentiate real from synthetic images at the three resolutions. The overall objective function with the multiscale discriminator is:

\min_G \max_{D_1, D_2, D_3} \sum_{k=1,2,3} \mathcal{L}_{GAN}(G, D_k)   (Eq 1)

where \mathcal{L}_{GAN}(G, D) = \mathbb{E}_{(S,X)}[\log D(S, X)] + \mathbb{E}_{S}[\log(1 - D(S, G(S)))], S is a sampled layout image, X is a sampled SEM image, and E[Y] means the expectation of the random variable Y. A corresponding sketch of the image pyramid and the summed loss follows the generator sketch at the end of this section.

IMAGE PRE-PROCESSING

The raw SEM image is passed through several pre-processing techniques to make it ready for logic cell extraction from every row. The pre-processing steps as

Fig. 2 Generator architecture. First, a residual network (G1) is trained on lower-resolution (512 x 512) images; this is the global generator. Another residual network (G2) is appended to G1 and both networks are trained jointly; G2 is called the local enhancer. The feature maps from the first layers of G2 and the last layer of G1 are summed element-wise and passed through the residual blocks of G2.
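As an illustration of the coarse-to-fine generator described above, the following is a minimal PyTorch-style sketch. It is not the exact configuration used here or in Wang et al. [9]: the class names (GlobalGenerator, LocalEnhancer, ResidualBlock), channel widths, and layer counts are assumptions chosen only to make the element-wise sum between the G2(F) and G1(B) feature maps concrete.

# Minimal sketch of the coarse-to-fine generator (assumed layer counts and channel
# widths; illustrative only, not the exact configuration of the article).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)                      # residual connection

class GlobalGenerator(nn.Module):                    # G1: operates at 512 x 512
    def __init__(self, in_ch=1, ch=64, n_res=9):
        super().__init__()
        self.frontend = nn.Sequential(               # G1(F): convolutional frontend
            nn.Conv2d(in_ch, ch, 7, padding=3), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.InstanceNorm2d(2 * ch), nn.ReLU(True))
        self.resblocks = nn.Sequential(*[ResidualBlock(2 * ch) for _ in range(n_res)])  # G1(R)
        self.backend = nn.Sequential(                # G1(B): upsampling backend
            nn.ConvTranspose2d(2 * ch, ch, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(True))
        self.to_image = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh())
    def forward(self, layout_512):
        feat = self.backend(self.resblocks(self.frontend(layout_512)))
        return self.to_image(feat), feat             # 512 x 512 image, plus features for G2

class LocalEnhancer(nn.Module):                      # G2: operates at 1024 x 1024 (2x previous stage)
    def __init__(self, in_ch=1, ch=32, n_res=3):
        super().__init__()
        self.frontend = nn.Sequential(               # G2(F): convolutional frontend
            nn.Conv2d(in_ch, ch, 7, padding=3), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.InstanceNorm2d(2 * ch), nn.ReLU(True))
        self.resblocks = nn.Sequential(*[ResidualBlock(2 * ch) for _ in range(n_res)])  # G2(R)
        self.backend = nn.Sequential(                # G2(B): transposed-convolutional backend
            nn.ConvTranspose2d(2 * ch, ch, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh())
    def forward(self, layout_1024, g1_features):
        fused = self.frontend(layout_1024) + g1_features  # element-wise sum of G2(F) and G1(B) outputs
        return self.backend(self.resblocks(fused))        # synthetic SEM image at 1024 x 1024

# Usage: G1 sees a 2x-downsampled copy of the layout; G2 consumes the full-resolution
# layout together with G1's last feature map.
g1, g2 = GlobalGenerator(), LocalEnhancer()
layout_1024 = torch.randn(1, 1, 1024, 1024)
coarse_image, g1_feat = g1(F.avg_pool2d(layout_1024, 2))
fine_image = g2(layout_1024, g1_feat)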
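The multiscale discriminator and the summed adversarial loss of Eq 1 can be sketched in the same spirit. The PatchGAN-style discriminator body, the use of average pooling to build the image pyramid, and the binary cross-entropy form of the loss are assumptions made for illustration; the article's exact discriminator architecture and loss variant may differ.

# Minimal sketch of the three-scale discriminator and the summed adversarial loss
# (assumed PatchGAN-style body and average-pooled image pyramid; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    # D1, D2, and D3 share this architecture but not their weights.
    def __init__(self, in_ch=2, ch=64):              # input: layout and SEM image, concatenated
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.InstanceNorm2d(2 * ch), nn.LeakyReLU(0.2, True),
            nn.Conv2d(2 * ch, 4 * ch, 4, stride=2, padding=1), nn.InstanceNorm2d(4 * ch), nn.LeakyReLU(0.2, True),
            nn.Conv2d(4 * ch, 1, 4, padding=1))      # patch-wise real/fake logits
    def forward(self, layout, image):
        return self.net(torch.cat([layout, image], dim=1))

def multiscale_scores(discriminators, layout, image):
    # D1 sees full resolution; D2 and D3 see copies downsampled by 2 and 4.
    scores = []
    for k, d in enumerate(discriminators):           # k = 0, 1, 2  ->  scale 1, 1/2, 1/4
        if k > 0:
            layout = F.avg_pool2d(layout, 2)
            image = F.avg_pool2d(image, 2)
        scores.append(d(layout, image))
    return scores

def gan_loss(scores, is_real):
    # Summing over the three discriminators mirrors the sum over k in Eq 1.
    target = torch.ones_like if is_real else torch.zeros_like
    return sum(F.binary_cross_entropy_with_logits(s, target(s)) for s in scores)

# Usage with dummy tensors: the discriminators maximize the objective, the generator minimizes it.
d_nets = [PatchDiscriminator() for _ in range(3)]
layout = torch.randn(1, 1, 1024, 1024)
real_sem, fake_sem = torch.randn(1, 1, 1024, 1024), torch.randn(1, 1, 1024, 1024)
d_loss = gan_loss(multiscale_scores(d_nets, layout, real_sem), True) + \
         gan_loss(multiscale_scores(d_nets, layout, fake_sem), False)
g_loss = gan_loss(multiscale_scores(d_nets, layout, fake_sem), True)  # generator's adversarial term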