
…G (see Fig. 5). The class-conditional formulation (Eq 3) follows that of the conditional GAN: [12]

$\mathcal{L}_{cGAN} = \mathbb{E}_{x,c}\left[\log D(x, c)\right] + \mathbb{E}_{z,c}\left[\log\left(1 - D(G(z, c), c)\right)\right]$   (Eq 3)

To handle the traditional mode-collapse problem and to ensure the generation of diversified images, a distinctive mapping of two closely sampled random variables ($z_1$, $z_2$) is enforced by a mode-seeking regularization term (Eq 4), as in the mode-seeking GAN: [12]

$\mathcal{L}_{ms} = \max_G \left( \frac{d_x\left(G(c, z_1), G(c, z_2)\right)}{d_z(z_1, z_2)} \right)$   (Eq 4)

where $d_x(x_1, x_2) = \lVert x_1 - x_2 \rVert_2^2$ denotes the distance metric. With the mode-seeking objective, the overall objective function is:

$\mathcal{L} = \mathcal{L}_{cGAN} + \lambda_{ms}\,\mathcal{L}_{ms}$   (Eq 5)

where $\lambda_{ms}$ controls the weight of the mode-seeking ratio in the overall objective. A short illustrative sketch of this regularizer is given below.

Fig. 5 Implementation of MSGAN for the system.
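For illustration only, a minimal PyTorch-style sketch of the mode-seeking term in Eq 4 and the combined objective in Eq 5 could look as follows. The generator call signature, the epsilon stabilizer, the weight value, and the reciprocal form used to turn the maximization into a minimization are assumptions (the reciprocal follows common mode-seeking GAN practice), not necessarily the authors' exact implementation.

```python
import torch

def mode_seeking_ratio(generator, c, z1, z2, eps=1e-8):
    """Mode-seeking term of Eq 4: distance between images generated from two
    closely sampled latent codes, divided by the distance between the codes.
    `generator(z, c)` is an assumed interface; the squared-L2 metric mirrors
    d_x as described in the text."""
    d_img = torch.mean((generator(z1, c) - generator(z2, c)) ** 2)  # d_x(G(c, z1), G(c, z2))
    d_z = torch.mean((z1 - z2) ** 2)                                # d_z(z1, z2)
    return d_img / (d_z + eps)

def generator_loss(adv_loss, ms_ratio, lambda_ms=1.0, eps=1e-8):
    """Overall generator objective in the spirit of Eq 5. Maximizing the
    mode-seeking ratio is realized here by minimizing its reciprocal,
    weighted by lambda_ms (value assumed for illustration)."""
    return adv_loss + lambda_ms / (ms_ratio + eps)
```

In training, $z_1$ and $z_2$ would be two latent vectors sampled independently for the same class condition $c$, so the regularizer pushes the generator to map nearby codes to visibly different images.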
CELL RECOGNITION

Extracted and synthesized cells have been used to train a CNN classifier that is deployed in the inference stage (Fig. 1). Cells were manually labeled for the classification task. The model uses the residual architecture [13] as its backbone; based on the complexity of shape and texture in the dataset, ResNet-18 was chosen. Its major component, the residual block, is resilient to the vanishing-gradient problem in deep learning (DL). As shown in Fig. 6, the last fully connected layer of the original model is replaced with an adaptive pooling layer followed by a new fully connected layer.

There were 7326 images from seven types of cells, divided in a 2:1:1 ratio for training, validation, and testing. The images are center cropped and resized to 96 × 96 pixels as a preprocessing step. Several augmentation techniques, such as rotation, magnification, and blur effects, are applied to increase the diversity of the acquired gate images and improve the learning capability of the model. The model is trained with a batch size (B) of 16 and a learning rate of $2 \times 10^{-3}$. Cross-entropy loss ($L_{CE}$) is used as the objective function for training this model (Fig. 6); a minimal training sketch follows below.

Fig. 6 Training the convolutional neural network (CNN) classifier using extracted and synthesized cells. ResNet-18 has been used as the base model architecture. A batch (B) of images (H × W) is passed through the network, and the class prediction for each sample (Y_Pred) is obtained by applying SoftMax to the network output. Cross-entropy loss has been used as the objective function to supervise the learning.
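As a hedged sketch, the classifier and its training step might be assembled in PyTorch/torchvision as shown below. The framework, optimizer, crop size, and augmentation parameter values are assumptions for illustration; only the ResNet-18 backbone, the pooling-plus-fully-connected head, the 96 × 96 input size, the batch size of 16, the learning rate of $2 \times 10^{-3}$, and the cross-entropy objective come from the text.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 7       # seven cell types (7326 labeled images per the text)
BATCH_SIZE = 16       # B, from the text
LEARNING_RATE = 2e-3  # from the text

# Preprocessing/augmentation approximating the text: center crop, resize to
# 96 x 96, plus rotation, magnification-like zoom, and blur. The crop size and
# augmentation parameters are assumptions, not the authors' exact values.
train_transform = transforms.Compose([
    transforms.CenterCrop(128),                          # assumed crop size
    transforms.Resize((96, 96)),
    transforms.RandomRotation(degrees=15),               # rotation
    transforms.RandomResizedCrop(96, scale=(0.8, 1.0)),  # magnification-style zoom
    transforms.GaussianBlur(kernel_size=3),              # blur effect
    transforms.ToTensor(),
])

# ResNet-18 backbone with the original head replaced by adaptive pooling
# followed by a new fully connected layer, as described for Fig. 6.
model = models.resnet18(weights=None)
model.avgpool = nn.AdaptiveAvgPool2d(1)                  # adaptive pooling layer
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new fully connected layer

criterion = nn.CrossEntropyLoss()                        # L_CE
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)  # optimizer choice is an assumption

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of (B, 3, 96, 96) images and integer labels.
    CrossEntropyLoss applies SoftMax internally, matching the Y_Pred = SoftMax(output)
    description in Fig. 6."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```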