
GANomaly: Theory and Source Code Analysis

This article explains the GANomaly model for semi‑supervised anomaly detection, covering its generator‑encoder‑discriminator architecture, loss functions, and test‑phase scoring, and walks through annotated PyTorch source code to help readers implement and understand the approach.

Rare Earth Juejin Tech Community

Introduction

The article introduces GANomaly, a semi‑supervised anomaly detection method based on adversarial training that learns from normal samples only, and assumes readers are already familiar with GANs and their use in defect detection.

GANomaly Architecture

The model consists of three sub‑networks: a generator (G) that combines an encoder (GE) and a decoder (GD), a separate encoder (E) that processes the generated image, and a discriminator (D) that distinguishes real from generated images.

Generator G encodes an input image x into a latent vector z via GE(x) and reconstructs it as x̂ = GD(z). The encoder E maps x̂ to ẑ, enabling a reconstruction loss between z and ẑ. The discriminator follows the classic GAN design.
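As a minimal illustration of this data flow (using hypothetical linear stand-ins for the convolutional sub-networks, 32×32 grayscale inputs, and a 100-dimensional latent space; the real sub-networks are DCGAN-style conv stacks shown later):

```python
import torch
import torch.nn as nn

# Stand-ins for the three sub-networks; shapes are illustrative only.
G_E = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 100))                  # encoder GE
G_D = nn.Sequential(nn.Linear(100, 32 * 32), nn.Unflatten(1, (1, 32, 32)))  # decoder GD
E = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 100))                    # second encoder E

x = torch.randn(8, 1, 32, 32)  # batch of input images
z = G_E(x)                     # latent code z = GE(x)
x_hat = G_D(z)                 # reconstruction x̂ = GD(z)
z_hat = E(x_hat)               # re-encoded latent ẑ = E(x̂)
```

During training, z and ẑ are pulled together by the encoder loss, while x and x̂ are pulled together by the contextual loss.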

Loss Functions

GANomaly uses three generator losses:

Adversarial Loss: L_adv = E_{x~p_x} ||f(x) - f(G(x))||_2, where f(·) denotes an intermediate feature layer of the discriminator (feature matching), encouraging realistic outputs.

Contextual Loss: L_con = E_{x~p_x} ||x - G(x)||_1, enforcing pixel‑wise similarity.

Encoder Loss: L_enc = E_{x~p_x} ||GE(x) - E(G(x))||_2, aligning latent representations.

The total generator loss is a weighted sum: L = w_adv·L_adv + w_con·L_con + w_enc·L_enc, with default weights w_adv = 1, w_con = 50, w_enc = 1. The discriminator loss follows the original GAN formulation.

Testing Phase

During inference, anomaly scores are computed using the encoder loss: A(x) = ||GE(x) - E(G(x))||_2. Although the paper reports an L1 norm, the provided code uses L2, as shown below.

# latent_i = G_E(x), latent_o = E(G(x))
error = torch.mean(torch.pow((latent_i - latent_o), 2), dim=1)
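In the reference implementation, the per-sample scores over a test set are additionally min–max normalized to [0, 1] before thresholding. A sketch of that scoring step, assuming the latent tensors have been flattened to shape (batch, nz):

```python
import torch

def anomaly_scores(latent_i, latent_o, eps=1e-12):
    """Per-sample L2 anomaly score, min-max normalized to [0, 1].

    latent_i = GE(x), latent_o = E(G(x)); both assumed shape (batch, nz).
    """
    error = torch.mean(torch.pow(latent_i - latent_o, 2), dim=1)
    return (error - error.min()) / (error.max() - error.min() + eps)

scores = anomaly_scores(torch.randn(16, 100), torch.randn(16, 100))
```

Samples whose normalized score exceeds a chosen threshold are flagged as anomalous.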

Source Code Overview

The article presents key PyTorch modules for the encoder, decoder, discriminator, and generator, highlighting weight initialization, layer construction, and forward passes.
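The weight initialization mentioned above follows the common DCGAN convention: conv weights drawn from N(0, 0.02) and BatchNorm weights from N(1, 0.02) with zero bias. A sketch of the usual helper (names here mirror the DCGAN reference code and may differ slightly from the article's source):

```python
import torch.nn as nn

def weights_init(mod):
    """DCGAN-style init: N(0, 0.02) for conv weights, N(1, 0.02) for BatchNorm."""
    classname = mod.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(mod.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(mod.weight.data, 1.0, 0.02)
        nn.init.constant_(mod.bias.data, 0)

# applied recursively to every sub-module via Module.apply
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
model.apply(weights_init)
```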

class Encoder(nn.Module):
    def __init__(self, isize, nz, nc, ndf, ngpu, n_extra_layers=0, add_final_conv=True):
        super().__init__()
        # ... (layer definitions build `main`, a DCGAN-style conv stack) ...
        if add_final_conv:
            # final conv maps the cndf feature maps down to an nz-dim latent vector
            main.add_module('final-{0}-{1}-conv'.format(cndf, 1),
                            nn.Conv2d(cndf, nz, 4, 1, 0, bias=False))
        self.main = main

    def forward(self, input):
        return self.main(input)

class Decoder(nn.Module):
    def __init__(self, isize, nz, nc, ngf, ngpu, n_extra_layers=0):
        super().__init__()
        # ... (layer definitions build `main`, a transposed-conv upsampling stack) ...
        self.main = main

    def forward(self, input):
        return self.main(input)

class NetD(nn.Module):
    def __init__(self, opt):
        super().__init__()
        model = Encoder(opt.isize, 1, opt.nc, opt.ngf, opt.ngpu, opt.extralayers)
        layers = list(model.main.children())
        # reuse all encoder layers except the last as a feature extractor
        self.features = nn.Sequential(*layers[:-1])
        # the final conv plus a sigmoid acts as the real/fake classifier
        self.classifier = nn.Sequential(layers[-1])
        self.classifier.add_module('Sigmoid', nn.Sigmoid())

    def forward(self, x):
        features = self.features(x)
        classifier = self.classifier(features).view(-1, 1).squeeze(1)
        return classifier, features

class NetG(nn.Module):
    def __init__(self, opt):
        super().__init__()
        self.encoder1 = Encoder(opt.isize, opt.nz, opt.nc, opt.ngf, opt.ngpu, opt.extralayers)
        self.decoder = Decoder(opt.isize, opt.nz, opt.nc, opt.ngf, opt.ngpu, opt.extralayers)
        self.encoder2 = Encoder(opt.isize, opt.nz, opt.nc, opt.ngf, opt.ngpu, opt.extralayers)

    def forward(self, x):
        latent_i = self.encoder1(x)          # z = GE(x)
        gen_image = self.decoder(latent_i)   # x̂ = GD(z)
        latent_o = self.encoder2(gen_image)  # ẑ = E(x̂)
        return gen_image, latent_i, latent_o

Generator Loss Implementation

self.l_adv = l2_loss        # adversarial (feature-matching) loss
self.l_con = nn.L1Loss()    # contextual (pixel-wise) loss
self.l_enc = l2_loss        # encoder (latent) loss
# netd returns (classifier, features); index [1] selects discriminator features
self.err_g_adv = self.l_adv(self.netd(self.input)[1], self.netd(self.fake)[1])
self.err_g_con = self.l_con(self.fake, self.input)
self.err_g_enc = self.l_enc(self.latent_o, self.latent_i)
self.err_g = self.err_g_adv * self.opt.w_adv + \
             self.err_g_con * self.opt.w_con + \
             self.err_g_enc * self.opt.w_enc
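The snippet above relies on a custom `l2_loss` helper rather than `nn.MSELoss`; in the reference implementation it is simply mean squared error, sketched here:

```python
import torch

def l2_loss(input, target, size_average=True):
    """Mean (or summed) squared error between two tensors."""
    if size_average:
        return torch.mean(torch.pow(input - target, 2))
    return torch.sum(torch.pow(input - target, 2))
```

With `size_average=True` (the default) this behaves like `nn.MSELoss()` with mean reduction.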

Conclusion

With the provided explanations and code snippets, readers can implement GANomaly themselves or adapt existing GAN anomaly detection codebases, gaining insight into the model’s structure and training objectives.

Tags: deep learning, GAN, Anomaly Detection, PyTorch, adversarial training, Encoder-Decoder
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
