
CNN visualization of intermediate steps using Saliency maps

Using TensorFlow 2.0 and Keras

Based on the CIFAR-10 deep neural network prepared in the CnnCifar10 notebook (HTML / Jupyter).

Part 1's visualizations (HTML / Jupyter) based on activation maps have shown their limits. Let's try some more advanced techniques.

Learning goals:

  • saliency maps
  • attribution maps
In [1]:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import datasets, layers, losses, models
import tensorflow as tf
import seaborn as sns
In [2]:
# Workaround for the "duplicate OpenMP runtime" crash seen with some conda/MKL setups
if True:
    import os
    os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

Data : CIFAR-10

Images are normalized to $[0, 1]$

In [3]:
(xTrain, yTrain),(xTest, yTest) = datasets.cifar10.load_data()
xTrain = xTrain / 255.
xTest  = xTest  / 255.

classNames = ['plane', 'car', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

xTrain.shape, xTest.shape
Out[3]:
((50000, 32, 32, 3), (10000, 32, 32, 3))
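
As a quick sanity check on the data and labels (a small added sketch, reusing only the variables defined above), the first few test images can be displayed with their class names:

fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for img, label, ax in zip(xTest, yTest, axes.ravel()):
    ax.imshow(img)
    ax.set_title(classNames[label[0]])
plt.setp(axes, xticks=[], yticks=[], frame_on=False);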

Model

In [4]:
model0 = models.load_model('models/CIFAR-10_CNN5.h5')
model0.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv_0 (Conv2D)              (None, 30, 30, 32)        896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 15, 15, 32)        0         
_________________________________________________________________
conv_1 (Conv2D)              (None, 13, 13, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64)          0         
_________________________________________________________________
conv_2 (Conv2D)              (None, 4, 4, 64)          36928     
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dropout (Dropout)            (None, 1024)              0         
_________________________________________________________________
dense_0 (Dense)              (None, 64)                65600     
_________________________________________________________________
dense_1 (Dense)              (None, 10)                650       
=================================================================
Total params: 122,570
Trainable params: 122,570
Non-trainable params: 0
_________________________________________________________________
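
For reference, the summary above is consistent with the following architecture sketch. The activation functions (assumed here to be ReLU, with a softmax output) and the dropout rate are not visible in the summary, so they are assumptions:

modelSketch = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', name='conv_0',
                  input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu', name='conv_1'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu', name='conv_2'),
    layers.Flatten(),
    layers.Dropout(0.5),                            # rate is an assumption
    layers.Dense(64, activation='relu', name='dense_0'),
    layers.Dense(10, activation='softmax', name='dense_1'),
])

The parameter counts of this sketch (896, 18 496, 36 928, 65 600, 650) match the summary above.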

Helpers

In [5]:
def predictUntilLayer(model, layerIndex, data):
    """ Execute prediction on a portion of the model """
    intermediateModel = models.Model(inputs=model.input,
                                     outputs=model.layers[layerIndex].output)
    return intermediateModel.predict(data)

def plotSaliencyMap(original, saliencies, activations, activationsAfter, layerNames, title="Input"):
    """ Plot saliency map and comparison to input """
    n = len(saliencies)
    fig, axes = plt.subplots(n, 5, figsize=(14, 4*n), sharey=True)
    axes[0][0].imshow(original)
    axes[0][0].set_title(title)
    
    for i in range(n):
        axes[i][1].imshow(normalizeImage(saliencies[i]))
        axes[i][1].set_title("%s Saliency" % layerNames[i])
        axes[i][2].imshow(normalizeImage((saliencies[i] - original)))
        axes[i][2].set_title("Saliency - Input")
        act = activations[i]
        if len(act.shape) == 2:
            axes[i][3].imshow(np.clip(act, 0, 1), cmap='gray')
        else:
            axes[i][3].text(0, 0.5, "%.3g" % act)
        axes[i][3].set_title("Activation on original")
        act = activationsAfter[i]
        if len(act.shape) == 2:
            axes[i][4].imshow(np.clip(act, 0, 1), cmap='gray')
        else:
            axes[i][4].text(0, 0.5, "%.3g" % act)
        axes[i][4].set_title("Activation on saliency")
    plt.setp(axes, xticks=[], yticks=[], frame_on=False)
    
def normalizeImage(img):
    """ Linearly rescale image to have pixel values in [0, 1] """
    mi = img.min()
    ma = img.max()
    if ma == mi:  # guard against constant images
        return np.zeros_like(img)
    return (img - mi) / (ma - mi)
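
A quick, hypothetical usage example of these helpers (not part of the original pipeline): predictUntilLayer returns the activations of any intermediate layer, and normalizeImage rescales an arbitrary array for display.

act0 = predictUntilLayer(model0, 0, xTest[:1])    # conv_0 activations
print(act0.shape)                                 # expected: (1, 30, 30, 32)
plt.imshow(normalizeImage(act0[0, :, :, 0]), cmap='gray');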

Network end-to-end saliency maps

The original saliency map paper [2, section 3] calls this "activation maximization". The activation is reinforced through gradient ascent in order to maximize : $$ x^* = \arg\max_{x \ \text{s.t.} \ \Vert x \Vert = \rho} h_{ij}(\theta, x)$$

In which :

  • $h_{ij}$ is unit $j$ of layer $i$
  • $x$ is an element of the image space ($x \in \mathbb{R}^{p \times q}$, $p$ and $q$ being the width and height)
  • $\theta$ are the coefficients of the unit

This problem is very difficult to tackle with a numerical optimizer, so a sub-optimal solution is sought through gradient ascent.

In [2], gradient ascent is performed 9 times on randomly picked test samples.

In [3], the saliency map is computed only on a modified softmax layer, in order to compute the best activation for a given class.

We transform this problem back into the classic minimization used to train the deep neural network :

  • select a target class $c$ to detect
  • compute the loss using cross-entropy versus the selected class, adding regularization terms
  • compute the corresponding gradients
  • subtract the gradient from the input image
$$ \mathcal{I} \leftarrow \mathcal{I} - \epsilon \, \nabla_{\mathcal{I}} \mathcal{L}(y = c) $$

In which :

  • $\mathcal{I}$ is the input image to transform into a saliency map, initialized as a neutral black or grey
  • $c$ is the target class for which we want to compute the saliency map
  • $\mathcal{L}$ is the cross-entropy loss, with regularization
  • $\epsilon$ is the learning rate of the gradient descent

This amounts to training a DNN in which the network coefficients are frozen and the weights to optimize are the pixels of the input image.
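
To make this analogy explicit, here is a minimal sketch (an addition: gradientDescentSaliency is a hypothetical name, and regularization is omitted for brevity) in which the image is held in a tf.Variable and updated by a Keras optimizer, exactly like trainable weights:

def gradientDescentSaliency(model, targetClass, numEpochs=100, learningRate=0.05):
    """ Sketch: optimize the input pixels as if they were network weights """
    img = tf.Variable(tf.zeros((1, 32, 32, 3)))           # neutral black start
    optimizer = tf.keras.optimizers.SGD(learning_rate=learningRate)
    loss = tf.keras.losses.SparseCategoricalCrossentropy()
    target = tf.constant([[targetClass]])
    for _ in range(numEpochs):
        with tf.GradientTape() as tape:
            objective = loss(target, model(img))          # cross-entropy vs class c
        grads = tape.gradient(objective, [img])
        optimizer.apply_gradients(zip(grads, [img]))      # gradient step on the pixels
    return img[0]

For instance, gradientDescentSaliency(model0, 3) would target the 'cat' class. The next cell implements the same idea with an explicit gradient step and an L2 regularization term on the image norm.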

In [6]:
@tf.function
def gradientMinLossStep(model, img, learningRate, lambdaReg, trueActivation, loss):
    """ Minimize error using backpropagation """
    with tf.GradientTape() as tape:
        tape.watch(img)
        tape.watch(trueActivation)
        activation = model(img) 
        imgNorm = tf.norm(img, 'euclidean')
        # Loss + regularization
        objective = loss(trueActivation, activation) + tf.cast(lambdaReg * imgNorm, tf.float32) 
    grads = tape.gradient(objective, img)
    
    # Gradient descent to minimize objective (cross entropy)
    img = img - learningRate * grads 
    
    return objective, grads, img, imgNorm

def endToEndSaliency(numEpochs, model, img, learningRate, lambdaReg, trueActivation):
    
    imgTf = tf.expand_dims(img, axis=0)
    trueActivationTf = tf.constant(trueActivation.reshape(1, -1))
    loss = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
        
    intensityHist, gradHist, imgNorms = [], [], []
    
    for epoch in range(numEpochs):
       
        intensity, grads, imgTf, imgNorm = gradientMinLossStep(model, imgTf, learningRate, 
                                                               lambdaReg, trueActivationTf, loss)
        
        intensityHist.append(intensity)
        gradHist.append(grads)
        imgNorms.append(imgNorm)
    
    # Return the final saliency image (un-normalized) and the histories
    return (imgTf[0], intensityHist, gradHist, imgNorms)

Apply on neutral grey image

In [7]:
mediumGray = np.ones((32, 32, 3)) * 0.2
e2eSaliencies = []

for c in range(10):
    e2eSaliencies.append(endToEndSaliency(100, model0, mediumGray, 0.05, 1e-3, np.array([c * 1.]))) 
In [8]:
fig, axes = plt.subplots(2, 5, figsize=(15, 7))
for c, sal, ax in zip(classNames, e2eSaliencies, axes.ravel()):
    ax.imshow(normalizeImage(sal[0].numpy())) 
    ax.set_title(c)
    pred = model0.predict(sal[0].numpy().reshape(1, 32, 32, 3))[0]
    predMaxIndex = np.argmax(pred)
    ax.set_xlabel("predicted : %s %.2f%%" % (classNames[predMaxIndex], pred[predMaxIndex] * 100))
plt.setp(axes, xticks=[], yticks=[], frame_on=False);
In [9]:
fig, axes = plt.subplots(1, 2, figsize=(16, 4))

for i, saliency in enumerate(e2eSaliencies):
    axes[0].plot(saliency[3])
    axes[1].semilogy(saliency[1], label=classNames[i])

axes[0].set_xlabel("Epoch")
axes[0].set_title('Image norm (L2)')
axes[0].grid()

axes[1].set_xlabel("Epoch")
axes[1].set_title('Loss (sparse cross entropy)')
axes[1].legend()
axes[1].grid()

Apply saliency map to a test image

The saliency images are multiplied element-wise with a test image, and the result is used as input to the model for prediction.

In [10]:
testImgIndex =  3
sampleInput = xTest[testImgIndex]
sampleClass = classNames[yTest[testImgIndex][0]]

fig, axes = plt.subplots(2, 5, figsize=(15, 7))
for c, sal, ax in zip(classNames, e2eSaliencies, axes.ravel()):
    saliencyVal = sal[0].numpy()
    gradTimesImage = np.multiply(saliencyVal, sampleInput)
    ax.imshow(normalizeImage(gradTimesImage)) 
    ax.set_title("%s x saliency(%s)" % (sampleClass, c))
    pred = model0.predict(gradTimesImage.reshape(1, 32, 32, 3))[0]
    predMaxIndex = np.argmax(pred)
    ax.set_xlabel("predicted : %s %.2f%%" % (classNames[predMaxIndex], pred[predMaxIndex] * 100))
plt.setp(axes, xticks=[], yticks=[], frame_on=False);

Almost all of the modified images are classified as the target class for which the saliency map was made.
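
This can be checked numerically with a small added verification (reusing sampleInput and e2eSaliencies from the cells above): count how many of the ten modified images are predicted as their target class.

hits = 0
for c, sal in enumerate(e2eSaliencies):
    gradTimesImage = np.multiply(sal[0].numpy(), sampleInput)
    pred = model0.predict(gradTimesImage.reshape(1, 32, 32, 3))[0]
    hits += int(np.argmax(pred) == c)
print("%d / 10 modified images match their target class" % hits)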

Selected image saliency

Another way to use saliency maps is to inspect a given layer (or a given unit within a layer) in order to find which input image leads to the highest activation. In this problem there is no clearly defined target, as there previously was with the probability of the target class: the aim is not a minimum but a maximum of intensity on the layer/unit output. This is therefore a gradient ascent, as presented in [2] and [3], with added regularization.

$$ \mathcal{I} \leftarrow \mathcal{I} + \epsilon \, \nabla_{\mathcal{I}} \left( \Vert z \Vert - \lambda \Vert \mathcal{I} \Vert_F \right) $$

In which :

  • $\mathcal{I}$ is the input image to transform into a saliency map
  • $z$ is either the full output of layer $l$ or the output of a given unit of this layer
  • $\epsilon$ is the learning rate of the gradient ascent
  • $\lambda$ is the regularization hyper-parameter
  • $\Vert . \Vert_F$ is the Frobenius norm

It can be performed on a selected test image. The main interest is to visualize not only the parts of the image that activate the full network, but also the ones that activate a part of it.

The following code is inspired by [4], but uses Keras and TensorFlow 2.0. It also borrows from the DeepDream demo of TensorFlow [5], without the combination of two layers, and without the clipping of pixel values at each step that inserts high-frequency noise (ripples). Regularization is added in order to speed up and enhance the saliency maps.

We apply this technique on the successive layers of the neural network.

In [11]:
@tf.function
def gradientMaxStep(model, img, learningRate, lambdaReg, unit=None):
    """ Gradient ascent step in order to maximize the output (on the specified unit if provided)"""
    with tf.GradientTape() as tape:
        tape.watch(img)
        activation = model(img)
        # Select unit if any
        if unit is not None:
            if len(activation.shape) == 4:
                activationTarget = activation[:,:,:,unit]  
            else:
                activationTarget = activation[:,unit]
        else:
            activationTarget = activation
        imgNorm = tf.cast(tf.norm(img, 'euclidean'), tf.float32)
        # Objective to maximize, with regularization
        # tf.math.abs(tf.math.reduce_mean(activationTarget))
        objective = tf.norm(activationTarget) - lambdaReg * imgNorm
    grads = tape.gradient(objective, img)
    
    # Gradient ascent 
    img = img + learningRate * grads
   
    return img, objective, grads, imgNorm

def getPartialSalient(numEpochs, model, img, learningRate, lambdaReg, unit=None):
    """ Saliency computation on part or full network in "unsupervised mode"
         i.e. no target class provided.
        Output is maximized through gradient ascent with regularization (soft constraint on the image norm)
    """
    imgTf = tf.expand_dims(img, axis=0)

    intensityHist, gradHist, imgNorms = [], [], []
    
    for epoch in range(numEpochs):
       
        imgTf, intensity, grads, imgNorm = gradientMaxStep(model, imgTf, learningRate, lambdaReg, unit)
        
        intensityHist.append(intensity)
        gradHist.append(grads)
        imgNorms.append(imgNorm)
    
    return (imgTf[0], intensityHist, gradHist, imgNorms)
In [12]:
layerTitles = ['Conv layer #0', 'Conv layer #1', 'Conv layer #2', 'Dense layer #0']
partialModels = [models.Model(inputs=model0.input,
                      outputs=model0.layers[i].output) for i in [0, 2, 4, 7]]

learningRates = [0.05, 0.05, 0.01, 0.005]
lambdaRegs = [1, 5, 0.1, 1] 
testImgIndex =  17
sampleInput = xTest[testImgIndex]
saliencyRes, activations, activationsAfter  = [], [], []
for m, learningRate, lambdaReg in zip(partialModels, learningRates, lambdaRegs):
    saliencyRes.append(getPartialSalient(200, m, sampleInput, learningRate, lambdaReg))
    act = m.predict(np.array([sampleInput])).squeeze(axis=0)
    if len(act.shape) == 3:
        activations.append(np.mean(act, axis=2)) 
    else:
        activations.append(np.mean(act)) 
    actAfter = m.predict(np.array([saliencyRes[-1][0]])).squeeze(axis=0)
    if len(actAfter.shape) == 3:
        activationsAfter.append(np.mean(actAfter, axis=2)) 
    else:
        activationsAfter.append(np.mean(actAfter))
In [13]:
m.predict(np.array([saliencyRes[-1][0]])).shape
Out[13]:
(1, 64)
In [14]:
plotSaliencyMap(sampleInput, 
                [saliencyRes[i][0].numpy() for i in range(len(saliencyRes))], 
                activations, activationsAfter,
                layerNames=layerTitles)
In [15]:
fig, axes = plt.subplots(1, 2, figsize=(16, 4))
for i in range(len(saliencyRes)):
    axes[0].plot(saliencyRes[i][3], label=layerTitles[i])
    axes[1].plot(saliencyRes[i][1], label=layerTitles[i])
axes[0].set_title('Image norm during gradient ascent')
axes[0].set_xlabel('epoch')
axes[0].legend()
axes[0].grid()
axes[1].set_title('Intensity (objective function)')
axes[1].set_xlabel('epoch')
axes[1].grid()

At the end of the saliency processing, the pixel values are brought back to $[0, 1]$ for display (via normalizeImage).

Gradient display

In [16]:
saliencyIndex = 2
fig, axes = plt.subplots(2, 5, figsize=(16, 6))
for i, ax in enumerate(axes.ravel()):
    epoch = 20 * i
    gradImg = (saliencyRes[saliencyIndex][2][epoch].numpy() * 0.5 + 0.5).reshape(32, 32, 3)
    ax.imshow(np.clip(gradImg, 0, 1))
    ax.set_title("epoch %d" % epoch)
plt.setp(axes, xticks=[], yticks=[], frame_on=False);

print(np.min(saliencyRes[saliencyIndex][2][100].numpy()), np.max(saliencyRes[saliencyIndex][2][90].numpy()))
-0.9398594676330125 0.7899255318891629

Saliency of a given unit

Starting from a grey image and maximizing the activation of a full hidden layer, we have no idea what the maximization will produce, since the layer is supposed to contribute to all classes.

However we may apply such processing with focus on a given unit within the layer.

In [17]:
selectedUnits = [0, 4, 10, 60]
partialModels2 = [models.Model(inputs=model0.input,
                      outputs=model0.layers[i].output) for i in [2, 2, 7, 7]]
layerTitles2 = ['Layer #1, Conv #%d' % selectedUnits[0], 'Layer #1, Conv #%d' % selectedUnits[1],\
                'Layer #4, Unit #%d' % selectedUnits[2], 'Layer #4, Unit #%d' % selectedUnits[3]]

learningRates = [0.05, 0.05, 0.01, 0.01]
testImgIndex = 17 # 2 = boat, 3 = Concord, 17 = horse
sampleInputLocal = mediumGray 
# xTest[testImgIndex]
# np.random.normal(0.4, 0.1, (32, 32, 3))
saliencyResLocal, activationsLocal, activationsAfterLocal = [], [], []
for m, learningRate, unit in zip(partialModels2, learningRates, selectedUnits):
    saliencyResLocal.append(getPartialSalient(200, m, sampleInputLocal, learningRate, 1, unit))
    act = m.predict(np.array([sampleInputLocal])).squeeze(axis=0)
    if len(act.shape) == 3:
        activationsLocal.append(act[:,:,unit]) 
    else:
        activationsLocal.append(act[unit]) 
    actAfter = m.predict(np.array([saliencyResLocal[-1][0]])).squeeze(axis=0)
    if len(actAfter.shape) == 3:
        activationsAfterLocal.append(actAfter[:,:,unit]) 
    else:
        activationsAfterLocal.append(actAfter[unit])
In [18]:
plotSaliencyMap(sampleInputLocal, 
                [saliencyResLocal[i][0].numpy() for i in range(len(saliencyResLocal))], 
                activationsLocal, activationsAfterLocal,
                layerNames=layerTitles2)