Neural style transfer based on Tensorflow VGG19 model and Keras
Introduction
Neural style transfer is an optimisation technique used to take two images—a content image and a style reference image (such as an artwork by a famous painter)—and blend them together so the output image looks like the content image, but “painted” in the style of the style reference image.
This is implemented with the TensorFlow pretrained VGG19 model: we access the model's intermediate layers to extract content and style representations, then run gradient descent on the output image to minimise a loss function that matches its statistics to those of the content and style reference images. A total variation term with explicit regularisation is added to the loss to reduce high-frequency artifacts while preserving edge details. These statistics are extracted from the images using a convolutional network.
We will use the VGG19 network architecture, a pretrained image classification network, and tap its intermediate layers to get the content and style representations of an image. Starting from the network's input layer, the first few layer activations represent low-level features such as edges and textures. As we step deeper through the network, the final few layers represent higher-level features.
Load VGG19 and run it on our content image to check that it is being used correctly:
x = tf.keras.applications.vgg19.preprocess_input(content_image*255)
x = tf.image.resize(x, (224, 224))
vgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')
prediction_probabilities = vgg(x)
prediction_probabilities.shape
#TensorShape([1, 1000])
predicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]
[(class_name, prob) for (number, class_name, prob) in predicted_top_5]
So why do these intermediate outputs within our pretrained image classification network allow us to define style and content representations?
Here is an explanation quoted from the TensorFlow docs:
At a high level, in order for a network to perform image classification (which this network has been trained to do), it must understand the image. This requires taking the raw image as input pixels and building an internal representation that converts the raw image pixels into a complex understanding of the features present within the image.
This is also a reason why convolutional neural networks are able to generalize well: they’re able to capture the invariances and defining features within classes (e.g. cats vs. dogs) that are agnostic to background artifacts and other nuisances. Thus, somewhere between where the raw image is fed into the model and the output classification label, the model serves as a complex feature extractor. By accessing intermediate layers of the model, we’re able to describe the content and style of input images.
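Before building the model, choose which intermediate layers will represent the content and the style of the image. The later snippets refer to style_layers, content_layers, num_style_layers and num_content_layers; one common choice (following the TensorFlow style transfer tutorial; treat the exact layer names as an assumption rather than the only option) is a single deep content layer plus the first convolution of each block for style:

# Content layer where we will pull our content feature maps
content_layers = ['block5_conv2']

# Style layers of interest: the first convolution of each block
style_layers = ['block1_conv1',
                'block2_conv1',
                'block3_conv1',
                'block4_conv1',
                'block5_conv1']

num_content_layers = len(content_layers)
num_style_layers = len(style_layers)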
Build the model
Build a VGG19 model that returns a list of intermediate layer outputs by specifying the model's inputs and outputs:
def vgg_layers(layer_names):
  """Creates a VGG model that returns a list of intermediate output values."""
  # Load our model. Load pretrained VGG, trained on ImageNet data
  vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
  vgg.trainable = False

  outputs = [vgg.get_layer(name).output for name in layer_names]

  model = tf.keras.Model([vgg.input], outputs)
  return model

# Create the model
style_extractor = vgg_layers(style_layers)
style_outputs = style_extractor(style_image*255)

# Look at the statistics of each layer's output
for name, output in zip(style_layers, style_outputs):
  print(name)
  print("  shape: ", output.numpy().shape)
  print("  min: ", output.numpy().min())
  print("  max: ", output.numpy().max())
  print("  mean: ", output.numpy().mean())
  print()
The content of an image is represented by the values of the intermediate feature maps.
It turns out, the style of an image can be described by the means and correlations across the different feature maps. Calculate a Gram matrix that includes this information by taking the outer product of the feature vector with itself at each location, and averaging that outer product over all locations. This Gram matrix can be calculated for a particular layer as:
$$G^l_{cd} = \frac{\sum_{ij} F^l_{ijc}(x)\, F^l_{ijd}(x)}{IJ}$$

$G^l_{cd}$ represents the Gram matrix element for feature map $c$ and feature map $d$ at layer $l$. It quantifies the correlation between feature maps $c$ and $d$.
$F^l_{ijc}(x)$ and $F^l_{ijd}(x)$ are the activation values of feature map $c$ and feature map $d$ at position $(i, j)$. By calculating their outer product, we can capture the joint information of these two feature maps at that position.
By summing the outer products over all positions and then dividing by $IJ$ (the spatial dimensions of the feature map), we obtain the average correlation between the feature maps.
This can be implemented concisely using the tf.linalg.einsum function.
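A minimal sketch of such a helper (the name gram_matrix is our own; the einsum contraction implements the formula above):

def gram_matrix(input_tensor):
  # Outer product of the feature maps at each spatial location, summed over all locations
  result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
  # Divide by the number of locations (I*J) to get the average correlation
  input_shape = tf.shape(input_tensor)
  num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)
  return result/num_locations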
Gradient descent is an optimisation algorithm used to minimise a loss function. During the training of deep learning models, gradient descent proceeds through the following steps:
Calculate Loss: First, compute the loss based on the difference between the model’s output and the true labels.
Compute Gradients: Then, use the backpropagation algorithm to calculate the gradients of the loss function with respect to the model parameters.
Update Parameters: Finally, use these gradients to update the model parameters, thus reducing the loss.
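As a toy illustration of these three steps (the variable and loss below are made up and unrelated to style transfer), a single gradient descent update with tf.GradientTape looks roughly like this:

import tensorflow as tf

w = tf.Variable(5.0)                              # a parameter to optimise (toy example)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

with tf.GradientTape() as tape:
  loss = (w - 2.0) ** 2                           # 1. calculate the loss
grad = tape.gradient(loss, w)                     # 2. compute the gradient via backpropagation
opt.apply_gradients([(grad, w)])                  # 3. update the parameter to reduce the loss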
In this task, VGG19 is used as a frozen feature extractor: rather than fine-tuning the network's weights, gradient descent optimises the pixels of the output image.
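The training step below uses an extractor that returns dictionaries of style and content outputs, together with style_targets and content_targets. They are not shown in the snippets above, so here is a sketch of how they could be built, following the TensorFlow style transfer tutorial (the class name StyleContentModel and its exact structure are assumptions):

class StyleContentModel(tf.keras.models.Model):
  def __init__(self, style_layers, content_layers):
    super(StyleContentModel, self).__init__()
    self.vgg = vgg_layers(style_layers + content_layers)
    self.style_layers = style_layers
    self.content_layers = content_layers
    self.num_style_layers = len(style_layers)
    self.vgg.trainable = False

  def call(self, inputs):
    "Expects float input in [0, 1]"
    inputs = inputs*255.0
    preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
    outputs = self.vgg(preprocessed_input)
    style_outputs, content_outputs = (outputs[:self.num_style_layers],
                                      outputs[self.num_style_layers:])

    # Style is represented by the Gram matrices of the style layer outputs
    style_outputs = [gram_matrix(style_output) for style_output in style_outputs]

    content_dict = {content_name: value
                    for content_name, value in zip(self.content_layers, content_outputs)}
    style_dict = {style_name: value
                  for style_name, value in zip(self.style_layers, style_outputs)}

    return {'content': content_dict, 'style': style_dict}

extractor = StyleContentModel(style_layers, content_layers)

# Targets the optimisation will try to match
style_targets = extractor(style_image)['style']
content_targets = extractor(content_image)['content']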
# Define a tf.Variable to contain the image to optimize.
image = tf.Variable(content_image)
# Since this is a float image, define a function to keep the pixel values between 0 and 1:
def clip_0_1(image):
  return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)

# Create an optimizer; both LBFGS and Adam are acceptable
opt = tf.keras.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
# To optimize this, use a weighted combination of the two losses to get the total loss:
style_weight = 1e-2
content_weight = 1e4

def style_content_loss(outputs):
  style_outputs = outputs['style']
  content_outputs = outputs['content']
  style_loss = tf.add_n([tf.reduce_mean((style_outputs[name] - style_targets[name])**2)
                         for name in style_outputs.keys()])
  style_loss *= style_weight / num_style_layers

  content_loss = tf.add_n([tf.reduce_mean((content_outputs[name] - content_targets[name])**2)
                           for name in content_outputs.keys()])
  content_loss *= content_weight / num_content_layers
  loss = style_loss + content_loss
  return loss

# And finally use tf.GradientTape to update the image.
@tf.function()
def train_step(image):
  with tf.GradientTape() as tape:
    outputs = extractor(image)
    loss = style_content_loss(outputs)

  grad = tape.gradient(loss, image)
  opt.apply_gradients([(grad, image)])
  image.assign(clip_0_1(image))
# Run the optimisation
import time
start = time.time()

epochs = 10
steps_per_epoch = 100

step = 0
for n in range(epochs):
  for m in range(steps_per_epoch):
    step += 1
    train_step(image)
    print(".", end='', flush=True)
  display.clear_output(wait=True)
  display.display(tensor_to_image(image))
  print("Train step: {}".format(step))

end = time.time()
print("Total time: {:.1f}".format(end - start))
Train step: 1000
Total time: 5952.3
Total variation loss
Total variation loss is often used in image generation tasks, particularly in style transfer. Its primary aim is to maintain the smoothness and structure of the image while preventing excessive smoothing that could result in loss of detail. It is typically calculated from the differences between adjacent pixel values, encouraging small changes between neighbouring pixels. The formula can be expressed as:
$$\mathcal{L}_{TV}(y) = \sum_{i,j} \Big( \big| y_{i+1,j} - y_{i,j} \big| + \big| y_{i,j+1} - y_{i,j} \big| \Big)$$

Here, $y$ is the generated image, and $i$ and $j$ are the pixel coordinates of the image. This loss penalises large variations between adjacent pixels to maintain the natural smoothness of the image.
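A sketch of how this formula could be computed by hand (the helper names high_pass_x_y and total_variation_loss are our own; the training step further below uses the built-in tf.image.total_variation instead):

def high_pass_x_y(image):
  # Differences between horizontally and vertically adjacent pixels
  x_var = image[:, :, 1:, :] - image[:, :, :-1, :]
  y_var = image[:, 1:, :, :] - image[:, :-1, :, :]
  return x_var, y_var

def total_variation_loss(image):
  x_deltas, y_deltas = high_pass_x_y(image)
  return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))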
One downside to this basic implementation is that it produces a lot of high frequency artifacts. Decrease these using an explicit regularisation term on the high frequency components of the image.
Comparing the high-frequency components of the content image and the stylised output shows how they have increased in the stylised result.
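One possible way to visualise this (an illustrative sketch, not part of the original post) is to compare the edge maps of the content image and the stylised result with tf.image.sobel_edges:

# Sobel edges highlight the high-frequency components of an image.
# The result has shape [batch, height, width, channels, 2];
# the last axis holds the vertical and horizontal gradients.
content_edges = tf.image.sobel_edges(content_image)
stylised_edges = tf.image.sobel_edges(image)   # `image` is the optimised tf.Variable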
Re-run the optimisation
By using gradient descent to minimise a loss function that incorporates total variation regularisation, we can effectively reduce high-frequency artifacts in the image while preserving edge details.
# Choose a weight for the total_variation_loss
total_variation_weight = 30
@tf.function()
def train_step(image):
  with tf.GradientTape() as tape:
    outputs = extractor(image)
    loss = style_content_loss(outputs)
    loss += total_variation_weight*tf.image.total_variation(image)

  grad = tape.gradient(loss, image)
  opt.apply_gradients([(grad, image)])
  image.assign(clip_0_1(image))

# Reinitialize the image-variable and the optimizer:
opt = tf.keras.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
image = tf.Variable(content_image)
# Run the optimisation
import time
start = time.time()

epochs = 10
steps_per_epoch = 100

step = 0
for n in range(epochs):
  for m in range(steps_per_epoch):
    step += 1
    train_step(image)
    print(".", end='', flush=True)
  display.clear_output(wait=True)
  display.display(tensor_to_image(image))
  print("Train step: {}".format(step))

end = time.time()
print("Total time: {:.1f}".format(end - start))
Check out the GitHub repo below for the complete implementation of the VGG19 model with the configurations above on the neural style transfer task, as well as the follow-up blog post: Deploy the model on AWS SageMaker
• Environment and dependencies set up
• Visualise the input images
• Define content and style image representations
• Configure the VGG19 intermediate layers
• Build the VGG19 model
• Calculate and extract style and content
• Run gradient descent
• Total variation loss
• Re-run the optimisation
• Save the model on the AWS SageMaker Jupyter notebook server directory