Neural machine translation

Neural machine translation with a basic sequence-to-sequence Transformer architecture and Keras

Introduction

In this blog, we will create and train a sequence-to-sequence Transformer model to translate Portuguese into English.

Transformers are deep neural networks that replace CNNs and RNNs with self-attention. Self-attention allows Transformers to easily transmit information across the input sequences.

As suggested in the Google AI Blog post:

Neural networks for machine translation typically contain an encoder reading the input sentence and generating a representation of it. A decoder then generates the output sentence word by word while consulting the representation generated by the encoder. The Transformer starts by generating initial representations, or embeddings, for each word… Then, using self-attention, it aggregates information from all of the other words, generating a new representation per word informed by the entire context, represented by the filled balls. This step is then repeated multiple times in parallel for all words, successively generating new representations.

Let’s dive into it!

Set up

Begin by installing TensorFlow Datasets for loading the dataset and TensorFlow Text for text preprocessing:

# google colab
# Install the most recent version of TensorFlow to use the improved
# masking support for `tf.keras.layers.MultiHeadAttention`.
!apt install --allow-change-held-packages libcudnn8=8.1.0.77-1+cuda11.2
!pip uninstall -y -q tensorflow keras tensorflow-estimator tensorflow-text
!pip install protobuf~=3.20.3
!pip install -q tensorflow_datasets
!pip install -q -U tensorflow-text tensorflow
!pip install datasets

import logging
import time

import numpy as np
import matplotlib.pyplot as plt

import tensorflow_datasets as tfds
import tensorflow as tf

import tensorflow_text

Data handling

Download the dataset

Use TensorFlow Datasets to load the Portuguese-English translation dataset from the TED Talks Open Translation Project. This dataset contains approximately 52,000 training, 1,200 validation and 1,800 test examples.

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en',
                               with_info=True,
                               as_supervised=True)

train_examples, val_examples = examples['train'], examples['validation']
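
Before tokenizing, it helps to peek at a few raw sentence pairs. A minimal sketch (the exact sentences printed depend on the dataset order):

# Print a few raw (Portuguese, English) pairs from the training set.
for pt_examples, en_examples in train_examples.batch(3).take(1):
  print('> Examples in Portuguese:')
  for pt in pt_examples.numpy():
    print(pt.decode('utf-8'))
  print('> Examples in English:')
  for en in en_examples.numpy():
    print(en.decode('utf-8'))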

After we have loaded the dataset, we will tokenize the text so that each element is represented as a token ID (a numeric representation).

Set up the tokenizer

Download, extract, and import the saved_model:

model_name = 'ted_hrlr_translate_pt_en_converter'
tf.keras.utils.get_file(
    f'{model_name}.zip',
    f'https://storage.googleapis.com/download.tensorflow.org/models/{model_name}.zip',
    cache_dir='.', cache_subdir='', extract=True
)
tokenizers = tf.saved_model.load(f'{model_name}_extracted/{model_name}')
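
As a quick sanity check of the loaded tokenizers (a small sketch; the printed IDs depend on the saved vocabulary): tokenize returns a RaggedTensor of token IDs wrapped in [START]/[END] markers, and detokenize reverses it.

sample = tf.constant(['este é um problema que temos que resolver .'])
tokens = tokenizers.pt.tokenize(sample)   # RaggedTensor of token IDs.
print(tokens)
print(tokenizers.pt.detokenize(tokens))   # Back to (roughly) the original text.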

Set up a data pipeline with tf.data

The following function takes batches of text as input, and converts them to a format suitable for training.

  1. It tokenizes them into ragged batches.
  2. It trims each to be no longer than MAX_TOKENS.
  3. It splits the target (English) tokens into inputs and labels. These are shifted by one step so that at each input location the label is the id of the next token.
  4. It converts the RaggedTensor to padded dense Tensor.
  5. It returns an (inputs, labels) pair.

MAX_TOKENS = 128

def prepare_batch(pt, en):
  pt = tokenizers.pt.tokenize(pt)      # Output is ragged.
  pt = pt[:, :MAX_TOKENS]              # Trim to MAX_TOKENS.
  pt = pt.to_tensor()                  # Convert to 0-padded dense Tensor.

  en = tokenizers.en.tokenize(en)
  en = en[:, :(MAX_TOKENS+1)]
  en_inputs = en[:, :-1].to_tensor()   # Drop the [END] tokens.
  en_labels = en[:, 1:].to_tensor()    # Drop the [START] tokens.

  return (pt, en_inputs), en_labels

The function below converts a dataset of text examples into batches of data ready for training:

  1. shuffle randomizes the order of the examples.
  2. batch assembles them into batches of BATCH_SIZE examples.
  3. map applies prepare_batch to tokenize, trim and pad each batch (the tokenizer is much more efficient on large batches).
  4. Finally, prefetch runs the data pipeline in parallel with the model to ensure that data is available when needed.

BUFFER_SIZE = 20000
BATCH_SIZE = 64

def make_batches(ds):
  return (
      ds
      .shuffle(BUFFER_SIZE)
      .batch(BATCH_SIZE)
      .map(prepare_batch, tf.data.AUTOTUNE)
      .prefetch(buffer_size=tf.data.AUTOTUNE))

# Create training and validation set batches.
train_batches = make_batches(train_examples)
val_batches = make_batches(val_examples)
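
To see what the pipeline produces, grab one batch and check its shapes (a sketch; sequence lengths vary from batch to batch). The (pt, en) pair from this batch is reused in the shape checks later in this post:

# Take a single (inputs, labels) batch from the training pipeline.
for (pt, en), en_labels in train_batches.take(1):
  break

print(pt.shape)         # (batch_size, pt_seq_len)
print(en.shape)         # (batch_size, en_seq_len)
print(en_labels.shape)  # (batch_size, en_seq_len)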

Define the components

We will start by implementing the components of a Transformer as a standard sequence-to-sequence model with an encoder and a decoder.

Figure 1. The original transformer diagram.

The embedding and positional encoding layer

The inputs to both the encoder and decoder use the same embedding and positional encoding logic.

A Transformer adds a “Positional Encoding” to the embedding vectors. It uses a set of sines and cosines at different frequencies (across the sequence). By definition nearby elements will have similar position encodings.

We use the following formula, from the original Transformer paper, to calculate the positional encoding, where pos is the position and i indexes the depth dimension:
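
PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))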

The function below implements it, simply concatenating the vectors of sines and cosines along the depth axis rather than interleaving them:

def positional_encoding(length, depth):
  depth = depth/2

  positions = np.arange(length)[:, np.newaxis]     # (seq, 1)
  depths = np.arange(depth)[np.newaxis, :]/depth   # (1, depth)

  angle_rates = 1 / (10000**depths)                # (1, depth)
  angle_rads = positions * angle_rates             # (pos, depth)

  pos_encoding = np.concatenate(
      [np.sin(angle_rads), np.cos(angle_rads)],
      axis=-1)

  return tf.cast(pos_encoding, dtype=tf.float32)

The position encoding function is a stack of sines and cosines that vibrate at different frequencies depending on their location along the depth of the embedding vector. They vibrate across the position axis.
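
A quick way to see this pattern (an optional sketch, using the matplotlib import from the setup) is to plot the encoding matrix:

# Plot the positional encoding: x-axis is position, y-axis is depth.
pos_encoding = positional_encoding(length=2048, depth=512)
plt.pcolormesh(pos_encoding.numpy().T, cmap='RdBu')
plt.ylabel('Depth')
plt.xlabel('Position')
plt.colorbar()
plt.show()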

Next, create a PositionalEmbedding layer that looks up a token’s embedding vector and adds the position vector:

class PositionalEmbedding(tf.keras.layers.Layer):
  def __init__(self, vocab_size, d_model):
    super().__init__()
    self.d_model = d_model
    self.embedding = tf.keras.layers.Embedding(vocab_size, d_model, mask_zero=True)
    self.pos_encoding = positional_encoding(length=2048, depth=d_model)

  def compute_mask(self, *args, **kwargs):
    return self.embedding.compute_mask(*args, **kwargs)

  def call(self, x):
    length = tf.shape(x)[1]
    x = self.embedding(x)
    # This factor sets the relative scale of the embedding and positional_encoding.
    x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
    x = x + self.pos_encoding[tf.newaxis, :length, :]
    return x
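
The shape checks in the following sections refer to pt_emb and en_emb. Here is a minimal sketch that builds them from the (pt, en) batch grabbed earlier, using d_model=512 to match the test layers below (the names embed_pt and embed_en are just illustrative):

# Embed one batch of token IDs for use in the layer tests below.
embed_pt = PositionalEmbedding(
    vocab_size=tokenizers.pt.get_vocab_size().numpy(), d_model=512)
embed_en = PositionalEmbedding(
    vocab_size=tokenizers.en.get_vocab_size().numpy(), d_model=512)

pt_emb = embed_pt(pt)   # (batch_size, pt_seq_len, 512)
en_emb = embed_en(en)   # (batch_size, en_seq_len, 512)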

Add and normalise

These “Add & Norm” blocks are scattered throughout the model. Each one joins a residual connection and runs the result through a LayerNormalization layer.
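
As a tiny standalone illustration (toy tensors, not part of the model code), one such block boils down to the following; the Add layer is used instead of + so that Keras masks keep propagating:

# One "Add & Norm" step on toy tensors.
x = tf.random.normal((2, 5, 8))                  # (batch, seq_len, d_model)
sublayer_output = tf.random.normal((2, 5, 8))    # e.g. output of an attention or FFN sub-layer

add = tf.keras.layers.Add()
norm = tf.keras.layers.LayerNormalization()

y = norm(add([x, sublayer_output]))              # residual connection, then layer norm
print(y.shape)                                   # (2, 5, 8)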

The base attention layer

Attention layers are used throughout the model. These are all identical except for how the attention is configured. Each one contains a layers.MultiHeadAttention, a layers.LayerNormalization and a layers.Add. We will start from a simple base class that just contains these component layers:

class BaseAttention(tf.keras.layers.Layer):
  def __init__(self, **kwargs):
    super().__init__()
    self.mha = tf.keras.layers.MultiHeadAttention(**kwargs)
    self.layernorm = tf.keras.layers.LayerNormalization()
    self.add = tf.keras.layers.Add()

The cross attention layer

At the literal center of the Transformer is the cross-attention layer, which connects the encoder and decoder. It is the most straightforward use of attention in the model: it performs the same task as the attention block in earlier NMT-with-attention models.

To implement this, we pass the target sequence x as the query and the context sequence as the key/value when calling the mha layer:

class CrossAttention(BaseAttention):
  def call(self, x, context):
    attn_output, attn_scores = self.mha(
        query=x,
        key=context,
        value=context,
        return_attention_scores=True)

    # Cache the attention scores for plotting later.
    self.last_attn_scores = attn_scores

    x = self.add([x, attn_output])
    x = self.layernorm(x)

    return x
# test out the layer
sample_ca = CrossAttention(num_heads=2, key_dim=512)

print(pt_emb.shape)
print(en_emb.shape)
print(sample_ca(en_emb, pt_emb).shape)

# result
"""
(64, 75, 512)
(64, 53, 512)
(64, 53, 512)
"""

The global self-attention layer

This layer is responsible for processing the context sequence, and propagating information along its length:

To implement this layer we just need to pass the sequence x as the query, key, and value arguments to the mha layer:

class GlobalSelfAttention(BaseAttention):
  def call(self, x):
    attn_output = self.mha(
        query=x,
        value=x,
        key=x)
    x = self.add([x, attn_output])
    x = self.layernorm(x)
    return x
# test out the layer
sample_gsa = GlobalSelfAttention(num_heads=2, key_dim=512)

print(pt_emb.shape)
print(sample_gsa(pt_emb).shape)

# result
"""
(64, 75, 512)
(64, 75, 512)
"""

The causal self-attention layer

This layer does a similar job as the global self-attention layer, for the output sequence:

To build a causal self-attention layer, we need to use an appropriate mask when computing the attention scores and summing the attention values. This is handled by passing use_causal_mask=True to the MultiHeadAttention layer:

class CausalSelfAttention(BaseAttention):
  def call(self, x):
    attn_output = self.mha(
        query=x,
        value=x,
        key=x,
        use_causal_mask=True)
    x = self.add([x, attn_output])
    x = self.layernorm(x)
    return x
# test out the layer
sample_csa = CausalSelfAttention(num_heads=2, key_dim=512)

print(en_emb.shape)
print(sample_csa(en_emb).shape)

# result
"""
(64, 53, 512)
(64, 53, 512)
"""

The feed forward network

The Transformer also includes a point-wise feed-forward network in both the encoder and decoder:

The network consists of two linear layers (tf.keras.layers.Dense) with a ReLU activation in-between, and a dropout layer. As with the attention layers the code here also includes the residual connection and normalization:

class FeedForward(tf.keras.layers.Layer):
  def __init__(self, d_model, dff, dropout_rate=0.1):
    super().__init__()
    self.seq = tf.keras.Sequential([
        tf.keras.layers.Dense(dff, activation='relu'),
        tf.keras.layers.Dense(d_model),
        tf.keras.layers.Dropout(dropout_rate)
    ])
    self.add = tf.keras.layers.Add()
    self.layer_norm = tf.keras.layers.LayerNormalization()

  def call(self, x):
    x = self.add([x, self.seq(x)])
    x = self.layer_norm(x)
    return x

# test out the layer
sample_ffn = FeedForward(512, 2048)

print(en_emb.shape)
print(sample_ffn(en_emb).shape)

# result
"""
(64, 53, 512)
(64, 53, 512)
"""

The encoder layer

The encoder contains a stack of N encoder layers, where each EncoderLayer contains a GlobalSelfAttention and a FeedForward layer:

Here is the definition of the EncoderLayer:

class EncoderLayer(tf.keras.layers.Layer):
  def __init__(self, *, d_model, num_heads, dff, dropout_rate=0.1):
    super().__init__()

    self.self_attention = GlobalSelfAttention(
        num_heads=num_heads,
        key_dim=d_model,
        dropout=dropout_rate)

    self.ffn = FeedForward(d_model, dff)

  def call(self, x):
    x = self.self_attention(x)
    x = self.ffn(x)
    return x

A quick test again: the output has the same shape as the input.

# test out the layer
sample_encoder_layer = EncoderLayer(d_model=512, num_heads=8, dff=2048)

print(pt_emb.shape)
print(sample_encoder_layer(pt_emb).shape)

# result
"""
(64, 75, 512)
(64, 75, 512)
"""

The encoder

The encoder consists of:

  • A PositionalEmbedding layer at the input.
  • A stack of EncoderLayer layers.

class Encoder(tf.keras.layers.Layer):
  def __init__(self, *, num_layers, d_model, num_heads,
               dff, vocab_size, dropout_rate=0.1):
    super().__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.pos_embedding = PositionalEmbedding(
        vocab_size=vocab_size, d_model=d_model)

    self.enc_layers = [
        EncoderLayer(d_model=d_model,
                     num_heads=num_heads,
                     dff=dff,
                     dropout_rate=dropout_rate)
        for _ in range(num_layers)]
    self.dropout = tf.keras.layers.Dropout(dropout_rate)

  def call(self, x):
    # `x` is token-IDs shape: (batch, seq_len)
    x = self.pos_embedding(x)  # Shape `(batch_size, seq_len, d_model)`.

    # Add dropout.
    x = self.dropout(x)

    for i in range(self.num_layers):
      x = self.enc_layers[i](x)

    return x  # Shape `(batch_size, seq_len, d_model)`.

And test the encoder:

# test out the layer
# Instantiate the encoder.
sample_encoder = Encoder(num_layers=4,
                         d_model=512,
                         num_heads=8,
                         dff=2048,
                         vocab_size=8500)

sample_encoder_output = sample_encoder(pt, training=False)
print(pt.shape)
print(sample_encoder_output.shape)  # Shape `(batch_size, input_seq_len, d_model)`.
# result
"""
(64, 75)
(64, 75, 512)
"""

The decoder layer

The decoder’s stack is slightly more complex, with each DecoderLayer containing a CausalSelfAttention, a CrossAttention, and a FeedForward layer:

Implement it:

class DecoderLayer(tf.keras.layers.Layer):
  def __init__(self,
               *,
               d_model,
               num_heads,
               dff,
               dropout_rate=0.1):
    super(DecoderLayer, self).__init__()

    self.causal_self_attention = CausalSelfAttention(
        num_heads=num_heads,
        key_dim=d_model,
        dropout=dropout_rate)

    self.cross_attention = CrossAttention(
        num_heads=num_heads,
        key_dim=d_model,
        dropout=dropout_rate)

    self.ffn = FeedForward(d_model, dff)

  def call(self, x, context):
    x = self.causal_self_attention(x=x)
    x = self.cross_attention(x=x, context=context)

    # Cache the last attention scores for plotting later.
    self.last_attn_scores = self.cross_attention.last_attn_scores

    x = self.ffn(x)  # Shape `(batch_size, seq_len, d_model)`.
    return x

Test the layer:

sample_decoder_layer = DecoderLayer(d_model=512, num_heads=8, dff=2048)

sample_decoder_layer_output = sample_decoder_layer(
    x=en_emb, context=pt_emb)

print(en_emb.shape)
print(pt_emb.shape)
print(sample_decoder_layer_output.shape)  # `(batch_size, seq_len, d_model)`

# result
"""
(64, 53, 512)
(64, 75, 512)
(64, 53, 512)
"""

The decoder

Similar to the Encoder, the Decoder consists of a PositionalEmbedding layer and a stack of DecoderLayer layers.

Define the decoder by extending tf.keras.layers.Layer:  

class Decoder(tf.keras.layers.Layer):
  def __init__(self, *, num_layers, d_model, num_heads, dff, vocab_size,
               dropout_rate=0.1):
    super(Decoder, self).__init__()

    self.d_model = d_model
    self.num_layers = num_layers

    self.pos_embedding = PositionalEmbedding(vocab_size=vocab_size,
                                             d_model=d_model)
    self.dropout = tf.keras.layers.Dropout(dropout_rate)
    self.dec_layers = [
        DecoderLayer(d_model=d_model, num_heads=num_heads,
                     dff=dff, dropout_rate=dropout_rate)
        for _ in range(num_layers)]

    self.last_attn_scores = None

  def call(self, x, context):
    # `x` is token-IDs shape (batch, target_seq_len)
    x = self.pos_embedding(x)  # (batch_size, target_seq_len, d_model)

    x = self.dropout(x)

    for i in range(self.num_layers):
      x = self.dec_layers[i](x, context)

    self.last_attn_scores = self.dec_layers[-1].last_attn_scores

    # The shape of x is (batch_size, target_seq_len, d_model).
    return x

Test the decoder:

# Instantiate the decoder.
sample_decoder = Decoder(num_layers=4,
                         d_model=512,
                         num_heads=8,
                         dff=2048,
                         vocab_size=8000)

output = sample_decoder(
    x=en,
    context=pt_emb)

# Print the shapes.
print(en.shape)
print(pt_emb.shape)
print(output.shape)

# result
"""
(64, 53)
(64, 75, 512)
(64, 53, 512)
"""

The Transformer

Now we put the Encoder and Decoder together and add a final linear (Dense) layer, which converts the resulting vector at each location into output token probabilities, to complete the Transformer model.

Create the Transformer by extending tf.keras.Model:

class Transformer(tf.keras.Model):
  def __init__(self, *, num_layers, d_model, num_heads, dff,
               input_vocab_size, target_vocab_size, dropout_rate=0.1):
    super().__init__()
    self.encoder = Encoder(num_layers=num_layers, d_model=d_model,
                           num_heads=num_heads, dff=dff,
                           vocab_size=input_vocab_size,
                           dropout_rate=dropout_rate)

    self.decoder = Decoder(num_layers=num_layers, d_model=d_model,
                           num_heads=num_heads, dff=dff,
                           vocab_size=target_vocab_size,
                           dropout_rate=dropout_rate)

    self.final_layer = tf.keras.layers.Dense(target_vocab_size)

  def call(self, inputs):
    # To use a Keras model with `.fit` you must pass all your inputs in the
    # first argument.
    context, x = inputs

    context = self.encoder(context)  # (batch_size, context_len, d_model)

    x = self.decoder(x, context)  # (batch_size, target_len, d_model)

    # Final linear layer output.
    logits = self.final_layer(x)  # (batch_size, target_len, target_vocab_size)

    try:
      # Drop the keras mask, so it doesn't scale the losses/metrics.
      # b/250038731
      del logits._keras_mask
    except AttributeError:
      pass

    # Return the final output (the logits).
    return logits

Define the hyperparameters, instantiate the model and test it out:

num_layers = 4
d_model = 128
dff = 512
num_heads = 8
dropout_rate = 0.1

# Instantiate the model.
transformer = Transformer(
    num_layers=num_layers,
    d_model=d_model,
    num_heads=num_heads,
    dff=dff,
    input_vocab_size=tokenizers.pt.get_vocab_size().numpy(),
    target_vocab_size=tokenizers.en.get_vocab_size().numpy(),
    dropout_rate=dropout_rate)

# test
output = transformer((pt, en))

print(en.shape)
print(pt.shape)
print(output.shape)

# result
"""
(64, 53)
(64, 75)
(64, 53, 7010)
"""

Training

Set up the optimiser

Use the Adam optimiser with a custom learning rate scheduler according to the formula in the original Transformer paper.
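
Concretely, the schedule from the paper is:

lrate = d_model^(-0.5) * min(step^(-0.5), step * warmup_steps^(-1.5))

which increases the learning rate linearly for the first warmup_steps training steps and then decays it proportionally to the inverse square root of the step number.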

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
  def __init__(self, d_model, warmup_steps=4000):
    super().__init__()

    self.d_model = d_model
    self.d_model = tf.cast(self.d_model, tf.float32)

    self.warmup_steps = warmup_steps

  def __call__(self, step):
    step = tf.cast(step, dtype=tf.float32)
    arg1 = tf.math.rsqrt(step)
    arg2 = step * (self.warmup_steps ** -1.5)

    return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
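
The optimizer used in compile() below is created from this schedule; the beta and epsilon values follow the settings in the original paper:

# Instantiate the schedule and the Adam optimizer.
learning_rate = CustomSchedule(d_model)

optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
                                     epsilon=1e-9)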

Set up the loss and metrics

We will use the cross-entropy loss function (tf.keras.losses.SparseCategoricalCrossentropy), masked so that padding tokens contribute to neither the loss nor the accuracy metric:

def masked_loss(label, pred):
  mask = label != 0
  loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
      from_logits=True, reduction='none')
  loss = loss_object(label, pred)

  mask = tf.cast(mask, dtype=loss.dtype)
  loss *= mask

  loss = tf.reduce_sum(loss)/tf.reduce_sum(mask)
  return loss


def masked_accuracy(label, pred):
  pred = tf.argmax(pred, axis=2)
  label = tf.cast(label, pred.dtype)
  match = label == pred

  mask = label != 0

  match = match & mask

  match = tf.cast(match, dtype=tf.float32)
  mask = tf.cast(mask, dtype=tf.float32)
  return tf.reduce_sum(match)/tf.reduce_sum(mask)

Training the model

# compile and fit
transformer.compile(
    loss=masked_loss,
    optimizer=optimizer,
    metrics=[masked_accuracy])

transformer.fit(train_batches,
                epochs=20,
                validation_data=val_batches)

Run inference

Define the Translator class by subclassing tf.Module. It runs greedy decoding: the encoder input is computed once, then the English output is generated one token at a time, feeding each prediction back into the decoder until the [END] token is produced (or max_length is reached):

class Translator(tf.Module):
  def __init__(self, tokenizers, transformer):
    self.tokenizers = tokenizers
    self.transformer = transformer

  def __call__(self, sentence, max_length=MAX_TOKENS):
    # The input sentence is Portuguese, hence adding the `[START]` and `[END]` tokens.
    assert isinstance(sentence, tf.Tensor)
    if len(sentence.shape) == 0:
      sentence = sentence[tf.newaxis]

    sentence = self.tokenizers.pt.tokenize(sentence).to_tensor()

    encoder_input = sentence

    # As the output language is English, initialize the output with the
    # English `[START]` token.
    start_end = self.tokenizers.en.tokenize([''])[0]
    start = start_end[0][tf.newaxis]
    end = start_end[1][tf.newaxis]

    # `tf.TensorArray` is required here (instead of a Python list), so that the
    # dynamic-loop can be traced by `tf.function`.
    output_array = tf.TensorArray(dtype=tf.int64, size=0, dynamic_size=True)
    output_array = output_array.write(0, start)

    for i in tf.range(max_length):
      output = tf.transpose(output_array.stack())
      predictions = self.transformer([encoder_input, output], training=False)

      # Select the last token from the `seq_len` dimension.
      predictions = predictions[:, -1:, :]  # Shape `(batch_size, 1, vocab_size)`.

      predicted_id = tf.argmax(predictions, axis=-1)

      # Concatenate the `predicted_id` to the output which is given to the
      # decoder as its input.
      output_array = output_array.write(i+1, predicted_id[0])

      if predicted_id == end:
        break

    output = tf.transpose(output_array.stack())
    # The output shape is `(1, tokens)`.
    text = tokenizers.en.detokenize(output)[0]  # Shape: `()`.

    tokens = tokenizers.en.lookup(output)[0]

    # `tf.function` prevents us from using the attention_weights that were
    # calculated on the last iteration of the loop.
    # So, recalculate them outside the loop.
    self.transformer([encoder_input, output[:, :-1]], training=False)
    attention_weights = self.transformer.decoder.last_attn_scores

    return text, tokens, attention_weights

Create an instance of this Translator class, and try it out a few times:

translator = Translator(tokenizers, transformer)

def print_translation(sentence, tokens, ground_truth):
  print(f'{"Input:":15s}: {sentence}')
  print(f'{"Prediction":15s}: {tokens.numpy().decode("utf-8")}')
  print(f'{"Ground truth":15s}: {ground_truth}')

# example one
sentence = 'este é um problema que temos que resolver.'
ground_truth = 'this is a problem we have to solve .'

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)

#result
"""
Input: : este é um problema que temos que resolver.
Prediction : this is a problem we have to solve .
Ground truth : this is a problem we have to solve .
"""

# example two
sentence = 'os meus vizinhos ouviram sobre esta ideia.'
ground_truth = 'and my neighboring homes heard about this idea .'

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)

#result
"""
Input: : os meus vizinhos ouviram sobre esta ideia .
Prediction : and my neighboring homes heard about this idea .
Ground truth : and my neighboring homes heard about this idea .
"""

# example three
sentence = 'vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.'
ground_truth = "so i'll just share with you some stories very quickly of some magical things that have happened."

translated_text, translated_tokens, attention_weights = translator(
    tf.constant(sentence))
print_translation(sentence, translated_text, ground_truth)

#result
"""
Input: : vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram .
Prediction : so i'll just share with you some stories very quickly of some magical things that have happened .
Ground truth : so i'll just share with you some stories very quickly of some magical things that have happened .
"""

Export the model

Create a class called ExportTranslator by subclassing tf.Module and wrapping the __call__ method with a tf.function and a fixed input signature:

class ExportTranslator(tf.Module):
  def __init__(self, translator):
    self.translator = translator

  @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.string)])
  def __call__(self, sentence):
    (result,
     tokens,
     attention_weights) = self.translator(sentence, max_length=MAX_TOKENS)

    return result
translator = ExportTranslator(translator)
tf.saved_model.save(translator, export_dir='translator')
reloaded = tf.saved_model.load('translator')
print(reloaded(tf.constant('este é o primeiro livro que eu fiz.')).numpy().decode("utf-8"))
# result: this is the first book I made.

GitHub

https://github.com/PaddyZz/neural_machine_translation

Conclusion

We have finished:

• Environment and dependencies set up
• Data handling (datasets, tokenizer, data pipeline)
• Defining the components (encoder, decoder, attention layers, etc.)
• Training the model
• Running inference
• Exporting the model

References

Attention Is All You Need (the original Transformer paper)

ted_hrlr_translate

Neural machine translation with attention

Neural machine translation with a Transformer and Keras

Author: Paddy

Posted on: 31-05-2024

Updated on: 24-10-2024

Categories: projects