10.5. Machine Translation and the Dataset
The widespread adoption of modern RNNs owes much to a major advance in the applied field of statistical *machine translation*. In machine translation, the model is presented with a sentence in one language and must predict the corresponding sentence in another. Note that here the sentences may be of different lengths, and that corresponding words in the two sentences may not occur in the same order, owing to differences in the two languages' grammatical structures.
Many problems have this flavor of mapping between two such "unaligned" sequences, for example, mapping from dialogue prompts to replies or from questions to answers. Broadly, such problems are called *sequence-to-sequence* (seq2seq) problems, and they are our focus for both the remainder of this chapter and much of Section 11.
In this section, we introduce the machine translation problem and an example dataset that we will use in the subsequent examples. For decades, statistical formulations of translation between languages had been popular (Brown et al., 1990, Brown et al., 1988), even before researchers got neural network approaches working (methods that are often lumped together under the term *neural machine translation*).
First we will need some new code to process our data. Unlike the language modeling that we saw in Section 9.3, here each example consists of two separate text sequences, one in the source language and another (the translation) in the target language. The following code snippets will show how to load the preprocessed data into minibatches for training.
import os
import torch
from d2l import torch as d2l
10.5.1. Downloading and Preprocessing the Dataset
To begin, we download an English–French dataset consisting of bilingual sentence pairs from the Tatoeba Project. Each line in the dataset is a tab-delimited pair consisting of an English text sequence (the *source*) and the translated French text sequence (the *target*). Note that each text sequence can be just one sentence, or a paragraph of multiple sentences.
class MTFraEng(d2l.DataModule): #@save
"""The English-French dataset."""
def _download(self):
d2l.extract(d2l.download(
d2l.DATA_URL+'fra-eng.zip', self.root,
'94646ad1522d915e7b0f9296181140edcf86a4f5'))
with open(self.root + '/fra-eng/fra.txt', encoding='utf-8') as f:
return f.read()
data = MTFraEng()
raw_text = data._download()
print(raw_text[:75])
Downloading ../data/fra-eng.zip from http://d2l-data.s3-accelerate.amazonaws.com/fra-eng.zip...
Go. Va !
Hi. Salut !
Run! Cours !
Run! Courez !
Who? Qui ?
Wow! Ça alors !
After downloading the dataset, we proceed with several preprocessing steps for the raw text data. For instance, we replace non-breaking spaces with spaces, convert uppercase letters to lowercase ones, and insert a space between words and punctuation marks.
@d2l.add_to_class(MTFraEng) #@save
def _preprocess(self, text):
# Replace non-breaking space with space
text = text.replace('\u202f', ' ').replace('\xa0', ' ')
# Insert space between words and punctuation marks
no_space = lambda char, prev_char: char in ',.!?' and prev_char != ' '
out = [' ' + char if i > 0 and no_space(char, text[i - 1]) else char
for i, char in enumerate(text.lower())]
return ''.join(out)
text = data._preprocess(raw_text)
print(text[:80])
go . va !
hi . salut !
run ! cours !
run ! courez !
who ? qui ?
wow ! ça alors !
10.5.2. Tokenization
Unlike the character-level tokenization in Section 9.3, for machine translation we prefer word-level tokenization here (today's state-of-the-art models use more complex tokenization techniques). The following _tokenize method tokenizes the first max_examples text sequence pairs, where each token is either a word or a punctuation mark. We append the special “<eos>” token to the end of every sequence to indicate the end of the sequence: when a model predicts by generating a sequence token by token, generating the “<eos>” token suggests that the output sequence is complete. Finally, the method returns two lists of token lists: src and tgt. Specifically, src[i] is the list of tokens from the \(i\)th text sequence in the source language (English here) and tgt[i] is that in the target language (French here).
@d2l.add_to_class(MTFraEng) #@save
def _tokenize(self, text, max_examples=None):
src, tgt = [], []
for i, line in enumerate(text.split('\n')):
if max_examples and i > max_examples: break
parts = line.split('\t')
if len(parts) == 2:
# Skip empty tokens
src.append([t for t in f'{parts[0]} <eos>'.split(' ') if t])
tgt.append([t for t in f'{parts[1]} <eos>'.split(' ') if t])
return src, tgt
src, tgt = data._tokenize(text)
src[:6], tgt[:6]
([['go', '.', '<eos>'],
['hi', '.', '<eos>'],
['run', '!', '<eos>'],
['run', '!', '<eos>'],
['who', '?', '<eos>'],
['wow', '!', '<eos>']],
[['va', '!', '<eos>'],
['salut', '!', '<eos>'],
['cours', '!', '<eos>'],
['courez', '!', '<eos>'],
['qui', '?', '<eos>'],
['ça', 'alors', '!', '<eos>']])
Let us plot the histogram of the number of tokens per text sequence. In this simple English–French dataset, most of the text sequences have fewer than 20 tokens.
#@save
def show_list_len_pair_hist(legend, xlabel, ylabel, xlist, ylist):
"""Plot the histogram for list length pairs."""
d2l.set_figsize()
_, _, patches = d2l.plt.hist(
[[len(l) for l in xlist], [len(l) for l in ylist]])
d2l.plt.xlabel(xlabel)
d2l.plt.ylabel(ylabel)
for patch in patches[1].patches:
patch.set_hatch('/')
d2l.plt.legend(legend)
show_list_len_pair_hist(['source', 'target'], '# tokens per sequence',
'count', src, tgt);
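To back that observation up numerically, here is a quick sanity check that is not part of the original notebook; it simply counts the fraction of tokenized sequences with fewer than 20 tokens.
# Fraction of sequences shorter than 20 tokens (illustrative check, not in the original code)
frac_src = sum(len(s) < 20 for s in src) / len(src)
frac_tgt = sum(len(s) < 20 for s in tgt) / len(tgt)
print(f'source: {frac_src:.1%}, target: {frac_tgt:.1%}')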
10.5.3. Loading Sequences of Fixed Length
Recall that in language modeling each example sequence, whether a segment of one sentence or a span over multiple sentences, had a fixed length specified by the num_steps (number of time steps or tokens) argument from Section 9.3. In machine translation, each example is a pair of source and target text sequences, and these two sequences may have different lengths.
For computational efficiency, we can still process a minibatch of text sequences at one time by *truncation* and *padding*. Suppose that every sequence in the same minibatch should have the same length num_steps. If a text sequence has fewer than num_steps tokens, we keep appending the special “<pad>” token to its end until its length reaches num_steps. Otherwise, we truncate the text sequence by taking only its first num_steps tokens and discarding the rest. In this way, every text sequence has the same length, so sequences can be loaded in minibatches of the same shape. Besides, we also record the length of each source sequence excluding padding tokens; this information will be needed by some of the models that we cover later.
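As a minimal, framework-independent sketch of this truncation-and-padding step (the sequences and the num_steps value below are made up; the helper mirrors the pad_or_trim lambda defined inside _build_arrays further down):
# Toy illustration of truncation and padding to a fixed length (illustrative only)
def pad_or_trim(seq, num_steps):
    # Keep at most num_steps tokens, or pad with '<pad>' up to num_steps
    return seq[:num_steps] if len(seq) > num_steps else seq + ['<pad>'] * (num_steps - len(seq))

print(pad_or_trim(['hi', '.', '<eos>'], 5))  # ['hi', '.', '<eos>', '<pad>', '<pad>']
print(pad_or_trim(['this', 'sentence', 'is', 'far', 'too', 'long', '<eos>'], 5))  # truncated to 5 tokens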
Since the machine translation dataset consists of pairs of languages, we can build two vocabularies, one for the source language and one for the target language. With word-level tokenization, the vocabulary size will be significantly larger than with character-level tokenization. To alleviate this, here we treat infrequent tokens that appear fewer than twice as the same unknown (“<unk>”) token. As we will explain later (Fig. 10.7.1), when training with target sequences, the decoder output (label tokens) can simply be the decoder input (target tokens) shifted by one token, while the special beginning-of-sequence “<bos>” token serves as the first input token for predicting the target sequence (Fig. 10.7.3).
@d2l.add_to_class(MTFraEng) #@save
def __init__(self, batch_size, num_steps=9, num_train=512, num_val=128):
super(MTFraEng, self).__init__()
self.save_hyperparameters()
self.arrays, self.src_vocab, self.tgt_vocab = self._build_arrays(
self._download())
@d2l.add_to_class(MTFraEng) #@save
def _build_arrays(self, raw_text, src_vocab=None, tgt_vocab=None):
def _build_array(sentences, vocab, is_tgt=False):
pad_or_trim = lambda seq, t: (
seq[:t] if len(seq) > t else seq + ['<pad>'] * (t - len(seq)))
sentences = [pad_or_trim(s, self.num_steps) for s in sentences]
if is_tgt:
sentences = [['<bos>'] + s for s in sentences]
if vocab is None:
vocab = d2l.Vocab(sentences, min_freq=2)
array = torch.tensor([vocab[s] for s in sentences])
valid_len = (array != vocab['<pad>']).type(torch.int32).sum(1)
return array, vocab, valid_len
src, tgt = self._tokenize(self._preprocess(raw_text),
self.num_train + self.num_val)
src_array, src_vocab, src_valid_len = _build_array(src, src_vocab)
tgt_array, tgt_vocab, _ = _build_array(tgt, tgt_vocab, True)
return ((src_array, tgt_array[:,:-1], src_valid_len, tgt_array[:,1:]),
src_vocab, tgt_vocab)
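To make the slicing at the end of _build_arrays concrete, here is a small worked example at the token level (the sentence is made up and num_steps=9 simply matches the default above); it shows how a padded target sequence yields the decoder input and the label, offset from each other by one token.
# Token-level view of how a target sequence becomes decoder input and label (illustrative)
num_steps = 9
tgt_tokens = ['salut', '!', '<eos>']
padded = tgt_tokens + ['<pad>'] * (num_steps - len(tgt_tokens))  # pad to num_steps tokens
with_bos = ['<bos>'] + padded                                    # num_steps + 1 tokens
decoder_input = with_bos[:-1]  # ['<bos>', 'salut', '!', '<eos>', '<pad>', ..., '<pad>']
label = with_bos[1:]           # ['salut', '!', '<eos>', '<pad>', ..., '<pad>']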
10.5.4. Reading the Dataset
Finally, we define the get_dataloader method to return the data iterator.
@d2l.add_to_class(MTFraEng) #@save
def get_dataloader(self, train):
idx = slice(0, self.num_train) if train else slice(self.num_train, None)
return self.get_tensorloader(self.arrays, train, idx)
Let us read the first minibatch from the English–French dataset.
data = MTFraEng(batch_size=3)
src, tgt, src_valid_len, label = next(iter(data.train_dataloader()))
print('source:', src.type(torch.int32))
print('decoder input:', tgt.type(torch.int32))
print('source len excluding pad:', src_valid_len.type(torch.int32))
print('label:', label.type(torch.int32))
source: tensor([[117, 182, 0, 3, 4, 4, 4, 4, 4],
[ 62, 72, 2, 3, 4, 4, 4, 4, 4],
[ 57, 124, 0, 3, 4, 4, 4, 4, 4]], dtype=torch.int32)
decoder input: tensor([[ 3, 37, 100, 58, 160, 0, 4, 5, 5],
[ 3, 6, 2, 4, 5, 5, 5, 5, 5],
[ 3, 180, 0, 4, 5, 5, 5, 5, 5]], dtype=torch.int32)
source len excluding pad: tensor([4, 4, 4], dtype=torch.int32)
label: tensor([[ 37, 100, 58, 160, 0, 4, 5, 5, 5],
[ 6, 2, 4, 5, 5, 5, 5, 5, 5],
[180, 0, 4, 5, 5, 5, 5, 5, 5]], dtype=torch.int32)
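If you want to inspect the minibatch as tokens rather than vocabulary indices, the following small check (not part of the original notebook, but using the same to_tokens call as the example further below) maps the first sequence of each array back through the vocabularies.
# Map the first source and label sequence of the minibatch back to tokens (illustrative check)
print('source tokens:', data.src_vocab.to_tokens(src[0].type(torch.int32)))
print('label tokens:', data.tgt_vocab.to_tokens(label[0].type(torch.int32)))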
Below we show a pair of source and target sequences processed by the above _build_arrays method (in string format).
@d2l.add_to_class(MTFraEng) #@save
def build(self, src_sentences, tgt_sentences):
raw_text = '\n'.join([src + '\t' + tgt for src, tgt in zip(
src_sentences, tgt_sentences)])
arrays, _, _ = self._build_arrays(
raw_text, self.src_vocab, self.tgt_vocab)
return arrays
src, tgt, _, _ = data.build(['hi .'], ['salut .'])
print('source:', data.src_vocab.to_tokens(src[0].type(torch.int32)))
print('target:', data.tgt_vocab.to_tokens(tgt[0].type(torch.int32)))
source: ['hi', '.', '<eos>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>']
target: ['<bos>', 'salut', '.', '<eos>', '<pad>', '<pad>', '<pad>', '<pad>', '<pad>']
10.5.5. Summary
In natural language processing, *machine translation* refers to the task of automatically mapping a sequence representing a string of text in a *source* language onto a string representing a plausible translation in a *target* language. Using word-level tokenization, the vocabulary size will be significantly larger than with character-level tokenization, but the sequence lengths will be much shorter. To mitigate the large vocabulary size, we can treat infrequent tokens as a single "unknown" token. We can truncate and pad text sequences so that all of them have the same length and can be loaded in minibatches. Modern implementations often bucket sequences of similar lengths to avoid wasting excessive computation on padding.
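That last point can be illustrated with a toy bucketing scheme (a hypothetical sketch, not the approach used elsewhere in this chapter): grouping sentence pairs of similar source length before batching reduces the amount of padding each minibatch needs.
# Toy length-bucketing sketch (hypothetical; not used elsewhere in this chapter)
from collections import defaultdict

def bucket_by_length(pairs, bucket_width=5):
    """Group (src_tokens, tgt_tokens) pairs into buckets of similar source length."""
    buckets = defaultdict(list)
    for src_toks, tgt_toks in pairs:
        buckets[len(src_toks) // bucket_width].append((src_toks, tgt_toks))
    return list(buckets.values())

pairs = [(['go', '.', '<eos>'], ['va', '!', '<eos>']),
         (['i', 'lost', 'my', 'keys', 'yesterday', '.', '<eos>'],
          ["j'ai", 'perdu', 'mes', 'clés', 'hier', '.', '<eos>'])]
for bucket in bucket_by_length(pairs):
    print([len(s) for s, _ in bucket])  # sequences within a bucket have similar lengths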