Reimplementing the Transformer in PyTorch

Before diving into LLMs, the first step is to understand how the Transformer works and to reimplement it. This post mainly follows [this article][2].

To start, here is the overall architecture diagram:

(Figure: overall Transformer architecture)

Embedding

import copy
import math

import torch
from torch import nn


class Embedder(nn.Module):
    """
    vocab_size: size of the vocabulary
    d_model: dimension of the token embeddings
    """
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Embedding(vocab_size, d_model)

    def forward(self, x):
        # Scaling by sqrt(d_model) amplifies the embeddings so that their original
        # meaning is not drowned out when the positional encodings are added.
        return self.embed(x) * math.sqrt(self.d_model)

Positional Encoding

For the model to be sensitive to the sentence, it needs to capture two things:

1. What each token means (the token embedding)

2. Where the token sits in the sentence (the positional encoding)

Position can be encoded by constructing position-specific values that carry position information.

The formulas:

$$PE_{(pos,2i)} = \sin\left(\frac{pos}{10000^{\frac{2i}{d_{model}}}}\right)$$

$$PE_{(pos,2i+1)} = \cos\left(\frac{pos}{10000^{\frac{2i}{d_{model}}}}\right)$$

Here pos is the token's position in the sentence (e.g., a sentence with 10 tokens has pos = 0~9), and i indexes positions along the embedding dimension (e.g., with 512 dimensions, i = 0~255).

(Figure: visualization of the positional encodings)

class PositionalEncoder(nn.Module):
    def __init__(self, d_model, dropout_prob=0.1, max_len=5000):
        super().__init__()
        self.dropout = nn.Dropout(dropout_prob)

        positional_encodings = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)  # (max_len, 1)

        mult_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))

        positional_encodings[:, 0::2] = torch.sin(position * mult_term)
        positional_encodings[:, 1::2] = torch.cos(position * mult_term)

        positional_encodings = positional_encodings.unsqueeze(0)  # (1, max_len, d_model)
        self.register_buffer('positional_encodings', positional_encodings)

    def forward(self, x):
        x = x + self.positional_encodings[:, :x.size(1)]
        return self.dropout(x)
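
As a quick check (my own example, not from the referenced post), chaining the two modules keeps the `(batch_size, seq_len, d_model)` shape:

```python
emb = Embedder(vocab_size=1000, d_model=512)
pe = PositionalEncoder(d_model=512)

tokens = torch.randint(0, 1000, (2, 10))  # (batch_size, seq_len)
x = pe(emb(tokens))
print(x.shape)  # torch.Size([2, 10, 512])
```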

Generator

The Generator corresponds to the top-right part of the architecture diagram. It converts the embeddings output by the Decoder into a prediction of the next token (a probability distribution over the target vocabulary).

(Figure: the Generator, i.e., the Linear + Softmax block at the top right of the architecture diagram)

class Generator(nn.Module):
    """
    Define the standard linear + softmax generation step.
    """
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.linear = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        # Compute log-probabilities along the last dimension, whose length is vocab_size.
        return nn.functional.log_softmax(self.linear(x), dim=-1)
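
For example (a quick check of my own, with arbitrary sizes), the Generator produces a log-probability distribution over the target vocabulary at every position:

```python
gen = Generator(512, 1200)
log_probs = gen(torch.randn(2, 5, 512))  # (batch_size, seq_len, vocab_size) = (2, 5, 1200)
print(log_probs.exp().sum(dim=-1))       # each position sums to ~1.0
```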

Encoder

(Figure: the Encoder stack in the architecture diagram)

The Encoder is this part of the diagram. It consists of N encoder layers (EncoderLayer). Each EncoderLayer in turn contains two sub-layers: the first is Multi-Head Attention + Add & Norm, the second is FFN + Add & Norm. The modules are explained bottom-up below.

Multi-Head Attention

(Figure: Scaled Dot-Product Attention (left) and Multi-Head Attention (right))

The Scaled Dot-Product Attention on the left corresponds to the following matrix operations:

(Figure: scaled dot-product attention written as matrix operations)

Suppose the lengths of Q, K, and V are seq_len_q, seq_len_k, and seq_len_v. In the Encoder, Q, K, and V all have the same length. In the Decoder's cross-attention, the Encoder output is used as K and V, so K and V share the same length, which may differ from Q's.

Attention takes the inner product of every vector in Q with every vector in K, ***which measures how similar the two vectors are***, producing the scores matrix: $$scores_{i,j}$$ is the inner product of $$q_i$$ and $$k_j$$.
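
Putting the scaling and the softmax together, the full operation from the paper is:

$$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$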

After the scores are computed and before the softmax, a padding mask is applied to the scores; see [this article][1]. The idea is that sentences in the same batch may have different lengths, so they are padded to a common length before entering the network; any computation that is affected by sample length (such as the final outputs or the loss) then needs a padding mask to ignore the padded positions.
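
The post itself does not show how these masks are built. Below is a minimal sketch of my own (the helper names and `pad_idx` are assumptions, not from the referenced code) that constructs a padding mask for the encoder and a combined padding + subsequent (look-ahead) mask for the decoder, in the shapes the attention code below expects:

```python
import torch

def make_src_mask(src, pad_idx=0):
    # (batch_size, 1, seq_len_k): True for real tokens, False for padding
    return (src != pad_idx).unsqueeze(-2)

def make_tgt_mask(tgt, pad_idx=0):
    # padding mask: (batch_size, 1, seq_len)
    tgt_pad_mask = (tgt != pad_idx).unsqueeze(-2)
    # subsequent mask: (1, seq_len, seq_len), blocks attention to future positions
    seq_len = tgt.size(1)
    subsequent = torch.tril(torch.ones(1, seq_len, seq_len, dtype=torch.bool, device=tgt.device))
    # combined mask: (batch_size, seq_len, seq_len)
    return tgt_pad_mask & subsequent
```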

In the actual implementation, dropout is also applied to the scores. The original paper does not mention this, but later practice found that it improves training. In addition, for efficiency, the attention of all heads is computed together.

def scaled_dot_product_attention(query, key, value, mask=None, dropout=None):
    """
    Args:
        query: (batch_size, num_heads, seq_len_q, head_dim), given sequence that we focus on
        key:   (batch_size, num_heads, seq_len_k, head_dim), the sequence to check relevance with query
        value: (batch_size, num_heads, seq_len_v, head_dim), seq_len_k == seq_len_v, usually value and key come from the same source
        mask: for the encoder, mask is (batch_size, 1, 1, seq_len_k); for the decoder, mask is (batch_size, 1, seq_len_q, seq_len_k)
        dropout: nn.Dropout(), optional
    Returns:
        output: (batch_size, num_heads, seq_len_q, head_dim)
        attn:   (batch_size, num_heads, seq_len_q, seq_len_k)
    """
    head_dim = query.size(-1)
    # (batch_size, num_heads, seq_len_q, seq_len_k)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(head_dim)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e9)
    scores = scores.softmax(dim=-1)

    if dropout is not None:
        scores = dropout(scores)
    # (b, h, seq_len_q, seq_len_k) @ (b, h, seq_len_v, head_dim) -> (b, h, seq_len_q, head_dim)
    return torch.matmul(scores, value), scores
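
A small sanity check (my own example): with a padding mask that zeroes out the last key position, the attention weight on that position goes to (almost) zero:

```python
q = k = v = torch.randn(1, 1, 4, 8)      # (batch_size, num_heads, seq_len, head_dim)
mask = torch.tensor([[[[1, 1, 1, 0]]]])  # (1, 1, 1, seq_len_k): last position is padding
out, attn = scaled_dot_product_attention(q, k, v, mask)
print(out.shape)      # torch.Size([1, 1, 4, 8])
print(attn[0, 0, 0])  # the weight on the masked position is ~0
```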

Next comes the MultiHead operation on the right. The formula:

$$\mathrm{MultiHead}(Q,K,V) = \mathrm{Concat}(head_1, \dots, head_h)W^O$$

$$\text{where } head_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$

The dimensions of the parameter matrices:

$$W_i^Q \in \mathbb{R}^{d_{model} \times d_k},\quad W_i^K \in \mathbb{R}^{d_{model} \times d_k},\quad W_i^V \in \mathbb{R}^{d_{model} \times d_v},\quad W^O \in \mathbb{R}^{hd_v \times d_{model}}$$

where h is the number of heads and $$d_k = d_v = d_{model}/h$$.

Multi-head attention simply splits the single-head $$d_{model}$$ across several heads, so the total parameter count stays the same. Each $$head_i$$ is computed as $$head_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$; the heads are then concatenated and passed through one more linear projection with $$W^O$$.

It is worth emphasizing that Q, K, and V come from different sources in different parts of the model:

  • In the first EncoderLayer of the Encoder, Q, K, and V are all X, the output of the positional encoding
  • In the other EncoderLayers, Q, K, and V are all the output of the previous EncoderLayer
  • The Decoder is similar, except that in the second Multi-Head Attention of a DecoderLayer, Q comes from the preceding sub-layer (i.e., Multi-Head Attention + Add & Norm), while K and V come from the output of the Encoder's last EncoderLayer
class MultiHeadAttention(nn.Module):
    def __init__(self, h, d_model, dropout_prob=0.1):
        """
        Args:
            h: number of heads
            d_model: dimension of the vector for each token in input and output
            dropout_prob: probability of dropout
        """
        super().__init__()
        self.head_dim = d_model // h
        self.num_heads = h
        # W_Q, W_K, W_V, W_O
        self.linears = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(4)])
        self.dropout = nn.Dropout(dropout_prob)

    def forward(self, query, key, value, mask=None):
        """
        Args:
            query: (batch_size, seq_len_q, d_model)
            key:   (batch_size, seq_len_k, d_model)
            value: (batch_size, seq_len_v, d_model), seq_len_k == seq_len_v
            mask:  optional, broadcast over all heads
        Returns:
            output: (batch_size, seq_len_q, d_model)
        """
        if mask is not None:
            # add a head dimension so the same mask applies to every head
            mask = mask.unsqueeze(1)
        n_batches = query.size(0)

        # 1. Linear projections for query, key and value.
        #    After this step each has shape (batch_size, num_heads, seq_len, head_dim);
        #    the -1 dimension is seq_len, which may differ between Q, K and V.
        query, key, value = [
            linear(x).view(n_batches, -1, self.num_heads, self.head_dim).transpose(1, 2)
            for linear, x in zip(self.linears, (query, key, value))
        ]

        # 2. Scaled dot-product attention.
        #    out: (batch_size, num_heads, seq_len_q, head_dim)
        out, _ = scaled_dot_product_attention(query, key, value, mask, self.dropout)

        # 3. "Concat" using a view and apply a final linear.
        out = (
            out.transpose(1, 2).contiguous().view(n_batches, -1, self.num_heads * self.head_dim)
        )
        out = self.linears[3](out)

        del query, key, value
        return out
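
A quick shape check (my own example; the sizes are arbitrary). The module maps `(batch_size, seq_len_q, d_model)` to the same shape, and query and key/value may come from different sequences, as in the Decoder's cross-attention:

```python
mha = MultiHeadAttention(h=8, d_model=512)
q = torch.randn(2, 5, 512)   # (batch_size, seq_len_q, d_model)
kv = torch.randn(2, 7, 512)  # (batch_size, seq_len_k, d_model)
out = mha(q, kv, kv)
print(out.shape)  # torch.Size([2, 5, 512])
```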

Feed-Forward Network

A position-wise fully connected feed-forward network:

$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)W_2 + b_2$$

class FeedForward(nn.Module):
    """
    Implements the FFN equation.
    x:   (batch_size, seq_len_q, d_model)
    out: (batch_size, seq_len_q, d_model)
    """
    def __init__(self, d_model, d_ff, dropout_prob):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.linear2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout_prob)

    def forward(self, x):
        return self.linear2(self.dropout(nn.functional.relu(self.linear1(x))))

LayerNorm

The LayerNorm here is the Norm in Add & Norm. PyTorch already ships with nn.LayerNorm, but the blog posts I read all wrote their own; I am not entirely sure why (presumably just to mirror the reference implementation, since the hand-written version below computes essentially the same normalization).

Note: as described in the paper, a sub-layer first applies Multi-Head Attention or Feed Forward, then the residual connection, and only then LayerNorm; this arrangement is called Post-LN. Later work found that applying a LayerNorm before the first step works better; this is called Pre-LN. The two execution paths look like this:

(Figure: Post-LN vs. Pre-LN execution paths)
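
In code, the difference is just where the normalization sits relative to the residual connection. A small sketch of my own (not from the post), with `sublayer` standing for either attention or the FFN:

```python
def post_ln(x, sublayer, norm, dropout):
    # Post-LN (original paper): sublayer -> residual -> LayerNorm
    return norm(x + dropout(sublayer(x)))

def pre_ln(x, sublayer, norm, dropout):
    # Pre-LN (what the SubLayer class below implements): LayerNorm -> sublayer -> residual
    return x + dropout(sublayer(norm(x)))
```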

class LayerNorm(nn.Module):
    """
    LayerNorm
    """
    def __init__(self, d_model, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(d_model))
        self.beta = nn.Parameter(torch.zeros(d_model))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.gamma * (x - mean) / (std + self.eps) + self.beta
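
For reference (my own note, not from the post), the built-in module normalizes the same way, with eps added to the variance rather than to the standard deviation, so it could likely be used as a drop-in replacement:

```python
norm = nn.LayerNorm(512, eps=1e-6)  # normalized_shape = d_model
y = norm(torch.randn(2, 10, 512))   # normalizes over the last dimension
print(y.shape)                      # torch.Size([2, 10, 512])
```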

SubLayer

SubLayer is implemented here to improve code reuse: both Multi-Head Attention + Add & Norm and FFN + Add & Norm are treated as one sub-layer, and the main_logic passed into forward is the attention or FFN computation.

class SubLayer(nn.Module):
    """
    Do pre-layer normalization on the input, then run multi-head attention or feed forward,
    and finally apply the residual connection.
    """
    def __init__(self, d_model, dropout_prob=0.1):
        super().__init__()
        self.norm = LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout_prob)

    def forward(self, x, main_logic):
        # main_logic is the Multi-Head Attention or FeedForward computation
        x_norm = self.norm(x)
        return x + self.dropout(main_logic(x_norm))

EncoderLayer

With the lower-level modules implemented above, an EncoderLayer can be written very concisely.

class EncoderLayer(nn.Module):
    def __init__(self, d_model, heads, d_ff=2048, dropout_prob=0.1):
        super().__init__()
        self.attention = MultiHeadAttention(heads, d_model, dropout_prob=dropout_prob)
        self.ffn = FeedForward(d_model, d_ff, dropout_prob=dropout_prob)
        self.sublayers = nn.ModuleList([SubLayer(d_model, dropout_prob) for _ in range(2)])

    def forward(self, x, mask):
        x = self.sublayers[0](x, lambda x: self.attention(x, x, x, mask))
        x = self.sublayers[1](x, self.ffn)
        return x

And N EncoderLayers stacked together form an Encoder.

Note that one more LayerNorm is applied after the Encoder, even though it does not appear in the architecture diagram. This is because the Pre-LN arrangement used here never normalizes the output of the final residual connection, whereas the diagram shows the Post-LN arrangement.

class Encoder(nn.Module):
    def __init__(self, d_model, N, heads):
        super().__init__()
        self.N = N
        self.layers = nn.ModuleList([copy.deepcopy(EncoderLayer(d_model, heads)) for i in range(N)])
        self.norm = LayerNorm(d_model)

    def forward(self, x, mask):
        for i in range(self.N):
            x = self.layers[i](x, mask)
        return self.norm(x)

Decoder

The Decoder can reuse the modules above, so only a DecoderLayer and a Decoder need to be written.

class DecoderLayer(nn.Module):
    """
    A decoder layer is made of self-attn, src-attn, and feed forward.
    """
    def __init__(self, d_model, heads, d_ff=2048, dropout_prob=0.1):
        super().__init__()
        self.self_atten = MultiHeadAttention(heads, d_model, dropout_prob=dropout_prob)
        self.src_atten = MultiHeadAttention(heads, d_model, dropout_prob=dropout_prob)
        self.ffn = FeedForward(d_model, d_ff, dropout_prob=dropout_prob)
        self.sublayers = nn.ModuleList([SubLayer(d_model, dropout_prob) for _ in range(3)])

    def forward(self, x, memory, src_mask, tgt_mask):
        x = self.sublayers[0](x, lambda x: self.self_atten(x, x, x, tgt_mask))
        x = self.sublayers[1](x, lambda x: self.src_atten(x, memory, memory, src_mask))
        x = self.sublayers[2](x, self.ffn)
        return x
class Decoder(nn.Module):
    def __init__(self, d_model, N, heads):
        super().__init__()
        self.N = N
        self.layers = nn.ModuleList([copy.deepcopy(DecoderLayer(d_model, heads)) for i in range(N)])
        self.norm = LayerNorm(d_model)

    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)

Transformer

class Transformer(nn.Module):
    def __init__(self, src_vocab, trg_vocab, d_model, N, heads):
        super().__init__()
        self.encoder_emd = Embedder(src_vocab, d_model)
        self.encoder_pe = PositionalEncoder(d_model)
        self.encoder = Encoder(d_model, N, heads)

        self.decoder_emd = Embedder(trg_vocab, d_model)
        self.decoder_pe = PositionalEncoder(d_model)
        self.decoder = Decoder(d_model, N, heads)

        self.generator = Generator(d_model, trg_vocab)

    def forward(self, src, trg, src_mask, trg_mask):
        e_output = self.encoder(self.encoder_pe(self.encoder_emd(src)), src_mask)
        d_output = self.decoder(self.decoder_pe(self.decoder_emd(trg)), e_output, src_mask, trg_mask)
        output = self.generator(d_output)
        return output
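
Finally, a small smoke test of my own (the hyperparameters and sizes are arbitrary) to confirm that the pieces fit together and the output has shape `(batch_size, trg_len, trg_vocab)`:

```python
src_vocab, trg_vocab = 1000, 1200
model = Transformer(src_vocab, trg_vocab, d_model=512, N=6, heads=8)

batch_size, src_len, trg_len = 2, 7, 5
src = torch.randint(1, src_vocab, (batch_size, src_len))
trg = torch.randint(1, trg_vocab, (batch_size, trg_len))

src_mask = torch.ones(batch_size, 1, src_len, dtype=torch.bool)                     # no padding in this toy batch
trg_mask = torch.tril(torch.ones(batch_size, trg_len, trg_len, dtype=torch.bool))   # subsequent (look-ahead) mask

out = model(src, trg, src_mask, trg_mask)
print(out.shape)  # torch.Size([2, 5, 1200])
```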

[1]: https://ifwind.github.io/2021/08/17/Transformer%E7%9B%B8%E5%85%B3%E2%80%94%E2%80%94%EF%BC%887%EF%BC%89Mask%E6%9C%BA%E5%88%B6/#%E5%BC%95%E8%A8%80 "Transformer相关——(7)Mask机制"
[2]: https://zhuanlan.zhihu.com/p/668781029