What can BERT do?
We can think of BERT as a stack of Transformer encoders: the input is a sentence, and the output is the transformed representation of that sentence. The essence of deep learning is extracting the key features of its input, and that is exactly what BERT does; because it is built from Transformer blocks with self-attention, the features it captures are more precise and more comprehensive.
In one sentence: BERT is a feature extractor. Feed it a sentence (a sequence of tokens) and it returns a sequence of extracted embeddings.
Input and output
- A special [CLS] token is prepended to the input; its representation summarizes the whole sentence and can be used for classification.
- The input tokens (help, prince, mayuko, etc.) fill at most 512 positions, which is the maximum sequence length.
- The sequence then passes through 12 encoder layers.
- The final output is one embedding per token, each a 768-dimensional vector (see the config sketch after this list). This should be easy to understand.
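These three numbers (512 positions, 12 layers, 768 dimensions) can be read straight from the pretrained configuration. A minimal sketch using the Hugging Face transformers config API:

from transformers import BertConfig

config = BertConfig.from_pretrained('bert-base-uncased')
print(config.max_position_embeddings)  # 512, maximum sequence length
print(config.num_hidden_layers)        # 12 encoder layers
print(config.hidden_size)              # 768-dimensional hidden states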
What it can be used for
With an embedding vector for every token in the sequence, you can classify tokens, build sentence vectors, classify sentences, compare sentence similarity, and so on.
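For example, sentence similarity can be computed by mean-pooling the last hidden state into a sentence vector and comparing two sentences with cosine similarity. A minimal sketch (it assumes the same bert-base-uncased model and tokenizer that are loaded in the Code section below):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def sentence_vector(text):
    # encode() adds [CLS]/[SEP]; unsqueeze() adds a batch dimension of 1
    ids = torch.LongTensor(tokenizer.encode(text)).unsqueeze(0)
    with torch.no_grad():
        out = model(input_ids=ids)
    # mean over the token dimension of the last hidden state -> one 768-dim vector
    return out[0].mean(dim=1).squeeze()

a = sentence_vector('the cat sits on the mat')
b = sentence_vector('a cat is sitting on a mat')
print(torch.cosine_similarity(a, b, dim=0))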
Code
#%% md
# bert
#%%
!pip install transformers
#%%
import torch
from transformers import BertModel, BertTokenizer
#%%
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#%%
input_ids = tokenizer.encode('hello world bert!')
input_ids
#%%
type(input_ids)
#%%
ids = torch.LongTensor(input_ids)
ids
#%%
text = tokenizer.convert_ids_to_tokens(input_ids)
text
#%%
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
# Set the device to GPU (cuda) if available, otherwise stick with CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
ids = ids.to(device)
model.eval()
#%%
print(ids.size())
# unsqueeze IDs to get batch size of 1 as added dimension
granola_ids = ids.unsqueeze(0)
print(granola_ids.size())
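#%% md
The unsqueeze above works because we feed a single sentence with no padding. When batching sentences of different lengths, you would pad them and pass an attention_mask so the model ignores the padded positions. A sketch (recent transformers versions let the tokenizer build such a batch directly; older versions expose the same thing via batch_encode_plus):
#%%
batch = tokenizer(['hello world bert!', 'a longer second sentence goes here'],
                  padding=True, return_tensors='pt')
print(batch['input_ids'].shape)       # (2, longest_length_in_batch)
print(batch['attention_mask'].shape)  # 1 for real tokens, 0 for padding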
#%% md
An additional argument, output_hidden_states=True, was passed when the model was initialised above; it makes the model return more information. By default, a BertModel will return a tuple, but the contents of that tuple differ depending on the configuration of the model. When passing output_hidden_states=True, the tuple will contain (in order; shape in brackets):
1. the last hidden state (batch_size, sequence_length, hidden_size)
1. the pooler_output of the classification token (batch_size, hidden_size)
1. the hidden_states of the outputs of the model at each layer and the initial embedding outputs (batch_size, sequence_length, hidden_size)
#%%
with torch.no_grad():  # inference only, no gradients needed
    out = model(input_ids=granola_ids)  # tuple: (last_hidden_state, pooler_output, hidden_states)
hidden_states = out[2]
print("last hidden state:", out[0].shape)  # torch.Size([1, 6, 768])
print("pooler_output of classification token:", out[1].shape)  # torch.Size([1, 768]), from [CLS]
print("all hidden_states:", len(out[2]))
#%%
for i, each_layer in enumerate(hidden_states):
    print('layer=', i, each_layer)
#%%
# mean over the token dimension of the last layer -> a single 768-dim sentence vector
sentence_embedding = torch.mean(hidden_states[-1], dim=1).squeeze()
print(sentence_embedding)
print(sentence_embedding.size())
#%%
# get last four layers
last_four_layers = [hidden_states[i] for i in (-1, -2, -3, -4)]
# cast layers to a tuple and concatenate over the last dimension
cat_hidden_states = torch.cat(tuple(last_four_layers), dim=-1)
print(cat_hidden_states.size())
# take the mean of the concatenated vector over the token dimension
cat_sentence_embedding = torch.mean(cat_hidden_states, dim=1).squeeze()
print(cat_sentence_embedding)
print(cat_sentence_embedding.size())
Different combinations of layer embeddings give different results; see the references. In the BERT paper's feature-based experiments, concatenating the last four layers gave the best results.
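For comparison, here is a sketch of two other commonly used combinations, reusing the hidden_states and last_four_layers computed above (which one works best is task-dependent):

# second-to-last layer, mean over tokens
second_to_last = torch.mean(hidden_states[-2], dim=1).squeeze()
print(second_to_last.size())  # torch.Size([768])

# sum of the last four layers, then mean over tokens
sum_last_four = torch.stack(last_four_layers).sum(dim=0)
print(torch.mean(sum_last_four, dim=1).squeeze().size())  # torch.Size([768])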
Summary
- Different layers encode different kinds of features; the experiments with different embedding combinations demonstrate this.
- BERT is, at its core, a feature extractor.
- Study the two figures in the original post and the sample code carefully; the rest is a matter of building intuition.
References
https://github.com/huggingface/transformers/issues/2986
https://github.com/BramVanroy/bert-for-inference/blob/master/introduction-to-bert.ipynb
https://www.cnblogs.com/gczr/p/11785930.html
Source: oschina
Link: https://my.oschina.net/u/4364157/blog/4660706