Concatenating two pre-trained BERT

Asked by 迷失自我 on 2021-01-02 14:07
from transformers import RobertaTokenizer

max_length = 50
tokenizer = RobertaTokenizer.from_pretrained('roberta-large', do_lower_case=True)
# The original line was cut off after max_length=...; the padding/truncation arguments below are assumed.
encodings = tokenizer.batch_encode_plus(comments, max_length=max_length, padding='max_length', truncation=True)
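For context, a minimal sketch of one common way to "concatenate" two pre-trained encoders: run the input through both models and concatenate their pooled outputs before a classification head. The second model (bert-base-uncased), the hidden sizes, and num_labels are placeholder assumptions, not taken from the question.

    import torch
    import torch.nn as nn
    from transformers import RobertaModel, BertModel

    class DualEncoderClassifier(nn.Module):
        def __init__(self, num_labels=2):
            super().__init__()
            # Two independent pre-trained encoders (names are illustrative).
            self.encoder_a = RobertaModel.from_pretrained('roberta-large')   # hidden size 1024
            self.encoder_b = BertModel.from_pretrained('bert-base-uncased')  # hidden size 768
            self.classifier = nn.Linear(1024 + 768, num_labels)

        def forward(self, input_ids_a, attention_mask_a, input_ids_b, attention_mask_b):
            # Each encoder receives ids produced by its own tokenizer.
            out_a = self.encoder_a(input_ids=input_ids_a, attention_mask=attention_mask_a)
            out_b = self.encoder_b(input_ids=input_ids_b, attention_mask=attention_mask_b)
            # Pool via the first token (<s> / [CLS]) of each encoder.
            cls_a = out_a.last_hidden_state[:, 0]
            cls_b = out_b.last_hidden_state[:, 0]
            # Concatenate along the feature dimension, then classify.
            fused = torch.cat([cls_a, cls_b], dim=-1)
            return self.classifier(fused)

Note that each encoder keeps its own tokenizer, so the batch from batch_encode_plus above would only feed the RoBERTa branch; the second branch needs its own encoded inputs.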


        