I am trying to add some special tokens to a sentence-transformers model. To add them to the tokenizer, I call:
model.tokenizer.add_special_tokens(special_tokens_dict)
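
For context, here is a minimal sketch of what I am doing; the model name and the special tokens themselves are placeholders, not the ones I actually use. As far as I understand, after add_special_tokens the new tokens only exist in the tokenizer, so the underlying transformer's embedding matrix also has to be resized to match:

from sentence_transformers import SentenceTransformer

# Placeholder model and tokens, just for illustration
model = SentenceTransformer("all-MiniLM-L6-v2")

special_tokens_dict = {"additional_special_tokens": ["[ENT]", "[/ENT]"]}
num_added = model.tokenizer.add_special_tokens(special_tokens_dict)
print(f"Added {num_added} tokens; tokenizer vocab size is now {len(model.tokenizer)}")

# The new tokens are only registered in the tokenizer at this point; I believe
# the underlying transformer's embedding matrix still has to be resized:
model._first_module().auto_model.resize_token_embeddings(len(model.tokenizer))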