I'm trying to fine-tune BERT for document classification.
I started by tokenizing the documents to generate the input_ids and attention_mask.
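For reference, this is roughly what my tokenization step looks like. It's a minimal sketch assuming the Hugging Face transformers library and the `bert-base-uncased` checkpoint; the document list is just placeholder data.

```python
# Minimal tokenization sketch, assuming Hugging Face transformers
# and the "bert-base-uncased" checkpoint (placeholder documents).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

docs = [
    "First example document.",
    "Second, somewhat longer example document for classification.",
]

# Pad to the longest document in the batch and truncate at BERT's
# 512-token limit; the encoding contains input_ids and attention_mask.
enc = tokenizer(docs, padding=True, truncation=True, max_length=512)

print(sorted(enc.keys()))
```

The resulting `enc["input_ids"]` and `enc["attention_mask"]` can then be fed to the model (as tensors, e.g. via `return_tensors="pt"`) for fine-tuning.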