Performing inference with a BERT (TF 1.x) saved model

隐瞒了意图╮ 2021-01-21 20:17

I'm stuck on one line of code and have been stalled on a project all weekend as a result.

I am working on a project that uses BERT for sentence classification.

1 Answer
  • 2021-01-21 20:25

    Thank you for this post. Your serving_input_fn was the piece I was missing! Your predict function needs to be changed to feed the features dict directly, rather than using the predict_input_fn:

    from tensorflow.contrib import predictor  # TF 1.x
    import run_classifier  # from the BERT repo (google-research/bert)

    # MAX_SEQ_LEN and tokenizer are assumed to be the same ones used for fine-tuning
    def predict(sentences):
        labels = [0, 1]
        input_examples = [
            run_classifier.InputExample(
                guid="",      # "" is just a dummy guid
                text_a=x,
                text_b=None,
                label=0       # 0 is a dummy label; it is ignored at inference time
            ) for x in sentences]
        input_features = run_classifier.convert_examples_to_features(
            input_examples, labels, MAX_SEQ_LEN, tokenizer
        )
        # this is where pred_input_fn is replaced: build the feature dict by hand
        all_input_ids = []
        all_input_mask = []
        all_segment_ids = []
        all_label_ids = []

        for feature in input_features:
            all_input_ids.append(feature.input_ids)
            all_input_mask.append(feature.input_mask)
            all_segment_ids.append(feature.segment_ids)
            all_label_ids.append(feature.label_id)
        pred_dict = {
            'input_ids': all_input_ids,
            'input_mask': all_input_mask,
            'segment_ids': all_segment_ids,
            'label_ids': all_label_ids
        }
        predict_fn = predictor.from_saved_model('../testing/1589418540')
        result = predict_fn(pred_dict)
        print(result)
    
    pred_sentences = [
      "That movie was absolutely awful",
      "The acting was a bit lacking",
      "The film was creative and surprising",
      "Absolutely fantastic!",
    ]
    predict(pred_sentences)
    {'probabilities': array([[-0.3579178 , -1.2010787 ],
           [-0.36648935, -1.1814401 ],
           [-0.30407643, -1.3386648 ],
           [-0.45970002, -0.9982413 ],
           [-0.36113673, -1.1936386 ],
           [-0.36672896, -1.1808994 ]], dtype=float32), 'labels': array([0, 0, 0, 0, 0, 0])}
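
    Note that the values under the 'probabilities' key are negative because they are log-probabilities (the BERT run_classifier graph applies a log-softmax). A minimal sketch of recovering actual probabilities and predicted labels with NumPy, using two of the rows above as example values:

        import numpy as np

        # Log-probabilities as returned under the 'probabilities' key (example rows)
        log_probs = np.array([[-0.3579178, -1.2010787],
                              [-0.45970002, -0.9982413]])

        probs = np.exp(log_probs)          # back to probabilities; each row sums to ~1
        preds = np.argmax(probs, axis=1)   # index of the most likely class per sentence

        print(probs)
        print(preds)
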
    

    However, the probabilities returned for the sentences in pred_sentences do not match the probabilities I get using estimator.predict(predict_input_fn), where estimator is the fine-tuned model used within the same (Python) session. For example, [-0.27276006, -1.4324446] using estimator vs. [-0.26713806, -1.4505868] using predictor.
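
    To quantify how far the two runs diverge, the log-probabilities can be compared elementwise. A small sketch using the two example rows quoted above (not the full outputs); the observed gap is on the order of 1e-2, which is larger than float32 rounding noise alone would explain:

        import numpy as np

        est = np.array([-0.27276006, -1.4324446])   # from estimator.predict
        sav = np.array([-0.26713806, -1.4505868])   # from the saved-model predictor

        print(np.abs(est - sav).max())              # maximum absolute difference
        print(np.allclose(est, sav, atol=1e-2))     # within a 1e-2 tolerance?
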
