In Python, I trained an image classification model with Keras that takes a [224, 224, 3] array as input and outputs a prediction (1 or 0). When I load the saved model and
When you convert the Caffe model to an MLModel, you need to add this line:

    image_input_names = 'data'

Taking my own conversion script as an example, it should look like this:
    import coremltools

    coreml_model = coremltools.converters.caffe.convert(
        ('gender_net.caffemodel', 'deploy_gender.prototxt'),
        image_input_names='data',
        class_labels='genderLabel.txt')
    coreml_model.save('GenderMLModel.mlmodel')
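Since you trained your model with Keras rather than Caffe, note that the Keras converter in the same (older) coremltools API accepts the same flag. A minimal sketch, assuming your model is saved as an HDF5 file (the file name and label list here are placeholders, not from your setup):

    import coremltools

    # Hypothetical file name; substitute your own saved Keras model.
    # image_input_names tells Core ML to treat 'data' as an image input,
    # so the generated MLModel expects a CVPixelBufferRef, not an MLMultiArray.
    coreml_model = coremltools.converters.keras.convert(
        'my_model.h5',
        input_names='data',
        image_input_names='data',
        class_labels=['0', '1'])
    coreml_model.save('MyClassifier.mlmodel')
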
And then your MLModel's input will be a CVPixelBufferRef instead of an MLMultiArray. Converting a UIImage to a CVPixelBufferRef is straightforward.
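For completeness, here is one common way to do that conversion on the iOS side. This is a sketch, not your model's exact requirements: the 224×224 size and BGRA format are assumptions that must match the input description of your generated MLModel.

    import UIKit
    import CoreVideo

    // Convert a UIImage into a 224x224 BGRA CVPixelBuffer for Core ML.
    // Width, height, and pixel format must match the model's image input.
    func pixelBuffer(from image: UIImage,
                     width: Int = 224, height: Int = 224) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue!,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue!] as CFDictionary
        var buffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32BGRA, attrs, &buffer)
        guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: width, height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue |
                        CGBitmapInfo.byteOrder32Little.rawValue)
        else { return nil }

        UIGraphicsPushContext(context)
        // CGContext's origin is bottom-left; flip so the image draws upright.
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        image.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        return pixelBuffer
    }

The resulting buffer can be passed directly to the generated MLModel's prediction method.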