Question
I have a PyTorch network and I want to deploy it to iOS devices. In short, I am failing to add flexibility to the input tensor shape in CoreML.
The network is a convnet that takes an RGB image (stored as a tensor) as an input and returns an RGB image of the same size. Using PyTorch, I can input images of any size I want, for instance a tensor of size (1, 3, 300, 300) for a 300x300 image.
To convert the PyTorch model to a CoreML model, I first convert it to an ONNX model using torch.onnx.export. This function requires passing a dummy input so that it can execute the graph. So I did, using:
input = torch.rand(1, 3, 300, 300)
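For reference, a minimal export sketch (the stand-in one-layer convnet and the file/tensor names here are placeholders, not from the original post):

import torch
import torch.nn as nn

# Stand-in convnet: RGB image in, RGB image of the same size out.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1).eval()

dummy_input = torch.rand(1, 3, 300, 300)  # (batch, channels, height, width)

# torch.onnx.export traces the graph with the dummy input, so its shape is
# baked into the ONNX model unless dynamic_axes marks dimensions as variable.
torch.onnx.export(
    model,
    dummy_input,
    'my_model.onnx',
    input_names=['my_image'],
    output_names=['my_output'],
)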
My guess is that the ONNX model only accepts images / tensors of size (1, 3, 300, 300). Now, I can use the onnx_coreml.convert function to convert the ONNX model to a CoreML model. By printing the CoreML model's spec description using Python, I get something like:
input {
  name: "my_image"
  type {
    multiArrayType {
      shape: 1
      shape: 3
      shape: 300
      shape: 300
      dataType: FLOAT32
    }
  }
}
output {
  name: "my_output"
  type {
    multiArrayType {
      shape: 1
      shape: 3
      shape: 300
      shape: 300
      dataType: FLOAT32
    }
  }
}
metadata {
  userDefined {
    key: "coremltoolsVersion"
    value: "3.1"
  }
}
The model's input must be a multiArrayType of size (1, 3, 300, 300). After copying this model into Xcode, I can see while inspecting the model that my_image is listed under the "Inputs" section and is expected to be a MultiArray (Float32 1 x 3 x 300 x 300). So far, everything is coherent.
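For completeness, the ONNX-to-CoreML conversion step was along these lines (a sketch; the file names are assumptions, not from the original post):

import onnx_coreml

# Convert the exported ONNX model to a CoreML model.
mlmodel = onnx_coreml.convert(model='my_model.onnx')
mlmodel.save('my_model.mlmodel')

# Print the model's spec description (the output shown above).
print(mlmodel.get_spec().description)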
My problem is adding flexibility to the input shape. I tried to use coremltools, with no luck. Here is my code:
import coremltools
from coremltools.models.neural_network import flexible_shape_utils

# Load the spec of the fixed-shape model.
spec = coremltools.utils.load_spec('my_model.mlmodel')

# Build a 3-d (channel, height, width) shape range.
shape_range = flexible_shape_utils.NeuralNetworkMultiArrayShapeRange()
shape_range.add_channel_range((3, 3))
shape_range.add_height_range((64, 5000))
shape_range.add_width_range((64, 5000))

# Attach the range to the input and save the flexible model.
flexible_shape_utils.update_multiarray_shape_range(spec, feature_name='my_image', shape_range=shape_range)
coremltools.models.utils.save_spec(spec, 'my_flexible_model.mlmodel')
I get the following spec description using Python:
input {
  name: "my_image"
  type {
    multiArrayType {
      shape: 1
      shape: 3
      shape: 300
      shape: 300
      dataType: FLOAT32
      shapeRange {
        sizeRanges {
          lowerBound: 3
          upperBound: 3
        }
        sizeRanges {
          lowerBound: 64
          upperBound: 5000
        }
        sizeRanges {
          lowerBound: 64
          upperBound: 5000
        }
      }
    }
  }
}
Only 3 ranges, as specified, which makes sense since I only defined a range for the channel, height, and width dimensions, but not for the batch size. In Xcode, I get the following error when inspecting the flexible CoreML model:
There was a problem decoding this CoreML document
validator error: Description of multiarray feature 'my_image' has a default 4-d shape but a 3-d shape range
I'm pretty sure this was working on another project when I was on macOS Mojave, but at this point I'm sure of nothing.
I'm using:
- macOS Catalina
- conda 4.7.12
- python 3.7.5
- pytorch 1.3.1
- onnx 1.6.0
- onnx-coreml 1.1
- coremltools 3.1
Thanks for the help
Answer 1:
The easiest thing to do is to remove that shape: 1 entry. Something like this:
del spec.description.input[0].type.multiArrayType.shape[0]
Now the default shape should also have 3 dimensions.
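Spelled out against the code from the question (a sketch, reusing its file names):

import coremltools

spec = coremltools.utils.load_spec('my_flexible_model.mlmodel')

# Drop the leading batch dimension so the default shape is 3-d,
# matching the 3-d (channel, height, width) shape range.
del spec.description.input[0].type.multiArrayType.shape[0]

coremltools.models.utils.save_spec(spec, 'my_flexible_model.mlmodel')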
However, I would suggest changing the type of the input from a multi-array to an actual image, since you're going to be using it with images anyway. That will let you pass in the image as a CVPixelBuffer or CGImage object instead of an MLMultiArray.
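As a sketch of that approach (these calls are my suggestion, not from the original answer): onnx-coreml can declare inputs/outputs as images at conversion time via image_input_names / image_output_names, and coremltools can then attach a flexible image size range:

import coremltools
import onnx_coreml
from coremltools.models.neural_network import flexible_shape_utils

# Re-convert, declaring the input and output as images rather than multi-arrays.
mlmodel = onnx_coreml.convert(
    model='my_model.onnx',
    image_input_names=['my_image'],
    image_output_names=['my_output'],
)
mlmodel.save('my_image_model.mlmodel')

# Attach a flexible (height, width) range to the image input.
spec = coremltools.utils.load_spec('my_image_model.mlmodel')
size_range = flexible_shape_utils.NeuralNetworkImageSizeRange()
size_range.add_height_range((64, 5000))
size_range.add_width_range((64, 5000))
flexible_shape_utils.update_image_size_range(spec, feature_name='my_image', size_range=size_range)
coremltools.models.utils.save_spec(spec, 'my_flexible_image_model.mlmodel')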
Source: https://stackoverflow.com/questions/59662399/coremltools-how-to-properly-use-neuralnetworkmultiarrayshaperange