coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?
I have a PyTorch network that I want to deploy to iOS devices. In short, I am failing to add flexibility to the input tensor shape in CoreML.

The network is a convnet that takes an RGB image (stored as a tensor) as input and returns an RGB image of the same size. With PyTorch, I can feed it images of any size, for instance a tensor of shape (1, 3, 300, 300) for a 300x300 image.

To convert the PyTorch model to a CoreML model, I first convert it to an ONNX model using torch.onnx.export.