coreml

CoreML: “Unexpected error processing model” error sometimes occurring

╄→гoц情女王★ Submitted on 2020-01-31 19:31:07
Question: Context: I'm using a custom CoreML model made by a data scientist. The model is a pipeline whose first stage is Apple's SoundAnalysisPreprocessing model, followed by a custom model that contains several convolution layers and a softmax. Issue: When running a prediction, I sometimes get "Unexpected error processing model". With the exact same input, I sometimes get a correct result and sometimes I get this error. Question: I have no clue what to do…

How to use a retrained “tensorflow for poets” graph on iOS?

空扰寡人 Submitted on 2020-01-25 06:30:30
Question: With "tensorflow for poets", I retrained the inceptionv3 graph. Now I want to use the tfcoreml converter to convert the graph to an iOS CoreML model. But tf_coreml_converter.py stops with "NotImplementedError: Unsupported Ops of type: PlaceholderWithDefault". I already tried "optimize_for_inference" and "strip_unused", but I can't get rid of this unsupported op "PlaceholderWithDefault". Any idea what steps are needed after training in tensorflow-for-poets to convert a "tensorflow-for-poets"…
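The usual workaround for this converter error is to rewrite `PlaceholderWithDefault` nodes as plain `Placeholder` nodes in the graph before handing it to tfcoreml. A simplified sketch of that rewrite, using plain dicts in place of real `GraphDef` node protos so the idea is easy to see (node names here are made up for illustration):

```python
# Sketch: remap PlaceholderWithDefault ops to plain Placeholder ops.
# Real code would walk a tf.GraphDef's node list; plain dicts stand in
# for NodeDef messages here. Node names are hypothetical.

def strip_placeholder_with_default(nodes):
    """Return a copy of the node list with every PlaceholderWithDefault
    replaced by a Placeholder (its default-value input is dropped)."""
    out = []
    for node in nodes:
        node = dict(node)  # don't mutate the caller's graph
        if node["op"] == "PlaceholderWithDefault":
            node["op"] = "Placeholder"
            node["inputs"] = []  # the default-value input is no longer needed
        out.append(node)
    return out

graph = [
    {"name": "default_const", "op": "Const", "inputs": []},
    {"name": "input", "op": "PlaceholderWithDefault", "inputs": ["default_const"]},
    {"name": "conv1", "op": "Conv2D", "inputs": ["input"]},
]

cleaned = strip_placeholder_with_default(graph)
print([n["op"] for n in cleaned])  # → ['Const', 'Placeholder', 'Conv2D']
```

In a real graph the same substitution can be done on the frozen `GraphDef` (iterating `graph_def.node` and editing `node.op` in place) before running tfcoreml — a sketch of the idea, not a confirmed fix for every retrained graph.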

How to properly implement data reorganization using PyTorch?

好久不见. Submitted on 2020-01-23 12:36:25
Question: It's going to be a long post, sorry in advance... I'm working on a denoising algorithm and my goal is to: use PyTorch to design / train the model, then convert the PyTorch model into a CoreML model. The denoising algorithm consists of the following 3 parts: a "down-sampling" + noise level map, a regular convnet, and an "up-sampling". The first part is quite simple in its idea, but not so easy to explain. Given for instance an input color image and an input value "sigma" that represents the standard…
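The "down-sampling" + noise-level-map step the question describes is commonly a space-to-depth (pixel-unshuffle) rearrangement with a constant sigma channel appended. A minimal numpy sketch under that assumption (the HWC layout, factor of 2, and sigma handling are illustrative, not taken from the questioner's model):

```python
import numpy as np

def downsample_with_noise_map(img, sigma, factor=2):
    """Space-to-depth rearrangement of an HWC image, plus a constant
    noise-level map appended as one extra channel.

    img:    (H, W, C) array with H and W divisible by factor
    sigma:  scalar noise level
    returns (H//factor, W//factor, C*factor**2 + 1) array
    """
    h, w, c = img.shape
    # Split each spatial axis into (blocks, factor), then move the two
    # factor axes into the channel dimension.
    x = img.reshape(h // factor, factor, w // factor, factor, c)
    x = x.transpose(0, 2, 1, 3, 4).reshape(h // factor, w // factor, c * factor ** 2)
    noise_map = np.full((h // factor, w // factor, 1), sigma, dtype=img.dtype)
    return np.concatenate([x, noise_map], axis=-1)

img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
out = downsample_with_noise_map(img, sigma=0.5)
print(out.shape)  # → (2, 2, 13)
```

In PyTorch the same rearrangement is `F.pixel_unshuffle` (or a `reshape`/`permute` pair), which matters for CoreML conversion because reshape/permute export much more predictably than custom indexing.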

Text detection in images

瘦欲@ Submitted on 2020-01-01 07:21:26
Question: I am using the sample code below for text detection in images (not handwritten) using CoreML and Vision: https://github.com/DrNeuroSurg/OCRwithVisionAndCoreML-Part2 In it they use a machine learning model which supports only uppercase letters and numbers, whereas in my project I want uppercase, lowercase, numbers, and a few special characters (like : and -). I do not have any experience in Python to make the required changes and generate the required .mlmodel file using training data (which again I don't…

error: Cannot subscript a value of type '[String : Any]' with an index of type 'UIImagePickerController.InfoKey' [duplicate]

橙三吉。 Submitted on 2019-12-31 01:48:29
Question: This question already has answers here: Cannot subscript a value of type '[String : Any]' with an index of type 'UIImagePickerController.InfoKey' (8 answers). Closed last year. I am trying to rebuild the Apple sample app for image detection via CoreML, but I have the error: Cannot subscript a value of type '[String : Any]' with an index of type 'UIImagePickerController.InfoKey'. extension ImageClassificationViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate { func…

Converting UIImage to MLMultiArray for Keras Model

会有一股神秘感。 Submitted on 2019-12-30 10:45:14
Question: In Python, I trained an image classification model with Keras to receive input as a [224, 224, 3] array and output a prediction (1 or 0). When I save the model and load it into Xcode, it states that the input has to be in MLMultiArray format. Is there a way for me to convert a UIImage into MLMultiArray format? Or is there a way for me to change my Keras model to accept CVPixelBuffer type objects as an input? Answer 1: In your Core ML conversion script you can supply the parameter image_input_names…
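The cleaner fix the answer points at is re-converting with coremltools' `image_input_names`, so Xcode generates a `CVPixelBuffer` input and no UIImage-to-MLMultiArray conversion is needed. If the model must keep its multi-array input, the pixel data has to be repacked into a normalized float array; a numpy sketch of that repacking (the [224, 224, 3] shape comes from the question, but the 0–1 normalization is an assumption about how the model was trained):

```python
import numpy as np

def image_to_model_input(pixels):
    """Repack an HWC uint8 image into the float32 layout a
    [224, 224, 3] MLMultiArray input would hold (scaled to 0..1).

    On device the equivalent loop would write into the MLMultiArray's
    buffer; numpy stands in here to show the layout and scaling.
    """
    assert pixels.shape == (224, 224, 3) and pixels.dtype == np.uint8
    return pixels.astype(np.float32) / 255.0

rgb = np.zeros((224, 224, 3), dtype=np.uint8)
rgb[..., 0] = 255  # a pure-red test image
arr = image_to_model_input(rgb)
print(arr.shape, arr.max())  # → (224, 224, 3) 1.0
```

If the normalization baked into training differs (e.g. mean subtraction), the scaling here must match it; mismatched preprocessing is a common cause of a converted model predicting nonsense.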

Error preparing CoreML model: “<something>” is not supported for CoreML code generation

丶灬走出姿态 Submitted on 2019-12-24 03:42:51
Question: I am modifying the code from this tutorial and I'm getting this error: Error preparing CoreML model "Resnet50.mlmodel" for code generation: Target's predominant language "Swift Interface" is not supported for CoreML code generation. Please set COREML_CODEGEN_LANGUAGE to preferred language. The project used to compile before with the "Places205-GoogLeNet" model. Anyone else experiencing the same? Answer 1: In the project settings view for your app target, change the setting COREML_CODEGEN_LANGUAGE…

Errors converting PyTorch Unet (“tiramisu”) into coreml, via onnx

半世苍凉 Submitted on 2019-12-23 04:06:42
Question: I'm trying to convert a PyTorch "tiramisu" UNet (from: https://github.com/bfortuner/pytorch_tiramisu) to CoreML, via ONNX, and I'm getting this error in onnx-coreml's _operators.py: TypeError: Error while converting op of type: Concat. Error message: Unsupported axis 1 in input of shape. Any thoughts about how I might work around this? The layers file is here, for reference: https://github.com/bfortuner/pytorch_tiramisu/blob/master/models/layers.py UPDATE 1: So, digging down further into this…
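For context, the op the converter is rejecting is the UNet skip connection: feature maps concatenated along the channel axis, which is axis 1 in PyTorch's NCHW layout. A numpy sketch of what that Concat does (the channel counts are illustrative, not taken from the tiramisu model):

```python
import numpy as np

# A UNet skip connection concatenates the decoder feature map with the
# matching encoder feature map along the channel axis (axis 1 in NCHW).
decoder = np.zeros((1, 64, 32, 32), dtype=np.float32)  # N, C, H, W
skip = np.zeros((1, 48, 32, 32), dtype=np.float32)

merged = np.concatenate([decoder, skip], axis=1)
print(merged.shape)  # → (1, 112, 32, 32)
```

The concatenation itself is ordinary; the converter typically trips when the shapes feeding the Concat aren't fully known at conversion time. Exporting with `torch.onnx.export` on a fixed-size dummy input, so every intermediate shape is concrete, is a common first thing to try here — an assumption worth testing, not a confirmed fix for this model.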

Continuously train CoreML model after shipping

那年仲夏 Submitted on 2019-12-18 11:26:55
Question: Looking over the new CoreML API, I don't see any way to continue training the model after generating the .mlmodel and bundling it in your app. This makes me think that I won't be able to perform machine learning on my users' content or actions, because the model must be entirely trained beforehand. Is there any way to add training data to my trained model after shipping? EDIT: I just noticed you could initialize a generated model class from a URL, so perhaps I can post new training data to…

How to create & train a neural model to use for Core ML [closed]

ぃ、小莉子 Submitted on 2019-12-18 02:12:18
Question: Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago. Apple introduced Core ML. There are many third parties providing trained models. But what if I want to create a model myself? How can I do that, and what tools & technologies can I use? Answer 1: Core ML doesn't provide a way to train your own models. You can only convert existing ones…
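The pipeline the answer implies is: train in a Python framework, then convert the trained model to .mlmodel with coremltools. As a toy stand-in for the training half, a minimal logistic-regression loop in plain numpy (the data and hyperparameters are made up; in practice you would train a Keras or scikit-learn model and hand it to coremltools for conversion):

```python
import numpy as np

# Toy training loop: logistic regression by gradient descent.
# In practice you would train with Keras / scikit-learn and then
# convert the trained model to .mlmodel using coremltools.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(np.float32)  # linearly separable labels

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)         # gradient of log loss w.r.t. w
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The conversion step is then a one-liner per framework in coremltools (for example its Keras and scikit-learn converters); Apple's Turi Create and, later, Create ML also cover the train-then-export path end to end.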