Question
I am seeing an exception from the WinML runtime, 'The parameter is incorrect.', when running a single-convolution ONNX model on DirectX devices.
My model runs fine on the Default and Cpu devices, and I am able to run the SqueezeNet.onnx model from the Windows Machine Learning repository on DirectX devices without issue. My model uses the same operator set id, convolution attributes, weights, and bias as the first SqueezeNet convolution. I have also run the ONNX python library's checker on my model, and it appears OK from that tool's perspective.
Is there a way to get more information on what went wrong inside the runtime? Will the API provide more information in the future, or offer a validation function?
Answer 1:
You can collect Windows Machine Learning trace messages using Logman for more detailed debugging information. See the Logman documentation for usage: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/logman
Here is an example usage of logman at a Command Prompt using the WinML provider GUID:
logman start winml -ets -o winmllog.etl -nb 128 640 -bs 128
logman update trace winml -p {BCAD6AEE-C08D-4F66-828C-4C43461A033D} 0x0 0x0 -ets
Run your scenario or application
logman stop winml -ets
You can then view the produced ETL file with a viewer such as Windows Performance Analyzer.
Answer 2:
Another way to get detailed error messages is to run your application under a debugger.
When the Windows AI runtime hits an issue, it reports it through RoOriginateError with an informative string, and you will be able to see that error string right in the debugger.
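As a side note, the exception text quoted in the question is the standard Windows message for HRESULT 0x80070057 (E_INVALIDARG). On Windows you would recover such strings with something like `_com_error(hr).ErrorMessage()` or `winrt::hresult_error::message()`; the portable sketch below simply hard-codes two common codes to illustrate the HRESULT-to-message mapping.

```cpp
#include <cstdint>
#include <string>

// Well-known HRESULT values (illustrative subset).
constexpr std::uint32_t kEInvalidArg = 0x80070057u; // E_INVALIDARG
constexpr std::uint32_t kEFail       = 0x80004005u; // E_FAIL

// Portable stand-in for the message lookup a debugger (or _com_error)
// performs on Windows; real code would query the OS, not a switch.
std::string describe_hresult(std::uint32_t hr) {
    switch (hr) {
        case kEInvalidArg: return "The parameter is incorrect.";
        case kEFail:       return "Unspecified error";
        default:           return "Unknown HRESULT";
    }
}
```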
Source: https://stackoverflow.com/questions/52977756/exception-the-parameter-is-incorrect-when-attempting-to-run-an-onnx-model-wi