onnxruntime error:
Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:324 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. -1 by 115
Stacktrace:
Traceback (most recent call last):
  File "onnx_demo.py", line 24, in <module>
    ort_session = ort.InferenceSession('./src/Fonnx/tranformer.onnx')
  File "/root/miniconda3/envs/torch120/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 29, in __init__
    self._sess.load_model(path_or_bytes)
RuntimeError: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:324 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. -1 by 115
Stacktrace:
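The failure happens while onnxruntime is initializing the graph, before any input is fed: an element-wise op is asked to broadcast a dynamic dimension (-1) against a fixed size (115). If you want to locate the offending node before re-exporting, a minimal inspection sketch (using the model path from the traceback; adjust it to your own file) is:

import onnx
from onnx import shape_inference

# Load and validate the exported model.
model = onnx.load('./src/Fonnx/tranformer.onnx')
onnx.checker.check_model(model)

# Run ONNX shape inference and list the element-wise ops, so the node
# whose inputs broadcast -1 against 115 can be found in the graph.
inferred = shape_inference.infer_shapes(model)
for node in inferred.graph.node:
    if node.op_type in ('Add', 'Mul', 'Sub', 'Div'):
        print(node.name or node.op_type, node.input)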
Solution
When exporting the ONNX model from PyTorch, add do_constant_folding=True ("whether to execute constant folding for optimization") to the torch.onnx.export() call.
Official example:
# Export the model
torch.onnx.export(torch_model,               # model being run
                  x,                         # model input (or a tuple for multiple inputs)
                  "super_resolution.onnx",   # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=10,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'],   # the model's output names
                  dynamic_axes={'input': {0: 'batch_size'},    # variable length axes
                                'output': {0: 'batch_size'}})
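After re-exporting with do_constant_folding=True, a quick way to confirm the fix is to create the InferenceSession again (this is the step that previously raised the error) and run a dummy input. A minimal sketch, assuming the tutorial's file name; the input shape below is an assumption and must match whatever shape x had at export time:

import numpy as np
import onnxruntime as ort

# Session creation is where the original RUNTIME_EXCEPTION occurred.
ort_session = ort.InferenceSession("super_resolution.onnx")

# Dummy input -- the shape here is an assumption; only the batch axis
# was declared dynamic via dynamic_axes, so the rest must match the export.
input_name = ort_session.get_inputs()[0].name
dummy = np.random.randn(1, 1, 224, 224).astype(np.float32)

outputs = ort_session.run(None, {input_name: dummy})
print([o.shape for o in outputs])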