[10/21/2024-17:11:29] [I]
[10/21/2024-17:11:29] [I] TensorRT version: 8.4.1
[10/21/2024-17:11:30] [I] [TRT] [MemUsageChange] Init CUDA: CPU +218, GPU +0, now: CPU 242, GPU 10401 (MiB)
[10/21/2024-17:11:33] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +351, GPU +327, now: CPU 612, GPU 10767 (MiB)
[10/21/2024-17:11:33] [I] Start parsing network model
[10/21/2024-17:11:33] [I] [TRT] ----------------------------------------------------------------
[10/21/2024-17:11:33] [I] [TRT] Input filename: abandon_p0.963_r0.933_v3.0_ptq.onnx
[10/21/2024-17:11:33] [I] [TRT] ONNX IR version: 0.0.7
[10/21/2024-17:11:33] [I] [TRT] Opset version: 13
[10/21/2024-17:11:33] [I] [TRT] Producer name: pytorch
[10/21/2024-17:11:33] [I] [TRT] Producer version: 1.12.0
[10/21/2024-17:11:33] [I] [TRT] Domain:
[10/21/2024-17:11:33] [I] [TRT] Model version: 0
[10/21/2024-17:11:33] [I] [TRT] Doc string:
[10/21/2024-17:11:33] [I] [TRT] ----------------------------------------------------------------
[10/21/2024-17:11:33] [W] [TRT] onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[10/21/2024-17:11:33] [E] Error[3]: onnx::QuantizeLinear_2152: invalid weights type of Int8
[10/21/2024-17:11:33] [E] [TRT] ModelImporter.cpp:773: While parsing node number 9 [Identity -> "onnx::QuantizeLinear_2358"]:
[10/21/2024-17:11:33] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[10/21/2024-17:11:33] [E] [TRT] ModelImporter.cpp:775: input: "onnx::QuantizeLinear_2152"
output: "onnx::QuantizeLinear_2358"
name: "Identity_9"
op_type: "Identity"
[10/21/2024-17:11:33] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[10/21/2024-17:11:33] [E] [TRT] ModelImporter.cpp:778: ERROR: ModelImporter.cpp:180 In function parseGraph:
[6] Invalid Node - Identity_9
onnx::QuantizeLinear_2152: invalid weights type of Int8
[10/21/2024-17:11:33] [E] Failed to parse onnx file
[10/21/2024-17:11:33] [I] Finish parsing network model
[10/21/2024-17:11:33] [E] Parsing model failed
[10/21/2024-17:11:34] [E] Failed to create engine from model or file.
[10/21/2024-17:11:34] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=yolov7x_ptq.onnx --saveEngine=yolov7x_ptq.engine --int8
Looking forward to your reply.
I have found the cause and resolved it. Thanks.
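For anyone who hits the same `invalid weights type of Int8` error: in TensorRT 8.4-era ONNX parsers, this failure is commonly triggered by `Identity` nodes sitting between int8 initializers and `QuantizeLinear` in PTQ-exported models. Constant-folding the graph before building the engine often removes the offending `Identity` nodes. The following is a sketch only, assuming Polygraphy (`pip install polygraphy`) is available and reusing the file names from the log above; the `_folded` output name is an arbitrary choice:

```shell
# Sketch of a common workaround (not confirmed as the poster's fix):
# fold constants so Identity -> QuantizeLinear chains are collapsed.
polygraphy surgeon sanitize yolov7x_ptq.onnx \
    --fold-constants \
    -o yolov7x_ptq_folded.onnx

# Rebuild the engine from the sanitized model with the same trtexec flags.
/usr/src/tensorrt/bin/trtexec --onnx=yolov7x_ptq_folded.onnx \
    --saveEngine=yolov7x_ptq.engine --int8
```

If that is not enough, upgrading to a newer TensorRT release with improved Q/DQ support is the other usual route.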