
# TensorRT Inference for a ResNet Classification Model

This project shows the workflow for running a ResNet classification model with TensorRT: convert the model to an engine, then run inference from C++ or Python.

## Model Conversion

```bash
trtexec --onnx=resnet.onnx --saveEngine=resnet.engine --fp16 --verbose
```
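The file written by `--saveEngine` is a serialized TensorRT engine. For illustration only (this is not code from this repo; the `resnet::load` used below presumably wraps equivalent logic), deserializing such a file with the TensorRT 8.x C++ runtime looks roughly like this:

```cpp
// Sketch only: deserialize an engine file produced by trtexec (TensorRT 8.x C++ API).
// Error handling and resource cleanup are kept minimal.
#include <NvInfer.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

class SimpleLogger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) printf("[TRT] %s\n", msg);
    }
};

int main() {
    // read the engine file produced by trtexec into memory
    std::ifstream file("resnet.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    SimpleLogger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());

    printf("engine loaded: %s\n", engine != nullptr ? "ok" : "failed");
    return 0;
}
```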

## Code Usage

1. Direct inference

   ```cpp
   cv::Mat image = cv::imread("inference/car.jpg");
   auto resnet = resnet::load("resnet.engine");
   if (resnet == nullptr) return;
   auto attr = resnet->forward(cvimg(image));
   printf("score : %lf, label : %d\n", attr.confidence, attr.class_label);
   /*
   [infer.cu:393]: Infer 0x564a443b3440 [StaticShape]
   [infer.cu:405]: Inputs: 1
   [infer.cu:409]:     0.input.1 : shape {1x3x224x224}
   [infer.cu:412]: Outputs: 1
   [infer.cu:416]:     0.343 : shape {1x3}
   score : 0.997001, label : 2
   */
   ```
    
2. cpm mode (see the batched-commit sketch after this list)

   ```cpp
   cv::Mat image = cv::imread("inference/car.jpg");

   cpm::Instance<resnet::Attribute, resnet::Image, resnet::Infer> cpmi;
   bool ok = cpmi.start([] { return resnet::load("resnet.engine"); }, max_infer_batch);

   cpmi.commit(cvimg(image)).get();
   ```
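In cpm mode, `commit` submits a request and returns a handle that is resolved later with `.get()`, which presumably lets the instance batch pending requests up to `max_infer_batch`. A rough sketch of committing several images before collecting the results (not repo code; it assumes the same headers and API as the snippets above):

```cpp
// Sketch, not repo code: commit two images before collecting either result so the
// cpm worker has a chance to batch them. Assumes commit() returns a future-like
// handle (as the .get() call above implies) and a user-defined max_infer_batch.
cv::Mat image_a = cv::imread("inference/car.jpg");
cv::Mat image_b = cv::imread("inference/car.jpg");   // placeholder second image

cpm::Instance<resnet::Attribute, resnet::Image, resnet::Infer> cpmi;
if (!cpmi.start([] { return resnet::load("resnet.engine"); }, max_infer_batch)) return;

auto fut_a = cpmi.commit(cvimg(image_a));   // non-blocking submit
auto fut_b = cpmi.commit(cvimg(image_b));

auto attr_a = fut_a.get();                  // block until each result is ready
auto attr_b = fut_b.get();
printf("a: score %lf label %d, b: score %lf label %d\n",
       attr_a.confidence, attr_a.class_label,
       attr_b.confidence, attr_b.class_label);
```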
    

## Inference Time

| Model    | Precision | Time       |
| -------- | --------- | ---------- |
| resnet34 | fp16      | 0.49488 ms |
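For reference, one way to measure this kind of per-image latency is to time `forward` over many iterations after a short warm-up. This is only a sketch using the C++ API shown above, not necessarily how the number in the table was produced:

```cpp
// Sketch: average forward() latency over many runs after warming up.
// Assumes the same resnet API used in the usage examples above.
#include <chrono>
#include <cstdio>

void benchmark() {
    cv::Mat image = cv::imread("inference/car.jpg");
    auto resnet = resnet::load("resnet.engine");
    if (resnet == nullptr) return;

    for (int i = 0; i < 10; ++i) resnet->forward(cvimg(image));   // warm-up

    const int iters = 1000;
    auto begin = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) resnet->forward(cvimg(image));
    auto end = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(end - begin).count() / iters;
    printf("average latency: %.5f ms over %d runs\n", ms, iters);
}
```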

## Python Inference

Note: the C++ code is compiled into a shared library (`.so`) with pybind11, so Python can `import` and call it directly:

```python
from workspace import trtresnet
import cv2

infer = trtresnet.TrtResnetInfer("workspace/resnet.engine")

# inference from an image path
result = infer.forward_path("workspace/inference/car.jpg")
print(result)

# inference from a decoded image (numpy array)
image = cv2.imread("workspace/inference/car.jpg")
result = infer.forward(image)
print(result)
```
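The binding code itself is not shown in this README. As a rough sketch (with a hypothetical wrapper class, not this repo's actual sources), a pybind11 module exposing such an interface could look like:

```cpp
// Sketch only (hypothetical wrapper, not this repo's binding code):
// expose a C++ inference class to Python as the `trtresnet` module via pybind11.
#include <pybind11/pybind11.h>
#include <string>

namespace py = pybind11;

class TrtResnetInfer {
public:
    explicit TrtResnetInfer(const std::string &engine_path) {
        // load the TensorRT engine here, e.g. via the repo's resnet::load
    }
    py::tuple forward_path(const std::string &image_path) {
        // read the image, run inference, and return (confidence, label);
        // dummy values stand in for the real result in this sketch
        return py::make_tuple(0.0f, 0);
    }
};

PYBIND11_MODULE(trtresnet, m) {
    py::class_<TrtResnetInfer>(m, "TrtResnetInfer")
        .def(py::init<const std::string &>())
        .def("forward_path", &TrtResnetInfer::forward_path);
    // a `forward` overload taking a numpy array (cv2 image) would be bound similarly
}
```

The compiled module must be placed under `workspace/` (e.g. `workspace/trtresnet*.so`) so that `from workspace import trtresnet` can resolve it.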

## Reference