Run YOLOv10 on Jetson AGX Orin with TensorRT

Create a virtual environment using anaconda

$ conda create -n yolov10 python=3.8 (use Python 3.8, not 3.9: the pre-built JetPack wheels below are cp38)
$ conda activate yolov10
$ pip install -r requirements.txt (first remove torch, torchvision, and onnxruntime-gpu from requirements.txt)
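The step above requires stripping torch, torchvision, and onnxruntime-gpu out of requirements.txt by hand. A minimal sketch of that filtering (a hypothetical helper, not part of the YOLOv10 repo) so the Jetson-specific wheels installed later are not overwritten by generic PyPI packages:

```python
# Hypothetical helper: drop the packages that must come from NVIDIA's
# Jetson wheels instead of PyPI before running `pip install -r`.
EXCLUDE = ("torch", "torchvision", "onnxruntime-gpu")

def filter_requirements(lines):
    """Return requirement lines whose package name is not in EXCLUDE."""
    kept = []
    for line in lines:
        # The package name is the part before any version specifier.
        name = line.strip().split("==")[0].split(">=")[0].split("<=")[0].strip()
        if name.lower() not in EXCLUDE:
            kept.append(line.strip())
    return kept

sample = ["numpy>=1.22", "torch>=1.8", "torchvision>=0.9", "opencv-python"]
print(filter_requirements(sample))  # ['numpy>=1.22', 'opencv-python']
```

Write the kept lines to a new file and point `pip install -r` at that instead.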

Do not run the following, which would reinstall torch and torchvision from PyPI and overwrite the Jetson-specific builds:
$ pip install -e .

Install the pre-built PyTorch pip wheel

Download one of the PyTorch binaries matching your version of JetPack[2]. I downloaded torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl since my JetPack version is 5.1.2 (L4T R35.4.1).
$ sudo apt-get -y update
$ sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
$ pip3 install 'Cython<3'
$ export TORCH_INSTALL=~/Downloads/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl 
$ pip3 install numpy
$ pip3 install --no-cache $TORCH_INSTALL
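The wheel filename encodes the CPython version it was built for (cp38 = CPython 3.8), which is why the conda environment above must use python=3.8. A small sketch that extracts the tag and compares it to the running interpreter:

```python
import sys

def wheel_python_tag(wheel_name):
    """Extract the CPython tag (e.g. 'cp38') from a wheel filename.

    Wheel filenames follow: name-version-pythontag-abitag-platform.whl
    """
    parts = wheel_name.rsplit(".whl", 1)[0].split("-")
    return parts[-3]

wheel = "torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl"
tag = wheel_python_tag(wheel)
print(tag)  # cp38

# Compare against the interpreter you are about to install into.
current = "cp%d%d" % (sys.version_info[0], sys.version_info[1])
print("matches this interpreter:", tag == current)
```

If the tags do not match, pip will refuse the wheel with a "not a supported wheel on this platform" error.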

Install corresponding torchvision

$ pip uninstall torchvision ultralytics (Optional)
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev
$ git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
(PyTorch v2.1 needs torchvision v0.16.1)
$ cd torchvision/
$ export BUILD_VERSION=0.16.1
$ python3 setup.py install
$ pip install ultralytics
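Torchvision must be built from the branch that pairs with the installed torch. The 2.1 / 0.16 pairing comes from this guide; the other rows below are commonly published compatible pairs, which you should verify against the pytorch/vision README for your exact versions:

```python
# Torch minor series -> matching torchvision minor series.
# Only the 2.1 -> 0.16 row is taken from this guide; the rest are the
# widely documented pairings (check the pytorch/vision README to confirm).
TORCH_TO_VISION = {
    "2.1": "0.16",
    "2.0": "0.15",
    "1.13": "0.14",
    "1.12": "0.13",
}

def vision_branch_for(torch_version):
    """Map a torch version string (including NVIDIA's +nv suffixes)
    to the torchvision minor series to check out."""
    major_minor = ".".join(torch_version.split(".")[:2])
    return TORCH_TO_VISION.get(major_minor)

print(vision_branch_for("2.1.0a0+41361538.nv23.06"))  # 0.16
```

Use the result to pick the `--branch vX.Y.Z` argument for the `git clone` above.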

Verify the PyTorch installation

$ python3
>>> import torch
>>> print(torch.__version__)
>>> print('CUDA available: ' + str(torch.cuda.is_available()))
>>> print('cuDNN version: ' + str(torch.backends.cudnn.version()))

Link your tensorrt libraries to the virtual environment

$ sudo ln -s /usr/lib/python3.8/dist-packages/tensorrt* ~/anaconda3/envs/yolov10/lib/python3.8/site-packages/
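The shell expands the `tensorrt*` glob before `ln -s` runs, so one symlink is created per matching entry in dist-packages. An illustrative sketch of what that pattern picks up (the dist-info name here is hypothetical; it depends on your TensorRT version):

```python
import fnmatch

# Illustrative listing of /usr/lib/python3.8/dist-packages on JetPack 5.x;
# the exact dist-info directory name varies with the installed TensorRT.
listing = [
    "tensorrt",
    "tensorrt-8.5.2.2.dist-info",
    "numpy",
    "graphsurgeon",
]

matches = [name for name in listing if fnmatch.fnmatch(name, "tensorrt*")]
print(matches)  # the entries the `ln -s ... tensorrt*` glob would link
```

After linking, `import tensorrt` inside the activated conda environment should succeed.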

Install packages for exporting to the .engine format

$ pip install onnxsim
$ wget https://nvidia.box.com/shared/static/iizg3ggrtdkqawkmebbfixo7sce6j365.whl -O onnxruntime_gpu-1.16.0-cp38-cp38-linux_aarch64.whl
$ pip3 install onnxruntime_gpu-1.16.0-cp38-cp38-linux_aarch64.whl 

Export end-to-end ONNX and engine models

Export onnx
$ yolo export model=yolov10n/s/m/b/l/x.pt format=onnx opset=13 simplify

Export engine
$ cd /usr/src/tensorrt/bin
$ ./trtexec --onnx=~/yolov10/pretrained_models/yolov10n.onnx --explicitBatch --saveEngine=~/yolov10/pretrained_models/yolov10n.engine --workspace=1024 --fp16 --verbose --useCudaGraph --noDataTransfers --dumpProfile --separateProfileRun
or
$ yolo export model=yolov10n/s/m/b/l/x.pt format=engine half=True simplify opset=13 workspace=16

Predict with the different model formats using the following Python code

from ultralytics import YOLO
import os

# Load a pretrained YOLOv10n model (pick one format). "~" is not expanded
# automatically in Python, so expand it explicitly.
#model = YOLO(os.path.expanduser("~/yolov10/pretrained_models/yolov10n.pt"))
#model = YOLO(os.path.expanduser("~/yolov10/pretrained_models/yolov10n.onnx"))
model = YOLO(os.path.expanduser("~/yolov10/pretrained_models/yolov10n.engine"))

results = model(os.path.expanduser("~/image.jpg"))
results[0].show()

[Optional: change numpy to ver. 1.23.1 for executing TensorRT-based prediction]
$ pip uninstall numpy (original: 1.24.4; onnxruntime-gpu 1.16.0 requires numpy>=1.24.4, so pip will warn about the downgrade)
$ pip install numpy==1.23.1
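Because the pin sits below the floor that onnxruntime-gpu declares, pip reports a dependency conflict; the packages still import, but it is worth knowing why the warning appears. A sketch of the version comparison behind it:

```python
def version_tuple(v):
    """Parse a simple dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

pinned = "1.23.1"
required = "1.24.4"  # floor declared by onnxruntime-gpu 1.16.0 per the note above

# True means the pin violates the declared requirement and pip will warn.
conflict = version_tuple(pinned) < version_tuple(required)
print("pin below declared requirement:", conflict)  # True
```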


Reference:
