This repository contains custom TensorRT plugins for specialized operators. These plugins can be seamlessly integrated into your TensorRT workflow to enhance the capabilities of your deep learning models.
To build the TensorRT-OSS components, you will first need the following software packages.
- CUDA
  - Recommended versions:
    - cuda-12.2.0 + cuDNN-8.8
    - cuda-11.8.0 + cuDNN-8.8
- GNU make >= v4.1
- cmake >= v3.13
- python >= v3.6.9, <= v3.10.x
- pip >= v19.0
- Essential utilities
- PyTorch >= 2.0
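A quick way to confirm the Python-side prerequisites, assuming PyTorch is already installed:

```python
# Sanity-check the Python-level prerequisites listed above.
import sys

import torch

print(sys.version.split()[0])  # expect >= 3.6.9 and <= 3.10.x
print(torch.__version__)       # expect >= 2.0
print(torch.version.cuda)      # CUDA version PyTorch was built against
```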
Clone the repository and its submodules:

```bash
git clone https://github.com/Darth-Kronos/trt-custom-plugins
cd trt-custom-plugins
git submodule update --init --recursive
```

To use the custom operators in your TensorRT project, ensure that the prerequisites are met and follow these steps:
- Copy the required plugin from the `plugin` folder to `TensorRT/plugin`:

  ```bash
  cp -r plugin/<plugin>/trtPlugin TensorRT/plugin/<plugin>
  ```

  For more details on the folder structure of a plugin, check the Plugin README.
- Add the plugin to TensorRT's plugin CMake:

  ```cmake
  set(PLUGIN_LISTS
      ...
      ...
      <plugin>
  )
  ```
- Building TensorRT
  - Exporting TensorRT's path:

    ```bash
    export TRT_LIBPATH=`pwd`/TensorRT
    ```
  - Generate Makefiles and build. Example: Linux (x86-64) build with cuda-11.8:

    ```bash
    cd TensorRT
    mkdir -p build && cd build
    cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCUDA_VERSION=11.8
    make -j$(nproc)
    ```

    For more information, follow TensorRT's README.
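    To confirm that the rebuilt library registers the custom plugin, a quick check with the TensorRT Python API (a sketch; the path assumes the build layout above):

    ```python
    # Sketch: list the plugin creators registered by the rebuilt library.
    import ctypes

    import tensorrt as trt

    ctypes.CDLL("TensorRT/build/out/libnvinfer_plugin.so")  # load the rebuilt plugin library
    logger = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(logger, "")                 # register all plugin creators

    names = [c.name for c in trt.get_plugin_registry().plugin_creator_list]
    print(names)  # should include the custom plugin's registered name
    ```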
- Exporting and converting
  - Building the TorchScript operator:

    ```bash
    cd plugin/<plugin>/tsPlugin
    mkdir -p build && cd build
    cmake -DCMAKE_PREFIX_PATH="$(python3 -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')" ..
    make -j
    cd ../../../..
    ```
  - Exporting an example model:

    ```bash
    cd plugin/<plugin>
    python3 export.py
    cd ../..
    ```
  - Converting the model using the previously built `trtexec`:

    ```bash
    TensorRT/build/out/trtexec --onnx=plugin/<plugin>/model.onnx --saveEngine=plugin/<plugin>/model.engine --plugins=TensorRT/build/out/libnvinfer_plugin.so
    ```
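The engine produced above depends on the custom plugin library, so that library must be loaded before the engine is deserialized. A minimal runtime sketch using the TensorRT Python API (paths assume the layout above):

```python
# Sketch: deserialize the engine built by trtexec, loading the custom plugin first.
import ctypes

import tensorrt as trt

ctypes.CDLL("TensorRT/build/out/libnvinfer_plugin.so")  # must precede deserialization
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

runtime = trt.Runtime(logger)  # keep the runtime alive alongside the engine
with open("plugin/<plugin>/model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
print("engine loaded:", engine is not None)
```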
Currently available plugins:

| Plugin | Registered Name | Versions |
|---|---|---|
| cosLUPlugin | CosLUPlugin_TRT | 0 |
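The Registered Name and Versions columns are the identifiers TensorRT's plugin registry uses to look up a creator. A sketch of that lookup, assuming the plugin library built above is available:

```python
# Sketch: look up the CosLU plugin creator by its registered name and version.
import ctypes

import tensorrt as trt

ctypes.CDLL("TensorRT/build/out/libnvinfer_plugin.so")
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

creator = trt.get_plugin_registry().get_plugin_creator("CosLUPlugin_TRT", "0", "")
print("CosLUPlugin_TRT found:", creator is not None)
```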
- Raise an issue if you need a custom plugin for TensorRT.