prepare

dragon.vm.tensorrt.backend.prepare(
  model,
  device='CUDA:0',
  **kwargs
)

Build a TensorRT engine from the ONNX model.

Parameters:
  • model (onnx.ModelProto) – The ONNX model.
  • device (str, optional) – The device to execute on, e.g. 'CUDA:0'.
Returns:

dragon.vm.tensorrt.ONNXBackendRep – The backend representation wrapping the built engine.
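
Example:

A minimal usage sketch, assuming the returned backend rep exposes the standard ONNX BackendRep.run() interface; the model path and input shape below are placeholders, not part of the documented API.

  import numpy as np
  import onnx
  from dragon.vm.tensorrt import backend

  # Load a serialized ONNX model (path is a placeholder).
  model = onnx.load('model.onnx')

  # Build a TensorRT engine on the first CUDA device.
  rep = backend.prepare(model, device='CUDA:0')

  # Run inference, assuming the standard BackendRep.run() interface
  # that takes a sequence of input arrays and returns the outputs.
  inputs = [np.random.randn(1, 3, 224, 224).astype('float32')]
  outputs = rep.run(inputs)
  print(outputs[0].shape)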