run_model

dragon.vm.tensorrt.backend.run_model(
  model,
  inputs,
  device='CUDA:0',
  **kwargs
)

Build and run a TensorRT engine from the ONNX model.

Parameters:
  • model (onnx.ModelProto) – The ONNX model to build the engine from.
  • inputs (Union[Sequence, Dict]) – The input arrays, given positionally or keyed by input name.
  • device (str, optional, default='CUDA:0') – The device to execute on.
Returns:
  namedtuple – The model outputs.
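A hedged sketch of the calling convention. The input name 'data' and the output field names below are hypothetical; only the signature and the namedtuple return type are taken from the documentation above:

```python
from collections import namedtuple

# With a real onnx.ModelProto and a CUDA device, the call would look like:
#   outputs = dragon.vm.tensorrt.backend.run_model(
#       model, {'data': input_array}, device='CUDA:0')
# Per the signature, inputs may also be passed positionally as a sequence:
#   outputs = run_model(model, [input_array])

# The return value is a namedtuple, so each output is addressable by
# graph output name or by position. A stand-in with hypothetical
# output names 'scores' and 'boxes':
Outputs = namedtuple('Outputs', ['scores', 'boxes'])
outputs = Outputs(scores=[0.9, 0.1], boxes=[[0, 0, 4, 4]])

assert outputs.scores == outputs[0]  # field name and index agree
```

Accessing outputs by field name keeps calling code readable when the graph has several outputs, while positional indexing still works for single-output models.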