run_node

dragon.vm.tensorrt.backend.run_node(
  node,
  inputs,
  device='CUDA:0',
  **kwargs
)

Build and run a TensorRT engine from the given ONNX node.

Parameters:
  • node (onnx.NodeProto) – The ONNX node to execute.
  • inputs (Union[Sequence, Dict]) – The input arrays, given as a sequence or a dict.
  • device (str, optional, default='CUDA:0') – The device to execute on.
Returns:
  namedtuple – The model outputs.
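
Examples:

A minimal sketch of running a single node; the op type ('Relu'), the input/output names ('x', 'y'), and the input shape are illustrative assumptions, not part of the API.

import numpy as np
import onnx
from dragon.vm.tensorrt import backend

# Build a simple ONNX node to execute.
node = onnx.helper.make_node('Relu', inputs=['x'], outputs=['y'])

# Inputs may be a sequence or a dict of arrays.
x = np.random.randn(1, 3).astype('float32')
outputs = backend.run_node(node, [x], device='CUDA:0')

# The result is a namedtuple; each output is accessible by name.
print(outputs.y)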