vm.torch.jit

Functions

trace(…) : Trace a function and return an executable.
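As a usage illustration (not taken from the reference itself), the sketch below assumes trace can be applied as a decorator and that it accepts example inputs describing the traced signature, in the spirit of torch.jit.trace; the function name and the keyword argument are illustrative assumptions:

    # Minimal sketch; assumes a decorator form with an `example_inputs`
    # argument, analogous to torch.jit.trace. Exact keywords may differ.
    from dragon.vm import torch

    @torch.jit.trace(example_inputs=[torch.rand(2, 3)])
    def double(x):
        # Ops executed here are recorded into an executable graph.
        return x + x

    y = double(torch.ones(2, 3))  # Runs the traced executable.
    print(y)

The point of tracing is that eager-style code is recorded once into a graph, so repeated calls reuse the compiled executable instead of re-dispatching each op.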

Copyright (c) 2017-present, SeetaTech, Co.,Ltd