PyTorch provides straightforward operations for research prototyping.

We are aware that Dragon is a graph-based framework with strict naming for tensors, operators, and workspaces, while Torch is not. A simple way to bridge their differences is JIT, which traces the anonymous expressions and dispatches a series of executions to the backend. If so, AutoGrad will just be a trick (remember the Chain Rule).
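The idea above can be illustrated with a minimal sketch (the class and attribute names here are hypothetical, not Dragon's actual API): each anonymous expression is traced onto a tape under a formal tensor name, and AutoGrad is then just the Chain Rule replayed over that tape in reverse.

```python
# Minimal sketch: trace anonymous expressions into named tensors,
# then apply the Chain Rule over the recorded tape.
# Names (Tensor, grad_fn, backward) are illustrative assumptions.

class Tensor:
    _count = 0

    def __init__(self, value, grad_fn=None):
        # Assign a formal name so a graph-based backend could reference it.
        self.name = "Tensor_%d" % Tensor._count
        Tensor._count += 1
        self.value = value
        self.grad = 0.0
        self.grad_fn = grad_fn  # how to propagate gradients to the inputs

    def __mul__(self, other):
        out = Tensor(self.value * other.value)
        # Chain Rule: d(x*y)/dx = y, d(x*y)/dy = x.
        out.grad_fn = lambda g: ((g * other.value, self), (g * self.value, other))
        return out

    def __add__(self, other):
        out = Tensor(self.value + other.value)
        # Chain Rule: d(x+y)/dx = 1, d(x+y)/dy = 1.
        out.grad_fn = lambda g: ((g, self), (g, other))
        return out

    def backward(self):
        # Replay the traced expressions in reverse, accumulating gradients.
        self.grad = 1.0
        stack = [self]
        while stack:
            t = stack.pop()
            if t.grad_fn is None:
                continue
            for g, parent in t.grad_fn(t.grad):
                parent.grad += g
                stack.append(parent)

x = Tensor(3.0)
y = Tensor(4.0)
z = x * y + x  # eager in appearance, but traced as named tensors
z.backward()   # x.grad == 5.0 (= y + 1), y.grad == 3.0 (= x)
```

With the tape named and ordered, the backend is free to schedule, fuse, or replay the executions however it likes; differentiation itself carries no extra machinery.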

Rewriting the GC (Garbage Collection) is crucial in this role, as the costly deconstruction of memories and operators must be avoided. We could either persist an Operator (i.e. Module), or reuse several memories by turns (i.e. MemoryPool), once they are named formally.
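The memory side of this can be sketched as follows (a hypothetical pool, not Dragon's actual MemoryPool implementation): released buffers are not deconstructed but parked under their size, so the next request of the same size reuses them by turns.

```python
# Minimal sketch of a size-bucketed memory pool.
# The class name and methods are illustrative assumptions.

from collections import defaultdict

class MemoryPool:
    def __init__(self):
        # Released buffers, keyed by size, waiting to be reused.
        self._free = defaultdict(list)
        self.allocations = 0  # counts the real (costly) allocations

    def allocate(self, nbytes):
        bucket = self._free[nbytes]
        if bucket:
            # Reuse a parked buffer: no deconstruction, no new allocation.
            return bucket.pop()
        self.allocations += 1
        return bytearray(nbytes)

    def release(self, buf):
        # Instead of freeing, park the buffer for the next request.
        self._free[len(buf)].append(buf)

pool = MemoryPool()
a = pool.allocate(1024)
pool.release(a)
b = pool.allocate(1024)  # same 1024-byte buffer, reused by turns
```

The same pattern applies to operators: once an operator has a formal name, it can be persisted and looked up instead of being rebuilt and destroyed on every call.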

We are still working hard to cover the original PyTorch operators; meanwhile, a bunch of extended operators from many other frameworks are available. Our PyTorch will be unique and more powerful than the official one.