Same as MegEngine, except that the additional flag `-DMGE_BUILD_IMPERATIVE_RT=ON` is passed to the cmake configure command.
Make sure `make develop` has been executed.

Set up `PYTHONPATH`:

```bash
export PYTHONPATH="$(git rev-parse --show-toplevel)/imperative/python"
```

Run pytest (`pip install` as needed):

```bash
cd $(git rev-parse --show-toplevel)/imperative/python/test
pytest
```
An op is a subclass of `OpBase` representing some operation, for example `Elemwise` or `Reduce`. Ops can be parametrized; for example, `Elemwise` has a single parameter `mode`, which is required by its constructor.

A tensor-like is a subclass of `TensorBase` that defines how ops should apply to it, for example:

- `RawTensor`: launches the kernel associated with the op
- `Tracer`: records information for autodiff

Op instances are callable with signature `(*args: TensorBase) -> Tuple[TensorBase]`. Calling an op invokes the correct implementation for that specific op and tensor-like, e.g. launching a kernel if `args` are `RawTensor`s, or recording information for autodiff if `args` are `Tracer`s.
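To make the dispatch idea concrete, here is a toy, self-contained sketch (not MegEngine's actual implementation; the class bodies and the `apply` hook are invented for illustration) of how an op call could select an implementation based on the tensor-like type of its arguments:

```python
# Toy dispatch sketch: the tensor-like type of the arguments decides
# what "applying an op" means. Names below are illustrative only.

class TensorBase:
    pass

class OpBase:
    def __call__(self, *args):
        # Assume all inputs share one tensor-like type; delegate to it.
        return type(args[0]).apply(self, args)

class AddOp(OpBase):
    pass

class RawTensor(TensorBase):
    """Stands in for the tensor-like that would launch a kernel."""
    def __init__(self, value):
        self.value = value

    @classmethod
    def apply(cls, op, args):
        if isinstance(op, AddOp):
            # "Launch the kernel": here, just compute eagerly.
            return (cls(sum(a.value for a in args)),)
        raise NotImplementedError

class Tracer(TensorBase):
    """Stands in for the tensor-like that records info for autodiff."""
    def __init__(self, value, tape):
        self.value = value
        self.tape = tape

    @classmethod
    def apply(cls, op, args):
        tape = args[0].tape
        tape.append(type(op).__name__)  # record the op for later replay
        return (cls(sum(a.value for a in args), tape),)

op = AddOp()
(out,) = op(RawTensor(1), RawTensor(2))
assert out.value == 3

tape = []
(traced,) = op(Tracer(1, tape), Tracer(2, tape))
assert traced.value == 3 and tape == ["AddOp"]
```

The same op object works on both tensor-like types; only the per-type `apply` differs, which is the essence of the design described above.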
Const Op

The `Const` op is a special op used to convert literals to tensor-likes. Although it does not actually use any input, at least one must be provided; otherwise it cannot know which specific tensor-like type to return.
Tensor-likes have dataflow semantics and are therefore immutable. `TensorWrapper` provides a mutable layer on top of tensor-likes by replacing the wrapped tensor-like on demand.
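The replace-on-demand idea can be sketched with a toy example (hypothetical classes, not MegEngine's actual `TensorWrapper`): an in-place operation on the wrapper is emulated by building a new immutable tensor-like and swapping it in.

```python
# Toy sketch of the TensorWrapper idea: tensor-likes are immutable,
# so "mutation" swaps out the wrapped tensor-like for a new one.

class ImmutableTensor:
    def __init__(self, value):
        self.value = value

    def add(self, other):
        # Dataflow semantics: always return a new tensor-like.
        return ImmutableTensor(self.value + other)

class TensorWrapper:
    def __init__(self, tensor):
        self._tensor = tensor

    def __iadd__(self, other):
        # Emulate in-place add by replacing the wrapped tensor-like.
        self._tensor = self._tensor.add(other)
        return self

    @property
    def value(self):
        return self._tensor.value

w = TensorWrapper(ImmutableTensor(1))
inner_before = w._tensor
w += 2
assert w.value == 3
assert w._tensor is not inner_before  # the wrapped object was replaced
```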
Define the op
Most ops have already been automatically generated in `ops.builtin` from `.oprdecl` files (take a look at `basic_arith.oprdecl`). If your op is already there, skip to the next step.
For other ops, this can still be done automatically with the help of a Python op serializer that matches MegBrain's own.
Before proceeding, if you are unfamiliar with MegBrain's serializer, here is a brief introduction. Each MegBrain op has a registered name, which can be found at `MGB_SEREG_OPR(this_is_the_name, ...)` in some `.sereg.h` file. The default serializer simply writes the memory of the struct returned by `opr->param()`.
You can create a serializer by subclassing `ops._internal.helper.OpDef` as follows:
```python
class WhateverDef(OpDef):  # must end with "Def"
    name = 'Whatever'  # name in MegBrain serialization registry
    param_names = ('param',)  # Does not have to be 'param', but it is a good
                              # practice to mirror the C++ name, which is usually
                              # param(). It can also contain more than one
                              # element, for example if the C++ serializer writes
                              # `opr->param1()` followed by `opr->param2()`, you
                              # should use ('param1', 'param2') instead.

    class Param:
        def serialize(self):
            c_struct_memory = bytes(...)  # memory of a C++ `Param` struct
            return b'\x00' * 4 + c_struct_memory  # remember to add 4 leading bytes

    def __init__(self):
        self.param = self.Param(...)  # must assign to attribute(s) specified in param_names
```
A concrete example can be found at `ops._internal.misc_ops.DimshuffleDef`.
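As a standalone illustration of the serialization pattern (the op name, the struct layout, and the `AxisAddDef` class below are all hypothetical, and `OpDef` is omitted so the snippet runs on its own), `Param.serialize` can pack the bytes of the corresponding C++ struct with the `struct` module and prepend the 4 leading bytes:

```python
import struct

# Hypothetical serializer sketch following the OpDef pattern above.
# Suppose the C++ `Param` struct is a single little-endian int32 `axis`.

class AxisAddDef:  # in real code this would subclass OpDef and end with "Def"
    name = 'AxisAdd'          # would be the MegBrain registry name
    param_names = ('param',)

    class Param:
        def __init__(self, axis):
            self.axis = axis

        def serialize(self):
            c_struct_memory = struct.pack('<i', self.axis)  # int32 field
            return b'\x00' * 4 + c_struct_memory  # 4 leading bytes, then struct

    def __init__(self, axis):
        self.param = self.Param(axis)

op_def = AxisAddDef(axis=2)
data = op_def.param.serialize()
assert data == b'\x00\x00\x00\x00\x02\x00\x00\x00'
```

The key contract is only that the bytes after the 4-byte prefix match what the C++ side would write for its `Param` struct, field by field.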
Lastly, make sure it is imported in `ops._internal.all_ops`, and a corresponding op will show up in `ops.builtin`.
Define a convenience function
Use `functional` as a reference.
Tips:

- An op instance has to be constructed before applying it: `op = WhateverOp(param=...)`
- Apply an op by calling the op instance: `outputs = op(*inputs)`
- An op always returns a tuple, even for a single output: `result, = outputs`
- Inputs can be any tensor-like
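The tips above can be tied together in a minimal sketch (a toy op on plain numbers, not MegEngine's actual API; `WhateverOp` and its behavior are invented):

```python
# Toy sketch of the construct -> call -> unpack workflow from the tips.

class WhateverOp:
    def __init__(self, param):
        self.param = param

    def __call__(self, *inputs):
        # Ops always return a tuple of outputs, even when there is one.
        return (sum(inputs) * self.param,)

op = WhateverOp(param=2)   # 1. construct the op before applying it
outputs = op(1, 2, 3)      # 2. apply the op by calling the instance
(result,) = outputs        # 3. the result is always wrapped in a tuple
assert result == 12
```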
The MegEngine package bundles the CUDA environment needed to run code on GPU, so there is no separate CPU and GPU version. To run GPU programs, make sure the machine has a GPU device and the driver installed. If you want to try deep-learning development on a cloud GPU platform, you are welcome to visit the MegStudio platform.