Megvii Engine Team | b078dda90b | feat(mge/random): add some random op and remove random/distrbution.py | GitOrigin-RevId: 4c05ebc266 | 4 years ago
Megvii Engine Team | f30c0e06a6 | feat(mgb/opr): add lsq opr | GitOrigin-RevId: 45494a2b57 | 4 years ago
Megvii Engine Team | 12a0e61542 | feat(dnn/cuda): add cuda elemwise int4 | GitOrigin-RevId: 8a9aaec328 | 4 years ago
Megvii Engine Team | 71c2f61254 | feat(dnn/cuda): add relayout format to support layout transform between NCHW and NCHW64 | GitOrigin-RevId: 1445ecfabe | 4 years ago
Megvii Engine Team | ed92207585 | feat(dnn/cuda): add conv bias impl for int4 data type using sass language | GitOrigin-RevId: ae3d3e1c98 | 4 years ago
Megvii Engine Team | 1525a02530 | feat(mge/module): add python wrapper for unfold | GitOrigin-RevId: 562103186f | 4 years ago
Megvii Engine Team | 1997b1a289 | feat(dnn/cuda): add correlation kernel | GitOrigin-RevId: 25e58b61e6 | 4 years ago
Megvii Engine Team | 8494a1529e | chore(scripts): clarify and fix default value of bit combined enum | GitOrigin-RevId: 3716bf9bb5 | 4 years ago
Megvii Engine Team | a3ea1f153c | feat(mgb/opr): add fast profile and combined Execution strategy | GitOrigin-RevId: 843dc3a790 | 4 years ago
Megvii Engine Team | c82d88751a | fix(dnn/cuda): add cuda nchw int8 conv impl with nchw4 to fix cu111 compatibility | GitOrigin-RevId: 771968f9ac | 4 years ago
Megvii Engine Team | 2de2222e46 | feat(dnn/cuda): add cutlass batched gemv kernel for matmul operator | GitOrigin-RevId: 51702c4e79 | 4 years ago
Megvii Engine Team | 973d2a0ac2 | feat(dnn/cuda): add cutlass matmul using split k parallel | GitOrigin-RevId: 650209e35f | 4 years ago
Megvii Engine Team | 03c921f7c4 | feat(dnn/cuda): add cutlass matmul impls | GitOrigin-RevId: 619c8c299c | 4 years ago
Megvii Engine Team | ad87f78a14 | chore(imperative): refine tblgen for generating op name | GitOrigin-RevId: f47ceae726 | 4 years ago
Megvii Engine Team | 55042195d4 | chore(winograd): add Convolutionv2 param | GitOrigin-RevId: 1a9e2ea340 | 4 years ago
Megvii Engine Team | a85531dd0f | feat(mgb/opr): add tqt opr | GitOrigin-RevId: 49c62cd532 | 4 years ago
Megvii Engine Team | 61f917fb8e | feat(dnn/cuda): add impl for fusing warp perspective and dimshuffle | GitOrigin-RevId: 51e025973f | 4 years ago
Megvii Engine Team | fc0fcd2f7f | chore(winograd): remove winograd transform code | GitOrigin-RevId: 78c3cfceae | 4 years ago
Megvii Engine Team | 69e3e32240 | feat(imperative): auto generated opdef header and python binding | GitOrigin-RevId: d2f22ad5fe | 4 years ago
Megvii Engine Team | 3bf73ff16f | feat(dnn): add cuda preprocess fusion | GitOrigin-RevId: d789c99e59 | 4 years ago
Megvii Engine Team | 6856ce9ce2 | feat(dnn): support conv bias activation for nchw4 input tensor format and nchw output tensor format | GitOrigin-RevId: 29cd73f87b | 4 years ago
Megvii Engine Team | c03249c059 | feat(dnn/opr): add megdnn fake quant opr | GitOrigin-RevId: 5a04b6da2f | 4 years ago
Megvii Engine Team | ba66e1d039 | feat(dnn): add nchw_fp32 nchw44_qint8 cuda dct | GitOrigin-RevId: 581e31fc20 | 4 years ago
Megvii Engine Team | a9f98e9c66 | refactor(meg/internal): move interal codes back to megbrain | GitOrigin-RevId: b2dbda96be | 4 years ago
Megvii Engine Team | 5a85c907e0 | feat(mgb/opr): add megbrain adaptive pooling opr | GitOrigin-RevId: 82833f41d9 | 4 years ago
Megvii Engine Team | 76fa71573b | feat(dnn/cuda): add cutlass nchw4 convolution | GitOrigin-RevId: 93c9b212f4 | 4 years ago
Megvii Engine Team | 199eefbd4c | fix(dnn): generate mode files | GitOrigin-RevId: 9b1e840f00 | 4 years ago
Megvii Engine Team | 9510136223 | fix(mgb/rocm): remove begin-internal of rocm | GitOrigin-RevId: 1523833fcb | 4 years ago
Megvii Engine Team | aeffcd5897 | feat(dnn/cuda): integrate cutlass nchw32 tensorcore convolution | GitOrigin-RevId: 9d6c48ed99 | 4 years ago
Megvii Engine Team | 9e5e32dee2 | fix(dnn): restore opr_param_defs.py | GitOrigin-RevId: b92747cad3 | 4 years ago
Megvii Engine Team | d334b229b0 | feat(imperative): add nms opr wrapper | GitOrigin-RevId: d92241a234 | 4 years ago
Megvii Engine Team | a1e6720756 | feat(dnn): enable bool comparison | GitOrigin-RevId: 735693b81e | 4 years ago
Megvii Engine Team | e258812f12 | feat(dnn): add bool dtype | GitOrigin-RevId: 98c8a092b4 | 4 years ago
Megvii Engine Team | 0f9dec6816 | feat(mge/imperative): name so lib | GitOrigin-RevId: ccfdfaf59f | 4 years ago
Megvii Engine Team | 457a1e010c | refactor(imperative): initial merge of xxx and megengine | GitOrigin-RevId: 48a48f5c7f | 5 years ago
Megvii Engine Team | 7886ff9af0 | feat(dnn): add relayout_format for nchw to nchw4 and ic <=4 | GitOrigin-RevId: 07f2ee6c5b | 5 years ago
Megvii Engine Team | 7ae05ac886 | feat(imperative): merge common c++ code to megbrain | GitOrigin-RevId: d093778e10 | 5 years ago
Megvii Engine Team | 8f87a3e988 | feat(dnn/arm_common): add int8 nchw44 winograd f23_4x4 f23_8x8 compute float32/int16 output int8 | GitOrigin-RevId: d99ef7efcd | 5 years ago
Megvii Engine Team | a1f8ecc74f | fix(dnn/naive): add convolution nchw44-dot format | GitOrigin-RevId: 87a7c9c575 | 5 years ago
Megvii Engine Team | a6bc250d1c | feat(dnn/common): add matmul impl for naive with matrix format mk4_dot | GitOrigin-RevId: 7c6fbdfa97 | 5 years ago
Megvii Engine Team | 0293d58ade | feat(mge): add bfloat16 support | GitOrigin-RevId: a942ce6791 | 5 years ago
Megvii Engine Team | 1255c9f13d | feat(mge/opr): add opr remap in opencl and naive | GitOrigin-RevId: 4540788660 | 5 years ago
Megvii Engine Team | 8ba8c11d87 | feat(dnn): add nchw44 layout | GitOrigin-RevId: d92672b88a | 5 years ago
Megvii Engine Team | f91881ffdc | MegEngine: Initial commit of MegEngine. | GitOrigin-RevId: f0c8338beb | 5 years ago