Author | Commit | Message | Date
Megvii Engine Team | 3eb0505f9b | feat(imperative): add support for quantized conv transpose2d (GitOrigin-RevId: ffd6431299) | 3 years ago
Megvii Engine Team | 3b452d8c16 | feat(mgb): cuda conv support nhwc format and fp16 dtype (GitOrigin-RevId: b8ddcd108a) | 3 years ago
Megvii Engine Team | 10bcf75767 | feat(dnn/x86): add algo for x86 max pooling for window sizes bigger than 10 with stride 1 under NCHW88 (GitOrigin-RevId: 613a18dd91) | 3 years ago
Megvii Engine Team | ddba5c9674 | fix(core): fix nr_threads is zero (GitOrigin-RevId: 0ccbe3c69b) | 3 years ago
Megvii Engine Team | 67f117882b | perf(arm_common): add elemwise unary multithread support (GitOrigin-RevId: 8eac123f67) | 3 years ago
Megvii Engine Team | 3afa3893d7 | perf(arm_common): optimize arm common pooling 9x9 and 13x13 (GitOrigin-RevId: 33d5a62478) | 3 years ago
Megvii Engine Team | 2aba0378b9 | refactor(mgb/dnn): fix group conv is_available (GitOrigin-RevId: b279909168) | 3 years ago
Megvii Engine Team | 4a92346b7a | refactor(mgb): refactor group conv3d (GitOrigin-RevId: 15360a3a41) | 3 years ago
Megvii Engine Team | 6ce212d2e0 | refactor(mgb): refactor group conv (GitOrigin-RevId: 7afd312690) | 4 years ago
Megvii Engine Team | 869a03271b | perf(mgb): disable FoldingConvBiasDimshufflePass in cuda10 for performance (GitOrigin-RevId: d1b95a6f01) | 3 years ago
Megvii Engine Team | 8d248a6a9a | fix(dnn/cuda): fix testcase for fallback nchw qs8 conv (GitOrigin-RevId: 646440db59) | 4 years ago
Megvii Engine Team | 894a2407c2 | feat(dnn/cuda): add relayout format kernel for nchw <-> nhwc (GitOrigin-RevId: e11f3e5408) | 4 years ago
Megvii Engine Team | f41a808694 | feat(dnn/cuda): add nhwc int4 conv support (GitOrigin-RevId: 5236b235d0) | 4 years ago
Megvii Engine Team | 633016a962 | fix(dnn/cuda): fix AlgoFallbackNCHWQS8 to support Float32 dst (GitOrigin-RevId: 06f90f5cf3) | 4 years ago
Megvii Engine Team | 43098fb8f1 | feat(mge): add SlidingWindowTranspose opr (BREAKING CHANGE) (GitOrigin-RevId: 54d726d2fe) | 4 years ago
Megvii Engine Team | b078dda90b | feat(mge/random): add some random ops and remove random/distrbution.py (GitOrigin-RevId: 4c05ebc266) | 4 years ago
Megvii Engine Team | 83e4c9d7ab | fix(opencl): enable opencl topk test when opencl version is beyond 2.0 (GitOrigin-RevId: f2ad6b4af2) | 4 years ago
Megvii Engine Team | f30c0e06a6 | feat(mgb/opr): add lsq opr (GitOrigin-RevId: 45494a2b57) | 4 years ago
Megvii Engine Team | 1cfdbc565c | feat(dnn): add deterministic max pooling (GitOrigin-RevId: 9ab4c7a748) | 4 years ago
Megvii Engine Team | a5060a2bfe | feat(mgb/opr): add check_has_inf kernel and opr (GitOrigin-RevId: 0d042dbfce) | 4 years ago
Megvii Engine Team | 3597a6dbd7 | feat(dnn/arm): nchw_nchw44 conv support 1x1s1 (GitOrigin-RevId: 8c8f7d7c76) | 4 years ago
Megvii Engine Team | d915c5a3fd | refactor(mgb): make convolution3D handle noncontiguous tensors (GitOrigin-RevId: 3d3c31b021) | 4 years ago
Megvii Engine Team | d04cd67faf | refactor(mgb): make conv-backward-filter handle noncontiguous tensors (GitOrigin-RevId: 44c586f912) | 4 years ago
Megvii Engine Team | 44376f702a | refactor(mgb): make conv-backward-data handle noncontiguous tensors (GitOrigin-RevId: 0a8f66f9d3) | 4 years ago
Megvii Engine Team | 7b2a76d1ee | refactor(mgb): make conv handle noncontiguous tensors (GitOrigin-RevId: 86282709b3) | 4 years ago
Megvii Engine Team | ca2828ddcb | fix(dnn/x86): fix x86 int8 matmul ldc bug (GitOrigin-RevId: 2502f99000) | 4 years ago
Megvii Engine Team | b87af9f77f | feat(dnn/cuda): topk support fp16 (GitOrigin-RevId: c6610d4cf0) | 4 years ago
Megvii Engine Team | 71cc814eaf | feat(ci): add aarch64 linux ci (GitOrigin-RevId: 2c0d3a8cc2) | 4 years ago
Megvii Engine Team | 606540bef4 | feat(dnn/cuda): add nhwc 4bit warp perspective (GitOrigin-RevId: fbec4a4a1f) | 4 years ago
Megvii Engine Team | 1e6019436c | feat(dnn/cuda): add nhwc int4 pooling (GitOrigin-RevId: 9cf14cde4e) | 4 years ago
Megvii Engine Team | 319436dd14 | feat(dnn/cuda): add cutlass impls for uint4 x int4 conv bias (GitOrigin-RevId: cf4536855a) | 4 years ago
Megvii Engine Team | d28eba4ea5 | feat(dnn/cuda): add cutlass impls for int4 conv bias (GitOrigin-RevId: 878bb8c955) | 4 years ago
Megvii Engine Team | 2d4e62ef58 | feat(dnn/cuda): add cuda uint4 pooling (GitOrigin-RevId: a728977206) | 4 years ago
Megvii Engine Team | 19919384fc | feat(dnn/cuda): add cuda uint warp perspective (GitOrigin-RevId: 2aec72010f) | 4 years ago
Megvii Engine Team | 5868d1fe4f | fix(arm_common/pooling): check mode in pooling algo to avoid wrongly using AVERAGE_COUNT_EXCLUDE_PADDING (GitOrigin-RevId: 7a2d243db7) | 4 years ago
Megvii Engine Team | 86b69cacd0 | fix(dnn): fixes for int4 (GitOrigin-RevId: 845e164fd3) | 4 years ago
Megvii Engine Team | 4a802d21ca | feat(dnn/cuda): add conv u4xs4 sass kernel (GitOrigin-RevId: 4defcf5f1f) | 4 years ago
Megvii Engine Team | adf75a291d | perf(dnn/cuda): add sass int4 128x128 (GitOrigin-RevId: 1bc5482102) | 4 years ago
Megvii Engine Team | 8da2f698a3 | feat(dnn/cuda): support warp perspective/pooling op when channel not aligned to 64 (GitOrigin-RevId: 39f29ec990) | 4 years ago
Megvii Engine Team | 4fe68ac9ed | feat(dnn/cuda): support transforming layout between nchw and nchw64 when channel not aligned to 64 (GitOrigin-RevId: e9ecbcf2e2) | 4 years ago
Megvii Engine Team | 56e863b7d4 | fix(dnn/cuda): fix int4 epilogue stg bug (GitOrigin-RevId: e86da9a8a8) | 4 years ago
Megvii Engine Team | 12a0e61542 | feat(dnn/cuda): add cuda elemwise int4 (GitOrigin-RevId: 8a9aaec328) | 4 years ago
Megvii Engine Team | df1af59b5c | feat(dnn): warp perspective support int4 (GitOrigin-RevId: 826a43b349) | 4 years ago
Megvii Engine Team | 2398df079c | feat(dnn/cuda): add cuda int4 pooling (GitOrigin-RevId: 14ed4e6f00) | 4 years ago
Megvii Engine Team | e250afb08f | feat(dnn/cuda): support conv_bias for nchw64 and qint4 (GitOrigin-RevId: 1c65ba87d7) | 4 years ago
Megvii Engine Team | 3b9b87809d | refactor(dnn): refactor lowbit tensor format (GitOrigin-RevId: b646dc085b) | 4 years ago
Megvii Engine Team | 8fef78d06d | feat(dnn/cuda): add relayout format when width is an odd number (GitOrigin-RevId: f059f1f56d) | 4 years ago
Megvii Engine Team | 91d6160769 | feat(dnn/common): add tensor format for low-bits tensor layout (GitOrigin-RevId: 0aa3753f37) | 4 years ago
Megvii Engine Team | 19a554d674 | test(dnn/cuda): add testcase for transforming tensor layout between nchw and nchw64 (GitOrigin-RevId: 75d579635a) | 4 years ago
Megvii Engine Team | 23032f50f2 | feat(dnn/cuda): support float16 for index_incr_multi_axis_vec (GitOrigin-RevId: c2ae93d568) | 4 years ago