| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Megvii Engine Team | c2e9860feb | chore(license): remove all license in file header (GitOrigin-RevId: a0e31247a6) | 3 years ago |
| Megvii Engine Team | 4c0bff1dba | refactor(megdnn): refactor TEGRA_X1/X2 macro (GitOrigin-RevId: 1aa78712c6) | 3 years ago |
| Megvii Engine Team | 758549b936 | feat(megengine): support tx2 (GitOrigin-RevId: d1175a1f4a) | 3 years ago |
| Megvii Engine Team | 369c2ccc5a | style(all): reformat c++ code (GitOrigin-RevId: 3ffd1b211f) | 3 years ago |
| Megvii Engine Team | 8b40f57738 | feat(mgb/dnn): add conv1x1 algo for matrix mul (GitOrigin-RevId: 585b2c045a) | 3 years ago |
| Megvii Engine Team | 0708bc780c | fix(dnn/cuda): disallow implicit dtype conversion in cublaslt matmul algos; disable tensor op matmul kernels when input and output tensors are in f32 data type to avoid potential accuracy loss (GitOrigin-RevId: 36859cba5a) | 3 years ago |
| Megvii Engine Team | 756c1eb7f2 | fix(mgb/dnn): add cuda float naive matmul algo (GitOrigin-RevId: db7f7fc057) | 4 years ago |
| Megvii Engine Team | af42ce7e69 | fix(megdnn): some fixes of execution policy (GitOrigin-RevId: 920f39bcb6) | 4 years ago |
| Megvii Engine Team | 4a1d52c9c6 | refactor(megdnn): refactor bfloat16 matmul to recursive interface (GitOrigin-RevId: 641c508aec) | 4 years ago |
| Megvii Engine Team | 364afec033 | chore(mge): update copyright years (GitOrigin-RevId: 3c0690bcc1) | 4 years ago |
| Megvii Engine Team | 0293d58ade | feat(mge): add bfloat16 support (GitOrigin-RevId: a942ce6791) | 5 years ago |
| Megvii Engine Team | 23478a0d53 | test(dnn/cuda): fix cuda int8 test on sm60 (GitOrigin-RevId: 66bab333e1) | 5 years ago |
| Megvii Engine Team | f5833a5294 | fix(dnn/cuda): fix cublas matmul on sm60 (GitOrigin-RevId: 3fc0c30a23) | 5 years ago |
| Megvii Engine Team | f91881ffdc | MegEngine: Initial commit of MegEngine. (GitOrigin-RevId: f0c8338beb) | 5 years ago |