15 Commits (afe9c4b50dcf6850b03e6c2feba630ed2962a62e)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Megvii Engine Team | afe9c4b50d | feat(dnn/cuda): add implicit bmm kernels for large kernel depthwise convolution backward filter opr | 3 years ago |
| Megvii Engine Team | 369c2ccc5a | style(all): reformat c++ code | 3 years ago |
| Megvii Engine Team | 8b40f57738 | feat(mgb/dnn): add conv1x1 algo for matrix mul | 3 years ago |
| Megvii Engine Team | ff0e6be7b9 | fix(dnn/cuda): fix cutlass tensorop kernels | 3 years ago |
| Megvii Engine Team | 336761253d | feat(dnn/cuda): add tensorcore matmul for fp16 data type | 3 years ago |
| Megvii Engine Team | ef9aa80074 | fix(mgb/dnn): fix cuda naive matmul algo | 4 years ago |
| Megvii Engine Team | 55974e8cf9 | feat(log): opt log | 4 years ago |
| Megvii Engine Team | 2de2222e46 | feat(dnn/cuda): add cutlass batched gemv kernel for matmul operator | 4 years ago |
| Megvii Engine Team | 973d2a0ac2 | feat(dnn/cuda): add cutlass matmul using split k parallel | 4 years ago |
| Megvii Engine Team | 03c921f7c4 | feat(dnn/cuda): add cutlass matmul impls | 4 years ago |
| Megvii Engine Team | 4a1d52c9c6 | refactor(megdnn): refactor bfloat16 matmul to recursive interface | 4 years ago |
| Megvii Engine Team | 364afec033 | chore(mge): update copyright years | 4 years ago |
| Megvii Engine Team | a1877ee0fa | refactor(dnn): refactor algo interface, use algoinfo instead of global algorithm | 4 years ago |
| Megvii Engine Team | 0293d58ade | feat(mge): add bfloat16 support | 5 years ago |
| Megvii Engine Team | f91881ffdc | MegEngine: Initial commit of MegEngine. | 5 years ago |