29 Commits (472e2f96556f531a35c41cddf8c9b5847ea0def0)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Megvii Engine Team | 472e2f9655 | refactor(cuda): depthwise large kernel | 3 years ago |
| Megvii Engine Team | e698ec20c2 | feat(cuda): float16 depthwise large kernel conv compute fp32 | 3 years ago |
| Megvii Engine Team | 48406382ce | feat(cuda): support float16 depthwise large kernel conv | 3 years ago |
| Megvii Engine Team | 7042f76b34 | perf(cuda): speedup conv backward data with small feature map and large filter size | 3 years ago |
| Megvii Engine Team | 87a2aeebb1 | perf(cuda): speedup chanwise conv with small feature map and large filter size | 3 years ago |
| Megvii Engine Team | 369c2ccc5a | style(all): reformat c++ code | 3 years ago |
| Megvii Engine Team | 722aecd437 | feat(mgb): support fp16 nhwc backward | 3 years ago |
| Megvii Engine Team | 4c13bc7e1b | feat(dnn/cuda): add nhwc int8 deconv | 3 years ago |
| Megvii Engine Team | 11f022ff7c | feat(dnn/cuda): add nhwc int8 imma conv and conv fuse typecvt | 3 years ago |
| Megvii Engine Team | d915c5a3fd | refactor(mgb): make convolution3D handle noncontiguous tensors | 4 years ago |
| Megvii Engine Team | d04cd67faf | refactor(mgb): make conv-backward-filter handle noncontiguous tensors | 4 years ago |
| Megvii Engine Team | 44376f702a | refactor(mgb): make conv-backward-data handle noncontiguous tensors | 4 years ago |
| Megvii Engine Team | 1e6ef3771f | feat(mgb/dnn): add accuracy shake checker | 4 years ago |
| Megvii Engine Team | ba2ad46e54 | feat(gopt): add deconv nchw4 int8 opt pass, add deconv nchw int8 | 4 years ago |
| Megvii Engine Team | 5d350fc843 | feat(dnn/cuda): add deconv int8 and fix cutlass conv wrapper based on modified cutlass 2.4 | 4 years ago |
| Megvii Engine Team | b04ad06f84 | refactor(megdnn): refactor matmul algo in conv backward filter | 4 years ago |
| Megvii Engine Team | 25089e520e | refactor(megdnn): refactor matmul algo in conv backward data | 4 years ago |
| Megvii Engine Team | 0d720653ac | refactor(megdnn): add default algo for convolution forward | 4 years ago |
| Megvii Engine Team | 659217acd2 | refactor(megdnn): refactor bfloat16 convbias to recursive interface | 4 years ago |
| Megvii Engine Team | b8febaf91f | refactor(megdnn): refactor bfloat16 convolutionbackwardfilter to recursive interface | 4 years ago |
| Megvii Engine Team | f14e0c17e7 | feat(mgb): add recursive for fastrun and megdnn test | 4 years ago |
| Megvii Engine Team | 364afec033 | chore(mge): update copyright years | 4 years ago |
| Megvii Engine Team | a1877ee0fa | refactor(dnn): refactor algo interface, use algoinfo instead of global algorithm | 4 years ago |
| Megvii Engine Team | f354724220 | fix(ci/megdnn_test/megbrain_test): split some | 5 years ago |
| Megvii Engine Team | 0293d58ade | feat(mge): add bfloat16 support | 5 years ago |
| Megvii Engine Team | 1c4a64b2af | test(megdnn): skip fp16 test if compute capability less than 60 | 5 years ago |
| luzzyzhang | 16f052e916 | fix(megdnn): change ver 60 to use cuda capability 50 | 5 years ago |
| Megvii Engine Team | f5833a5294 | fix(dnn/cuda): fix cublas matmul on sm60 | 5 years ago |
| Megvii Engine Team | f91881ffdc | MegEngine: Initial commit of MegEngine. | 5 years ago |