158 Commits (c42ce9370581debfad9ef0499deaa867b623cec0)

Author SHA1 Message Date
  Megvii Engine Team 8e5410e41f feat(cuda): add fp16 compute 16 kernel 3 years ago
  Megvii Engine Team 472e2f9655 refactor(cuda): depthwise large kernel 3 years ago
  Megvii Engine Team e698ec20c2 feat(cuda): float16 depthwise large kernel conv compute fp32 3 years ago
  Megvii Engine Team 48406382ce feat(cuda): support float16 depthwise large kernel conv 3 years ago
  Megvii Engine Team 7042f76b34 perf(cuda): speedup conv backward data with small feature map and large filter size 3 years ago
  Megvii Engine Team 87a2aeebb1 perf(cuda): speedup chanwise conv with small feature map and large filter size 3 years ago
  Megvii Engine Team 2293385e93 feat(mge): add conv padding mode 3 years ago
  Megvii Engine Team afe9c4b50d feat(dnn/cuda): add implicit bmm kernels for large kernel depthwise convolution backward filter opr 3 years ago
  Megvii Engine Team 38067472d2 fix(dnn/cuda): fix ci 3 years ago
  Megvii Engine Team 1da58ae17a feat(dnn/cuda): add implicit bmm large kernel dwconv2d dgrad kernels 3 years ago
  Megvii Engine Team 96050073a2 feat(dnn/cuda): add implicit bmm large kernel dwconv2d fprop impl 3 years ago
  Megvii Engine Team 95ac055538 feat(dnn,mgb,imperative): add diag opr implement 3 years ago
  Megvii Engine Team cbbca5fb10 feat(mge): add softmax op use cudnn api 3 years ago
  Megvii Engine Team 82be0aaced test(dnn): fix compute capability requirement for NCHWX test 3 years ago
  Megvii Engine Team 1999307015 feat(mgb/opr): add dropout kernel 3 years ago
  Megvii Engine Team a93741815b feat(mgb/opr): add layernorm forward and backward kernel 3 years ago
  Megvii Engine Team 2696e4efaa feat(dnn): add float16 for remap backward 3 years ago
  Megvii Engine Team 11d75fecb5 feat(dnn/check_non_finite): add batch check_non_finite 3 years ago
  Megvii Engine Team ba2f0c2e48 fix(dnn/cuda): fix cudnn_conv algo of conv_bias opr for fp16 add z cases 3 years ago
  Megvii Engine Team c85631aa77 feat(dnn): use ref ptr interface for all backends 3 years ago
  Megvii Engine Team 89186edc5d fix(dnn): correct reduce/argmxx/fakequant calculation with nan 3 years ago
  Megvii Engine Team 68cdabd288 feat(opr): indexing_multi_axis_vec support nd index 3 years ago
  Megvii Engine Team 9b4cd92ba3 fix(mgb/dnn): fix cudnnConvBiasActivation crash on nchw32 int8 with oc > 256 3 years ago
  Megvii Engine Team 10af44abba fix(dnn/cuda): fix cudnn conv impl for nchw4_nchw hybrid layout 3 years ago
  Megvii Engine Team 5885b137fa feat(dnn/arm): support layout like NHWC channel like broadcast on arm 3 years ago
  Megvii Engine Team 369c2ccc5a style(all): reformat c++ code 3 years ago
  Megvii Engine Team f5cb21ed3a fix(mgb/opr): add non finite check 3 years ago
  Megvii Engine Team bde5cf3564 feat(dnn): add resize linear for arm 3 years ago
  Megvii Engine Team 3d3666b6e0 test(dnn/bn): add compatible configs for NHWC BN 3 years ago
  Megvii Engine Team 3977b7aa0b feat(mgb/shuffle): add shuffle opr 3 years ago
  Megvii Engine Team 17371e79b9 fix(dnn/reduce): fix reduce_mean o16c32 is incorrect for large tensor 3 years ago
  Megvii Engine Team 8b40f57738 feat(mgb/dnn): add conv1x1 algo for matrix mul 3 years ago
  Megvii Engine Team d69b59035d feat(dnn): add an get_all_algorithms_safe interface 3 years ago
  Megvii Engine Team 8b94f49328 fix(dnn/cuda): fix elemwise and relayout int4 bug when last shape is 1 3 years ago
  Megvii Engine Team 722aecd437 feat(mgb): support fp16 nhwc backward 3 years ago
  Megvii Engine Team 0708bc780c fix(dnn/cuda): disallow implicit dtype conversion in cublaslt matmul algos 3 years ago
  Megvii Engine Team 4c13bc7e1b feat(dnn/cuda): add nhwc int8 deconv 3 years ago
  Megvii Engine Team 11f022ff7c feat(dnn/cuda): add nhwc int8 imma conv and conv fuse typecvt 3 years ago
  Megvii Engine Team 67575d582c feat(mge/opr): add interpolate bilinear mode 3 years ago
  Megvii Engine Team 0558b2123d feat(mge/opr): add interpolate nearest mode 3 years ago
  Megvii Engine Team c25125e3d2 perf(dnn/cuda): sass int8 epilogue remove shared load 3 years ago
  Megvii Engine Team ff0e6be7b9 fix(dnn/cuda): fix cutlass tensorop kernels 3 years ago
  Megvii Engine Team 336761253d feat(dnn/cuda): add tensorcore matmul for fp16 data type 3 years ago
  Megvii Engine Team eab6afab47 feat(mgb): add padding opr for megbrain 4 years ago
  Megvii Engine Team b18feaab33 feat(dnn/cuda): use cutlass remove shared load imma conv kernel 4 years ago
  Megvii Engine Team 1af350c6d2 feat(dnn): add fill kernel 3 years ago
  Megvii Engine Team 3eb0505f9b feat(imperative): add support for quantized conv transpose2d 3 years ago
  Megvii Engine Team 3b452d8c16 feat(mgb): cuda conv support nhwc format and fp16 dtype 3 years ago
  Megvii Engine Team 2aba0378b9 refactor(mgb/dnn): fix group conv is_available 3 years ago
  Megvii Engine Team 4a92346b7a refactor(mgb): refactor group conv3d 3 years ago