177 Commits (dev-support-lite-fork-debug-mode)

Author SHA1 Message Date
  Megvii Engine Team 8fef78d06d feat(dnn/cuda): add relayout format when width is an odd number 4 years ago
  Megvii Engine Team 19a554d674 test(dnn/cuda): add testcase for transforming tensor layout between nchw and nchw64 4 years ago
  Megvii Engine Team 23032f50f2 feat(dnn/cuda): support float16 for index_incr_multi_axis_vec 4 years ago
  Megvii Engine Team 938944027d fix(mgb/dnn): fix cudnn8 convbias 4 years ago
  Megvii Engine Team 1997b1a289 feat(dnn/cuda): add correlation kernel 4 years ago
  Megvii Engine Team c3f8cf04fa feat(dnn): add conv_bwd_data and conv_bwd_filter accuracy shake check 4 years ago
  Megvii Engine Team 1e6ef3771f feat(mgb/dnn): add accuracy shake checker 4 years ago
  Megvii Engine Team ff755451d2 refactor(mgb): move algo's name from info to desc and delete some algo's unnecessary param() method 4 years ago
  Megvii Engine Team 756c1eb7f2 fix(mgb/dnn): add cuda float naive matmul algo 4 years ago
  Megvii Engine Team 68f2e59763 fix(mgb(ci)): fix tx1 ci testcase 4 years ago
  Megvii Engine Team ba2ad46e54 feat(gopt): add deconv nchw4 int8 opt pass, add deconv nchw int8 4 years ago
  Megvii Engine Team 5d350fc843 feat(dnn/cuda): add deconv int8 and fix cutlass conv wrapper based on modified cutlass 2.4 4 years ago
  Megvii Engine Team c82d88751a fix(dnn/cuda): add cuda nchw int8 conv impl with nchw4 to fix cu111 compatibility 4 years ago
  Megvii Engine Team 97beae2fd8 fix(megdnn): fix megdnn benchmark testcase 4 years ago
  Megvii Engine Team 2de2222e46 feat(dnn/cuda): add cutlass batched gemv kernel for matmul operator 4 years ago
  Megvii Engine Team 973d2a0ac2 feat(dnn/cuda): add cutlass matmul using split k parallel 4 years ago
  Megvii Engine Team 03c921f7c4 feat(dnn/cuda): add cutlass matmul impls 4 years ago
  Megvii Engine Team cf27dd642c fix(cuda): use cudnn8.0.4 as cu111 default libs 4 years ago
  Megvii Engine Team 649e4dd750 test(cuda): fix test for cu111 4 years ago
  Megvii Engine Team c69359d00d fix(dnn/cuda): disable cudnn conv_bias kernels for NCHW4_NCHW tensor format 4 years ago
  Megvii Engine Team 0e3a6329ff build(cuda): support cu111 build 4 years ago
  Megvii Engine Team af42ce7e69 fix(megdnn): some fixes of execution policy 4 years ago
  Megvii Engine Team 821656aa4b refactor(megdnn): refactor brute force algo in batched matmul 4 years ago
  Megvii Engine Team 08ff62deb6 refactor(megdnn): refactor batched matmul algo in conv bias 4 years ago
  Megvii Engine Team 8773926ef8 refactor(megdnn): refactor matmul algo in conv bias 4 years ago
  Megvii Engine Team e4b71bdf64 refactor(megdnn): remove unnecessary 1x1 algo 4 years ago
  Megvii Engine Team b04ad06f84 refactor(megdnn): refactor matmul algo in conv backward filter 4 years ago
  Megvii Engine Team 25089e520e refactor(megdnn): refactor matmul algo in conv backward data 4 years ago
  Megvii Engine Team 0d720653ac refactor(megdnn): add default algo for convolution forward 4 years ago
  Megvii Engine Team 659217acd2 refactor(megdnn): refactor bfloat16 convbias to recursive interface 4 years ago
  Megvii Engine Team 4a1d52c9c6 refactor(megdnn): refactor bfloat16 matmul to recursive interface 4 years ago
  Megvii Engine Team b8febaf91f refactor(megdnn): refactor bfloat16 convolutionbackwardfilter to recursive interface 4 years ago
  Megvii Engine Team f14e0c17e7 feat(mgb): add recursive for fastrun and megdnn test 4 years ago
  Megvii Engine Team 364afec033 chore(mge): update copyright years 4 years ago
  Megvii Engine Team a85531dd0f feat(mgb/opr): add tqt opr 4 years ago
  Megvii Engine Team c3a4b2225d feat(dnn/cuda): add cutlass impls for fused convolution reformat operation 4 years ago
  Megvii Engine Team 5f44203d7b feat(dnn/cuda): add a cutlass impl for fusing convolution and dimshuffle 4 years ago
  Megvii Engine Team 61f917fb8e feat(dnn/cuda): add impl for fusing warp perspective and dimshuffle 4 years ago
  Megvii Engine Team 3bf73ff16f feat(dnn): add cuda preprocess fusion 4 years ago
  Megvii Engine Team 142f31a875 perf(dnn/cuda): change conv_bias heuristic, prefer dnn chanwise impl, dislike dnn batch gemm conv1x1 4 years ago
  Megvii Engine Team a1877ee0fa refactor(dnn): refactor algo interface, use algoinfo instead of global algorithm 4 years ago
  Megvii Engine Team 6f5d0febf1 perf(dnn/cuda): enhance performance for pooling forward 4 years ago
  Megvii Engine Team 6856ce9ce2 feat(dnn): support conv bias activation for nchw4 input tensor format and nchw output tensor format 4 years ago
  Megvii Engine Team 89ad33aeb3 feat(dnn/cuda): support weight preprocessing for cutlass algorithms 4 years ago
  Megvii Engine Team c03249c059 feat(dnn/opr): add megdnn fake quant opr 4 years ago
  Megvii Engine Team 739f927c4c feat(dnn/cuda): opt dp4a conv for small channel based on cutlass 4 years ago
  Megvii Engine Team 4aa277a203 refactor(dnn/cuda): misc 4 years ago
  Megvii Engine Team ba66e1d039 feat(dnn): add nchw_fp32 nchw44_qint8 cuda dct 4 years ago
  Megvii Engine Team edb32495c6 feat(dnn/opr): add megdnn adaptive pooling opr 4 years ago
  Megvii Engine Team 310c805f20 fix(dnn/cuda): use kernel parameter instead of user constant memory 4 years ago