615 Commits (v1.9.1)

Author SHA1 Message Date
  Megvii Engine Team 25932352e9 refactor(mgb/dnn): rocm pooling rebase algochooser 4 years ago
  Megvii Engine Team 1cfdbc565c feat(dnn): add deterministic max pooling 4 years ago
  Megvii Engine Team 20ab82d00c fix(tee): fix tee crash 4 years ago
  Megvii Engine Team a5060a2bfe feat(mgb/opr): add check_has_inf kernel and opr 4 years ago
  Megvii Engine Team 3597a6dbd7 feat(dnn/arm): nchw_nchw44 conv support 1x1s1 4 years ago
  Megvii Engine Team d915c5a3fd refactor(mgb): make convolution3D handle noncontiguous tensors 4 years ago
  Megvii Engine Team d04cd67faf refactor(mgb): make conv-backward-filter handle noncontiguous tensors 4 years ago
  Megvii Engine Team 44376f702a refactor(mgb): make conv-backward-data handle noncontiguous tensors 4 years ago
  Megvii Engine Team 7b2a76d1ee refactor(mgb): make conv handle noncontiguous tensors 4 years ago
  Megvii Engine Team ca2828ddcb fix(dnn/x86): fix x86 int8 matmul ldc bug 4 years ago
  Megvii Engine Team 40085acbae fix(mgb): remove unnecessary cudnn8 warning 4 years ago
  Megvii Engine Team 62bd6c823b feat(cmake/debug): misc for build 4 years ago
  Megvii Engine Team b87af9f77f feat(dnn/cuda): topk support fp16 4 years ago
  Megvii Engine Team 2eea00097c feat(mgb): add fast run batch size graph option 4 years ago
  Megvii Engine Team 47dcdf3e17 fix(mgb/core): fix dtype and resize modifiers for tensor 4 years ago
  Megvii Engine Team 71cc814eaf feat(ci): add aarch64 linux ci 4 years ago
  Megvii Engine Team 24a3878130 feat(dnn/cuda): add nchw conv u4xs4 support 4 years ago
  Megvii Engine Team 606540bef4 feat(dnn/cuda): add nhwc 4bit warp perspective 4 years ago
  Megvii Engine Team 1e6019436c feat(dnn/cuda): add nhwc int4 pooling 4 years ago
  Megvii Engine Team e661ae904f feat(dnn/cuda): add base class for cutlass uint4 and int4 algos 4 years ago
  Megvii Engine Team 319436dd14 feat(dnn/cuda): add cutlass impls for uint4 x int4 conv bias 4 years ago
  Megvii Engine Team d28eba4ea5 feat(dnn/cuda): add cutlass impls for int4 conv bias 4 years ago
  Megvii Engine Team 14b65e4da7 feat(dnn/cuda): add reduce_filter_and_update_bias 4 years ago
  Megvii Engine Team 2d4e62ef58 feat(dnn/cuda): add cuda uint4 pooling 4 years ago
  Megvii Engine Team 19919384fc feat(dnn/cuda): add cuda uint4 warp perspective 4 years ago
  Megvii Engine Team 5868d1fe4f fix(arm_common/pooling): check mode in pooling algo to avoid wrong use of AVERAGE_COUNT_EXCLUDE_PADDING 4 years ago
  Megvii Engine Team 86b69cacd0 fix(dnn): fixes for int4 4 years ago
  Megvii Engine Team 4a802d21ca feat(dnn/cuda): add conv u4xs4 sass kernel 4 years ago
  Megvii Engine Team adf75a291d perf(dnn/cuda): add sass int4 128x128 4 years ago
  Megvii Engine Team 8da2f698a3 feat(dnn/cuda): support warp perspective/pooling op when channel not aligned to 64 4 years ago
  Megvii Engine Team c218d4b029 feat(dnn/cuda): fallback conv qs4 support channel not aligned to 64 4 years ago
  Megvii Engine Team 4fe68ac9ed feat(dnn/cuda): support transforming layout between nchw and nchw64 when channel not aligned to 64 4 years ago
  Megvii Engine Team ae6ff2c5a6 feat(mgb/gopt): add opt pass for nchw64 layout transform 4 years ago
  Megvii Engine Team 56e863b7d4 fix(dnn/cuda): fix int4 epilogue stg bug 4 years ago
  Megvii Engine Team cff61a53d4 perf(dnn/cuda): optimize int4 sass conv main loop and epilogue without fuse_z 4 years ago
  Megvii Engine Team 12a0e61542 feat(dnn/cuda): add cuda elemwise int4 4 years ago
  Megvii Engine Team df1af59b5c feat(dnn): warp perspective support int4 4 years ago
  Megvii Engine Team 2398df079c feat(dnn/cuda): add cuda int4 pooling 4 years ago
  Megvii Engine Team 2a2a7f4552 test(mgb/opr): add testcase for conv bias int4 4 years ago
  Megvii Engine Team 858261af1f fix(python_module): fix conversion between numpy-ndarray and mgb tensor for qint4 and quint4 4 years ago
  Megvii Engine Team e250afb08f feat(dnn/cuda): support conv_bias for nchw64 and qint4 4 years ago
  Megvii Engine Team 3b9b87809d refactor(dnn): refactor lowbit tensor format 4 years ago
  Megvii Engine Team c74660ea88 fix(dnn/cuda): fix invalid local read for relayout format kernel 4 years ago
  Megvii Engine Team 8fef78d06d feat(dnn/cuda): add relayout format when width is an odd number 4 years ago
  Megvii Engine Team 91d6160769 feat(dnn/common): add tensor format for low-bits tensor layout 4 years ago
  Megvii Engine Team 19a554d674 test(dnn/cuda): add testcase for transforming tensor layout between nchw and nchw64 4 years ago
  Megvii Engine Team 71c2f61254 feat(dnn/cuda): add relayout format to support layout transform between NCHW and NCHW64 4 years ago
  Megvii Engine Team df009e89e1 feat(dnn/cuda): add cuda conv bias impls for NCHW format tensors with qint4 data type 4 years ago
  Megvii Engine Team ed92207585 feat(dnn/cuda): add conv bias impl for int4 data type using sass language 4 years ago
  Megvii Engine Team 52b55564d7 refactor(dnn/cuda): refactor reorder filter and bias kernel to support conv imma with data type s4 4 years ago