53 Commits (release-1.5)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Megvii Engine Team | 287cab49c2 | fix(mgb/sereg): fix rng operator compatibility | 3 years ago |
| Megvii Engine Team | f76a2cc2c6 | feat(mge/opr): add silu and gelu | 3 years ago |
| Megvii Engine Team | f8b0f2cb91 | build(dnn/cutlass): fix build for cutlass | 3 years ago |
| Megvii Engine Team | 4eda338876 | feat(dnn/cuda): generate cutlass kimpls using cmake and bazel | 4 years ago |
| Megvii Engine Team | 894a2407c2 | feat(dnn/cuda): add relayout format kernel for nchw <-> nhwc | 4 years ago |
| Megvii Engine Team | 5a14a89224 | refactor(dnn/cuda): refactor cutlass kernel generator for gemm and gemv | 4 years ago |
| Megvii Engine Team | 4abf7bd36f | refactor(dnn/cuda): refactor kernel generator for cutlass convolution kernels | 4 years ago |
| Megvii Engine Team | 66f70578c2 | feat(dnn/cuda): add convolution with i8 input and i4 output | 4 years ago |
| Megvii Engine Team | 43098fb8f1 | feat(mge): add SlidingWindowTranspose opr | 4 years ago |
| Megvii Engine Team | b078dda90b | feat(mge/random): add some random op and remove random/distrbution.py | 4 years ago |
| Megvii Engine Team | f30c0e06a6 | feat(mgb/opr): add lsq opr | 4 years ago |
| Megvii Engine Team | 12a0e61542 | feat(dnn/cuda): add cuda elemwise int4 | 4 years ago |
| Megvii Engine Team | 71c2f61254 | feat(dnn/cuda): add relayout format to support layout transform between NCHW and NCHW64 | 4 years ago |
| Megvii Engine Team | ed92207585 | feat(dnn/cuda): add conv bias impl for int4 data type using sass language | 4 years ago |
| Megvii Engine Team | 1525a02530 | feat(mge/module): add python wrapper for unfold | 4 years ago |
| Megvii Engine Team | 1997b1a289 | feat(dnn/cuda): add correlation kernel | 4 years ago |
| Megvii Engine Team | 8494a1529e | chore(scripts): clarify and fix default value of bit combined enum | 4 years ago |
| Megvii Engine Team | a3ea1f153c | feat(mgb/opr): add fast profile and combined Execution strategy | 4 years ago |
| Megvii Engine Team | c82d88751a | fix(dnn/cuda): add cuda nchw int8 conv impl with nchw4 to fix cu111 compatibility | 4 years ago |
| Megvii Engine Team | 2de2222e46 | feat(dnn/cuda): add cutlass batched gemv kernel for matmul operator | 4 years ago |
| Megvii Engine Team | 973d2a0ac2 | feat(dnn/cuda): add cutlass matmul using split k parallel | 4 years ago |
| Megvii Engine Team | 03c921f7c4 | feat(dnn/cuda): add cutlass matmul impls | 4 years ago |
| Megvii Engine Team | ad87f78a14 | chore(imperative): refine tblgen for generating op name | 4 years ago |
| Megvii Engine Team | 55042195d4 | chore(winograd): add Convolutionv2 param | 4 years ago |
| Megvii Engine Team | a85531dd0f | feat(mgb/opr): add tqt opr | 4 years ago |
| Megvii Engine Team | 61f917fb8e | feat(dnn/cuda): add impl for fusing warp perspective and dimshuffle | 4 years ago |
| Megvii Engine Team | fc0fcd2f7f | chore(winograd): remove winograd transform code | 4 years ago |
| Megvii Engine Team | 69e3e32240 | feat(imperative): auto generated opdef header and python binding | 4 years ago |
| Megvii Engine Team | 3bf73ff16f | feat(dnn): add cuda preprocess fusion | 4 years ago |
| Megvii Engine Team | 6856ce9ce2 | feat(dnn): support conv bias activation for nchw4 input tensor format and nchw output tensor format | 4 years ago |
| Megvii Engine Team | c03249c059 | feat(dnn/opr): add megdnn fake quant opr | 4 years ago |
| Megvii Engine Team | ba66e1d039 | feat(dnn): add nchw_fp32 nchw44_qint8 cuda dct | 4 years ago |
| Megvii Engine Team | a9f98e9c66 | refactor(meg/internal): move interal codes back to megbrain | 4 years ago |
| Megvii Engine Team | 5a85c907e0 | feat(mgb/opr): add megbrain adaptive pooling opr | 4 years ago |
| Megvii Engine Team | 76fa71573b | feat(dnn/cuda): add cutlass nchw4 convolution | 4 years ago |
| Megvii Engine Team | 199eefbd4c | fix(dnn): generate mode files | 4 years ago |
| Megvii Engine Team | 9510136223 | fix(mgb/rocm): remove begin-internal of rocm | 4 years ago |
| Megvii Engine Team | aeffcd5897 | feat(dnn/cuda): integrate cutlass nchw32 tensorcore convolution | 4 years ago |
| Megvii Engine Team | 9e5e32dee2 | fix(dnn): restore opr_param_defs.py | 4 years ago |
| Megvii Engine Team | d334b229b0 | feat(imperative): add nms opr wrapper | 4 years ago |
| Megvii Engine Team | a1e6720756 | feat(dnn): enable bool comparison | 4 years ago |
| Megvii Engine Team | e258812f12 | feat(dnn): add bool dtype | 4 years ago |
| Megvii Engine Team | 0f9dec6816 | feat(mge/imperative): name so lib | 4 years ago |
| Megvii Engine Team | 457a1e010c | refactor(imperative): initial merge of xxx and megengine | 5 years ago |
| Megvii Engine Team | 7886ff9af0 | feat(dnn): add relayout_format for nchw to nchw4 and ic <=4 | 5 years ago |
| Megvii Engine Team | 7ae05ac886 | feat(imperative): merge common c++ code to megbrain | 5 years ago |
| Megvii Engine Team | 8f87a3e988 | feat(dnn/arm_common): add int8 nchw44 winograd f23_4x4 f23_8x8 compute float32/int16 output int8 | 5 years ago |
| Megvii Engine Team | a1f8ecc74f | fix(dnn/naive): add convolution nchw44-dot format | 5 years ago |
| Megvii Engine Team | a6bc250d1c | feat(dnn/common): add matmul impl for naive with matrix format mk4_dot | 5 years ago |
| Megvii Engine Team | 0293d58ade | feat(mge): add bfloat16 support | 5 years ago |