# Build support status
## Host build
* Windows build (CPU and GPU)
* Linux build (CPU and GPU)
* MacOS build (CPU only)
* Android build (CPU only) in a [termux](https://termux.com/) env
## Cross build
* Windows cross build ARM-Android (ok)
* Windows cross build ARM-Linux (ok)
* Linux cross build ARM-Android (ok)
* Linux cross build ARM-Linux (ok)
* Linux cross build RISCV (supports [rvv](https://github.com/riscv/riscv-v-spec))-Linux (ok)
* MacOS cross build ARM-Android (ok)
* MacOS cross build ARM-Linux (ok but experimental)
* MacOS cross build iOS (ok)
# Build env prepare
## Prerequisites
Most of the dependencies of MegBrain(MegEngine) are located in the [third_party](../../third_party) directory, and can be prepared by executing:
```bash
./third_party/prepare.sh
./third_party/install-mkl.sh
```
On Windows, use a bash shell env (bash from Windows Git). In fact, if you can use the git command on Windows, bash.exe is always installed in the same dir as git.exe; find it, then you can prepare the third-party code with the following
* commands:
```
bash.exe ./third_party/prepare.sh
bash.exe ./third_party/install-mkl.sh
```
If you use the GitHub MegEngine and build for Windows XP, please:
```
1: download MKL for XP from: http://registrationcenter-download.intel.com/akdlm/irc_nas/4617/w_mkl_11.1.4.237.exe
2: install the exe, then from the install dir:
2a: copy the include files to third_party/mkl/x86_32/include/
2b: copy the lib files to third_party/mkl/x86_32/lib/
```
`third_party/prepare.sh` can also be managed by `CMake`: just config `EXTRA_CMAKE_ARGS="-DMGE_SYNC_THIRD_PARTY=ON"` before running `scripts/cmake-build/*.sh`
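For example, the CMake-managed sync looks roughly like this (a sketch only; it assumes the wrapper scripts read `EXTRA_CMAKE_ARGS` from the environment, as described in the build-args section at the end of this document):

```shell
# Sync third_party via CMake instead of running prepare.sh by hand.
export EXTRA_CMAKE_ARGS="-DMGE_SYNC_THIRD_PARTY=ON"
# The actual invocation (commented out: it needs a full repo checkout):
# ./scripts/cmake-build/host_build.sh
echo "EXTRA_CMAKE_ARGS=${EXTRA_CMAKE_ARGS}"
```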
But some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive) (>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn) (>=7.6) are required when building MegBrain with CUDA support.
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html) (>=5.1.5) is required when building with TensorRT support.
* LLVM/Clang (>=6.0) is required when building with Halide JIT support.
* Python (>=3.6) and numpy are required to build Python modules.
## Package install
### Windows host build
* commands:
```
1: install git (Windows GUI)
* download git-install.exe from https://git-scm.com/download/win
* only the git-lfs component needs to be chosen
* install to the default dir: /c/Program\ Files/Git
2: install Visual Studio 2019 Enterprise (Windows GUI)
* download the install exe from https://visualstudio.microsoft.com
* choose "c++ develop" -> choose cmake/MSVC/windows-sdk when installing
* NOTICE: Windows SDK versions >= 14.28.29910 are not compatible with CUDA 10.1, please
choose a version < 14.28.29910
* then install the chosen components
3: install LLVM from https://releases.llvm.org/download.html (Windows GUI)
* the LLVM installed by Visual Studio has some issues, e.g., link crashes on large projects; please use the official version
* download the install exe from https://releases.llvm.org/download.html
* our CI uses LLVM 12.0.1; if you install another version, please modify LLVM_PATH
* install 12.0.1 to /c/Program\ Files/LLVM_12_0_1
4: install python3 (Windows GUI)
* download the 64-bit python install exe (we support python3.6-python3.9 now)
https://www.python.org/ftp/python/3.6.8/python-3.6.8-amd64.exe
https://www.python.org/ftp/python/3.7.7/python-3.7.7-amd64.exe
https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe
https://www.python.org/ftp/python/3.9.4/python-3.9.4-amd64.exe
https://www.python.org/ftp/python/3.10.1/python-3.10.1-amd64.exe
* install 3.6.8 to /c/Users/${USER}/mge_whl_python_env/3.6.8
* install 3.7.7 to /c/Users/${USER}/mge_whl_python_env/3.7.7
* install 3.8.3 to /c/Users/${USER}/mge_whl_python_env/3.8.3
* install 3.9.4 to /c/Users/${USER}/mge_whl_python_env/3.9.4
* install 3.10.1 to /c/Users/${USER}/mge_whl_python_env/3.10.1
* copy python.exe to python3.exe
loop cd /c/Users/${USER}/mge_whl_python_env/*
copy python.exe to python3.exe
* install python dependency components
loop cd /c/Users/${USER}/mge_whl_python_env/*
python3.exe -m pip install --upgrade pip
python3.exe -m pip install -r imperative/python/requires.txt
python3.exe -m pip install -r imperative/python/requires-test.txt
5: install cuda components (Windows GUI)
* now we support cuda10.1+cudnn7.6+TensorRT6.0 on Windows
* install cuda10.1 to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
* install cudnn7.6 to C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32
* install TensorRT6.0 to C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-6.0.1.5
6: edit system env variables (Windows GUI)
* create new key: "VS_PATH", value: "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise"
* create new key: "LLVM_PATH", value: "C:\Program Files\LLVM_12_0_1"
* append to the "Path" env value:
C:\Program Files\Git\cmd
C:\Users\build\mge_whl_python_env\3.8.3
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32\cuda\bin
C:\Program Files\LLVM_12_0_1\lib\clang\12.0.1\lib\windows
```
### Linux host build
* commands:
```
1: install CMake (version >= 3.15.2) and ninja-build
2: install gcc/g++ (version >= 6; gcc/g++ >= 7 if you need to build training mode)
3: install build-essential git git-lfs gfortran libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
4: CUDA env (if building with CUDA), please export the CUDA/CUDNN/TRT env, for example:
export CUDA_ROOT_DIR=/path/to/cuda
export CUDNN_ROOT_DIR=/path/to/cudnn
export TRT_ROOT_DIR=/path/to/tensorrt
```
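A minimal sketch for failing early if one of the three variables was forgotten before invoking the build scripts (the install paths below are placeholders, as in the step above; substitute your real locations):

```shell
# Placeholder install paths (assumptions); point these at your real installs.
export CUDA_ROOT_DIR=/usr/local/cuda
export CUDNN_ROOT_DIR=/usr/local/cudnn
export TRT_ROOT_DIR=/opt/tensorrt

# Abort before the build if any of the three variables is missing.
for v in CUDA_ROOT_DIR CUDNN_ROOT_DIR TRT_ROOT_DIR; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
        echo "error: $v is not set" >&2
        exit 1
    fi
    echo "$v=$val"
done
```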
### MacOS host build
* commands:
```
1: install CMake (version >= 3.15.2)
2: install brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
3: brew install python python3 coreutils ninja
4: install at least the Xcode command line tools: https://developer.apple.com/xcode/
5: about cuda: we do not support CUDA on MacOS
6: python3 -m pip install numpy (if you want to build with training mode)
```
### Cross build for ARM-Android
Now we support Windows/Linux/MacOS cross build to ARM-Android
* commands:
```
1: download the NDK from https://developer.android.google.cn/ndk/downloads/ for your OS platform; NDK20 or NDK21 is suggested
2: export NDK_ROOT=NDK_DIR in a bash-like env
```
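Concretely, the export looks like this (the unpack location and NDK release name are assumptions; use wherever you actually unpacked the download):

```shell
# Assumed unpack location for the NDK downloaded in step 1.
NDK_DIR="$HOME/android-ndk-r21e"
export NDK_ROOT="$NDK_DIR"
echo "NDK_ROOT=$NDK_ROOT"
```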
### Cross build for ARM-Linux
Now we fully support ARM-Linux on Linux and Windows, and experimentally on MacOS
* commands:
```
1: download toolchains from http://releases.linaro.org/components/toolchain/binaries/ or https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads if you use Windows or Linux
2: download toolchains from https://github.com/thinkski/osx-arm-linux-toolchains if you use MacOS
```
### Cross build for RISCV-Linux
Now we support RISCV-Linux
* commands:
```
1: download toolchains from https://github.com/riscv-collab/riscv-gnu-toolchain
```
### Cross build for iOS
Now we only support cross build to iOS from MacOS
* commands:
```
1: install the full Xcode: https://developer.apple.com/xcode/
```
# How to build
## With bash env (Linux/MacOS/Windows-git-bash)
* host build: just use the script scripts/cmake-build/host_build.sh, which
builds MegBrain(MegEngine) to run on the same host machine (i.e., no cross compiling)
The following command displays the usage:
```
scripts/cmake-build/host_build.sh -h
more examples:
1a: build for Windows XP (sp3): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m -d
(opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m
2a: build for Windows XP (sp2): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m -d
(opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m
```
* cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
builds MegBrain(MegEngine) for inference on Android-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_android_arm_inference.sh -h
```
* cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
builds MegBrain(MegEngine) for inference on Linux-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```
* cross build to RISCV-Linux: scripts/cmake-build/cross_build_linux_riscv_inference.sh
builds MegBrain(MegEngine) for inference on Linux-RISCV platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_linux_riscv_inference.sh -h
```
* if the board supports RVV (at least 0.7), for example the Nezha D1, use -a rv64gcv0p7
* if the board does not support RVV, use -a rv64norvv
* cross build to iOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
builds MegBrain(MegEngine) for inference on iOS (iPhone/iPad) platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_ios_arm_inference.sh -h
```
## Visual Studio GUI (only for Windows host)
* commands:
```
1: import the megengine src to Visual Studio as a project
2: right click CMakeLists.txt, choose 'cmake config', then choose clang_cl_x86 or clang_cl_x64
3: config other CMake options, e.g., CUDA ON or OFF
```
## Other ARM-Linux-Like board support
It's easy to support other customized arm-linux-like boards, for example:
* 1: HISI 3516/3519: in fact, you can just use toolchains from arm developer or linaro,
then call scripts/cmake-build/cross_build_linux_arm_inference.sh to build an ELF
binary; or, if you have the HISI official toolchain, you just need to modify CMAKE_CXX_COMPILER
and CMAKE_C_COMPILER in toolchains/arm-linux-gnueabi* to the real names
* 2: for Raspberry Pi, just use scripts/cmake-build/cross_build_linux_arm_inference.sh
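A sketch of that compiler override (the HISI compiler names below are assumptions for illustration; substitute the real binaries from your vendor SDK). The snippet just generates the two `set(...)` lines that need to end up in the toolchain file:

```shell
# Assumed vendor compiler names: adjust to your actual HISI toolchain.
CROSS_CC=arm-hisiv300-linux-gcc
CROSS_CXX=arm-hisiv300-linux-g++

# Generate the two CMake variable overrides the README says must be
# changed in toolchains/arm-linux-gnueabi* (written to a local file here
# so the sketch stays self-contained).
cat > my_hisi_toolchain.cmake <<EOF
set(CMAKE_C_COMPILER ${CROSS_CC})
set(CMAKE_CXX_COMPILER ${CROSS_CXX})
EOF
cat my_hisi_toolchain.cmake
```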
## About build args
All `scripts/cmake-build/*.sh` scripts support `EXTRA_CMAKE_ARGS` to config more options
* get the supported options with `-l`, for example: `scripts/cmake-build/cross_build_android_arm_inference.sh -l`
* CMake supports the `Release`, `Debug`, and `RelWithDebInfo` build types. The default build type of all `scripts/cmake-build/*.sh` is `Release`; you can build the `Debug` type with `-d`. If you want to build with `RelWithDebInfo`, you can config it with `EXTRA_CMAKE_ARGS`, for example: `EXTRA_CMAKE_ARGS="-DCMAKE_BUILD_TYPE=RelWithDebInfo" ./scripts/cmake-build/host_build.sh`. Notice: when building with `Release`, we disable some build components: `RTTI`, `MGB_ASSERT_LOC`, and `MGB_ENABLE_DEBUG_UTIL`
* CMake builds all targets by default; if you just want to build a specified target, build with `-e xxxx`, for example, to only build `lite_shared`: `./scripts/cmake-build/cross_build_android_arm_inference.sh -e lite_shared`. Notice: with `-e`, the target will not be stripped, which is useful for debugging; strip the target manually if needed
* for other build flags, please run with the flag `-h`