
# Build support status
## Host build
* Windows build (CPU and GPU)
* Linux build (CPU and GPU)
* macOS build (CPU only)
* Android build (CPU only) in a [termux](https://termux.com/) env
## Cross build
* Windows cross build to ARM-Android (ok)
* Windows cross build to ARM-Linux (ok)
* Linux cross build to ARM-Android (ok)
* Linux cross build to ARM-Linux (ok)
* Linux cross build to RISCV-Linux (supports [rvv](https://github.com/riscv/riscv-v-spec)) (ok)
* macOS cross build to ARM-Android (ok)
* macOS cross build to ARM-Linux (ok but experimental)
* macOS cross build to iOS (ok)
# Build env prepare
## Prerequisites
Most of the dependencies of MegBrain(MegEngine) are located in the [third_party](../../third_party) directory, which can be prepared by executing:
```bash
./third_party/prepare.sh
./third_party/install-mkl.sh
```
Windows shell env (bash from Git for Windows): in fact, if you can use the git command on Windows, bash.exe is installed in the same dir as git.exe; find it, then you can prepare the third-party code by
* command:
```
bash.exe ./third_party/prepare.sh
bash.exe ./third_party/install-mkl.sh
```
If you use MegEngine from GitHub and build for Windows XP, please:
```
1: download MKL for XP from: http://registrationcenter-download.intel.com/akdlm/irc_nas/4617/w_mkl_11.1.4.237.exe
2: install the exe, then from the install dir:
2a: copy the include files to third_party/mkl/x86_32/include/
2b: copy the lib files to third_party/mkl/x86_32/lib/
```
About `third_party/prepare.sh`: it can also be managed by `CMake`; just set `EXTRA_CMAKE_ARGS="-DMGE_SYNC_THIRD_PARTY=ON"` before running `scripts/cmake-build/*.sh`.
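For instance, to let CMake sync the third-party code during a host build (the same flag works for the other scripts under `scripts/cmake-build/`):

```bash
# Sync third_party via CMake instead of running the prepare scripts by hand
EXTRA_CMAKE_ARGS="-DMGE_SYNC_THIRD_PARTY=ON" ./scripts/cmake-build/host_build.sh
```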
But some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive)(>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn)(>=7.6) are required when building MegBrain with CUDA support.
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html)(>=5.1.5) is required when building with TensorRT support.
* LLVM/Clang(>=6.0) is required when building with Halide JIT support.
* Python(>=3.5) and numpy are required to build Python modules.
## Package install
### Windows host build
* commands:
```
1: install git (Windows GUI)
* download git-install.exe from https://git-scm.com/download/win
* only the git-lfs component needs to be selected
* install to the default dir: /c/Program\ Files/Git
2: install Visual Studio 2019 Enterprise (Windows GUI)
* download the installer from https://visualstudio.microsoft.com
* choose "c++ develop" -> choose the cmake/MSVC/windows-sdk components during install
* NOTICE: MSVC versions >= 14.28.29910 are not compatible with CUDA 10.1, please
  choose a version < 14.28.29910
* then install the chosen components
3: install LLVM from https://releases.llvm.org/download.html (Windows GUI)
* the LLVM installed by Visual Studio has some issues, e.g., link crashes on large projects, so please use the official release
* download the installer from https://releases.llvm.org/download.html
* our CI uses LLVM 12.0.1; if you install another version, please modify LLVM_PATH
* install 12.0.1 to /c/Program\ Files/LLVM_12_0_1
4: install python3 (Windows GUI)
* download the 64-bit Python installers (we support python3.5-python3.8 now)
  https://www.python.org/ftp/python/3.5.4/python-3.5.4-amd64.exe
  https://www.python.org/ftp/python/3.6.8/python-3.6.8-amd64.exe
  https://www.python.org/ftp/python/3.7.7/python-3.7.7-amd64.exe
  https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe
* install 3.5.4 to /c/Users/${USER}/mge_whl_python_env/3.5.4
* install 3.6.8 to /c/Users/${USER}/mge_whl_python_env/3.6.8
* install 3.7.7 to /c/Users/${USER}/mge_whl_python_env/3.7.7
* install 3.8.3 to /c/Users/${USER}/mge_whl_python_env/3.8.3
* copy python.exe to python3.exe:
  for each env in /c/Users/${USER}/mge_whl_python_env/*
      copy python.exe to python3.exe
* install the Python dependencies:
  for each env in /c/Users/${USER}/mge_whl_python_env/*
      python3.exe -m pip install --upgrade pip
      python3.exe -m pip install -r imperative/python/requires.txt
      python3.exe -m pip install -r imperative/python/requires-test.txt
5: install the CUDA components (Windows GUI)
* we currently support cuda10.1+cudnn7.6+TensorRT6.0 on Windows
* install cuda10.1 to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
* install cudnn7.6 to C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32
* install TensorRT6.0 to C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-6.0.1.5
6: edit the system env variables (Windows GUI)
* create a new key "VS_PATH" with value "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise"
* create a new key "LLVM_PATH" with value "C:\Program Files\LLVM_12_0_1"
* append to the "Path" env value:
  C:\Program Files\Git\cmd
  C:\Users\build\mge_whl_python_env\3.8.3
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
  C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32\cuda\bin
  C:\Program Files\LLVM_12_0_1\lib\clang\12.0.1\lib\windows
```
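The python3.exe copy and pip steps in item 4 above can be sketched as a bash loop (run from git bash in the MegEngine source root; the version list and install paths match the ones chosen above):

```bash
# For each Python env installed above, clone python.exe to python3.exe
# and install the build/test requirements.
for ver in 3.5.4 3.6.8 3.7.7 3.8.3; do
    env_dir="/c/Users/${USER}/mge_whl_python_env/${ver}"
    cp "${env_dir}/python.exe" "${env_dir}/python3.exe"
    "${env_dir}/python3.exe" -m pip install --upgrade pip
    "${env_dir}/python3.exe" -m pip install -r imperative/python/requires.txt
    "${env_dir}/python3.exe" -m pip install -r imperative/python/requires-test.txt
done
```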
### Linux host build
* commands:
```
1: install CMake (version >= 3.15.2) and ninja-build
2: install gcc/g++ (version >= 6; gcc/g++ >= 7 if you need to build the training mode)
3: install build-essential git git-lfs gfortran libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
4: CUDA env (if building with CUDA): please export the CUDA/CUDNN/TRT env, for example:
export CUDA_ROOT_DIR=/path/to/cuda
export CUDNN_ROOT_DIR=/path/to/cudnn
export TRT_ROOT_DIR=/path/to/tensorrt
```
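Assuming a Debian/Ubuntu-like distribution, steps 1-3 above can be sketched with a single apt invocation (package names as listed above; other distributions differ, and some package versions may need adjusting for your release):

```bash
# Install the build toolchain and library dependencies listed in steps 1-3
sudo apt install cmake ninja-build gcc g++ build-essential git git-lfs gfortran \
    libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev \
    gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip \
    libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
```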
### macOS host build
* commands:
```
1: install CMake (version >= 3.15.2)
2: install brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
3: brew install python python3 coreutils ninja
4: install at least the Xcode command line tools: https://developer.apple.com/xcode/
5: about CUDA: we do not support CUDA on macOS
6: python3 -m pip install numpy (if you want to build with the training mode)
```
### Cross build for ARM-Android
Now we support Windows/Linux/macOS cross build to ARM-Android
* commands:
```
1: download the NDK from https://developer.android.google.cn/ndk/downloads/ (packages differ per host OS); NDK20 or NDK21 is suggested
2: export NDK_ROOT=/path/to/ndk in a bash-like env
```
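Putting the two steps together, a minimal sketch of an ARM-Android cross build (the NDK path is a placeholder; the script is described in the "How to build" section):

```bash
# Point the build at the unpacked NDK, then run the cross-build script
export NDK_ROOT=/path/to/android-ndk-r21
./scripts/cmake-build/cross_build_android_arm_inference.sh
```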
### Cross build for ARM-Linux
Now we fully support ARM-Linux cross build on Linux and Windows hosts, and experimentally on macOS
* commands:
```
1: on Windows or Linux, download toolchains from http://releases.linaro.org/components/toolchain/binaries/ or https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads
2: on macOS, download toolchains from https://github.com/thinkski/osx-arm-linux-toolchains
```
### Cross build for RISCV-Linux
Now we support RISCV-Linux
* commands:
```
1: download toolchains from https://github.com/riscv-collab/riscv-gnu-toolchain
```
### Cross build for iOS
Now we only support cross build to iOS from macOS
* commands:
```
1: install the full Xcode: https://developer.apple.com/xcode/
```
# How to build
## With a bash env (Linux/macOS/Windows git bash)
* host build: just use the script scripts/cmake-build/host_build.sh, which
builds MegBrain(MegEngine) to run on the same host machine (i.e., no cross compiling).
The following command displays the usage:
```
scripts/cmake-build/host_build.sh -h
```
More examples:
```
1a: build for Windows XP (sp3): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m -d
    (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m
2a: build for Windows XP (sp2): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m -d
    (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m
```
* cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
builds MegBrain(MegEngine) for inference on Android-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_android_arm_inference.sh -h
```
* cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
builds MegBrain(MegEngine) for inference on Linux-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```
* cross build to RISCV-Linux: scripts/cmake-build/cross_build_linux_riscv_inference.sh
builds MegBrain(MegEngine) for inference on Linux-RISCV platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_linux_riscv_inference.sh -h
```
* if the board supports RVV (at least 0.7), for example the Nezha D1, use -a rv64gcv0p7
* if the board does not support RVV, use -a rv64norvv
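For example, selecting the architecture via the `-a` values above:

```bash
# Board with RVV >= 0.7 (e.g., Nezha D1):
./scripts/cmake-build/cross_build_linux_riscv_inference.sh -a rv64gcv0p7
# Board without RVV:
./scripts/cmake-build/cross_build_linux_riscv_inference.sh -a rv64norvv
```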
* cross build to iOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
builds MegBrain(MegEngine) for inference on iOS (iPhone/iPad) platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_ios_arm_inference.sh -h
```
## Visual Studio GUI (only for Windows host)
* command:
```
1: import the MegEngine src into Visual Studio as a project
2: right click CMakeLists.txt, choose 'cmake config', then choose clang_cl_x86 or clang_cl_x64
3: configure other CMake options, e.g., CUDA ON or OFF
```
## Other ARM-Linux-like board support
It's easy to support other customized ARM-Linux-like boards, for example:
* 1: HISI 3516/3519: in fact you can just use toolchains from Arm developer or Linaro,
then call scripts/cmake-build/cross_build_linux_arm_inference.sh to build an ELF
binary; or, if you have the HISI official toolchain, you just need to modify CMAKE_CXX_COMPILER
and CMAKE_C_COMPILER in toolchains/arm-linux-gnueabi* to the real name
* 2: for Raspberry Pi, just use scripts/cmake-build/cross_build_linux_arm_inference.sh
# About build args
All `scripts/cmake-build/*.sh` scripts support `EXTRA_CMAKE_ARGS` to configure more options
* get the supported options with `-l`, for example: `scripts/cmake-build/cross_build_android_arm_inference.sh -l`
* CMake supports the `Release`, `Debug`, and `RelWithDebInfo` build types. All `scripts/cmake-build/*.sh` scripts default to `Release`; build the `Debug` type with `-d`. If you want to build with `RelWithDebInfo`, configure it via `EXTRA_CMAKE_ARGS`, for example: `EXTRA_CMAKE_ARGS="-DCMAKE_BUILD_TYPE=RelWithDebInfo" ./scripts/cmake-build/host_build.sh`. Notice: when building with `Release`, we disable some build components: `RTTI`, `MGB_ASSERT_LOC`, and `MGB_ENABLE_DEBUG_UTIL`
* CMake builds all targets by default; if you just want to build a specified target, build with `-e xxxx`, for example, to build only `lite_shared`: `./scripts/cmake-build/cross_build_android_arm_inference.sh -e lite_shared`. Notice: with `-e`, the target will not be stripped, which is useful for debugging; strip the target manually if needed
* for other build flags, please run with the `-h` flag