# Build support status
## Host build
* Windows build (CPU and GPU)
* Linux build (CPU and GPU)
* macOS build (CPU only)
* Android build (CPU only) in a [termux](https://termux.com/) env
## Cross build
* Windows cross build to ARM-Android (ok)
* Windows cross build to ARM-Linux (ok)
* Linux cross build to ARM-Android (ok)
* Linux cross build to ARM-Linux (ok)
* macOS cross build to ARM-Android (ok)
* macOS cross build to ARM-Linux (ok but experimental)
* macOS cross build to iOS (ok)
# Build env prepare
## Prerequisites
Most of the dependencies of MegBrain (MegEngine) are located in the [third_party](../../third_party) directory, and can be prepared by executing:
```bash
./third_party/prepare.sh
./third_party/install-mkl.sh
```
Windows shell env (bash from Windows Git): in fact, if you can use the git command on Windows, bash.exe is installed in the same dir as git.exe; find it, then you can prepare the third-party code with the following
* commands:
```
bash.exe ./third_party/prepare.sh
bash.exe ./third_party/install-mkl.sh
```
If you use MegEngine from GitHub and build for Windows XP, please:
```
1: download MKL for XP from: http://registrationcenter-download.intel.com/akdlm/irc_nas/4617/w_mkl_11.1.4.237.exe
2: install the exe, then from the install dir:
   2a: cp the include files to third_party/mkl/x86_32/include/
   2b: cp the lib files to third_party/mkl/x86_32/lib/
```
`third_party/prepare.sh` can also be managed by `CMake`: just config `EXTRA_CMAKE_ARGS="-DMGE_SYNC_THIRD_PARTY=ON"` before running `scripts/cmake-build/*.sh`, as shown below.
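For example (a minimal sketch using `host_build.sh`; any of the `scripts/cmake-build/*.sh` scripts can be substituted):
```bash
# Let CMake sync third_party instead of running prepare.sh by hand
EXTRA_CMAKE_ARGS="-DMGE_SYNC_THIRD_PARTY=ON" ./scripts/cmake-build/host_build.sh
```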
But some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive) (>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn) (>=7.6) are required when building MegBrain with CUDA support.
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html) (>=5.1.5) is required when building with TensorRT support.
* LLVM/Clang (>=6.0) is required when building with Halide JIT support.
* Python (>=3.5) and NumPy are required to build Python modules; you can sanity-check the installed versions as shown below.
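A quick way to verify the manually installed toolchain versions (a minimal sketch; each command must already be on your PATH):
```bash
nvcc --version      # CUDA toolkit, expect >= 10.1
clang --version     # LLVM/Clang, expect >= 6.0
python3 --version   # expect >= 3.5
python3 -c "import numpy; print(numpy.__version__)"
```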
## Package install
### Windows host build
* commands:
```
1: install git (Windows GUI)
   * download git-install.exe from https://git-scm.com/download/win
   * only the git-lfs component needs to be chosen
   * install to the default dir: /c/Program\ Files/Git
2: install Visual Studio 2019 Enterprise (Windows GUI)
   * download the install exe from https://visualstudio.microsoft.com
   * choose "c++ develop" -> choose the cmake/MSVC/windows-sdk components
   * NOTICE: MSVC toolset versions >= 14.28.29910 are not compatible with
     CUDA 10.1, please choose a version < 14.28.29910
   * then install the chosen components
3: install LLVM from https://releases.llvm.org/download.html (Windows GUI)
   * the LLVM installed by Visual Studio has some issues, e.g., link crashes
     on large projects, so please use the official version
   * download the install exe from https://releases.llvm.org/download.html
   * our CI uses LLVM 12.0.1; if you install another version, please modify LLVM_PATH
   * install 12.0.1 to /c/Program\ Files/LLVM_12_0_1
4: install python3 (Windows GUI)
   * download the 64-bit install exes (we support python3.5-python3.8 now)
     https://www.python.org/ftp/python/3.5.4/python-3.5.4-amd64.exe
     https://www.python.org/ftp/python/3.6.8/python-3.6.8-amd64.exe
     https://www.python.org/ftp/python/3.7.7/python-3.7.7-amd64.exe
     https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe
   * install 3.5.4 to /c/Users/${USER}/mge_whl_python_env/3.5.4
   * install 3.6.8 to /c/Users/${USER}/mge_whl_python_env/3.6.8
   * install 3.7.7 to /c/Users/${USER}/mge_whl_python_env/3.7.7
   * install 3.8.3 to /c/Users/${USER}/mge_whl_python_env/3.8.3
   * in each env under /c/Users/${USER}/mge_whl_python_env/*,
     copy python.exe to python3.exe
   * in each env, install the python dependencies:
     python3.exe -m pip install --upgrade pip
     python3.exe -m pip install -r imperative/python/requires.txt
     python3.exe -m pip install -r imperative/python/requires-test.txt
     (see the bash sketch after this block for both loops)
5: install cuda components (Windows GUI)
   * we now support cuda10.1+cudnn7.6+TensorRT6.0 on Windows
   * install cuda10.1 to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
   * install cudnn7.6 to C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32
   * install TensorRT6.0 to C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-6.0.1.5
6: edit system env variables (Windows GUI)
   * create new key: "VS_PATH", value: "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise"
   * create new key: "LLVM_PATH", value: "C:\Program Files\LLVM_12_0_1"
   * append to the "Path" env value:
     C:\Program Files\Git\cmd
     C:\Users\build\mge_whl_python_env\3.8.3
     C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
     C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
     C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32\cuda\bin
     C:\Program Files\LLVM_12_0_1\lib\clang\12.0.1\lib\windows
```
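The "in each env" loops from step 4 can be run directly in git-bash. A minimal sketch, assuming the interpreters were installed to the directories above and that this is run from the MegEngine source root:
```bash
# For every installed python env: create python3.exe and install the deps
for env in /c/Users/${USER}/mge_whl_python_env/*; do
    cp "${env}/python.exe" "${env}/python3.exe"
    "${env}/python3.exe" -m pip install --upgrade pip
    "${env}/python3.exe" -m pip install -r imperative/python/requires.txt
    "${env}/python3.exe" -m pip install -r imperative/python/requires-test.txt
done
```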
### Linux host build
* commands:
```
1: install CMake (version >= 3.15.2) and ninja-build
2: install gcc/g++ (version >= 6; gcc/g++ >= 7 if you need to build training mode)
3: install build-essential git git-lfs gfortran libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
4: CUDA env (if building with CUDA): please export the CUDA/CUDNN/TRT env, for example:
   export CUDA_ROOT_DIR=/path/to/cuda
   export CUDNN_ROOT_DIR=/path/to/cudnn
   export TRT_ROOT_DIR=/path/to/tensorrt
```
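On a Debian/Ubuntu host (an assumption; use your distribution's package manager otherwise), steps 1-3 can be collapsed into:
```bash
sudo apt update
# apt's cmake may be older than the required 3.15.2 on old releases;
# check with `cmake --version` afterwards
sudo apt install -y cmake ninja-build gcc g++ build-essential git git-lfs gfortran \
    libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib \
    g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool \
    librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
```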
### macOS host build
* commands:
```
1: install CMake (version >= 3.15.2)
2: install brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
3: brew install python python3 coreutils ninja
4: install at least the xcode command line tools: https://developer.apple.com/xcode/
5: about cuda: we do not support CUDA on macOS
6: python3 -m pip install numpy (if you want to build with training mode)
```
### Cross build for ARM-Android
Now we support cross building to ARM-Android from Windows/Linux/macOS
* commands:
```
1: download the NDK package for your OS platform from https://developer.android.google.cn/ndk/downloads/, NDK20 or NDK21 suggested
2: export NDK_ROOT=NDK_DIR in a bash-like env
```
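For example (a minimal sketch; the NDK path is a placeholder for wherever you unpacked it):
```bash
export NDK_ROOT=/path/to/android-ndk-r21
./scripts/cmake-build/cross_build_android_arm_inference.sh
```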
### Cross build for ARM-Linux
Now we support ARM-Linux fully on Linux and Windows, and experimentally on macOS
* commands:
```
1: if using Windows or Linux, download toolchains from http://releases.linaro.org/components/toolchain/binaries/ or https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads
2: if using macOS, download toolchains from https://github.com/thinkski/osx-arm-linux-toolchains
```
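A typical invocation after unpacking a toolchain (a sketch; the path is a placeholder, and run the script with `-h` first to see how it locates the cross compilers):
```bash
export PATH=/path/to/arm-linux-toolchain/bin:$PATH
./scripts/cmake-build/cross_build_linux_arm_inference.sh
```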
### Cross build for iOS
Now we only support cross building to iOS from macOS
* commands:
```
1: install the full xcode: https://developer.apple.com/xcode/
```
# How to build
## With bash env (Linux/macOS/Windows-git-bash)
* host build: just use the script scripts/cmake-build/host_build.sh, which
builds MegBrain (MegEngine) to run on the same host machine (i.e., no cross compiling).
The following command displays the usage:
```
scripts/cmake-build/host_build.sh -h
```
more examples:
```
1: build for Windows XP (sp3):
   (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m -d
   (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m
2: build for Windows XP (sp2):
   (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m -d
   (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m
```
* cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
builds MegBrain (MegEngine) for inference on Android-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_android_arm_inference.sh -h
```
* cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
builds MegBrain (MegEngine) for inference on Linux-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```
* cross build to iOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
builds MegBrain (MegEngine) for inference on iOS (iPhone/iPad) platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_ios_arm_inference.sh -h
```
## Visual Studio GUI (only for Windows host)
* commands:
```
1: import the MegEngine src into Visual Studio as a project
2: right click CMakeLists.txt, choose 'cmake config', then choose clang_cl_x86 or clang_cl_x64
3: config other CMake options, e.g., CUDA ON or OFF
```
## Other ARM-Linux-like board support
It's easy to support other customized ARM-Linux-like boards, for example:
* 1: HISI 3516/3519: in fact you can just use toolchains from ARM developer or Linaro,
then call scripts/cmake-build/cross_build_linux_arm_inference.sh to build an ELF
binary; or, if you have the HISI official toolchain, you just need to modify CMAKE_CXX_COMPILER
and CMAKE_C_COMPILER in toolchains/arm-linux-gnueabi* to the real names, as sketched below
* 2: for Raspberry Pi, just use scripts/cmake-build/cross_build_linux_arm_inference.sh
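A minimal sketch of the HISI toolchain edit (the compiler names are hypothetical, so substitute the ones in your vendor toolchain's bin/ dir; the sed patterns assume the usual `set(CMAKE_C_COMPILER ...)` lines, so edit the files by hand if they differ):
```bash
# Rewrite the compiler entries in the toolchain files, then cross build.
# arm-hisiv300-linux-gcc/g++ are hypothetical vendor compiler names.
sed -i \
    -e 's|^set(CMAKE_C_COMPILER .*|set(CMAKE_C_COMPILER "arm-hisiv300-linux-gcc")|' \
    -e 's|^set(CMAKE_CXX_COMPILER .*|set(CMAKE_CXX_COMPILER "arm-hisiv300-linux-g++")|' \
    toolchains/arm-linux-gnueabi*
./scripts/cmake-build/cross_build_linux_arm_inference.sh
```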
## About build args
All `scripts/cmake-build/*.sh` scripts support `EXTRA_CMAKE_ARGS` to config more options
* get the supported options with `-l`, for example: `scripts/cmake-build/cross_build_android_arm_inference.sh -l`
* CMake supports the `Release`, `Debug`, and `RelWithDebInfo` build types. The default build type of all `scripts/cmake-build/*.sh` scripts is `Release`; a `Debug` build can be done with `-d`. If you want to build with `RelWithDebInfo`, config it with `EXTRA_CMAKE_ARGS`, for example: `EXTRA_CMAKE_ARGS="-DCMAKE_BUILD_TYPE=RelWithDebInfo" ./scripts/cmake-build/host_build.sh`. Notice: when building `Release`, we disable some build components: `RTTI`, `MGB_ASSERT_LOC`, and `MGB_ENABLE_DEBUG_UTIL`
* CMake builds all targets by default; if you just want to build a specified target, build with `-e xxxx`, for example, to build only `lite_shared`: `./scripts/cmake-build/cross_build_android_arm_inference.sh -e lite_shared`. Notice: with `-e`, the target is not stripped, which is intended for debugging; strip the target manually if needed
* for other build flags, please run with `-h`; a combined example is shown below
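The flags and `EXTRA_CMAKE_ARGS` compose; a sketch (this particular combination is illustrative, not taken from the examples above):
```bash
# RelWithDebInfo build of only the lite_shared target for ARM-Android
EXTRA_CMAKE_ARGS="-DCMAKE_BUILD_TYPE=RelWithDebInfo" \
    ./scripts/cmake-build/cross_build_android_arm_inference.sh -e lite_shared
```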