# Build support status
## Host build
* Windows build (cpu and gpu)
* Linux build (cpu and gpu)
* MacOS build (cpu only)
## Cross build
* Windows cross build ARM-Android (ok)
* Windows cross build ARM-Linux (ok)
* Linux cross build ARM-Android (ok)
* Linux cross build ARM-Linux (ok)
* MacOS cross build ARM-Android (ok)
* MacOS cross build ARM-Linux (ok but experimental)
* MacOS cross build iOS (ok)
# Build env prepare
## Prerequisites
Most of the dependencies of MegBrain(MegEngine) are located in the [third_party](../../third_party) directory, which can be prepared by executing:
```bash
./third_party/prepare.sh
./third_party/install-mkl.sh
```
Windows shell env (bash from windows-git): in fact, if you can use the git command on Windows, bash.exe is installed in the same dir as git.exe; find it, then you can prepare the third-party code by
* command:
```
bash.exe ./third_party/prepare.sh
bash.exe ./third_party/install-mkl.sh

if you use github MegEngine and build for Windows XP, please:
1: download MKL for XP from: http://registrationcenter-download.intel.com/akdlm/irc_nas/4617/w_mkl_11.1.4.237.exe
2: install the exe, then from the install dir:
2a: copy the include files to third_party/mkl/x86_32/include/
2b: copy the lib files to third_party/mkl/x86_32/lib/
```
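For the XP MKL steps above, here is a minimal git-bash sketch; the MKL install dir below is a placeholder (the real dir depends on your installer choices), so adjust MKL_INSTALL_DIR accordingly:
```bash
# hypothetical install dir of w_mkl_11.1.4.237.exe -- adjust to your machine
MKL_INSTALL_DIR="/c/Program Files (x86)/Intel/Composer XE/mkl"
# 2a/2b: copy headers and 32-bit libs into the third_party layout
mkdir -p third_party/mkl/x86_32/include third_party/mkl/x86_32/lib
cp -r "${MKL_INSTALL_DIR}/include/." third_party/mkl/x86_32/include/
cp -r "${MKL_INSTALL_DIR}/lib/ia32/." third_party/mkl/x86_32/lib/
```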
But some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive)(>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn)(>=7.6) are required when building MegBrain with CUDA support.
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html)(>=5.1.5) is required when building with TensorRT support.
* LLVM/Clang(>=6.0) is required when building with Halide JIT support.
* Python(>=3.5) and numpy are required to build Python modules.
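A quick way to sanity-check these prerequisites before building (this sketch assumes all the tools are already on PATH):
```bash
# verify the manual prerequisites listed above
cmake --version     # the build scripts below expect >= 3.15.2
python3 --version   # >= 3.5 for the Python modules
python3 -c "import numpy; print(numpy.__version__)"
nvcc --version      # CUDA >= 10.1, only needed for CUDA builds
clang --version     # LLVM/Clang >= 6.0, only needed for Halide JIT builds
```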
## Package install
### Windows host build
* commands:
```
1: install git (Windows GUI)
* download git-install.exe from https://git-scm.com/download/win
* only the git-lfs component needs to be chosen
* install to the default dir: /c/Program\ Files/Git
2: install Visual Studio 2019 Enterprise (Windows GUI)
* download the install exe from https://visualstudio.microsoft.com
* choose "c++ develop" -> choose cmake/MSVC/windows-sdk when installing
* NOTICE: MSVC version >= 14.28.29910 is not compatible with CUDA 10.1, please
choose a version < 14.28.29910
* then install the chosen components
3: install LLVM from https://releases.llvm.org/download.html (Windows GUI)
* the LLVM installed by Visual Studio has some issues, e.g., link crashes on large projects, please use the official version
* download the install exe from https://releases.llvm.org/download.html
* our CI uses LLVM 12.0.1; if you install another version, please modify LLVM_PATH
* install 12.0.1 to /c/Program\ Files/LLVM_12_0_1
4: install python3 (Windows GUI)
* download the python 64-bit install exes (we support python3.5-python3.8 now)
https://www.python.org/ftp/python/3.5.4/python-3.5.4-amd64.exe
https://www.python.org/ftp/python/3.6.8/python-3.6.8-amd64.exe
https://www.python.org/ftp/python/3.7.7/python-3.7.7-amd64.exe
https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe
* install 3.5.4 to /c/Users/${USER}/mge_whl_python_env/3.5.4
* install 3.6.8 to /c/Users/${USER}/mge_whl_python_env/3.6.8
* install 3.7.7 to /c/Users/${USER}/mge_whl_python_env/3.7.7
* install 3.8.3 to /c/Users/${USER}/mge_whl_python_env/3.8.3
* copy python.exe to python3.exe
for each dir in /c/Users/${USER}/mge_whl_python_env/*:
copy python.exe to python3.exe
* install the python dependency components
for each dir in /c/Users/${USER}/mge_whl_python_env/*:
python3.exe -m pip install --upgrade pip
python3.exe -m pip install -r imperative/python/requires.txt
python3.exe -m pip install -r imperative/python/requires-test.txt
5: install cuda components (Windows GUI)
* now we support cuda10.1+cudnn7.6+TensorRT6.0 on Windows
* install cuda10.1 to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
* install cudnn7.6 to C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32
* install TensorRT6.0 to C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-6.0.1.5
6: edit system env variables (Windows GUI)
* create new key: "VS_PATH", value: "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise"
* create new key: "LLVM_PATH", value: "C:\Program Files\LLVM_12_0_1"
* append to the "Path" env value:
C:\Program Files\Git\cmd
C:\Users\build\mge_whl_python_env\3.8.3
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32\cuda\bin
C:\Program Files\LLVM_12_0_1\lib\clang\12.0.1\lib\windows
```
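As an alternative to the GUI for step 6, a sketch using setx from git-bash, assuming the default install locations used above; note that setx rewrites the whole value and truncates anything over 1024 characters, so the long "Path" list is safer to edit in the GUI:
```bash
# create the two keys from step 6 (paths assume the default install dirs)
setx VS_PATH "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise"
setx LLVM_PATH "C:\Program Files\LLVM_12_0_1"
# append the "Path" entries via the GUI editor as listed above
```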
### Linux host build
* commands:
```
1: install CMake (version >= 3.15.2) and ninja-build
2: install gcc/g++ (version >= 6; gcc/g++ >= 7 if you need to build training mode)
3: install build-essential git git-lfs gfortran libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
4: CUDA env (if building with CUDA): please export the CUDA/CUDNN/TRT env, for example:
export CUDA_ROOT_DIR=/path/to/cuda
export CUDNN_ROOT_DIR=/path/to/cudnn
export TRT_ROOT_DIR=/path/to/tensorrt
```
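For steps 1-3 on a Debian/Ubuntu-style distro (an assumption; package names differ on other distros), a one-shot sketch:
```bash
# assumes Debian/Ubuntu; verify cmake >= 3.15.2 afterwards (use a newer
# repo or build cmake from source if the distro version is too old)
sudo apt update
sudo apt install -y cmake ninja-build build-essential git git-lfs gfortran \
    libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev \
    gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc \
    unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
cmake --version
```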
### MacOS host build
* commands:
```
1: install CMake (version >= 3.15.2)
2: install brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
3: brew install python python3 coreutils ninja
4: install at least the xcode command line tools: https://developer.apple.com/xcode/
5: about cuda: we do not support CUDA on MacOS
6: python3 -m pip install numpy (if you want to build with training mode)
```
### Cross build for ARM-Android
Now we support Windows/Linux/MacOS cross build to ARM-Android.
* commands:
```
1: download the NDK from https://developer.android.google.cn/ndk/downloads/ (there are packages for each OS platform); NDK20 or NDK21 is suggested
2: export NDK_ROOT=NDK_DIR in a bash-like env
```
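Putting the two steps together in a bash-like env (the NDK path below is a placeholder for wherever you unpacked it):
```bash
# placeholder path -- point at your actual NDK20/NDK21 unpack dir
export NDK_ROOT=/path/to/android-ndk-r21e
# then run the ARM-Android cross build script (see "How to build" below)
./scripts/cmake-build/cross_build_android_arm_inference.sh -h
```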
### Cross build for ARM-Linux
Now we fully support ARM-Linux on Linux and Windows; support on MacOS is experimental.
* commands:
```
1: download toolchains from http://releases.linaro.org/components/toolchain/binaries/ or https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads if you use Windows or Linux
2: download toolchains from https://github.com/thinkski/osx-arm-linux-toolchains if you use MacOS
```
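A sketch of wiring the downloaded toolchain in; the path and the gnueabi/gnueabihf triple below are assumptions, so match them to the toolchain you actually downloaded:
```bash
# placeholder unpack dir of the linaro / arm gnu-a toolchain
export PATH=/path/to/gcc-arm-linux-gnueabihf/bin:$PATH
arm-linux-gnueabihf-gcc --version   # confirm the cross compiler is visible
# then run the ARM-Linux cross build script (see "How to build" below)
./scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```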
### Cross build for iOS
Now we only support cross build to iOS from MacOS.
* commands:
```
1: install the full xcode: https://developer.apple.com/xcode/
```
# How to build
## With bash env (Linux/MacOS/Windows-git-bash)
* host build: just use the script scripts/cmake-build/host_build.sh, which
builds MegBrain(MegEngine) to run on the same host machine (i.e., no cross compiling).
The following command displays the usage:
```
scripts/cmake-build/host_build.sh -h

more examples:
1a: build for Windows XP (sp3): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m -d
                                (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m
2a: build for Windows XP (sp2): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m -d
                                (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m
```
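Beyond the XP examples, other CMake options can be injected through EXTRA_CMAKE_ARGS the same way; a hedged sketch follows (MGE_WITH_CUDA is assumed to be the CUDA switch here; confirm it against the top-level CMakeLists.txt and the -h output):
```bash
./scripts/cmake-build/host_build.sh -d   # debug host build
./scripts/cmake-build/host_build.sh      # optimized host build
# assumption: MGE_WITH_CUDA toggles the CUDA backend
EXTRA_CMAKE_ARGS="-DMGE_WITH_CUDA=ON" ./scripts/cmake-build/host_build.sh
```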
* cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
builds MegBrain(MegEngine) for inference on Android-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_android_arm_inference.sh -h
```
* cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
builds MegBrain(MegEngine) for inference on Linux-ARM platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```
* cross build to iOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
builds MegBrain(MegEngine) for inference on iOS (iPhone/iPad) platforms.
The following command displays the usage:
```
scripts/cmake-build/cross_build_ios_arm_inference.sh -h
```
## Visual Studio GUI (only for Windows host)
* commands:
```
1: import the megengine src into Visual Studio as a project
2: right click CMakeLists.txt, choose 'cmake config', then choose clang_cl_x86 or clang_cl_x64
3: config other CMAKE options, e.g., CUDA ON or OFF
```
# Other ARM-Linux-Like board support
It's easy to support other customized arm-linux-like boards, for example:
* 1: HISI 3516/3519: in fact you can just use the toolchains from arm developer or linaro,
then call scripts/cmake-build/cross_build_linux_arm_inference.sh to build an ELF
binary; or, if you get the HISI official toolchain, you just need to modify CMAKE_CXX_COMPILER
and CMAKE_C_COMPILER in toolchains/arm-linux-gnueabi* to the real compiler names, as in the sketch below
* 2: for Raspberry Pi, just use scripts/cmake-build/cross_build_linux_arm_inference.sh
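As a concrete illustration of point 1, a sketch of pointing the toolchain files at a vendor compiler; the arm-himix200-linux-* names are only an example, so substitute the names your HISI toolchain actually ships:
```bash
# rewrite CMAKE_C_COMPILER / CMAKE_CXX_COMPILER in the toolchain files;
# arm-himix200-linux-* is an example prefix, not a guaranteed one
sed -i 's/arm-linux-gnueabi-gcc/arm-himix200-linux-gcc/' toolchains/arm-linux-gnueabi*
sed -i 's/arm-linux-gnueabi-g++/arm-himix200-linux-g++/' toolchains/arm-linux-gnueabi*
./scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```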