

# Build support status
## Host build
* Windows build (CPU and GPU)
* Linux build (CPU and GPU)
* MacOS build (CPU only)
## Cross build
* Windows cross build to ARM-Android (ok)
* Windows cross build to ARM-Linux (ok)
* Linux cross build to ARM-Android (ok)
* Linux cross build to ARM-Linux (ok)
* MacOS cross build to ARM-Android (ok)
* MacOS cross build to ARM-Linux (ok, but experimental)
* MacOS cross build to iOS (ok)
# Build env prepare
## Package install
### Windows host build
* commands:
```
1: install git (Windows GUI)
    * download git-install.exe from https://git-scm.com/download/win
    * only the git-lfs component needs to be selected
    * install to the default dir: /c/Program\ Files/Git
2: install Visual Studio 2019 Enterprise (Windows GUI)
    * download the installer from https://visualstudio.microsoft.com
    * choose "C++ development" -> select the CMake/MSVC/Windows SDK components during install
    * NOTICE: Windows SDK versions >= 14.28.29910 are not compatible with CUDA 10.1,
      please choose a version < 14.28.29910
    * then install the chosen components
3: install LLVM from https://releases.llvm.org/download.html (Windows GUI)
    * the LLVM bundled with Visual Studio has issues (e.g. linker crashes on large projects), please use the official release
    * download the installer from https://releases.llvm.org/download.html
    * our CI uses LLVM 12.0.1; if you install another version, please modify LLVM_PATH accordingly
    * install 12.0.1 to /c/Program\ Files/LLVM_12_0_1
4: install python3 (Windows GUI)
    * download the 64-bit Python installers (python3.5-python3.8 are supported now)
      https://www.python.org/ftp/python/3.5.4/python-3.5.4-amd64.exe
      https://www.python.org/ftp/python/3.6.8/python-3.6.8-amd64.exe
      https://www.python.org/ftp/python/3.7.7/python-3.7.7-amd64.exe
      https://www.python.org/ftp/python/3.8.3/python-3.8.3-amd64.exe
    * install 3.5.4 to /c/Users/${USER}/mge_whl_python_env/3.5.4
    * install 3.6.8 to /c/Users/${USER}/mge_whl_python_env/3.6.8
    * install 3.7.7 to /c/Users/${USER}/mge_whl_python_env/3.7.7
    * install 3.8.3 to /c/Users/${USER}/mge_whl_python_env/3.8.3
    * copy python.exe to python3.exe in each env (see the sketch after this block):
      for each dir in /c/Users/${USER}/mge_whl_python_env/*
          copy python.exe to python3.exe
    * install the Python dependencies in each env:
      for each dir in /c/Users/${USER}/mge_whl_python_env/*
          python3.exe -m pip install --upgrade pip
          python3.exe -m pip install -r imperative/python/requires.txt
          python3.exe -m pip install -r imperative/python/requires-test.txt
5: install CUDA components (Windows GUI)
    * cuda10.1 + cudnn7.6 + TensorRT6.0 is the combination currently supported on Windows
    * install cuda10.1 to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    * install cudnn7.6 to C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32
    * install TensorRT6.0 to C:\Program Files\NVIDIA GPU Computing Toolkit\TensorRT-6.0.1.5
6: edit system env variables (Windows GUI)
    * create a new key "VS_PATH" with value "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise"
    * create a new key "LLVM_PATH" with value "C:\Program Files\LLVM_12_0_1"
    * append the following to the "Path" value:
      C:\Program Files\Git\cmd
      C:\Users\build\mge_whl_python_env\3.8.3
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp
      C:\Program Files\NVIDIA GPU Computing Toolkit\cudnn-10.1-windows10-x64-v7.6.5.32\cuda\bin
      C:\Program Files\LLVM_12_0_1\lib\clang\12.0.1\lib\windows
```
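The python.exe copy and pip install parts of step 4 can be scripted from a git-bash prompt. A minimal sketch, run from the MegEngine source root and assuming the four interpreters were installed to the /c/Users/${USER}/mge_whl_python_env/* dirs listed above:
```
for d in /c/Users/${USER}/mge_whl_python_env/*; do
    cp "$d/python.exe" "$d/python3.exe"
    "$d/python3.exe" -m pip install --upgrade pip
    "$d/python3.exe" -m pip install -r imperative/python/requires.txt
    "$d/python3.exe" -m pip install -r imperative/python/requires-test.txt
done
```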
### Linux host build
* commands:
```
1: install CMake (version >= 3.15.2) and ninja-build
2: install gcc/g++ (version >= 6; gcc/g++ >= 7 is needed to build the training mode)
3: install build-essential git git-lfs gfortran libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo (see the sketch after this block)
4: prepare the CUDA env (if CUDA is enabled); refer to README.md for version details
```
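A minimal sketch of steps 1-3 for a Debian/Ubuntu host; the package names are taken from the list above, and on other distributions the names (and whether the distro CMake is new enough) will differ:
```
sudo apt-get update
sudo apt-get install -y cmake ninja-build gcc g++ build-essential git git-lfs gfortran \
    libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib \
    g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev \
    rdmacm-utils python3-dev python3-numpy texinfo
cmake --version   # make sure this reports >= 3.15.2
gcc --version     # make sure this reports >= 6 (>= 7 for training mode)
```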
### MacOS host build
* commands:
```
1: install CMake (version >= 3.15.2)
2: install brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
3: brew install python python3 coreutils ninja
4: install at least the Xcode command line tools: https://developer.apple.com/xcode/
5: about CUDA: CUDA is not supported on MacOS
6: python3 -m pip install numpy (only if you want to build with training mode)
```
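The steps above can be run in one pass as sketched below; getting CMake via brew and using xcode-select --install for the command line tools are assumptions for steps 1 and 4, any other way of obtaining CMake >= 3.15.2 and the tools works just as well:
```
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew install cmake python python3 coreutils ninja
xcode-select --install            # command line tools; the full Xcode is only needed for iOS cross builds
python3 -m pip install numpy      # only for training mode builds
```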
### Cross build for ARM-Android
Now we support cross building to ARM-Android from Windows/Linux/MacOS.
* commands:
```
1: download the NDK package for your OS platform from https://developer.android.google.cn/ndk/downloads/; NDK20 or NDK21 is suggested
2: export NDK_ROOT=NDK_DIR in a bash-like env (see the sketch after this block)
```
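For example, assuming the NDK was unpacked to ~/android-ndk-r21 (the path is illustrative, use wherever you extracted it):
```
export NDK_ROOT=~/android-ndk-r21
ls $NDK_ROOT/build/cmake/android.toolchain.cmake   # quick check that NDK_ROOT points at a real NDK
```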
### Cross build for ARM-Linux
Now we fully support cross building to ARM-Linux from Linux and Windows; MacOS support is experimental.
* commands:
```
1: if using Windows or Linux, download a toolchain from http://releases.linaro.org/components/toolchain/binaries/ or https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads (see the sketch after this block)
2: if using MacOS, download a toolchain from https://github.com/thinkski/osx-arm-linux-toolchains
```
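A minimal sketch of unpacking a Linaro toolchain and putting it on the PATH; the release name below is illustrative, use whichever tarball you downloaded:
```
tar xf gcc-linaro-7.5.0-2019.12-x86_64_arm-linux-gnueabi.tar.xz -C /opt
export PATH=/opt/gcc-linaro-7.5.0-2019.12-x86_64_arm-linux-gnueabi/bin:$PATH
arm-linux-gnueabi-gcc --version   # sanity check
```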
### Cross build for iOS
Now we only support cross building to iOS from MacOS.
* commands:
```
1: install the full Xcode: https://developer.apple.com/xcode/
```
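A quick way to check that the full Xcode (not just the command line tools) is the active developer directory:
```
xcode-select -p      # should point into /Applications/Xcode.app, not the CommandLineTools dir
xcodebuild -version
```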
## Third-party code prepare
With a bash env (Linux/MacOS/Unix-like tools on Windows, e.g. msys):
* commands:
```
./third_party/prepare.sh
./third_party/install-mkl.sh
```
With a Windows shell env (the bash shipped with Git for Windows): in fact, if you can use the git command on Windows, bash.exe is installed in the same dir as git.exe; find it, then you can prepare the third-party code as follows (a full-path example is sketched after this block):
* command:
```
bash.exe ./third_party/prepare.sh
bash.exe ./third_party/install-mkl.sh
```
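From a cmd.exe prompt the same scripts can be invoked through the full path of bash.exe; assuming Git was installed to the default dir from the Windows section above:
```
"C:\Program Files\Git\bin\bash.exe" ./third_party/prepare.sh
"C:\Program Files\Git\bin\bash.exe" ./third_party/install-mkl.sh
```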
# How to build
## With bash env (Linux/MacOS/Windows-git-bash)
* command:
```
1: host build: just use the script scripts/cmake-build/host_build.sh
2: cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
3: cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
4: cross build to iOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
```
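For example, run from the MegEngine source root in a bash prompt (the cross build scripts expect the env prepared in the sections above, e.g. NDK_ROOT for ARM-Android):
```
./scripts/cmake-build/host_build.sh
./scripts/cmake-build/cross_build_android_arm_inference.sh
```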
## Visual Studio GUI (only for Windows host)
* command:
```
1: import the MegEngine source into Visual Studio as a project
2: right-click CMakeLists.txt, choose 'CMake config', then choose clang_cl_x86 or clang_cl_x64
3: configure other CMake options as needed, e.g. CUDA ON or OFF
```
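If you prefer a terminal over the GUI, a rough equivalent of the configuration step is sketched below; the option name MGE_WITH_CUDA and the Ninja + clang-cl combination here are assumptions for illustration only, the GUI configs above and the build scripts remain the documented path:
```
cmake -S . -B build -G Ninja -DCMAKE_C_COMPILER=clang-cl -DCMAKE_CXX_COMPILER=clang-cl -DMGE_WITH_CUDA=OFF
cmake --build build
```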
# Other ARM-Linux-Like board support
It's easy to support other customized ARM-Linux-like boards. Examples:
* 1: HISI 3516/3519: in fact you can just use a toolchain from ARM developer or Linaro,
then call scripts/cmake-build/cross_build_linux_arm_inference.sh to build an ELF
binary; or, if you have the HISI official toolchain, you only need to modify CMAKE_CXX_COMPILER
and CMAKE_C_COMPILER in toolchains/arm-linux-gnueabi* to the real compiler names (see the sketch after this list)
* 2: for Raspberry Pi, just use scripts/cmake-build/cross_build_linux_arm_inference.sh
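A minimal sketch of pointing the toolchain file at a vendor compiler; it assumes the file currently names arm-linux-gnueabi-gcc/g++, and the HISI compiler prefix below is illustrative, substitute the one shipped with your SDK:
```
grep -n "CMAKE_C_COMPILER\|CMAKE_CXX_COMPILER" toolchains/arm-linux-gnueabi*
sed -i 's/arm-linux-gnueabi-gcc/arm-hisiv300-linux-gcc/' toolchains/arm-linux-gnueabi*
sed -i 's/arm-linux-gnueabi-g++/arm-hisiv300-linux-g++/' toolchains/arm-linux-gnueabi*
```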
