
docs(readme): update build md

GitOrigin-RevId: 279a202eb2
release-1.6
Megvii Engine Team 3 years ago
parent commit 4ef911361f
5 changed files with 73 additions and 46 deletions
  1. cmake/cudnn.cmake (+2 -2)
  2. cmake/tensorrt.cmake (+2 -2)
  3. scripts/cmake-build/BUILD_README.md (+56 -34)
  4. scripts/whl/BUILD_PYTHON_WHL_README.md (+6 -7)
  5. scripts/whl/manylinux2014/build_wheel_common.sh (+7 -1)

cmake/cudnn.cmake (+2 -2)

@@ -24,7 +24,7 @@ else()
 endif()
 
 if(CUDNN_LIBRARY STREQUAL "CUDNN_LIBRARY-NOTFOUND")
-  message(FATAL_ERROR "Can not find CuDNN Library")
+  message(FATAL_ERROR "Can not find CuDNN Library, please refer to scripts/cmake-build/BUILD_README.md to init CUDNN env")
 endif()
 
 get_filename_component(__found_cudnn_root ${CUDNN_LIBRARY}/../.. REALPATH)
@@ -35,7 +35,7 @@ find_path(CUDNN_INCLUDE_DIR
   DOC "Path to CUDNN include directory." )
 
 if(CUDNN_INCLUDE_DIR STREQUAL "CUDNN_INCLUDE_DIR-NOTFOUND")
-  message(FATAL_ERROR "Can not find CuDNN Library")
+  message(FATAL_ERROR "Can not find CuDNN INCLUDE, please refer to scripts/cmake-build/BUILD_README.md to init CUDNN env")
 endif()
 
 if(EXISTS ${CUDNN_INCLUDE_DIR}/cudnn_version.h)


cmake/tensorrt.cmake (+2 -2)

@@ -19,7 +19,7 @@ else()
 endif()
 
 if(TRT_LIBRARY STREQUAL "TRT_LIBRARY-NOTFOUND")
-  message(FATAL_ERROR "Can not find TensorRT Library")
+  message(FATAL_ERROR "Can not find TensorRT Library, please refer to scripts/cmake-build/BUILD_README.md to init TRT env")
 endif()
 
 get_filename_component(__found_trt_root ${TRT_LIBRARY}/../.. REALPATH)
@@ -30,7 +30,7 @@ find_path(TRT_INCLUDE_DIR
   DOC "Path to TRT include directory." )
 
 if(TRT_INCLUDE_DIR STREQUAL "TRT_INCLUDE_DIR-NOTFOUND")
-  message(FATAL_ERROR "Can not find TensorRT Library")
+  message(FATAL_ERROR "Can not find TensorRT INCLUDE, please refer to scripts/cmake-build/BUILD_README.md to init TRT env")
 endif()
 
 file(STRINGS "${TRT_INCLUDE_DIR}/NvInfer.h" TensorRT_MAJOR REGEX "^#define NV_TENSORRT_MAJOR [0-9]+.*$")
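Both error messages now point readers to BUILD_README.md; what that document asks for boils down to exporting root-directory variables before running CMake. A minimal sketch, with placeholder paths (not defaults shipped by the project):

```shell
# Hypothetical install locations -- adjust to wherever cuDNN/TensorRT
# are actually unpacked on your machine.
export CUDNN_ROOT_DIR=/usr/local/cudnn
export TRT_ROOT_DIR=/usr/local/tensorrt

# Echo them back so a broken setup is visible before cmake runs.
echo "CUDNN_ROOT_DIR=${CUDNN_ROOT_DIR}"
echo "TRT_ROOT_DIR=${TRT_ROOT_DIR}"
```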


scripts/cmake-build/BUILD_README.md (+56 -34)

@@ -14,6 +14,33 @@
 * MacOS cross build IOS (ok)
 
 # Build env prepare
+## Prerequisites
+
+Most of the dependencies of MegBrain(MegEngine) are located in the [third_party](third_party) directory, which can be prepared by executing:
+
+```bash
+./third_party/prepare.sh
+./third_party/install-mkl.sh
+```
+Windows shell env (bash from windows-git): if you can run the git command on Windows, bash.exe is installed in the same dir as git.exe; find it, then you can prepare the third-party code with:
+
+* command:
+```
+bash.exe ./third_party/prepare.sh
+bash.exe ./third_party/install-mkl.sh
+if you use github MegEngine and build for Windows XP, please:
+1: download mkl for xp from: http://registrationcenter-download.intel.com/akdlm/irc_nas/4617/w_mkl_11.1.4.237.exe
+2: install the exe, then from the install dir:
+2a: cp include files to third_party/mkl/x86_32/include/
+2b: cp lib files to third_party/mkl/x86_32/lib/
+```
+
+But some dependencies need to be installed manually:
+
+* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive)(>=10.1), [cuDNN](https://developer.nvidia.com/cudnn)(>=7.6) are required when building MegBrain with CUDA support.
+* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html)(>=5.1.5) is required when building with TensorRT support.
+* LLVM/Clang(>=6.0) is required when building with Halide JIT support.
+* Python(>=3.5) and numpy are required to build Python modules.
 ## Package install
 ### Windows host build
 * commands:
@@ -74,11 +101,10 @@
 1: install Cmake, which version >= 3.15.2, ninja-build
 2: install gcc/g++, which version >= 6, (gcc/g++ >= 7, if need build training mode)
 3: install build-essential git git-lfs gfortran libgfortran-6-dev autoconf gnupg flex bison gperf curl zlib1g-dev gcc-multilib g++-multilib lib32ncurses5-dev libxml2-utils xsltproc unzip libtool librdmacm-dev rdmacm-utils python3-dev python3-numpy texinfo
-4: CUDA env(if enable CUDA), version detail refer to README.md
-recommend set env about cuda/cudnn/tensorrt as followed:
-export CUDA_ROOT_DIR=/path/to/cuda/lib
-export CUDNN_ROOT_DIR=/path/to/cudnn/lib
-export TRT_ROOT_DIR=/path/to/tensorrt/lib
+4: CUDA env (if building with CUDA), please export the CUDA/CUDNN/TRT env, for example:
+export CUDA_ROOT_DIR=/path/to/cuda
+export CUDNN_ROOT_DIR=/path/to/cudnn
+export TRT_ROOT_DIR=/path/to/tensorrt
 ```
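The three exports in step 4 are consumed later by the project's cmake find scripts; a quick sanity check that they are all set before building can be sketched as follows (the paths are placeholders):

```shell
# Placeholder paths; substitute your real install locations.
export CUDA_ROOT_DIR=/usr/local/cuda
export CUDNN_ROOT_DIR=/usr/local/cudnn
export TRT_ROOT_DIR=/usr/local/tensorrt

# Fail fast if any of them ended up empty.
for v in CUDA_ROOT_DIR CUDNN_ROOT_DIR TRT_ROOT_DIR; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
        echo "$v is not set" >&2
        exit 1
    fi
    echo "$v=$val"
done
```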

### MacOS host build
@@ -118,42 +144,38 @@ Now we only support cross build to IOS from MACOS
 1: install full xcode: https://developer.apple.com/xcode/
 ```
 
-## Third-party code prepare
-With bash env(Linux/MacOS/Unix-Like tools on Windows, eg: msys etc)
-
-* commands:
-```
-./third_party/prepare.sh
-./third_party/install-mkl.sh
-```
-
-Windows shell env(bash from windows-git), infact if you can use git command on Windows, which means you always install bash.exe at the same dir of git.exe, find it, then you can prepare third-party code by
-
-* command:
-```
-bash.exe ./third_party/prepare.sh
-bash.exe ./third_party/install-mkl.sh
-if you are use github MegEngine and build for Windows XP, please
-1: donwload mkl for xp from: http://registrationcenter-download.intel.com/akdlm/irc_nas/4617/w_mkl_11.1.4.237.exe
-2: install exe, then from install dir:
-2a: cp include file to third_party/mkl/x86_32/include/
-2b: cp lib file to third_party/mkl/x86_32/lib/
-```
 
 # How to build
 ## With bash env(Linux/MacOS/Windows-git-bash)
 
-* command:
-```
-1: host build just use scripts:scripts/cmake-build/host_build.sh
+* host build: just use scripts/cmake-build/host_build.sh, which
+builds MegBrain(MegEngine) that runs on the same host machine (i.e., no cross compiling).
+The following command displays the usage:
+```
+scripts/cmake-build/host_build.sh -h
+more examples:
+1a: build for Windows XP (sp3): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m -d
+    (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON" ./scripts/cmake-build/host_build.sh -m
+2a: build for Windows XP (sp2): (dbg) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m -d
+    (opt) EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP_SP2=ON" ./scripts/cmake-build/host_build.sh -m
-2: cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
-3: cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
-4: cross build to IOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
-```
+```
+* cross build to ARM-Android: scripts/cmake-build/cross_build_android_arm_inference.sh
+builds MegBrain(MegEngine) for inference on Android-ARM platforms.
+The following command displays the usage:
+```
+scripts/cmake-build/cross_build_android_arm_inference.sh -h
+```
+* cross build to ARM-Linux: scripts/cmake-build/cross_build_linux_arm_inference.sh
+builds MegBrain(MegEngine) for inference on Linux-ARM platforms.
+The following command displays the usage:
+```
+scripts/cmake-build/cross_build_linux_arm_inference.sh -h
+```
+* cross build to IOS: scripts/cmake-build/cross_build_ios_arm_inference.sh
+builds MegBrain(MegEngine) for inference on iOS (iPhone/iPad) platforms.
+The following command displays the usage:
+```
+scripts/cmake-build/cross_build_ios_arm_inference.sh -h
+```
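The (dbg)/(opt) pairs above differ only in the `-d` flag, and the XP-specific options travel through the `EXTRA_CMAKE_ARGS` environment variable. How host_build.sh splices that variable into its cmake invocation is internal to the script; the sketch below only mimics the composition pattern:

```shell
# Mimic passing extra cmake flags via the environment
# (the real splice happens inside scripts/cmake-build/host_build.sh).
EXTRA_CMAKE_ARGS="-DMGE_DEPLOY_INFERENCE_ON_WINDOWS_XP=ON"
BUILD_TYPE_FLAG="-d"   # -d selects a debug build; omit it for the optimized build

echo "EXTRA_CMAKE_ARGS=${EXTRA_CMAKE_ARGS} ./scripts/cmake-build/host_build.sh -m ${BUILD_TYPE_FLAG}"
```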

## Visual Studio GUI(only for Windows host)



scripts/whl/BUILD_PYTHON_WHL_README.md (+6 -7)

@@ -6,8 +6,9 @@
 # Build env prepare
 ## Linux
 
+* refer to [BUILD_README.md](scripts/cmake-build/BUILD_README.md) Linux host build (CUDA env) section to init the CUDA environment
 ```bash
-1: please refer to: https://docs.docker.com/engine/security/rootless/ to enable rootless docker env
+1: please refer to https://docs.docker.com/engine/security/rootless/ to enable rootless docker env
 2: cd ./scripts/whl/manylinux2014
 3: ./build_image.sh
 4: as aarch64-linux python3.5 pip do not provide megengine depends prebuild binary package, which definition
@@ -23,24 +24,22 @@
 ```
 
 ## MacOS
+* refer to [BUILD_README.md](scripts/cmake-build/BUILD_README.md) MacOS section to init the base build environment
 * init other wheel build depends env by command:
 ```bash
 ./scripts/whl/macos/macos_whl_env_prepare.sh
 ```
 
 ## Windows
-```
-1: refer to scripts/cmake-build/BUILD_README.md Windows section
-```
+* refer to [BUILD_README.md](scripts/cmake-build/BUILD_README.md) Windows section to init the base build environment
 
 # How to build
 Note: make sure the git repo is mounted in the docker container; do not use `git submodule update --init` to init the project repo
 ## Build for linux
 * This Project delivers `wheel` packages with the `manylinux2014` tag defined in [PEP-599](https://www.python.org/dev/peps/pep-0599/).
 
 commands:
 ```bash
 export CUDA_ROOT_DIR=/path/to/cuda
 export CUDNN_ROOT_DIR=/path/to/cudnn
 export TENSORRT_ROOT_DIR=/path/to/tensorrt
 ./scripts/whl/manylinux2014/build_wheel_common.sh -sdk cu101
 ```
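The resulting wheels should carry the `manylinux2014` platform tag in their filename. A small sketch of checking a wheel name for that tag (the filename below is a made-up example, not a real build artifact):

```shell
# Hypothetical wheel name; real names depend on the MegEngine/Python versions built.
WHEEL="MegEngine-1.6.0-cp38-cp38-manylinux2014_x86_64.whl"

case "$WHEEL" in
    *manylinux2014*) echo "ok: manylinux2014 tag present" ;;
    *) echo "unexpected platform tag: $WHEEL" >&2; exit 1 ;;
esac
```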



scripts/whl/manylinux2014/build_wheel_common.sh (+7 -1)

@@ -166,7 +166,13 @@ if [ ${BUILD_WHL_CPU_ONLY} = "OFF" ]; then
     fi
     if [[ -z ${TENSORRT_ROOT_DIR} ]]; then
         echo "Environment variable TENSORRT_ROOT_DIR not set."
-        exit -1
+        if [[ -z ${TRT_ROOT_DIR} ]]; then
+            echo "Environment variable TRT_ROOT_DIR not set."
+            exit -1
+        else
+            echo "put ${TRT_ROOT_DIR} to TENSORRT_ROOT_DIR env"
+            TENSORRT_ROOT_DIR=${TRT_ROOT_DIR}
+        fi
     fi

## YOU SHOULD MODIFY CUDA VERSION AS BELOW WHEN UPGRADE
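The fallback added above bridges two naming conventions: build_wheel_common.sh reads TENSORRT_ROOT_DIR, while scripts/cmake-build/BUILD_README.md tells users to export TRT_ROOT_DIR. The same logic, extracted into a standalone runnable sketch (the path value is a placeholder):

```shell
# Simulate: TENSORRT_ROOT_DIR unset, only the BUILD_README-style name given.
TENSORRT_ROOT_DIR=""
TRT_ROOT_DIR="/opt/tensorrt"

if [ -z "${TENSORRT_ROOT_DIR}" ]; then
    if [ -z "${TRT_ROOT_DIR}" ]; then
        echo "Neither TENSORRT_ROOT_DIR nor TRT_ROOT_DIR is set." >&2
        exit 1
    else
        # Reuse the TRT_ROOT_DIR value under the name the rest of the script expects.
        TENSORRT_ROOT_DIR=${TRT_ROOT_DIR}
    fi
fi
echo "TENSORRT_ROOT_DIR=${TENSORRT_ROOT_DIR}"
```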

