# MegEngine

<p align="center">
  <img width="202" height="118" src="logo.svg">
</p>

<h3> <a href="https://www.megengine.org.cn/doc/stable/en/user-guide/index.html"> Documentation </a> | <a href="https://www.megengine.org.cn/doc/stable/zh/user-guide/index.html"> 中文文档 </a> </h3>

[![](https://img.shields.io/badge/English-%E4%B8%AD%E6%96%87-green.svg)](README_CN.md) [![](https://img.shields.io/badge/Website-MegEngine-green.svg)](https://megengine.org.cn/) [![](https://img.shields.io/badge/License-Apache%202.0-green.svg)](LICENSE) [![](https://img.shields.io/badge/Chat-on%20QQ-green.svg?logo=tencentqq)](https://jq.qq.com/?_wv=1027&k=jJcBU1xi) [![](https://img.shields.io/badge/Discuss-on%20Zhihu-8A2BE2.svg?labelColor=00BFFF&logo=zhihu)](https://www.zhihu.com/people/megengine-bot)
MegEngine is a fast, scalable, and easy-to-use deep learning framework with three key features:

* **Unified core for both training and inference**
  * Quantization, dynamic shapes, image pre-processing, and even derivation can all be represented in one model.
  * After training, simply put everything into your model and run inference on any platform with ease. Because the same core powers both stages, speed and precision problems won't bother you anymore. Check the usage [here](https://www.megengine.org.cn/doc/stable/zh/user-guide/model-development/traced_module/index.html).
* **Lowest hardware requirements, enabled by algorithms**
  * During training, GPU memory usage can drop to one third at the cost of only one additional line, which enables the [DTR algorithm](https://www.megengine.org.cn/doc/stable/zh/user-guide/model-development/dtr/index.html) (see the sketch after this list).
  * Achieve the lowest memory usage when running inference by leveraging our unique pushdown memory planner.
* **Efficient inference on all platforms**
  * Fast, high-precision inference on x86/Arm/CUDA/ROCm.
  * Supports Linux/Windows/iOS/Android/TEE...
  * Save more memory and optimize speed by leveraging [advanced usage](https://www.megengine.org.cn/doc/stable/zh/user-guide/deployment/lite/advance/index.html).
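
A minimal Python sketch of the "one additional line" mentioned in the DTR bullet above, following the pattern described in the linked DTR guide; the `eviction_threshold` value is illustrative only, and the exact settings should be checked against that guide.

```python
# Minimal sketch: turn on DTR (Dynamic Tensor Rematerialization) before training.
# The threshold below is illustrative only; tune it to your GPU's memory budget.
import megengine as mge

mge.dtr.eviction_threshold = "5GB"  # optional: cap resident tensor memory (illustrative value)
mge.dtr.enable()                    # the single extra line that enables DTR

# ... build the model and optimizer, then run the usual training loop unchanged ...
```

Tensors evicted to stay under the budget are recomputed on demand, which is how DTR trades extra compute for a smaller GPU memory footprint.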
------
## Installation

**NOTE:** MegEngine currently supports Python installation on Linux-64bit/Windows-64bit/macOS(CPU-only)-10.14+/Android 7+(CPU-only) platforms with Python 3.5 to 3.8. On Windows 10 you can either install the Linux distribution through [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl) or install the Windows distribution directly. Many other platforms are supported for inference.

### Binaries

To install the pre-built binaries via pip wheels:

```bash
python3 -m pip install --upgrade pip
python3 -m pip install megengine -f https://megengine.org.cn/whl/mge.html
```
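
To confirm that the wheel installed correctly, a quick sanity check (assuming the standard `__version__` attribute exposed by the package):

```python
# Import MegEngine and print the installed version.
import megengine
print(megengine.__version__)
```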
## Building from Source

* For CMake build details, please refer to [BUILD_README.md](scripts/cmake-build/BUILD_README.md)
* For Python binding build details, please refer to [BUILD_PYTHON_WHL_README.md](scripts/whl/BUILD_PYTHON_WHL_README.md)
## How to Contribute

* MegEngine adopts the [Contributor Covenant](https://contributor-covenant.org) as a guideline for running our community. Please read the [Code of Conduct](CODE_OF_CONDUCT.md).
* Every contributor to MegEngine must sign a [Contributor License Agreement (CLA)](CONTRIBUTOR_LICENSE_AGREEMENT.md) to clarify the intellectual property license granted with their contributions.
* You can help improve MegEngine in many ways:
  * Write code.
  * Improve the [documentation](https://github.com/MegEngine/Docs).
  * Answer questions on the [MegEngine Forum](https://discuss.megengine.org.cn) or Stack Overflow.
  * Contribute new models to the [MegEngine Model Hub](https://github.com/megengine/hub).
  * Try a new idea on [MegStudio](https://studio.brainpp.com).
  * Report or investigate [bugs and issues](https://github.com/MegEngine/MegEngine/issues).
  * Review [Pull Requests](https://github.com/MegEngine/MegEngine/pulls).
  * Star the MegEngine repo.
  * Cite MegEngine in your papers and articles.
  * Recommend MegEngine to your friends.
  * Any other form of contribution is welcome.

We strive to build an open and friendly community. We aim to power humanity with AI.
## How to Contact Us

* Issue: [github.com/MegEngine/MegEngine/issues](https://github.com/MegEngine/MegEngine/issues)
* Email: [megengine-support@megvii.com](mailto:megengine-support@megvii.com)
* Forum: [discuss.megengine.org.cn](https://discuss.megengine.org.cn)
* QQ Group: 1029741705
## Resources

- [MegEngine](https://megengine.org.cn)
- [MegStudio](https://studio.brainpp.com)
- Mirror repositories
  - OPENI: [openi.org.cn/MegEngine](https://www.openi.org.cn/html/2020/Framework_0325/18.html)
  - Gitee: [gitee.com/MegEngine/MegEngine](https://gitee.com/MegEngine/MegEngine)
## License

MegEngine is licensed under the Apache License, Version 2.0.
## Citation

If you use MegEngine in your publication, please cite it using the following BibTeX entry.

```
@Misc{MegEngine,
  institution = {megvii},
  title = {MegEngine: A fast, scalable and easy-to-use deep learning framework},
  howpublished = {\url{https://github.com/MegEngine/MegEngine}},
  year = {2020}
}
```

Copyright (c) 2014-2021 Megvii Inc. All rights reserved.