From f147399a1b128687252330514f7e4a53638f81cb Mon Sep 17 00:00:00 2001
From: bingyaweng
Date: Fri, 7 Aug 2020 18:11:55 +0800
Subject: [PATCH 1/8] update README

remove mdp/docs and mdp/meetings
update README
---
 sigs/mdp/README.md | 4 ++--
 sigs/mdp/docs/.keep | 0
 sigs/mdp/meetings/.keep | 0
 3 files changed, 2 insertions(+), 2 deletions(-)
 delete mode 100644 sigs/mdp/docs/.keep
 delete mode 100644 sigs/mdp/meetings/.keep

diff --git a/sigs/mdp/README.md b/sigs/mdp/README.md
index df60daf..8b41831 100644
--- a/sigs/mdp/README.md
+++ b/sigs/mdp/README.md
@@ -2,7 +2,7 @@
 This is the working repo for the MDP Special Interest Group (SIG). MindSpore Deep Probabilistic Programming (MDP) is a programming library for Bayesian deep learning. The target of MDP is to bridge the gap between deep learning and Bayesian learning. This repo contains all the artifacts, materials, meeting notes and proposals regarding **Probabilistic Programming**, **Deep Probabilistic Programming** and **Toolbox**. Feedback and contributions are welcome.
 1. Probabilistic Programming: Probabilistic Programming (PP) focuses on professional Bayesian learning, including statistical distribution classes used to generate stochastic tensors and probabilistic inference algorithms.
-2. Deep Probabilistic Programming: Deep Probabilistic Programming (DPP) aims to provide composable BNN modules, which contains bnn layers, bnn, transforms and context.
+2. Deep Probabilistic Programming: Deep Probabilistic Programming (DPP) aims to provide composable BNN modules, which contain bnn layers, bnn modules, transforms and context.
 3. Toolbox: Toolbox provides a set of BNN tools for some specific applications, such as Uncertainty Estimation, OoD Detection and so on.
 ## SIG Leads
 Chen Jianfei (Tsinghua University)
@@ -13,4 +13,4 @@ Chen Jianfei (Tsinghua University)
 ## Discussion
 - Slack channel: https://app.slack.com/client/T018BLCMSGL/learning-slack
 - Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/mdp
-## Meeting notes
\ No newline at end of file
+## Meeting notes
diff --git a/sigs/mdp/docs/.keep b/sigs/mdp/docs/.keep
deleted file mode 100644
index e69de29..0000000
diff --git a/sigs/mdp/meetings/.keep b/sigs/mdp/meetings/.keep
deleted file mode 100644
index e69de29..0000000

From 23f063afb8b32d42c73cc1491b45f4132e9cdf72 Mon Sep 17 00:00:00 2001
From: lvzhangcheng
Date: Tue, 11 Aug 2020 11:20:42 +0800
Subject: [PATCH 2/8] add public key and modified report way.

---
 security/cve-report_en.md | 20 +-------------
 security/cve-report_zh_cn.md | 21 ++-------------
 security/public_key_securities.asc | 55 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+), 38 deletions(-)
 create mode 100644 security/public_key_securities.asc

diff --git a/security/cve-report_en.md b/security/cve-report_en.md
index 3ebd3b9..19f9993 100644
--- a/security/cve-report_en.md
+++ b/security/cve-report_en.md
@@ -14,27 +14,9 @@
 To build a more secure AI framework, we sincerely invite you to join us.

 If you find a suspected security issue, use [Suspected Security Issue Reporting Template](https://gitee.com/mindspore/community/blob/master/security/template/report-template_en.md) to report it so that the community vulnerability management team (VMT) is able to confirm and fix the issue as soon as possible with sufficient details. Your email will be confirmed within one working day. Within seven days, we will provide more detailed replies to your suspected security issues and provide the next-step handling policy.

-To ensure security, please use the PGP public key to encrypt your email before sending it.
+To ensure security, please use the [PGP public key](https://gitee.com/mindspore/community/blob/master/security/public_key_securities.asc) to encrypt your email before sending it. + Security email address: -+ PGP public key: - ``` - -----BEGIN PGP PUBLIC KEY BLOCK----- - - iQG2BCABCgAgFiEEwUbNw8zaTIe27U8lt42TVbzPfREFAl58v18CHQAACgkQt42T - VbzPfRGVswwAnSIi1fE0CzIkxPrhfcnfF+vx5y+qpk6ssFr5iFuepBSbA+ZGhaDn - ULYOkBMnGfrgzjw8OzMK7vKIgR2ymmuTJt9qpFH4OIXRX1OXoMYnkPxrQJFpNZpP - BvnxmEey0VOvz9Y3Fa4mHMjvA3I2pbSlH+T2wkGQRO5zhKN7NhQfRFgyFNQT2l5m - pPBdm+sAs5ty6eQuSZF1wECIW17WB53o171DTNbAPySEfOLvq0orNAJWjT4sR1jn - 9M20t3DpjC5dZuMCUuZTbCgHkaLOo0ZkwMXV+dPkm/4hMWLVPxRvlkH02PI++KBl - N8cW+TZb1YN/va9Nrjh+Ah50Px2nmQ/fk60VHKj5hTb8U+PSPGlvWUALwb6ckm55 - nUcBvFiDpe7uAtX88sv2kBR6gIbr0pW9JwOnBLjxGoM3lgfrIot1qFWdBGJrRnIo - bgMtm0PEcwRfHefJY//4BiDgg2ef9DIX7VSSb6rV0HJpNz0IAxyzG41BdSG+3dSb - ns0y2L0F2M+N - =HPa4 - - -----END PGP PUBLIC KEY BLOCK----- - ``` ## MindSpore Community Security Issue Disclosure Process diff --git a/security/cve-report_zh_cn.md b/security/cve-report_zh_cn.md index 863fa01..566b4eb 100644 --- a/security/cve-report_zh_cn.md +++ b/security/cve-report_zh_cn.md @@ -14,27 +14,10 @@ MindSpore作为一个同时支持端/边缘/云场景的训练推理框架,在 如果您发现了疑似安全问题,请您使用[疑似安全问题上报模板](https://gitee.com/mindspore/community/blob/master/security/template/report-template_zh_cn.md)进行反馈,以便社区漏洞管理团队在能够获得足够详细信息的条件下,尽快确认并修复问题。您的邮件将在1个工作日内得到确认,在7天内对您反馈的疑似安全问题提供更详细的回复,并给出下一步的处理策略。 -鉴于安全问题的敏感性,请使用PGP公钥加密后发送。 +鉴于安全问题的敏感性,请使用[PGP公钥](https://gitee.com/mindspore/community/blob/master/security/public_key_securities.asc)加密后发送。 + 安全邮箱: -+ PGP公钥: - ``` - -----BEGIN PGP PUBLIC KEY BLOCK----- - - iQG2BCABCgAgFiEEwUbNw8zaTIe27U8lt42TVbzPfREFAl58v18CHQAACgkQt42T - VbzPfRGVswwAnSIi1fE0CzIkxPrhfcnfF+vx5y+qpk6ssFr5iFuepBSbA+ZGhaDn - ULYOkBMnGfrgzjw8OzMK7vKIgR2ymmuTJt9qpFH4OIXRX1OXoMYnkPxrQJFpNZpP - BvnxmEey0VOvz9Y3Fa4mHMjvA3I2pbSlH+T2wkGQRO5zhKN7NhQfRFgyFNQT2l5m - pPBdm+sAs5ty6eQuSZF1wECIW17WB53o171DTNbAPySEfOLvq0orNAJWjT4sR1jn - 
9M20t3DpjC5dZuMCUuZTbCgHkaLOo0ZkwMXV+dPkm/4hMWLVPxRvlkH02PI++KBl - N8cW+TZb1YN/va9Nrjh+Ah50Px2nmQ/fk60VHKj5hTb8U+PSPGlvWUALwb6ckm55 - nUcBvFiDpe7uAtX88sv2kBR6gIbr0pW9JwOnBLjxGoM3lgfrIot1qFWdBGJrRnIo - bgMtm0PEcwRfHefJY//4BiDgg2ef9DIX7VSSb6rV0HJpNz0IAxyzG41BdSG+3dSb - ns0y2L0F2M+N - =HPa4 - - -----END PGP PUBLIC KEY BLOCK----- - ``` + ## MindSpore社区安全问题披露流程 diff --git a/security/public_key_securities.asc b/security/public_key_securities.asc new file mode 100644 index 0000000..79d6350 --- /dev/null +++ b/security/public_key_securities.asc @@ -0,0 +1,55 @@ +-----BEGIN PGP PUBLIC KEY BLOCK----- +Version: Keybase OpenPGP v1.0.0 +Comment: https://keybase.io/crypto + +xsBNBF8wxmcBCADQgjuS6JSvD5rbOstF/pkPahLpVubGYFJbrPLnCJywmZ0fq8fv +UUcOJSM5wdPEq7LHwAq1pU0Khf3w0We+ld0zwCs7RtQOY9TddaovQxhrxQG+SJXM ++S9HE4YX1ktXPuRk/AGByk3jwYa7W63IvAoGRqPoUGwO0YkQjNU3S5RUmVB2exk6 +P3qY7hbjc68TOiX+J0vCy4f0uWKIzWjTIt9JkODtJv2ssaWwu192ZDJLb0JkWinU +LaKKRNorPS5Dy1jIsbVj7y9Qf4TKGMj7WI+9di3w0D/Aiij6Voh766l9E/PLUZ3W +/xcfRTvDnohoIYqcYnIjDcl97NRX3YOM1iA3ABEBAAHNNG1pbmRzcG9yZS1zZWN1 +cml0eSA8bWluZHNwb3JlLXNlY3VyaXR5QG1pbmRzcG9yZS5jbj7CwG0EEwEKABcF +Al8wxmcCGy8DCwkHAxUKCAIeAQIXgAAKCRDonEDWTnblPk4sCACRdC01UAUAstRi +egZbaSYGhTBfJGvh23kjTGJnoxnK+7TxGp7Cm1w9rn3Y0gLK9mCyksYSSkd+FK6Y +r7A3x3JEmL54D/BTJj6comTYLZP8u0C2S/ifivILSxwZ0xNmf9HMyTWqvXaD2wTT +pPCuKBQHgKU4twI6/tsdGwqZRn0E3vddz5SwZ8enXS2QbijUDRqKaljQkj6ZWrOL +YFFff7J5BusEfPIX54imGiV2EFIhvm53mYK2zl7L60QcW20HauGaY0IzQUxVGl1E +9CRc7/duyVEOJWWwp0IMXDHbOBCr+ViUqubIY0SBSvXqpy2Z0dUQkYK3Z54J0oHA +sQ0e8e4LzsBNBF8wxmcBCADJ8gP8cUxMPGIZwbPmsyZHcba2C99tfT1qwCfuMIZS +KuOzUQfH6nilXhi5WlCpGGVypdCQLSl8wU24OQPmKv1D5y0r2h1DI2Ipya3THn7r +CTP07MgqOzbdHPnqWgYOVH376FpgXqcxG1/GlicKnInTFuWt7iSlFDD9eX3JLHRa +CDFm4YEwpO3HAsXzuP9wxRpEccO7Q68x2dzflfbV0TDl3f8GU6fdNHK8xSOixRyY +4oq3z7hP3bnFC1yBs2UV6Px/BUseLtmvWGl+3xL5zvLyzWyWcPdqN4CcuI/7LXk+ +mAHeLBSNiV59Tjyv7KMqKppBu5vEWpeeavSNORvmwJOBABEBAAHCwYQEGAEKAA8F +Al8wxmcFCQ8JnAACGy4BKQkQ6JxA1k525T7AXSAEGQEKAAYFAl8wxmcACgkQMBVh 
+JvIIuQuOrQf/bO5H88GnJ+mZz/R07S9IANcS+UvnDYkEVhbIMfdJN0uUOF7PwL2K +MjsnCPc9WgKc3Vf12x28+tpqSJtM1Zk9EDvhaqiu9vOpAHpzSVAsJpjd2M8InZwc +1XXqXC44AvEYj49QW63Wh8pu8RFAK6DY2FTOF4qTXQkV2lu0ocE2KCJkcC4KfwLf +27pyBHpb5yeP6bUrYYdduhzAZQxD313rd+YfZqycMlZafjqSQifTGpgpjh7fQ3jS +TtvDwLTmXdzzW72IaYgrOir6jFeuBB2gpNSV71uYReLLxiJ+1ngNhAbGuDp3k3Ix +inCF1dzkkGEa8Uk7MAiP9L80k2gaSzPRryhWB/0ZGLi3/KegKGlIGdlP1UwAsxgS +pkbgYnb+q25jDeoKWMRgTFB+ZurqxXqPtQp9cznQ5fXNldnE/EIC363jr4rgUlfR +V+ouAr3/yKK/2loLIUvmIdnBEIYJl+gRQrM94mAKpJTr8mEZlzoO7ChY5s99XJEL +UgAs/Q+k2ISp080qzLTCYmfmXaAdvOKdaphLhHJPmf0bAS+IX2TI6PjQetfkumae +PWjahmA6cAqQDy4/fFWMTFIvzdQvPICPdHEklKvmLmIuSN8ciYY9GJTygnW7HJcA +laH6RG45EyWrTQRAuIgrVl8PdILuaAjdmEWRdOITnxj6IrB5Ggr3RQnuLuUqzsBN +BF8wxmcBCADZD+WhuNgEd7CIuNXO6dd3TJuBEMBdpmrxUCuh/KEz3BiiE4wMcv3q +wwpd0EpUDuORq/wTyrJnBOq82mdQMbDSPoP4WBmGGVUvf84IiDU5m2ZgD6kq1Aur +dCZsuBWAWSLyPIY1Kqk5VNId46sZwDhc5ueXobe6V0pr1IlRgYdPYo52OCXLVSWy +NCt1NPD00ln74m2JcodnCax3IpFjbaBQylxkFuzTNUxxyL2N2ZQHQjnuOakyG/zm +MT/otKHLytPNvfsAvSpNZ+RQZMAYUDR7YoXC5qjY2zNIGw6nO2zOwmQ6q/QJdgeb +Q5Iy4bWclAeYvId+RjVKU6ZP79hw+et7ABEBAAHCwYQEGAEKAA8FAl8wxmcFCQ8J +nAACGy4BKQkQ6JxA1k525T7AXSAEGQEKAAYFAl8wxmcACgkQnk09g1xm9S7DDQgA +orf4j7UcZRbhaRSeqY/u9ExN8a6DTf2GOru5ru0xvnfBOLmfqnfGz0oN7lun6hMg +sRKKQHFJc2950w40ewsPTKOqRFdwT2nzZ/RM5hQWGOgE2MOo4rmlq0caZ4nwPeva ++JgUW9LhGAd0h8iRWUr03Cjzy7WLrIl3W6yDeQ516HnAeEShz5tw1hse257EW5tE +suYPIU4L1b/W6VxI9jB5wpnMP0IKzu4+TL4eCXTCNS6PTLjmdex/1P3pWymLTAw2 +ZkR4U0yCjckYhbXhRDcwdD1glkRpE/oUjC5SuqV9WqUs8py94JCpPSNsGb2kplKk +IM5oNvOUlkOHEojcgAMedh9HB/sHqEYvMIgQuiTEluCNr+xww/7oAMUcYD38XwrF +eKob87W//x6bMu2XygM0zXfpM0V5xYIG6VDLg0gythzxcC+JOUmKmDhFLGWsvs3f +plA1BUXEJALxeZMbtTta0kr0pw0nIKN+zazaJmGwkREs2df+XDfuyxWxNt17H46p +RvsDafIw+E6nl+MfR/ZlcCmrtm6JZSzKQufph/+xVgLHRlMOKKs+0151EDSzGEra +nDCvnhLfFha+ro4xl70QgKu6hlRu/0oxCfx/jh8/kKOXeuOwfuTsBWXRysfxqY57 +b82gD5e2OFeB+F2PnIPIpst5iyT9I12NeTSUeEml5b/gw4JH +=UBbA +-----END PGP PUBLIC KEY BLOCK----- From feea30813e307fb674e735390d8c123823a75edb Mon Sep 17 
00:00:00 2001
From: pkuliuliu
Date: Mon, 10 Aug 2020 10:19:08 +0800
Subject: [PATCH 3/8] add MindSpore/MindArmour SIG meeting of August

---
 sigs/security/README.md | 1 +
 .../{meetiings => meetings}/001-20200604.md | 0
 .../{meetiings => meetings}/002-20200703.md | 0
 sigs/security/meetings/003-20200808.md | 33 ++++++++++++++++++++++
 .../{meetiings => meetings}/meeting-template.md | 0
 5 files changed, 34 insertions(+)
 rename sigs/security/{meetiings => meetings}/001-20200604.md (100%)
 rename sigs/security/{meetiings => meetings}/002-20200703.md (100%)
 create mode 100644 sigs/security/meetings/003-20200808.md
 rename sigs/security/{meetiings => meetings}/meeting-template.md (100%)

diff --git a/sigs/security/README.md b/sigs/security/README.md
index a68ba81..8b7c5f8 100644
--- a/sigs/security/README.md
+++ b/sigs/security/README.md
@@ -22,3 +22,4 @@ This is the working repo for the MindArmour special interest group (SIG). This r
 # Meeting notes
 * [Thursday June 04, 2020](./meetings/001-20200604.md)
 * [Friday July 03, 2020](./meetings/002-20200703.md)
+* [Saturday August 08, 2020](./meetings/003-20200808.md)
diff --git a/sigs/security/meetiings/001-20200604.md b/sigs/security/meetings/001-20200604.md
similarity index 100%
rename from sigs/security/meetiings/001-20200604.md
rename to sigs/security/meetings/001-20200604.md
diff --git a/sigs/security/meetiings/002-20200703.md b/sigs/security/meetings/002-20200703.md
similarity index 100%
rename from sigs/security/meetiings/002-20200703.md
rename to sigs/security/meetings/002-20200703.md
diff --git a/sigs/security/meetings/003-20200808.md b/sigs/security/meetings/003-20200808.md
new file mode 100644
index 0000000..69b7404
--- /dev/null
+++ b/sigs/security/meetings/003-20200808.md
@@ -0,0 +1,33 @@
+# Saturday August 8, 2020 at 2:30pm GMT+8
+
+## Agenda
+- Refactor AI Fuzzer module.
+- Add new feature: model information reverse analysis technique - membership inference attack.
+- Support graph mode of DpOptimizer.
+- Support broadcast ability of Laplace random operation.
+
+## Conference links
+- https://imeeting.huawei.com/meeting/joinzoom?id=280361&app=welink
+- Meeting ID: 280361
+- Please install Zoom before the meeting.
+
+## Attendees
+* Wang Ze (Huawei)
+* Lv Zhangcheng (Huawei)
+* Liu Liu (Huawei)
+* Liu Zhidan (Huawei)
+* Yang Yuan (Huawei)
+* Zheng Huanhuan (Huawei)
+* Jin Xiulang (Huawei)
+* Li Peng (Huawei)
+* Li Yanjun (Huawei), etc.
+
+## Notes
+* Participants: Wang Ze, Liu Liu, Liu Zhidan, Yang Yuan, Zheng Huanhuan, Jin Xiulang, Li Peng, Li Yanjun, etc.
+
+* The meeting video can be found:
+
+  *Post link after meeting*.
+
+## Action items
+* None.
diff --git a/sigs/security/meetiings/meeting-template.md b/sigs/security/meetings/meeting-template.md
similarity index 100%
rename from sigs/security/meetiings/meeting-template.md
rename to sigs/security/meetings/meeting-template.md

From 74f8e77a6565ff79e5424a1103f1b3d53f77e456 Mon Sep 17 00:00:00 2001
From: godbaiqi
Date: Mon, 17 Aug 2020 15:52:51 +0800
Subject: [PATCH 4/8] combine working-groups

---
 working-groups/research/Topic10:AutoML.md | 20 ++++++++++++++++++++
 .../Topic1: Low-bit Neural Networks Training.md | 20 ++++++++++++++++++++
 .../research/Topic2: Memory Optimization.md | 22 ++++++++++++++++++++++
 .../research/Topic3:Model Innovation.md | 22 ++++++++++++++++++++++
 .../Topic4:AI for Scientific Computing.md | 20 ++++++++++++++++++++
 .../research/Topic5: Verifiable Trustworthy AI.md | 19 +++++++++++++++++++
 .../research/Topic6: Confidential AI Computing.md | 20 ++++++++++++++++++++
 ... Tensor Differentiable Calculation Framework.md | 21 +++++++++++++++++++++
 ...tributed And Parallel AI Computing Framework.md | 20 ++++++++++++++++++++
 .../research/Topic9:Explainable AI.md | 20 ++++++++++++++++++++
 10 files changed, 204 insertions(+)
 create mode 100644 working-groups/research/Topic10:AutoML.md
 create mode 100644 working-groups/research/Topic1: Low-bit Neural Networks Training.md
 create mode 100644 working-groups/research/Topic2: Memory Optimization.md
 create mode 100644 working-groups/research/Topic3:Model Innovation.md
 create mode 100644 working-groups/research/Topic4:AI for Scientific Computing.md
 create mode 100644 working-groups/research/Topic5: Verifiable Trustworthy AI.md
 create mode 100644 working-groups/research/Topic6: Confidential AI Computing.md
 create mode 100644 working-groups/research/Topic7: Tensor Differentiable Calculation Framework.md
 create mode 100644 working-groups/research/Topic8: Distributed And Parallel AI Computing Framework.md
 create mode 100644 working-groups/research/Topic9:Explainable AI.md

diff --git a/working-groups/research/Topic10:AutoML.md b/working-groups/research/Topic10:AutoML.md
new file mode 100644
index 0000000..dc5931c
--- /dev/null
+++ b/working-groups/research/Topic10:AutoML.md
@@ -0,0 +1,20 @@
+# Topic10:AutoML
+
+## Motivation:
+
+Nowadays, training a model that meets the accuracy requirements often requires rich expert knowledge and repeated iterative attempts. Although AutoML technology exists, there are still the problems of difficult search-space setting and long training time for large search spaces. If the iterative history of user training can be combined with analysis of that historical data, a lightweight hyperparameter recommendation method can be realized, which can greatly improve the developer experience.
+
+Similarly, performance tuning has similar problems. In different heterogeneous hardware, model, and data processing scenarios, expert knowledge is required for tuning.
Therefore, we aim to reduce the performance tuning threshold by automatically identifying system performance bottlenecks and recommending the best code path. + +## Target: + +​ Reduce model development cost, set up thresholds through automatic hyper-parameter configuration and performance optimization paths, and improve model debugging and optimization efficiency. + +## Method: + +​ We expect the applicant can conduct AutoML research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join: + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic1: Low-bit Neural Networks Training.md b/working-groups/research/Topic1: Low-bit Neural Networks Training.md new file mode 100644 index 0000000..4d4c30e --- /dev/null +++ b/working-groups/research/Topic1: Low-bit Neural Networks Training.md @@ -0,0 +1,20 @@ +# Topic1: Low-bit Neural Networks Training + +## Motivation: + +​ At present, mixed precision can automatically adjust the accuracy of fp16 and fp32 for the network to improve training performance and memory optimization. Because operators have different costs on different AI chips, all optimization strategies for different AI chips are different. The network configuration of different hardware is different, so how to automatically generate the precision adjustment strategy that adapts to various hardware, especially the low bit strategy has become a difficult problem. + +## Target: + +​ Self-adaptively provides a low-bit precision training mechanism for various networks. 
+ +![target](target.PNG) + +## Method: + +​ We expect the applicant can conduct low-bit neural networks training research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic2: Memory Optimization.md b/working-groups/research/Topic2: Memory Optimization.md new file mode 100644 index 0000000..074cb61 --- /dev/null +++ b/working-groups/research/Topic2: Memory Optimization.md @@ -0,0 +1,22 @@ +# Topic2: Memory Optimization + +## Motivation: + +​ There are many strategies for memory optimization, such as recalculation and host-device memory switching. These strategies further break through the memory bottleneck by increasing the amount of calculation and increase the batchsize. Increasing batchsize can often improve the utilization of GPU and NPU to improve throughput performance. + +## Target: + +* Adaptive search memory optimization strategy to optimize the overall network performance. + +* Or provide a methodological strategy. + + ![memor_opt](memor_opt.PNG) + +## Method: + +​ We expect the applicant can conduct memory optimization research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic3:Model Innovation.md b/working-groups/research/Topic3:Model Innovation.md new file mode 100644 index 0000000..fa59dc0 --- /dev/null +++ b/working-groups/research/Topic3:Model Innovation.md @@ -0,0 +1,22 @@ +# Topic3:Model Innovation + +## Motivation: + +1. In-depth probability model innovation: through the combination of neural network and probability model, the model can better help decision-making. +2. Graph neural network: The neural network is combined with the traditional graph structure, oriented to cognitive reasoning and future trends. +3. Model innovation combining traditional models and neural networks is a research hotspot. + +## Target: + +- Complete probability sampling library and probability inference (learning the probability distribution of the overall sample through known samples) algorithm library +- Design new algorithms for dynamically changing heterogeneous graphs (different feature dimensions and different information aggregation methods) +- Trillion distributed graph data storage, segmentation and sampling + +## Method: + +​ We expect the applicant can conduct model innovation research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join: + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. 
Submit your proposal to us by email xxx@huawei.com
\ No newline at end of file
diff --git a/working-groups/research/Topic4:AI for Scientific Computing.md b/working-groups/research/Topic4:AI for Scientific Computing.md
new file mode 100644
index 0000000..337573e
--- /dev/null
+++ b/working-groups/research/Topic4:AI for Scientific Computing.md
@@ -0,0 +1,20 @@
+# Topic4:AI for Scientific Computing
+
+## Motivation:
+
+* AI modeling: AI automatic modeling can effectively improve modeling efficiency, and convergence analysis can improve model reliability and ensure simple and safe use by users.
+* AI solution: The calculation amount of high-order differentials increases exponentially with the parameters and the order. We can design neural network models to solve such classic problems.
+
+## Target:
+
+ * AI modeling: Construct a neural network, training data and loss function for scientific computing problems.
+ * AI solution: AI models solve differential equations and optimization problems, achieving the goal that the amount of high-order automatic differentiation calculation increases linearly with the order.
+
+## Method:
+
+We expect the applicant can conduct AI for scientific computing research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
+
+## How To Join:
+
+1. Submit an issue/PR based on community discussion for consultation or claim on related topics
+2. 
Submit your proposal to us by email xxx@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic5: Verifiable Trustworthy AI.md b/working-groups/research/Topic5: Verifiable Trustworthy AI.md new file mode 100644 index 0000000..00056f5 --- /dev/null +++ b/working-groups/research/Topic5: Verifiable Trustworthy AI.md @@ -0,0 +1,19 @@ +# Topic5: Verifiable Trustworthy AI + +## Motivation: + +- Many aspects of trustworthy AI (or responsible AI), such as robustness, backdoor-free, fairness, privacy protection capabilities, and accountability, have gradually attracted the attention of the industry and academia. +- Scholars' understanding and research on the attributes of trustworthy AI are mostly empirical, and there are few theoretical studies. The verifiable and certifiable analysis, tuning, and evaluation methods of trustworthy AI attributes and bounds, and their relation to explainable AI, require theoretical guidance. + +## Target: + +​ Propose verifiable and certifiable research mechanism and evaluation system on trustworthy AI. + +## Method: + +​ We expect the applicant can conduct Verifiable Trustworthy AI research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. 
Submit your proposal to us by email xxx@huawei.com
diff --git a/working-groups/research/Topic6: Confidential AI Computing.md b/working-groups/research/Topic6: Confidential AI Computing.md
new file mode 100644
index 0000000..e8df265
--- /dev/null
+++ b/working-groups/research/Topic6: Confidential AI Computing.md
@@ -0,0 +1,20 @@
+# Topic6: Confidential AI Computing
+
+## Motivation:
+
+- In the training and deployment process of AI services, several vital resources such as data, models, and computing resources may belong to different parties, so a large amount of data will move across trust domains. The problems of data privacy protection and model confidentiality protection are prominent.
+- Confidential computing is an important direction to protect the confidentiality of key data. At present, confidential computing based on a trusted execution environment has performance advantages, but its trust model is limited; the trust model of confidential computing based on cryptography (homomorphic encryption, multi-party computation) is simple, but there is still a gap between performance and practicality.
+- A series of specialized optimizations may improve the performance of confidential computing in AI scenarios, including but not limited to: cryptography suitable for AI, specialized intermediate representation and compiling strategy for confidential AI computing, and hardware-based acceleration.
+
+## Target:
+
+Realize an AI on Encrypted Data & Model computing framework with feasible, flexible and efficient performance in actual AI application scenarios, or its key technologies.
+
+## Method:
+
+We expect the applicant can conduct Confidential AI Computing research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
+
+## How To Join
+
+1. Submit an issue/PR based on community discussion for consultation or claim on related topics
+2. Submit your proposal to us by email xxx@huawei.com
\ No newline at end of file
diff --git a/working-groups/research/Topic7: Tensor Differentiable Calculation Framework.md b/working-groups/research/Topic7: Tensor Differentiable Calculation Framework.md
new file mode 100644
index 0000000..47b9121
--- /dev/null
+++ b/working-groups/research/Topic7: Tensor Differentiable Calculation Framework.md
@@ -0,0 +1,21 @@
+# Topic7: Tensor Differentiable Calculation Framework
+
+## Motivation:
+
+* The new network model poses challenges to the IR expression, optimization and execution of the deep learning framework, including the introduction of a new op abstraction level, and the dynamics of the model.
+* Third-party high-performance computing languages or frameworks are accelerated, and there is an urgent need for a more versatile and open tensor computing framework and API design.
+* The technical challenges of unified optimization of the model layer and the operator layer, including hierarchical IR design, optimization of infrastructure, automatic tuning, loop optimization, etc.
+* Differential equations are solved with a large number of differentials, which have high requirements for the differential expression of the framework, interface design, algorithm analysis efficiency and reliability.
+
+## Target:
+
+Driven by cutting-edge applications, from the perspectives of new models, dynamic models, high-performance computing languages, etc., study the evolution direction and key technology paths of future computing frameworks. For example, supporting differentiable programming with high-order differentiation that is compatible with traditional Fortran/C numerical calculation frameworks.
+
+## Method:
+
+We expect the applicant can conduct tensor differentiable calculation framework research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process.
We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file diff --git a/working-groups/research/Topic8: Distributed And Parallel AI Computing Framework.md b/working-groups/research/Topic8: Distributed And Parallel AI Computing Framework.md new file mode 100644 index 0000000..2b7b213 --- /dev/null +++ b/working-groups/research/Topic8: Distributed And Parallel AI Computing Framework.md @@ -0,0 +1,20 @@ +# Topic8: Distributed And Parallel AI Computing Framework + +## Motivation: + +* The scale and complexity of models are getting higher and higher, such as GPT-3 with 175 billion parameters, millions of face recognition, and tens of billions of feature recommendations. +* It is difficult to split the model manually. For example, developers need to combine information such as calculation amount, cluster size, communication bandwidth, and network topology to construct a parallel mode. +* The expression of the parallel mode lacks adaptability, and the simple graph-level model segmentation cannot obtain high-efficiency speedup. It requires the decoupling of algorithm logic and parallel logic. + +## Target: + +​ Driven by super-large models, research key technologies for accelerating distributed training, including but not limited to automatic parallelism, hybrid parallelism, memory optimization, and elastic scaling. Such as achieving heterogeneous automatic parallel efficiency and linear speedup. + +## Method: + +​ We expect the applicant can conduct distributed and parallel AI computing framework research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process. 
We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support.
+
+## How To Join
+
+1. Submit an issue/PR based on community discussion for consultation or claim on related topics
+2. Submit your proposal to us by email xxx@huawei.com
\ No newline at end of file
diff --git a/working-groups/research/Topic9:Explainable AI.md b/working-groups/research/Topic9:Explainable AI.md
new file mode 100644
index 0000000..cdc3c2f
--- /dev/null
+++ b/working-groups/research/Topic9:Explainable AI.md
@@ -0,0 +1,20 @@
+# Topic9:Explainable AI
+
+## Motivation:
+
+The current deep learning model is essentially a black box due to its technical complexity, which leads to the opacity and inexplicability of AI services and further restricts the commercial application and promotion of AI services. Existing interpretable AI technology mainly focuses on how to provide limited engineering auxiliary information to the model, but ignores the understanding of AI models from the perspective of human cognition.
+
+Humans usually understand things through analogies, metaphors, induction and other cognitive methods, and have a certain process of mental cognition construction. Thus, in this project, we expect to be able to explore more systematic and interpretable AI methods that conform to human cognition, including interactive interfaces, interpretation methods, measurement methods, and so on.
+
+## Target:
+
+A complete set of interpretable AI methods and strategies in line with human cognition, providing necessary interactive cognitive interface design solutions for different scenarios and different cognitions, and a case study for typical scenarios.
+
+## Method:
+
+We expect the applicant can conduct XAI research based on MindSpore, and hope to get your valuable suggestions to MindSpore in the process.
We will do our best to improve the capabilities of the MindSpore framework and provide you with the most powerful technical support. + +## How To Join: + +1. Submit an issue/PR based on community discussion for consultation or claim on related topics +2. Submit your proposal to us by email xxx@huawei.com \ No newline at end of file From 3d86ce82a18060258e0b7a4ee33d3f7027b4648a Mon Sep 17 00:00:00 2001 From: godbaiqi Date: Mon, 17 Aug 2020 16:13:56 +0800 Subject: [PATCH 5/8] performance_png performance_png --- working-groups/research/memor_opt.PNG | Bin 0 -> 4363 bytes working-groups/research/target.PNG | Bin 0 -> 3945 bytes 2 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 working-groups/research/memor_opt.PNG create mode 100644 working-groups/research/target.PNG diff --git a/working-groups/research/memor_opt.PNG b/working-groups/research/memor_opt.PNG new file mode 100644 index 0000000000000000000000000000000000000000..d1fcde619753adc9907deeed5b97e9b72c20da34 GIT binary patch literal 4363 zcmV+m5%lhfP)Px#1ZP1_K>z@;j|==^1poj532;bRa{vGi!vFvd!vV){sAK>D5THp!K~#8N?VX8} zBt;!Y{SOV0Q)IbEU;*Wr5K)wC6ci1V3A_S4MuX5W!Uz{xj$H+0Ra5=Fh>V@?uCA^gm@l(lW##W(!5CxA;BdGbW6S_?xEo{4 z0CBh*W6S_?xEo{402v$ZtE;PFWo0F-t*wQnrKPIz94s#{Z$7tg-@fWOk#0}h*w|=Z z37gxe?_XbEubLjya?j4@`&95LJpNQGexD=PeT{-;Dt13)1L+VP{{In zI&voD^AP4|eQw@a2#!Hvu5i!c_4HVIoOb}m7&AmB(lRM&s0xL%g1Xcc==_`px!#j@ z2)xd7Fe;GQpN?%*2cVI<&Jyy^F~*po(n`21OiO56(Pp@V&KhW=wlT(-p)u2N&wnm4 zZMcsL?({gug}XWd8b2Fj%px%JaOX81r=#~Yc&80_;OF!+^}0S|T)1oh>Hdr{X6Vd3 z+(7}amIi&k?Iht2+Lv0&xNrxKCqHImj2SXx!+j{w8Tdk~F=mk%e7Kj+K8!JD=nOX8 zrFO>Y8e@zZDhr`yj4@`>aJU;|%m8t?8)M7>akv{}%m8t?8)M7>akv{}%m8t?8)M7> zakv{}%m8UM++Y4@2(P`_JTS%>GdEfd_mw9?c=uf)oVY)P3x5>C_3`nyUJT)Z--odA z_YmIrPqU#h#+aGXYPb{V%j+Rr{Yd@qB@c%1{`lOv2SPaYo;vtXy*GqQe;mTC&xY{5 zUxo1KAL|kS+wU|xF~%4(HLZsGdwvkY#~-Ul@TJB`{v$E?FOT8A_kj@3`(X%Y-d~UO z34R`bG`{}j=R(;3n-HG)a|nO^ce8V2j4>l=E!=VKC&Z90h597z+Sn<4_$MKJ@SzYc zV6-2DeoqYk<70;*&GX5h)|%&`KUnh^V@yhG;r`^G>QQ(pxR13}n&;sy&2!GdEzR@3 
z`$D)Pz9!A{#H-dk#+b#UwQ#@d*D-kGXuA~7ZMEy;XkVJ=!bbC)5yPI5{&8`{FU@oN z^R?zV^qMt~F=m*w7Vda*w9GjNLO4H;u5s#wF5!!7FNJfPt2wJa7KD`7*{Bv&(4pf-0lp!+)tq$&8k{xXS36Kz;AMAuPq8ap&k2 z&8jH{bK7m3=84CYYn~6qV?*V~jC85fllkgxN9#>d6Jo zgG28^^APH2o)5&)ey({qPHCQRZ`9xArPr-_j4|`0ci}z`M#ao^XOxW7C)gQ1SNf!& zc|O*`XrIP$rJCpL*dd)F%@dCk&2!tKEzR@F8_gFq#+VtgtHOO0(mdSVD$T=PuXyuK z&C>)w&!c&+>qzrlCCw9$MVf~T&y4u#oTPc~`(3ShUVp3klExS_1%n9pad2Z3nuqml zxZw$fL$Icr2knFAxuRpub7iV|4(`-EH$7Qvp7mF1&GXj(nlEXLF&Tpo_i=C+Et-cp z`DmW2;)ouFf<7TwbIn5lmx8&?c0+8^JQo!-568)wh%^s3k6H5=V-`BxN5P2rxBsX$ z&v$W>Ijs zPYIfbkUaQu2;X?2Rya)CBup#KbFkJbjIyP97=4$*xy?qXbKw@vL&&3f_QX*>qkWE( z=g~ZLUT7XVGix5RTiCa6UszgNs@nQ`{2$%yz~MeaUj0|CMbJE7JXC8QM)}n&88lA} zSo}Vkhv4l}^U(3o0Wq!g%mXp-8_mN7UXB~h!}s9(@V$)b3mY374)qCexX%Gb#9RPn zgv?0!Q*p%1$oVAn$=zF;XKxQR52O85^Khnu=Hbl6n#Xhjp-#9H@@6y+_l_YP2}ztd zp?OUVSZN-na-w;-JGL*HhqDf89?nJ>?Q<=H&gwHy)twjDDy(_T^73+6TU!gOtE-y| zh39g3POq=5tTfMVsTcjL{b^f0C(_TM@jg-7S9%7Z_t1D=+tKo4&bLoLhvVls(ql=# zli)eM&bDIQzNf|E-eZ{c!c9)hf1*1rjz#a+ch+a*zk_@qY?ps8y_e^Y28a7Dgk=n*d6@T(=3yQ@n&*T@^AP&!Y#wf) zDur{K4b8(eO^$)%k>;V`3TGxfC(Xls2%N#V;L32wLktvPK&YjG4EgJ6*oiy@^T&c< z*ZxcUQh27pt9|jWU_bfyNJ+P6yL?+7ryW-Qodw&a9Ubt#v}2hDhx@=tSK_?m`C9WZ z!aojua!;*Oq< zlZW}Jz|?k7+Lw+c4f1^7kaiNF_oYo8L>lCR#|r#$Va;dc`=#Zb#iTgg7YMDyiRQWW zXLYy}sN4n1h?-7-(LTY--LhS19!B!gJlr*0X`Xl{l4>4a=cY3>4>zG%^YoKR!hKwD z^)cMlA*Ju(@vOpK`2XHBnAr0lyzK$l`^YGR_6v{^m_jF%5 zrS~2S4tFyaG!Nwn(L7&?qh$)4Fv73&$vySY1Z}lML!)`pfIgPSlFF!?(rbBR1yE@o zW*eb-PP=zY^UxuDKAx>e^SG{V`w(Jzc=DKl6FgS%0oy9zCJlFBA8Bajp*yQ^=h)KE z;d9f^DFt*I?!flxHte6z(=lpaz;@~JO@qVT92GlO;*7z08JZ^ssWcBE%V;0XLr_nv zdAKgXXuR~=q)nQKwg1sP`2AG#a1BD5hYk$QEe-l8<$nA>huUEMgKOEr(`>Y{rn3rXDo&RSGg*P3TL4)<I7eYTnzYB^Hgql`F`DQ0!?ot& zdc~q@9*29kINVJyAsA657smj-^B46EPRxK|CC(cjuU}&(51J>ht6OQFER^TbqxPU6n!@kfh zrC@HmRhsAFEzN_9lIG#M0?S9Bc~n;y&11~5;BYs+1P#ON8ZNXFg44pHWoRC)#L1#& zjHp@Kn57S@y1LD1ztTK$#NCPJiJvLWgK|Rau*TJlDpv1hmd(!Iez&aOTjN=7}SEI-jbpZfyKs2z@#!X&%-W 
zLi6xDvgR?u;cj{dZk;<~6>`jXM_0@PR^mkS@Ofw+RX^Qf*an#Xl@7Y~QK=^eK`Q_nqDkg?7T|8_~f66dF%3gO}ytlKnC6Z{-I znx~6`E7Ckw!IgsMK}AXPaOQ&Malw^g;&3;;LwAiTu+S5Xn)?djNJtWvUp-uF9zvCs zIHh@*J%i>^!IdsF5223c;f~=Qnx~n~b7P}<=)~yETySOg;&3;;f}lh%8PYna?LaC5)HFz(mX1-!gY>Q8qLG;awa0pgSN8f=|2v4(<``%DfR7Y zgbsCEhk`VZR^q(z$y)QEPtZK7t4r{!;7V7TC$FoE=1J@7(jk2?y1J?6VG%fK z9!CDsJgTe9^j9-m9PXw!pehJ}Qee=ybZX#~*ts6ljDMr}t($=n%CMXH{_J(OT^=iss@qKl8io zgXZCE1I4Qnsd(*;X^_%!~wxv|L1hIA!PG|%Ur zi6iQV;~A3lNgPqD(iXq_t~C!G4;>J952Jaw$AIHxW)C-;@&AT2kL&6>+)a<*R)6MU zm;S(M{DBiRrhU*n=}MgZ{?RC`#7RT*@ZTbx%~LA4(snCp9!C4pJSeI~#o=yx0i(#1 z;z;aBH}|lOt08p@qhdzK(mZs!guOHmIz^gCb#=SYJg6PcNEQ``yXggt5($OUA8Fby zh@nBSnErwPPFLdm?9(A!`cSQTm>$W9T9vlYJWQi(Q}Yn+D5pim;cj}tcQ(fRb~UE> zal?$+T`=Q?4xAY`w_2j1(LAO9H_}GO!23CySX3PDrWY{PZIY21 z?v`yA`Hyx{;J>j};zaYX66YmxWX~)aX&%+pMe}emUYbW8#3JKxH=V;DFc;+VARpD3 z5y9>GW^u_^;zaW>MUo{NsH=Ng40g`)78!@T=^SQVaN1o8@=~K{HX}J>N}zdIiIebO zWE}3Mb0~(bf*g}HZl=M}8)HV}a5tU9RJm!U#Np=guEv6vmh`4tLWTtTSFJhCOLx*;gtP8DnWiQ;SkG+n8x_xSP&kmQ1M__M{D0&$0!^n4LJ>O(#$c zn>9d6p*>AI{XQ4XHl~Eb-E;y=yt1NSDX^z$tO!WOe`Cyc9PXwQ?s$H(;v7e6IQ!cU zH9IxNY~pY?oxsIVmSimj_B4&c4U}y&#%#yoZsra}7$|f-tD?y#?|0E`V{#nsX6~@g zHg~3$g8FD0MY9*V5VA2P9PVcBPz?K`8TfWJyt|8L8?zmUyO}#zu1{7Bd(w8*BXz(V zW43U(o4NA#J0YC7zphL_%SORu=FS^qY8>vy7&ACR_&-+2PBx6iS%Clm002ovPDHLk FV1lIasek|g literal 0 KcmV+b0RR6000031 diff --git a/working-groups/research/target.PNG b/working-groups/research/target.PNG new file mode 100644 index 0000000000000000000000000000000000000000..787bb1634d0f62718eff75f2f165b4bb91696699 GIT binary patch literal 3945 zcmV-v50>zWP)Px#1ZP1_K>z@;j|==^1poj532;bRa{vGi!vFvd!vV){sAK>D4*p3*K~#8N?VT~J z9M>6!4au)C2&W9V3H|}WO~9pbu)!E35JH6;3)`|AH7;bk2*w0ewh0I|wo?W@)LI@#Df(Sc= z5GF!|9YP2bA;JzJgozMghY-RA%rjyBJ2=Cm_jfMU1SN@bp z>@7kF-JD6-|NQUU=NjMKdv|f=%9Ul*zUx1H&^*8IP))qH`lk|KeD~qvR!=!|bSa|*Nr}t0$BJAxV!VY0NOiI{#{pjlD z)4rCl>E|5b;^REy)kgfvOPgcm(;6D1&_5RIgsrhY*T;6{_1cNgzP`A8^|`bWt=b4z 
z?V8tR%-+b)Smmg~*7{xh+Gj-AA@nmjVM|C7R24SDsgl-Mj@JpBG3qB=X5>-7*Q9Y>bjhc&N$-WAi@Y-rWdr+HW&n1C#%sySl>Q~Lo)3HL=hy2QO zsEDvb=x36`CXfrFaS7?ThD5}xq)4=c%{le!Tn)|-5w_>@I;mT^JS5vH>?&o}q@VLH z^SVF6>!eS46wKH9U3)pNh_FNGXR^YUY>^eOI#-3{v8k}hjUelIj$1W()LbrX^42F| z5Beoxh4QH$@~J+p!*R{)HFpVH^ZNF49uZ-O(9dLrO&~1JOGpGla%piaNlS3*rLCNV)~I=ztA1aHkgz#soaa05h_FNGXVStZ zsv$d z&1EgEJ%(uzVV}gd!cOcaTEgCx2)m8y&>q4xIeKC1haw8(tsk!FX;?C+&&Mi9eZrJzqv?dOmv!@=GUlY9V z_U+rNvx{YZuZcb%OMOuDt7g!(&|F>##^|FJf^mI&qpd!z3*=Eg_1FFRT<`Ps>w0r1 z!aj*EVb>>J51-Q#Heyy=PtZE1Uwxc+A?wLpV?D3;N7!w)6*kwlD_X+V9Q8@=RoE^= z+Q_O8#t=g5kPm7O+Nw#Sb2MMnx`BC;s``Bmj(r`qyY@z|dR^ym||%ibm35;o`5b|KfgYQLYKLrn1r8(Byu!X+YMyBO(r8MW{= zgw|@+={f1EH8F>rqE)wQg*qM>?{cgXhQ7T~mtQv@=hwKkKT+57nva19`y_@4yFLl~ zoP7%0^VWVpKZls2uw6m|5mjM3-p?`p3SYy;1d=S_a}>(is);sZL~G1CX|k@?0FCbw zcHJAHHMJMvx#~JXV_HGqF%V&&L>1!@Vbi~FVXF>bm+|V?xw*tth3(=27xFq`cfp)0 zY)O{ro8OW#ZLgyW6TDW&ZzF8wa*=vnb$!p(T3ghd$3}#`3jPoXijvy6orj+yicN%*j$2QsIa{jts^*A4IoK^i&4<_`D(lTm`5II zHNOjzzHNl9W98=9>slvl&|Jdz!(5$TcOBgXFe@VLFlQp{Fe|nZcKsH_uHhpNKI#}F`dFv+Eg9gt=-VH=#=nuG zZ;R~WW2LdO&3`E!VTV}}VQ-G=>XYQ$h3#KC+&2++2rDA&&Cxeo4n>4LoCrJ2id_hs zSENDIr%G<@R-bNo^?~H0P+nlX-?ZeCw4fz1g~X!sT37$rc>P2lbu4vgZT0a=Zr^WL zO+Z`!I4NAfwU7LDue{c40@`auIX!os+qy1wAH=O&w@&*(V|s+M??w*BI82V1YhD`B_VK_&67q*6@Dq zLC{_&gDOF_wLY*0H9_55y%rpMKIW>%dQE}0YE&*>YhECa*CmwKuLW(5U3XnL_V>!y z=yu`?y-KkMF|kg4zb^ldxKQs}k}WIHoQjhw=#KslV3i#DZ{6_qW|UL} zb2!I5*5H^qRZNWYH3`O4=|CN)&^mrj_3FH!&6*C^Mf+)82SWWEyTm-dWaapJbx(fH zm{+f<@Nt5%dOzuFh3c&DAIF-j+GtZ3d8ws!{lWLc*jh7rm{+gy5F+d_EA}O93Bhkp zzsda=TDj;ahtQ2L5di&+(@jpl-`Fk*KNgI2Sq`DrMcdZ|YV?|_gdB2t4W6HQb*}HR zYfq}FYj1jg9M;i$uvs;}-97^8Jt`=w7~$h8R(c9<3CC2UDZH+j9` zHQvwBCIn-c=QpixV$G8v1UJ9xkW^F?)baGyJyo<9LECcxIV2vf>!PAxb<{r3uVeKq zm)28Txj=iWH95N84)tp9>#hrZjB)7ayxxn}`^6eHFLh`R{km>~yjqL7UdK8j>@X|# zC2Ziv<7Omuqw2==V<9;y)OM6huomZBSmag?@0T2WEO?#FT}Q2pcAdv{@j4|)g$ug2 zYkuv4U>(|8uWL`7BTpCWgbjVyMKuyyyY*hs_Uizim+-yQSI4`sLG$XKTSVAlR%|2e zFiu)BK2{>^Fe@VL`B9%pkC_NN%!&wmKIkKf`Z3ip6Jdv05n+cpv$3#S|D*GX#I9mn 
zBJ40{&Lr&C|GiTpv28@yK_V60@I*_Rj2$X$Nf0>a0z!aQlDrDWR#9qP3= zubUj6hq2zzI<*$Z>+rfkdum-1$yKPWb$RtU9Q!r$v6`!~)S&AP)a7%hNQ9jjO^o3Q z+i^)D0+*feF-}WP$f63{`zwrh3ASWY*P)i7Qc})Vt-!gn2-}aH);e4dtpR?G1anxE zakbCuqR;nU>+HIQQzF7nj3&nLgv|}^fegBY217*(E~A$8bk0NKt73D!2IjQt1jf|- zoNLtrzNU+@HJ(0~G;_6vXkC-KFTX~DIjq^LpXj|v~?}O`;=Sry)MRR&$R}qH8WS^h0pPIftu7;b7_yd zX4L0iJ`dLkpXm zUGEn+f0Y!+>UX(G3ka5q5X9g#F;bgT=##4_B@A9HX%HwwIT2wG42f2+A6>nC+81H(CS2IR{`#NAd++^yapA(Ni#vCIUY%Xfu?d^M8DbwY z^^YI&bUh9t?1ACJMz$PF%Ik#9M<^0}gxwwdZT<4gy~SH^{cR~~#pj=Ywv_gexP1B2 zQGO&6j)Mq$V7RbpBVn~$!lte8r|}58JHG$pYfIs}tciEtd3*86Cszjt(hp&dMA!qP z3Y$R0$a&opc8xw^x6U6+=p&Iwj~=}wZl0V&2od(+XbD>%jp%7x=hq1v(bp&GR{moP z30p|q5q1bKh_DAnmry0^O@vLKk0+Qz__=62iuoI(+2s?y+BJ9LyBEk-#p9niKnuxGN=qJKXj3y%N z5c-L*6QhX;JA{5B?8Imy!VaOI2s<&Fh_FNGC&EsQCL-(*`iZa;qlpMRgnrH>>_lQ; z5klx@V_}C7LO2W&b_gL%ga|u?5GF!|9YP3`V6pfe2jR5-b-76v00000NkvXXu0mjf DT Date: Mon, 17 Aug 2020 17:09:08 +0800 Subject: [PATCH 6/8] Rename some invalid documents --- working-groups/research/{Topic10:AutoML.md => Topic10_AutoML.md} | 0 ...al Networks Training.md => Topic1_Low-bit-Neural-Networks-Training.md} | 0 .../{Topic2: Memory Optimization.md => Topic2_Memory-Optimization.md} | 0 .../research/{Topic3:Model Innovation.md => Topic3_Model-Innovation.md} | 0 ... for Scientific Computing.md => Topic4_AI-for-Scientific-Computing.md} | 0 ... Verifiable Trustworthy AI.md => Topic5_Verifiable-Trustworthy-AI.md} | 0 ... 
Confidential AI Computing.md => Topic6_Confidential-AI-Computing.md} | 0 ...Framework.md => Topic7_Tensor-Differentiable-Calculation-Framework.md} | 0 ...ework.md => Topic8_Distributed-and-Parallel-AI-Computing-Framework.md} | 0 .../research/{Topic9:Explainable AI.md => Topic9_Explainable-AI.md} | 0 10 files changed, 0 insertions(+), 0 deletions(-) rename working-groups/research/{Topic10:AutoML.md => Topic10_AutoML.md} (100%) rename working-groups/research/{Topic1: Low-bit Neural Networks Training.md => Topic1_Low-bit-Neural-Networks-Training.md} (100%) rename working-groups/research/{Topic2: Memory Optimization.md => Topic2_Memory-Optimization.md} (100%) rename working-groups/research/{Topic3:Model Innovation.md => Topic3_Model-Innovation.md} (100%) rename working-groups/research/{Topic4:AI for Scientific Computing.md => Topic4_AI-for-Scientific-Computing.md} (100%) rename working-groups/research/{Topic5: Verifiable Trustworthy AI.md => Topic5_Verifiable-Trustworthy-AI.md} (100%) rename working-groups/research/{Topic6: Confidential AI Computing.md => Topic6_Confidential-AI-Computing.md} (100%) rename working-groups/research/{Topic7: Tensor Differentiable Calculation Framework.md => Topic7_Tensor-Differentiable-Calculation-Framework.md} (100%) rename working-groups/research/{Topic8: Distributed And Parallel AI Computing Framework.md => Topic8_Distributed-and-Parallel-AI-Computing-Framework.md} (100%) rename working-groups/research/{Topic9:Explainable AI.md => Topic9_Explainable-AI.md} (100%) diff --git a/working-groups/research/Topic10:AutoML.md b/working-groups/research/Topic10_AutoML.md similarity index 100% rename from working-groups/research/Topic10:AutoML.md rename to working-groups/research/Topic10_AutoML.md diff --git a/working-groups/research/Topic1: Low-bit Neural Networks Training.md b/working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md similarity index 100% rename from working-groups/research/Topic1: Low-bit Neural Networks Training.md 
rename to working-groups/research/Topic1_Low-bit-Neural-Networks-Training.md diff --git a/working-groups/research/Topic2: Memory Optimization.md b/working-groups/research/Topic2_Memory-Optimization.md similarity index 100% rename from working-groups/research/Topic2: Memory Optimization.md rename to working-groups/research/Topic2_Memory-Optimization.md diff --git a/working-groups/research/Topic3:Model Innovation.md b/working-groups/research/Topic3_Model-Innovation.md similarity index 100% rename from working-groups/research/Topic3:Model Innovation.md rename to working-groups/research/Topic3_Model-Innovation.md diff --git a/working-groups/research/Topic4:AI for Scientific Computing.md b/working-groups/research/Topic4_AI-for-Scientific-Computing.md similarity index 100% rename from working-groups/research/Topic4:AI for Scientific Computing.md rename to working-groups/research/Topic4_AI-for-Scientific-Computing.md diff --git a/working-groups/research/Topic5: Verifiable Trustworthy AI.md b/working-groups/research/Topic5_Verifiable-Trustworthy-AI.md similarity index 100% rename from working-groups/research/Topic5: Verifiable Trustworthy AI.md rename to working-groups/research/Topic5_Verifiable-Trustworthy-AI.md diff --git a/working-groups/research/Topic6: Confidential AI Computing.md b/working-groups/research/Topic6_Confidential-AI-Computing.md similarity index 100% rename from working-groups/research/Topic6: Confidential AI Computing.md rename to working-groups/research/Topic6_Confidential-AI-Computing.md diff --git a/working-groups/research/Topic7: Tensor Differentiable Calculation Framework.md b/working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md similarity index 100% rename from working-groups/research/Topic7: Tensor Differentiable Calculation Framework.md rename to working-groups/research/Topic7_Tensor-Differentiable-Calculation-Framework.md diff --git a/working-groups/research/Topic8: Distributed And Parallel AI Computing Framework.md 
b/working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md similarity index 100% rename from working-groups/research/Topic8: Distributed And Parallel AI Computing Framework.md rename to working-groups/research/Topic8_Distributed-and-Parallel-AI-Computing-Framework.md diff --git a/working-groups/research/Topic9:Explainable AI.md b/working-groups/research/Topic9_Explainable-AI.md similarity index 100% rename from working-groups/research/Topic9:Explainable AI.md rename to working-groups/research/Topic9_Explainable-AI.md From 5aefec9d15a68ea26e6ef00cd4d5f3955fea6e8d Mon Sep 17 00:00:00 2001 From: leonwanghui Date: Mon, 17 Aug 2020 17:13:59 +0800 Subject: [PATCH 7/8] Remove useless meeting folder --- meetings/README.md | 3 --- 1 file changed, 3 deletions(-) delete mode 100644 meetings/README.md diff --git a/meetings/README.md b/meetings/README.md deleted file mode 100644 index 77666a2..0000000 --- a/meetings/README.md +++ /dev/null @@ -1,3 +0,0 @@ -## MindSpore community meeting - -Coming soon... 
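For reference, the rename rule applied by hand in the patch above (colons become underscores, spaces become hyphens, yielding filesystem-friendly names) can be sketched as a small shell helper. The `rename_topic` function below is hypothetical and not part of the repository; note the patch also lowercases "And" in Topic8, which this simple sketch does not reproduce.

```shell
# Hypothetical sketch of the topic-file rename rule: replace the first
# colon (and any following spaces) with an underscore, then turn the
# remaining spaces into hyphens.
rename_topic() {
  printf '%s\n' "$1" | sed -e 's/: */_/' -e 's/ /-/g'
}

rename_topic "Topic9:Explainable AI.md"     # -> Topic9_Explainable-AI.md
rename_topic "Topic2: Memory Optimization.md"   # -> Topic2_Memory-Optimization.md
```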
From ec9719f76f00a67087b9b65771b85f01247fb4f4 Mon Sep 17 00:00:00 2001 From: leonwanghui Date: Mon, 17 Aug 2020 21:24:04 +0800 Subject: [PATCH 8/8] Remove some useless files --- .sigs.md.swp | Bin 12288 -> 0 bytes design/meps/MEP-AKG.md | 36 +++++------ design/meps/MEP-MSLITE.md | 107 +++++++++++++------------------ sigs/modelzoo/README.md | 3 +- sigs/mslite/README.md | 6 +- sigs/mslite/meetings/meeting-template.md | 2 +- sigs/security/README.md | 2 +- 7 files changed, 65 insertions(+), 91 deletions(-) delete mode 100644 .sigs.md.swp diff --git a/.sigs.md.swp b/.sigs.md.swp deleted file mode 100644 index 2374ab44b2a146eeb02af24ee2aed918fcc821a8..0000000000000000000000000000000000000000 GIT binary patch literal 0 KcmV+b0RR6000031 literal 12288 zcmV+bF#peDFjh%TAUG~C000005C8z_S2|zT2SfsqcdH?_b000000000000000 z00000000000000000000000000001NdM!v$Ek$f;bY((sZ)+_$G%zhkHUIzs00000 z00000000000001fY)^I0000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 
z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 
z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000001VZ~*{+ z00002000000000S00000000010000000001000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 
z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 
z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 
z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000001CWB>rL2mk;)2><{95C8xi z000000002r4*&q(4*&q04*&p64*&rF4gdhJ4gdg?4gdg>4gdgw4gdgv4gdfP4gdhV z4FCXZ4FCXY4FCXI4FCXH4FCW+4FCWc4FCYU3;+OC3;+P`3jhF#3jhEW3jhG53IG5? z3IG7&2><|y2><{*2><{)2><{90000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 
z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z00000000000000000000000000000000000000000000000000000000000000000 z000000001dAWc(DNmNB3d>~Y4X>%Y`Nk<@Qb0BhMb8v5Nb7^91Wgup6av*eQWgui_ zc4cgDaBXF7bRchLAWdm*WK(c&a%CWFX>?^SAbbFPAVEt%ZQ zWpi+EZgXj3Y-J#3Z*m}XXk{Q|Wp-t3Z*Xm8Zge1TW*|*zZe&w%Z*pZIVRdwGAZulE zZe?sBXJu|>a$$6DaxNfz0DK@*Wn*=6X>@rYd>~Y4X>%Y`Nk<@Qb0BhMb8v5Nb7^91 zWgup6av*eQWgui_c4cgDaBXF7bRchLAWdm*WK(c&a%CWMWn*=6X>@rYa%F5`bY)~9 zbZ>8Lb1oo!0DK@;X>)a9Y-xI7bZKvHAbcQHXlZjGQ%Of4X>%ZQWpi+EZgXj3Y-J#3 zZ*m}XXk{Q|Wp-t3Z*Xm8Zge1TW*|*zZe&w%Z*pZIc4>2UVQgu7VRUJ4ZXk4TZ)|fe zAbbFPAV+dxaA-wtXK8L_AbcQHXlZjGQ%Of4X>%ZQWpi+EZgXj3Y-J#3Z*m}XXk{Q| 
zWp-t3Z*Xm8Zge1TW*|*zZe&w%Z*pZIXL4b1Xdq>7XK8L_AZBlJAVG6uWo~33K}jHR za&Kd0b8~NUE+BjWd>}+&bYUQTAXI2+b0AYmM<8i)AaZ4MaBpsNX<}?;AZBlJAarPD zAY^5BWo&P7ZDnqBAa7bYUQHa&Kd0b8~5KXCPs2WFTZ=bYUQ7 zZ*py6bRcwcVQzC~Z*py6bZKvHE+BjWd>~D4WMynxZ*L%cAXI2+b0AYmM<8i)AaZ4M zaBpsNX<}?;AZBlJAarPDAY^5BWo&P7ZDnqBAa7V>}Y#?uNb1oo!0DK@tcx7XCbZ>GXd>~Y4X>%Y`Nk<@Qb0BhMb8v5N zb7^91Wgup6av*eQWgui_c4cgDaBXF7bRchLAWdm*WK(c&a%CW5VPk78Wo~33b9HcV zZ*p`XW^ZyJaA|O5Y-w&~E+BjWd>}(_ZE$I9WpW^VAXI2+b0AYmM<8i)AaZ4MaBpsN zX<}?;AZBlJAarPDAY^5BWo&P7ZDnqBAa7%ZQWpi+EZgXj3 zY-J#3Z*m}XXk{Q|Wp-t3Z*Xm8Zge1TW*|*zZe&w%Z*pZIW^!+CbS-6WWFTdDaB^jH zb7^mGE+BjWd>}e4EiElAEiE8?AUZ8AEiElAEiElAEiElAEg*aVd>~UvM<8xtZDk;Q zAW~&>aBpsNX<}(?X>@62b0B;G03#zHLv?a;Wo~pJQ%Og2003!jAarPDAZ=l3Y-w(1 zAZ%%KbS@xMNk<@Ia&&2CVPkZ2AY)-}AYx@8W^Z+FWFTp7AarPDAX{^3XLBHOWpHnE zX>@OLd0i-TX=igOE&yq6AarPDAaZ4MaAjk3X?A5GY-w|JE+9l@bYW?1b0A@2Z*_Da zVQzUKb#P;EZE0?2AZ=x3bZKs9b0BwVY-}K5Y;$iQVr3w6Xkl_?WB^HKAbD?fAYpQ4 zAZc!NWpZV6bY)~9X>K5JVRCe7V`*?K5NXk{Q|X>((B zb8~5LZe1v7baZfYIxjD6VQFl4WnwOEX>Me5aBp&DE@N&laBp*TZ*pmGb#pIlX>)XQ zFDU>3BO@S6Z)t96Zf782AX7<4003`hAarPDAY*T2Wguy8AarPDX>uT8a%Ew2E&yq5 zaBO95Wo~pJV{c?-AY*TCbaH88b#!TOZgVamQ%Og2AYpQ4AYp8CZy<7Ib8v5Nb7^91 zWgup6av*PRXK!h4XCQ51X>N37Zeea?WdL+&Wgu{JZ)#;@bS@xMNk?-aXkm6`Aa8JG zZXjW9WFT~MVQzD9VRB_|bRckYZ)0U;WNB_^b0BnYAY^5BWo&P7AZKr3Y;z!CZe##S zZggdGWpi{OM{;j^}ZXk4MWgtyyZe&w% zZ*pZITW4=}WpZv|ZewL#C}(eWWpZv|ZewLGZDc7dAX9K ## Summary @@ -92,13 +90,13 @@ nitty-gritty. AKG aims to generate high-performance target code for fusing operators with specific patterns on different hardware backends. So three basic processes should be contained in akg as follows. - **Operator Expression.** AKG defines several basic operators which can be used to compose a complicated fused operator. These basic operators have the same granularity with MindSpore's IR. We introduce json to expressed the relation of the basic operators in one fused operator which brings weak dependency between MindSpore and AKG. 
- + - **Schedule initialization based on polyhedral techniques.** - + When AKG obtains the DSL of the operators to be fused, it transforms the operator DSL into formula IR (currently HalideIR, as in TVM) and then into an isl schedule tree. Next, the polyhedral scheduling process begins. With the help of the Pluto algorithm and other optimizations, the schedule tree undergoes transformations including vectorization, loop tiling, memory promotion and loop distribution, which help to improve parallelism and data locality. - **Emit instructions on different hardware from IR.** - + In order to generate correct and high-performance code for different hardware, the IR should be optimized accordingly, which consists of double buffer optimization, storage rewrite optimization and inject sync optimization. @@ -113,15 +111,15 @@ bogged down. #### Deep Graph Optimization -Since the network is becoming more deeper and larger, there are more opportunity to fused different operation into one to optimiza network performance. +Since networks are becoming deeper and larger, there are more opportunities to fuse different operations into one to optimize network performance. AKG has the ability to auto-generate target code from a composited DSL, without a scheduling procedure. After automatic operator fusion and operator re-composition at the graph level, AKG can generate high-performance target code for these composited patterns. #### Optimize Dynamic Neural Network -Networks are exhibiting more and more dynamism, especially in the fields of deep graph analysis and NLP. -Tensors in a model may have dynamic shapes such as batch size, image size, sequence length, etc. -Models are expressed with control-flow, such as recursion, conditionals and loops. +Networks are exhibiting more and more dynamism, especially in the fields of deep graph analysis and NLP. +Tensors in a model may have dynamic shapes such as batch size, image size, sequence length, etc. 
+Models are expressed with control-flow, such as recursion, conditionals and loops. With these different dynamic requirements, AKG can generate one general piece of target code on Davinci hardware (or other hardware) that serves different shapes of one common operator. ## Design Details @@ -135,12 +133,12 @@ proposal will be implemented, this is the place to discuss them. -AKG composes with four basic optimization module, normalization, auto schedule, instruction emit and backend optimization. -- **normalization.** The mainly optimization of normalization includes three address transform, common subexpression elimination, copy propagation and so on. -- **auto schedule.** The auto schedule module mainly have vectorization, loop tiling, mem promotion and loop distribution. -- **instruction emit.** The instruction emitting module has the optimization about loop normalization, auto pragma and emit instruction. +AKG is composed of four basic optimization modules: normalization, auto schedule, instruction emit and backend optimization. +- **normalization.** The main optimizations in normalization include three-address transform, common subexpression elimination, copy propagation and so on. +- **auto schedule.** The auto schedule module mainly performs vectorization, loop tiling, memory promotion and loop distribution. +- **instruction emit.** The instruction emitting module covers loop normalization, auto pragma and instruction emission. - **backend optimization.** The backend optimization module consists of double buffer optimization, storage rewrite optimization and inject sync optimization. - + When GraphKernel is enabled, ops are reconstructed at the graph level. The new ops, described in JSON format, will be translated into DSL in AKG and then compiled to the target binary. @@ -173,7 +171,7 @@ when drafting this test plan. 
AKG employs pytest and nose to launch the testing process, and there are three types of testing strategies in AKG: -- **Unit Test.** Every optimization or pass in AKG has its own unitest. +- **Unit Test.** Every optimization or pass in AKG has its own unit test. - **System test**. The AKG module has its own component testing. Basically, we classify the testing into compilation verification, function verification and performance testing. @@ -204,7 +202,7 @@ Major milestones might include -- The schedule generated directly by pluto algorithm during the polyhedral process would exist some issues on both correctness and performance in some scenarioes. So some extra passes have to added before emitting instructions. +- The schedule generated directly by the Pluto algorithm during the polyhedral process may have issues with both correctness and performance in some scenarios, so some extra passes have to be added before emitting instructions. ## Alternatives @@ -216,5 +214,5 @@ information to express the idea and why it was not acceptable. - Both TVM[1] and TC[2] are outstanding tools which can automatically synthesize high-performance machine learning kernels. However, neither of them can generate code for Davinci cores (CCE code), as Davinci cores have a more complicated multi-level memory design (L0-A/B/C, L1 and UB) as well as specific dataflow constraints. Besides, TVM adopts a schedule space model in which the schedule has to be written by hand, while AKG uses polyhedral techniques to initialize the schedule automatically, a design referenced from TC. 
 ## References
-- [1] https://github.com/apache/incubator-tvm 
+- [1] https://github.com/apache/incubator-tvm
 - [2] https://github.com/facebookresearch/TensorComprehensions
diff --git a/design/meps/MEP-MSLITE.md b/design/meps/MEP-MSLITE.md
index 78a5b77..096c932 100644
--- a/design/meps/MEP-MSLITE.md
+++ b/design/meps/MEP-MSLITE.md
@@ -3,20 +3,20 @@
 | ------- | -------------------------------- | ---------- | ------------------ | ----------- | ------------- | --------- | --------- | ----- | ------------- |
 | MEP-mslite | @zhengli  @zhiqiangzhai @chaijun | mslite | | provisional | 2020-08-18 | | TBD | beta | beta : "v0.7" |
 
-# MEP-mslite: MindSpore Lite
+# MEP-MSLITE: MindSpore Lite
 
 ## Table of Contents
 
-- [MEP-mslite: MindSpore Lite](#mep-mindspore-lite)
+- [MEP-MSLITE: MindSpore Lite](#mep-mslite-mindspore-lite)
 - [Table of Contents](#table-of-contents)
 - [Summary](#summary)
 - [Motivation](#motivation)
 - [Goals](#goals)
 - [Non-Goals](#non-goals)
 - [Proposal](#proposal)
-  - [User Stories](#user-stories-optional)
+  - [User Stories](#user-stories)
 - [Generate a compact target model and low latency and low consumption runtime](#generate-a-compact-target-model-and-low-latency-and-low-consumption-runtime)
 - [Design Details](#design-details)
 - [Test Plan](#test-plan)
@@ -24,28 +24,22 @@
 - [Drawbacks](#drawbacks)
 - [Alternatives](#alternatives)
 - [References](#references-optional)
-  
+
 ## Summary
 
-MindSpore(MS) lite is an extremely light-weight deep learning inference framework, 
-and designed for smart-phones and embedded devices, such as watches, headsets, and various IoT devices. 
-It supports Android and iOS, as well as Harmony os, and has industry leading performance. 
-
+MindSpore (MS) lite is an extremely light-weight deep learning inference framework designed for smart-phones and embedded devices, such as watches, headsets, and various IoT devices.
+It supports Android and iOS, as well as Harmony OS, and has industry-leading performance.
+
 ## Motivation
 
-Since increased computing power and sensor data, intelligence is moving towards edge devices. 
-Improved AI algorithms are driving the trend towards machine learning be run on 
-the end device, such as smart-phones or automobiles, rather than in the cloud. 
-On-device AI can dramatically reduce latency, conserve bandwidth, 
-improve privacy and enable smarter applications. 
+With increasing computing power and sensor data, intelligence is moving towards edge devices. Improved AI algorithms are driving the trend towards machine learning being run on the end device, such as smart-phones or automobiles, rather than in the cloud.
+On-device AI can dramatically reduce latency, conserve bandwidth, improve privacy and enable smarter applications.
 
 ### Goals
 
 - Compatibility: supports MindSpore model, as well as mainstream third-party models, such as TensorFlow lite, Caffe 1.0 and ONNX.
-- High-performance: 
-generates small, low power consumption and fast inference target model for various hardware backends. 
+- High-performance: generates a small, low-power-consumption and fast-inference target model for various hardware backends.
 - Versatility: supports Harmony, Android and iOS os.
-- Light-weight: small shared library size, should be less than 1 MB, and could be easily deployed on 
-resource limited devices. 
+- Light-weight: small shared library size, should be less than 1 MB, and can be easily deployed on resource-limited devices.
 
 ### Non-Goals
 - None
@@ -53,93 +47,78 @@ resource limited devices.
 
 ## Proposal
 
 MS lite consists of converter and a runtime library.
-The converter is an offline tool can handle most of the model translation work. 
-The runtime library deploys to device and executes online, 
-it has Lite RT and Lite Micro two modes. 
-Lite RT is for slightly resource limited devices, such as smart-phones, 
-while Lite Micro is for extremely resource limited devices, such as watches, headsets.
+The converter is an offline tool that can handle most of the model translation work.
+The runtime library deploys to the device and executes online; it has two modes, Lite RT and Lite Micro.
+Lite RT is for slightly resource-limited devices, such as smart-phones, while Lite Micro is for extremely resource-limited devices, such as watches and headsets.
 
 - Compatibility
 
-    provides an abundant of operator parsers for MindSpore, Tensorflow Lite, Caffe, ONNX, 
+    provides an abundance of operator parsers for MindSpore, TensorFlow Lite, Caffe, ONNX,
 and supports common neural networks in CV and NLP, 208+ CPU operators, and 60+ GPU operators.
-    
+
 - High performance
 
     Many optimization methods, including graph optimizations, post training quantization, are applied to model in offline converter, and generated target model is more compact.
     Graph optimizations, such as operator fusion and constant folding, make model more compact.
-    Post training quantization transfers fp32 model into fix-point int8 model. 
-    It brings nearly 4x smaller model size, low latency and low consumption for inference process. 
-    
-    MS lite also applies a variety of optimization schemes to NN operations, including using Winograd 
-algorithm in convolution and deconvolution, Strassen algorithm in matrix multiplication. 
-Operations support fp64, fp32, fp16 and int8, and are highly optimized with acceleration by 
-neon instructions, hand-written assemble, multi-thread, memory reuse, heterogeneous computing, etc. 
+    Post training quantization transforms the fp32 model into a fixed-point int8 model.
+    It brings a nearly 4x smaller model size, low latency and low consumption for the inference process.
+
+    MS lite also applies a variety of optimization schemes to NN operations, including the Winograd algorithm in convolution and deconvolution and the Strassen algorithm in matrix multiplication.
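The post training quantization step described above can be sketched as a per-tensor affine mapping from fp32 to int8. This is a simplified illustration under stated assumptions (one scale and zero-point per tensor, no calibration or per-channel scales); it is not MS lite's actual implementation:

```python
import numpy as np

# Sketch of per-tensor asymmetric post-training quantization: map fp32
# weights onto int8 with a single scale and zero-point. Simplified
# illustration only, not MS lite's actual quantizer.

def quantize(w, qmin=-128, qmax=127):
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize(w)
assert w.nbytes == 4 * q.nbytes       # int8 storage is 4x smaller than fp32
err = np.max(np.abs(dequantize(q, scale, zp) - w))
print(q, err)                         # round-trip error is on the order of scale/2
```

The 4x size reduction claimed in the text falls directly out of the storage types: four fp32 bytes become one int8 byte per weight, at the cost of a bounded rounding error.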
+
+Operations support fp64, fp32, fp16 and int8, and are highly optimized with acceleration by NEON instructions, hand-written assembly, multi-threading, memory reuse, heterogeneous computing, etc.
 
-- Versatility 
+- Versatility
 
     Supports Harmony, iOS and Android os, supports smart-phones, watches, headsets, and various IoT devices.
-    
+
 - Light weight
 
-    MS lite is highly Optimized under GHLO and GLLO. It has small foot-print, 
-    MS lite runtime is about 800 kB, and MS Micro is less than 200 KB. 
-    It is flexible and can easily deploy to mobile and a variety of embedded devices. 
+    MS lite is highly optimized under GHLO and GLLO. It has a small footprint:
+    MS lite runtime is about 800 KB, and MS Micro is less than 200 KB.
+    It is flexible and can be easily deployed to mobile and a variety of embedded devices.
 
 ### User Stories
 
 #### Generate a compact target model and low latency and low consumption runtime
 
-Since devices has limited resource with few ROM, RAM, and power, how to deploy AI model to 
-device is very challenge. MS lite aims to solve the challenge for users, and provides user-friendly, 
-flexible tool to help users to make their own models more slim and more efficiency. 
-
+Since devices have limited ROM, RAM, and power, deploying AI models to
+devices is very challenging. MS lite aims to solve this challenge for users, and provides a user-friendly, flexible tool to help users make their own models slimmer and more efficient.
+
 ## Design Details
 
-MS lite consists of converter and runtime. 
+MS lite consists of converter and runtime.
 The converter is an offline tool that has three parts, frontend, IR, and backend.
 Runtime deploys to device and executes online.
 
-- **Frontend.** Frontend aims to parse model from MindSpore, Tensorflow Lite, Caffe and ONNX in protobuf.
+- **Frontend.** Frontend aims to parse models from MindSpore, TensorFlow Lite, Caffe and ONNX in protobuf.
 - **IR.** IR is to define ANF, including tensor, operations, and graph.
-- **Backend.** Backend is an optimizer based ANF graph, including GHLO, GLLO, and quantization. 
-    GHLO is short for "graph high level optimization", common optimization methods, 
-    such as operators fusion, operator substitution, and constant folding, are included. 
-    GLLO is short for "graph low level optimization", low level optimization methods 
-    are related to hardware, such as layout adjustment, mixed-precision, etc. 
-
+- **Backend.** Backend is an optimizer based on the ANF graph, including GHLO, GLLO, and quantization. `GHLO` is short for "graph high level optimization"; common optimization methods, such as operator fusion, operator substitution, and constant folding, are included. `GLLO` is short for "graph low level optimization"; low level optimization methods are related to hardware, such as layout adjustment, mixed-precision, etc.
 - **Runtime.** Runtime has two modes, Lite RT and Lite Micro.
-
-
+
 ### Test Plan
 
-MS lite employed pytests and nosetest to launch the testing process, 
-and there are two types of testing strategies in MS lite: 
-
-- **Unit Test.** Every operation, optimization or pass in MS has its own unitest.
+MS lite employs pytest and nosetest to launch the testing process, and there are two types of testing strategies in MS lite:
 
-- **System test**. The ms lite module has its own component testing. 
-Basically we classify the testing into compilation verification, 
-function verification and performance testing. 
+- **Unit Test.** Every operation, optimization or pass in MS has its own unit test.
+- **System test**. The MS lite module has its own component testing. Basically we classify the testing into compilation verification, function verification and performance testing.
 
 ## Implementation History
 
 - Support high and low level graph optimization.
 - Support post training quantization.
-- Support Arm CPU and Mali GPU. 
+- Support Arm CPU and Mali GPU.
 - Support fp64, fp32, fp16, int8 operations.
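One of the GHLO optimizations named above, constant folding, can be sketched on a toy expression graph. The tuple node format below is invented for illustration and is not MS lite's ANF IR:

```python
# Toy constant-folding pass over an expression graph, in the spirit of the
# GHLO optimizations described above; the node format is illustrative only.

import operator

OPS = {"add": operator.add, "mul": operator.mul}

def constant_fold(node):
    """Node is a number, a variable name, or a tuple (op, lhs, rhs)."""
    if not isinstance(node, tuple):
        return node
    op, lhs, rhs = node
    lhs, rhs = constant_fold(lhs), constant_fold(rhs)
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return OPS[op](lhs, rhs)   # both inputs known: evaluate at compile time
    return (op, lhs, rhs)

# (x + (2 * 3)) folds to (x + 6) even though x stays unknown.
print(constant_fold(("add", "x", ("mul", 2, 3))))  # ('add', 'x', 6)
```

Operator fusion and substitution work on the same graph representation: instead of evaluating a node, they replace a matched sub-graph with a cheaper equivalent node.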
 ## Drawbacks
 
 - MS lite does not support on-device training yet; it is coming soon...
 
 ## Alternatives
 
-- MNN[1], TF lite[2] and TNN[3] are outstanding on-device AI frameworks. 
-MS lite is for on-device AI, and MS cloud is for on-cloud AI, 
-both of them are in scope of Huawei's MindSpore AI framework. 
-They share same IR, and optimization passes. MS lite is more flexible. 
+- MNN[1], TF lite[2] and TNN[3] are outstanding on-device AI frameworks.
+MS lite is for on-device AI, and MS cloud is for on-cloud AI; both of them are within the scope of Huawei's MindSpore AI framework.
+They share the same IR and optimization passes. MS lite is more flexible.
 
 ## References
 
-- [1] https://github.com/alibaba/MNN 
+- [1] https://github.com/alibaba/MNN
 - [2] https://www.tensorflow.org/lite
-- [3] https://github.com/Tencent/TNN 
+- [3] https://github.com/Tencent/TNN
diff --git a/sigs/modelzoo/README.md b/sigs/modelzoo/README.md
index 21eb392..34965fb 100644
--- a/sigs/modelzoo/README.md
+++ b/sigs/modelzoo/README.md
@@ -1,7 +1,7 @@
 # MindSpore ModelZoo Special Interest Group (SIG)
 
 This is the working repo for the ModelZoo special interest group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **state-of-the-art deep learning models** and **implementations** in MindSpore. Feedbacks and contributions are welcome.
-1. **State-of-the-Art Deep Learning Models**: It covers typical deep learning models in image classification, object detection and segmentation, and natural language processing. These models are intended to be well-maintained, tested and kept up to date with the latest Mindspore API. 
+1. **State-of-the-Art Deep Learning Models**: It covers typical deep learning models in image classification, object detection and segmentation, and natural language processing. These models are intended to be well-maintained, tested and kept up to date with the latest MindSpore API.
 2. 
**Implementations**: It provides a collection of example implementations for the models powered by MindSpore high-level APIs. Before implementing the model, make sure that the operations used in the model architecture and data processing pipeline are supported in MindSpore. Users can choose the related model to perform end-to-end training and do evaluation on a new dataset.
 
 # SIG Leads
@@ -22,4 +22,3 @@ This is the working repo for the ModelZoo special interest group (SIG). This rep
 # Meeting notes
 * [Saturday May 16, 2020](./meetings/001-20200516.md)
-
diff --git a/sigs/mslite/README.md b/sigs/mslite/README.md
index 0c24138..6b2f946 100644
--- a/sigs/mslite/README.md
+++ b/sigs/mslite/README.md
@@ -1,8 +1,8 @@
 # MindSpore Lite Special Interest Group (SIG)
 
 This is the working repo for the mslite Special Interest Group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **MS Lite Converter** , **MS Lite Runtime**. Feedbacks and contributions are welcomed.
-1. **Converter**: converter is an offline tool has three parts, frontend, IR, and backend, aims to generate a compact model with applying graph optimizations and post training quantization. 
-2. **Runtime**: runtime deploys to device and executes online, has Lite RT and Lite Micro two modes. 
+1. **Converter**: converter is an offline tool that has three parts, frontend, IR, and backend; it aims to generate a compact model by applying graph optimizations and post training quantization.
+2. **Runtime**: runtime deploys to the device and executes online, and has two modes, Lite RT and Lite Micro.
 
 # SIG Leads
 
@@ -20,5 +20,3 @@ This is the working repo for the mslite Special Interest Group (SIG). 
This repo
 * Documents and artifacts: https://gitee.com/mindspore/community/tree/master/sigs/mslite
 
 # Meeting notes
-
-
diff --git a/sigs/mslite/meetings/meeting-template.md b/sigs/mslite/meetings/meeting-template.md
index 20e8c79..0c85d75 100644
--- a/sigs/mslite/meetings/meeting-template.md
+++ b/sigs/mslite/meetings/meeting-template.md
@@ -4,7 +4,7 @@
 
 ## Conference links
 
-## Attendees 
+## Attendees
 * Tom (Huawei)
 
 ## Notes
diff --git a/sigs/security/README.md b/sigs/security/README.md
index 8b7c5f8..c2f15a8 100644
--- a/sigs/security/README.md
+++ b/sigs/security/README.md
@@ -1,7 +1,7 @@
 # MindSpore Security Special Interest Group (SIG)
 
 This is the working repo for the MindArmour special interest group (SIG). This repo contains all the artifacts, materials, meeting notes and proposals regarding **model security** and **Data privacy protection** in MindSpore. Feedbacks and contributions are welcome.
-1. **model security**: The model security contains four features: attack, detect, defense and evaluate. 
+1. **model security**: Model security contains four features: attack, detection, defense and evaluation.
 2. **Data privacy protection**: We will implement this feature very soon.
 
 # SIG Leads