diff --git a/README.md b/README.md
index 9f58382..f992fe7 100644
--- a/README.md
+++ b/README.md
@@ -12,17 +12,50 @@
 ## What is MindArmour

-A tool box for MindSpore users to enhance model security and trustworthiness and protect privacy data.
+MindArmour focuses on the security and privacy of artificial intelligence. It can be used as a toolbox for MindSpore users to enhance model security and trustworthiness and to protect privacy data.
+MindArmour contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.

-MindArmour model security module is designed for adversarial examples, including four submodule: adversarial examples generation, adversarial examples detection, model defense and evaluation. The architecture is shown as follow:
+### Adversarial Robustness Module

-![mindarmour_architecture](docs/mindarmour_architecture.png)
+The Adversarial Robustness Module evaluates the robustness of a model against adversarial examples,
+and provides model enhancement methods that strengthen the model's ability to resist adversarial attacks and improve its robustness.
+This module includes four submodules: Adversarial Examples Generation, Adversarial Examples Detection, Model Defense and Evaluation.

-MindArmour differential privacy module Differential-Privacy implements the differential privacy optimizer. Currently, SGD, Momentum and Adam are supported. They are differential privacy optimizers based on the Gaussian mechanism.
-This mechanism supports both non-adaptive and adaptive policy. Rényi differential privacy (RDP) and Zero-Concentrated differential privacy(ZDP) are provided to monitor differential privacy budgets. The architecture is shown as follow:
+The architecture is shown as follows:
+
+![mindarmour_architecture](docs/adversarial_robustness_en.png)
+
+### Fuzz Testing Module
+
+The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, it introduces neuron coverage gain as a guide for fuzz testing.
+Guided by coverage gain, the fuzzer generates samples that increase the neuron coverage rate, so that inputs activate more neurons and neuron values cover a wider distribution range, which helps to test the neural network thoroughly and to explore different types of model outputs and erroneous behaviors.
+
+The architecture is shown as follows:
+
+![fuzzer_architecture](docs/fuzzer_architecture_en.png)
+
+### Privacy Protection and Evaluation Module
+
+The Privacy Protection and Evaluation Module consists of two submodules: the Differential Privacy Training Module and the Privacy Leakage Evaluation Module.
+
+#### Differential Privacy Training Module
+
+The Differential Privacy Training Module implements differentially private optimizers. Currently, SGD, Momentum and Adam are supported. These optimizers are based on the Gaussian mechanism.
+The mechanism supports both non-adaptive and adaptive noise policies. Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) are provided to monitor differential privacy budgets.
+
+The architecture is shown as follows:

 ![dp_architecture](docs/differential_privacy_architecture_en.png)

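+The sketch below is for illustration only and is not the MindArmour API: it shows, in plain NumPy, the Gaussian-mechanism gradient computation such optimizers rely on (clip per-sample gradients, add calibrated Gaussian noise, average).
+
+```python
+import numpy as np
+
+def dp_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0):
+    """Gaussian-mechanism DP gradient: clip per-sample gradients, add noise, average.
+
+    per_sample_grads: array of shape (batch_size, num_params).
+    """
+    # 1. Clip each sample's gradient so its L2 norm is at most clip_norm.
+    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
+    clipped = per_sample_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
+
+    # 2. Add Gaussian noise calibrated to the clipping bound (the Gaussian mechanism).
+    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
+
+    # 3. Average over the batch; the result is fed to SGD, Momentum or Adam.
+    return (clipped.sum(axis=0) + noise) / per_sample_grads.shape[0]
+```
+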
+#### Privacy Leakage Evaluation Module
+
+The Privacy Leakage Evaluation Module assesses the risk of a model revealing user privacy. It uses membership inference, which infers whether a sample belongs to the training dataset, to evaluate how well a deep learning model protects the privacy of its data.
+
+The architecture is shown as follows:
+
+![privacy_leakage](docs/privacy_leakage_en.png)
+
+
 ## Setting up MindArmour

 ### Dependencies
diff --git a/README_CN.md b/README_CN.md
index c526a4b..aae9618 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -12,16 +12,42 @@
 ## Overview

-MindArmour can be used to enhance model security and trustworthiness and to protect users' data privacy.
+MindArmour focuses on the security and privacy of AI. It is dedicated to enhancing model security and trustworthiness and protecting users' data privacy. It mainly contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.

-Model security mainly targets adversarial examples and contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation. The adversarial example architecture is shown below:
+### Adversarial Robustness Module
+The Adversarial Robustness Module evaluates a model's robustness against adversarial examples and provides model enhancement methods to strengthen its resistance to adversarial attacks and improve its robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.

-![mindarmour_architecture](docs/mindarmour_architecture_cn.png)
+The architecture of the Adversarial Robustness Module is shown below:

-Privacy protection supports differential privacy, including adaptive and non-adaptive differentially private SGD, Momentum and Adam optimizers; the noise mechanisms support Gaussian and Laplacian noise, and differential privacy budget monitoring includes ZDP and RDP. The differential privacy architecture is shown below:
+![mindarmour_architecture](docs/adversarial_robustness_cn.png)
+
+### Fuzz Testing Module
+The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, it introduces neuron coverage as a guide for fuzz testing and steers the fuzzer to generate samples that increase neuron coverage, so that inputs activate more neurons and neuron values cover a wider distribution range, which helps to test the neural network thoroughly and to explore different types of model outputs and erroneous behaviors.
+
+The architecture of the Fuzz Testing Module is shown below:
+
+![fuzzer_architecture](docs/fuzzer_architecture_cn.png)
+
+### Privacy Protection Module
+
+The Privacy Protection Module covers differential privacy training and privacy leakage evaluation.
+
+#### Differential Privacy Training Module
+
+Differential privacy training includes adaptive and non-adaptive differentially private SGD, Momentum and Adam optimizers; the noise mechanisms support Gaussian and Laplacian noise, and differential privacy budget monitoring includes ZCDP and RDP.
+
+The differential privacy architecture is shown below:

 ![dp_architecture](docs/differential_privacy_architecture_cn.png)

+#### Privacy Leakage Evaluation Module
+
+The Privacy Leakage Evaluation Module assesses the risk of a model revealing user privacy. It uses membership inference, which infers whether a sample belongs to the user's training dataset, to evaluate how well a deep learning model protects the privacy of its data.
+
+The architecture of the Privacy Leakage Evaluation Module is shown below:
+
+![privacy_leakage](docs/privacy_leakage_cn.png)
+
 ## Getting Started
diff --git a/docs/adversarial_robustness_cn.png b/docs/adversarial_robustness_cn.png
new file mode 100644
index 0000000..e47ed78
Binary files /dev/null and b/docs/adversarial_robustness_cn.png differ
diff --git a/docs/adversarial_robustness_en.png b/docs/adversarial_robustness_en.png
new file mode 100644
index 0000000..5caa0e2
Binary files /dev/null and b/docs/adversarial_robustness_en.png differ
diff --git a/docs/differential_privacy_architecture_cn.png b/docs/differential_privacy_architecture_cn.png
index 8a423a2..ef8bda1 100644
Binary files a/docs/differential_privacy_architecture_cn.png and b/docs/differential_privacy_architecture_cn.png differ
diff --git a/docs/differential_privacy_architecture_en.png b/docs/differential_privacy_architecture_en.png
index 0240d22..f3da383 100644
Binary files a/docs/differential_privacy_architecture_en.png and b/docs/differential_privacy_architecture_en.png differ
diff --git a/docs/fuzzer_architecture_cn.png b/docs/fuzzer_architecture_cn.png
new file mode 100644
index 0000000..d714866
Binary files /dev/null and b/docs/fuzzer_architecture_cn.png differ
diff --git a/docs/fuzzer_architecture_en.png b/docs/fuzzer_architecture_en.png
new file mode 100644
index 0000000..1f4c413
Binary files /dev/null and b/docs/fuzzer_architecture_en.png differ
diff --git a/docs/mindarmour_architecture.png b/docs/mindarmour_architecture.png
deleted file mode 100644
index 5cbcf9b..0000000
Binary files a/docs/mindarmour_architecture.png and /dev/null differ
diff --git a/docs/mindarmour_architecture_cn.png b/docs/mindarmour_architecture_cn.png
deleted file mode 100644
index 9edccab..0000000
Binary files a/docs/mindarmour_architecture_cn.png and /dev/null differ
diff --git a/docs/privacy_leakage_cn.png b/docs/privacy_leakage_cn.png
new file mode 100644
index 0000000..99f4e4c
Binary files /dev/null and b/docs/privacy_leakage_cn.png differ
diff --git a/docs/privacy_leakage_en.png b/docs/privacy_leakage_en.png
new file mode 100644
index 0000000..1092978
Binary files /dev/null and b/docs/privacy_leakage_en.png differ
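
The coverage-guided fuzz testing described in the README text above amounts to an accept/reject loop over mutated inputs. The sketch below is a library-agnostic illustration, not the MindArmour Fuzz Testing API; `mutate`, `run_model`, and `neuron_coverage` are hypothetical callbacks supplied by the caller.

```python
import random

def coverage_guided_fuzz(seeds, mutate, run_model, neuron_coverage, rounds=1000):
    """Keep a mutated input only if it increases neuron coverage.

    mutate(sample) returns a perturbed copy of a sample (e.g. noise, blur, affine).
    run_model(sample) returns the per-neuron activations for that sample.
    neuron_coverage(activations) maps accumulated activations to a rate in [0, 1].
    All three are hypothetical callbacks, not MindArmour functions.
    """
    corpus = list(seeds)
    activations = [run_model(s) for s in corpus]
    best_coverage = neuron_coverage(activations)

    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        candidate_acts = run_model(candidate)
        coverage = neuron_coverage(activations + [candidate_acts])
        if coverage > best_coverage:  # coverage gain guides the search
            corpus.append(candidate)
            activations.append(candidate_acts)
            best_coverage = coverage
    return corpus, best_coverage
```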