
update readme

tags/v1.2.1
ZhidanLiu 4 years ago
parent
commit
8fdbaffa12
12 changed files with 68 additions and 9 deletions

  1. README.md (+38, -5)
  2. README_CN.md (+30, -4)
  3. docs/adversarial_robustness_cn.png (BIN)
  4. docs/adversarial_robustness_en.png (BIN)
  5. docs/differential_privacy_architecture_cn.png (BIN)
  6. docs/differential_privacy_architecture_en.png (BIN)
  7. docs/fuzzer_architecture_cn.png (BIN)
  8. docs/fuzzer_architecture_en.png (BIN)
  9. docs/mindarmour_architecture.png (BIN)
  10. docs/mindarmour_architecture_cn.png (BIN)
  11. docs/privacy_leakage_cn.png (BIN)
  12. docs/privacy_leakage_en.png (BIN)

README.md (+38, -5)

@@ -12,17 +12,50 @@

## What is MindArmour

MindArmour focuses on the security and privacy of artificial intelligence. It serves as a toolbox for MindSpore users to enhance model security and trustworthiness and to protect private data.
MindArmour contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.

### Adversarial Robustness Module

The Adversarial Robustness Module evaluates a model's robustness against adversarial examples,
and provides model enhancement methods to strengthen the model's resistance to adversarial attacks and improve its robustness.
This module includes four submodules: Adversarial Example Generation, Adversarial Example Detection, Model Defense, and Evaluation.

The architecture is shown as follows:

![mindarmour_architecture](docs/adversarial_robustness_en.png)
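The idea behind the Adversarial Example Generation submodule can be illustrated with the classic Fast Gradient Sign Method (FGSM). The sketch below is not the MindArmour API; it is a minimal NumPy illustration, and the function name and arguments are made up for this example.

```python
import numpy as np

def fgsm_attack(x, grad, eps=0.1, clip_min=0.0, clip_max=1.0):
    """Fast Gradient Sign Method: take one step of size eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    x_adv = x + eps * np.sign(grad)
    # Keep the adversarial example inside the valid input range.
    return np.clip(x_adv, clip_min, clip_max)

# Toy input and loss gradient (in practice the gradient comes from the model).
x = np.array([0.2, 0.5, 0.9])
grad = np.array([0.3, -0.7, 0.1])
x_adv = fgsm_attack(x, grad, eps=0.1)
```

A detector or defense submodule would then, respectively, flag such perturbed inputs or train the model on them (adversarial training).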

### Fuzz Testing Module

The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, we introduce neuron coverage gain as a guide for fuzz testing.
The fuzzer is guided to generate samples in the direction of increasing neuron coverage, so that inputs activate more neurons and neuron values spread over a wider range, thoroughly testing the neural network and exploring different kinds of model outputs and erroneous behaviors.

The architecture is shown as follows:

![fuzzer_architecture](docs/fuzzer_architecture_en.png)
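The coverage-guided loop described above can be sketched as follows. This is a toy illustration, not MindArmour's fuzzer: `model` stands in for any function returning per-neuron activations, and the mutation is plain Gaussian noise.

```python
import numpy as np

def neuron_coverage(covered):
    """Fraction of neurons activated above the threshold so far."""
    return covered.mean()

def coverage_guided_fuzz(model, seed, rounds=200, threshold=0.5, rng=None):
    """Mutate corpus inputs and keep a mutant only if it activates a
    previously uncovered neuron (i.e. yields a neuron coverage gain)."""
    rng = rng or np.random.default_rng(0)
    covered = model(seed) > threshold       # which neurons fired so far
    corpus = [seed]
    for _ in range(rounds):
        parent = corpus[rng.integers(len(corpus))]
        mutant = parent + rng.normal(scale=0.5, size=parent.shape)
        fired = model(mutant) > threshold
        if np.any(fired & ~covered):        # coverage gain: keep the mutant
            covered |= fired
            corpus.append(mutant)
    return corpus, covered

# Toy "network": per-neuron activations are just tanh of the input.
corpus, covered = coverage_guided_fuzz(np.tanh, np.zeros(8))
```

Mutants retained in `corpus` are the inputs that exercised new behavior; in a real fuzzer they would also be checked for wrong model outputs.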

### Privacy Protection and Evaluation Module

The Privacy Protection and Evaluation Module comprises two submodules: the Differential Privacy Training Module and the Privacy Leakage Evaluation Module.

#### Differential Privacy Training Module

The Differential Privacy Training Module implements differentially private optimizers. Currently, SGD, Momentum, and Adam are supported; they are based on the Gaussian mechanism.
The mechanism supports both non-adaptive and adaptive noise policies. Rényi differential privacy (RDP) and zero-concentrated differential privacy (zCDP) accounting are provided to monitor the differential privacy budget.

The architecture is shown as follows:

![dp_architecture](docs/differential_privacy_architecture_en.png)
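A single step of a Gaussian-mechanism DP optimizer follows the pattern below: per-example gradient clipping bounds each sample's contribution (the sensitivity), then calibrated Gaussian noise is added to the averaged gradient. This is a hand-rolled NumPy sketch, not MindArmour's optimizer implementation; names and hyperparameters are illustrative.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise with std proportional to
    noise_multiplier * clip_norm."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = mean_grad + rng.normal(scale=sigma, size=mean_grad.shape)
    return params - lr * noisy_grad

params = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 4.0, 0.0])]
new_params = dp_sgd_step(params, grads)
```

An RDP or zCDP accountant would track how much privacy budget each such noisy step consumes over the course of training.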

#### Privacy Leakage Evaluation Module

The Privacy Leakage Evaluation Module assesses the risk of a model revealing user privacy. It uses membership inference to determine whether a given sample belongs to the model's training dataset, thereby evaluating the privacy security of a deep learning model's data.

The architecture is shown as follows:

![privacy_leakage](docs/privacy_leakage_en.png)
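A minimal form of membership inference is a loss-threshold attack: because models usually fit their training data more closely than unseen data, a low loss suggests the sample was a training member. The sketch below is illustrative only and is not MindArmour's evaluation API.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership: loss below threshold => likely a training member."""
    return losses < threshold

# Toy losses: members (seen during training) tend to score lower.
member_losses = np.array([0.05, 0.10, 0.08])
non_member_losses = np.array([0.90, 0.60, 1.20])
preds = loss_threshold_attack(
    np.concatenate([member_losses, non_member_losses]), threshold=0.3)
labels = np.array([True, True, True, False, False, False])
accuracy = np.mean(preds == labels)   # attack accuracy on this toy data
```

High attack accuracy indicates the model leaks information about its training set; accuracy near 50% on balanced data indicates little leakage.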


## Setting up MindArmour

### Dependencies


README_CN.md (+30, -4)

@@ -12,16 +12,42 @@

## Introduction

MindArmour focuses on the security and privacy of AI. It is dedicated to enhancing model security and trustworthiness and protecting users' data privacy. It mainly contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.

### Adversarial Robustness Module
The Adversarial Robustness Module evaluates a model's robustness against adversarial examples, and provides model enhancement methods to strengthen the model's resistance to adversarial attacks and improve its robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.

The architecture of the Adversarial Robustness Module is shown below:

![mindarmour_architecture](docs/adversarial_robustness_cn.png)

### Fuzz Testing Module
The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, it introduces neuron coverage as a guide for fuzz testing, steering the fuzzer to generate samples in the direction of increasing neuron coverage, so that inputs activate more neurons and neuron values spread over a wider range, thoroughly testing the neural network and exploring different kinds of model outputs and erroneous behaviors.

The architecture of the Fuzz Testing Module is shown below:

![fuzzer_architecture](docs/fuzzer_architecture_cn.png)

### Privacy Protection Module

The Privacy Protection Module comprises differential privacy training and privacy leakage evaluation.

#### Differential Privacy Training Module

Differential privacy training includes adaptive and non-adaptive differentially private SGD, Momentum, and Adam optimizers; the noise mechanisms support Gaussian-distributed and Laplace-distributed noise; differential privacy budget monitoring includes zCDP and RDP.

The differential privacy architecture is shown below:

![dp_architecture](docs/differential_privacy_architecture_cn.png)

#### Privacy Leakage Evaluation Module

The Privacy Leakage Evaluation Module assesses the risk of a model leaking user privacy. It uses membership inference to infer whether a sample belongs to the user's training dataset, thereby evaluating the privacy security of a deep learning model's data.

The architecture of the Privacy Leakage Evaluation Module is shown below:

![privacy_leakage](docs/privacy_leakage_cn.png)


## Getting Started



BIN docs/adversarial_robustness_cn.png (4705 x 2601, 238 kB)
BIN docs/adversarial_robustness_en.png (4737 x 2705, 297 kB)
BIN docs/differential_privacy_architecture_cn.png (before: 1314 x 690, 38 kB; after: 3892 x 2072, 236 kB)
BIN docs/differential_privacy_architecture_en.png (before: 1389 x 700, 50 kB; after: 4077 x 2065, 218 kB)
BIN docs/fuzzer_architecture_cn.png (6041 x 3201, 394 kB)
BIN docs/fuzzer_architecture_en.png (6097 x 3233, 394 kB)
BIN docs/mindarmour_architecture.png (645 x 540, 28 kB)
BIN docs/mindarmour_architecture_cn.png (647 x 569, 18 kB)
BIN docs/privacy_leakage_cn.png (4721 x 3209, 334 kB)
BIN docs/privacy_leakage_en.png (4729 x 3233, 371 kB)
