|
|
|
|
|
|
|
|
|
## What is MindArmour |
|
|
|
|
|
|
|
A toolbox for MindSpore users to enhance model security and trustworthiness and to protect data privacy.
|
|
|
MindArmour focuses on the security and privacy of artificial intelligence. It serves as a toolbox for MindSpore users to enhance model security and trustworthiness and to protect data privacy.
|
|
|
MindArmour contains three modules: the Adversarial Robustness Module, the Fuzz Testing Module, and the Privacy Protection and Evaluation Module.
|
|
|
|
|
|
|
|
|
|
### Adversarial Robustness Module |
|
|
|
|
|
|
|
 |
|
|
|
The Adversarial Robustness Module is designed to evaluate a model's robustness against adversarial examples, and provides model enhancement methods to strengthen the model's resistance to adversarial attacks. The module includes four submodules: Adversarial Examples Generation, Adversarial Examples Detection, Model Defense, and Evaluation.
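
For illustration, here is a minimal sketch of generating adversarial examples with the FGSM attack from this module. The network `net` and the NumPy arrays `inputs`/`labels` are assumed to be defined elsewhere, and the import path follows recent MindArmour releases and may differ in older versions:

```python
from mindarmour.adv_robustness.attacks import FastGradientSignMethod

# `net` is a trained MindSpore Cell; `inputs` and `labels` are NumPy arrays
# from the test set (all assumed to be defined elsewhere).
attack = FastGradientSignMethod(net, eps=0.07)  # eps bounds the perturbation
adv_inputs = attack.generate(inputs, labels)    # adversarial counterparts
```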
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Fuzz Testing Module |
|
|
|
|
|
|
|
The Fuzz Testing Module provides security testing for AI models. Based on the characteristics of neural networks, it introduces neuron coverage gain as a guide for fuzz testing: samples are generated in the direction of increasing neuron coverage, so that inputs activate more neurons and neuron values span a wider distribution range. This exercises the neural network more thoroughly and explores different types of model outputs and erroneous behaviors.

The architecture is shown as follows:
|
|
|
|
|
|
|
 |
|
|
|
|
|
|
|
### Privacy Protection and Evaluation Module |
|
|
|
|
|
|
|
The Privacy Protection and Evaluation Module consists of two submodules: the Differential Privacy Training Module and the Privacy Leakage Evaluation Module.
|
|
|
|
|
|
|
#### Differential Privacy Training Module |
|
|
|
|
|
|
|
The Differential Privacy Training Module implements differentially private optimizers. Currently, SGD, Momentum, and Adam are supported; all are based on the Gaussian mechanism, which supports both non-adaptive and adaptive noise policies. Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) accountants are provided to monitor the differential privacy budget.

The architecture is shown as follows:
|
|
|
|
|
|
|
 |
|
|
|
|
|
|
|
#### Privacy Leakage Evaluation Module |
|
|
|
|
|
|
|
The Privacy Leakage Evaluation Module assesses the risk of a model leaking user privacy. It evaluates the privacy security of a deep learning model with membership inference, which infers whether a given sample belongs to the model's training dataset.

The architecture is shown as follows:
|
|
|
|
|
|
|
 |
|
|
|
|
|
|
|
|
|
|
|
## Setting up MindArmour |
|
|
|
|
|
|
|
### Dependencies |
|
|
|