Compare commits

master ... r1.8

12 Commits

Author        SHA1        Message                                   Date
i-robot       21890ed043  !402 Update 1.8.1 release note            2 years ago
pkuliuliu     d65a8c0595  update 1.8.1 release note                 2 years ago
i-robot       2976ead9e4  !398 update version NO. 1.8.1             2 years ago
emmmmtang     82dc530259  update version NO. 1.8.1                  2 years ago
i-robot       933ddef46c  !396 add a blank line in rst              2 years ago
ZhidanLiu     2670538f91  add a blank line in rst                   2 years ago
i-robot       4c82dea24d  !394 Optimize imp logic of PGD and JSMA   2 years ago
pkuliuliu     74c5bed39c  Optimize imp logic of PGD and JSMA        2 years ago
i-robot       54c6c9b3e2  !387 add releasenotes of r1.8             2 years ago
ZhidanLiu     c0822f758f  add releasenotes of r1.8                  2 years ago
i-robot       e68cdbbe50  !385 modify doc                           2 years ago
xumengjuan1   5089713e89  modify doc                                2 years ago
28 changed files with 147 additions and 78 deletions
 1. +2   -0   .jenkins/check/config/filter_linklint.txt
 2. +2   -2   README.md
 3. +2   -2   README_CN.md
 4. +31  -0   RELEASE.md
 5. +29  -0   RELEASE_CN.md
 6. +6   -6   docs/api/api_python/mindarmour.privacy.diff_privacy.rst
 7. +1   -1   docs/api/api_python/mindarmour.privacy.evaluation.rst
 8. +4   -4   docs/api/api_python/mindarmour.privacy.sup_privacy.rst
 9. +3   -3   docs/api/api_python/mindarmour.reliability.rst
10. +6   -4   docs/api/api_python/mindarmour.rst
11. +4   -4   examples/natural_robustness/ocr_evaluate/cnn_ctc/README.md
12. +5   -5   examples/natural_robustness/ocr_evaluate/cnn_ctc/README_CN.md
13. +2   -2   examples/natural_robustness/ocr_evaluate/对OCR模型CNN-CTC的鲁棒性评测.md
14. +0   -1   mindarmour/adv_robustness/attacks/gradient_method.py
15. +23  -27  mindarmour/adv_robustness/attacks/iterative_gradient_method.py
16. +1   -2   mindarmour/adv_robustness/attacks/jsma.py
17. +1   -0   mindarmour/adv_robustness/defenses/adversarial_defense.py
18. +6   -2   mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py
19. +9   -3   mindarmour/privacy/diff_privacy/monitor/monitor.py
20. +1   -1   mindarmour/privacy/diff_privacy/train/model.py
21. +1   -1   mindarmour/privacy/evaluation/membership_inference.py
22. +1   -1   mindarmour/privacy/sup_privacy/mask_monitor/masker.py
23. +2   -2   mindarmour/privacy/sup_privacy/sup_ctrl/conctrl.py
24. +1   -1   mindarmour/privacy/sup_privacy/train/model.py
25. +1   -1   mindarmour/reliability/concept_drift/concept_drift_check_images.py
26. +1   -1   mindarmour/reliability/concept_drift/concept_drift_check_time_series.py
27. +1   -1   mindarmour/reliability/model_fault_injection/fault_injection.py
28. +1   -1   setup.py

.jenkins/check/config/filter_linklint.txt  (+2, -0)

@@ -0,0 +1,2 @@
+ https://mindspore.cn/mindarmour/*/r1.8/*
+ https://www.mindspore.cn/*/r1.8/*

README.md  (+2, -2)

@@ -75,7 +75,7 @@ The architecture is shown as follow:
  - The hardware platform should be Ascend, GPU or CPU.
  - See our [MindSpore Installation Guide](https://www.mindspore.cn/install) to install MindSpore.
    The versions of MindArmour and MindSpore must be consistent.
- - All other dependencies are included in [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py).
+ - All other dependencies are included in [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.8/setup.py).

  ### Installation

@@ -100,7 +100,7 @@ The architecture is shown as follow:
  pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindArmour/{arch}/mindarmour-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
  ```

- > - When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py)). In other cases, you need to manually install dependency items.
+ > - When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.8/setup.py)). In other cases, you need to manually install dependency items.
  > - `{version}` denotes the version of MindArmour. For example, when you are downloading MindArmour 1.0.1, `{version}` should be 1.0.1.
  > - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.

README_CN.md  (+2, -2)

@@ -72,7 +72,7 @@ The architecture of the Fuzz Testing module is shown below:
  - The hardware platform should be Ascend, GPU or CPU.
  - See the [MindSpore Installation Guide](https://www.mindspore.cn/install) to install MindSpore.
    The versions of MindArmour and MindSpore must be consistent.
- - For all other dependencies, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py).
+ - For all other dependencies, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.8/setup.py).

  ### Installation

@@ -97,7 +97,7 @@ The architecture of the Fuzz Testing module is shown below:
  pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindArmour/{arch}/mindarmour-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
  ```

- > - When the network is connected, the dependencies of the MindArmour package are downloaded automatically while installing the .whl package (for details about the dependencies, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py)); in other cases, install the dependencies manually.
+ > - When the network is connected, the dependencies of the MindArmour package are downloaded automatically while installing the .whl package (for details about the dependencies, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.8/setup.py)); in other cases, install the dependencies manually.
  > - `{version}` is the MindArmour version number. For example, when downloading MindArmour 1.0.1, `{version}` should be 1.0.1.
  > - `{arch}` is the system architecture. For example, if the Linux system you are using is 64-bit x86, `{arch}` should be `x86_64`; if it is 64-bit ARM, it should be `aarch64`.

RELEASE.md  (+31, -0)

@@ -1,5 +1,36 @@
  # MindArmour Release Notes
+
+ ## MindArmour 1.8.1 Release Notes
+
+ ### Bug fixes
+
+ * [BUGFIX] Fix a bug of PGD method.
+ * [BUGFIX] Fix a bug of JSMA method.
+
+ ### Contributors
+
+ Thanks goes to these wonderful people:
+
+ Zhang Shukun, Liu Zhidan, Jin Xiulang, Liu Liu, Tang Cong, Yangyuan.
+
+ Contributions of any kind are welcome!
+
+ # MindArmour Release Notes
+
+ ## MindArmour 1.8.0 Release Notes
+
+ ### API Change
+
+ * Add Chinese version of all existed api.
+
+ ### Contributors
+
+ Thanks goes to these wonderful people:
+
+ Zhang Shukun, Liu Zhidan, Jin Xiulang, Liu Liu, Tang Cong, Yangyuan.
+
+ Contributions of any kind are welcome!
+
  ## MindArmour 1.7.0 Release Notes

  ### Major Features and Improvements
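The 1.8.1 fixes above touch `ProjectedGradientDescent` and `JSMAAttack`; their diffs appear further down in `iterative_gradient_method.py` and `jsma.py`. A minimal usage sketch follows. Only `generate(inputs, labels)` is taken from those diffs; the import path, the toy network and the constructor keywords (`eps`, `eps_iter`, `nb_iter`, `bounds`, `norm_level`, `num_classes`) are assumptions to be checked against the installed MindArmour version.

```python
# Hedged sketch: constructor keywords and the toy network are assumptions;
# only generate(inputs, labels) comes from the diffs shown below.
import numpy as np
import mindspore.nn as nn
from mindarmour.adv_robustness.attacks import ProjectedGradientDescent, JSMAAttack


class TinyNet(nn.Cell):
    """Toy classifier, used only to make the sketch self-contained."""
    def __init__(self, num_classes=10):
        super(TinyNet, self).__init__()
        self.flatten = nn.Flatten()
        self.dense = nn.Dense(1 * 32 * 32, num_classes)

    def construct(self, x):
        return self.dense(self.flatten(x))


net = TinyNet()
inputs = np.random.rand(2, 1, 32, 32).astype(np.float32)   # fake image batch
onehot_labels = np.eye(10)[[3, 7]].astype(np.float32)       # one-hot labels

# PGD: the 1.8.1 fix reworks its iteration loop (see iterative_gradient_method.py below).
pgd = ProjectedGradientDescent(net, eps=0.3, eps_iter=0.1, nb_iter=5,
                               bounds=(0.0, 1.0), norm_level='inf')
adv_pgd = pgd.generate(inputs, onehot_labels)

# JSMA: since 1.8.1 a failed search returns the best-effort perturbed sample
# instead of an all-zero array (see jsma.py below).
jsma = JSMAAttack(net, num_classes=10)
adv_jsma = jsma.generate(inputs, np.array([3, 7]))           # assumed index labels
```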


RELEASE_CN.md  (+29, -0)

@@ -2,6 +2,35 @@

  [View English](./RELEASE.md)
+
+ ## MindArmour 1.8.1 Release Notes
+
+ ### Bug fixes
+
+ * [BUGFIX] Fix an implementation error in PGD.
+ * [BUGFIX] Fix an implementation error in JSMA.
+
+ ### Contributors
+
+ Thanks to the following people for their contributions:
+
+ Zhang Shukun, Liu Zhidan, Jin Xiulang, Liu Liu, Tang Cong, Yangyuan.
+
+ Contributions of any kind are welcome!
+
+ ## MindArmour 1.8.0 Release Notes
+
+ ### API Change
+
+ * Add Chinese versions of all existing APIs.
+
+ ### Contributors
+
+ Thanks to the following people for their contributions:
+
+ Zhang Shukun, Liu Zhidan, Jin Xiulang, Liu Liu, Tang Cong, Yangyuan.
+
+ Contributions of any kind are welcome!

  ## MindArmour 1.7.0 Release Notes

  ### Major Features and Enhancements


docs/api/api_python/mindarmour.privacy.diff_privacy.rst  (+6, -6)

@@ -88,7 +88,7 @@ mindarmour.privacy.diff_privacy
  Factory class of noise generating mechanisms. It currently supports Gaussian Random Noise and Adaptive Gaussian Random Noise.

- For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  .. py:method:: create(mech_name, norm_bound=1.0, initial_noise_multiplier=1.0, seed=0, noise_decay_rate=6e-6, decay_policy=None)

@@ -113,7 +113,7 @@ mindarmour.privacy.diff_privacy
  Factory class of gradient clipping mechanisms. It currently supports Adaptive Clipping with Gaussian Random Noise.

- For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  .. py:method:: create(mech_name, decay_policy='Linear', learning_rate=0.001, target_unclipped_quantile=0.9, fraction_stddev=0.01, seed=0)

@@ -138,7 +138,7 @@ mindarmour.privacy.diff_privacy
  Factory class of the DP training privacy monitor.

- For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  .. py:method:: create(policy, *args, **kwargs)

@@ -163,7 +163,7 @@ mindarmour.privacy.diff_privacy
  .. math::
      (ε'+\frac{log(1/δ)}{α-1}, δ)

- For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  Reference: `Rényi Differential Privacy of the Sampled Gaussian Mechanism <https://arxiv.org/abs/1908.10530>`_.

@@ -207,7 +207,7 @@ mindarmour.privacy.diff_privacy
  Note that ZCDPMonitor is not suitable for subsampling noise mechanisms (such as NoiseAdaGaussianRandom and NoiseGaussianRandom). A matching noise mechanism for zCDP will be developed in the future.

- For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  Reference: `Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget <https://arxiv.org/abs/1808.09501>`_.

@@ -277,7 +277,7 @@ mindarmour.privacy.diff_privacy
  This class is overloaded from mindspore.train.model.Model.

- For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  **Parameters:**
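The `create` signatures quoted in this .rst are enough to sketch how the two factories are typically used. A minimal sketch follows, in which only the keyword names come from the signatures above; the import path and the mechanism name strings ('Gaussian', 'AdaGaussian') are assumptions to be verified against the installed MindArmour version.

```python
# Minimal sketch of the factory APIs documented above; keyword names are taken
# from the documented create() signatures, mechanism names are assumptions.
from mindarmour.privacy.diff_privacy import NoiseMechanismsFactory, ClipMechanismsFactory

# Gaussian noise mechanism with a fixed norm bound and noise multiplier.
noise_mech = NoiseMechanismsFactory().create('Gaussian',
                                             norm_bound=1.0,
                                             initial_noise_multiplier=1.5)

# Adaptive clipping mechanism; decay_policy/learning_rate mirror the documented defaults.
clip_mech = ClipMechanismsFactory().create('Gaussian',
                                           decay_policy='Linear',
                                           learning_rate=0.001,
                                           target_unclipped_quantile=0.9)
```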




docs/api/api_python/mindarmour.privacy.evaluation.rst  (+1, -1)

@@ -8,7 +8,7 @@ mindarmour.privacy.evaluation
  Membership inference is a gray-box attack proposed by Shokri, Stronati, Song and Shmatikov for inferring a user's private data. It requires the loss or logits of the training samples.
  (Privacy refers to some sensitive attributes of a single user.)

- For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/en/master/test_model_security_membership_inference.html>`_.
+ For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/en/r1.8/test_model_security_membership_inference.html>`_.

  Reference: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. Membership Inference Attacks against Machine Learning Models. 2017. <https://arxiv.org/abs/1610.05820v2>`_.




docs/api/api_python/mindarmour.privacy.sup_privacy.rst  (+4, -4)

@@ -8,7 +8,7 @@ mindarmour.privacy.sup_privacy
  Periodically check the suppress privacy function status and toggle (start/stop) the suppress operation.

  For details, please check: `Protecting User Privacy with Suppress Privacy
- <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.
+ <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

  **Parameters:**

@@ -27,7 +27,7 @@ mindarmour.privacy.sup_privacy
  Complete model training function. The suppress privacy function is embedded into the overloaded mindspore.train.model.Model.

- For details, please check: `Protecting User Privacy with Suppress Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html>`_.
+ For details, please check: `Protecting User Privacy with Suppress Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_suppress_privacy.html>`_.

  **Parameters:**

@@ -48,7 +48,7 @@ mindarmour.privacy.sup_privacy
  Factory class of SuppressCtrl mechanisms.

- For details, please check: `Protecting User Privacy with Suppress Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.
+ For details, please check: `Protecting User Privacy with Suppress Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

  .. py:method:: create(networks, mask_layers, policy='local_train', end_epoch=10, batch_num=20, start_epoch=3, mask_times=1000, lr=0.05, sparse_end=0.90, sparse_start=0.0)

@@ -73,7 +73,7 @@ mindarmour.privacy.sup_privacy
  Complete the suppress privacy operation, including computing the suppress ratio, finding the parameters that should be suppressed, and suppressing these parameters permanently.

- For details, please check: `Protecting User Privacy with Suppress Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.
+ For details, please check: `Protecting User Privacy with Suppress Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

  **Parameters:**




docs/api/api_python/mindarmour.reliability.rst  (+3, -3)

@@ -7,7 +7,7 @@ Reliability methods of MindArmour.
  The fault injection module simulates various fault scenarios of deep neural networks and evaluates the performance and reliability of the model.

- For details, please check `Implementing the Model Fault Injection and Evaluation <https://mindspore.cn/mindarmour/docs/zh-CN/master/fault_injection.html>`_.
+ For details, please check `Implementing the Model Fault Injection and Evaluation <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/fault_injection.html>`_.

  **Parameters:**

@@ -42,7 +42,7 @@ Reliability methods of MindArmour.
  ConceptDriftCheckTimeSeries is used to detect distribution changes in sample series.
  For details, please check `Implementing the Concept Drift Detection Application of Time Series Data
- <https://mindspore.cn/mindarmour/docs/zh-CN/master/concept_drift_time_series.html>`_.
+ <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/concept_drift_time_series.html>`_.

  **Parameters:**

@@ -107,7 +107,7 @@ Reliability methods of MindArmour.
  Train the OOD detector. Extract features of the training data to obtain cluster centers. The distance between the test data features and the cluster centers determines whether an image is an out-of-distribution (OOD) image.

- For details, please check `Implementing the Concept Drift Detection Application of Image Data <https://mindspore.cn/mindarmour/docs/zh-CN/master/concept_drift_images.html>`_.
+ For details, please check `Implementing the Concept Drift Detection Application of Image Data <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/concept_drift_images.html>`_.

  **Parameters:**




docs/api/api_python/mindarmour.rst  (+6, -4)

@@ -170,6 +170,7 @@ MindArmour is a toolbox of MindSpore for enhancing model trustworthiness and protecting privacy.
  - **target_model** (Model) - The target fuzzed model.

  .. py:method:: fuzzing(mutate_config, initial_seeds, coverage, evaluate=True, max_iters=10000, mutate_num_per_seed=20)
+
  Fuzz testing of deep neural networks.

  **Parameters:**

@@ -196,7 +197,7 @@ MindArmour is a toolbox of MindSpore for enhancing model trustworthiness and protecting privacy.
  - First, natural robustness methods include: 'Translate', 'Scale', 'Shear', 'Rotate', 'Perspective', 'Curve', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'Contrast', 'GradientLuminance', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'.
  - Second, adversarial attack methods include: 'FGSM', 'PGD' and 'MDIM', which are the abbreviations of FastGradientSignMethod, ProjectedGradientDescent and MomentumDiverseInputIterativeMethod. `mutate_config` must contain methods from ['Contrast', 'GradientLuminance', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'].

- - The parameter settings for the first category of methods can be found in `mindarmour/natural_robustness/transform/image <https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/natural_robustness/transform/image>`_. The parameter configuration of the second category refers to `self._attack_param_checklists`.
+ - The parameter settings for the first category of methods can be found in `mindarmour/natural_robustness/transform/image <https://gitee.com/mindspore/mindarmour/tree/r1.8/mindarmour/natural_robustness/transform/image>`_. The parameter configuration of the second category refers to `self._attack_param_checklists`.
  - **initial_seeds** (list[list]) - Initial seed queue used to generate mutated samples. The format of the initial seed queue is [[image_data, label], [...], ...], and the labels must be one-hot.
  - **coverage** (CoverageMetrics) - Neuron coverage metrics class.
  - **evaluate** (bool) - Whether to return an evaluation report. Default: True.

@@ -223,7 +224,7 @@ MindArmour is a toolbox of MindSpore for enhancing model trustworthiness and protecting privacy.
  This class overloads mindspore.train.model.Model.

- For details, please check: `Protecting User Privacy with Differential Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+ For details, please check: `Protecting User Privacy with Differential Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

  **Parameters:**

@@ -241,7 +242,7 @@ MindArmour is a toolbox of MindSpore for enhancing model trustworthiness and protecting privacy.
  Membership inference is a gray-box attack proposed by Shokri, Stronati, Song and Shmatikov for inferring a user's private data. It requires the loss or logits of the training samples. (Privacy refers to some sensitive attributes of a single user.)

- For details, see: `Using Membership Inference to Test Model Security <https://mindspore.cn/mindarmour/docs/zh-CN/master/test_model_security_membership_inference.html>`_.
+ For details, see: `Using Membership Inference to Test Model Security <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/test_model_security_membership_inference.html>`_.

  Reference: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. Membership Inference Attacks against Machine Learning Models. 2017. <https://arxiv.org/abs/1610.05820v2>`_.

@@ -340,6 +341,7 @@ MindArmour is a toolbox of MindSpore for enhancing model trustworthiness and protecting privacy.
  Reconstruct images based on target_features.

  **Parameters:**
+
  - **target_features** (numpy.ndarray) - Deep representations of the original images. The first dimension of `target_features` should be img_num.
    Note that if img_num equals 1, the shape of `target_features` should be (1, dim2, dim3, ...).
  - **iters** (int) - Number of iterations of the inversion attack, which should be a positive integer. Default: 100.

@@ -357,7 +359,7 @@ MindArmour is a toolbox of MindSpore for enhancing model trustworthiness and protecting privacy.
  ConceptDriftCheckTimeSeries is used to detect distribution changes in sample series.

- For details, please check: `Implementing the Concept Drift Detection Application of Time Series Data <https://mindspore.cn/mindarmour/docs/zh-CN/master/concept_drift_time_series.html>`_.
+ For details, please check: `Implementing the Concept Drift Detection Application of Time Series Data <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/concept_drift_time_series.html>`_.

  **Parameters:**
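The `mutate_config` argument described in the fuzzing hunk above is easiest to read as a concrete value. The sketch below is illustrative only: the method names come from the lists quoted in the documentation, while the parameter keys for each method are assumptions to be verified against `mindarmour/natural_robustness/transform/image` and `self._attack_param_checklists`.

```python
# Illustrative mutate_config for fuzzing(); method names are from the lists
# documented above, the parameter keys are assumptions for this sketch.
mutate_config = [
    {'method': 'GaussianBlur', 'params': {'ksize': [1, 2, 3, 5]}},           # natural robustness
    {'method': 'Contrast', 'params': {'factor': [0.5, 1.0, 1.5]}},           # natural robustness
    {'method': 'FGSM', 'params': {'eps': [0.1, 0.2, 0.3], 'alpha': [0.1]}},  # adversarial attack
]

# initial_seeds is a list of [image_data, one_hot_label] pairs, as documented:
# initial_seeds = [[img, label] for img, label in zip(images, one_hot_labels)]
# reports = fuzzer.fuzzing(mutate_config, initial_seeds, coverage, evaluate=True)
```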




examples/natural_robustness/ocr_evaluate/cnn_ctc/README.md  (+4, -4)

@@ -94,7 +94,7 @@ This takes around 75 minutes.
  ## Mixed Precision

- The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+ The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/r1.8/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
  For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.

  # [Environment Requirements](#contents)

@@ -106,9 +106,9 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
  - [MindSpore](https://www.mindspore.cn/install/en)
  - For more information, please check the resources below:
- - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+ - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/r1.8/index.html)

- - [MindSpore Python API](https://www.mindspore.cn/docs/en/master/index.html)
+ - [MindSpore Python API](https://www.mindspore.cn/docs/en/r1.8/index.html)

  # [Quick Start](#contents)

@@ -517,7 +517,7 @@ accuracy: 0.8533
  ### Inference

- If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example:
+ If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/r1.8/infer/inference.html). Following the steps below, this is a simple example:

  - Running on Ascend




examples/natural_robustness/ocr_evaluate/cnn_ctc/README_CN.md  (+5, -5)

@@ -95,7 +95,7 @@ python src/preprocess_dataset.py
  ## Mixed Precision

- The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep neural networks while keeping the network accuracy achievable with single-precision training. Mixed precision training speeds up computation and reduces memory usage, and it supports training larger models or larger batch sizes on specific hardware.
+ The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.8/others/mixed_precision.html) training method uses single-precision and half-precision data to speed up the training of deep neural networks while keeping the network accuracy achievable with single-precision training. Mixed precision training speeds up computation and reduces memory usage, and it supports training larger models or larger batch sizes on specific hardware.
  Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically lowers the precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.

  # Environment Requirements

@@ -109,9 +109,9 @@ python src/preprocess_dataset.py
  - [MindSpore](https://www.mindspore.cn/install)

  - For details, see the following resources:
- - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+ - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/r1.8/index.html)

- - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/master/index.html)
+ - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/r1.8/index.html)

  # Quick Start

@@ -250,7 +250,7 @@ bash scripts/run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_CKPT(o
  > Note:

- For reference material on RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+ For reference material on RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.8/parallel/train_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).

  ### Training Results

@@ -449,7 +449,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
  ### Inference

- If you need to run inference with the trained model on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html). A simple example follows:
+ If you need to run inference with the trained model on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.8/infer/inference.html). A simple example follows:

  - Running in the Ascend processor environment




examples/natural_robustness/ocr_evaluate/对OCR模型CNN-CTC的鲁棒性评测.md  (+2, -2)

@@ -126,7 +126,7 @@
  ### Generating the evaluation dataset based on the natural perturbation serving

- 1. Start the natural perturbation serving service. For instructions, see [Natural perturbation sample generation serving service](https://gitee.com/mindspore/mindarmour/blob/master/examples/natural_robustness/serving/README.md)
+ 1. Start the natural perturbation serving service. For instructions, see [Natural perturbation sample generation serving service](https://gitee.com/mindspore/mindarmour/blob/r1.8/examples/natural_robustness/serving/README.md)

  ```bash
  cd serving/server/

@@ -144,7 +144,7 @@
  2. Core code description:

- 1. Configure the perturbation methods. For the currently available methods and their parameter settings, see [image transform methods](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/natural_robustness/transform/image). Below is an example configuration.
+ 1. Configure the perturbation methods. For the currently available methods and their parameter settings, see [image transform methods](https://gitee.com/mindspore/mindarmour/tree/r1.8/mindarmour/natural_robustness/transform/image). Below is an example configuration.

  ```python
  PerturbConfig = [


mindarmour/adv_robustness/attacks/gradient_method.py  (+0, -1)

@@ -69,7 +69,6 @@ class GradientMethod(Attack):
          else:
              with_loss_cell = WithLossCell(self._network, loss_fn)
              self._grad_all = GradWrapWithLoss(with_loss_cell)
-             self._grad_all.set_train()

      def generate(self, inputs, labels):
          """


mindarmour/adv_robustness/attacks/iterative_gradient_method.py  (+23, -27)

@@ -14,6 +14,7 @@
  """ Iterative gradient method attack. """
  from abc import abstractmethod

+ import copy
  import numpy as np
  from PIL import Image, ImageOps

@@ -68,13 +69,14 @@ def _reshape_l1_projection(values, eps=3):
      return proj_x


- def _projection(values, eps, norm_level):
+ def _projection(values, eps, clip_diff, norm_level):
      """
      Implementation of values normalization within eps.

      Args:
          values (numpy.ndarray): Input data.
          eps (float): Project radius.
+         clip_diff (float): Difference range of clip bounds.
          norm_level (Union[int, char, numpy.inf]): Order of the norm. Possible
              values: np.inf, 1 or 2.

@@ -88,12 +90,12 @@ def _projection(values, eps, norm_level):
      if norm_level in (1, '1'):
          sample_batch = values.shape[0]
          x_flat = values.reshape(sample_batch, -1)
-         proj_flat = _reshape_l1_projection(x_flat, eps)
+         proj_flat = _reshape_l1_projection(x_flat, eps*clip_diff)
          return proj_flat.reshape(values.shape)
      if norm_level in (2, '2'):
          return eps*normalize_value(values, norm_level)
      if norm_level in (np.inf, 'inf'):
-         return eps*np.sign(values)
+         return eps*clip_diff*np.sign(values)
      msg = 'Values of `norm_level` different from 1, 2 and `np.inf` are ' \
          'currently not supported.'
      LOGGER.error(TAG, msg)
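The effect of the new `clip_diff` factor is easiest to see numerically: in the inf-norm branch, each projected step is now scaled by the data range instead of implicitly assuming a [0, 1] range. A standalone NumPy sketch follows; it re-implements only that branch and is not the library function itself.

```python
import numpy as np

def project_inf(values, eps, clip_diff):
    """Standalone re-implementation of the inf-norm branch shown above."""
    return eps * clip_diff * np.sign(values)

grad = np.array([0.7, -0.2, 0.0, -1.3])

# Data in [0, 1]: clip_diff = 1, so the per-step magnitude is simply eps.
print(project_inf(grad, eps=0.1, clip_diff=1.0))    # [ 0.1 -0.1  0.  -0.1]

# Data in [0, 255]: clip_diff = 255, so the same relative eps moves each value
# by eps * 255 per step, keeping the perturbation proportional to the range.
print(project_inf(grad, eps=0.1, clip_diff=255.0))  # [ 25.5 -25.5   0.  -25.5]
```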
@@ -132,7 +134,6 @@ class IterativeGradientMethod(Attack):
              self._loss_grad = network
          else:
              self._loss_grad = GradWrapWithLoss(WithLossCell(self._network, loss_fn))
-         self._loss_grad.set_train()

      @abstractmethod
      def generate(self, inputs, labels):

@@ -470,33 +471,28 @@ class ProjectedGradientDescent(BasicIterativeMethod):
          """
          inputs_image, inputs, labels = check_inputs_labels(inputs, labels)
          arr_x = inputs_image
+         adv_x = copy.deepcopy(inputs_image)
          if self._bounds is not None:
              clip_min, clip_max = self._bounds
              clip_diff = clip_max - clip_min
-             for _ in range(self._nb_iter):
-                 adv_x = self._attack.generate(inputs, labels)
-                 perturs = _projection(adv_x - arr_x,
-                                       self._eps,
-                                       norm_level=self._norm_level)
-                 perturs = np.clip(perturs, (0 - self._eps)*clip_diff,
-                                   self._eps*clip_diff)
-                 adv_x = arr_x + perturs
-                 if isinstance(inputs, tuple):
-                     inputs = (adv_x,) + inputs[1:]
-                 else:
-                     inputs = adv_x
          else:
-             for _ in range(self._nb_iter):
-                 adv_x = self._attack.generate(inputs, labels)
-                 perturs = _projection(adv_x - arr_x,
-                                       self._eps,
-                                       norm_level=self._norm_level)
-                 adv_x = arr_x + perturs
-                 adv_x = np.clip(adv_x, arr_x - self._eps, arr_x + self._eps)
-                 if isinstance(inputs, tuple):
-                     inputs = (adv_x,) + inputs[1:]
-                 else:
-                     inputs = adv_x
+             clip_diff = 1
+
+         for _ in range(self._nb_iter):
+             inputs_tensor = to_tensor_tuple(inputs)
+             labels_tensor = to_tensor_tuple(labels)
+             out_grad = self._loss_grad(*inputs_tensor, *labels_tensor)
+             gradient = out_grad.asnumpy()
+             perturbs = _projection(gradient, self._eps_iter, clip_diff, norm_level=self._norm_level)
+             sum_perturbs = adv_x - arr_x + perturbs
+             sum_perturbs = np.clip(sum_perturbs, (0 - self._eps)*clip_diff, self._eps*clip_diff)
+             adv_x = arr_x + sum_perturbs
+             if self._bounds is not None:
+                 adv_x = np.clip(adv_x, clip_min, clip_max)
+             if isinstance(inputs, tuple):
+                 inputs = (adv_x,) + inputs[1:]
+             else:
+                 inputs = adv_x
          return adv_x
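The rewritten loop above accumulates the total perturbation and re-clips it every iteration, rather than regenerating the adversarial sample inside two nearly identical branches. A standalone NumPy sketch of the iteration under assumed toy values (no MindSpore involved) shows the two invariants the new code maintains.

```python
import numpy as np

# Toy stand-ins for the quantities used in the loop above.
arr_x = np.array([0.2, 0.5, 0.9])        # original input
adv_x = arr_x.copy()                      # running adversarial example
eps, eps_iter = 0.3, 0.1                  # total and per-step budgets
clip_min, clip_max = 0.0, 1.0
clip_diff = clip_max - clip_min

rng = np.random.default_rng(0)
for _ in range(5):
    gradient = rng.normal(size=arr_x.shape)                # stand-in for the loss gradient
    perturbs = eps_iter * clip_diff * np.sign(gradient)    # inf-norm projection step
    sum_perturbs = adv_x - arr_x + perturbs                # accumulated perturbation
    sum_perturbs = np.clip(sum_perturbs, -eps * clip_diff, eps * clip_diff)
    adv_x = np.clip(arr_x + sum_perturbs, clip_min, clip_max)

# Invariants of the new implementation: the total perturbation never exceeds
# eps * clip_diff, and the adversarial example stays inside the valid bounds.
assert np.all(np.abs(adv_x - arr_x) <= eps * clip_diff + 1e-9)
assert np.all((adv_x >= clip_min) & (adv_x <= clip_max))
```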






mindarmour/adv_robustness/attacks/jsma.py  (+1, -2)

@@ -150,7 +150,6 @@ class JSMAAttack(Attack):
          ori_shape = data.shape
          temp = data.flatten()
          bit_map = np.ones_like(temp)
-         fake_res = np.zeros_like(data)
          counter = np.zeros_like(temp)
          perturbed = np.copy(temp)
          for _ in range(self._max_iter):

@@ -183,7 +182,7 @@ class JSMAAttack(Attack):
              bit_map[p2_ind] = 0
              perturbed = np.clip(perturbed, self._min, self._max)
          LOGGER.debug(TAG, 'fail to find adversarial sample.')
-         return fake_res
+         return perturbed.reshape(ori_shape)

      def generate(self, inputs, labels):
          """


mindarmour/adv_robustness/defenses/adversarial_defense.py  (+1, -0)

@@ -162,6 +162,7 @@ class AdversarialDefenseWithAttacks(AdversarialDefense):
                                               replace_ratio,
                                               0, 1)
          self._graph_initialized = False
+         self._train_net.set_train()

      def defense(self, inputs, labels):
          """


mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py  (+6, -2)

@@ -39,7 +39,9 @@ class ClipMechanismsFactory:
      Wrapper of clip noise generating mechanisms. It supports Adaptive Clipping with
      Gaussian Random Noise for now.

-     For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+     For details, please check `Tutorial
+     <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy
+     .html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

      """

@@ -100,7 +102,9 @@ class NoiseMechanismsFactory:
      Wrapper of noise generating mechanisms. It supports Gaussian Random Noise and
      Adaptive Gaussian Random Noise for now.

-     For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+     For details, please check `Tutorial
+     <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy
+     .html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

      """
      def __init__(self):


mindarmour/privacy/diff_privacy/monitor/monitor.py  (+9, -3)

@@ -28,7 +28,9 @@ TAG = 'DP monitor'
  class PrivacyMonitorFactory:
      """
      Factory class of DP training's privacy monitor.
-     For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+     For details, please check `Tutorial
+     <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy
+     .html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

      """

@@ -77,7 +79,9 @@ class RDPMonitor(Callback):
      .. math::
          (ε'+\frac{log(1/δ)}{α-1}, δ)

-     For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+     For details, please check `Tutorial
+     <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy
+     .html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

      Reference: `Rényi Differential Privacy of the Sampled Gaussian Mechanism
      <https://arxiv.org/abs/1908.10530>`_

@@ -370,7 +374,9 @@ class ZCDPMonitor(Callback):
      noise mechanisms(such as NoiseAdaGaussianRandom and NoiseGaussianRandom).
      The matching noise mechanism of ZCDP will be developed in the future.

-     For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+     For details, please check `Tutorial
+     <https://mindspore.cn/mindarmour/docs/zh-CN/r1.8/protect_user_privacy_with_differential_privacy
+     .html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

      Reference: `Concentrated Differentially Private Gradient Descent with
      Adaptive per-Iteration Privacy Budget <https://arxiv.org/abs/1808.09501>`_


mindarmour/privacy/diff_privacy/train/model.py  (+1, -1)

@@ -71,7 +71,7 @@ class DPModel(Model):
      This class is overload mindspore.train.model.Model.

      For details, please check `Protecting User Privacy with Differential Privacy Mechanism
-     <https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

      Args:
          micro_batches (int): The number of small batches split from an original


mindarmour/privacy/evaluation/membership_inference.py  (+1, -1)

@@ -99,7 +99,7 @@ class MembershipInference:
      (Privacy refers to some sensitive attributes of a single user).

      For details, please refer to the `Using Membership Inference to Test Model Security
-     <https://mindspore.cn/mindarmour/docs/en/master/test_model_security_membership_inference.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/test_model_security_membership_inference.html>`_.

      References: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov.
      Membership Inference Attacks against Machine Learning Models. 2017.


mindarmour/privacy/sup_privacy/mask_monitor/masker.py  (+1, -1)

@@ -28,7 +28,7 @@ class SuppressMasker(Callback):
      """
      Periodicity check suppress privacy function status and toggle suppress operation.
      For details, please check `Protecting User Privacy with Suppression Privacy
-     <https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/protect_user_privacy_with_suppress_privacy.html>`_.

      Args:
          model (SuppressModel): SuppressModel instance.


mindarmour/privacy/sup_privacy/sup_ctrl/conctrl.py  (+2, -2)

@@ -36,7 +36,7 @@ class SuppressPrivacyFactory:
      Factory class of SuppressCtrl mechanisms.

      For details, please check `Protecting User Privacy with Suppress Privacy
-     <https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/protect_user_privacy_with_suppress_privacy.html>`_.
      """

      def __init__(self):

@@ -120,7 +120,7 @@ class SuppressCtrl(Cell):
      parameters permanently.

      For details, please check `Protecting User Privacy with Suppress Privacy
-     <https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/protect_user_privacy_with_suppress_privacy.html>`_.

      Args:
          networks (Cell): The training network.


mindarmour/privacy/sup_privacy/train/model.py  (+1, -1)

@@ -60,7 +60,7 @@ class SuppressModel(Model):
      mindspore.train.model.Model.

      For details, please check `Protecting User Privacy with Suppress Privacy
-     <https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/protect_user_privacy_with_suppress_privacy.html>`_.

      Args:
          network (Cell): The training network.


mindarmour/reliability/concept_drift/concept_drift_check_images.py  (+1, -1)

@@ -90,7 +90,7 @@ class OodDetectorFeatureCluster(OodDetector):
      image or not.

      For details, please check `Implementing the Concept Drift Detection Application of Image Data
-     <https://mindspore.cn/mindarmour/docs/en/master/concept_drift_images.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/concept_drift_images.html>`_.

      Args:
          model (Model):The training model.


mindarmour/reliability/concept_drift/concept_drift_check_time_series.py  (+1, -1)

@@ -24,7 +24,7 @@ class ConceptDriftCheckTimeSeries:
      r"""
      ConceptDriftCheckTimeSeries is used for example series distribution change detection.
      For details, please check `Implementing the Concept Drift Detection Application of Time Series Data
-     <https://mindspore.cn/mindarmour/docs/en/master/concept_drift_time_series.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/concept_drift_time_series.html>`_.

      Args:
          window_size(int): Size of a concept window, no less than 10. If given the input data,


mindarmour/reliability/model_fault_injection/fault_injection.py  (+1, -1)

@@ -32,7 +32,7 @@ class FaultInjector:
      performance and reliability of the model.

      For details, please check `Implementing the Model Fault Injection and Evaluation
-     <https://mindspore.cn/mindarmour/docs/en/master/fault_injection.html>`_.
+     <https://mindspore.cn/mindarmour/docs/en/r1.8/fault_injection.html>`_.

      Args:
          model (Model): The model need to be evaluated.


setup.py  (+1, -1)

@@ -24,7 +24,7 @@ from setuptools import setup
  from setuptools.command.egg_info import egg_info
  from setuptools.command.build_py import build_py

- version = '1.8.0'
+ version = '1.8.1'
  cur_dir = os.path.dirname(os.path.realpath(__file__))
  pkg_dir = os.path.join(cur_dir, 'build')



