| SHA1 | Message | Date |
| --- | --- | --- |
| 6fc4193ae8 | !422 modify the error parameter types in 1.9 (Merge pull request !422 from 宦晓玲/r1.9) | 2 years ago |
| a6abb49540 | modify the error parameter types in 1.9 | 2 years ago |
| 37d0a577c3 | !424 modify api context code formats (Merge pull request !424 from lvmingfu/code_docs_0928) | 2 years ago |
| 04cf66848b | modify context code formats | 2 years ago |
| 43112909f5 | !420 modify links branch to r1.9 (Merge pull request !420 from lvmingfu/code_docs_0922armour) | 2 years ago |
| 263bb35e96 | modify links branch to r1.9 | 2 years ago |
| 34165fe156 | !419 delete the API file in 1.9 (Merge pull request !419 from 宦晓玲/r1.9) | 2 years ago |
| b5bd0da114 | delete the API file in 1.9 | 2 years ago |
@@ -1,6 +1,6 @@
<!-- Thanks for sending a pull request! Here are some tips for you:
-If this is your first time, please read our contributor guidelines: https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md
+If this is your first time, please read our contributor guidelines: https://gitee.com/mindspore/mindspore/blob/r1.9/CONTRIBUTING.md
-->

**What type of PR is this?**
@@ -0,0 +1,2 @@
+https://mindspore.cn*/r1.9/*
+https://www.mindspore.cn*/r1.9/*
@@ -1,3 +1,3 @@
mindspore:
-'mindspore/mindspore/version/202205/20220525/master_20220525210238_42306df4865f816c48a720d98e50ba2e586b1f59/'
+'mindspore/mindspore/version/202209/20220923/r1.9_20220923224458_c16390f59ab8dace3bb7e5a6ab4ae4d3bfe74bea/'
@@ -75,7 +75,7 @@ The architecture is shown as follows:
- The hardware platform should be Ascend, GPU or CPU.
- See our [MindSpore Installation Guide](https://www.mindspore.cn/install) to install MindSpore.
  The versions of MindArmour and MindSpore must be consistent.
-- All other dependencies are included in [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py).
+- All other dependencies are included in [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.9/setup.py).

### Installation
@@ -100,7 +100,7 @@ The architecture is shown as follows:
```
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindArmour/{arch}/mindarmour-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
```

-> - When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py).) In other cases, you need to install the dependency items manually.
+> - When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.9/setup.py).) In other cases, you need to install the dependency items manually.
> - `{version}` denotes the version of MindArmour. For example, when you are downloading MindArmour 1.0.1, `{version}` should be 1.0.1.
> - `{arch}` denotes the system architecture. For example, if the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, it should be `aarch64`.
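The two substitution rules above can be sketched directly in Python; the helper name is ours, and only the `{version}`/`{arch}` template mechanics come from the note:

```python
def mindarmour_wheel_url(version, arch):
    """Fill the {version} and {arch} placeholders of the wheel URL."""
    template = ("https://ms-release.obs.cn-north-4.myhuaweicloud.com/"
                "{version}/MindArmour/{arch}/"
                "mindarmour-{version}-cp37-cp37m-linux_{arch}.whl")
    return template.format(version=version, arch=arch)

# MindArmour 1.0.1 on an x86 64-bit Linux system:
url = mindarmour_wheel_url("1.0.1", "x86_64")
```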
@@ -122,7 +122,7 @@ Guidance on installation, tutorials, API, see our [User Documentation](https://g
## Contributing

-Welcome contributions. See our [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md) for more details.
+Welcome contributions. See our [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/r1.9/CONTRIBUTING.md) for more details.

## Release Notes
@@ -72,7 +72,7 @@ The architecture of the Fuzz Testing module is shown below:
- The hardware platform should be Ascend, GPU or CPU.
- See the [MindSpore Installation Guide](https://www.mindspore.cn/install) to install MindSpore.
  The versions of MindArmour and MindSpore must be consistent.
-- For other dependencies, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py).
+- For other dependencies, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.9/setup.py).

### Installation
@@ -97,7 +97,7 @@ The architecture of the Fuzz Testing module is shown below:
```
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindArmour/{arch}/mindarmour-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
```

-> - When the network is connected, the dependencies of the MindArmour package are automatically downloaded during .whl package installation (for details, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py)); in other cases, you need to install them manually.
+> - When the network is connected, the dependencies of the MindArmour package are automatically downloaded during .whl package installation (for details, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.9/setup.py)); in other cases, you need to install them manually.
> - `{version}` denotes the version of MindArmour. For example, when downloading MindArmour 1.0.1, `{version}` should be 1.0.1.
> - `{arch}` denotes the system architecture. For example, if the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`; if the system is ARM architecture 64-bit, it should be `aarch64`.
@@ -119,7 +119,7 @@ python -c 'import mindarmour'
## Contributing

-Contributions are welcome. For details, see the [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md).
+Contributions are welcome. For details, see the [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/r1.9/CONTRIBUTING.md).

## Release Notes
@@ -58,7 +58,7 @@ mindarmour.adv_robustness.defenses
Parameters:
- **network** (Cell) - The MindSpore network to be defended.
- **loss_fn** (Union[Loss, None]) - Loss function. Default: None.
-- **optimizer** (Cell): Optimizer used to train the network. Default: None.
+- **optimizer** (Cell) - Optimizer used to train the network. Default: None.
- **bounds** (tuple) - Upper and lower bounds of the data, in the form (clip_min, clip_max). Default: (0.0, 1.0).
- **replace_ratio** (float) - Ratio of original samples replaced by adversarial samples. Default: 0.5.
- **eps** (float) - Step size of the attack method (FGSM). Default: 0.1.
@@ -1,143 +0,0 @@
mindarmour.natural_robustness.transform.image
=============================================

This module contains natural perturbation methods for images.

.. py:class:: mindarmour.natural_robustness.transform.image.Contrast(alpha=1, beta=0, auto_param=False)

Contrast of an image.

Parameters:
- **alpha** (Union[float, int]) - Controls the contrast of the image: :math:`out\_image = in\_image*alpha+beta`. Suggested value range: [0.2, 2].
- **beta** (Union[float, int]) - Increment added on top of the alpha scaling. Default: 0.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.
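The contrast formula above can be sketched in plain Python; the clipping to the 8-bit pixel range is our assumption, not stated in the doc:

```python
def adjust_contrast(pixels, alpha=1.0, beta=0):
    # out = in * alpha + beta, clipped here to the 8-bit range [0, 255]
    return [max(0, min(255, round(p * alpha + beta))) for p in pixels]

# alpha > 1 stretches contrast, beta shifts brightness; 410 clips to 255
out = adjust_contrast([50, 100, 200], alpha=2, beta=10)
```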
.. py:class:: mindarmour.natural_robustness.transform.image.GradientLuminance(color_start=(0, 0, 0), color_end=(255, 255, 255), start_point=(10, 10), scope=0.5, pattern='light', bright_rate=0.3, mode='circle', auto_param=False)

Adjusts the luminance of an image with a gradient.

Parameters:
- **color_start** (union[tuple, list]) - Color at the center of the gradient. Default: (0, 0, 0).
- **color_end** (union[tuple, list]) - Color at the edge of the gradient. Default: (255, 255, 255).
- **start_point** (union[tuple, list]) - 2D coordinates of the gradient center.
- **scope** (float) - Range of the gradient; the larger the value, the larger the gradient range. Default: 0.3.
- **pattern** (str) - Dark or light; the value must be in ['light', 'dark'].
- **bright_rate** (float) - Controls the brightness; the larger the value, the larger the gradient range. If `pattern` is 'light', the suggested value range is [0.1, 0.7]; if `pattern` is 'dark', the suggested value range is [0.1, 0.9].
- **mode** (str) - Gradient mode; the value must be in ['circle', 'horizontal', 'vertical'].
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.GaussianBlur(ksize=2, auto_param=False)

Blurs the image with a Gaussian blur filter.

Parameters:
- **ksize** (int) - Size of the Gaussian kernel; must be non-negative.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.MotionBlur(degree=5, angle=45, auto_param=False)

Motion blur.

Parameters:
- **degree** (int) - Degree of blurring; must be positive. Suggested value range: [1, 15].
- **angle** (union[float, int]) - Direction of the motion blur. angle=0 means vertical motion blur; the angle increases counterclockwise.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.GradientBlur(point, kernel_num=3, center=True, auto_param=False)

Gradient blur.

Parameters:
- **point** (union[tuple, list]) - 2D coordinates of the blur center.
- **kernel_num** (int) - Number of blur kernels. Suggested value range: [1, 8].
- **center** (bool) - If True, the center point is blurred; if False, the center point is kept sharp. Default: True.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.UniformNoise(factor=0.1, auto_param=False)

Adds uniform noise to an image.

Parameters:
- **factor** (float) - Noise density, the proportion of pixels per unit area to which noise is added. Suggested value range: [0.001, 0.15].
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.GaussianNoise(factor=0.1, auto_param=False)

Adds Gaussian noise to an image.

Parameters:
- **factor** (float) - Noise density, the proportion of pixels per unit area to which noise is added. Suggested value range: [0.001, 0.15].
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.SaltAndPepperNoise(factor=0, auto_param=False)

Adds salt-and-pepper noise to an image.

Parameters:
- **factor** (float) - Noise density, the proportion of pixels per unit area to which noise is added. Suggested value range: [0.001, 0.15].
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.
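Salt-and-pepper noise as described above (a `factor` fraction of pixels replaced by pure black or white) can be sketched without any image library; the sampling strategy here is our assumption:

```python
import random

def salt_and_pepper(pixels, factor=0.05, seed=None):
    # Replace a `factor` fraction of pixels with pure black (0) or white (255)
    rng = random.Random(seed)
    out = list(pixels)
    num_noise = int(len(out) * factor)
    for i in rng.sample(range(len(out)), num_noise):
        out[i] = rng.choice((0, 255))
    return out
```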
.. py:class:: mindarmour.natural_robustness.transform.image.NaturalNoise(ratio=0.0002, k_x_range=(1, 5), k_y_range=(3, 25), auto_param=False)

Adds natural noise to an image.

Parameters:
- **factor** (float) - Noise density, the proportion of pixels per unit area to which noise is added. Suggested value range: [0.00001, 0.001].
- **k_x_range** (union[list, tuple]) - Value range of the noise-block length.
- **k_y_range** (union[list, tuple]) - Value range of the noise-block width.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.Translate(x_bias=0, y_bias=0, auto_param=False)

Image translation.

Parameters:
- **x_bias** (Union[int, float]) - Translation along the X direction: x = x + x_bias * image length. Suggested value range: [-0.1, 0.1].
- **y_bias** (Union[int, float]) - Translation along the Y direction: y = y + y_bias * image length. Suggested value range: [-0.1, 0.1].
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.
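The translation mapping above is a one-liner. The doc scales both biases by the "image length"; using the width for x and the height for y is this sketch's reading, not necessarily the library's code:

```python
def translate(x, y, x_bias=0, y_bias=0, width=100, height=100):
    # x' = x + x_bias * width, y' = y + y_bias * height
    # (axis-specific scaling is our assumption; the doc says "image length")
    return x + x_bias * width, y + y_bias * height
```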
.. py:class:: mindarmour.natural_robustness.transform.image.Scale(factor_x=1, factor_y=1, auto_param=False)

Image scaling.

Parameters:
- **factor_x** (Union[float, int]) - Scaling along the X direction: x = factor_x * x. Suggested value range: [0.5, 1], with abs(factor_y - factor_x) < 0.5.
- **factor_y** (Union[float, int]) - Scaling along the Y direction: y = factor_y * y. Suggested value range: [0.5, 1], with abs(factor_y - factor_x) < 0.5.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.Shear(factor=0.2, direction='horizontal', auto_param=False)

Image shear. After shearing, the mapping between the new image and the original is (new_x, new_y) = (x + factor_x*y, factor_y*x + y), and the sheared image is rescaled back to the original size.

Parameters:
- **factor** (Union[float, int]) - Shear ratio along the shear direction. Suggested value range: [0.05, 0.5].
- **direction** (str) - Deformation direction; either 'vertical' or 'horizontal'.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.
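The shear mapping stated above can be written out directly (the rescaling back to the original size is omitted here):

```python
def shear(x, y, factor_x=0.0, factor_y=0.0):
    # (new_x, new_y) = (x + factor_x * y, factor_y * x + y)
    return x + factor_x * y, factor_y * x + y
```

With `direction='horizontal'` only `factor_x` would be non-zero, with `direction='vertical'` only `factor_y`; that correspondence is our reading of the parameters.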
.. py:class:: mindarmour.natural_robustness.transform.image.Rotate(angle=20, auto_param=False)

Rotates the image counterclockwise around its center.

Parameters:
- **angle** (Union[float, int]) - Degrees of counterclockwise rotation. Suggested value range: [-60, 60].
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.Perspective(ori_pos, dst_pos, auto_param=False)

Perspective transformation.

Parameters:
- **ori_pos** (list) - Coordinates of four points in the original image.
- **dst_pos** (list) - Coordinates of the corresponding four points of ori_pos after the perspective transformation.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.

.. py:class:: mindarmour.natural_robustness.transform.image.Curve(curves=3, depth=10, mode='vertical', auto_param=False)

Curve transformation based on the sine function.

Parameters:
- **curves** (union[float, int]) - Number of curve periods. Suggested value range: [0.1, 5].
- **depth** (union[float, int]) - Amplitude of the sine function. It is suggested not to exceed 1/10 of the image length.
- **mode** (str) - Deformation direction; either 'vertical' or 'horizontal'.
- **auto_param** (bool) - Automatically select parameters, within a range that preserves the image semantics. Default: False.
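One way to read the `curves`/`depth` parameters of the sine-based transform is as a per-column displacement; this is a sketch of that reading, not the library's implementation:

```python
import math

def curve_offset(x, width, curves=3, depth=10):
    # Vertical displacement of image column x: a sine wave with `curves`
    # periods across the width and amplitude `depth` (our interpretation)
    return depth * math.sin(2 * math.pi * curves * x / width)
```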
@@ -8,10 +8,10 @@ mindarmour.privacy.diff_privacy
Generates noise from a Gaussian distribution with :math:`mean=0` and :math:`standard\_deviation = norm\_bound * initial\_noise\_multiplier`.

Parameters:
-- **norm_bound** (float)- Clipping range of the l2 norm of the gradients. Default: 1.0.
-- **initial_noise_multiplier** (float)- Ratio of the standard deviation of the Gaussian noise to `norm_bound`, which will be used to calculate the privacy budget. Default: 1.0.
-- **seed** (int)- Original random seed. If seed=0, the random normal uses a secure random number; if seed!=0, it generates values from the given seed. Default: 0.
-- **decay_policy** (str)- Decay policy. Default: None.
+- **norm_bound** (float) - Clipping range of the l2 norm of the gradients. Default: 1.0.
+- **initial_noise_multiplier** (float) - Ratio of the standard deviation of the Gaussian noise to `norm_bound`, which will be used to calculate the privacy budget. Default: 1.0.
+- **seed** (int) - Original random seed. If seed=0, the random normal uses a secure random number; if seed!=0, it generates values from the given seed. Default: 0.
+- **decay_policy** (str) - Decay policy. Default: None.
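The noise mechanism described above (mean 0, standard deviation norm_bound * initial_noise_multiplier, secure randomness when seed=0) can be sketched with the standard library; the function name and list output are ours:

```python
import random

def gaussian_noise(n, norm_bound=1.0, initial_noise_multiplier=1.0, seed=0):
    # std = norm_bound * initial_noise_multiplier, mean = 0.
    # seed=0 -> secure randomness, seed!=0 -> reproducible, mirroring the doc
    std = norm_bound * initial_noise_multiplier
    rng = random.SystemRandom() if seed == 0 else random.Random(seed)
    return [rng.gauss(0.0, std) for _ in range(n)]
```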
.. py:method:: construct(gradients)
@@ -79,7 +79,7 @@ mindarmour.privacy.diff_privacy
Factory class of noise-generation mechanisms. It currently supports Gaussian Random Noise and Adaptive Gaussian Random Noise.

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#引入差分隐私>`_.

.. py:method:: create(mech_name, norm_bound=1.0, initial_noise_multiplier=1.0, seed=0, noise_decay_rate=6e-6, decay_policy=None)
@@ -101,7 +101,7 @@ mindarmour.privacy.diff_privacy
Factory class of gradient-clipping mechanisms. It currently supports Adaptive Clipping for Gaussian Random Noise.

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#引入差分隐私>`_.

.. py:method:: create(mech_name, decay_policy='Linear', learning_rate=0.001, target_unclipped_quantile=0.9, fraction_stddev=0.01, seed=0)
@@ -123,7 +123,7 @@ mindarmour.privacy.diff_privacy
Factory class of privacy monitors for DP training.

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#引入差分隐私>`_.

.. py:method:: create(policy, *args, **kwargs)
@@ -147,7 +147,7 @@ mindarmour.privacy.diff_privacy
.. math::
    (ε'+\frac{log(1/δ)}{α-1}, δ)

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#引入差分隐私>`_.

Reference: `Rényi Differential Privacy of the Sampled Gaussian Mechanism <https://arxiv.org/abs/1908.10530>`_.
@@ -188,7 +188,7 @@ mindarmour.privacy.diff_privacy
Note that ZCDPMonitor is not suitable for subsampling noise mechanisms (such as NoiseAdaGaussianRandom and NoiseGaussianRandom). A matching noise mechanism for zCDP will be developed in the future.

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#引入差分隐私>`_.

Reference: `Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget <https://arxiv.org/abs/1808.09501>`_.
@@ -251,7 +251,7 @@ mindarmour.privacy.diff_privacy
This class is overloaded from :class:`mindspore.Model`.

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#引入差分隐私>`_.

Parameters:
- **micro_batches** (int) - Number of small batches split from the original batch. Default: 2.
@@ -7,7 +7,7 @@ mindarmour.privacy.evaluation
Membership inference is a gray-box attack proposed by Shokri, Stronati, Song and Shmatikov for inferring users' private data. It requires the loss or logits of training samples; privacy here refers to some sensitive attributes of a single user.

-For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/test_model_security_membership_inference.html>`_.
+For details, see: `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/test_model_security_membership_inference.html>`_.

Reference: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. Membership Inference Attacks against Machine Learning Models. 2017. <https://arxiv.org/abs/1610.05820v2>`_.
@@ -26,8 +26,8 @@ mindarmour.privacy.evaluation
The evaluation metrics should be specified by `metrics`.

Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test dataset of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test dataset of the target model.
- **metrics** (Union[list, tuple]) - Evaluation metrics. The values must be in ["precision", "accuracy", "recall"]. Default: ["precision"].

Returns:
@@ -38,8 +38,8 @@ mindarmour.privacy.evaluation
Trains the attack model with the input datasets according to the configuration.

Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test set of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test set of the target model.
- **attack_config** (Union[list, tuple]) - Parameter settings of the attack model, in the format

.. code-block:: python
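The diff truncates the body of the code block above. For orientation only, an `attack_config` list might look like the following; the method names and parameter grids are illustrative assumptions, not values from the MindArmour source:

```python
# Illustrative only: the keys "method" and "params" reflect the documented
# shape of attack_config; the concrete values are assumptions.
attack_config = [
    {"method": "lr", "params": {"C": [0.1, 1.0, 10.0]}},
    {"method": "knn", "params": {"n_neighbors": [3, 5, 7]}},
]
```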
@@ -8,7 +8,7 @@ mindarmour.privacy.sup_privacy
Periodically checks the status of the suppress-privacy function and switches (on/off) the suppress operation.

For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism
-<https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.
+<https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

Parameters:
- **model** (SuppressModel) - SuppressModel instance.
@@ -25,7 +25,7 @@ mindarmour.privacy.sup_privacy
Suppress-privacy trainer, overloaded from :class:`mindspore.Model`.

-For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html>`_.
+For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

Parameters:
- **network** (Cell) - The neural network model to be trained.
@@ -44,7 +44,7 @@ mindarmour.privacy.sup_privacy
Factory class of the SuppressCtrl mechanism.

-For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.
+For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

.. py:method:: create(networks, mask_layers, policy='local_train', end_epoch=10, batch_num=20, start_epoch=3, mask_times=1000, lr=0.05, sparse_end=0.90, sparse_start=0.0)
@@ -67,7 +67,7 @@ mindarmour.privacy.sup_privacy
Completes the suppress-privacy operation: computes the suppress ratio, finds the parameters that should be suppressed, and suppresses them permanently.

-For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.
+For details, see: `Protecting User Privacy with the Suppress-Privacy Mechanism <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_suppress_privacy.html#%E5%BC%95%E5%85%A5%E6%8A%91%E5%88%B6%E9%9A%90%E7%A7%81%E8%AE%AD%E7%BB%83>`_.

Parameters:
- **networks** (Cell) - The neural network model to be trained.
@@ -166,7 +166,7 @@ mindarmour.privacy.sup_privacy
for layer in networks.get_parameters(expand=True):
    if layer.name == "conv": ...

-- **grad_idx** (int) - Index of the mask layer in the gradient tuple. You can refer to the constructor of TrainOneStepCell in `model.py <https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/sup_privacy/train/model.py>`_ and print the index values of certain layers in PYNATIVE_MODE.
+- **grad_idx** (int) - Index of the mask layer in the gradient tuple. You can refer to the constructor of TrainOneStepCell in `model.py <https://gitee.com/mindspore/mindarmour/blob/r1.9/mindarmour/privacy/sup_privacy/train/model.py>`_ and print the index values of certain layers in PYNATIVE_MODE.
- **is_add_noise** (bool) - If True, noise can be added to the weights of this layer; if False, it cannot. If the parameter num is greater than 100000, `is_add_noise` has no effect.
- **is_lower_clip** (bool) - If True, the weights of this layer are clipped to be greater than the lower bound; if False, they are not required to be. If the parameter num is greater than 100000, is_lower_clip has no effect.
- **min_num** (int) - Number of remaining weights that are not suppressed. If min_num is smaller than (total number of parameters * `SuppressCtrl.sparse_end`), min_num has no effect.
@@ -7,7 +7,7 @@ Reliability methods of MindArmour.
The fault-injection module simulates various fault scenarios of deep neural networks and evaluates the performance and reliability of the model.

-For details, see `Evaluating Model Fault Tolerance with Fault Injection <https://mindspore.cn/mindarmour/docs/zh-CN/master/fault_injection.html>`_.
+For details, see `Evaluating Model Fault Tolerance with Fault Injection <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/fault_injection.html>`_.

Parameters:
- **model** (Model) - The model to be evaluated.
@@ -39,7 +39,7 @@ Reliability methods of MindArmour.
ConceptDriftCheckTimeSeries is used to detect distribution changes in a sample series.

For details, see `Implementing Concept Drift Detection for Time-Series Data
-<https://mindspore.cn/mindarmour/docs/zh-CN/master/concept_drift_time_series.html>`_.
+<https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/concept_drift_time_series.html>`_.

Parameters:
- **window_size** (int) - Size of the concept window, no less than 10. If input data is given, `window_size` is in [10, 1/3*len(`data`)].
@@ -96,7 +96,7 @@ Reliability methods of MindArmour.
Trains the OOD detector: extracts features of the training data to obtain clustering centers. The distance between the test-data features and the clustering centers determines whether an image is out-of-distribution (OOD).

-For details, see `Implementing Concept Drift Detection for Image Data <https://mindspore.cn/mindarmour/docs/zh-CN/master/concept_drift_images.html>`_.
+For details, see `Implementing Concept Drift Detection for Image Data <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/concept_drift_images.html>`_.

Parameters:
- **model** (Model) - The training model.
@@ -173,7 +173,7 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and implementing privacy protection.
- First, natural robustness methods include: 'Translate', 'Scale', 'Shear', 'Rotate', 'Perspective', 'Curve', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'Contrast', 'GradientLuminance', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'.
- Second, adversarial attack methods include 'FGSM', 'PGD' and 'MDIM', which are short for FastGradientSignMethod, ProjectedGradientDescent and MomentumDiverseInputIterativeMethod, respectively. `mutate_config` must contain methods from ['Contrast', 'GradientLuminance', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'].
-- The parameter settings of the first type of methods can be found in `mindarmour/natural_robustness/transform/image <https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/natural_robustness/transform/image>`_; for the second type, refer to `self._attack_param_checklists`.
+- The parameter settings of the first type of methods can be found in `mindarmour/natural_robustness/transform/image <https://gitee.com/mindspore/mindarmour/tree/r1.9/mindarmour/natural_robustness/transform/image>`_; for the second type, refer to `self._attack_param_checklists`.
- **initial_seeds** (list[list]) - Initial seed queue used to generate mutated samples, in the format [[image_data, label], [...], ...]; the labels must be one-hot.
- **coverage** (CoverageMetrics) - Neuron coverage metric class.
- **evaluate** (bool) - Whether to return an evaluation report. Default: True.
@@ -198,7 +198,7 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and implementing privacy protection.
This class overloads :class:`mindspore.Model`.

-For details, see: `Protecting User Privacy with Differential Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.
+For details, see: `Protecting User Privacy with Differential Privacy <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_.

Parameters:
- **micro_batches** (int) - Number of small batches split from the original batch. Default: 2.
@@ -217,7 +217,7 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and implementing privacy protection.
Membership inference is a gray-box attack proposed by Shokri, Stronati, Song and Shmatikov for inferring users' private data. It requires the loss or logits of training samples; privacy here refers to some sensitive attributes of a single user.

-For details, see: `Testing Model Security with Membership Inference <https://mindspore.cn/mindarmour/docs/zh-CN/master/test_model_security_membership_inference.html>`_.
+For details, see: `Testing Model Security with Membership Inference <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/test_model_security_membership_inference.html>`_.

Reference: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. Membership Inference Attacks against Machine Learning Models. 2017. <https://arxiv.org/abs/1610.05820v2>`_.
@@ -236,8 +236,8 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and implementing privacy protection.
The evaluation metrics should be specified by `metrics`.

Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test dataset of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test dataset of the target model.
- **metrics** (Union[list, tuple]) - Evaluation metrics. The values must be in ["precision", "accuracy", "recall"]. Default: ["precision"].

Returns:
@@ -248,8 +248,8 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and implementing privacy protection.
Trains the attack model with the input datasets according to the configuration.

Parameters:
-- **dataset_train** (minspore.dataset) - Training dataset of the target model.
-- **dataset_test** (minspore.dataset) - Test set of the target model.
+- **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+- **dataset_test** (mindspore.dataset) - Test set of the target model.
- **attack_config** (Union[list, tuple]) - Parameter settings of the attack model, in the format:

.. code-block::
@@ -321,7 +321,7 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and implementing privacy protection.
ConceptDriftCheckTimeSeries is used to detect distribution changes in a sample series.

-For details, see: `Implementing Concept Drift Detection for Time-Series Data <https://mindspore.cn/mindarmour/docs/zh-CN/master/concept_drift_time_series.html>`_.
+For details, see: `Implementing Concept Drift Detection for Time-Series Data <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/concept_drift_time_series.html>`_.

Parameters:
- **window_size** (int) - Size of the concept window, no less than 10. If input data is given, window_size is in [10, 1/3*len(input data)]. If the data is periodic, window_size usually equals 2 to 5 periods; for example, for monthly/weekly data, 30/7 days of data is one period. Default: 100.
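The window_size constraint above (at least 10, at most one third of the data length) is easy to check up front; the helper name is ours:

```python
def valid_window_size(window_size, data_len):
    # window_size must be >= 10 and <= len(data) / 3
    return 10 <= window_size <= data_len / 3
```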
@@ -94,7 +94,7 @@ This takes around 75 minutes.

## Mixed Precision

-The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/master/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/r1.9/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching 'reduce precision'.
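To see concretely what "handled with reduced precision" means, a small NumPy sketch (NumPy merely stands in for FP16 arithmetic here; this is not MindSpore code) shows an FP32-representable increment being rounded away when cast to half precision:

```python
import numpy as np

x32 = np.float32(1.0) + np.float32(1e-4)   # representable in FP32
x16 = np.float16(x32)                      # cast to half precision

# FP32 keeps the small increment; FP16 rounds it away, because the
# spacing between adjacent float16 values near 1.0 (~9.8e-4) exceeds 1e-4.
assert x32 > np.float32(1.0)
assert x16 == np.float16(1.0)
```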
# [Environment Requirements](#contents)
@@ -106,9 +106,9 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil

- [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
-  - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
+  - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/r1.9/index.html)
-  - [MindSpore Python API](https://www.mindspore.cn/docs/en/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/docs/en/r1.9/index.html)

# [Quick Start](#contents)
@@ -517,7 +517,7 @@ accuracy: 0.8533

### Inference

-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/master/infer/inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/r1.9/infer/inference.html). Following the steps below, this is a simple example:

- Running on Ascend
@@ -95,7 +95,7 @@ python src/preprocess_dataset.py

## Mixed Precision

-The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/master/others/mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while supporting larger models or larger batch sizes on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.9/others/mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while maintaining the accuracy achievable with single-precision training. Mixed precision training accelerates computation and reduces memory usage, while supporting larger models or larger batch sizes on specific hardware.

Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically lowers precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.

# Environment Requirements
@@ -109,9 +109,9 @@ python src/preprocess_dataset.py

- [MindSpore](https://www.mindspore.cn/install)
- For details, see the following resources:
-  - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
+  - [MindSpore Tutorials](https://www.mindspore.cn/tutorials/zh-CN/r1.9/index.html)
-  - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/master/index.html)
+  - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/r1.9/index.html)

# Quick Start
@@ -250,7 +250,7 @@ bash scripts/run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_CKPT(o

> Note:

-For reference material on RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/parallel/train_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+For reference material on RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.9/parallel/train_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).

### Training Results
@@ -449,7 +449,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]

### Inference

-If you need to use the trained model for inference on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/master/infer/inference.html). A simple example follows:
+If you need to use the trained model for inference on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.9/infer/inference.html). A simple example follows:

- Running in an Ascend processor environment
@@ -126,7 +126,7 @@

### Generating an evaluation dataset based on natural perturbation serving

-1. Start the natural perturbation serving service. For details, see: [Natural perturbation sample generation serving service](https://gitee.com/mindspore/mindarmour/blob/master/examples/natural_robustness/serving/README.md)
+1. Start the natural perturbation serving service. For details, see: [Natural perturbation sample generation serving service](https://gitee.com/mindspore/mindarmour/blob/r1.9/examples/natural_robustness/serving/README.md)

```bash
cd serving/server/
@@ -144,7 +144,7 @@

2. Core code description:

-    1. Configure the perturbation methods. For the currently available perturbation methods and their parameter settings, refer to [image transform methods](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/natural_robustness/transform/image). The following is a configuration example.
+    1. Configure the perturbation methods. For the currently available perturbation methods and their parameter settings, refer to [image transform methods](https://gitee.com/mindspore/mindarmour/tree/r1.9/mindarmour/natural_robustness/transform/image). The following is a configuration example.

```python
PerturbConfig = [
@@ -35,14 +35,14 @@ class PointWiseAttack(Attack):

References: `L. Schott, J. Rauber, M. Bethge, W. Brendel: "Towards the
first adversarially robust neural network model on MNIST", ICLR (2019)
-<https://arxiv.org/abs/1805.09190>`_
+<https://arxiv.org/abs/1805.09190>`_.

Args:
model (BlackModel): Target model.
max_iter (int): Max rounds of iteration to generate adversarial image. Default: 1000.
search_iter (int): Max rounds of binary search. Default: 10.
is_targeted (bool): If True, targeted attack. If False, untargeted attack. Default: False.
-init_attack (Attack): Attack used to find a starting point. Default: None.
+init_attack (Union[Attack, None]): Attack used to find a starting point. Default: None.
sparse (bool): If True, input labels are sparse-encoded. If False, input labels are one-hot-encoded.
Default: True.

@@ -96,7 +96,7 @@ class DeepFool(Attack):

sample to the nearest classification boundary and crossing the boundary.

Reference: `DeepFool: a simple and accurate method to fool deep neural
-networks <https://arxiv.org/abs/1511.04599>`_
+networks <https://arxiv.org/abs/1511.04599>`_.

Args:
network (Cell): Target model.

@@ -109,7 +109,7 @@ class DeepFool(Attack):

max_iters (int): Max iterations, which should be
greater than zero. Default: 50.
overshoot (float): Overshoot parameter. Default: 0.02.
-norm_level (Union[int, str]): Order of the vector norm. Possible values: np.inf
+norm_level (Union[int, str, numpy.inf]): Order of the vector norm. Possible values: np.inf
or 2. Default: 2.
bounds (Union[tuple, list]): Upper and lower bounds of data range. In form of (clip_min,
clip_max). Default: None.
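The `norm_level` orders above (np.inf or 2) measure the size of a perturbation differently. A generic NumPy illustration, not MindArmour code:

```python
import numpy as np

delta = np.array([0.3, -0.1, 0.2])   # a perturbation vector

l2 = np.linalg.norm(delta, ord=2)          # Euclidean length
linf = np.linalg.norm(delta, ord=np.inf)   # largest absolute component

# The L2 norm aggregates all components; the L-inf norm only sees the peak.
assert np.isclose(l2, np.sqrt(0.09 + 0.01 + 0.04))
assert linf == 0.3
```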
@@ -130,21 +130,21 @@ class FastGradientMethod(GradientMethod):

References: `I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining
and harnessing adversarial examples," in ICLR, 2015.
-<https://arxiv.org/abs/1412.6572>`_
+<https://arxiv.org/abs/1412.6572>`_.

Args:
network (Cell): Target model.
eps (float): Proportion of single-step adversarial perturbation generated
by the attack to data range. Default: 0.07.
-alpha (float): Proportion of single-step random perturbation to data range.
+alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
Default: None.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
-norm_level (Union[int, numpy.inf]): Order of the norm.
+norm_level (Union[int, str, numpy.inf]): Order of the norm.
Possible values: np.inf, 1 or 2. Default: 2.
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
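The `eps` and `bounds` semantics above can be sketched as a toy single-step attack in NumPy. This is a minimal sketch of the gradient-sign idea assuming a precomputed loss gradient; it is not MindArmour's implementation:

```python
import numpy as np

def fgsm_step(x, grad, eps=0.07, bounds=(0.0, 1.0)):
    """One FGSM-style step: move eps * (data range) along sign(grad),
    then clip back into bounds. Toy sketch, not MindArmour's API."""
    clip_min, clip_max = bounds
    adv = x + eps * (clip_max - clip_min) * np.sign(grad)
    return np.clip(adv, clip_min, clip_max)

x = np.array([0.2, 0.5, 0.99])
g = np.array([1.0, -2.0, 3.0])   # stand-in for a loss gradient
adv = fgsm_step(x, g)

# Each component moves by at most eps and stays inside the data range.
assert np.all(np.abs(adv - x) <= 0.07 + 1e-9)
assert np.all((adv >= 0.0) & (adv <= 1.0))
```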
@@ -207,7 +207,7 @@ class RandomFastGradientMethod(FastGradientMethod):

References: `Florian Tramer, Alexey Kurakin, Nicolas Papernot, "Ensemble
adversarial training: Attacks and defenses" in ICLR, 2018
-<https://arxiv.org/abs/1705.07204>`_
+<https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.

@@ -217,11 +217,11 @@ class RandomFastGradientMethod(FastGradientMethod):

Default: 0.035.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
-norm_level (Union[int, numpy.inf]): Order of the norm.
+norm_level (Union[int, str, numpy.inf]): Order of the norm.
Possible values: np.inf, 1 or 2. Default: 2.
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Raises:
@@ -264,19 +264,19 @@ class FastGradientSignMethod(GradientMethod):

References: `Ian J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining
and harnessing adversarial examples," in ICLR, 2015
-<https://arxiv.org/abs/1412.6572>`_
+<https://arxiv.org/abs/1412.6572>`_.

Args:
network (Cell): Target model.
eps (float): Proportion of single-step adversarial perturbation generated
by the attack to data range. Default: 0.07.
-alpha (float): Proportion of single-step random perturbation to data range.
+alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
Default: None.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:

@@ -338,7 +338,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):

to create adversarial noises.

References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
+and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.

@@ -350,7 +350,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):

In form of (clip_min, clip_max). Default: (0.0, 1.0).
is_targeted (bool): True: targeted attack. False: untargeted attack.
Default: False.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Raises:

@@ -391,17 +391,17 @@ class LeastLikelyClassMethod(FastGradientSignMethod):

least-likely class to generate the adversarial examples.

References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
+and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.
eps (float): Proportion of single-step adversarial perturbation generated
by the attack to data range. Default: 0.07.
-alpha (float): Proportion of single-step random perturbation to data range.
+alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
Default: None.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:

@@ -439,7 +439,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):

targets the least-likely class to generate the adversarial examples.

References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
+and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.

@@ -449,7 +449,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):

Default: 0.035.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Raises:
@@ -115,7 +115,7 @@ class IterativeGradientMethod(Attack):

bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
nb_iter (int): Number of iteration. Default: 5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.
"""
def __init__(self, network, eps=0.3, eps_iter=0.1, bounds=(0.0, 1.0), nb_iter=5,

@@ -162,7 +162,7 @@ class BasicIterativeMethod(IterativeGradientMethod):

adversarial examples.

References: `A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples
-in the physical world," in ICLR, 2017 <https://arxiv.org/abs/1607.02533>`_
+in the physical world," in ICLR, 2017 <https://arxiv.org/abs/1607.02533>`_.

Args:
network (Cell): Target model.

@@ -175,7 +175,7 @@ class BasicIterativeMethod(IterativeGradientMethod):

is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
nb_iter (int): Number of iteration. Default: 5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -263,7 +263,7 @@ class MomentumIterativeMethod(IterativeGradientMethod):

References: `Y. Dong, et al., "Boosting adversarial attacks with
-momentum," arXiv:1710.06081, 2017 <https://arxiv.org/abs/1710.06081>`_
+momentum," arXiv:1710.06081, 2017 <https://arxiv.org/abs/1710.06081>`_.

Args:
network (Cell): Target model.

@@ -277,9 +277,9 @@ class MomentumIterativeMethod(IterativeGradientMethod):

attack. Default: False.
nb_iter (int): Number of iteration. Default: 5.
decay_factor (float): Decay factor in iterations. Default: 1.0.
-norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
np.inf, 1 or 2. Default: 'inf'.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -407,7 +407,7 @@ class ProjectedGradientDescent(BasicIterativeMethod):

the attack proposed by Madry et al. for adversarial training.

References: `A. Madry, et al., "Towards deep learning models resistant to
-adversarial attacks," in ICLR, 2018 <https://arxiv.org/abs/1706.06083>`_
+adversarial attacks," in ICLR, 2018 <https://arxiv.org/abs/1706.06083>`_.

Args:
network (Cell): Target model.

@@ -420,9 +420,9 @@ class ProjectedGradientDescent(BasicIterativeMethod):

is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
nb_iter (int): Number of iteration. Default: 5.
-norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
np.inf, 1 or 2. Default: 'inf'.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
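The "projected" part of PGD keeps each iterate inside the eps-ball around the original input. A minimal NumPy sketch of that projection for the infinity norm (a data range of [0, 1] is assumed; this is not MindArmour's code):

```python
import numpy as np

def project_linf(adv, x_orig, eps):
    """Project adv back into the L-infinity ball of radius eps around
    x_orig, then into the assumed valid data range [0, 1]. Sketch only."""
    adv = np.clip(adv, x_orig - eps, x_orig + eps)
    return np.clip(adv, 0.0, 1.0)

x = np.array([0.5, 0.5])
adv = np.array([0.9, 0.45])   # first component drifted outside the ball

proj = project_linf(adv, x, eps=0.3)
# 0.9 is pulled back to 0.5 + 0.3 = 0.8; 0.45 is already inside and unchanged.
assert np.allclose(proj, [0.8, 0.45])
```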
@@ -503,7 +503,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):

on the input data could improve the transferability of the adversarial examples.

References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
-Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_
+Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_.

Args:
network (Cell): Target model.

@@ -514,7 +514,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):

is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
prob (float): Transformation probability. Default: 0.5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:

@@ -558,7 +558,7 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):

References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
-Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_
+Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_.

Args:
network (Cell): Target model.

@@ -568,10 +568,10 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):

In form of (clip_min, clip_max). Default: (0.0, 1.0).
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
-norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
np.inf, 1 or 2. Default: 'l1'.
prob (float): Transformation probability. Default: 0.5.
-loss_fn (Loss): Loss function for optimization. If None, the input network \
+loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -32,7 +32,7 @@ class AdversarialDefense(Defense):

Args:
network (Cell): A MindSpore network to be defensed.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.

Examples:

@@ -105,7 +105,7 @@ class AdversarialDefenseWithAttacks(AdversarialDefense):

Args:
network (Cell): A MindSpore network to be defensed.
attacks (list[Attack]): List of attack method.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
clip_max). Default: (0.0, 1.0).

@@ -204,7 +204,7 @@ class EnsembleAdversarialDefense(AdversarialDefenseWithAttacks):

Args:
network (Cell): A MindSpore network to be defensed.
attacks (list[Attack]): List of attack method.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
clip_max). Default: (0.0, 1.0).
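The ensemble defense above trains on batches in which part of the clean data is replaced by adversarial examples drawn from a list of attacks. A toy NumPy sketch of that batch-augmentation idea (the `augment_batch` helper and the 0.5 replace ratio are illustrative assumptions; MindArmour's real classes wrap a network, loss_fn and optimizer instead):

```python
import numpy as np

def augment_batch(x, attacks, replace_ratio=0.5):
    """Replace a fraction of a clean batch with adversarial versions
    produced by a rotating list of attack callables. Hedged sketch of
    the ensemble-defense idea, not MindArmour's implementation."""
    x_adv = x.copy()
    n_replace = int(len(x) * replace_ratio)
    for i in range(n_replace):
        attack = attacks[i % len(attacks)]   # cycle through the ensemble
        x_adv[i] = attack(x[i])
    return x_adv

# Two toy "attacks": fixed positive / negative perturbations clipped to [0, 1].
attacks = [lambda s: np.clip(s + 0.1, 0, 1),
           lambda s: np.clip(s - 0.1, 0, 1)]
batch = np.full((4, 2), 0.5)
mixed = augment_batch(batch, attacks)

assert np.allclose(mixed[0], 0.6) and np.allclose(mixed[1], 0.4)
assert np.allclose(mixed[2:], 0.5)   # second half of the batch left clean
```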
@@ -23,11 +23,11 @@ class NaturalAdversarialDefense(AdversarialDefenseWithAttacks):

Adversarial training based on FGSM.

Reference: `A. Kurakin, et al., "Adversarial machine learning at scale," in
-ICLR, 2017. <https://arxiv.org/abs/1611.01236>`_
+ICLR, 2017. <https://arxiv.org/abs/1611.01236>`_.

Args:
network (Cell): A MindSpore network to be defensed.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
clip_max). Default: (0.0, 1.0).

@@ -23,11 +23,11 @@ class ProjectedAdversarialDefense(AdversarialDefenseWithAttacks):

Adversarial training based on PGD.

Reference: `A. Madry, et al., "Towards deep learning models resistant to
-adversarial attacks," in ICLR, 2018. <https://arxiv.org/abs/1611.01236>`_
+adversarial attacks," in ICLR, 2018. <https://arxiv.org/abs/1611.01236>`_.

Args:
network (Cell): A MindSpore network to be defensed.
-loss_fn (Functions): Loss function. Default: None.
+loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of input data. In form of
(clip_min, clip_max). Default: (0.0, 1.0).
@@ -39,7 +39,8 @@ class ClipMechanismsFactory: | |||||
Wrapper of clip noise generating mechanisms. It supports Adaptive Clipping with | Wrapper of clip noise generating mechanisms. It supports Adaptive Clipping with | ||||
Gaussian Random Noise for now. | Gaussian Random Noise for now. | ||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/\ | |||||
protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
""" | """ | ||||
@@ -100,7 +101,8 @@ class NoiseMechanismsFactory: | |||||
Wrapper of noise generating mechanisms. It supports Gaussian Random Noise and | Wrapper of noise generating mechanisms. It supports Gaussian Random Noise and | ||||
Adaptive Gaussian Random Noise for now. | Adaptive Gaussian Random Noise for now. | ||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/\ | |||||
protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
""" | """ | ||||
def __init__(self): | def __init__(self): | ||||
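The Gaussian random noise mentioned above is the core of the DP mechanism: each clipped gradient receives zero-mean noise whose scale grows with the noise multiplier. A hedged sketch (function and parameter names are illustrative, not the MindArmour API):

```python
import math
import random

# Gaussian noise mechanism: zero-mean noise with
# sigma = noise_multiplier * l2_norm_bound added to each gradient entry.

def gaussian_noise(size, norm_bound, noise_multiplier, rng):
    sigma = noise_multiplier * norm_bound
    return [rng.gauss(0.0, sigma) for _ in range(size)]

rng = random.Random(0)
samples = gaussian_noise(20000, norm_bound=1.0, noise_multiplier=1.5, rng=rng)
std = math.sqrt(sum(s * s for s in samples) / len(samples))  # empirical sigma
```

The empirical standard deviation of the samples concentrates around `noise_multiplier * norm_bound`, here 1.5.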
@@ -28,7 +28,8 @@ TAG = 'DP monitor' | |||||
class PrivacyMonitorFactory: | class PrivacyMonitorFactory: | ||||
""" | """ | ||||
Factory class of DP training's privacy monitor. | Factory class of DP training's privacy monitor. | ||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/\ | |||||
protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
""" | """ | ||||
@@ -77,7 +78,8 @@ class RDPMonitor(Callback): | |||||
.. math:: | .. math:: | ||||
(ε'+\frac{\log(1/δ)}{α-1}, δ) | (ε'+\frac{\log(1/δ)}{α-1}, δ) |||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/\ | |||||
protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
Reference: `Rényi Differential Privacy of the Sampled Gaussian Mechanism | Reference: `Rényi Differential Privacy of the Sampled Gaussian Mechanism | ||||
<https://arxiv.org/abs/1908.10530>`_ | <https://arxiv.org/abs/1908.10530>`_ | ||||
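The formula above converts a Rényi-DP guarantee at order α into an (ε, δ)-DP guarantee; a monitor typically evaluates it over several orders and reports the tightest bound. A small sketch of that conversion (illustrative code, not the RDPMonitor internals):

```python
import math

# RDP-to-DP conversion: given eps_alpha at several orders alpha, the
# reported bound is the tightest eps = eps_alpha + log(1/delta) / (alpha - 1).

def rdp_to_dp(rdp, delta):
    """rdp: iterable of (alpha, eps_alpha) pairs with alpha > 1."""
    return min(e + math.log(1.0 / delta) / (a - 1.0) for a, e in rdp)

eps = rdp_to_dp([(2.0, 0.5), (8.0, 1.2), (32.0, 2.5)], delta=1e-5)
```

With these (hypothetical) RDP values, the order α = 8 gives the minimum, so the reported budget is about 2.84 at δ = 1e-5.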
@@ -370,7 +372,8 @@ class ZCDPMonitor(Callback): | |||||
noise mechanisms (such as NoiseAdaGaussianRandom and NoiseGaussianRandom). | noise mechanisms (such as NoiseAdaGaussianRandom and NoiseGaussianRandom). |||||
The matching noise mechanism of ZCDP will be developed in the future. | The matching noise mechanism of ZCDP will be developed in the future. | ||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
For details, please check `Tutorial <https://mindspore.cn/mindarmour/docs/zh-CN/r1.9/\ | |||||
protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
Reference: `Concentrated Differentially Private Gradient Descent with | Reference: `Concentrated Differentially Private Gradient Descent with | ||||
Adaptive per-Iteration Privacy Budget <https://arxiv.org/abs/1808.09501>`_ | Adaptive per-Iteration Privacy Budget <https://arxiv.org/abs/1808.09501>`_ | ||||
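Zero-concentrated DP tracks a budget ρ that converts to a standard (ε, δ)-DP statement. A sketch of the usual conversion from Bun and Steinke's zCDP definition (illustrative code, not the ZCDPMonitor API):

```python
import math

# rho-zCDP implies (eps, delta)-DP with
# eps = rho + 2 * sqrt(rho * log(1/delta)).

def zcdp_to_dp(rho, delta):
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

eps = zcdp_to_dp(rho=0.1, delta=1e-5)
```

As expected, the implied ε grows monotonically with the accumulated ρ.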
@@ -70,7 +70,7 @@ class DPModel(Model): | |||||
This class overloads mindspore.train.model.Model. | This class overloads mindspore.train.model.Model. |||||
For details, please check `Protecting User Privacy with Differential Privacy Mechanism | For details, please check `Protecting User Privacy with Differential Privacy Mechanism | ||||
<https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/protect_user_privacy_with_differential_privacy.html#%E5%B7%AE%E5%88%86%E9%9A%90%E7%A7%81>`_. | |||||
Args: | Args: | ||||
micro_batches (int): The number of small batches split from an original | micro_batches (int): The number of small batches split from an original | ||||
@@ -99,11 +99,11 @@ class MembershipInference: | |||||
to some sensitive attributes of a single user. | to some sensitive attributes of a single user. | ||||
For details, please refer to the `Using Membership Inference to Test Model Security | For details, please refer to the `Using Membership Inference to Test Model Security | ||||
<https://mindspore.cn/mindarmour/docs/en/master/test_model_security_membership_inference.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/test_model_security_membership_inference.html>`_. | |||||
References: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. | References: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. | ||||
Membership Inference Attacks against Machine Learning Models. 2017. | Membership Inference Attacks against Machine Learning Models. 2017. | ||||
<https://arxiv.org/abs/1610.05820v2>`_ | |||||
<https://arxiv.org/abs/1610.05820v2>`_. | |||||
Args: | Args: | ||||
model (Model): Target model. | model (Model): Target model. | ||||
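The attack described above exploits the gap between a model's behavior on training members and on unseen data. A toy illustration of that premise (not the MembershipInference class; all confidence values below are hypothetical):

```python
# Models tend to be more confident on training members, so even a simple
# threshold on the confidence of the true label can separate members
# from non-members when the target model overfits.

def membership_guess(confidence, threshold=0.8):
    """Guess that a sample was in the training set when confidence is high."""
    return confidence > threshold

member_confs = [0.95, 0.91, 0.88]      # hypothetical training-set confidences
nonmember_confs = [0.55, 0.62, 0.71]   # hypothetical held-out confidences
guesses = [membership_guess(c) for c in member_confs + nonmember_confs]
```

The real attack trains shadow models to learn this decision boundary instead of fixing a threshold by hand.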
@@ -28,7 +28,7 @@ class SuppressMasker(Callback): | |||||
""" | """ | ||||
Periodically checks the suppress privacy function status and toggles the suppress operation. | Periodically checks the suppress privacy function status and toggles the suppress operation. |||||
For details, please check `Protecting User Privacy with Suppression Privacy | For details, please check `Protecting User Privacy with Suppression Privacy | ||||
<https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
Args: | Args: | ||||
model (SuppressModel): SuppressModel instance. | model (SuppressModel): SuppressModel instance. | ||||
@@ -36,8 +36,8 @@ class SuppressMasker(Callback): | |||||
Examples: | Examples: | ||||
>>> import mindspore.nn as nn | >>> import mindspore.nn as nn | ||||
>>> import mindspore.ops.operations as P | |||||
>>> from mindspore import context | |||||
>>> import mindspore as ms | |||||
>>> from mindspore import set_context, ops | |||||
>>> from mindspore.nn import Accuracy | >>> from mindspore.nn import Accuracy | ||||
>>> from mindarmour.privacy.sup_privacy import SuppressModel | >>> from mindarmour.privacy.sup_privacy import SuppressModel | ||||
>>> from mindarmour.privacy.sup_privacy import SuppressMasker | >>> from mindarmour.privacy.sup_privacy import SuppressMasker | ||||
@@ -46,14 +46,14 @@ class SuppressMasker(Callback): | |||||
>>> class Net(nn.Cell): | >>> class Net(nn.Cell): | ||||
... def __init__(self): | ... def __init__(self): | ||||
... super(Net, self).__init__() | ... super(Net, self).__init__() | ||||
... self._softmax = P.Softmax() | |||||
... self._softmax = ops.Softmax() | |||||
... self._Dense = nn.Dense(10,10) | ... self._Dense = nn.Dense(10,10) | ||||
... self._squeeze = P.Squeeze(1) | |||||
... self._squeeze = ops.Squeeze(1) | |||||
... def construct(self, inputs): | ... def construct(self, inputs): | ||||
... out = self._softmax(inputs) | ... out = self._softmax(inputs) | ||||
... out = self._Dense(out) | ... out = self._Dense(out) | ||||
... return self._squeeze(out) | ... return self._squeeze(out) | ||||
>>> context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU") | |||||
>>> set_context(mode=ms.PYNATIVE_MODE, device_target="GPU") | |||||
>>> network = Net() | >>> network = Net() | ||||
>>> masklayers = [] | >>> masklayers = [] | ||||
>>> masklayers.append(MaskLayerDes("_Dense.weight", 0, False, True, 10)) | >>> masklayers.append(MaskLayerDes("_Dense.weight", 0, False, True, 10)) | ||||
@@ -36,7 +36,7 @@ class SuppressPrivacyFactory: | |||||
Factory class of SuppressCtrl mechanisms. | Factory class of SuppressCtrl mechanisms. | ||||
For details, please check `Protecting User Privacy with Suppress Privacy | For details, please check `Protecting User Privacy with Suppress Privacy | ||||
<https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
""" | """ | ||||
def __init__(self): | def __init__(self): | ||||
@@ -66,8 +66,8 @@ class SuppressPrivacyFactory: | |||||
Examples: | Examples: | ||||
>>> import mindspore.nn as nn | >>> import mindspore.nn as nn | ||||
>>> import mindspore.ops.operations as P | |||||
>>> from mindspore import context | |||||
>>> import mindspore as ms | |||||
>>> from mindspore import set_context, ops | |||||
>>> from mindspore.nn import Accuracy | >>> from mindspore.nn import Accuracy | ||||
>>> from mindarmour.privacy.sup_privacy import SuppressPrivacyFactory | >>> from mindarmour.privacy.sup_privacy import SuppressPrivacyFactory | ||||
>>> from mindarmour.privacy.sup_privacy import MaskLayerDes | >>> from mindarmour.privacy.sup_privacy import MaskLayerDes | ||||
@@ -75,14 +75,14 @@ class SuppressPrivacyFactory: | |||||
>>> class Net(nn.Cell): | >>> class Net(nn.Cell): | ||||
... def __init__(self): | ... def __init__(self): | ||||
... super(Net, self).__init__() | ... super(Net, self).__init__() | ||||
... self._softmax = P.Softmax() | |||||
... self._softmax = ops.Softmax() | |||||
... self._Dense = nn.Dense(10,10) | ... self._Dense = nn.Dense(10,10) | ||||
... self._squeeze = P.Squeeze(1) | |||||
... self._squeeze = ops.Squeeze(1) | |||||
... def construct(self, inputs): | ... def construct(self, inputs): | ||||
... out = self._softmax(inputs) | ... out = self._softmax(inputs) | ||||
... out = self._Dense(out) | ... out = self._Dense(out) | ||||
... return self._squeeze(out) | ... return self._squeeze(out) | ||||
>>> context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU") | |||||
>>> set_context(mode=ms.PYNATIVE_MODE, device_target="CPU") | |||||
>>> network = Net() | >>> network = Net() | ||||
>>> masklayers = [] | >>> masklayers = [] | ||||
>>> masklayers.append(MaskLayerDes("_Dense.weight", 0, False, True, 10)) | >>> masklayers.append(MaskLayerDes("_Dense.weight", 0, False, True, 10)) | ||||
@@ -120,7 +120,7 @@ class SuppressCtrl(Cell): | |||||
parameters permanently. | parameters permanently. | ||||
For details, please check `Protecting User Privacy with Suppress Privacy | For details, please check `Protecting User Privacy with Suppress Privacy | ||||
<https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
Args: | Args: | ||||
networks (Cell): The training network. | networks (Cell): The training network. | ||||
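Suppression permanently zeroes out selected parameters. A minimal sketch of that masking step (illustrative code, not the SuppressCtrl internals):

```python
# A binary mask zeroes out the selected parameters; re-applying the mask
# after every update keeps the suppressed entries at zero permanently.

def apply_mask(weights, mask):
    """Zero out every weight whose mask entry is False."""
    return [w if keep else 0.0 for w, keep in zip(weights, mask)]

weights = [0.5, -1.2, 0.03, 0.8]
mask = [True, True, False, True]       # suppress the third parameter
suppressed = apply_mask(weights, mask)
```

Applying the mask again is a no-op, which is what makes the suppression permanent across training steps.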
@@ -59,7 +59,7 @@ class SuppressModel(Model): | |||||
Suppress privacy training model, which is overloaded from mindspore.train.model.Model. | Suppress privacy training model, which is overloaded from mindspore.train.model.Model. |||||
For details, please check `Protecting User Privacy with Suppress Privacy | For details, please check `Protecting User Privacy with Suppress Privacy | ||||
<https://mindspore.cn/mindarmour/docs/en/master/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/protect_user_privacy_with_suppress_privacy.html>`_. | |||||
Args: | Args: | ||||
network (Cell): The training network. | network (Cell): The training network. | ||||
@@ -90,7 +90,7 @@ class OodDetectorFeatureCluster(OodDetector): | |||||
image or not. | image or not. | ||||
For details, please check `Implementing the Concept Drift Detection Application of Image Data | For details, please check `Implementing the Concept Drift Detection Application of Image Data | ||||
<https://mindspore.cn/mindarmour/docs/en/master/concept_drift_images.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/concept_drift_images.html>`_. | |||||
Args: | Args: | ||||
model (Model): The training model. | model (Model): The training model. | ||||
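The feature-cluster detector above decides whether an image is out-of-distribution by comparing its features against clusters learned from in-distribution data. A hedged sketch of that idea (not the detector's real API; the centers and threshold below are assumed values):

```python
import math

# Cluster in-distribution features offline, then flag a sample as OOD
# when its distance to the nearest cluster center exceeds a threshold.

def is_ood(feature, centers, threshold):
    nearest = min(math.dist(feature, c) for c in centers)
    return nearest > threshold

centers = [(0.0, 0.0), (5.0, 5.0)]
in_dist = is_ood((0.2, -0.1), centers, threshold=1.0)     # near a center
out_dist = is_ood((10.0, 10.0), centers, threshold=1.0)   # far from both
```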
@@ -24,7 +24,7 @@ class ConceptDriftCheckTimeSeries: | |||||
r""" | r""" | ||||
ConceptDriftCheckTimeSeries is used to detect distribution changes in a series of examples. | ConceptDriftCheckTimeSeries is used to detect distribution changes in a series of examples. |||||
For details, please check `Implementing the Concept Drift Detection Application of Time Series Data | For details, please check `Implementing the Concept Drift Detection Application of Time Series Data | ||||
<https://mindspore.cn/mindarmour/docs/en/master/concept_drift_time_series.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/concept_drift_time_series.html>`_. | |||||
Args: | Args: | ||||
window_size(int): Size of a concept window, no less than 10. If given the input data, | window_size(int): Size of a concept window, no less than 10. If given the input data, | ||||
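Window-based drift detection, as set up by the `window_size` argument above, compares statistics of adjacent concept windows. An illustrative check (not ConceptDriftCheckTimeSeries itself, which uses richer features than a mean):

```python
from statistics import mean

# Compare summary statistics of two adjacent windows and report drift
# when the window means differ by more than a threshold.

def drift_detected(series, window_size, threshold):
    previous = series[-2 * window_size:-window_size]
    current = series[-window_size:]
    return abs(mean(current) - mean(previous)) > threshold

stable = [1.0, 1.1, 0.9, 1.0] * 5         # no distribution change
shifted = [1.0] * 10 + [3.0] * 10         # level shift halfway through
```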
@@ -32,7 +32,7 @@ class FaultInjector: | |||||
performance and reliability of the model. | performance and reliability of the model. | ||||
For details, please check `Implementing the Model Fault Injection and Evaluation | For details, please check `Implementing the Model Fault Injection and Evaluation | ||||
<https://mindspore.cn/mindarmour/docs/en/master/fault_injection.html>`_. | |||||
<https://mindspore.cn/mindarmour/docs/en/r1.9/fault_injection.html>`_. | |||||
Args: | Args: | ||||
model (Model): The model to be evaluated. | model (Model): The model to be evaluated. | ||||
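Fault injection of the kind evaluated above corrupts model state and measures the effect on accuracy. A toy sketch of one fault type (function and parameter names are hypothetical, not the FaultInjector API):

```python
import random

# Zero out a random fraction of the weights, a simple stand-in for the
# kinds of faults whose effect on accuracy an evaluation harness measures.

def inject_zero_faults(weights, fault_ratio, rng):
    """Return a copy of weights with len(weights) * fault_ratio entries zeroed."""
    count = max(1, int(len(weights) * fault_ratio))
    faulty = list(weights)
    for i in rng.sample(range(len(weights)), count):
        faulty[i] = 0.0
    return faulty

faulty = inject_zero_faults([1.0] * 10, fault_ratio=0.3, rng=random.Random(42))
```

Running the model before and after such injections quantifies its fault tolerance.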