
Modify the incorrect parameter types in 1.9

pull/422/head
huanxiaoling 2 years ago
parent
commit
a6abb49540
13 changed files with 57 additions and 57 deletions
  1. +1 -1    .jenkins/test/config/dependent_packages.yaml
  2. +1 -1    docs/api/api_python/mindarmour.adv_robustness.defenses.rst
  3. +4 -4    docs/api/api_python/mindarmour.privacy.diff_privacy.rst
  4. +4 -4    docs/api/api_python/mindarmour.privacy.evaluation.rst
  5. +4 -4    docs/api/api_python/mindarmour.rst
  6. +2 -2    mindarmour/adv_robustness/attacks/black/pointwise_attack.py
  7. +2 -2    mindarmour/adv_robustness/attacks/deep_fool.py
  8. +17 -17  mindarmour/adv_robustness/attacks/gradient_method.py
  9. +14 -14  mindarmour/adv_robustness/attacks/iterative_gradient_method.py
  10. +3 -3   mindarmour/adv_robustness/defenses/adversarial_defense.py
  11. +2 -2   mindarmour/adv_robustness/defenses/natural_adversarial_defense.py
  12. +2 -2   mindarmour/adv_robustness/defenses/projected_adversarial_defense.py
  13. +1 -1   mindarmour/privacy/evaluation/membership_inference.py

.jenkins/test/config/dependent_packages.yaml (+1 -1)

@@ -1,3 +1,3 @@
mindspore:
- 'mindspore/mindspore/version/202205/20220525/master_20220525210238_42306df4865f816c48a720d98e50ba2e586b1f59/'
+ 'mindspore/mindspore/version/202209/20220923/r1.9_20220923224458_c16390f59ab8dace3bb7e5a6ab4ae4d3bfe74bea/'


docs/api/api_python/mindarmour.adv_robustness.defenses.rst (+1 -1)

@@ -58,7 +58,7 @@ mindarmour.adv_robustness.defenses
Parameters:
- **network** (Cell) - The MindSpore network to be defended.
- **loss_fn** (Union[Loss, None]) - Loss function. Default: None.
- - **optimizer** (Cell)Optimizer used to train the network. Default: None.
+ - **optimizer** (Cell) - Optimizer used to train the network. Default: None.
- **bounds** (tuple) - Upper and lower bounds of the data, in the form (clip_min, clip_max). Default: (0.0, 1.0).
- **replace_ratio** (float) - Ratio of original samples replaced by adversarial samples. Default: 0.5.
- **eps** (float) - Step size of the attack method (FGSM). Default: 0.1.


docs/api/api_python/mindarmour.privacy.diff_privacy.rst (+4 -4)

@@ -8,10 +8,10 @@ mindarmour.privacy.diff_privacy
Generates noise from a Gaussian distribution with :math:`mean=0` and :math:`standard\_deviation = norm\_bound * initial\_noise\_multiplier`.

Parameters:
- - **norm_bound** (float)- Clipping bound for the l2 norm of the gradients. Default: 1.0.
- - **initial_noise_multiplier** (float)- Ratio of the standard deviation of the Gaussian noise to `norm_bound`, used to calculate the privacy budget. Default: 1.0.
- - **seed** (int)- Original random seed. If seed=0, the random normal uses a secure random number; if seed!=0, it generates values from the given seed. Default: 0.
- - **decay_policy** (str)- Decay policy. Default: None.
+ - **norm_bound** (float) - Clipping bound for the l2 norm of the gradients. Default: 1.0.
+ - **initial_noise_multiplier** (float) - Ratio of the standard deviation of the Gaussian noise to `norm_bound`, used to calculate the privacy budget. Default: 1.0.
+ - **seed** (int) - Original random seed. If seed=0, the random normal uses a secure random number; if seed!=0, it generates values from the given seed. Default: 0.
+ - **decay_policy** (str) - Decay policy. Default: None.

.. py:method:: construct(gradients)
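
For context, the hunk above documents the Gaussian noise mechanism whose construct(gradients) draws noise with standard deviation norm_bound * initial_noise_multiplier. A minimal usage sketch, assuming the class is NoiseGaussianRandom from mindarmour.privacy.diff_privacy (earlier releases exposed a similar GaussianRandom) and using a toy tensor in place of real clipped gradients:

    import numpy as np
    from mindspore import Tensor
    # Assumed class name; check the release you are on.
    from mindarmour.privacy.diff_privacy import NoiseGaussianRandom

    # Toy stand-in for a batch of clipped gradients.
    gradients = Tensor(np.ones((3, 4)).astype(np.float32))

    # Noise is sampled from N(0, (norm_bound * initial_noise_multiplier)^2).
    mech = NoiseGaussianRandom(norm_bound=1.0, initial_noise_multiplier=1.0, seed=0)
    noise = mech(gradients)
    print(noise.shape)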



docs/api/api_python/mindarmour.privacy.evaluation.rst (+4 -4)

@@ -26,8 +26,8 @@ mindarmour.privacy.evaluation
The evaluation metrics should be specified by `metrics`.

Parameters:
- - **dataset_train** (minspore.dataset) - Training dataset of the target model.
- - **dataset_test** (minspore.dataset) - Test dataset of the target model.
+ - **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+ - **dataset_test** (mindspore.dataset) - Test dataset of the target model.
- **metrics** (Union[list, tuple]) - Evaluation metrics. The values must be in ["precision", "accuracy", "recall"]. Default: ["precision"].

Returns:
@@ -38,8 +38,8 @@ mindarmour.privacy.evaluation
Trains the attack models with the input datasets according to the configuration.

Parameters:
- - **dataset_train** (minspore.dataset) - Training dataset of the target model.
- - **dataset_test** (minspore.dataset) - Test set of the target model.
+ - **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+ - **dataset_test** (mindspore.dataset) - Test set of the target model.
- **attack_config** (Union[list, tuple]) - Parameter settings of the attack models. The format is
.. code-block:: python



docs/api/api_python/mindarmour.rst (+4 -4)

@@ -236,8 +236,8 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and protecting privacy.
The evaluation metrics should be specified by `metrics`.

Parameters:
- - **dataset_train** (minspore.dataset) - Training dataset of the target model.
- - **dataset_test** (minspore.dataset) - Test dataset of the target model.
+ - **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+ - **dataset_test** (mindspore.dataset) - Test dataset of the target model.
- **metrics** (Union[list, tuple]) - Evaluation metrics. The values must be in ["precision", "accuracy", "recall"]. Default: ["precision"].

Returns:
@@ -248,8 +248,8 @@ MindArmour is MindSpore's toolbox for enhancing model trustworthiness and protecting privacy.
Trains the attack models with the input datasets according to the configuration.

Parameters:
- - **dataset_train** (minspore.dataset) - Training dataset of the target model.
- - **dataset_test** (minspore.dataset) - Test set of the target model.
+ - **dataset_train** (mindspore.dataset) - Training dataset of the target model.
+ - **dataset_test** (mindspore.dataset) - Test set of the target model.
- **attack_config** (Union[list, tuple]) - Parameter settings of the attack models. The format is:

.. code-block::


mindarmour/adv_robustness/attacks/black/pointwise_attack.py (+2 -2)

@@ -35,14 +35,14 @@ class PointWiseAttack(Attack):

References: `L. Schott, J. Rauber, M. Bethge, W. Brendel: "Towards the
first adversarially robust neural network model on MNIST", ICLR (2019)
- <https://arxiv.org/abs/1805.09190>`_
+ <https://arxiv.org/abs/1805.09190>`_.

Args:
model (BlackModel): Target model.
max_iter (int): Max rounds of iteration to generate adversarial image. Default: 1000.
search_iter (int): Max rounds of binary search. Default: 10.
is_targeted (bool): If True, targeted attack. If False, untargeted attack. Default: False.
- init_attack (Attack): Attack used to find a starting point. Default: None.
+ init_attack (Union[Attack, None]): Attack used to find a starting point. Default: None.
sparse (bool): If True, input labels are sparse-encoded. If False, input labels are one-hot-encoded.
Default: True.
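
Since this hunk only retypes docstring parameters, a usage sketch may help place them. It is a sketch only: the black-box wrapper, the tiny stand-in network and the random data below are assumptions, and generate is expected to return success flags, adversarial samples and query counts as in the class examples.

    import numpy as np
    from mindspore import Tensor, nn
    from mindarmour import BlackModel
    from mindarmour.adv_robustness.attacks import PointWiseAttack

    class QueryModel(BlackModel):
        """Hypothetical black-box wrapper that forwards queries to a MindSpore network."""
        def __init__(self, network):
            super().__init__()
            self._network = network

        def predict(self, inputs):
            # PointWiseAttack expects numpy prediction scores back from the queried model.
            return self._network(Tensor(inputs.astype(np.float32))).asnumpy()

    # Tiny stand-in network and random data; substitute the real target model in practice.
    model = QueryModel(nn.Dense(16, 3))
    images = np.random.rand(4, 16).astype(np.float32)
    labels = np.random.randint(0, 3, 4).astype(np.int32)  # sparse labels, matching sparse=True

    attack = PointWiseAttack(model, max_iter=100, search_iter=10, is_targeted=False, sparse=True)
    is_adv, adv_images, query_times = attack.generate(images, labels)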



mindarmour/adv_robustness/attacks/deep_fool.py (+2 -2)

@@ -96,7 +96,7 @@ class DeepFool(Attack):
sample to the nearest classification boundary and crossing the boundary.

Reference: `DeepFool: a simple and accurate method to fool deep neural
- networks <https://arxiv.org/abs/1511.04599>`_
+ networks <https://arxiv.org/abs/1511.04599>`_.

Args:
network (Cell): Target model.
@@ -109,7 +109,7 @@ class DeepFool(Attack):
max_iters (int): Max iterations, which should be
greater than zero. Default: 50.
overshoot (float): Overshoot parameter. Default: 0.02.
- norm_level (Union[int, str]): Order of the vector norm. Possible values: np.inf
+ norm_level (Union[int, str, numpy.inf]): Order of the vector norm. Possible values: np.inf
or 2. Default: 2.
bounds (Union[tuple, list]): Upper and lower bounds of data range. In form of (clip_min,
clip_max). Default: None.
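
As a quick illustration of the parameters retyped above, a sketch of an untargeted DeepFool run; the tiny stand-in classifier and random batch are assumptions, and the constructor signature may vary slightly between releases.

    import numpy as np
    from mindspore import nn
    from mindarmour.adv_robustness.attacks import DeepFool

    # Tiny stand-in classifier and random batch; substitute a real trained network in practice.
    net = nn.Dense(16, 3)
    images = np.random.rand(4, 16).astype(np.float32)
    labels = np.random.randint(0, 3, 4).astype(np.int32)  # sparse labels

    attack = DeepFool(net, num_classes=3, max_iters=50, overshoot=0.02,
                      norm_level=2, bounds=(0.0, 1.0))
    adv_images = attack.generate(images, labels)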


mindarmour/adv_robustness/attacks/gradient_method.py (+17 -17)

@@ -130,21 +130,21 @@ class FastGradientMethod(GradientMethod):

References: `I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining
and harnessing adversarial examples," in ICLR, 2015.
- <https://arxiv.org/abs/1412.6572>`_
+ <https://arxiv.org/abs/1412.6572>`_.

Args:
network (Cell): Target model.
eps (float): Proportion of single-step adversarial perturbation generated
by the attack to data range. Default: 0.07.
- alpha (float): Proportion of single-step random perturbation to data range.
+ alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
Default: None.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
- norm_level (Union[int, numpy.inf]): Order of the norm.
+ norm_level (Union[int, str, numpy.inf]): Order of the norm.
Possible values: np.inf, 1 or 2. Default: 2.
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -207,7 +207,7 @@ class RandomFastGradientMethod(FastGradientMethod):

References: `Florian Tramer, Alexey Kurakin, Nicolas Papernot, "Ensemble
adversarial training: Attacks and defenses" in ICLR, 2018
- <https://arxiv.org/abs/1705.07204>`_
+ <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.
@@ -217,11 +217,11 @@ class RandomFastGradientMethod(FastGradientMethod):
Default: 0.035.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
- norm_level (Union[int, numpy.inf]): Order of the norm.
+ norm_level (Union[int, str, numpy.inf): Order of the norm.
Possible values: np.inf, 1 or 2. Default: 2.
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Raises:
@@ -264,19 +264,19 @@ class FastGradientSignMethod(GradientMethod):

References: `Ian J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining
and harnessing adversarial examples," in ICLR, 2015
- <https://arxiv.org/abs/1412.6572>`_
+ <https://arxiv.org/abs/1412.6572>`_.

Args:
network (Cell): Target model.
eps (float): Proportion of single-step adversarial perturbation generated
by the attack to data range. Default: 0.07.
- alpha (float): Proportion of single-step random perturbation to data range.
+ alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
Default: None.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -338,7 +338,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):
to create adversarial noises.

References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.
@@ -350,7 +350,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):
In form of (clip_min, clip_max). Default: (0.0, 1.0).
is_targeted (bool): True: targeted attack. False: untargeted attack.
Default: False.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Raises:
@@ -391,17 +391,17 @@ class LeastLikelyClassMethod(FastGradientSignMethod):
least-likely class to generate the adversarial examples.

References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.
eps (float): Proportion of single-step adversarial perturbation generated
by the attack to data range. Default: 0.07.
- alpha (float): Proportion of single-step random perturbation to data range.
+ alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
Default: None.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -439,7 +439,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):
targets the least-likely class to generate the adversarial examples.

References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_
and defenses," in ICLR, 2018 <https://arxiv.org/abs/1705.07204>`_.

Args:
network (Cell): Target model.
@@ -449,7 +449,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):
Default: 0.035.
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Raises:
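
The retyped arguments above all belong to the same family of constructors, so one sketch covers them; the tiny stand-in network and random one-hot batch are assumptions, matching the non-sparse loss used below.

    import numpy as np
    from mindspore import nn
    from mindarmour.adv_robustness.attacks import FastGradientMethod

    # Tiny stand-in classifier and random batch; substitute a real trained network in practice.
    net = nn.Dense(16, 3)
    images = np.random.rand(4, 16).astype(np.float32)
    labels = np.eye(3)[np.random.randint(0, 3, 4)].astype(np.float32)  # one-hot

    loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
    attack = FastGradientMethod(net, eps=0.07, alpha=None, bounds=(0.0, 1.0),
                                norm_level=2, is_targeted=False, loss_fn=loss_fn)
    adv_images = attack.generate(images, labels)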


mindarmour/adv_robustness/attacks/iterative_gradient_method.py (+14 -14)

@@ -115,7 +115,7 @@ class IterativeGradientMethod(Attack):
bounds (tuple): Upper and lower bounds of data, indicating the data range.
In form of (clip_min, clip_max). Default: (0.0, 1.0).
nb_iter (int): Number of iteration. Default: 5.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.
"""
def __init__(self, network, eps=0.3, eps_iter=0.1, bounds=(0.0, 1.0), nb_iter=5,
@@ -162,7 +162,7 @@ class BasicIterativeMethod(IterativeGradientMethod):
adversarial examples.

References: `A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples
- in the physical world," in ICLR, 2017 <https://arxiv.org/abs/1607.02533>`_
+ in the physical world," in ICLR, 2017 <https://arxiv.org/abs/1607.02533>`_.

Args:
network (Cell): Target model.
@@ -175,7 +175,7 @@ class BasicIterativeMethod(IterativeGradientMethod):
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
nb_iter (int): Number of iteration. Default: 5.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -263,7 +263,7 @@ class MomentumIterativeMethod(IterativeGradientMethod):


References: `Y. Dong, et al., "Boosting adversarial attacks with
momentum," arXiv:1710.06081, 2017 <https://arxiv.org/abs/1710.06081>`_
momentum," arXiv:1710.06081, 2017 <https://arxiv.org/abs/1710.06081>`_.

Args:
network (Cell): Target model.
@@ -277,9 +277,9 @@ class MomentumIterativeMethod(IterativeGradientMethod):
attack. Default: False.
nb_iter (int): Number of iteration. Default: 5.
decay_factor (float): Decay factor in iterations. Default: 1.0.
- norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+ norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
np.inf, 1 or 2. Default: 'inf'.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -407,7 +407,7 @@ class ProjectedGradientDescent(BasicIterativeMethod):
the attack proposed by Madry et al. for adversarial training.

References: `A. Madry, et al., "Towards deep learning models resistant to
adversarial attacks," in ICLR, 2018 <https://arxiv.org/abs/1706.06083>`_
adversarial attacks," in ICLR, 2018 <https://arxiv.org/abs/1706.06083>`_.

Args:
network (Cell): Target model.
@@ -420,9 +420,9 @@ class ProjectedGradientDescent(BasicIterativeMethod):
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
nb_iter (int): Number of iteration. Default: 5.
- norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+ norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
np.inf, 1 or 2. Default: 'inf'.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -503,7 +503,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):
on the input data could improve the transferability of the adversarial examples.

References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_
Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_.

Args:
network (Cell): Target model.
@@ -514,7 +514,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
prob (float): Transformation probability. Default: 0.5.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
@@ -558,7 +558,7 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):


References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_
Adversarial Examples With Input Diversity," in CVPR, 2019 <https://arxiv.org/abs/1803.06978>`_.

Args:
network (Cell): Target model.
@@ -568,10 +568,10 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):
In form of (clip_min, clip_max). Default: (0.0, 1.0).
is_targeted (bool): If True, targeted attack. If False, untargeted
attack. Default: False.
- norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+ norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
np.inf, 1 or 2. Default: 'l1'.
prob (float): Transformation probability. Default: 0.5.
- loss_fn (Loss): Loss function for optimization. If None, the input network \
+ loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
is already equipped with loss function. Default: None.

Examples:
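
The class's own Examples block is truncated by this hunk, so here is a short ProjectedGradientDescent sketch under the same assumptions as the FastGradientMethod sketch above (stand-in network, random one-hot batch, non-sparse loss).

    import numpy as np
    from mindspore import nn
    from mindarmour.adv_robustness.attacks import ProjectedGradientDescent

    # Tiny stand-in classifier and random batch; substitute a real trained network in practice.
    net = nn.Dense(16, 3)
    images = np.random.rand(4, 16).astype(np.float32)
    labels = np.eye(3)[np.random.randint(0, 3, 4)].astype(np.float32)  # one-hot

    loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
    attack = ProjectedGradientDescent(net, eps=0.3, eps_iter=0.1, bounds=(0.0, 1.0),
                                      is_targeted=False, nb_iter=5, norm_level='inf',
                                      loss_fn=loss_fn)
    adv_images = attack.generate(images, labels)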


mindarmour/adv_robustness/defenses/adversarial_defense.py (+3 -3)

@@ -32,7 +32,7 @@ class AdversarialDefense(Defense):

Args:
network (Cell): A MindSpore network to be defensed.
- loss_fn (Functions): Loss function. Default: None.
+ loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.

Examples:
@@ -105,7 +105,7 @@ class AdversarialDefenseWithAttacks(AdversarialDefense):
Args:
network (Cell): A MindSpore network to be defensed.
attacks (list[Attack]): List of attack method.
- loss_fn (Functions): Loss function. Default: None.
+ loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
clip_max). Default: (0.0, 1.0).
@@ -204,7 +204,7 @@ class EnsembleAdversarialDefense(AdversarialDefenseWithAttacks):
Args:
network (Cell): A MindSpore network to be defensed.
attacks (list[Attack]): List of attack method.
- loss_fn (Functions): Loss function. Default: None.
+ loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
clip_max). Default: (0.0, 1.0).
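
The defense classes above share the network/loss_fn/optimizer triple whose types this commit corrects. A sketch of ensemble adversarial training follows; the stand-in network, random batch, optimizer settings and attack parameters are assumptions.

    import numpy as np
    from mindspore import nn
    from mindarmour.adv_robustness.attacks import FastGradientSignMethod, ProjectedGradientDescent
    from mindarmour.adv_robustness.defenses import EnsembleAdversarialDefense

    # Tiny stand-in network and one random batch; substitute the real model and data in practice.
    net = nn.Dense(16, 3)
    images = np.random.rand(4, 16).astype(np.float32)
    labels = np.eye(3)[np.random.randint(0, 3, 4)].astype(np.float32)  # one-hot

    loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
    optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
    fgsm = FastGradientSignMethod(net, eps=0.3, loss_fn=loss_fn)
    pgd = ProjectedGradientDescent(net, eps=0.3, nb_iter=5, loss_fn=loss_fn)
    defense = EnsembleAdversarialDefense(net, [fgsm, pgd], loss_fn=loss_fn,
                                         optimizer=optimizer, bounds=(0.0, 1.0),
                                         replace_ratio=0.5)
    loss = defense.defense(images, labels)  # one adversarial-training step on the batch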


mindarmour/adv_robustness/defenses/natural_adversarial_defense.py (+2 -2)

@@ -23,11 +23,11 @@ class NaturalAdversarialDefense(AdversarialDefenseWithAttacks):
Adversarial training based on FGSM.

Reference: `A. Kurakin, et al., "Adversarial machine learning at scale," in
- ICLR, 2017. <https://arxiv.org/abs/1611.01236>`_
+ ICLR, 2017. <https://arxiv.org/abs/1611.01236>`_.

Args:
network (Cell): A MindSpore network to be defensed.
- loss_fn (Functions): Loss function. Default: None.
+ loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the network. Default: None.
bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
clip_max). Default: (0.0, 1.0).
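
NaturalAdversarialDefense takes the same loss_fn/optimizer pair plus the FGSM step size eps; a minimal sketch under the same stand-in assumptions as the ensemble example:

    import numpy as np
    from mindspore import nn
    from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense

    # Tiny stand-in network and one random batch; substitute the real model and data in practice.
    net = nn.Dense(16, 3)
    images = np.random.rand(4, 16).astype(np.float32)
    labels = np.eye(3)[np.random.randint(0, 3, 4)].astype(np.float32)  # one-hot

    loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
    optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
    nad = NaturalAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer,
                                    bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.1)
    loss = nad.defense(images, labels)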


mindarmour/adv_robustness/defenses/projected_adversarial_defense.py (+2 -2)

@@ -23,11 +23,11 @@ class ProjectedAdversarialDefense(AdversarialDefenseWithAttacks):
Adversarial training based on PGD.

Reference: `A. Madry, et al., "Towards deep learning models resistant to
- adversarial attacks," in ICLR, 2018. <https://arxiv.org/abs/1611.01236>`_
+ adversarial attacks," in ICLR, 2018. <https://arxiv.org/abs/1611.01236>`_.

Args:
network (Cell): A MindSpore network to be defensed.
- loss_fn (Functions): Loss function. Default: None.
+ loss_fn (Union[Loss, None]): Loss function. Default: None.
optimizer (Cell): Optimizer used to train the nerwork. Default: None.
bounds (tuple): Upper and lower bounds of input data. In form of
(clip_min, clip_max). Default: (0.0, 1.0).


mindarmour/privacy/evaluation/membership_inference.py (+1 -1)

@@ -103,7 +103,7 @@ class MembershipInference:

References: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov.
Membership Inference Attacks against Machine Learning Models. 2017.
- <https://arxiv.org/abs/1610.05820v2>`_
+ <https://arxiv.org/abs/1610.05820v2>`_.

Args:
model (Model): Target model.
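
To round out the membership-inference docs touched above, a sketch of the train/eval flow; the stand-in model, the random datasets and the attack_config entries are assumptions (method names such as "knn" and "lr" follow the config format accepted in recent releases).

    import numpy as np
    import mindspore.dataset as ds
    from mindspore import nn, Model
    from mindarmour.privacy.evaluation import MembershipInference

    # Tiny stand-in target model; substitute the real trained Model in practice.
    net = nn.Dense(16, 3)
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
    opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
    model = Model(net, loss, opt)

    def random_ds(n):
        # Random (data, label) pairs standing in for the real train/test datasets.
        data = np.random.rand(n, 16).astype(np.float32)
        label = np.random.randint(0, 3, n).astype(np.int32)
        return ds.NumpySlicesDataset((data, label), column_names=["data", "label"],
                                     shuffle=False).batch(8)

    attacker = MembershipInference(model, n_jobs=-1)
    attack_config = [{"method": "knn", "params": {"n_neighbors": [3, 5]}},
                     {"method": "lr", "params": {"C": [0.1, 1.0]}}]
    attacker.train(random_ds(64), random_ds(64), attack_config)
    result = attacker.eval(random_ds(32), random_ds(32), metrics=["precision", "accuracy", "recall"])
    print(result)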

