diff --git a/docs/api/api_python/mindarmour.adv_robustness.defenses.rst b/docs/api/api_python/mindarmour.adv_robustness.defenses.rst
index c24dae6..29a7578 100644
--- a/docs/api/api_python/mindarmour.adv_robustness.defenses.rst
+++ b/docs/api/api_python/mindarmour.adv_robustness.defenses.rst
@@ -58,7 +58,7 @@ mindarmour.adv_robustness.defenses
     参数:
         - **network** (Cell) - 要防御的MindSpore网络。
         - **loss_fn** (Union[Loss, None]) - 损失函数。默认值:None。
-        - **optimizer** (Cell):用于训练网络的优化器。默认值:None。
+        - **optimizer** (Cell) - 用于训练网络的优化器。默认值:None。
         - **bounds** (tuple) - 数据的上下界。以(clip_min, clip_max)的形式出现。默认值:(0.0, 1.0)。
         - **replace_ratio** (float) - 用对抗样本替换原始样本的比率。默认值:0.5。
         - **eps** (float) - 攻击方法(FGSM)的步长。默认值:0.1。
diff --git a/docs/api/api_python/mindarmour.privacy.diff_privacy.rst b/docs/api/api_python/mindarmour.privacy.diff_privacy.rst
index 4ead822..5741ddb 100644
--- a/docs/api/api_python/mindarmour.privacy.diff_privacy.rst
+++ b/docs/api/api_python/mindarmour.privacy.diff_privacy.rst
@@ -8,10 +8,10 @@ mindarmour.privacy.diff_privacy
     基于 :math:`mean=0` 以及 :math:`standard\_deviation = norm\_bound * initial\_noise\_multiplier` 的高斯分布产生噪声。

     参数:
-        - **norm_bound** (float)- 梯度的l2范数的裁剪范围。默认值:1.0。
-        - **initial_noise_multiplier** (float)- 高斯噪声标准偏差除以 `norm_bound` 的比率,将用于计算隐私预算。默认值:1.0。
-        - **seed** (int)- 原始随机种子,如果seed=0随机正态将使用安全随机数。如果seed!=0随机正态将使用给定的种子生成值。默认值:0。
-        - **decay_policy** (str)- 衰减策略。默认值:None。
+        - **norm_bound** (float) - 梯度的l2范数的裁剪范围。默认值:1.0。
+        - **initial_noise_multiplier** (float) - 高斯噪声标准偏差除以 `norm_bound` 的比率,将用于计算隐私预算。默认值:1.0。
+        - **seed** (int) - 原始随机种子,如果seed=0随机正态将使用安全随机数。如果seed!=0随机正态将使用给定的种子生成值。默认值:0。
+        - **decay_policy** (str) - 衰减策略。默认值:None。

     .. py:method:: construct(gradients)
diff --git a/docs/api/api_python/mindarmour.privacy.evaluation.rst b/docs/api/api_python/mindarmour.privacy.evaluation.rst
index 8c0f2e0..c5743da 100644
--- a/docs/api/api_python/mindarmour.privacy.evaluation.rst
+++ b/docs/api/api_python/mindarmour.privacy.evaluation.rst
@@ -26,8 +26,8 @@ mindarmour.privacy.evaluation
         评估指标应由metrics规定。

         参数:
-            - **dataset_train** (minspore.dataset) - 目标模型的训练数据集。
-            - **dataset_test** (minspore.dataset) - 目标模型的测试数据集。
+            - **dataset_train** (mindspore.dataset) - 目标模型的训练数据集。
+            - **dataset_test** (mindspore.dataset) - 目标模型的测试数据集。
             - **metrics** (Union[list, tuple]) - 评估指标。指标的值必须在["precision", "accuracy", "recall"]中。默认值:["precision"]。

         返回:
@@ -38,8 +38,8 @@ mindarmour.privacy.evaluation
         根据配置,使用输入数据集训练攻击模型。

         参数:
-            - **dataset_train** (minspore.dataset) - 目标模型的训练数据集。
-            - **dataset_test** (minspore.dataset) - 目标模型的测试集。
+            - **dataset_train** (mindspore.dataset) - 目标模型的训练数据集。
+            - **dataset_test** (mindspore.dataset) - 目标模型的测试集。
             - **attack_config** (Union[list, tuple]) - 攻击模型的参数设置。格式为

             .. code-block:: python
diff --git a/docs/api/api_python/mindarmour.rst b/docs/api/api_python/mindarmour.rst
index 56d02fc..5a27156 100644
--- a/docs/api/api_python/mindarmour.rst
+++ b/docs/api/api_python/mindarmour.rst
@@ -237,8 +237,8 @@ MindArmour是MindSpore的工具箱,用于增强模型可信,实现隐私保
         评估指标应由metrics规定。

         参数:
-            - **dataset_train** (minspore.dataset) - 目标模型的训练数据集。
-            - **dataset_test** (minspore.dataset) - 目标模型的测试数据集。
+            - **dataset_train** (mindspore.dataset) - 目标模型的训练数据集。
+            - **dataset_test** (mindspore.dataset) - 目标模型的测试数据集。
             - **metrics** (Union[list, tuple]) - 评估指标。指标的值必须在["precision", "accuracy", "recall"]中。默认值:["precision"]。

         返回:
@@ -249,8 +249,8 @@ MindArmour是MindSpore的工具箱,用于增强模型可信,实现隐私保
         根据配置,使用输入数据集训练攻击模型。

         参数:
-            - **dataset_train** (minspore.dataset) - 目标模型的训练数据集。
-            - **dataset_test** (minspore.dataset) - 目标模型的测试集。
+            - **dataset_train** (mindspore.dataset) - 目标模型的训练数据集。
+            - **dataset_test** (mindspore.dataset) - 目标模型的测试集。
             - **attack_config** (Union[list, tuple]) - 攻击模型的参数设置。格式为:

             .. code-block::
diff --git a/mindarmour/adv_robustness/attacks/black/pointwise_attack.py b/mindarmour/adv_robustness/attacks/black/pointwise_attack.py
index 5cf0b10..02b9588 100644
--- a/mindarmour/adv_robustness/attacks/black/pointwise_attack.py
+++ b/mindarmour/adv_robustness/attacks/black/pointwise_attack.py
@@ -35,14 +35,14 @@ class PointWiseAttack(Attack):
     References: `L. Schott, J. Rauber, M. Bethge, W. Brendel: "Towards the first
         adversarially robust neural network model on MNIST", ICLR (2019)
-        `_
+        `_.

     Args:
         model (BlackModel): Target model.
         max_iter (int): Max rounds of iteration to generate adversarial image. Default: 1000.
         search_iter (int): Max rounds of binary search. Default: 10.
         is_targeted (bool): If True, targeted attack. If False, untargeted attack. Default: False.
-        init_attack (Attack): Attack used to find a starting point. Default: None.
+        init_attack (Union[Attack, None]): Attack used to find a starting point. Default: None.
         sparse (bool): If True, input labels are sparse-encoded. If False,
             input labels are one-hot-encoded. Default: True.
diff --git a/mindarmour/adv_robustness/attacks/deep_fool.py b/mindarmour/adv_robustness/attacks/deep_fool.py
index d4e4c3a..ab6f3e8 100644
--- a/mindarmour/adv_robustness/attacks/deep_fool.py
+++ b/mindarmour/adv_robustness/attacks/deep_fool.py
@@ -96,7 +96,7 @@ class DeepFool(Attack):
     sample to the nearest classification boundary and crossing the boundary.

     Reference: `DeepFool: a simple and accurate method to fool deep neural
-        networks `_
+        networks `_.

     Args:
         network (Cell): Target model.
@@ -109,7 +109,7 @@
         max_iters (int): Max iterations, which should be greater than zero. Default: 50.
         overshoot (float): Overshoot parameter. Default: 0.02.
-        norm_level (Union[int, str]): Order of the vector norm. Possible values: np.inf
+        norm_level (Union[int, str, numpy.inf]): Order of the vector norm. Possible values: np.inf
             or 2. Default: 2.
         bounds (Union[tuple, list]): Upper and lower bounds of data range. In form of
             (clip_min, clip_max). Default: None.
diff --git a/mindarmour/adv_robustness/attacks/gradient_method.py b/mindarmour/adv_robustness/attacks/gradient_method.py
index 1ca002c..1b50f2c 100644
--- a/mindarmour/adv_robustness/attacks/gradient_method.py
+++ b/mindarmour/adv_robustness/attacks/gradient_method.py
@@ -130,21 +130,21 @@ class FastGradientMethod(GradientMethod):
     References: `I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and
         harnessing adversarial examples," in ICLR, 2015.
-        `_
+        `_.

     Args:
         network (Cell): Target model.
        eps (float): Proportion of single-step adversarial perturbation generated
            by the attack to data range. Default: 0.07.
-        alpha (float): Proportion of single-step random perturbation to data range.
+        alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
            Default: None.
        bounds (tuple): Upper and lower bounds of data, indicating the data range.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
-        norm_level (Union[int, numpy.inf]): Order of the norm.
+        norm_level (Union[int, str, numpy.inf]): Order of the norm.
            Possible values: np.inf, 1 or 2. Default: 2.
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -207,7 +207,7 @@ class RandomFastGradientMethod(FastGradientMethod):
     References: `Florian Tramer, Alexey Kurakin, Nicolas Papernot, "Ensemble
         adversarial training: Attacks and defenses" in ICLR, 2018
-        `_
+        `_.

     Args:
         network (Cell): Target model.
@@ -217,11 +217,11 @@
            Default: 0.035.
        bounds (tuple): Upper and lower bounds of data, indicating the data range.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
-        norm_level (Union[int, numpy.inf]): Order of the norm.
+        norm_level (Union[int, str, numpy.inf]): Order of the norm.
            Possible values: np.inf, 1 or 2. Default: 2.
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Raises:
@@ -264,19 +264,19 @@ class FastGradientSignMethod(GradientMethod):
     References: `Ian J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and
         harnessing adversarial examples," in ICLR, 2015
-        `_
+        `_.

     Args:
         network (Cell): Target model.
        eps (float): Proportion of single-step adversarial perturbation generated
            by the attack to data range. Default: 0.07.
-        alpha (float): Proportion of single-step random perturbation to data range.
+        alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
            Default: None.
        bounds (tuple): Upper and lower bounds of data, indicating the data range.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -338,7 +338,7 @@ class RandomFastGradientSignMethod(FastGradientSignMethod):
     to create adversarial noises.

     References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-        and defenses," in ICLR, 2018 `_
+        and defenses," in ICLR, 2018 `_.

     Args:
         network (Cell): Target model.
@@ -350,7 +350,7 @@
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
        is_targeted (bool): True: targeted attack. False: untargeted attack.
            Default: False.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Raises:
@@ -391,17 +391,17 @@ class LeastLikelyClassMethod(FastGradientSignMethod):
     least-likely class to generate the adversarial examples.

     References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-        and defenses," in ICLR, 2018 `_
+        and defenses," in ICLR, 2018 `_.

     Args:
         network (Cell): Target model.
        eps (float): Proportion of single-step adversarial perturbation generated
            by the attack to data range. Default: 0.07.
-        alpha (float): Proportion of single-step random perturbation to data range.
+        alpha (Union[float, None]): Proportion of single-step random perturbation to data range.
            Default: None.
        bounds (tuple): Upper and lower bounds of data, indicating the data range.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -439,7 +439,7 @@ class RandomLeastLikelyClassMethod(FastGradientSignMethod):
     targets the least-likely class to generate the adversarial examples.

     References: `F. Tramer, et al., "Ensemble adversarial training: Attacks
-        and defenses," in ICLR, 2018 `_
+        and defenses," in ICLR, 2018 `_.

     Args:
         network (Cell): Target model.
@@ -449,7 +449,7 @@
            Default: 0.035.
        bounds (tuple): Upper and lower bounds of data, indicating the data range.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Raises:
diff --git a/mindarmour/adv_robustness/attacks/iterative_gradient_method.py b/mindarmour/adv_robustness/attacks/iterative_gradient_method.py
index 99e45b8..32dc3da 100644
--- a/mindarmour/adv_robustness/attacks/iterative_gradient_method.py
+++ b/mindarmour/adv_robustness/attacks/iterative_gradient_method.py
@@ -115,7 +115,7 @@ class IterativeGradientMethod(Attack):
        bounds (tuple): Upper and lower bounds of data, indicating the data range.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
        nb_iter (int): Number of iteration. Default: 5.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.
     """
     def __init__(self, network, eps=0.3, eps_iter=0.1, bounds=(0.0, 1.0), nb_iter=5,
@@ -162,7 +162,7 @@ class BasicIterativeMethod(IterativeGradientMethod):
     adversarial examples.

     References: `A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples
-        in the physical world," in ICLR, 2017 `_
+        in the physical world," in ICLR, 2017 `_.

     Args:
         network (Cell): Target model.
@@ -175,7 +175,7 @@
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
        nb_iter (int): Number of iteration. Default: 5.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -263,7 +263,7 @@ class MomentumIterativeMethod(IterativeGradientMethod):
     References: `Y. Dong, et al., "Boosting adversarial attacks with
-        momentum," arXiv:1710.06081, 2017 `_
+        momentum," arXiv:1710.06081, 2017 `_.

     Args:
         network (Cell): Target model.
@@ -277,9 +277,9 @@
            attack. Default: False.
        nb_iter (int): Number of iteration. Default: 5.
        decay_factor (float): Decay factor in iterations. Default: 1.0.
-        norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+        norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
            np.inf, 1 or 2. Default: 'inf'.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -407,7 +407,7 @@ class ProjectedGradientDescent(BasicIterativeMethod):
     the attack proposed by Madry et al. for adversarial training.

     References: `A. Madry, et al., "Towards deep learning models resistant to
-        adversarial attacks," in ICLR, 2018 `_
+        adversarial attacks," in ICLR, 2018 `_.

     Args:
         network (Cell): Target model.
@@ -420,9 +420,9 @@
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
        nb_iter (int): Number of iteration. Default: 5.
-        norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+        norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
            np.inf, 1 or 2. Default: 'inf'.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -503,7 +503,7 @@ class DiverseInputIterativeMethod(BasicIterativeMethod):
     on the input data could improve the transferability of the adversarial examples.

     References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
-        Adversarial Examples With Input Diversity," in CVPR, 2019 `_
+        Adversarial Examples With Input Diversity," in CVPR, 2019 `_.

     Args:
         network (Cell): Target model.
@@ -514,7 +514,7 @@
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
        prob (float): Transformation probability. Default: 0.5.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
@@ -558,7 +558,7 @@ class MomentumDiverseInputIterativeMethod(MomentumIterativeMethod):
     References: `Xie, Cihang and Zhang, et al., "Improving Transferability of
-        Adversarial Examples With Input Diversity," in CVPR, 2019 `_
+        Adversarial Examples With Input Diversity," in CVPR, 2019 `_.

     Args:
         network (Cell): Target model.
@@ -568,10 +568,10 @@
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
        is_targeted (bool): If True, targeted attack. If False, untargeted
            attack. Default: False.
-        norm_level (Union[int, numpy.inf]): Order of the norm. Possible values:
+        norm_level (Union[int, str, numpy.inf]): Order of the norm. Possible values:
            np.inf, 1 or 2. Default: 'l1'.
        prob (float): Transformation probability. Default: 0.5.
-        loss_fn (Loss): Loss function for optimization. If None, the input network \
+        loss_fn (Union[Loss, None]): Loss function for optimization. If None, the input network \
            is already equipped with loss function. Default: None.

     Examples:
diff --git a/mindarmour/adv_robustness/defenses/adversarial_defense.py b/mindarmour/adv_robustness/defenses/adversarial_defense.py
index 71c889c..8d17a2d 100644
--- a/mindarmour/adv_robustness/defenses/adversarial_defense.py
+++ b/mindarmour/adv_robustness/defenses/adversarial_defense.py
@@ -32,7 +32,7 @@ class AdversarialDefense(Defense):
     Args:
         network (Cell): A MindSpore network to be defensed.
-        loss_fn (Functions): Loss function. Default: None.
+        loss_fn (Union[Loss, None]): Loss function. Default: None.
        optimizer (Cell): Optimizer used to train the network. Default: None.

     Examples:
@@ -105,7 +105,7 @@ class AdversarialDefenseWithAttacks(AdversarialDefense):
     Args:
         network (Cell): A MindSpore network to be defensed.
        attacks (list[Attack]): List of attack method.
-        loss_fn (Functions): Loss function. Default: None.
+        loss_fn (Union[Loss, None]): Loss function. Default: None.
        optimizer (Cell): Optimizer used to train the network. Default: None.
        bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
            clip_max). Default: (0.0, 1.0).
@@ -204,7 +204,7 @@ class EnsembleAdversarialDefense(AdversarialDefenseWithAttacks):
     Args:
         network (Cell): A MindSpore network to be defensed.
        attacks (list[Attack]): List of attack method.
-        loss_fn (Functions): Loss function. Default: None.
+        loss_fn (Union[Loss, None]): Loss function. Default: None.
        optimizer (Cell): Optimizer used to train the network. Default: None.
        bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
            clip_max). Default: (0.0, 1.0).
diff --git a/mindarmour/adv_robustness/defenses/natural_adversarial_defense.py b/mindarmour/adv_robustness/defenses/natural_adversarial_defense.py
index 7d16a3f..e8e6785 100644
--- a/mindarmour/adv_robustness/defenses/natural_adversarial_defense.py
+++ b/mindarmour/adv_robustness/defenses/natural_adversarial_defense.py
@@ -23,11 +23,11 @@ class NaturalAdversarialDefense(AdversarialDefenseWithAttacks):
     Adversarial training based on FGSM.

     Reference: `A. Kurakin, et al., "Adversarial machine learning at scale," in
-        ICLR, 2017. `_
+        ICLR, 2017. `_.

     Args:
         network (Cell): A MindSpore network to be defensed.
-        loss_fn (Functions): Loss function. Default: None.
+        loss_fn (Union[Loss, None]): Loss function. Default: None.
        optimizer (Cell): Optimizer used to train the network. Default: None.
        bounds (tuple): Upper and lower bounds of data. In form of (clip_min,
            clip_max). Default: (0.0, 1.0).
diff --git a/mindarmour/adv_robustness/defenses/projected_adversarial_defense.py b/mindarmour/adv_robustness/defenses/projected_adversarial_defense.py
index 8d928ed..e858cc3 100644
--- a/mindarmour/adv_robustness/defenses/projected_adversarial_defense.py
+++ b/mindarmour/adv_robustness/defenses/projected_adversarial_defense.py
@@ -23,11 +23,11 @@ class ProjectedAdversarialDefense(AdversarialDefenseWithAttacks):
     Adversarial training based on PGD.

     Reference: `A. Madry, et al., "Towards deep learning models resistant to
-        adversarial attacks," in ICLR, 2018. `_
+        adversarial attacks," in ICLR, 2018. `_.

     Args:
         network (Cell): A MindSpore network to be defensed.
-        loss_fn (Functions): Loss function. Default: None.
+        loss_fn (Union[Loss, None]): Loss function. Default: None.
        optimizer (Cell): Optimizer used to train the nerwork. Default: None.
        bounds (tuple): Upper and lower bounds of input data.
            In form of (clip_min, clip_max). Default: (0.0, 1.0).
diff --git a/mindarmour/privacy/evaluation/membership_inference.py b/mindarmour/privacy/evaluation/membership_inference.py
index 7792faf..05052a6 100644
--- a/mindarmour/privacy/evaluation/membership_inference.py
+++ b/mindarmour/privacy/evaluation/membership_inference.py
@@ -103,7 +103,7 @@ class MembershipInference:
     References: `Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov.
         Membership Inference Attacks against Machine Learning Models. 2017.
-        `_
+        `_.

     Args:
         model (Model): Target model.
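
A minimal usage sketch of the `loss_fn (Union[Loss, None])` convention these docstrings describe, using MindArmour's public `FastGradientMethod`; the `SimpleNet` class and the random data below are made up for illustration and are not part of the patch:

import numpy as np
from mindspore import nn
from mindarmour.adv_robustness.attacks import FastGradientMethod


class SimpleNet(nn.Cell):
    """Toy 10-input / 3-class network, only to keep the sketch self-contained."""
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.dense = nn.Dense(10, 3)

    def construct(self, x):
        return self.dense(x)


net = SimpleNet()
# An explicit loss is passed here; leaving loss_fn=None (the default) instead
# assumes the network already wraps its own loss computation.
attack = FastGradientMethod(net, eps=0.07,
                            loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=False))

inputs = np.random.rand(4, 10).astype(np.float32)                   # batch of 4 samples
labels = np.eye(3)[np.random.randint(0, 3, 4)].astype(np.float32)   # one-hot labels
adv_x = attack.generate(inputs, labels)                             # adversarial examples, same shape as inputs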