
test_defense_eval.py

# Copyright 2019 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Defense evaluation test.
"""
import numpy as np
import pytest

from mindarmour.evaluations.defense_evaluation import DefenseEvaluate


@pytest.mark.level0
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_card
@pytest.mark.component_mindarmour
def test_def_eval():
    # prepare data
    raw_preds = np.array([[0.1, 0.1, 0.2, 0.6],
                          [0.1, 0.7, 0.0, 0.2],
                          [0.8, 0.1, 0.0, 0.1]])
    def_preds = np.array([[0.1, 0.1, 0.1, 0.7],
                          [0.1, 0.6, 0.2, 0.1],
                          [0.1, 0.2, 0.1, 0.6]])
    true_labels = np.array([3, 1, 0])
    # create evaluation object
    def_eval = DefenseEvaluate(raw_preds, def_preds, true_labels)
    # run the five evaluation metrics
    cav = def_eval.cav()
    crr = def_eval.crr()
    csr = def_eval.csr()
    ccv = def_eval.ccv()
    cos = def_eval.cos()
    res = [cav, crr, csr, ccv, cos]
    # compare with expected values
    expected_value = [-0.3333, 0.0, 0.3333, 0.0999, 0.0450]
    assert np.allclose(res, expected_value, 0.0001, 0.0001)
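For intuition, the first three metrics in the test can be reproduced with plain NumPy. This is a sketch assuming the common definitions (CAV is the change in classification accuracy after the defense, CRR the fraction of samples the defense rectifies, CSR the fraction it sacrifices); these assumed formulas reproduce the test's expected values for CAV, CRR, and CSR, but the exact formulas inside `DefenseEvaluate` (especially for CCV and COS) may differ, so only the first three are shown.

```python
import numpy as np

# Same data as in the test above.
raw_preds = np.array([[0.1, 0.1, 0.2, 0.6],
                      [0.1, 0.7, 0.0, 0.2],
                      [0.8, 0.1, 0.0, 0.1]])
def_preds = np.array([[0.1, 0.1, 0.1, 0.7],
                      [0.1, 0.6, 0.2, 0.1],
                      [0.1, 0.2, 0.1, 0.6]])
true_labels = np.array([3, 1, 0])

# Which samples each model classifies correctly.
raw_correct = np.argmax(raw_preds, axis=1) == true_labels  # [True, True, True]
def_correct = np.argmax(def_preds, axis=1) == true_labels  # [True, True, False]

# CAV: change in accuracy after applying the defense.
cav = def_correct.mean() - raw_correct.mean()  # 2/3 - 3/3 = -0.3333
# CRR: fraction of samples the defense rectifies (wrong -> right).
crr = np.mean(~raw_correct & def_correct)      # 0.0
# CSR: fraction of samples the defense sacrifices (right -> wrong).
csr = np.mean(raw_correct & ~def_correct)      # 1/3 = 0.3333
```

Note that the negative CAV and nonzero CSR here are artifacts of the tiny synthetic dataset: the raw model already classifies all three samples correctly, so the defense can only hold steady or lose accuracy.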

MindArmour focuses on the security and privacy of AI. It is dedicated to making models more secure and trustworthy and to protecting users' data privacy. It consists of three main modules: the adversarial example robustness module, the fuzz testing module, and the privacy protection and evaluation module. Adversarial example robustness module: this module evaluates a model's robustness against adversarial examples and provides model-hardening methods that strengthen resistance to adversarial attacks and improve robustness. It contains four sub-modules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.