
test_black_defense_eval.py 2.8 kB

# Copyright 2019 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Black-box defense evaluation test.
"""
import numpy as np
import pytest

from mindarmour.evaluations.black.defense_evaluation import BlackDefenseEvaluate


@pytest.mark.level0
@pytest.mark.platform_arm_ascend_training
@pytest.mark.platform_x86_ascend_training
@pytest.mark.env_card
@pytest.mark.component_mindarmour
def test_def_eval():
    """
    Tests for black-box defense evaluation.
    """
    # Prepare data: predictions of the raw (undefended) and defended models,
    # per-sample query counts/times, detection counts, and ground-truth labels.
    raw_preds = np.array([[0.1, 0.1, 0.2, 0.6], [0.1, 0.7, 0.0, 0.2],
                          [0.8, 0.1, 0.0, 0.1], [0.1, 0.1, 0.2, 0.6],
                          [0.1, 0.7, 0.0, 0.2], [0.8, 0.1, 0.0, 0.1],
                          [0.1, 0.1, 0.2, 0.6], [0.1, 0.7, 0.0, 0.2],
                          [0.8, 0.1, 0.0, 0.1], [0.1, 0.1, 0.2, 0.6]])
    def_preds = np.array([[0.1, 0.1, 0.2, 0.6], [0.1, 0.7, 0.0, 0.2],
                          [0.8, 0.1, 0.0, 0.1], [0.1, 0.1, 0.2, 0.6],
                          [0.1, 0.7, 0.0, 0.2], [0.8, 0.1, 0.0, 0.1],
                          [0.1, 0.1, 0.2, 0.6], [0.1, 0.7, 0.0, 0.2],
                          [0.8, 0.1, 0.0, 0.1], [0.1, 0.1, 0.2, 0.6]])
    raw_query_counts = np.array([0, 0, 0, 0, 0, 10, 10, 20, 20, 30])
    def_query_counts = np.array([0, 0, 0, 0, 0, 30, 30, 40, 40, 50])
    raw_query_time = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 2, 2, 4, 4, 6])
    def_query_time = np.array([0.3, 0.3, 0.3, 0.3, 0.3, 4, 4, 8, 8, 12])
    def_detection_counts = np.array([1, 0, 0, 0, 1, 5, 5, 5, 10, 20])
    true_labels = np.array([3, 1, 0, 3, 1, 0, 3, 1, 0, 3])

    # Create the evaluation object.
    def_eval = BlackDefenseEvaluate(raw_preds,
                                    def_preds,
                                    raw_query_counts,
                                    def_query_counts,
                                    raw_query_time,
                                    def_query_time,
                                    def_detection_counts,
                                    true_labels,
                                    max_queries=100)

    # Run the evaluation metrics.
    qcv = def_eval.qcv()
    asv = def_eval.asv()
    fpr = def_eval.fpr()
    qrv = def_eval.qrv()
    res = [qcv, asv, fpr, qrv]

    # Compare against the expected values.
    expected_value = [0.2, 0.0, 0.4, 2.0]
    assert np.allclose(res, expected_value, 0.0001, 0.0001)
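To make the four metrics above concrete, here is a minimal sketch of how values like `[0.2, 0.0, 0.4, 2.0]` can arise from the test's arrays. The formulas below are assumptions inferred from the expected values (treating samples with nonzero raw query counts as "attacked" and the rest as benign), not MindArmour's actual `BlackDefenseEvaluate` implementation:

```python
import numpy as np

def black_defense_metrics(raw_preds, def_preds, raw_q, def_q,
                          raw_t, def_t, det_counts, labels, max_queries):
    """Hypothetical re-derivation of qcv/asv/fpr/qrv (assumed definitions)."""
    attacked = raw_q > 0   # samples the attacker actually queried
    benign = ~attacked     # samples never queried by the attacker

    # QCV: average extra queries the defense forces on attacked samples,
    # relative to the total query budget.
    qcv = np.mean((def_q[attacked] - raw_q[attacked]) / max_queries)

    # ASV: drop in attack success rate (misclassification of attacked
    # samples) after the defense is applied.
    raw_success = np.mean(np.argmax(raw_preds[attacked], axis=1)
                          != labels[attacked])
    def_success = np.mean(np.argmax(def_preds[attacked], axis=1)
                          != labels[attacked])
    asv = raw_success - def_success

    # FPR: fraction of benign samples the detector flags at least once.
    fpr = np.mean(det_counts[benign] > 0)

    # QRV: average slowdown factor of a defended query.
    qrv = np.mean(def_t[attacked] / raw_t[attacked])

    return [float(qcv), float(asv), float(fpr), float(qrv)]
```

With the test's data, the attacked samples each need 20 extra queries out of 100 (qcv 0.2), both models classify every attacked sample correctly (asv 0.0), 2 of the 5 benign samples are falsely flagged (fpr 0.4), and every defended query takes twice as long (qrv 2.0).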

MindArmour focuses on the security and privacy of AI, and is dedicated to making models more secure and trustworthy while protecting user data privacy. It consists of three main modules: the adversarial example robustness module, the Fuzz Testing module, and the privacy protection and evaluation module. The adversarial example robustness module evaluates a model's robustness against adversarial examples and provides model enhancement methods that strengthen resistance to adversarial attacks and improve robustness. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.