
# Physical Adversarial Attack on Face Recognition
## Description
This project implements a physical adversarial attack on a face recognition model using the MindSpore framework: it optimizes an adversarial face mask so that a face wearing the mask triggers either a targeted or a non-targeted misclassification.
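Conceptually, the attack iteratively updates the mask pixels by gradient ascent on the target label's confidence (targeted) or gradient descent on the true label's confidence (non-targeted). The sketch below illustrates only that loop in plain Python; `toy_score`, `grad`, and `attack` are hypothetical stand-ins, not this project's actual API, and a real attack would query the face recognition network instead.

```python
def toy_score(mask, weights):
    """Stand-in for the model's confidence on a label; a real attack
    would run the FaceRecognition network here."""
    return sum(m * w for m, w in zip(mask, weights))

def grad(mask, weights):
    """Gradient of toy_score w.r.t. the mask pixels (just the weights,
    since toy_score is linear in the mask)."""
    return list(weights)

def attack(mask, weights, targeted, steps=100, lr=0.01):
    # Targeted attack: maximize confidence on the target label.
    # Non-targeted attack: minimize confidence on the true label.
    sign = 1.0 if targeted else -1.0
    for _ in range(steps):
        g = grad(mask, weights)
        # Gradient step, clipping pixels to the valid [0, 1] range.
        mask = [max(0.0, min(1.0, m + sign * lr * gi))
                for m, gi in zip(mask, g)]
    return mask
```

The clipping step matters for a *physical* attack: the optimized mask must remain a printable image, so pixel values are kept in a valid range at every iteration.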
## Model Structure
Uses the FaceRecognition model officially trained by Huawei MindSpore:
https://www.mindspore.cn/resources/hub/details?MindSpore/1.7/facerecognition_ms1mv2
## Requirements
mindspore >= 1.7; hardware platform: GPU.
## Script Description
```text
├── readme.md
├── photos
│   ├── adv_input                // adversarial images
│   ├── input                    // input images
│   └── target                   // target images
├── outputs                      // images produced by training
├── adversarial_attack.py        // training script
├── example_non_target_attack.py // non-targeted attack training
├── example_target_attack.py     // targeted attack training
├── loss_design.py               // training and optimizer settings
└── test.py                      // evaluates attack effectiveness
```
## Invoking the Model
Method 1:
```python
# Load the FaceRecognition model via the mindspore_hub library
import mindspore_hub as mshub
from mindspore import context

def get_model():
    context.set_context(mode=context.GRAPH_MODE, device_target="GPU", device_id=0)
    model = "mindspore/1.7/facerecognition_ms1mv2"
    network = mshub.load(model)
    network.set_train(False)  # inference only
    return network
```
Method 2:
```text
Load the model with the get_model function from eval.py in the MindSpore
model zoo: https://gitee.com/mindspore/models/blob/master/research/cv/FaceRecognition/eval.py
```
## Training
Targeted attack:
```shell
cd face_adversarial_attack/
python example_target_attack.py
```
Non-targeted attack:
```shell
cd face_adversarial_attack/
python example_non_target_attack.py
```
## Default Training Parameters
optimizer=Adam, learning_rate=0.01, weight_decay=0.0001, epochs=2000
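To make these defaults concrete, the sketch below shows one Adam update step with L2 weight decay in plain Python, using the listed values (lr=0.01, weight_decay=0.0001). This is an illustration of what the parameters mean, not the project's code; the project itself would use MindSpore's built-in Adam optimizer.

```python
import math

def adam_step(params, grads, state, lr=0.01, weight_decay=0.0001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update over a flat list of parameters.

    `state` holds the step counter "t" and the running moment
    estimates "m" (first) and "v" (second), one entry per parameter.
    """
    state["t"] += 1
    t = state["t"]
    updated = []
    for i, (p, g) in enumerate(zip(params, grads)):
        g = g + weight_decay * p  # L2 weight decay folded into the gradient
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g
        m_hat = state["m"][i] / (1 - beta1 ** t)  # bias correction
        v_hat = state["v"][i] / (1 - beta2 ** t)
        updated.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return updated
```

On the first step the bias-corrected update is close to `lr * sign(gradient)`, which is why a relatively large rate like 0.01 still yields stable pixel-level updates over 2000 epochs.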
## Evaluation
Method 1:
```python
adversarial_attack.FaceAdversarialAttack.test_non_target_attack()
adversarial_attack.FaceAdversarialAttack.test_target_attack()
```
Method 2:
```shell
cd face_adversarial_attack/
python test.py
```
## Results
Targeted attack:
```text
input_label: 60
target_label: 345
The confidence of the input image on the input label: 26.67
The confidence of the input image on the target label: 0.95
================================
adversarial_label: 345
The confidence of the adversarial sample on the correct label: 1.82
The confidence of the adversarial sample on the target label: 10.96
input_label: 60, target_label: 345, adversarial_label: 345
```
The `photos` directory contains the results of this targeted attack.
Non-targeted attack:
```text
input_label: 60
The confidence of the input image on the input label: 25.16
================================
adversarial_label: 251
The confidence of the adversarial sample on the correct label: 9.52
The confidence of the adversarial sample on the adversarial label: 60.96
input_label: 60, adversarial_label: 251
```
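Success in logs like the above can be judged from the labels alone: a targeted attack succeeds when the adversarial sample is classified as the target label, while a non-targeted attack succeeds when it is classified as anything other than the true label. The helper below is illustrative only and is not part of this repository.

```python
def attack_succeeded(input_label, adversarial_label, target_label=None):
    """Return True if the attack met its goal.

    Targeted (target_label given): adversarial sample must be classified
    as the target label. Non-targeted: it must be classified as anything
    other than the true input label.
    """
    if target_label is not None:
        return adversarial_label == target_label
    return adversarial_label != input_label
```

Applied to the runs above: the targeted run (60 → 345 with target 345) and the non-targeted run (60 → 251) both count as successful.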
