This project implements a physical adversarial attack on a face recognition model, based on the MindSpore framework. It generates an adversarial mask that, when worn on a face, carries out both targeted and non-targeted attacks.

It uses the FaceRecognition model officially trained with Huawei MindSpore:
https://www.mindspore.cn/resources/hub/details?MindSpore/1.7/facerecognition_ms1mv2

Requirements: mindspore>=1.7; the hardware platform is GPU.
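A typical environment setup might look like the following; the exact `mindspore-gpu` wheel depends on your CUDA version, so treat these package names as assumptions and consult the official MindSpore installation guide:

```bash
# Assumed setup; pick the mindspore-gpu build that matches your CUDA version.
pip install "mindspore-gpu>=1.7.0" mindspore_hub
```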
```
├── README.md
├── photos
│   ├── adv_input                  // adversarial images
│   ├── input                      // input images
│   └── target                     // target images
├── outputs                        // images produced by training
├── adversarial_attack.py          // training script
├── example_non_target_attack.py   // non-targeted attack training
├── example_target_attack.py       // targeted attack training
├── loss_design.py                 // training and optimization settings
└── test.py                        // evaluates the attack effect
```
Method 1:

```python
# Load the FaceRecognition model through the mindspore_hub library.
import mindspore_hub as mshub
from mindspore import context

def get_model():
    context.set_context(mode=context.GRAPH_MODE, device_target="GPU", device_id=0)
    model = "mindspore/1.7/facerecognition_ms1mv2"
    network = mshub.load(model)
    network.set_train(False)
    return network
```
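The network returned above can be built once and reused by the attack and evaluation scripts. A hedged inference sketch (the 1×3×112×112 input shape is an assumption based on common face-recognition pipelines):

```python
import numpy as np
import mindspore as ms

network = get_model()
# Placeholder face tensor; real inputs come from the images under photos/.
face = ms.Tensor(np.random.rand(1, 3, 112, 112), ms.float32)  # shape is an assumption
scores = network(face)  # identity scores that the attack loss operates on
```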
Method 2:

Load the model with the `get_model` function from <https://gitee.com/mindspore/models/blob/master/research/cv/FaceRecognition/eval.py> in the MindSpore models repository.
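If the checkpoint is already downloaded locally, a minimal alternative sketch uses the standard MindSpore serialization APIs (the checkpoint filename is an assumption; the network object itself still comes from the model zoo code):

```python
from mindspore import load_checkpoint, load_param_into_net

def load_from_ckpt(network, ckpt_path="facerecognition_ms1mv2.ckpt"):  # path is an assumption
    params = load_checkpoint(ckpt_path)   # read the parameter dict from disk
    load_param_into_net(network, params)  # copy the parameters into the network
    network.set_train(False)              # inference mode for the attack
    return network
```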
Targeted attack:

```bash
cd face_adversarial_attack/
python example_target_attack.py
```

Non-targeted attack:

```bash
cd face_adversarial_attack/
python example_non_target_attack.py
```
Training settings: optimizer=Adam, learning_rate=0.01, weight_decay=0.0001, epochs=2000.
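A minimal sketch of how these settings map onto a MindSpore optimizer; the mask shape and the gradient wiring are assumptions, since the real versions live in adversarial_attack.py and loss_design.py:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

# The (3, 112, 112) mask shape is an assumption; the real mask region comes
# from the face-alignment logic in adversarial_attack.py.
mask = ms.Parameter(ms.Tensor(np.zeros((3, 112, 112)), ms.float32), name="adv_mask")
optimizer = nn.Adam([mask], learning_rate=0.01, weight_decay=0.0001)
grad_fn = ops.GradOperation(get_by_list=True)

# Each of the 2000 epochs differentiates the attack loss w.r.t. the mask and
# lets Adam update it, roughly:
#   grads = grad_fn(loss_net, ms.ParameterTuple([mask]))(face_batch)
#   optimizer(grads)
```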
Evaluation method 1:

```python
adversarial_attack.FaceAdversarialAttack.test_non_target_attack()
adversarial_attack.FaceAdversarialAttack.test_target_attack()
```
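A hedged usage sketch; the constructor arguments below are assumptions, so check adversarial_attack.py for the actual FaceAdversarialAttack signature:

```python
import numpy as np
import mindspore as ms
from adversarial_attack import FaceAdversarialAttack

# Placeholder tensors; in practice these come from photos/input and photos/target.
input_img = ms.Tensor(np.zeros((1, 3, 112, 112)), ms.float32)
target_img = ms.Tensor(np.zeros((1, 3, 112, 112)), ms.float32)

# Hypothetical constructor arguments -- verify against adversarial_attack.py.
attack = FaceAdversarialAttack(input_img, target_img, get_model())
attack.test_target_attack()      # reports confidence on the target label
attack.test_non_target_attack()  # reports confidence on the original label
```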
Evaluation method 2:

```bash
cd face_adversarial_attack/
python test.py
```
Targeted attack:

```
input_label: 60
target_label: 345
The confidence of the input image on the input label: 26.67
The confidence of the input image on the target label: 0.95
================================
adversarial_label: 345
The confidence of the adversarial sample on the correct label: 1.82
The confidence of the adversarial sample on the target label: 10.96
input_label: 60, target_label: 345, adversarial_label: 345
```

The adversarial label matches the target label, so the targeted attack succeeded. The images under `photos/` are the experimental results of the targeted attack.
Non-targeted attack:

```
input_label: 60
The confidence of the input image on the input label: 25.16
================================
adversarial_label: 251
The confidence of the adversarial sample on the correct label: 9.52
The confidence of the adversarial sample on the adversarial label: 60.96
input_label: 60, adversarial_label: 251
```

The adversarial label (251) no longer matches the input label (60), so the non-targeted attack succeeded.
MindArmour focuses on the security and privacy of AI, and is dedicated to enhancing the trustworthiness of models and protecting user data privacy. It consists of three main modules: the adversarial-example robustness module, the fuzz-testing module, and the privacy protection and evaluation module. The adversarial-example robustness module evaluates a model's robustness against adversarial examples and provides model-enhancement methods that strengthen the model's resistance to adversarial attacks. It contains four submodules: adversarial-example generation, adversarial-example detection, model defense, and attack/defense evaluation.