Although machine learning can obtain a generic model from training data, it has been shown that a trained
model may disclose information about its training data (for example, through membership inference attacks).
Differential privacy (DP) training is an effective method to mitigate this problem, in which Gaussian noise is
added during training. DP training mainly consists of three parts: the noise-generating
mechanism, the DP optimizer and the DP monitor. We have implemented a novel noise-generating mechanism: the adaptive decay
noise mechanism. The DP monitor is used to compute the privacy budget during training.
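The core idea can be sketched in plain Python/NumPy. The snippet below is a conceptual illustration of Gaussian-noise DP-SGD, not the MindArmour API; all function and parameter names here are illustrative.

```python
import numpy as np

def dp_sgd_step(params, grads, lr=0.05, clip_norm=1.0, noise_multiplier=1.0):
    """One conceptual DP-SGD update: bound each gradient's norm, then add Gaussian noise."""
    noisy_grads = []
    for g in grads:
        g = np.asarray(g, dtype=float)
        # Clip the gradient norm (real DP-SGD clips per-example gradients before averaging).
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip_norm / (norm + 1e-12))
        # Add Gaussian noise whose scale is tied to the clipping bound.
        g = g + np.random.normal(0.0, noise_multiplier * clip_norm, size=g.shape)
        noisy_grads.append(g)
    return [p - lr * g for p, g in zip(params, noisy_grads)]
```

A DP monitor would then accumulate the privacy budget consumed by each such noisy step over the course of training.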
Suppress privacy training is a novel privacy-protection method distinct from noise-addition methods
(such as DP), in which negligible model parameters are removed gradually to achieve a better balance between
accuracy and privacy.
With the adaptive decay mechanism, the magnitude of the Gaussian noise decays as the training step grows,
resulting in stable convergence. A rough sketch of the decay idea is given after the commands below.
cd examples/privacy/diff_privacy
python lenet5_dp_ada_gaussian.py
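As an illustration of the decay idea (the schedule below is hypothetical and not necessarily the one used in lenet5_dp_ada_gaussian.py), the noise standard deviation can simply be multiplied by a decay factor at every step:

```python
def decayed_noise_std(initial_std, step, decay_rate=0.999):
    """Exponentially decay the Gaussian noise scale as training progresses."""
    return initial_std * (decay_rate ** step)

print(decayed_noise_std(1.0, 0))     # 1.0 at the first step
print(decayed_noise_std(1.0, 1000))  # ~0.37 after 1000 steps
```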
With the adaptive norm clip mechanism, the norm clip bound of the gradients is adjusted according to their
norm values, which controls the ratio of noise to original gradients. A sketch of this idea follows the commands below.
cd examples/privacy/diff_privacy
python lenet5_dp.py
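The following is a simplified heuristic for adaptive clipping, not the exact rule in lenet5_dp.py: the clip bound tracks a quantile of recently observed gradient norms so that the noise-to-gradient ratio stays roughly constant.

```python
import numpy as np

def update_clip_bound(clip_bound, grad_norms, target_quantile=0.5, step_size=0.1):
    """Move the clip bound toward a chosen quantile of the observed gradient norms."""
    observed = float(np.quantile(grad_norms, target_quantile))
    return clip_bound + step_size * (observed - clip_bound)

# Usage: update the bound after each batch from that batch's per-example gradient norms.
clip = 1.0
clip = update_clip_bound(clip, grad_norms=[0.4, 0.7, 1.2, 0.9])
```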
With this evaluation method, we can judge whether a sample belongs to the training dataset or not; a conceptual sketch is given after the commands below.
cd examples/privacy/membership_inference_attack
python train.py --data_path home_path_to_cifar100 --ckpt_path ./
python example_vgg_cifar.py --data_path home_path_to_cifar100 --pre_trained 0-100_781.ckpt
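Conceptually, a membership inference attack trains a binary classifier on features derived from the target model's outputs (e.g. per-sample confidence or loss) to separate training members from non-members. The sketch below uses scikit-learn and synthetic confidence values; it is a toy illustration, not the membership_inference API used by the scripts above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for the target model's confidence on known members / non-members.
member_conf = rng.beta(8, 2, size=500)      # training samples tend to get higher confidence
nonmember_conf = rng.beta(4, 4, size=500)   # unseen samples are usually less confident

X = np.concatenate([member_conf, nonmember_conf]).reshape(-1, 1)
y = np.concatenate([np.ones(500), np.zeros(500)])

attack_model = LogisticRegression().fit(X, y)
# For a new sample, estimate the probability that it was part of the training set.
print(attack_model.predict_proba([[0.95]])[0, 1])
```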
With the suppress privacy mechanism, the values of some trainable parameters (such as those in convolutional and
fully connected layers) are set to zero as the training step grows, which achieves a better balance between
accuracy and privacy. A sketch of the suppression step follows the commands below.
cd examples/privacy/sup_privacy
python sup_privacy.py
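The suppression step can be sketched as gradually zeroing out the smallest-magnitude weights as training proceeds; the threshold rule below is illustrative and not the exact schedule in sup_privacy.py.

```python
import numpy as np

def suppress_parameters(weights, suppress_ratio):
    """Zero out the fraction of weights with the smallest absolute values."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * suppress_ratio)
    if k == 0:
        return weights
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

# Example: suppress an increasing fraction of a layer's weights as training progresses.
w = np.random.randn(4, 4)
w = suppress_parameters(w, suppress_ratio=0.25)
```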
Inversion attack means reconstructing an image from its deep representations, for example,
reconstructing an MNIST image from its output of LeNet5. The mechanism behind it is that a well-trained
model can "remember" its training data. Therefore, inversion attacks can be used to estimate the privacy
leakage of a training task.
cd examples/privacy/inversion_attack
python mnist_inversion_attack.py
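A common way to implement such an attack is to treat the input as a trainable variable and minimize the distance between its deep representation and the observed target representation by gradient descent. The toy sketch below replaces the trained LeNet5 with a random linear map so the gradient is analytic; it only illustrates the optimization loop, not the actual mnist_inversion_attack.py implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained network's feature extractor: a fixed random linear map on 8x8 inputs.
W = rng.normal(size=(128, 64)) / np.sqrt(128)
def representation(x):
    return W @ x

# Deep representation of the (unknown) original image that the attacker observed.
original = rng.random(64)
target = representation(original)

# Inversion attack: start from noise and minimize ||representation(x) - target||^2.
x = rng.random(64)
lr = 0.1
for _ in range(500):
    residual = representation(x) - target
    x -= lr * (2.0 * W.T @ residual)   # analytic gradient of the squared error

print(np.linalg.norm(x - original))    # close to 0: x approximately reconstructs the original
```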
MindArmour focuses on the security and privacy of AI, and is committed to enhancing the security and trustworthiness of models and protecting users' data privacy. It mainly contains three modules: the adversarial example robustness module, the fuzz testing module, and the privacy protection and evaluation module. The adversarial example robustness module is used to evaluate a model's robustness against adversarial examples and provides model-enhancement methods to strengthen the model's resistance to adversarial attacks. It contains four submodules: adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation.