Compare commits

...

18 Commits
master ... r1.7

Author SHA1 Message Date
i-robot    e8fa8ddc70  !366 modify file  3 years ago
zhangyi    c00a1fbd65  modify the file  3 years ago
i-robot    1fbb8f0683  !363 modify file  3 years ago
zhangyi    3c7667930a  modify the file  3 years ago
i-robot    31685c9f40  !361 modify the English file  3 years ago
i-robot    d22f501cc1  !360 modify normalize_value  3 years ago
zhangyi    f9361d0c7d  modify the file  3 years ago
ZhidanLiu  831c48d850  modify normalize_value  3 years ago
i-robot    90ea44088f  !358 modify the English file  3 years ago
zhangyi    ba34358f7c  modify the file  3 years ago
i-robot    56b600ef4f  !354 add Chinese version release  3 years ago
ZhidanLiu  7a5ffc6d71  add Chinese version release  3 years ago
i-robot    70ae728701  !353 fix bug of norm level check for r1.7  3 years ago
ZhidanLiu  71143146d2  fix bug of norm level check for r1.7  3 years ago
i-robot    bdc5ea54a4  !352 modify urls  3 years ago
huodagu    37286bb798  modify url  3 years ago
i-robot    a149bb6f46  !348 modify urls  3 years ago
huodagu    f49ce3a979  modify url  3 years ago
6 changed files with 54 additions and 17 deletions
1. .jenkins/check/config/filter_linklint.txt  +4 -0
2. RELEASE.md  +7 -3
3. RELEASE_CN.md  +27 -0
4. examples/natural_robustness/ocr_evaluate/cnn_ctc/README.md  +3 -3
5. examples/natural_robustness/ocr_evaluate/cnn_ctc/README_CN.md  +4 -4
6. mindarmour/utils/_check_param.py  +9 -7

.jenkins/check/config/filter_linklint.txt (+4 -0)

@@ -0,0 +1,4 @@
+# akg-third-party
+# file directory: mindspore/akg/third_party/incubator-tvm/
+
+https://www.mindspore.cn/*/r1.7/*
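For context: entries in this filter file exempt matching URLs from link checking, and the added pattern covers every MindSpore docs URL under an r1.7 path. Below is a minimal sketch of how such shell-style wildcard entries can be matched against URLs; it illustrates the pattern semantics only and is an assumption about the checker, not code from the link-lint tool or this repo.

```python
from fnmatch import fnmatch

# Hypothetical illustration: treat each filter-file entry as a
# shell-style glob and skip any URL that matches one of them.
FILTERS = ["https://www.mindspore.cn/*/r1.7/*"]

def is_filtered(url: str) -> bool:
    """Return True if the URL matches any filter entry."""
    return any(fnmatch(url, pattern) for pattern in FILTERS)

print(is_filtered("https://www.mindspore.cn/tutorials/experts/en/r1.7/infer/inference.html"))  # True
print(is_filtered("https://www.mindspore.cn/docs/en/master/index.html"))                       # False
```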

RELEASE.md (+7 -3)

@@ -1,4 +1,6 @@
-# MindArmour 1.7.0
+# MindArmour Release Notes
+
+[View Chinese](./RELEASE_CN.md)
 
 ## MindArmour 1.7.0 Release Notes
 
@@ -10,11 +12,11 @@
 ### API Change
 
-* `mindarmour.fuzz_testing.Fuzzer.fuzzing` interface's parameter `mutate_config` change supported range. ([!333](https://gitee.com/mindspore/mindarmour/pulls/333))
+* Change value of parameter `mutate_config` in `mindarmour.fuzz_testing.Fuzzer.fuzzing` interface. ([!333](https://gitee.com/mindspore/mindarmour/pulls/333))
 
 ### Bug fixes
 
-* Update version of third-party dependence pillow from more than 6.2.0 to more than 7.2.0. ([!329](https://gitee.com/mindspore/mindarmour/pulls/329))
+* Update version of third-party dependence pillow from more than or equal to 6.2.0 to more than or equal to 7.2.0. ([!329](https://gitee.com/mindspore/mindarmour/pulls/329))
 
 ### Contributors
 
 Thanks goes to these wonderful people:
@@ -22,6 +24,8 @@ Thanks goes to these wonderful people:
 
 Liu Zhidan, Zhang Shukun, Jin Xiulang, Liu Liu.
 
+Contributions of any kind are welcome!
+
 # MindArmour 1.6.0
 
 ## MindArmour 1.6.0 Release Notes

RELEASE_CN.md (+27 -0)

@@ -0,0 +1,27 @@
+# MindArmour Release Notes
+
+[View English](./RELEASE.md)
+
+## MindArmour 1.7.0 Release Notes
+
+### Major Features and Improvements
+
+#### Robustness
+
+* [STABLE] Natural perturbation evaluation methods
+
+### API Change
+
+* The value range of parameter `mutate_config` in the `mindarmour.fuzz_testing.Fuzzer.fuzzing` interface has changed. ([!333](https://gitee.com/mindspore/mindarmour/pulls/333))
+
+### Bug Fixes
+
+* Update the version requirement of the third-party dependency pillow from >=6.2.0 to >=7.2.0. ([!329](https://gitee.com/mindspore/mindarmour/pulls/329))
+
+### Contributors
+
+Thanks goes to these wonderful people:
+
+Liu Zhidan, Zhang Shukun, Jin Xiulang, Liu Liu.
+
+Contributions of any kind are welcome!

examples/natural_robustness/ocr_evaluate/cnn_ctc/README.md (+3 -3)

@@ -94,7 +94,7 @@ This takes around 75 minutes.
 
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/en/master/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/en/r1.7/others/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
 For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users could check the reduced-precision operators by enabling INFO log and then searching ‘reduce precision’.
 
 # [Environment Requirements](#contents)
@@ -108,7 +108,7 @@ For FP16 operators, if the input data type is FP32, the backend of MindSpore wil
 - For more information, please check the resources below:
     - [MindSpore tutorials](https://www.mindspore.cn/tutorials/en/master/index.html)
 
-    - [MindSpore Python API](https://www.mindspore.cn/docs/api/en/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/en/r1.7/index.html)
 
 # [Quick Start](#contents)
@@ -517,7 +517,7 @@ accuracy: 0.8533
 
 ### Inference
 
-If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html). Following the steps below, this is a simple example:
+If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorials/experts/en/r1.7/infer/inference.html). Following the steps below, this is a simple example:
 
 - Running on Ascend
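The README hunk above only updates the mixed-precision doc link; for readers who want the corresponding code, here is a minimal sketch of enabling automatic mixed precision in MindSpore 1.x through `Model`'s `amp_level`. The tiny network, loss, and optimizer are placeholders for illustration, not the CNN-CTC model from this example.

```python
# Minimal sketch, assuming MindSpore 1.x: automatic mixed precision is
# requested via Model(..., amp_level=...). Network/loss/optimizer below
# are placeholders, not the CNN-CTC training setup.
import mindspore.nn as nn
from mindspore import Model

net = nn.Dense(16, 10)  # stand-in for the real network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" casts the network to FP16 while keeping batch norm in
# FP32; the 'reduce precision' INFO log entries mentioned in the README
# report where FP32 inputs were handled at reduced precision.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2")
```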

examples/natural_robustness/ocr_evaluate/cnn_ctc/README_CN.md (+4 -4)

@@ -95,7 +95,7 @@ python src/preprocess_dataset.py
 
 ## Mixed Precision
 
-The [mixed precision](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the accuracy that single-precision training can reach. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
+The [mixed precision](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/others/mixed_precision.html) training method uses both single-precision and half-precision data to speed up the training of deep neural networks while preserving the accuracy that single-precision training can reach. Mixed precision training speeds up computation and reduces memory usage, while enabling larger models or larger batch sizes to be trained on specific hardware.
 Taking FP16 operators as an example, if the input data type is FP32, the MindSpore backend automatically reduces precision to process the data. Users can enable the INFO log and search for "reduce precision" to view the operators whose precision was reduced.
 
 # Environment Requirements
@@ -111,7 +111,7 @@ python src/preprocess_dataset.py
 - For details, see the following resources:
     - [MindSpore tutorials](https://www.mindspore.cn/tutorials/zh-CN/master/index.html)
 
-    - [MindSpore Python API](https://www.mindspore.cn/docs/api/zh-CN/master/index.html)
+    - [MindSpore Python API](https://www.mindspore.cn/docs/zh-CN/r1.7/index.html)
 
 # Quick Start
@@ -250,7 +250,7 @@ bash scripts/run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_CKPT(optional)]
 
 > Note:
 
-For reference material about RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/distributed_training_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
+For reference material about RANK_TABLE_FILE, see this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/parallel/train_ascend.html); for how to obtain device_ip, see this [link](https://gitee.com/mindspore/models/tree/master/utils/hccl_tools).
 
 ### Training Results
@@ -449,7 +449,7 @@ bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DVPP] [DEVICE_ID]
 
 ### Inference
 
-If you need to use the trained model for inference on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to this [link](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html). A simple example follows:
+If you need to use the trained model for inference on multiple hardware platforms such as GPU, Ascend 910, or Ascend 310, refer to this [link](https://www.mindspore.cn/tutorials/experts/zh-CN/r1.7/infer/inference.html). A simple example follows:
 
 - Running on Ascend

mindarmour/utils/_check_param.py (+9 -7)

@@ -204,7 +204,7 @@ def check_norm_level(norm_level):
     """Check norm_level of regularization."""
     if not isinstance(norm_level, (int, str)):
         msg = 'Type of norm_level must be in [int, str], but got {}'.format(type(norm_level))
-    accept_norm = [1, 2, '1', '2', 'l1', 'l2', 'inf', 'linf', np.inf]
+    accept_norm = [1, 2, '1', '2', 'l1', 'l2', 'inf', 'linf', 'np.inf', np.inf]
     if norm_level not in accept_norm:
         msg = 'norm_level must be in {}, but got {}'.format(accept_norm, norm_level)
         LOGGER.error(TAG, msg)
@@ -218,14 +218,16 @@ def normalize_value(value, norm_level):
 
     Args:
         value (numpy.ndarray): Inputs.
-        norm_level (Union[int, str]): Normalized level. Option: [1, 2, np.inf, '1', '2', 'inf', 'l1', 'l2']
+        norm_level (Union[int, str]): Normalized level. Option: [1, 2, '1', '2', 'l1', 'l2', 'inf', 'linf',
+            'np.inf', np.inf].
 
     Returns:
         numpy.ndarray, normalized value.
 
     Raises:
-        NotImplementedError: If norm_level is not in [1, 2 , np.inf, '1', '2',
-            'inf', 'l1', 'l2']
+        NotImplementedError: If norm_level is not in [1, 2, '1', '2', 'l1', 'l2', 'inf', 'linf', 'np.inf', np.inf].
     """
     norm_level = check_norm_level(norm_level)
     ori_shape = value.shape
@@ -237,12 +239,12 @@ def normalize_value(value, norm_level):
     elif norm_level in (2, '2', 'l2'):
         norm = np.linalg.norm(value_reshape, ord=2, axis=1, keepdims=True) + avoid_zero_div
         norm_value = value_reshape / norm
-    elif norm_level in (np.inf, 'inf'):
+    elif norm_level in (np.inf, 'inf', 'np.inf', 'linf'):
        norm = np.max(abs(value_reshape), axis=1, keepdims=True) + avoid_zero_div
         norm_value = value_reshape / norm
     else:
-        msg = 'Values of `norm_level` different from 1, 2 and `np.inf` are currently not supported, but got {}.' \
-            .format(norm_level)
+        accept_norm = [1, 2, '1', '2', 'l1', 'l2', 'inf', 'linf', 'np.inf', np.inf]
+        msg = 'Values of `norm_level` must be in {}, but got {}'.format(accept_norm, norm_level)
         LOGGER.error(TAG, msg)
         raise NotImplementedError(msg)
     return norm_value.reshape(ori_shape)
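To make the behavior change concrete, here is a short usage sketch of the patched `normalize_value`, including the newly accepted 'linf' alias. The input values are illustrative only, not taken from the diff; outputs are approximate because the implementation adds a small `avoid_zero_div` term to each norm.

```python
import numpy as np
from mindarmour.utils._check_param import normalize_value

grad = np.array([[3.0, -4.0],
                 [0.0,  2.0]])

# 'l2': each row is divided by its Euclidean norm (5.0 and 2.0 here).
print(normalize_value(grad, 'l2'))    # ~ [[ 0.6  -0.8 ] [ 0.    1.  ]]

# 'linf' (accepted after this change, alongside 'np.inf'): each row is
# divided by its maximum absolute value (4.0 and 2.0 here).
print(normalize_value(grad, 'linf'))  # ~ [[ 0.75 -1.  ] [ 0.    1.  ]]
```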

