# MindSpore API Comment Specifications

<!-- TOC -->

- [MindSpore API Comment Specifications](#mindspore-api-comment-specifications)
    - [Overview](#overview)
    - [Python API Comment Specifications](#python-api-comment-specifications)
        - [Comment Format](#comment-format)
        - [Precautions](#precautions)
        - [Python Examples](#python-examples)
            - [Classes](#classes)
            - [Methods](#methods)
            - [Formulas](#formulas)
            - [Links](#links)
    - [C++ API Comment Specifications](#c-api-comment-specifications)
        - [Comment Format](#comment-format-1)
        - [Precautions](#precautions-1)
        - [Complete Example](#complete-example)

<!-- /TOC -->

## Overview

- MindSpore Python code comments follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html), and the API documentation is generated automatically by Sphinx. For comment samples and supported syntax, see [Example Google Style Python Docstrings](https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html) and [Support for NumPy and Google style docstrings](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html).
- For MindSpore C++ code, separate Markdown files need to be written according to the namespace design.

## Python API Comment Specifications

### Comment Format

Comments for classes and methods both use the following format:

```text
Summary.

More elaborate description.

.. warning::
    Warning description.

Note:
    Note description.

Args:
    Arg1 (Type): Description. Default: xxx.
    Arg2 (Type): Description.

        - Sub-argument1 or Value1 of Arg2: Description.
        - Sub-argument2 or Value2 of Arg2: Description.

Returns:
    Type, description.

Raises:
    Type: Description.

Examples:
    >>> Sample Code
```

Where:

- `Summary`: briefly describes what the API does. If the description starts with a verb, use either the first person (the base verb form) or the third person (verb plus "s") consistently within a module; the first person is recommended.
- `More elaborate description`: describes the API's function and how to use it in detail.
- `warning`: describes things the user must be warned about when using the API, to avoid negative consequences.
- `Note`: describes things to pay attention to when using the API. Be careful not to write it as `Notes`.
- `Args`: parameter information, including parameter names, types, value ranges, and default values.
- `Returns`: return value information, including the return type.
- `Raises`: exception information, including the exception type and meaning.
- `Examples`: sample code.

For comments on operators and Cells, three additional sections, `Inputs`, `Outputs`, and `Supported Platforms`, need to be added before `Examples`:

- `Inputs` and `Outputs`: describe the types and shapes of the operator's inputs and outputs after instantiation. The input names can be the same as those used in the examples. It is recommended to give the corresponding mathematical formulas in the comment.
- `Supported Platforms`: describes the hardware platforms supported by the operator. Wrap each platform name in double backticks, and separate multiple platforms with spaces.

```text
Inputs:
    - **input_name1** (Type) - Description.
    - **input_name2** (Type) - Description.

Outputs:
    Type and shape, description.

Supported Platforms:
    ``Ascend`` ``GPU`` ``CPU``
```
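
For orientation, the following is a minimal, hypothetical operator docstring that puts the sections above together in one place. `ExampleOp`, its argument, and the shapes are placeholders rather than a real MindSpore API:

```python
class ExampleOp(Primitive):
    r"""
    Compute an element-wise example transformation of the input tensor.

    This skeleton only illustrates how the sections are ordered and indented;
    `ExampleOp` is not a real MindSpore operator.

    Note:
        The output has the same shape as the input.

    Args:
        scale (float, optional): Scaling factor applied to the input. Default: 1.0.

    Inputs:
        - **x** (Tensor) - The input tensor of shape :math:`(N, C)`.

    Outputs:
        Tensor, has the same shape and dtype as `x`.

    Raises:
        TypeError: If `scale` is not a float.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> op = ExampleOp(scale=2.0)
    """
```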

### Precautions

- Overall requirements
    - The comment sections that must be written for a class or method are `Summary`, `Args`, `Returns`, and `Raises`. If the corresponding information does not exist (for example there are no `Args`, `Returns`, or `Raises`), do not write "None" (such as `Raises: None`); simply omit that section.
    - When APIs are generated at directory granularity, the comment at the top of the `__init__` file is shown at the top of the generated page; when APIs are generated at file granularity, the comment at the top of that file is shown at the top of the page. These comments need to contain an overall description of the corresponding module.
    - If a comment contains a backslash, change the leading `"""` to `r"""`, as in the sketch below.
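
      A minimal hypothetical illustration (the class name and formula are placeholders):

      ```python
      class ExampleLoss(nn.Cell):
          r"""
          Compute the loss :math:`\ell = \frac{1}{2}(y - \hat{y})^2`.

          The raw-string prefix ``r`` keeps the backslashes of the formula intact.
          """
      ```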

    - If the comment content starts with a verb, use either the first person (the base verb form) or the third person (verb plus "s") consistently within a module; the first person is recommended.
    - Colon requirements: keywords (such as `Args` and `Returns`) are followed by a colon ":"; parameter names (such as `Arg1` and `Arg2`) are followed by a colon ":" with a space after it. The content of `Summary` and `Returns` must not contain colons.
    - Blank line requirements:

      A blank line is required between content of different types (such as `Args` and `Returns`); no blank line is needed between content of the same type (such as `Arg1` and `Arg2`).

      ```text
      High-Level API for Training or Testing.

      Args:
          network (Cell): A training or testing network.
          loss_fn (Cell): Objective function, if `loss_fn` is None, the
              network should contain the logic of loss and grads calculation, and the logic
              of parallel if needed. Default: None.

      Returns:
          function, original function.
      ```

      When content is described with an unordered or ordered list, a blank line is required between the whole list and the content above it.

      ```text
      Args:
          amp_level (str): Option for argument `level` in `mindspore.amp.build_train_network`, level for mixed
              precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".

              - O0: Do not change.
              - O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
              - O3: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
              - auto: Set level to recommended level in different devices. Set level to "O2" on GPU, set
                level to "O3" on Ascend. The recommended level is chosen by the expert experience, cannot
                always generalize. User should specify the level for special network.

              "O2" is recommended on GPU, "O3" is recommended on Ascend.
      ```

    - Space requirements:

      Continuation lines of `Args` and `Raises` content need to be indented by 4 spaces.

      ```text
      Args:
          lr_power (float): Learning rate power controls how the learning rate decreases during training,
              must be less than or equal to zero. Use fixed learning rate if `lr_power` is zero.
          use_locking (bool): If True, the var and accumulation tensors will be protected from being updated.
              Default: False.

      Raises:
          TypeError: If `lr`, `l1`, `l2`, `lr_power` or `use_locking` is not a float.
              If `use_locking` is not a bool.
              If dtype of `var`, `accum`, `linear` or `grad` is neither float16 nor float32.
              If dtype of `indices` is not int32.
      ```

      Continuation lines of the following content do not need extra indentation; align them with the start of the text on the previous line.

      1. Sub-arguments or values of `Args`

         ```text
         Args:
             parallel_mode (str): There are five kinds of parallel modes, "stand_alone", "data_parallel",
                 "hybrid_parallel", "semi_auto_parallel" and "auto_parallel". Default: "stand_alone".

                 - stand_alone: Only one processor is working.
                 - data_parallel: Distributes the data across different processors.
                 - hybrid_parallel: Achieves data parallelism and model parallelism
                   manually.
                 - semi_auto_parallel: Achieves data parallelism and model parallelism by
                   setting parallel strategies.
         ```

      2. Unordered or ordered list content in `Inputs`, `Outputs`, `Returns`, and similar sections

         ```text
         Inputs:
             - **var** (Parameter) - The variable to be updated. The data type must be float16 or float32.
             - **accum** (Parameter) - The accumulation to be updated, must be same data type and shape as `var`.
             - **linear** (Parameter) - the linear coefficient to be updated, must be same data type and shape
               as `var`.
             - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
             - **indices** (Tensor) - A vector of indices in the first dimension of `var` and `accum`.
               The shape of `indices` must be the same as `grad` in the first dimension. The type must be int32.

         Outputs:
             Tuple of 3 Tensor, the updated parameters.

             - **var** (Tensor) - Tensor, has the same shape
               and data type as `var`.
             - **accum** (Tensor) - Tensor, has the same shape
               and data type as `accum`.
             - **linear** (Tensor) - Tensor, has the same shape
               and data type as `linear`.
         ```

      3. `Note` and `warning`

         ```text
         .. warning::
             This is warning text. Use a warning for information the user must
             understand to avoid negative consequences.

             If warning text runs over a line, make sure the lines wrap and are indented to
             the same level as the warning tag.
         ```

      A space is required between a parameter name and the `(` of its type in `Args`.

      ```text
      Args:
          lr (float): The learning rate value, must be positive.
      ```

- Notes on `Args`
    - Common parameter types:
        - Basic data types: `int`, `float`, `bool`, `str`, `list`, `dict`, `set`, `tuple`, `numpy.ndarray`.

          ```text
          Args:
              arg1 (int): Some description.
              arg2 (float): Some description.
              arg3 (bool): Some description.
              arg4 (str): Some description.
              arg5 (list): Some description.
              arg6 (dict): Some description.
              arg7 (set): Some description.
              arg8 (tuple): Some description.
              arg9 (numpy.ndarray): Some description.
          ```

        - dtype: for values from mindspore.dtype, write `mindspore.dtype`; for NumPy types, write `numpy.dtype`. Write other types as appropriate.

          ```text
          Args:
              arg1 (mindspore.dtype): Some description.
          ```

        - A parameter with several optional types: Union[type1, type2], for example `Union[Tensor, Number]`.

          ```text
          Args:
              arg1 (Union[Tensor, Number]): Some description.
          ```

        - list type: list[specific type], for example `list[str]`.

          ```text
          Args:
              arg1 (list[str]): Some description.
          ```

        - Optional types use the unified format: (type, optional).

          ```text
          Args:
              arg1 (bool, optional): Some description.
          ```

        - Other types: Tensor, another specific type, or a method name.

          ```text
          Args:
              arg1 (Tensor): Some description.
          ```

- Notes on `Returns`
    - If the type or dimension of the return value changes, describe the relationship between the return value and the input (see the sketch after the example below).
    - When there are multiple return values, write them on separate lines. Plain line breaks are not kept on the rendered page; using an unordered list keeps the values on separate lines.

      ```text
      Returns:
          - DatasetNode, the root node of the IR tree.
          - Dataset, the root dataset of the IR tree.
      ```
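
      For the first point, a hypothetical sketch of how the relationship to the input can be stated (the names and shapes are placeholders):

      ```text
      Returns:
          Tensor, has the same dtype as `x`, with shape :math:`(N, 1)`, where :math:`N` is the first dimension of `x`.
      ```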

- Notes on `Examples`
    - Every line of code in `Examples` must start with `>>>`. Continuation lines of multi-line code (including class or function definitions and manual line breaks) and blank lines must start with `...`. Output lines do not need any prefix.

      ```text
      Examples:
          >>> import mindspore as ms
          >>> import mindspore.nn as nn
          >>> class Net(nn.Cell):
          ...     def __init__(self, dense_shape):
          ...         super(Net, self).__init__()
          ...         self.dense_shape = dense_shape
          ...     def construct(self, indices, values):
          ...         x = SparseTensor(indices, values, self.dense_shape)
          ...         return x.values, x.indices, x.dense_shape
          ...
          >>> indices = Tensor([[0, 1], [1, 2]])
          >>> values = Tensor([1, 2], dtype=ms.float32)
          >>> out = Net((3, 4))(indices, values)
          >>> print(out[0])
          [1. 2.]
          >>> print(out[1])
          [[0 1]
           [1 2]]
          >>> print(out[2])
          (3, 4)
      ```

    - Provide actual code in `Examples`. If users should be directed to the Examples of another API, say so in a Note instead.
    - Comments for ops operators are written in PyNative mode; if the example is runnable, give the output.
    - Imports with industry-consensus abbreviations, such as np and nn, can be omitted.
    - Imports with long paths, or imports that must define a custom alias, need an explicit `from xxx import xxx as something` or `import xxx`; imports with short paths should be placed directly in the code where possible, as sketched below.
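
      A sketch of how such imports typically appear at the top of an `Examples` block (the chosen modules are only illustrative):

      ```text
      Examples:
          >>> import numpy as np
          >>> import mindspore.dataset as ds
          >>> from mindspore import Tensor
      ```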

- Notes on `Inputs` and `Outputs`
    - When the type is Tensor, describe the shape and write it in the form :math:\`(N, C, X)\`, as in the sketch below.
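
      A minimal hypothetical illustration (the names and shapes are placeholders):

      ```text
      Inputs:
          - **x** (Tensor) - The input tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.

      Outputs:
          Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`, with the same dtype as `x`.
      ```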

- Formulas
    - Display formula (on its own line, centered):

      ```text
      .. math::
          formula
      ```

    - Inline formula (shown together with the surrounding text, not centered):

      ```text
      xxx :math:`formula` xxx
      ```

    - If a formula contains a variable with an underscore followed by more than one letter (such as xxx_yyy), choose one of the following as needed; see the sketch after this list.
      1. Wrap the letters after the underscore in braces (xxx_{yyy}) to render them as a subscript, displayed as $xxx_{yyy}$.
      2. Add a backslash before the underscore (xxx\\_yyy) to display the full variable name, shown as xxx_yyy.
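
      A hypothetical sketch showing both options in one formula, assuming an `r"""` docstring so single backslashes are preserved: `out_{channels}` renders the suffix as a subscript, while `in\_channels` keeps the full variable name.

      ```text
      .. math::
          out_{channels} = in\_channels \times scale
      ```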

- Display of parent class methods
    - Parent class methods are not displayed by default.
    - To display them, add `:inherited-members:` under the module entry in the rst file of the Sphinx project; see <https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html> for details, and the sketch below.
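
      For illustration, a minimal rst entry of this kind might look as follows (the class is only an example; the exact layout depends on the project):

      ```text
      .. autoclass:: mindspore.nn.Cell
          :members:
          :inherited-members:
      ```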

- Links
    - Show only the title (such as "name" in the example), not the full address.

      Where the link is referenced, write:

      ```text
      `name`_
      ```

      Where the link target is provided, write:

      ```text
      .. _`name`: https://xxx
      ```

      Note:

      - If the target wraps onto a new line, mind the indentation; refer to the concrete examples below.
      - A space is required before "https".

      Alternatively, the simplified form below can be used, written only where the link is referenced:

      ```text
      `name <https://xxx>`_
      ```

    - Show the full address directly:

      ```text
      https://xxx
      ```

- Tables (for details, see <https://sublime-and-sphinx-guide.readthedocs.io/en/latest/tables.html#list-table-directive>)

  ```text
  .. list-table:: Title # table title
      :widths: 25 25 25 # column widths
      :header-rows: 1

      * - Heading row 1, column 1 # header
        - Heading row 1, column 2
        - Heading row 1, column 3
      * - Row 1, column 1
        -  # empty cell
        - Row 1, column 3
      * - Row 2, column 1
        - Row 2, column 2
        - Row 2,

          column 3 # to break a line inside a cell, add a blank line in between
  ```

  Rendered result:

  ![image](./resource/list_table.png)

- The detailed description does not wrap onto new lines by default. To force line breaks, write the content as a list or as a code-block.
    - As a list:

      ```text
      - Content1
      - Content2
      - Content3
      ```

    - As a code-block:

      ```text
      .. code-block::

          Content1
          Content2
          Content3
      ```

- Referencing other APIs in comments.
    - Referencing a class.

      Write only the API name:

      ```text
      :class:`AdamNoUpdateParam`
      ```

      If the API name is duplicated elsewhere, reference the full module name and class name:

      ```text
      :class:`mindspore.ops.LARS`
      ```

    - Referencing a function: the full module name and function name must be written.

      ```text
      :func:`mindspore.compression.quant.create_quant_config`
      ```

- In API descriptions, wrap variable names and API names in backticks (\`), and wrap variable values in single (\') or double (\") quotation marks.
    - Variable names or API names:

      ```text
      This part is a more detailed overview of `Mul` operation. For more details about Quantization,
      please refer to the implementation of subclass of `Observer`.

      Other losses derived from this should implement their own `construct` and use method `self.get_loss`
      to apply reduction to loss values.
      ```

    - Variable values:

      ```text
      If `reduction` is not one of 'none', 'mean', 'sum'.
      ```

- For a deprecated operator, state the recommended replacement API, and write "Deprecated" under Supported Platforms.

  ```python
  class BasicLSTMCell(PrimitiveWithInfer):
      """
      It's similar to operator :class:`DynamicRNN`. BasicLSTMCell will be deprecated in the future.
      Please use :class:`DynamicRNN` instead.

      Supported Platforms:
          Deprecated
      """
  ```

- Adding images.

  Format: `.. image:: {name.png}`.

  Here `{name.png}` is the image name; commit the image to the directory of the corresponding module under <https://gitee.com/mindspore/mindspore/tree/master/docs/api/api_python>.

  For example, to add the image `frequency_masking.png` to the comment of the `mindspore.dataset.audio.transforms.FrequencyMasking` API:

  ```python
  class FrequencyMasking(AudioTensorOperation):
      """
      Some description.

      .. image:: frequency_masking.png
      """
  ```

  Then commit the image to <https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/dataset_audio/frequency_masking.png>.

### Python Examples

#### Classes

```python
class Tensor(Tensor_):
    """
    Tensor is used for data storage.

    Tensor inherits tensor object in C++.
    Some functions are implemented in C++ and some functions are implemented in Python.

    Args:
        input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray): Input data of the tensor.
        dtype (:class:`mindspore.dtype`): Input data should be None, bool or numeric type defined in `mindspore.dtype`.
            The argument is used to define the data type of the output tensor. If it is None, the data type of the
            output tensor will be as same as the `input_data`. Default: None.

    Outputs:
        Tensor, with the same shape as `input_data`.

    Examples:
        >>> # initialize a tensor with input data
        >>> t1 = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
        >>> assert isinstance(t1, Tensor)
        >>> assert t1.shape == (1, 2, 3)
        >>> assert t1.dtype == mindspore.float32
        ...
        >>> # initialize a tensor with a float scalar
        >>> t2 = Tensor(0.1)
        >>> assert isinstance(t2, Tensor)
        >>> assert t2.dtype == mindspore.float64
    """

    def __init__(self, input_data, dtype=None):
        ...
```

The rendered result can be viewed [here](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.Tensor.html).

#### Methods

```python
def ms_function(fn=None, obj=None, input_signature=None):
    """
    Create a callable MindSpore graph from a python function.

    This allows the MindSpore runtime to apply optimizations based on graph.

    Args:
        fn (Function): The Python function that will be run as a graph. Default: None.
        obj (Object): The Python Object that provides the information for identifying the compiled function. Default:
            None.
        input_signature (MetaTensor): The MetaTensor which describes the input arguments. The MetaTensor specifies
            the shape and dtype of the Tensor and they will be supplied to this function. If `input_signature`
            is specified, each input to `fn` must be a `Tensor`. And the input parameters of `fn` cannot accept
            `**kwargs`. The shape and dtype of actual inputs should keep the same as `input_signature`. Otherwise,
            TypeError will be raised. Default: None.

    Returns:
        Function, if `fn` is not None, returns a callable function that will execute the compiled function; If `fn` is
        None, returns a decorator and when this decorator invokes with a single `fn` argument, the callable function is
        equal to the case when `fn` is not None.

    Examples:
        >>> def tensor_add(x, y):
        ...     z = F.tensor_add(x, y)
        ...     return z
        ...
        >>> @ms_function
        ... def tensor_add_with_dec(x, y):
        ...     z = F.tensor_add(x, y)
        ...     return z
        ...
        >>> @ms_function(input_signature=(MetaTensor(mindspore.float32, (1, 1, 3, 3)),
        ...                               MetaTensor(mindspore.float32, (1, 1, 3, 3))))
        ... def tensor_add_with_sig(x, y):
        ...     z = F.tensor_add(x, y)
        ...     return z
        ...
        >>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
        >>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
        ...
        >>> tensor_add_graph = ms_function(fn=tensor_add)
        >>> out = tensor_add_graph(x, y)
        >>> out = tensor_add_with_dec(x, y)
        >>> out = tensor_add_with_sig(x, y)
    """
    ...
```

The rendered result can be viewed [here](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.ms_function.html).

#### Formulas

```python
class Conv2d(_Conv):
    r"""
    2D convolution layer.

    Apply a 2D convolution over an input tensor which is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`,
    where :math:`N` is batch size, :math:`C_{in}` is channel number, and :math:`H_{in}, W_{in}` are height and width.
    For each batch of shape :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as:

    .. math::
        out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,

    ...
    """
```

The rendered result can be viewed [here](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/nn/mindspore.nn.Conv2d.html).

#### Links

```python
class BatchNorm(PrimitiveWithInfer):
    r"""
    Batch Normalization for input data and updated parameters.

    Batch Normalization is widely used in convolutional neural networks. This operation
    applies Batch Normalization over input to avoid internal covariate shift as described
    in the paper `Batch Normalization: Accelerating Deep Network Training by Reducing Internal
    Covariate Shift <https://arxiv.org/abs/1502.03167>`_. It rescales and recenters the
    features using a mini-batch of data and the learned parameters which can be described
    in the following formula,

    ...
    """
```

The rendered result can be viewed [here](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.BatchNorm.html).

## C++ API Comment Specifications

### Comment Format

All API comments use the following format:

```cpp
/// \brief Short description
///
/// Detailed description.
///
/// \note
/// Describe what to be aware of when using this interface.
///
/// \f[
/// math formula
/// \f]
/// XXX \f$ formulas in the line \f$ XXX
///
/// \param[in] Parameter_name meaning, range of values, other instructions.
///
/// \return Returns a description of the value, the cause of the error,
/// and the corresponding solution.
///
/// \par Example
/// \code
/// Example code
/// \endcode
```

Where:

- `\brief`: brief description.

  ```cpp
  /// \brief Function to create a CocoDataset.
  ```

- `Detailed description`: detailed description.

  ```cpp
  /// Base class for all recognizable patterns.
  /// We implement an Expression Template approach using static polymorphism based on
  /// the Curiously Recurring Template Pattern (CRTP) which "achieves a similar effect
  /// to the use of virtual functions without the costs..." as described in:
  /// https://en.wikipedia.org/wiki/Expression_templates and
  /// https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern
  /// The TryCapture function tries to capture the pattern with the given node.
  /// The GetNode function builds a new node using the captured values.
  ```

- `\note`: things to pay attention to when using this API.

  ```cpp
  /// \note
  /// The generated dataset has multi-columns
  ```

- Formulas.

  Multi-line formula:

  ```cpp
  /// \f[
  /// x>=y
  /// \f]
  ```

  Inline formula, placed between two `\f$` markers:

  ```cpp
  /// \brief Computes the boolean value of \f$x>=y\f$ element-wise.
  ```

- `\param[in]`: description of an input parameter.

  ```cpp
  /// \param[in] weight Defines the width of memory to request
  /// \param[in] height Defines the height of memory to request
  /// \param[in] type Defines the data type of memory to request
  ```

- `\return`: description of the return value.

  ```cpp
  /// \return Reference count of a certain memory currently.
  ```

- Example code, in the following format: `\par Example` as the prefix, with the example code between `\code` and `\endcode`:

  ```cpp
  /// \par Example
  /// \code
  /// /* Set number of workers(threads) to process the dataset in parallel */
  /// std::shared_ptr<Dataset> ds = ImageFolder(folder_path, true);
  /// ds = ds->SetNumWorkers(16);
  /// \endcode
  ```

### Precautions

1. Comments of APIs that need generated documentation must be introduced with `///` rather than `//`.
2. Do not interrupt a comment block; use `///` on blank lines.
3. When referencing an external name that also exists in the C++ API, add the `@ref` identifier in front of it to avoid generating a wrong link:

   ```cpp
   /// \brief Referring to @ref mindspore.nn.Cell for detail.
   ```

### Complete Example

```cpp
/// \brief Function to create a MnistDataset.
/// \note The generated dataset has two columns ["image", "label"].
/// \param[in] dataset_dir Path to the root directory that contains the dataset.
/// \param[in] usage Part of dataset of MNIST, can be "train", "test" or "all" (default = "all").
/// \param[in] sampler Shared pointer to a sampler object used to choose samples from the dataset. If sampler is not
///     given, a `RandomSampler` will be used to randomly iterate the entire dataset (default = RandomSampler()).
/// \param[in] cache Tensor cache to use (default=nullptr which means no cache is used).
/// \return Shared pointer to the MnistDataset.
/// \par Example
/// \code
/// /* Define dataset path and MindData object */
/// std::string folder_path = "/path/to/mnist_dataset_directory";
/// std::shared_ptr<Dataset> ds = Mnist(folder_path, "all", std::make_shared<RandomSampler>(false, 20));
///
/// /* Create iterator to read dataset */
/// std::shared_ptr<Iterator> iter = ds->CreateIterator();
/// std::unordered_map<std::string, mindspore::MSTensor> row;
/// iter->GetNextRow(&row);
///
/// /* Note: In MNIST dataset, each dictionary has keys "image" and "label" */
/// auto image = row["image"];
/// \endcode
inline std::shared_ptr<MnistDataset> MS_API
Mnist(const std::string &dataset_dir, const std::string &usage = "all",
      const std::shared_ptr<Sampler> &sampler = std::make_shared<RandomSampler>(),
      const std::shared_ptr<DatasetCache> &cache = nullptr) {
  return std::make_shared<MnistDataset>(StringToChar(dataset_dir), StringToChar(usage), sampler, cache);
}
```

The API documentation page generated from the comment above is [Function mindspore::dataset::Mnist](https://www.mindspore.cn/lite/api/en/master/generate/function_mindspore_dataset_Mnist-1.html).