The comments of classes and methods are in the following format:

    Summary.

    More elaborate description.

    .. warning::
        Warning description.

    Note:
        Note description.

    Args:
        Arg1 (Type): Description. Default: xxx.
        Arg2 (Type): Description.

            - Sub-argument1 or Value1 of Arg2: Description.
            - Sub-argument2 or Value2 of Arg2: Description.

    Returns:
        Type, description.

    Raises:
        Type: Description.

    Examples:
        >>> Sample Code
The format items are described as follows:

Summary: briefly describes the API function. If the description begins with a verb, the module must use the same verb form throughout: either the first person (the base form of the verb) or the third person (add an "s" after the verb); the first person is recommended.

More elaborate description: describes the function and usage of the API in detail.

warning: describes warnings for using the API, to help users avoid negative consequences.

Note: describes precautions for using the API. Do not use Notes.

Args: API parameter information, including the parameter name, type, value range, and default value.

Returns: return value information, including the return value type.

Raises: exception information, including the exception type and meaning.

Examples: sample code.

For comments of operators and cells, add Inputs, Outputs and Supported Platforms before Examples.

Inputs and Outputs: describe the input and output types and shapes of the operator after instantiation. The input names can be the same as those in the example. It is recommended to provide the corresponding mathematical formula in the comment.

Supported Platforms: describes the hardware platforms supported by the operator. Add `` before and after each platform name, and separate the names with spaces when there is more than one.

    Inputs:
        - **input_name1** (Type) - Description.
        - **input_name2** (Type) - Description.

    Outputs:
        Type and shape, description.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``
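Putting the items together, a minimal end-to-end sketch (the `scale` function, its behavior and its printed output are hypothetical, for illustration only):

    def scale(x, factor=1.0):
        """
        Scale the input tensor by a factor.

        Args:
            x (Tensor): The input tensor.
            factor (float): The scale factor. Default: 1.0.

        Returns:
            Tensor, has the same shape as `x`.

        Raises:
            TypeError: If `factor` is not a float.

        Examples:
            >>> out = scale(Tensor([1.0, 2.0]), 2.0)
            >>> print(out)
            [2. 4.]
        """
        ...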
Overall Requirements

The comment items required for a class or method are as follows: Summary, Args, Returns, and Raises. If a function does not contain related information (such as Args, Returns, or Raises), do not write None (for example, Raises: None); directly omit the comment item.
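For instance, a hypothetical function that raises no exceptions simply leaves Raises out:

    def identity(x):
        """
        Return the input unchanged.

        Args:
            x (Tensor): The input tensor.

        Returns:
            Tensor, the same as `x`.
        """
        ...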
When an API is generated by directory, the comments in the __init__ file header are displayed at the beginning of the web page. When an API is generated by file, the comments at the beginning of the file are displayed at the beginning of the web page. The comments must contain the overall description of the corresponding modules.
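For example, a module-level overview at the top of an __init__.py might look like this (a hypothetical module description):

    """
    Neural network cells.

    Predefined building blocks used to construct neural networks.
    """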
If a comment contains a backslash (\), change """ in the header to r""".
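A minimal sketch (the function and formula are hypothetical) where the backslashes in the LaTeX formula require the r""" header:

    def clip(x):
        r"""
        Clip the input to the range [0, 1].

        .. math::

            y = \min(\max(x, 0), 1)
        """
        ...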
If the comment begins with a verb, the module must use the same verb form throughout: either the first person (the base form of the verb) or the third person (add an "s" after the verb); the first person is recommended.
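For example, both of the following summaries are acceptable, but the first is recommended:

    """Create a callable MindSpore graph from a python function."""    # first person (recommended)
    """Creates a callable MindSpore graph from a python function."""   # third person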
Colon requirements: keywords (such as Args and Returns) and parameter names (such as Arg1 or Arg2) are followed by colons (:). A colon must be followed by a space. The content of Summary and Returns cannot contain colons.
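A minimal sketch (hypothetical parameter) showing the colon rules:

    Args:
        lr (float): The learning rate. Default: 0.1.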
Blank line requirements: a blank line is required between contents of different types (such as Args and Returns). A blank line is not required between contents of the same type (for example, Arg1 and Arg2).
    High-Level API for Training or Testing.

    Args:
        network (Cell): A training or testing network.
        loss_fn (Cell): Objective function, if `loss_fn` is None, the
            network should contain the logic of loss and grads calculation, and the logic
            of parallel if needed. Default: None.

    Returns:
        function, original function.
If the content is described in an unordered or ordered list, a blank line must be added between the content in the list and the content above the list.
    Args:
        amp_level (str): Option for argument `level` in `mindspore.amp.build_train_network`, level for mixed
            precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".

            - O0: Do not change.
            - O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
            - O3: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
            - auto: Set level to the recommended level on different devices: "O2" on GPU and "O3" on Ascend.
              The recommended level is chosen by expert experience and cannot always generalize, so users
              should specify the level for special networks. "O2" is recommended on GPU, "O3" is recommended
              on Ascend.
Space requirements: lines wrapped within Args and Raises entries must be indented by four spaces.
    Args:
        lr_power (float): Learning rate power controls how the learning rate decreases during training,
            must be less than or equal to zero. Use fixed learning rate if `lr_power` is zero.
        use_locking (bool): If True, the var and accumulation tensors will be protected from being updated.
            Default: False.

    Raises:
        TypeError: If `lr`, `l1`, `l2`, `lr_power` or `use_locking` is not a float.
            If `use_locking` is not a bool.
            If dtype of `var`, `accum`, `linear` or `grad` is neither float16 nor float32.
            If dtype of `indices` is not int32.
The following contents do not need four-space indents; they are aligned with the start position of the text in the previous line.

The sub-parameters or values of Args:

    Args:
        parallel_mode (str): There are five kinds of parallel modes, "stand_alone", "data_parallel",
            "hybrid_parallel", "semi_auto_parallel" and "auto_parallel". Default: "stand_alone".

            - stand_alone: Only one processor is working.
            - data_parallel: Distributes the data across different processors.
            - hybrid_parallel: Achieves data parallelism and model parallelism
              manually.
            - semi_auto_parallel: Achieves data parallelism and model parallelism by
              setting parallel strategies.
Line breaks in the unordered or ordered content of Inputs, Outputs and Returns:

    Inputs:
        - **var** (Parameter) - The variable to be updated. The data type must be float16 or float32.
        - **accum** (Parameter) - The accumulation to be updated, must be same data type and shape as `var`.
        - **linear** (Parameter) - The linear coefficient to be updated, must be same data type and shape
          as `var`.
        - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient.
        - **indices** (Tensor) - A vector of indices in the first dimension of `var` and `accum`.
          The shape of `indices` must be the same as `grad` in the first dimension. The type must be int32.

    Outputs:
        Tuple of 3 Tensor, the updated parameters.

        - **var** (Tensor) - Tensor, has the same shape
          and data type as `var`.
        - **accum** (Tensor) - Tensor, has the same shape
          and data type as `accum`.
        - **linear** (Tensor) - Tensor, has the same shape
          and data type as `linear`.
Note and warning:

    .. warning::
        This is warning text. Use a warning for information the user must
        understand to avoid negative consequences.

        If warning text runs over a line, make sure the lines wrap and are indented to
        the same level as the warning tag.
In Args, there must be a space between the parameter name and the ( of the type:

    Args:
        lr (float): The learning rate value, must be positive.
Args Comment

Basic data types: int, float, bool, str, list, dict, set, tuple and numpy.ndarray.

    Args:
        arg1 (int): Some description.
        arg2 (float): Some description.
        arg3 (bool): Some description.
        arg4 (str): Some description.
        arg5 (list): Some description.
        arg6 (dict): Some description.
        arg7 (set): Some description.
        arg8 (tuple): Some description.
        arg9 (numpy.ndarray): Some description.
dtype: for a value of mindspore.dtype, set the type to mindspore.dtype. For a value of a numpy type, set the type to numpy.dtype. Set other parameters based on the actual requirements.

    Args:
        arg1 (mindspore.dtype): Some description.
One parameter with multiple optional types: Union[type 1, type 2], for example, Union[Tensor, Number].

    Args:
        arg1 (Union[Tensor, Number]): Some description.
List type: list[specific type], for example, list[str].

    Args:
        arg1 (list[str]): Some description.
The format of optional types is as follows: (Type, optional).

    Args:
        arg1 (bool, optional): Some description.
Other types: Tensor, other specific types, or method names.

    Args:
        arg1 (Tensor): Some description.
Returns Comment

If the return value type or dimension changes, describe the relationship between the return value and the input.
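For example, a hypothetical Returns entry that relates the output to the input:

    Returns:
        Tensor, has the same dtype as `x` and shape :math:`(N, 1)`, where :math:`N` is the first
        dimension of `x`.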
If there are multiple return values, write them on separate lines. Plain line breaks are not rendered on the web page, so use an unordered list, which keeps the return values on separate lines:

    Returns:
        - DatasetNode, the root node of the IR tree.
        - Dataset, the root dataset of the IR tree.
Examples Comment

For the content in Examples, >>> should be added at the beginning of each line of code. For multi-line constructs (including class and function definitions and manual line breaks) and for blank lines, ... is needed at the beginning of those lines. No symbol is added at the beginning of output lines.
    Examples:
        >>> import mindspore as ms
        >>> import mindspore.nn as nn
        >>> class Net(nn.Cell):
        ...     def __init__(self, dense_shape):
        ...         super(Net, self).__init__()
        ...         self.dense_shape = dense_shape
        ...     def construct(self, indices, values):
        ...         x = SparseTensor(indices, values, self.dense_shape)
        ...         return x.values, x.indices, x.dense_shape
        ...
        >>> indices = Tensor([[0, 1], [1, 2]])
        >>> values = Tensor([1, 2], dtype=ms.float32)
        >>> out = Net((3, 4))(indices, values)
        >>> print(out[0])
        [1. 2.]
        >>> print(out[1])
        [[0 1]
         [1 2]]
        >>> print(out[2])
        (3, 4)
Actual code needs to be provided in Examples. If you need to refer to other Examples, use Note.

The comments of ops operators are written in PyNative mode. If the operator can be executed, the execution result must be provided.

Imports can be omitted where there is industry consensus, such as np and nn.

If the import path is long or a user-defined alias is required, add from xxx import xxx as something or import xxx before the code. If the import path is short, place the import in the code.
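A short sketch of these conventions (hypothetical example; the output is shown as it would be printed in PyNative mode):

    Examples:
        >>> import numpy as np
        >>> from mindspore import Tensor
        >>> import mindspore.ops as ops
        >>> add = ops.Add()
        >>> x = Tensor(np.ones([2, 2]).astype(np.float32))
        >>> print(add(x, x))
        [[2. 2.]
         [2. 2.]]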
Inputs and Outputs Comment

Formula

Line formula (centered on a line of its own):

    .. math::

        formula

Inline formula (displayed together with other peer text, not centered):

    xxx :math:`formula` xxx
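For instance, a hypothetical element-wise Square operator could use either form:

    Computes the square of the input element-wise, i.e. :math:`y = x^2`.

    .. math::

        out_i = x_i^2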
If the formula contains an underscored variable and the underscore is followed by multiple letters (for example, xxx_yyy), select one of the following methods based on the site requirements:
Parent Class Method Display

Add :inherited-members: to the module of the RST file in the Sphinx project to specify the parent class methods to be displayed. For details, see https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html.
Link

Only the title (such as the name in the following example) is displayed; the detailed address is not displayed.

Write a quotation in the following format:

    `name`_

Provide the link in the following format:

    .. _`name`: https://xxx

Note:
    - If there is a newline character, indent it. For details, see the following table.
    - There must be a space before https.

Alternatively, you can use the following simplified format, that is, write it only at the place where the reference is made:

    `name <https://xxx>`_

Display the detailed address:

    https://xxx
Table (for details, see https://sublime-and-sphinx-guide.readthedocs.io/en/latest/tables.html#list-table-directive):

    .. list-table:: Title  # Table title
       :widths: 25 25 25  # Table column width
       :header-rows: 1

       * - Heading row 1, column 1  # Table header
         - Heading row 1, column 2
         - Heading row 1, column 3
       * - Row 1, column 1
         -  # The cell is empty.
         - Row 1, column 3
       * - Row 2, column 1
         - Row 2, column 2
         - Row 2,

           column 3  # If a newline is required in the table content, add a blank line in the middle.
By default, the detailed description is displayed in one line. If you need to display it on separate lines, write it in the form of a list or a code block.

List mode:

    - Content1
    - Content2
    - Content3

Code-block mode:

    .. code-block::

        Content1
        Content2
        Content3
Reference other APIs in comments.

Reference a class. Write only the API name:

    :class:`AdamNoUpdateParam`

If there are duplicate API names, quote the complete module name and class name:

    :class:`mindspore.ops.LARS`

To quote a function, the complete module name and function name must be written:

    :func:`mindspore.compression.quant.create_quant_config`
In the interface description, variable names and interface names should be wrapped with backquotes (`), and variable values should be wrapped with single quotation marks (') or double quotation marks (").

Variable name or interface name:

    This part is a more detailed overview of `Mul` operation. For more details about Quantization,
    please refer to the implementation of subclass of `Observer`.

    Other losses derived from this should implement their own `construct` and use method `self.get_loss`
    to apply reduction to loss values.

Variable value:

    If `reduction` is not one of 'none', 'mean', 'sum'.
The deprecated operator needs to specify the recommended API, and "Deprecated" needs to be added in Supported Platforms.

    class BasicLSTMCell(PrimitiveWithInfer):
        """
        It's similar to operator :class:`DynamicRNN`. BasicLSTMCell will be deprecated in the future.
        Please use :class:`DynamicRNN` instead.

        Supported Platforms:
            Deprecated
        """
Add images.

Format:

    .. image:: {name.png}

{name.png} is the name of the image. Submit the image to the directory of the corresponding module in https://gitee.com/mindspore/mindspore/tree/master/docs/api/api_python.

For example, to add an image named frequency_masking.png to the comments of mindspore.dataset.audio.transforms.FrequencyMasking:

    class FrequencyMasking(AudioTensorOperation):
        """
        Some description.

        .. image:: frequency_masking.png
        """

Then submit the image to https://gitee.com/mindspore/mindspore/blob/master/docs/api/api_python/dataset_audio/frequency_masking.png.
class Tensor(Tensor_):
    """
    Tensor is used for data storage.

    Tensor inherits tensor object in C++.
    Some functions are implemented in C++ and some functions are implemented in Python.

    Args:
        input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray): Input data of the tensor.
        dtype (:class:`mindspore.dtype`): Input data should be None, bool or numeric type defined in
            `mindspore.dtype`. The argument is used to define the data type of the output tensor. If it is
            None, the data type of the output tensor will be as same as the `input_data`. Default: None.

    Outputs:
        Tensor, with the same shape as `input_data`.

    Examples:
        >>> # initialize a tensor with input data
        >>> t1 = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
        >>> assert isinstance(t1, Tensor)
        >>> assert t1.shape == (1, 2, 3)
        >>> assert t1.dtype == mindspore.float32
        ...
        >>> # initialize a tensor with a float scalar
        >>> t2 = Tensor(0.1)
        >>> assert isinstance(t2, Tensor)
        >>> assert t2.dtype == mindspore.float64
    """

    def __init__(self, input_data, dtype=None):
        ...
def ms_function(fn=None, obj=None, input_signature=None):
    """
    Create a callable MindSpore graph from a python function.

    This allows the MindSpore runtime to apply optimizations based on graph.

    Args:
        fn (Function): The Python function that will be run as a graph. Default: None.
        obj (Object): The Python Object that provides the information for identifying the compiled
            function. Default: None.
        input_signature (MetaTensor): The MetaTensor which describes the input arguments. The MetaTensor
            specifies the shape and dtype of the Tensor and they will be supplied to this function. If
            `input_signature` is specified, each input to `fn` must be a `Tensor`. And the input parameters
            of `fn` cannot accept `**kwargs`. The shape and dtype of actual inputs should keep the same as
            `input_signature`. Otherwise, TypeError will be raised. Default: None.

    Returns:
        Function, if `fn` is not None, returns a callable function that will execute the compiled function;
        If `fn` is None, returns a decorator and when this decorator invokes with a single `fn` argument,
        the callable function is equal to the case when `fn` is not None.

    Examples:
        >>> def tensor_add(x, y):
        ...     z = F.tensor_add(x, y)
        ...     return z
        ...
        >>> @ms_function
        ... def tensor_add_with_dec(x, y):
        ...     z = F.tensor_add(x, y)
        ...     return z
        ...
        >>> @ms_function(input_signature=(MetaTensor(mindspore.float32, (1, 1, 3, 3)),
        ...                               MetaTensor(mindspore.float32, (1, 1, 3, 3))))
        ... def tensor_add_with_sig(x, y):
        ...     z = F.tensor_add(x, y)
        ...     return z
        ...
        >>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
        >>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
        ...
        >>> tensor_add_graph = ms_function(fn=tensor_add)
        >>> out = tensor_add_graph(x, y)
        >>> out = tensor_add_with_dec(x, y)
        >>> out = tensor_add_with_sig(x, y)
    """
    ...
class Conv2d(_Conv):
    r"""
    2D convolution layer.

    Apply a 2D convolution over an input tensor which is typically of shape :math:`(N, C_{in}, H_{in}, W_{in})`,
    where :math:`N` is batch size, :math:`C_{in}` is channel number, and :math:`H_{in}, W_{in}` are height and
    width. For each batch of shape :math:`(C_{in}, H_{in}, W_{in})`, the formula is defined as:

    .. math::

        out_j = \sum_{i=0}^{C_{in} - 1} ccor(W_{ij}, X_i) + b_j,

    ...
    """
class BatchNorm(PrimitiveWithInfer):
    r"""
    Batch Normalization for input data and updated parameters.

    Batch Normalization is widely used in convolutional neural networks. This operation
    applies Batch Normalization over input to avoid internal covariate shift as described
    in the paper `Batch Normalization: Accelerating Deep Network Training by Reducing Internal
    Covariate Shift <https://arxiv.org/abs/1502.03167>`_. It rescales and recenters the
    features using a mini-batch of data and the learned parameters which can be described
    in the following formula,

    ...
    """
All C++ API comments follow the format below:
/// \brief Short description
///
/// Detailed description.
///
/// \note
/// Describe what to be aware of when using this interface.
///
/// \f[
/// math formula
/// \f]
/// XXX \f$ formulas in the line \f$ XXX
///
/// \param[in] Parameter_name meaning, range of values, other instructions.
///
/// \return Returns a description of the value, the cause of the error,
/// and the corresponding solution.
///
/// \par Example
/// \code
/// Example code
/// \endcode
in which,

\brief: a brief description.

    /// \brief Function to create a CocoDataset.

Detailed description: a detailed description.

    /// Base class for all recognizable patterns.
    /// We implement an Expression Template approach using static polymorphism based on
    /// the Curiously Recurring Template Pattern (CRTP) which "achieves a similar effect
    /// to the use of virtual functions without the costs..." as described in:
    /// https://en.wikipedia.org/wiki/Expression_templates and
    /// https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern
    /// The TryCapture function tries to capture the pattern with the given node.
    /// The GetNode function builds a new node using the captured values.

\note: precautions for using this API.

    /// \note
    /// The generated dataset has multi-columns
Formula writing.

Multi-line formula writing:

    /// \f[
    /// x>=y
    /// \f]

Inline formula writing; the formula is located between two \f$ markers:

    /// \brief Computes the boolean value of \f$x>=y\f$ element-wise.
\param[in]: input parameter description.

    /// \param[in] weight Defines the width of memory to request
    /// \param[in] height Defines the height of memory to request
    /// \param[in] type Defines the data type of memory to request

\return: return value description.

    /// \return Reference count of a certain memory currently.
The format of the sample code is as follows, with \par Example as the prefix and the sample code located between \code and \endcode. To make it easier to read, add 4 spaces of relative indentation:

    /// \par Example
    /// \code
    /// /* Set number of workers(threads) to process the dataset in parallel */
    /// std::shared_ptr<Dataset> ds = ImageFolder(folder_path, true);
    /// ds = ds->SetNumWorkers(16);
    /// \endcode
The API annotation content of the document to be generated is introduced by /// instead of //.

Do not break comment blocks; use /// for blank lines.

When quoting an external name with the same name in the C++ API, to avoid generating incorrect links, add the @ref mark in front:

    /// \brief Referring to @ref mindspore.nn.Cell for detail.
/// \brief Function to create a MnistDataset.
/// \note The generated dataset has two columns ["image", "label"].
/// \param[in] dataset_dir Path to the root directory that contains the dataset.
/// \param[in] usage Part of dataset of MNIST, can be "train", "test" or "all" (default = "all").
/// \param[in] sampler Shared pointer to a sampler object used to choose samples from the dataset. If sampler is not
/// given, a `RandomSampler` will be used to randomly iterate the entire dataset (default = RandomSampler()).
/// \param[in] cache Tensor cache to use (default=nullptr which means no cache is used).
/// \return Shared pointer to the MnistDataset.
/// \par Example
/// \code
/// /* Define dataset path and MindData object */
/// std::string folder_path = "/path/to/mnist_dataset_directory";
/// std::shared_ptr<Dataset> ds = Mnist(folder_path, "all", std::make_shared<RandomSampler>(false, 20));
///
/// /* Create iterator to read dataset */
/// std::shared_ptr<Iterator> iter = ds->CreateIterator();
/// std::unordered_map<std::string, mindspore::MSTensor> row;
/// iter->GetNextRow(&row);
///
/// /* Note: In MNIST dataset, each dictionary has keys "image" and "label" */
/// auto image = row["image"];
/// \endcode
inline std::shared_ptr<MnistDataset> MS_API
Mnist(const std::string &dataset_dir, const std::string &usage = "all",
const std::shared_ptr<Sampler> &sampler = std::make_shared<RandomSampler>(),
const std::shared_ptr<DatasetCache> &cache = nullptr) {
return std::make_shared<MnistDataset>(StringToChar(dataset_dir), StringToChar(usage), sampler, cache);
}
The API document page generated from the above comments is Function mindspore::dataset::Mnist.