==============================================================================
Implementing a Custom Training Loop with DataSetIter
==============================================================================

We use the text classification task introduced in :doc:`/tutorials/文本分类` as a detailed, worked example. Here we switch the dataset to SST2 and use the :class:`~fastNLP.DataSetIter` class to write our own training loop.
Everything before the section "A First Look at DataSetIter" is identical to :doc:`/tutorials/tutorial_5_loss_optimizer`; if you have already read that tutorial, you may skip ahead.

.. note::

    The code in this tutorial does not use a GPU. You can modify it yourself to enlarge the dataset and train on a GPU.
Reading and Preprocessing Data
------------------------------

Reading data
~~~~~~~~~~~~

We can use the :class:`~fastNLP.io.SST2Pipe` class from fastNLP's :mod:`fastNLP.io` module to read and preprocess the SST2 dataset with ease. The
:meth:`~fastNLP.io.SST2Pipe.process_from_file` method of an :class:`~fastNLP.io.SST2Pipe` object preprocesses the SST2 data it reads in. Its ``paths`` parameter names the directory containing the files to process; if ``paths`` is None (the default), the dataset is downloaded automatically.

The method returns a :class:`~fastNLP.io.DataBundle` containing the SST2 train, test, and dev sets together with the source-side and target-side vocabularies. Each of the three datasets has four :mod:`~fastNLP.core.field` s:

* raw_words: the original source sentence
* target: the label value
* words: raw_words after indexing
* seq_len: the sentence length

The code to read the data is as follows:
.. code-block:: python

    from fastNLP.io import SST2Pipe

    pipe = SST2Pipe()
    databundle = pipe.process_from_file()
    vocab = databundle.get_vocab('words')
    print(databundle)
    print(databundle.get_dataset('train')[0])
    print(databundle.get_vocab('words'))
The output is::

    In total 3 datasets:
        test has 1821 instances.
        train has 67349 instances.
        dev has 872 instances.
    In total 2 vocabs:
        words has 16293 entries.
        target has 2 entries.

    +-------------------------------------------+--------+--------------------------------------+---------+
    |                 raw_words                 | target |                words                 | seq_len |
    +-------------------------------------------+--------+--------------------------------------+---------+
    | hide new secretions from the parental ... |   1    | [4111, 98, 12010, 38, 2, 6844, 9042] |    7    |
    +-------------------------------------------+--------+--------------------------------------+---------+

    Vocabulary(['hide', 'new', 'secretions', 'from', 'the']...)
In addition to the Pipe classes that read and preprocess data, fastNLP also provides Loader classes that download and read datasets. The Pipe and Loader classes for the various datasets, along with their usage, are described in :doc:`/tutorials/tutorial_4_load_dataset`.
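As a quick illustration, a Loader reads the raw files but skips the indexing and field configuration that the Pipe performs. Below is a minimal sketch; it assumes the :class:`~fastNLP.io.SST2Loader` counterpart of the Pipe above, and that its ``load`` method also downloads the data when given no paths:

.. code-block:: python

    from fastNLP.io import SST2Loader

    # The Loader only reads the raw files into a DataBundle;
    # unlike SST2Pipe, it performs no token-to-index mapping
    # and sets no input/target flags.
    loader = SST2Loader()
    raw_bundle = loader.load()  # assumption: paths=None triggers a download
    print(raw_bundle)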
Splitting the dataset
~~~~~~~~~~~~~~~~~~~~~

Since the SST2 test set carries no label values, we split off part of the training set to serve as our test set. The code below demonstrates the usage of :meth:`~fastNLP.DataSet.split`.
So that you can run through the whole tutorial quickly, we keep only the first 5000 training instances.
.. code-block:: python

    train_data = databundle.get_dataset('train')[:5000]
    train_data, test_data = train_data.split(0.015)
    dev_data = databundle.get_dataset('dev')
    print(len(train_data), len(dev_data), len(test_data))
The output is::

    4925 872 75
The dataset's :meth:`~fastNLP.DataSet.set_input` and :meth:`~fastNLP.DataSet.set_target` methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

During preprocessing, the :meth:`~fastNLP.io.SST2Pipe.process_from_file` method of :class:`~fastNLP.io.SST2Pipe` also sets the `words` and `seq_len` :mod:`~fastNLP.core.field` s
of the train, test, and dev sets as input, and the `target` :mod:`~fastNLP.core.field` as target.
We can inspect how each :mod:`~fastNLP.core.field` is configured with the :meth:`~fastNLP.DataSet.print_field_meta` method of the
:class:`~fastNLP.DataSet` class:
.. code-block:: python

    train_data.print_field_meta()
The output is::

    +-------------+-----------+--------+-------+---------+
    | field_names | raw_words | target | words | seq_len |
    +-------------+-----------+--------+-------+---------+
    |   is_input  |   False   | False  |  True |   True  |
    |  is_target  |   False   |  True  | False |  False  |
    | ignore_type |           | False  | False |  False  |
    |  pad_value  |           |   0    |   0   |    0    |
    +-------------+-----------+--------+-------+---------+
Here is_input and is_target indicate whether a field is set as input and as target, respectively. When ignore_type is True, fastNLP will not perform automatic padding for that field when fetching batch
data with :class:`~fastNLP.DataSetIter`; pad_value is the value used when padding the corresponding :mod:`~fastNLP.core.field`. Both settings are only meaningful when a
:mod:`~fastNLP.core.field` has been set as input or target.

A :mod:`~fastNLP.core.field` with is_input True appears in the batch_x fetched by iterating a :class:`~fastNLP.DataSetIter`,
while a :mod:`~fastNLP.core.field` with is_target True appears in batch_y.
A detailed analysis follows in the DataSetIter walkthrough below.
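If you build a :class:`~fastNLP.DataSet` by hand rather than through a Pipe, you set these flags yourself with the methods named in the heading above. A minimal sketch that reproduces what :class:`~fastNLP.io.SST2Pipe` already did for us (the field names are the ones from this tutorial):

.. code-block:: python

    # Fields marked as input appear in batch_x; fields marked
    # as target appear in batch_y when iterating a DataSetIter.
    train_data.set_input('words', 'seq_len')
    train_data.set_target('target')

    # set_pad_val controls the pad_value row printed above
    train_data.set_pad_val('words', 0)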
Evaluation metric
~~~~~~~~~~~~~~~~~

Training a model requires an evaluation metric; here we use accuracy.

* The ``pred`` parameter corresponds to the name of one of the keys in the dict returned by the model's forward method.
* The ``target`` parameter corresponds to the name of the :mod:`~fastNLP.core.field` in the :class:`~fastNLP.DataSet` that serves as the label.

Here we use :class:`~fastNLP.Const` to help with the naming. If the return values of your own model's forward method, or
the :mod:`~fastNLP.core.field` names in your dataset, differ from this example, set the ``pred`` and ``target`` parameters to values that match your code:
.. code-block:: python

    from fastNLP import AccuracyMetric
    from fastNLP import Const

    # In this example the line below is equivalent to metrics=AccuracyMetric()
    metrics = AccuracyMetric(pred=Const.OUTPUT, target=Const.TARGET)
A First Look at DataSetIter
---------------------------

DataSetIter
~~~~~~~~~~~

fastNLP's :class:`~fastNLP.DataSetIter` class defines a batch and implements the various batch-related behaviours. Its initialization parameters are listed here (a minimal instantiation sketch follows the list):

* dataset: a :class:`~fastNLP.DataSet` object, the dataset to iterate over
* batch_size: the size of each fetched batch
* sampler: the :class:`~fastNLP.Sampler` to use; if None, a :class:`~fastNLP.RandomSampler` is used (Default: None)
* as_numpy: if True, batches are output as `numpy.array`; otherwise as `torch.Tensor` (Default: False)
* prefetch: if True, use multiple processes to prefetch the next batch (Default: False)
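For instance, iterating over the dev set in order with numpy output could look like this (a sketch using only the parameters listed above):

.. code-block:: python

    from fastNLP import DataSetIter, SequentialSampler

    # Deterministic order, numpy arrays instead of torch tensors
    it = DataSetIter(dataset=dev_data, batch_size=8,
                     sampler=SequentialSampler(), as_numpy=True)
    for batch_x, batch_y in it:
        pass  # here batch_x and batch_y are dicts of numpy arrays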
sampler
~~~~~~~

The samplers implemented by fastNLP are listed here (short construction examples follow the list):

* :class:`~fastNLP.BucketSampler`: randomly draws elements of similar length into a batch. [Init parameters: num_buckets, the number of buckets; batch_size, the batch size; seq_len_field_name, the name of the :mod:`~fastNLP.core.field` in the dataset that holds the sequence lengths.]
* SequentialSampler: draws elements in their original order. [No init parameters.]
* RandomSampler: draws elements in random order. [No init parameters.]
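A quick sketch constructing each of the three, with the parameter names exactly as listed above:

.. code-block:: python

    from fastNLP import BucketSampler, SequentialSampler, RandomSampler

    # Group instances of similar seq_len into buckets, then batch within buckets;
    # this keeps padding waste low while retaining some randomness.
    bucket = BucketSampler(num_buckets=10, batch_size=32,
                           seq_len_field_name='seq_len')
    sequential = SequentialSampler()  # original dataset order
    shuffled = RandomSampler()        # uniformly shuffled order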
Padder
~~~~~~

In fastNLP, padding is bound to an individual :mod:`~fastNLP.core.field`; that is, different :mod:`~fastNLP.core.field` s can be padded in different ways. In English tasks, for example, the padding that words need
usually differs from the padding that characters need. fastNLP implements this through subclasses of :class:`~fastNLP.Padder`.
By default every field uses an :class:`~fastNLP.AutoPadder`, and in most cases :class:`~fastNLP.AutoPadder` is all you need.
If neither :class:`~fastNLP.AutoPadder` nor :class:`~fastNLP.EngChar2DPadder` meets your needs,
you can also write your own :class:`~fastNLP.Padder`, as the FixLengthPadder example further below does.
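For instance, a character-level field (a 2D list of words × characters) could be padded with :class:`~fastNLP.EngChar2DPadder`. This is a sketch only: the ``chars`` field is hypothetical and not part of the SST2 data used in this tutorial, and the zero-argument constructor is an assumption:

.. code-block:: python

    from fastNLP import EngChar2DPadder

    # Pads both dimensions of a word x character field.
    padder = EngChar2DPadder()
    # 'chars' is a hypothetical field of per-word character indices:
    # some_dataset.set_padder('chars', padder)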
Automatic padding with DataSetIter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following code shows basic usage of DataSetIter:
.. code-block:: python

    from fastNLP import BucketSampler
    from fastNLP import DataSetIter

    tmp_data = dev_data[:10]
    # Define a batch: pass in the DataSet and specify the batch_size and the batching rule:
    # sequential order (Sequential), random order (Random), or similar lengths grouped together (Bucket).
    sampler = BucketSampler(batch_size=2, seq_len_field_name='seq_len')
    batch = DataSetIter(batch_size=2, dataset=tmp_data, sampler=sampler)
    for batch_x, batch_y in batch:
        print("batch_x: ", batch_x)
        print("batch_y: ", batch_y)
The output is::

    batch_x:  {'words': tensor([[   13,   830,  7746,   174,     3,    47,     6,    83,  5752,    15,
              2177,    15,    63,    57,   406,    84,  1009,  4973,    27,    17,
             13785,     3,   533,  3687, 15623,    39,   375,     8, 15624,     8,
              1323,  4398,     7],
            [ 1045, 11113,    16,   104,     5,     4,   176,  1824,  1704,     3,
                 2,    18,    11,     4,  1018,   432,   143,    33,   245,   308,
                 7,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0]]), 'seq_len': tensor([33, 21])}
    batch_y:  {'target': tensor([1, 0])}
    batch_x:  {'words': tensor([[  14,   10,    4,  311,    5,  154, 1418,  609,    7],
            [  14,   10,  437,   32,   78,    3,   78,  437,    7]]), 'seq_len': tensor([9, 9])}
    batch_y:  {'target': tensor([0, 1])}
    batch_x:  {'words': tensor([[    4,   277,   685,    18,     7],
            [15618,  3204,     5,  1675,     0]]), 'seq_len': tensor([5, 4])}
    batch_y:  {'target': tensor([1, 1])}
    batch_x:  {'words': tensor([[    2,   155,     3,  4426,     3,   239,     3,   739,     5,  1136,
                41,    43,  2427,   736,     2,   648,    10, 15620,  2285,     7],
            [   24,    95,    28,    46,     8,   336,    38,   239,     8,  2133,
                 2,    18,    10, 15622,  1421,     6,    61,     5,   387,     7]]), 'seq_len': tensor([20, 20])}
    batch_y:  {'target': tensor([0, 0])}
    batch_x:  {'words': tensor([[  879,    96,     8,  1026,    12,  8067,    11, 13623,     8, 15619,
                 4,   673,   662,    15,     4,  1154,   240,   639,   417,     7],
            [   45,   752,   327,   180,    10, 15621,    16,    72,  8904,     9,
              1217,     7,     0,     0,     0,     0,     0,     0,     0,     0]]), 'seq_len': tensor([20, 12])}
    batch_y:  {'target': tensor([0, 1])}
We can see that all :mod:`~fastNLP.core.field` s set as input appear in batch_x, while those set as target appear in batch_y. Moreover, of the two instances in the same batch_x, the shorter one is automatically padded to the length of the longer one; the default padding value is 0.
Changing a Dataset's padding value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The default pad value can be changed with the :meth:`~fastNLP.DataSet.set_pad_val` method:
.. code-block:: python

    tmp_data.set_pad_val('words', -1)
    batch = DataSetIter(batch_size=2, dataset=tmp_data, sampler=sampler)
    for batch_x, batch_y in batch:
        print("batch_x: ", batch_x)
        print("batch_y: ", batch_y)
The output is::

    batch_x:  {'words': tensor([[   13,   830,  7746,   174,     3,    47,     6,    83,  5752,    15,
              2177,    15,    63,    57,   406,    84,  1009,  4973,    27,    17,
             13785,     3,   533,  3687, 15623,    39,   375,     8, 15624,     8,
              1323,  4398,     7],
            [ 1045, 11113,    16,   104,     5,     4,   176,  1824,  1704,     3,
                 2,    18,    11,     4,  1018,   432,   143,    33,   245,   308,
                 7,    -1,    -1,    -1,    -1,    -1,    -1,    -1,    -1,    -1,
                -1,    -1,    -1]]), 'seq_len': tensor([33, 21])}
    batch_y:  {'target': tensor([1, 0])}
    batch_x:  {'words': tensor([[  14,   10,    4,  311,    5,  154, 1418,  609,    7],
            [  14,   10,  437,   32,   78,    3,   78,  437,    7]]), 'seq_len': tensor([9, 9])}
    batch_y:  {'target': tensor([0, 1])}
    batch_x:  {'words': tensor([[    2,   155,     3,  4426,     3,   239,     3,   739,     5,  1136,
                41,    43,  2427,   736,     2,   648,    10, 15620,  2285,     7],
            [   24,    95,    28,    46,     8,   336,    38,   239,     8,  2133,
                 2,    18,    10, 15622,  1421,     6,    61,     5,   387,     7]]), 'seq_len': tensor([20, 20])}
    batch_y:  {'target': tensor([0, 0])}
    batch_x:  {'words': tensor([[    4,   277,   685,    18,     7],
            [15618,  3204,     5,  1675,    -1]]), 'seq_len': tensor([5, 4])}
    batch_y:  {'target': tensor([1, 1])}
    batch_x:  {'words': tensor([[  879,    96,     8,  1026,    12,  8067,    11, 13623,     8, 15619,
                 4,   673,   662,    15,     4,  1154,   240,   639,   417,     7],
            [   45,   752,   327,   180,    10, 15621,    16,    72,  8904,     9,
              1217,     7,    -1,    -1,    -1,    -1,    -1,    -1,    -1,    -1]]), 'seq_len': tensor([20, 12])}
    batch_y:  {'target': tensor([0, 1])}
We can see that -1 is now used for the padding.
Customized padding for a Dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If we want customized padding for particular :mod:`~fastNLP.core.field` s, we can write our own Padder class and install it with the :meth:`~fastNLP.DataSet.set_padder` method. Below we demonstrate this with a padder that pads data to a fixed length:
.. code-block:: python

    from fastNLP.core.field import Padder
    import numpy as np

    class FixLengthPadder(Padder):
        def __init__(self, pad_val=0, length=None):
            super().__init__(pad_val=pad_val)
            self.length = length
            assert self.length is not None, "Creating FixLengthPadder with no specific length!"

        def __call__(self, contents, field_name, field_ele_dtype, dim):
            # compute the maximum length within the current contents
            max_len = max(map(len, contents))
            # raise an error if it exceeds the padder's fixed length
            assert max_len <= self.length, \
                "Fixed padder length smaller than actual length! with length {}".format(max_len)
            array = np.full((len(contents), self.length), self.pad_val, dtype=field_ele_dtype)
            for i, content_i in enumerate(contents):
                array[i, :len(content_i)] = content_i
            return array

    # set the fixed length of the FixLengthPadder to 40
    tmp_padder = FixLengthPadder(pad_val=0, length=40)
    # install the padder on the 'words' field via the dataset's set_padder method
    tmp_data.set_padder('words', tmp_padder)
    batch = DataSetIter(batch_size=2, dataset=tmp_data, sampler=sampler)
    for batch_x, batch_y in batch:
        print("batch_x: ", batch_x)
        print("batch_y: ", batch_y)
The output is::

    batch_x:  {'words': tensor([[   45,   752,   327,   180,    10, 15621,    16,    72,  8904,     9,
              1217,     7,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0],
            [  879,    96,     8,  1026,    12,  8067,    11, 13623,     8, 15619,
                 4,   673,   662,    15,     4,  1154,   240,   639,   417,     7,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0]]), 'seq_len': tensor([12, 20])}
    batch_y:  {'target': tensor([1, 0])}
    batch_x:  {'words': tensor([[   13,   830,  7746,   174,     3,    47,     6,    83,  5752,    15,
              2177,    15,    63,    57,   406,    84,  1009,  4973,    27,    17,
             13785,     3,   533,  3687, 15623,    39,   375,     8, 15624,     8,
              1323,  4398,     7,     0,     0,     0,     0,     0,     0,     0],
            [ 1045, 11113,    16,   104,     5,     4,   176,  1824,  1704,     3,
                 2,    18,    11,     4,  1018,   432,   143,    33,   245,   308,
                 7,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0]]), 'seq_len': tensor([33, 21])}
    batch_y:  {'target': tensor([1, 0])}
    batch_x:  {'words': tensor([[  14,   10,    4,  311,    5,  154, 1418,  609,    7,    0,    0,    0,
                 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                 0,    0,    0,    0],
            [  14,   10,  437,   32,   78,    3,   78,  437,    7,    0,    0,    0,
                 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                 0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0,
                 0,    0,    0,    0]]), 'seq_len': tensor([9, 9])}
    batch_y:  {'target': tensor([0, 1])}
    batch_x:  {'words': tensor([[    2,   155,     3,  4426,     3,   239,     3,   739,     5,  1136,
                41,    43,  2427,   736,     2,   648,    10, 15620,  2285,     7,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0],
            [   24,    95,    28,    46,     8,   336,    38,   239,     8,  2133,
                 2,    18,    10, 15622,  1421,     6,    61,     5,   387,     7,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0]]), 'seq_len': tensor([20, 20])}
    batch_y:  {'target': tensor([0, 0])}
    batch_x:  {'words': tensor([[    4,   277,   685,    18,     7,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0],
            [15618,  3204,     5,  1675,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0,
                 0,     0,     0,     0,     0,     0,     0,     0,     0,     0]]), 'seq_len': tensor([5, 4])}
    batch_y:  {'target': tensor([1, 1])}
Here all the `words` have been padded to lists of length 40.
Writing Your Own Training Loop with DataSetIter
-----------------------------------------------

If you prefer a PyTorch-like workflow and want to write the training loop yourself, you can refer to the code below.
It uses fastNLP's :class:`~fastNLP.DataSetIter` to obtain the mini-batches for training,
with a :class:`~fastNLP.BucketSampler` passed in as the :class:`~fastNLP.DataSetIter` parameter that chooses the sampling strategy:
.. code-block:: python

    from fastNLP import BucketSampler
    from fastNLP import DataSetIter
    from fastNLP.models import CNNText
    from fastNLP import Tester
    import torch
    import time

    embed_dim = 100
    model = CNNText((len(vocab), embed_dim), num_classes=2, dropout=0.1)

    def train(epoch, data, devdata):
        optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
        lossfunc = torch.nn.CrossEntropyLoss()
        batch_size = 32

        # Define a batch: pass in the DataSet and specify the batch_size and the batching rule:
        # sequential order (Sequential), random order (Random), or similar lengths grouped together (Bucket).
        train_sampler = BucketSampler(batch_size=batch_size, seq_len_field_name='seq_len')
        train_batch = DataSetIter(batch_size=batch_size, dataset=data, sampler=train_sampler)

        start_time = time.time()
        print("-"*5 + "start training" + "-"*5)
        for i in range(epoch):
            loss_list = []
            for batch_x, batch_y in train_batch:
                optimizer.zero_grad()
                output = model(batch_x['words'])
                loss = lossfunc(output['pred'], batch_y['target'])
                loss.backward()
                optimizer.step()
                loss_list.append(loss.item())

            # With verbose=0, Tester.test() prints nothing and only returns the evaluation results;
            # with verbose=1 it also prints the validation results.
            # After calling test(), _format_eval_results(res) renders the results as a string.
            tester_tmp = Tester(devdata, model, metrics=AccuracyMetric(), verbose=0)
            res = tester_tmp.test()

            print('Epoch {:d} Avg Loss: {:.2f}'.format(i, sum(loss_list) / len(loss_list)), end=" ")
            print(tester_tmp._format_eval_results(res), end=" ")
            print('{:d}ms'.format(round((time.time()-start_time)*1000)))
            loss_list.clear()

    train(10, train_data, dev_data)
    # quick evaluation with a Tester
    tester = Tester(test_data, model, metrics=AccuracyMetric())
    tester.test()
The output of this code is::

    -----start training-----

    Evaluate data in 2.68 seconds!
    Epoch 0 Avg Loss: 0.66 AccuracyMetric: acc=0.708716 29307ms

    Evaluate data in 0.38 seconds!
    Epoch 1 Avg Loss: 0.41 AccuracyMetric: acc=0.770642 52200ms

    Evaluate data in 0.51 seconds!
    Epoch 2 Avg Loss: 0.16 AccuracyMetric: acc=0.747706 70268ms

    Evaluate data in 0.96 seconds!
    Epoch 3 Avg Loss: 0.06 AccuracyMetric: acc=0.741972 90349ms

    Evaluate data in 1.04 seconds!
    Epoch 4 Avg Loss: 0.03 AccuracyMetric: acc=0.740826 114250ms

    Evaluate data in 0.8 seconds!
    Epoch 5 Avg Loss: 0.02 AccuracyMetric: acc=0.738532 134742ms

    Evaluate data in 0.65 seconds!
    Epoch 6 Avg Loss: 0.01 AccuracyMetric: acc=0.731651 154503ms

    Evaluate data in 0.8 seconds!
    Epoch 7 Avg Loss: 0.01 AccuracyMetric: acc=0.738532 175397ms

    Evaluate data in 0.36 seconds!
    Epoch 8 Avg Loss: 0.01 AccuracyMetric: acc=0.733945 192384ms

    Evaluate data in 0.84 seconds!
    Epoch 9 Avg Loss: 0.01 AccuracyMetric: acc=0.744266 214417ms

    Evaluate data in 0.04 seconds!
    [tester]
    AccuracyMetric: acc=0.786667
----------------------------------
Code Download
----------------------------------

`Click here to download the IPython Notebook file <https://sourcegraph.com/github.com/fastnlp/fastNLP@master/-/raw/tutorials/tutorial_6_datasetiter.ipynb>`_