fastnlp_tutorial_5.ipynb
  1. {
  2. "cells": [
  3. {
  4. "cell_type": "markdown",
  5. "id": "fdd7ff16",
  6. "metadata": {},
  7. "source": [
  8. "# T5. trainer 和 evaluator 的深入介绍\n",
  9. "\n",
  10. "  1   fastNLP 中 driver 的补充介绍\n",
  11. " \n",
  12. "    1.1   trainer 和 driver 的构想 \n",
  13. "\n",
  14. "    1.2   device 与 多卡训练\n",
  15. "\n",
  16. "  2   fastNLP 中的更多 metric 类型\n",
  17. "\n",
  18. "    2.1   预定义的 metric 类型\n",
  19. "\n",
  20. "    2.2   自定义的 metric 类型\n",
  21. "\n",
  22. "  3   fastNLP 中 trainer 的补充介绍\n",
  23. "\n",
  24. "    3.1   trainer 的内部结构"
  25. ]
  26. },
  27. {
  28. "cell_type": "markdown",
  29. "id": "08752c5a",
  30. "metadata": {
  31. "pycharm": {
  32. "name": "#%% md\n"
  33. }
  34. },
  35. "source": [
  36. "## 1. fastNLP 中 driver 的补充介绍\n",
  37. "\n",
  38. "### 1.1 trainer 和 driver 的构想\n",
  39. "\n",
  40. "在`fastNLP 0.8`中,模型训练最关键的模块便是**训练模块`trainer`、评测模块`evaluator`、驱动模块`driver`**,\n",
  41. "\n",
  42. "  在`tutorial 0`中,已经简单介绍过上述三个模块:**`driver`用来控制训练评测中的`model`的最终运行**\n",
  43. "\n",
  44. "    **`evaluator`封装评测的`metric`**,**`trainer`封装训练的`optimizer`**,**也可以包括`evaluator`**\n",
  45. "\n",
  46. "之所以做出上述的划分,其根本目的在于要**达成对于多个`python`学习框架**,**例如`pytorch`、`paddle`、`jittor`的兼容**\n",
  47. "\n",
  48. "  对于训练环节,其伪代码如下方左边紫色一栏所示,由于**不同框架对模型、损失、张量的定义各有不同**,所以将训练环节\n",
  49. "\n",
  50. "    划分为**框架无关的循环控制、批量分发部分**,**由`trainer`模块负责**实现,对应的伪代码如下方中间蓝色一栏所示\n",
  51. "\n",
  52. "    以及**随框架不同的模型调用、数值优化部分**,**由`driver`模块负责**实现,对应的伪代码如下方右边红色一栏所示\n",
  53. "\n",
  54. "| <div align=\"center\">训练过程</div> | <div align=\"center\">框架无关 对应`trainer`</div> | <div align=\"center\">框架相关 对应`driver`</div> |\n",
  55. "|:--|:--|:--|\n",
  56. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;\">try:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;\">try:</div> | |\n",
  57. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:20px;\">for epoch in 1:n_eoochs:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:20px;\">for epoch in 1:n_eoochs:</div> | |\n",
  58. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:40px;\">for step in 1:total_steps:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:40px;\">for step in 1:total_steps:</div> | |\n",
  59. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:60px;\">batch = fetch_batch()</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:60px;\">batch = fetch_batch()</div> | |\n",
  60. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:60px;\">loss = model.forward(batch)&emsp;</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:60px;\">loss = model.forward(batch)&emsp;</div> |\n",
  61. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:60px;\">loss.backward()</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:60px;\">loss.backward()</div> |\n",
  62. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:60px;\">model.clear_grad()</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:60px;\">model.clear_grad()</div> |\n",
  63. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:60px;\">model.update()</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:60px;\">model.update()</div> |\n",
  64. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:40px;\">if need_save:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:40px;\">if need_save:</div> | |\n",
  65. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:60px;\">model.save()</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:60px;\">model.save()</div> |\n",
  66. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;\">except:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;\">except:</div> | |\n",
  67. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:20px;\">process_exception()</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:20px;\">process_exception()</div> | |"
  68. ]
  69. },
  70. {
  71. "cell_type": "markdown",
  72. "id": "3e55f07b",
  73. "metadata": {},
  74. "source": [
  75. "&emsp; 对于评测环节,其伪代码如下方左边紫色一栏所示,同样由于不同框架对模型、损失、张量的定义各有不同,所以将评测环节\n",
  76. "\n",
  77. "&emsp; &emsp; 划分为**框架无关的循环控制、分发汇总部分**,**由`evaluator`模块负责**实现,对应的伪代码如下方中间蓝色一栏所示\n",
  78. "\n",
  79. "&emsp; &emsp; 以及**随框架不同的模型调用、评测计算部分**,同样**由`driver`模块负责**实现,对应的伪代码如下方右边红色一栏所示\n",
  80. "\n",
  81. "| <div align=\"center\">评测过程</div> | <div align=\"center\">框架无关 对应`evaluator`</div> | <div align=\"center\">框架相关 对应`driver`</div> |\n",
  82. "|:--|:--|:--|\n",
  83. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;\">try:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;\">try:</div> | |\n",
  84. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:20px;\">model.set_eval()</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:20px;\">model.set_eval()</div> | |\n",
  85. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:20px;\">for step in 1:total_steps:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:20px;\">for step in 1:total_steps:</div> | |\n",
  86. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:40px;\">batch = fetch_batch()</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:40px;\">batch = fetch_batch()</div> | |\n",
  87. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:40px;\">outputs = model.evaluate(batch)&emsp;</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:40px;\">outputs = model.evaluate(batch)&emsp;</div> |\n",
  88. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:40px;\">metric.compute(batch, outputs)</div> | | <div style=\"font-family:Consolas;font-weight:bold;color:red;text-indent:40px;\">metric.compute(batch, outputs)</div> |\n",
  89. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:20px;\">results = metric.get_metric()</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:20px;\">results = metric.get_metric()</div> | |\n",
  90. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;\">except:</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;\">except:</div> | |\n",
  91. "| <div style=\"font-family:Consolas;font-weight:bold;color:purple;text-indent:20px;\">process_exception()</div> | <div style=\"font-family:Consolas;font-weight:bold;color:blue;text-indent:20px;\">process_exception()</div> | |"
  92. ]
  93. },
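{
"cell_type": "markdown",
"id": "3f7a91b2",
"metadata": {},
"source": [
"为了帮助理解上面两张表的分工,这里补充一段纯示意的`python`代码:框架无关的循环控制由`trainer`/`evaluator`一侧负责\n",
"\n",
"&emsp; &emsp; 框架相关的调用统一交给`driver`一侧;其中的类名、函数名与键名均为本教程说明用的假设,并非`fastNLP`的真实接口\n",
"\n",
"```python\n",
"# 纯示意:用最小的代码骨架表达 trainer / driver 的分工,名称均为假设\n",
"class MyTorchDriver:\n",
"    # 框架相关部分:模型调用、反向传播、参数更新等,对应上表的红色一栏\n",
"    def __init__(self, model, optimizer):\n",
"        self.model, self.optimizer = model, optimizer\n",
"\n",
"    def train_step(self, batch):\n",
"        return self.model(**batch)['loss']   # 假设模型 forward 返回含 'loss' 的字典\n",
"\n",
"    def evaluate_step(self, batch):\n",
"        return self.model.predict(**batch)   # 假设模型提供 predict 方法\n",
"\n",
"    def backward_and_update(self, loss):\n",
"        loss.backward()\n",
"        self.optimizer.step()\n",
"        self.optimizer.zero_grad()\n",
"\n",
"\n",
"def my_train_loop(driver, dataloader, n_epochs):\n",
"    # 框架无关部分:只负责循环控制与批量分发,对应上表中 trainer 的蓝色一栏\n",
"    for epoch in range(n_epochs):\n",
"        for batch in dataloader:\n",
"            loss = driver.train_step(batch)\n",
"            driver.backward_and_update(loss)\n",
"\n",
"\n",
"def my_evaluate_loop(driver, dataloader, metric):\n",
"    # 评测同理:循环与结果汇总框架无关,对应上表中 evaluator 的蓝色一栏\n",
"    for batch in dataloader:\n",
"        outputs = driver.evaluate_step(batch)\n",
"        metric.compute(batch, outputs)       # 与上表中的 metric.compute(batch, outputs) 对应\n",
"    return metric.get_metric()\n",
"```"
]
},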
  94. {
  95. "cell_type": "markdown",
  96. "id": "94ba11c6",
  97. "metadata": {
  98. "pycharm": {
  99. "name": "#%%\n"
  100. }
  101. },
  102. "source": [
  103. "由此,从程序员的角度,`fastNLP v0.8`**通过一个`driver`让基于`pytorch`、`paddle`、`jittor`框架的模型**\n",
  104. "\n",
  105. "&emsp; &emsp; **都能在相同的`trainer`和`evaluator`上运行**,这也**是`fastNLP v0.8`相比于之前版本的一大亮点**\n",
  106. "\n",
  107. "&emsp; 而从`driver`的角度,`fastNLP v0.8`通过定义一个`driver`基类,**将所有张量转化为`numpy.tensor`**\n",
  108. "\n",
  109. "&emsp; &emsp; 并由此泛化出`torch_driver`、`paddle_driver`、`jittor_driver`三个子类,从而实现了\n",
  110. "\n",
  111. "&emsp; &emsp; 对`pytorch`、`paddle`、`jittor`的兼容,有关后两者的实践请参考接下来的`tutorial-6`"
  112. ]
  113. },
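{
"cell_type": "markdown",
"id": "5c0d2e4f",
"metadata": {},
"source": [
"&emsp; 从使用者的角度,上述设计意味着切换框架时基本只需要替换模型、优化器与`driver`参数;下面是一个参数层面的示意\n",
"\n",
"&emsp; &emsp; (其中`torch_model`、`paddle_model`等均为假设已经准备好的对象,此处并非可直接运行的完整代码)\n",
"\n",
"```python\n",
"# 示意:同一套 Trainer 写法,通过 driver 参数切换后端框架\n",
"# torch_model / torch_optimizer / paddle_model / paddle_optimizer / train_dataloader 均为假设的已有对象\n",
"from fastNLP import Trainer\n",
"\n",
"trainer_torch = Trainer(model=torch_model, optimizers=torch_optimizer,\n",
"                        train_dataloader=train_dataloader,\n",
"                        driver='torch')\n",
"\n",
"trainer_paddle = Trainer(model=paddle_model, optimizers=paddle_optimizer,\n",
"                         train_dataloader=train_dataloader,\n",
"                         driver='paddle')\n",
"```"
]
},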
  114. {
  115. "cell_type": "markdown",
  116. "id": "ab1cea7d",
  117. "metadata": {},
  118. "source": [
  119. "### 1.2 device 与 多卡训练\n",
  120. "\n",
  121. "**`fastNLP v0.8`支持多卡训练**,实现方法则是**通过将`trainer`中的`device`设置为对应显卡的序号列表**\n",
  122. "\n",
  123. "&emsp; 由单卡切换成多卡,无论是数据、模型还是评测都会面临一定的调整,`fastNLP v0.8`保证:\n",
  124. "\n",
  125. "&emsp; &emsp; 数据拆分时,不同卡之间相互协调,所有数据都可以被训练,且不会使用到相同的数据\n",
  126. "\n",
  127. "&emsp; &emsp; 模型训练时,模型之间需要交换梯度;评测计算时,每张卡先各自计算,再汇总结果\n",
  128. "\n",
  129. "&emsp; 例如,在评测计算运行`get_metric`函数时,`fastNLP v0.8`将自动按照`self.right`和`self.total`\n",
  130. "\n",
  131. "&emsp; &emsp; 指定的**`aggregate_method`方法**,默认为`sum`,将每张卡上结果汇总起来,因此最终\n",
  132. "\n",
  133. "&emsp; &emsp; 在调用`get_metric`方法时,`Accuracy`类能够返回全部的统计结果,代码如下\n",
  134. " \n",
  135. "```python\n",
  136. "trainer = Trainer(\n",
  137. " model=model, # model 基于 pytorch 实现 \n",
  138. " train_dataloader=train_dataloader,\n",
  139. " optimizers=optimizer,\n",
  140. " ...\n",
  141. " driver='torch', # driver 使用 torch_driver \n",
  142. " device=[0, 1], # gpu 选择 cuda:0 + cuda:1\n",
  143. " ...\n",
  144. " evaluate_dataloaders=evaluate_dataloader,\n",
  145. " metrics={'acc': Accuracy()},\n",
  146. " ...\n",
  147. " )\n",
  148. "\n",
  149. "class Accuracy(Metric):\n",
  150. " def __init__(self):\n",
  151. " super().__init__()\n",
  152. " self.register_element(name='total', value=0, aggregate_method='sum')\n",
  153. " self.register_element(name='right', value=0, aggregate_method='sum')\n",
  154. "```\n"
  155. ]
  156. },
  157. {
  158. "cell_type": "markdown",
  159. "id": "e2e0a210",
  160. "metadata": {
  161. "pycharm": {
  162. "name": "#%%\n"
  163. }
  164. },
  165. "source": [
  166. "注:`fastNLP v0.8`中要求`jupyter`不能多卡,仅能单卡,故在所有`tutorial`中均不作相关演示"
  167. ]
  168. },
  169. {
  170. "cell_type": "markdown",
  171. "id": "8d19220c",
  172. "metadata": {},
  173. "source": [
  174. "## 2. fastNLP 中的更多 metric 类型\n",
  175. "\n",
  176. "### 2.1 预定义的 metric 类型\n",
  177. "\n",
  178. "在`fastNLP 0.8`中,除了前几篇`tutorial`中经常见到的**正确率`Accuracy`**,还有其他**预定义的评测标准`metric`**\n",
  179. "\n",
  180. "&emsp; 包括**所有`metric`的基类`Metric`**、适配`Transformers`中相关模型的正确率`TransformersAccuracy`\n",
  181. "\n",
  182. "&emsp; &emsp; **适用于分类语境下的`F1`值`ClassifyFPreRecMetric`**(其中也包括召回率`Pre`、精确率`Rec`\n",
  183. "\n",
  184. "&emsp; &emsp; **适用于抽取语境下的`F1`值`SpanFPreRecMetric`**;相关基本信息内容见下表,之后是详细分析\n",
  185. "\n",
  186. "| <div align=\"center\">代码名称</div> | <div align=\"center\">简要介绍</div> | <div align=\"center\">代码路径</div> |\n",
  187. "|:--|:--|:--|\n",
  188. "| `Metric` | 定义`metrics`时继承的基类 | `/core/metrics/metric.py` |\n",
  189. "| `Accuracy` | 正确率,最为常用 | `/core/metrics/accuracy.py` |\n",
  190. "| `TransformersAccuracy` | 正确率,为了兼容`Transformers`中相关模型 | `/core/metrics/accuracy.py` |\n",
  191. "| `ClassifyFPreRecMetric` | 召回率、精确率、F1值,适用于**分类问题** | `/core/metrics/classify_f1_pre_rec_metric.py` |\n",
  192. "| `SpanFPreRecMetric` | 召回率、精确率、F1值,适用于**抽取问题** | `/core/metrics/span_f1_pre_rec_metric.py` |"
  193. ]
  194. },
  195. {
  196. "cell_type": "markdown",
  197. "id": "fdc083a3",
  198. "metadata": {
  199. "pycharm": {
  200. "name": "#%%\n"
  201. }
  202. },
  203. "source": [
  204. "&emsp; 如`tutorial-0`中所述,所有的`metric`都包含`get_metric`和`update`函数,其中\n",
  205. "\n",
  206. "&emsp; &emsp; **`update`函数更新单个`batch`的统计量**,**`get_metric`函数返回最终结果**,并打印显示\n",
  207. "\n",
  208. "\n",
  209. "### 2.1.1 Accuracy 与 TransformersAccuracy\n",
  210. "\n",
  211. "`Accuracy`,正确率,预测正确的数据`right_num`在总数据`total_num`,中的占比(公式就不用列了\n",
  212. "\n",
  213. "&emsp; `get_metric`函数打印格式为 **`{\"acc#xx\": float, 'total#xx': float, 'correct#xx': float}`**\n",
  214. "\n",
  215. "&emsp; 一般在初始化时不需要传参,`fastNLP`会根据`update`函数的传入参数确定对应后台框架`backend`\n",
  216. "\n",
  217. "&emsp; **`update`函数的参数包括`pred`、`target`、`seq_len`**,**后者用来标记批次中每笔数据的长度**\n",
  218. "\n",
  219. "`TransformersAccuracy`,继承自`Accuracy`,只是为了兼容`Transformers`框架中相关模型\n",
  220. "\n",
  221. "&emsp; 在`update`函数中,将`Transformers`框架输出的`attention_mask`参数转化为`seq_len`参数\n",
  222. "\n",
  223. "\n",
  224. "### 2.1.2 ClassifyFPreRecMetric 与 SpanFPreRecMetric\n",
  225. "\n",
  226. "`ClassifyFPreRecMetric`,分类评价,`SpanFPreRecMetric`,抽取评价,后者在`tutorial-4`中已出现\n",
  227. "\n",
  228. "&emsp; 两者的相同之处在于:**第一**,**都包括召回率/查全率`Rec`**、**精确率/查准率`Pre`**、**`F1`值**这三个指标\n",
  229. "\n",
  230. "&emsp; &emsp; `get_metric`函数打印格式为 **`{\"f#xx\": float, 'pre#xx': float, 'rec#xx': float}`**\n",
  231. "\n",
  232. "&emsp; &emsp; 三者的计算公式如下,其中`beta`默认为`1`,即`F1`值是召回率`Rec`和精确率`Pre`的调和平均数\n",
  233. "\n",
  234. "$$\\text{召回率}\\ Rec=\\dfrac{\\text{正确预测为正例的数量}}{\\text{所有本来是正例的数量}}\\qquad \\text{精确率}\\ Pre=\\dfrac{\\text{正确预测为正例的数量}}{\\text{所有预测为正例的数量}}$$\n",
  235. "\n",
  236. "$$F_{beta} = \\frac{(1 + {beta}^{2})*(Pre*Rec)}{({beta}^{2}*Pre + Rec)}$$\n",
  237. "\n",
  238. "&emsp; **第二**,可以通过参数`only_gross`为`False`,要求返回所有类别的`Rec-Pre-F1`,同时`F1`值又根据参数`f_type`又分为\n",
  239. "\n",
  240. "&emsp; &emsp; **`micro F1`**(**直接统计所有类别的`Rec-Pre-F1`**)、**`macro F1`**(**统计各类别的`Rec-Pre-F1`再算术平均**)\n",
  241. "\n",
  242. "&emsp; **第三**,两者在初始化时还可以**传入基于`fastNLP.Vocabulary`的`tag_vocab`参数记录数据集中的标签序号**\n",
  243. "\n",
  244. "&emsp; &emsp; **与标签名称之间的映射**,通过字符串列表`ignore_labels`参数,指定若干标签不用于`Rec-Pre-F1`的计算\n",
  245. "\n",
  246. "两者的不同之处在于:`ClassifyFPreRecMetric`针对简单的分类问题,每个分类标签之间彼此独立,不构成标签对\n",
  247. "\n",
  248. "&emsp; &emsp; **`SpanFPreRecMetric`针对更复杂的抽取问题**,**规定标签`B-xx`和`I-xx`或`B-xx`和`E-xx`构成标签对**\n",
  249. "\n",
  250. "&emsp; 在计算`Rec-Pre-F1`时,`ClassifyFPreRecMetric`只需要考虑标签本身是否正确这就足够了,但是\n",
  251. "\n",
  252. "&emsp; &emsp; 对于`SpanFPreRecMetric`,需要保证**标签符合规则且覆盖的区间与正确结果重合才算正确**\n",
  253. "\n",
  254. "&emsp; &emsp; 因此回到`tutorial-4`中`CoNLL-2003`的`NER`任务,如果评测方法选择`ClassifyFPreRecMetric`\n",
  255. "\n",
  256. "&emsp; &emsp; &emsp; 或者`Accuracy`,会发现虽然评测结果显示很高,这是因为选择的评测方法要求太低\n",
  257. "\n",
  258. "&emsp; &emsp; 最后通过`CoNLL-2003`的词性标注`POS`任务简单演示下`ClassifyFPreRecMetric`相关的使用\n",
  259. "\n",
  260. "```python\n",
  261. "from fastNLP import Vocabulary\n",
  262. "from fastNLP import ClassifyFPreRecMetric\n",
  263. "\n",
  264. "tag_vocab = Vocabulary(padding=None, unknown=None) # 记录序号与标签之间的映射\n",
  265. "tag_vocab.add_word_lst(['\"', \"''\", '#', '$', '(', ')', ',', '.', ':', '``', \n",
  266. " 'CC', 'CD', 'DT', 'EX', 'FW', 'IN', 'JJ', 'JJR', 'JJS', 'LS', \n",
  267. " 'MD', 'NN', 'NNP', 'NNPS', 'NNS', 'NN|SYM', 'PDT', 'POS', 'PRP', 'PRP$', \n",
  268. " 'RB', 'RBR', 'RBS', 'RP', 'SYM', 'TO', 'UH', 'VB', 'VBD', 'VBG', \n",
  269. " 'VBN', 'VBP', 'VBZ', 'WDT', 'WP', 'WP+', 'WRB', ]) # CoNLL-2003 中的 pos_tags\n",
  270. "ignore_labels = ['\"', \"''\", '#', '$', '(', ')', ',', '.', ':', '``', ]\n",
  271. "\n",
  272. "FPreRec = ClassifyFPreRecMetric(tag_vocab=tag_vocab, \n",
  273. " ignore_labels=ignore_labels, # 表示评测/优化中不考虑上述标签的正误/损失\n",
  274. " only_gross=True, # 默认为 True 表示输出所有类别的综合统计结果\n",
  275. " f_type='micro') # 默认为 'micro' 表示统计所有类别的 Rec-Pre-F1\n",
  276. "metrics = {'F1': FPreRec}\n",
  277. "```"
  278. ]
  279. },
  280. {
  281. "cell_type": "markdown",
  282. "id": "8a22f522",
  283. "metadata": {},
  284. "source": [
  285. "### 2.2 自定义的 metric 类型\n",
  286. "\n",
  287. "如上文所述,`Metric`作为所有`metric`的基类,`Accuracy`等都是其子类,同样地,对于**自定义的`metric`类型**\n",
  288. "\n",
  289. "&emsp; &emsp; 也**需要继承自`Metric`类**,同时**内部自定义好`__init__`、`update`和`get_metric`函数**\n",
  290. "\n",
  291. "&emsp; 在`__init__`函数中,根据需求定义评测时需要用到的变量,此处沿用`Accuracy`中的`total_num`和`right_num`\n",
  292. "\n",
  293. "&emsp; 在`update`函数中,根据需求定义评测变量的更新方式,需要注意的是如`tutorial-0`中所述,**`update`的参数名**\n",
  294. "\n",
  295. "&emsp; &emsp; **需要待评估模型在`evaluate_step`中的输出名称一致**,由此**和数据集中对应字段名称一致**,即**参数匹配**\n",
  296. "\n",
  297. "&emsp; &emsp; 在`fastNLP v0.8`中,`update`函数的默认输入参数:`pred`,对应预测值;`target`,对应真实值\n",
  298. "\n",
  299. "&emsp; &emsp; 此处仍然沿用,因为接下来会需要使用`fastNLP`函数的与定义模型,其输入参数格式即使如此\n",
  300. "\n",
  301. "&emsp; 在`get_metric`函数中,根据需求定义评测指标最终的计算,此处直接计算准确率,该函数必须返回一个字典\n",
  302. "\n",
  303. "&emsp; &emsp; 其中,字串`'prefix'`表示该`metric`的名称,会对应显示到`trainer`的`progress bar`中\n",
  304. "\n",
  305. "根据上述要求,这里简单定义了一个名为`MyMetric`的评测模块,用于分类问题的评测,以此展开一个实例展示"
  306. ]
  307. },
  308. {
  309. "cell_type": "code",
  310. "execution_count": 1,
  311. "id": "08a872e9",
  312. "metadata": {},
  313. "outputs": [
  314. {
  315. "data": {
  316. "text/html": [
  317. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  318. "</pre>\n"
  319. ],
  320. "text/plain": [
  321. "\n"
  322. ]
  323. },
  324. "metadata": {},
  325. "output_type": "display_data"
  326. }
  327. ],
  328. "source": [
  329. "import sys\n",
  330. "sys.path.append('..')\n",
  331. "\n",
  332. "from fastNLP import Metric\n",
  333. "\n",
  334. "class MyMetric(Metric):\n",
  335. "\n",
  336. " def __init__(self):\n",
  337. " Metric.__init__(self)\n",
  338. " self.total_num = 0\n",
  339. " self.right_num = 0\n",
  340. "\n",
  341. " def update(self, pred, target):\n",
  342. " self.total_num += target.size(0)\n",
  343. " self.right_num += target.eq(pred).sum().item()\n",
  344. "\n",
  345. " def get_metric(self, reset=True):\n",
  346. " acc = self.right_num / self.total_num\n",
  347. " if reset:\n",
  348. " self.total_num = 0\n",
  349. " self.right_num = 0\n",
  350. " return {'prefix': acc}"
  351. ]
  352. },
  353. {
  354. "cell_type": "markdown",
  355. "id": "0155f447",
  356. "metadata": {},
  357. "source": [
  358. "&emsp; 数据使用方面,此处仍然使用`datasets`模块中的`load_dataset`函数,加载`SST-2`二分类数据集"
  359. ]
  360. },
  361. {
  362. "cell_type": "code",
  363. "execution_count": 2,
  364. "id": "5ad81ac7",
  365. "metadata": {
  366. "pycharm": {
  367. "name": "#%%\n"
  368. }
  369. },
  370. "outputs": [
  371. {
  372. "name": "stderr",
  373. "output_type": "stream",
  374. "text": [
  375. "Reusing dataset glue (/remote-home/xrliu/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n"
  376. ]
  377. },
  378. {
  379. "data": {
  380. "application/vnd.jupyter.widget-view+json": {
  381. "model_id": "ef923b90b19847f4916cccda5d33fc36",
  382. "version_major": 2,
  383. "version_minor": 0
  384. },
  385. "text/plain": [
  386. " 0%| | 0/3 [00:00<?, ?it/s]"
  387. ]
  388. },
  389. "metadata": {},
  390. "output_type": "display_data"
  391. }
  392. ],
  393. "source": [
  394. "from datasets import load_dataset\n",
  395. "\n",
  396. "sst2data = load_dataset('glue', 'sst2')"
  397. ]
  398. },
  399. {
  400. "cell_type": "markdown",
  401. "id": "e9d81760",
  402. "metadata": {},
  403. "source": [
  404. "&emsp; 在数据预处理中,需要注意的是,这里原本应该根据`metric`和`model`的输入参数格式,调整\n",
  405. "\n",
  406. "&emsp; &emsp; 数据集中表示预测目标的字段,调整为`target`,在后文中会揭晓为什么,以及如何补救"
  407. ]
  408. },
  409. {
  410. "cell_type": "code",
  411. "execution_count": 3,
  412. "id": "cfb28b1b",
  413. "metadata": {
  414. "pycharm": {
  415. "name": "#%%\n"
  416. }
  417. },
  418. "outputs": [
  419. {
  420. "data": {
  421. "application/vnd.jupyter.widget-view+json": {
  422. "model_id": "",
  423. "version_major": 2,
  424. "version_minor": 0
  425. },
  426. "text/plain": [
  427. "Processing: 0%| | 0/6000 [00:00<?, ?it/s]"
  428. ]
  429. },
  430. "metadata": {},
  431. "output_type": "display_data"
  432. }
  433. ],
  434. "source": [
  435. "from fastNLP import DataSet\n",
  436. "\n",
  437. "dataset = DataSet.from_pandas(sst2data['train'].to_pandas())[:6000]\n",
  438. "\n",
  439. "dataset.apply_more(lambda ins:{'words': ins['sentence'].lower().split()}, progress_bar=\"tqdm\")\n",
  440. "dataset.delete_field('sentence')\n",
  441. "dataset.delete_field('idx')\n",
  442. "\n",
  443. "from fastNLP import Vocabulary\n",
  444. "\n",
  445. "vocab = Vocabulary()\n",
  446. "vocab.from_dataset(dataset, field_name='words')\n",
  447. "vocab.index_dataset(dataset, field_name='words')\n",
  448. "\n",
  449. "train_dataset, evaluate_dataset = dataset.split(ratio=0.85)\n",
  450. "\n",
  451. "from fastNLP import prepare_torch_dataloader\n",
  452. "\n",
  453. "train_dataloader = prepare_torch_dataloader(train_dataset, batch_size=16, shuffle=True)\n",
  454. "evaluate_dataloader = prepare_torch_dataloader(evaluate_dataset, batch_size=16)"
  455. ]
  456. },
  457. {
  458. "cell_type": "markdown",
  459. "id": "af3f8c63",
  460. "metadata": {},
  461. "source": [
  462. "&emsp; 模型使用方面,此处仍然使用`tutorial-4`中介绍过的预定义`CNNText`模型,实现`SST-2`二分类"
  463. ]
  464. },
  465. {
  466. "cell_type": "code",
  467. "execution_count": 4,
  468. "id": "2fd210c5",
  469. "metadata": {},
  470. "outputs": [],
  471. "source": [
  472. "from fastNLP.models.torch import CNNText\n",
  473. "\n",
  474. "model = CNNText(embed=(len(vocab), 100), num_classes=2, dropout=0.1)\n",
  475. "\n",
  476. "from torch.optim import AdamW\n",
  477. "\n",
  478. "optimizers = AdamW(params=model.parameters(), lr=5e-4)"
  479. ]
  480. },
  481. {
  482. "cell_type": "markdown",
  483. "id": "6e723b87",
  484. "metadata": {},
  485. "source": [
  486. "## 3. fastNLP 中 trainer 的补充介绍\n",
  487. "\n",
  488. "### 3.1 trainer 的内部结构\n",
  489. "\n",
  490. "在`tutorial-0`中,我们已经介绍了`trainer`的基本使用,从`tutorial-1`到`tutorial-4`,我们也已经展示了\n",
  491. "\n",
  492. "&emsp; 很多`trainer`的使用案例,这里通过表格,相对完整地介绍`trainer`模块的属性和初始化参数(标粗为必选参数\n",
  493. "\n",
  494. "| <div align=\"center\">名称</div> | <div align=\"center\">参数</div> | <div align=\"center\">属性</div> | <div align=\"center\">功能</div> | <div align=\"center\">内容</div> |\n",
  495. "|:--|:--:|:--:|:--|:--|\n",
  496. "| **`model`** | √ | √ | 指定`trainer`控制的模型 | 视框架而定,如`torch.nn.Module` |\n",
  497. "| `device` | √ | | 指定`trainer`运行的卡位 | 例如`'cpu'`、`'cuda'`、`0`、`[0, 1]`等 |\n",
  498. "| | | √ | 记录`trainer`运行的卡位 | `Device`类型,在初始化阶段生成 |\n",
  499. "| **`driver`** | √ | | 指定`trainer`驱动的框架 | 包括`'torch'`、`'paddle'`、`'jittor'` |\n",
  500. "| | | √ | 记录`trainer`驱动的框架 | `Driver`类型,在初始化阶段生成 |\n",
  501. "| `n_epochs` | √ | - | 指定`trainer`迭代的轮数 | 默认`20`,记录在`driver.n_epochs`中 |\n",
  502. "| **`optimizers`** | √ | √ | 指定`trainer`优化的方法 | 视框架而定,如`torch.optim.Adam` |\n",
  503. "| `metrics` | √ | √ | 指定`trainer`评测的方法 | 字典类型,如`{'acc': Metric()}` |\n",
  504. "| `evaluator` | | √ | 内置的`trainer`评测模块 | `Evaluator`类型,在初始化阶段生成 |\n",
  505. "| `input_mapping` | √ | √ | 调整`dataloader`的参数不匹配 | 函数类型,输出字典匹配`forward`输入参数 |\n",
  506. "| `output_mapping` | √ | √ | 调整`forward`输出的参数不匹配 | 函数类型,输出字典匹配`xx_step`输入参数 |\n",
  507. "| **`train_dataloader`** | √ | √ | 指定`trainer`训练的数据 | `DataLoader`类型,生成视框架而定 |\n",
  508. "| `evaluate_dataloaders` | √ | √ | 指定`trainer`评测的数据 | `DataLoader`类型,生成视框架而定 |\n",
  509. "| `train_fn` | √ | √ | 指定`trainer`获取某个批次的损失值 | 函数类型,默认为`model.train_step` |\n",
  510. "| `evaluate_fn` | √ | √ | 指定`trainer`获取某个批次的评估量 | 函数类型,默认为`model.evaluate_step` |\n",
  511. "| `batch_step_fn` | √ | √ | 指定`trainer`训练时前向传输一个批次的方式 | 函数类型,默认为`TrainBatchLoop.batch_step_fn` |\n",
  512. "| `evaluate_batch_step_fn` | √ | √ | 指定`trainer`评测时前向传输一个批次的方式 | 函数类型,默认为`EvaluateBatchLoop.batch_step_fn` |\n",
  513. "| `accumulation_steps` | √ | √ | 指定`trainer`训练时反向传播的频率 | 默认为`1`,即每个批次都反向传播 |\n",
  514. "| `evaluate_every` | √ | √ | 指定`evaluator`评测时计算的频率 | 默认`-1`表示每个循环一次,相反`1`表示每个批次一次 |\n",
  515. "| `progress_bar` | √ | √ | 指定`trainer`训练和评测时的进度条样式 | 包括`'auto'`、`'tqdm'`、`'raw'`、`'rich'` |\n",
  516. "| `callbacks` | √ | | 指定`trainer`训练时需要触发的函数 | `Callback`列表类型,详见`tutorial-7` |\n",
  517. "| `callback_manager` | | √ | 记录与管理`callbacks`相关内容 | `CallbackManager`类型,详见`tutorial-7` |\n",
  518. "| `monitor` | √ | √ | 辅助部分的`callbacks`相关内容 | 字符串/函数类型,详见`tutorial-7` |\n",
  519. "| `marker` | √ | √ | 标记`trainer`实例,辅助`callbacks`相关内容 | 字符串型,详见`tutorial-7` |\n",
  520. "| `trainer_state` | | √ | 记录`trainer`状态,辅助`callbacks`相关内容 | `TrainerState`类型,详见`tutorial-7` |\n",
  521. "| `state` | | √ | 记录`trainer`状态,辅助`callbacks`相关内容 | `State`类型,详见`tutorial-7` |\n",
  522. "| `fp16` | √ | √ | 指定`trainer`是否进行混合精度训练 | 布尔类型,默认`False` |"
  523. ]
  524. },
  525. {
  526. "cell_type": "markdown",
  527. "id": "9e13ee08",
  528. "metadata": {},
  529. "source": [
  530. "其中,**`input_mapping`和`output_mapping`** 定义形式如下:输入字典形式的数据,根据参数匹配要求\n",
  531. "\n",
  532. "&emsp; 调整数据格式,这里就回应了前文未在数据集预处理时调整格式的问题,**总之参数匹配一定要求**"
  533. ]
  534. },
  535. {
  536. "cell_type": "code",
  537. "execution_count": 5,
  538. "id": "de96c1d1",
  539. "metadata": {},
  540. "outputs": [],
  541. "source": [
  542. "def input_mapping(data):\n",
  543. " data['target'] = data['label']\n",
  544. " return data"
  545. ]
  546. },
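{
"cell_type": "markdown",
"id": "7b8e61aa",
"metadata": {},
"source": [
"&emsp; 类似地,`output_mapping`同样是输入字典、输出字典;下面是一个假设性的示意,\n",
"\n",
"&emsp; &emsp; 假设模型输出字典中以`'logits'`为键,而`metric`需要的键是`'pred'`(键名仅为举例)\n",
"\n",
"```python\n",
"# 假设性示意:把模型输出字典中的 'logits' 重命名为 metric 需要的 'pred'\n",
"def output_mapping(outputs):\n",
"    outputs['pred'] = outputs.pop('logits')\n",
"    return outputs\n",
"```"
]
},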
  547. {
  548. "cell_type": "markdown",
  549. "id": "2fc8b9f3",
  550. "metadata": {},
  551. "source": [
  552. "&emsp; 而`trainer`模块的基础方法列表如下,相关进阶操作,如“`on`系列函数”、`callback`控制,请参考后续的`tutorial-7`\n",
  553. "\n",
  554. "| <div align=\"center\">名称</div> |<div align=\"center\">功能</div> | <div align=\"center\">主要参数</div> |\n",
  555. "|:--|:--|:--|\n",
  556. "| `run` | 控制`trainer`中模型的训练和评测 | 详见后文 |\n",
  557. "| `train_step` | 实现`trainer`训练中一个批数据的前向传播过程 | 输入`batch` |\n",
  558. "| `backward` | 实现`trainer`训练中一次损失的反向传播过程 | 输入`output` |\n",
  559. "| `zero_grad` | 实现`trainer`训练中`optimizers`的梯度置零 | 无输入 |\n",
  560. "| `step` | 实现`trainer`训练中`optimizers`的参数更新 | 无输入 |\n",
  561. "| `epoch_evaluate` | 实现`trainer`训练中每个循环的评测,实际是否执行取决于评测频率 | 无输入 |\n",
  562. "| `step_evaluate` | 实现`trainer`训练中每个批次的评测,实际是否执行取决于评测频率 | 无输入 |\n",
  563. "| `save_model` | 保存`trainer`中的模型参数/状态字典至`fastnlp_model.pkl.tar` | `folder`指明路径,`only_state_dict`指明是否只保存状态字典,默认`False` |\n",
  564. "| `load_model` | 加载`trainer`中的模型参数/状态字典自`fastnlp_model.pkl.tar` | `folder`指明路径,`only_state_dict`指明是否只加载状态字典,默认`True` |\n",
  565. "| `save_checkpoint` | <div style=\"line-height:25px;\">保存`trainer`中模型参数/状态字典 以及 `callback`、`sampler`<br>和`optimizer`的状态至`fastnlp_model/checkpoint.pkl.tar`</div> | `folder`指明路径,`only_state_dict`指明是否只保存状态字典,默认`True` |\n",
  566. "| `load_checkpoint` | <div style=\"line-height:25px;\">加载`trainer`中模型参数/状态字典 以及 `callback`、`sampler`<br>和`optimizer`的状态自`fastnlp_model/checkpoint.pkl.tar`</div> | <div style=\"line-height:25px;\">`folder`指明路径,`only_state_dict`指明是否只保存状态字典,默认`True`<br>`resume_training`指明是否只精确到上次训练的批量,默认`True`</div> |\n",
  567. "| `add_callback_fn` | 在`trainer`初始化后添加`callback`函数 | 输入`event`指明回调时机,`fn`指明回调函数 |\n",
  568. "| `on` | 函数修饰器,将一个函数转变为`callback`函数 | 详见`tutorial-7` |\n",
  569. "\n",
  570. "<!-- ```python\n",
  571. "Trainer.__init__():\n",
  572. "\ton_after_trainer_initialized(trainer, driver)\n",
  573. "Trainer.run():\n",
  574. "\tif num_eval_sanity_batch > 0: # 如果设置了 num_eval_sanity_batch\n",
  575. "\t\ton_sanity_check_begin(trainer)\n",
  576. "\t\ton_sanity_check_end(trainer, sanity_check_res)\n",
  577. "\ttry:\n",
  578. "\t\ton_train_begin(trainer)\n",
  579. "\t\twhile cur_epoch_idx < n_epochs:\n",
  580. "\t\t\ton_train_epoch_begin(trainer)\n",
  581. "\t\t\twhile batch_idx_in_epoch<=num_batches_per_epoch:\n",
  582. "\t\t\t\ton_fetch_data_begin(trainer)\n",
  583. "\t\t\t\tbatch = next(dataloader)\n",
  584. "\t\t\t\ton_fetch_data_end(trainer)\n",
  585. "\t\t\t\ton_train_batch_begin(trainer, batch, indices)\n",
  586. "\t\t\t\ton_before_backward(trainer, outputs) # 其中 outputs 是经过 output_mapping 后的\n",
  587. "\t\t\t\ton_after_backward(trainer)\n",
  588. "\t\t\t\ton_before_zero_grad(trainer, optimizers) # 实际调用受到 accumulation_steps 影响\n",
  589. "\t\t\t\ton_after_zero_grad(trainer, optimizers) # 实际调用受到 accumulation_steps 影响\n",
  590. "\t\t\t\ton_before_optimizers_step(trainer, optimizers) # 实际调用受到 accumulation_steps 影响\n",
  591. "\t\t\t\ton_after_optimizers_step(trainer, optimizers) # 实际调用受到 accumulation_steps 影响\n",
  592. "\t\t\t\ton_train_batch_end(trainer)\n",
  593. "\t\t\ton_train_epoch_end(trainer)\n",
  594. "\texcept BaseException:\n",
  595. "\t\tself.on_exception(trainer, exception)\n",
  596. "\tfinally:\n",
  597. "\t\ton_train_end(trainer)\n",
  598. "``` -->"
  599. ]
  600. },
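{
"cell_type": "markdown",
"id": "9d4c27be",
"metadata": {},
"source": [
"&emsp; 以`save_model`/`load_model`为例,其调用方式大致如下(示意代码,假设`trainer`已初始化完成、`./ckpt`文件夹可写,参数含义以上表为准)\n",
"\n",
"```python\n",
"# 示意:保存与加载模型参数,参数名来自上表\n",
"trainer.save_model(folder='./ckpt', only_state_dict=True)   # 保存至 ./ckpt 下的 fastnlp_model.pkl.tar\n",
"trainer.load_model(folder='./ckpt', only_state_dict=True)   # 从同一路径加载\n",
"```"
]
},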
  601. {
  602. "cell_type": "markdown",
  603. "id": "1e21df35",
  604. "metadata": {},
  605. "source": [
  606. "紧接着,初始化`trainer`实例,继续完成`SST-2`分类,其中`metrics`输入的键值对,字串`'suffix'`和之前定义的\n",
  607. "\n",
  608. "&emsp; 字串`'prefix'`将拼接在一起显示到`progress bar`中,故完整的输出形式为`{'prefix#suffix': float}`"
  609. ]
  610. },
  611. {
  612. "cell_type": "code",
  613. "execution_count": 6,
  614. "id": "926a9c50",
  615. "metadata": {},
  616. "outputs": [],
  617. "source": [
  618. "from fastNLP import Trainer\n",
  619. "\n",
  620. "trainer = Trainer(\n",
  621. " model=model,\n",
  622. " driver='torch',\n",
  623. " device=0, # 'cuda'\n",
  624. " n_epochs=10,\n",
  625. " optimizers=optimizers,\n",
  626. " input_mapping=input_mapping,\n",
  627. " train_dataloader=train_dataloader,\n",
  628. " evaluate_dataloaders=evaluate_dataloader,\n",
  629. " metrics={'suffix': MyMetric()}\n",
  630. ")"
  631. ]
  632. },
  633. {
  634. "cell_type": "markdown",
  635. "id": "b1b2e8b7",
  636. "metadata": {
  637. "pycharm": {
  638. "name": "#%%\n"
  639. }
  640. },
  641. "source": [
  642. "最后就是`run`函数的使用,关于其参数,这里也以表格形式列出,由此就解答了`num_eval_batch_per_dl=10`的含义\n",
  643. "\n",
  644. "| <div align=\"center\">名称</div> | <div align=\"center\">功能</div> | <div align=\"center\">默认值</div> |\n",
  645. "|:--|:--|:--|\n",
  646. "| `num_train_batch_per_epoch` | 指定`trainer`训练时,每个循环计算批量数目 | 整数类型,默认`-1`,表示训练时,每个循环计算所有批量 |\n",
  647. "| `num_eval_batch_per_dl` | 指定`trainer`评测时,每个循环计算批量数目 | 整数类型,默认`-1`,表示评测时,每个循环计算所有批量 |\n",
  648. "| `num_eval_sanity_batch` | 指定`trainer`训练开始前,试探性评测批量数目 | 整数类型,默认`2`,表示训练开始前评估两个批量 |\n",
  649. "| `resume_from` | 指定`trainer`恢复状态的路径,需要是文件夹 | 字符串型,默认`None`,使用可参考`CheckpointCallback` |\n",
  650. "| `resume_training` | 指定`trainer`恢复状态的程度 | 布尔类型,默认`True`恢复所有状态,`False`仅恢复`model`和`optimizers`状态 |"
  651. ]
  652. },
  653. {
  654. "cell_type": "code",
  655. "execution_count": 7,
  656. "id": "43be274f",
  657. "metadata": {
  658. "pycharm": {
  659. "name": "#%%\n"
  660. }
  661. },
  662. "outputs": [
  663. {
  664. "data": {
  665. "text/html": [
  666. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[09:30:35] </span><span style=\"color: #000080; text-decoration-color: #000080\">INFO </span> Running evaluator sanity check for <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span> batches. <a href=\"file://../fastNLP/core/controllers/trainer.py\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">trainer.py</span></a><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">:</span><a href=\"file://../fastNLP/core/controllers/trainer.py#596\"><span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">596</span></a>\n",
  667. "</pre>\n"
  668. ],
  669. "text/plain": [
  670. "\u001b[2;36m[09:30:35]\u001b[0m\u001b[2;36m \u001b[0m\u001b[34mINFO \u001b[0m Running evaluator sanity check for \u001b[1;36m2\u001b[0m batches. \u001b]8;id=954293;file://../fastNLP/core/controllers/trainer.py\u001b\\\u001b[2mtrainer.py\u001b[0m\u001b]8;;\u001b\\\u001b[2m:\u001b[0m\u001b]8;id=366534;file://../fastNLP/core/controllers/trainer.py#596\u001b\\\u001b[2m596\u001b[0m\u001b]8;;\u001b\\\n"
  671. ]
  672. },
  673. "metadata": {},
  674. "output_type": "display_data"
  675. },
  676. {
  677. "data": {
  678. "application/vnd.jupyter.widget-view+json": {
  679. "model_id": "",
  680. "version_major": 2,
  681. "version_minor": 0
  682. },
  683. "text/plain": [
  684. "Output()"
  685. ]
  686. },
  687. "metadata": {},
  688. "output_type": "display_data"
  689. },
  690. {
  691. "data": {
  692. "text/html": [
  693. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">/remote-home/xrliu/anaconda3/envs/demo/lib/python3.7/site-packages/ipywidgets/widgets/widget_\n",
  694. "output.py:111: DeprecationWarning: Kernel._parent_header is deprecated in ipykernel 6. Use \n",
  695. ".get_parent()\n",
  696. " if ip and hasattr(ip, 'kernel') and hasattr(ip.kernel, '_parent_header'):\n",
  697. "</pre>\n"
  698. ],
  699. "text/plain": [
  700. "/remote-home/xrliu/anaconda3/envs/demo/lib/python3.7/site-packages/ipywidgets/widgets/widget_\n",
  701. "output.py:111: DeprecationWarning: Kernel._parent_header is deprecated in ipykernel 6. Use \n",
  702. ".get_parent()\n",
  703. " if ip and hasattr(ip, 'kernel') and hasattr(ip.kernel, '_parent_header'):\n"
  704. ]
  705. },
  706. "metadata": {},
  707. "output_type": "display_data"
  708. },
  709. {
  710. "data": {
  711. "text/html": [
  712. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">/remote-home/xrliu/anaconda3/envs/demo/lib/python3.7/site-packages/ipywidgets/widgets/widget_\n",
  713. "output.py:112: DeprecationWarning: Kernel._parent_header is deprecated in ipykernel 6. Use \n",
  714. ".get_parent()\n",
  715. " self.msg_id = ip.kernel._parent_header['header']['msg_id']\n",
  716. "</pre>\n"
  717. ],
  718. "text/plain": [
  719. "/remote-home/xrliu/anaconda3/envs/demo/lib/python3.7/site-packages/ipywidgets/widgets/widget_\n",
  720. "output.py:112: DeprecationWarning: Kernel._parent_header is deprecated in ipykernel 6. Use \n",
  721. ".get_parent()\n",
  722. " self.msg_id = ip.kernel._parent_header['header']['msg_id']\n"
  723. ]
  724. },
  725. "metadata": {},
  726. "output_type": "display_data"
  727. },
  728. {
  729. "data": {
  730. "text/html": [
  731. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
  732. ],
  733. "text/plain": []
  734. },
  735. "metadata": {},
  736. "output_type": "display_data"
  737. },
  738. {
  739. "data": {
  740. "application/vnd.jupyter.widget-view+json": {
  741. "model_id": "",
  742. "version_major": 2,
  743. "version_minor": 0
  744. },
  745. "text/plain": [
  746. "Output()"
  747. ]
  748. },
  749. "metadata": {},
  750. "output_type": "display_data"
  751. },
  752. {
  753. "data": {
  754. "text/html": [
  755. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  756. "</pre>\n"
  757. ],
  758. "text/plain": [
  759. "\n"
  760. ]
  761. },
  762. "metadata": {},
  763. "output_type": "display_data"
  764. },
  765. {
  766. "data": {
  767. "text/html": [
  768. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  769. "</pre>\n"
  770. ],
  771. "text/plain": [
  772. "----------------------------- Eval. results on Epoch:\u001b[1;36m1\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  773. ]
  774. },
  775. "metadata": {},
  776. "output_type": "display_data"
  777. },
  778. {
  779. "data": {
  780. "text/html": [
  781. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  782. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.6875</span>\n",
  783. "<span style=\"font-weight: bold\">}</span>\n",
  784. "</pre>\n"
  785. ],
  786. "text/plain": [
  787. "\u001b[1m{\u001b[0m\n",
  788. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.6875\u001b[0m\n",
  789. "\u001b[1m}\u001b[0m\n"
  790. ]
  791. },
  792. "metadata": {},
  793. "output_type": "display_data"
  794. },
  795. {
  796. "data": {
  797. "text/html": [
  798. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  799. "</pre>\n"
  800. ],
  801. "text/plain": [
  802. "\n"
  803. ]
  804. },
  805. "metadata": {},
  806. "output_type": "display_data"
  807. },
  808. {
  809. "data": {
  810. "text/html": [
  811. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  812. "</pre>\n"
  813. ],
  814. "text/plain": [
  815. "----------------------------- Eval. results on Epoch:\u001b[1;36m2\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  816. ]
  817. },
  818. "metadata": {},
  819. "output_type": "display_data"
  820. },
  821. {
  822. "data": {
  823. "text/html": [
  824. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  825. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.8125</span>\n",
  826. "<span style=\"font-weight: bold\">}</span>\n",
  827. "</pre>\n"
  828. ],
  829. "text/plain": [
  830. "\u001b[1m{\u001b[0m\n",
  831. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.8125\u001b[0m\n",
  832. "\u001b[1m}\u001b[0m\n"
  833. ]
  834. },
  835. "metadata": {},
  836. "output_type": "display_data"
  837. },
  838. {
  839. "data": {
  840. "text/html": [
  841. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  842. "</pre>\n"
  843. ],
  844. "text/plain": [
  845. "\n"
  846. ]
  847. },
  848. "metadata": {},
  849. "output_type": "display_data"
  850. },
  851. {
  852. "data": {
  853. "text/html": [
  854. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">3</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  855. "</pre>\n"
  856. ],
  857. "text/plain": [
  858. "----------------------------- Eval. results on Epoch:\u001b[1;36m3\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  859. ]
  860. },
  861. "metadata": {},
  862. "output_type": "display_data"
  863. },
  864. {
  865. "data": {
  866. "text/html": [
  867. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  868. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.80625</span>\n",
  869. "<span style=\"font-weight: bold\">}</span>\n",
  870. "</pre>\n"
  871. ],
  872. "text/plain": [
  873. "\u001b[1m{\u001b[0m\n",
  874. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.80625\u001b[0m\n",
  875. "\u001b[1m}\u001b[0m\n"
  876. ]
  877. },
  878. "metadata": {},
  879. "output_type": "display_data"
  880. },
  881. {
  882. "data": {
  883. "text/html": [
  884. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  885. "</pre>\n"
  886. ],
  887. "text/plain": [
  888. "\n"
  889. ]
  890. },
  891. "metadata": {},
  892. "output_type": "display_data"
  893. },
  894. {
  895. "data": {
  896. "text/html": [
  897. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">4</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  898. "</pre>\n"
  899. ],
  900. "text/plain": [
  901. "----------------------------- Eval. results on Epoch:\u001b[1;36m4\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  902. ]
  903. },
  904. "metadata": {},
  905. "output_type": "display_data"
  906. },
  907. {
  908. "data": {
  909. "text/html": [
  910. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  911. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.825</span>\n",
  912. "<span style=\"font-weight: bold\">}</span>\n",
  913. "</pre>\n"
  914. ],
  915. "text/plain": [
  916. "\u001b[1m{\u001b[0m\n",
  917. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.825\u001b[0m\n",
  918. "\u001b[1m}\u001b[0m\n"
  919. ]
  920. },
  921. "metadata": {},
  922. "output_type": "display_data"
  923. },
  924. {
  925. "data": {
  926. "text/html": [
  927. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  928. "</pre>\n"
  929. ],
  930. "text/plain": [
  931. "\n"
  932. ]
  933. },
  934. "metadata": {},
  935. "output_type": "display_data"
  936. },
  937. {
  938. "data": {
  939. "text/html": [
  940. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">5</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  941. "</pre>\n"
  942. ],
  943. "text/plain": [
  944. "----------------------------- Eval. results on Epoch:\u001b[1;36m5\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  945. ]
  946. },
  947. "metadata": {},
  948. "output_type": "display_data"
  949. },
  950. {
  951. "data": {
  952. "text/html": [
  953. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  954. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.8125</span>\n",
  955. "<span style=\"font-weight: bold\">}</span>\n",
  956. "</pre>\n"
  957. ],
  958. "text/plain": [
  959. "\u001b[1m{\u001b[0m\n",
  960. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.8125\u001b[0m\n",
  961. "\u001b[1m}\u001b[0m\n"
  962. ]
  963. },
  964. "metadata": {},
  965. "output_type": "display_data"
  966. },
  967. {
  968. "data": {
  969. "text/html": [
  970. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  971. "</pre>\n"
  972. ],
  973. "text/plain": [
  974. "\n"
  975. ]
  976. },
  977. "metadata": {},
  978. "output_type": "display_data"
  979. },
  980. {
  981. "data": {
  982. "text/html": [
  983. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">6</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  984. "</pre>\n"
  985. ],
  986. "text/plain": [
  987. "----------------------------- Eval. results on Epoch:\u001b[1;36m6\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  988. ]
  989. },
  990. "metadata": {},
  991. "output_type": "display_data"
  992. },
  993. {
  994. "data": {
  995. "text/html": [
  996. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  997. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.80625</span>\n",
  998. "<span style=\"font-weight: bold\">}</span>\n",
  999. "</pre>\n"
  1000. ],
  1001. "text/plain": [
  1002. "\u001b[1m{\u001b[0m\n",
  1003. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.80625\u001b[0m\n",
  1004. "\u001b[1m}\u001b[0m\n"
  1005. ]
  1006. },
  1007. "metadata": {},
  1008. "output_type": "display_data"
  1009. },
  1010. {
  1011. "data": {
  1012. "text/html": [
  1013. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  1014. "</pre>\n"
  1015. ],
  1016. "text/plain": [
  1017. "\n"
  1018. ]
  1019. },
  1020. "metadata": {},
  1021. "output_type": "display_data"
  1022. },
  1023. {
  1024. "data": {
  1025. "text/html": [
  1026. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">7</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  1027. "</pre>\n"
  1028. ],
  1029. "text/plain": [
  1030. "----------------------------- Eval. results on Epoch:\u001b[1;36m7\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  1031. ]
  1032. },
  1033. "metadata": {},
  1034. "output_type": "display_data"
  1035. },
  1036. {
  1037. "data": {
  1038. "text/html": [
  1039. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  1040. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.80625</span>\n",
  1041. "<span style=\"font-weight: bold\">}</span>\n",
  1042. "</pre>\n"
  1043. ],
  1044. "text/plain": [
  1045. "\u001b[1m{\u001b[0m\n",
  1046. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.80625\u001b[0m\n",
  1047. "\u001b[1m}\u001b[0m\n"
  1048. ]
  1049. },
  1050. "metadata": {},
  1051. "output_type": "display_data"
  1052. },
  1053. {
  1054. "data": {
  1055. "text/html": [
  1056. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  1057. "</pre>\n"
  1058. ],
  1059. "text/plain": [
  1060. "\n"
  1061. ]
  1062. },
  1063. "metadata": {},
  1064. "output_type": "display_data"
  1065. },
  1066. {
  1067. "data": {
  1068. "text/html": [
  1069. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">8</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  1070. "</pre>\n"
  1071. ],
  1072. "text/plain": [
  1073. "----------------------------- Eval. results on Epoch:\u001b[1;36m8\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  1074. ]
  1075. },
  1076. "metadata": {},
  1077. "output_type": "display_data"
  1078. },
  1079. {
  1080. "data": {
  1081. "text/html": [
  1082. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  1083. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.8</span>\n",
  1084. "<span style=\"font-weight: bold\">}</span>\n",
  1085. "</pre>\n"
  1086. ],
  1087. "text/plain": [
  1088. "\u001b[1m{\u001b[0m\n",
  1089. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.8\u001b[0m\n",
  1090. "\u001b[1m}\u001b[0m\n"
  1091. ]
  1092. },
  1093. "metadata": {},
  1094. "output_type": "display_data"
  1095. },
  1096. {
  1097. "data": {
  1098. "text/html": [
  1099. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  1100. "</pre>\n"
  1101. ],
  1102. "text/plain": [
  1103. "\n"
  1104. ]
  1105. },
  1106. "metadata": {},
  1107. "output_type": "display_data"
  1108. },
  1109. {
  1110. "data": {
  1111. "text/html": [
  1112. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">----------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">9</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  1113. "</pre>\n"
  1114. ],
  1115. "text/plain": [
  1116. "----------------------------- Eval. results on Epoch:\u001b[1;36m9\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  1117. ]
  1118. },
  1119. "metadata": {},
  1120. "output_type": "display_data"
  1121. },
  1122. {
  1123. "data": {
  1124. "text/html": [
  1125. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  1126. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.80625</span>\n",
  1127. "<span style=\"font-weight: bold\">}</span>\n",
  1128. "</pre>\n"
  1129. ],
  1130. "text/plain": [
  1131. "\u001b[1m{\u001b[0m\n",
  1132. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.80625\u001b[0m\n",
  1133. "\u001b[1m}\u001b[0m\n"
  1134. ]
  1135. },
  1136. "metadata": {},
  1137. "output_type": "display_data"
  1138. },
  1139. {
  1140. "data": {
  1141. "text/html": [
  1142. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  1143. "</pre>\n"
  1144. ],
  1145. "text/plain": [
  1146. "\n"
  1147. ]
  1148. },
  1149. "metadata": {},
  1150. "output_type": "display_data"
  1151. },
  1152. {
  1153. "data": {
  1154. "text/html": [
  1155. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">---------------------------- Eval. results on Epoch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">10</span>, Batch:<span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span> -----------------------------\n",
  1156. "</pre>\n"
  1157. ],
  1158. "text/plain": [
  1159. "---------------------------- Eval. results on Epoch:\u001b[1;36m10\u001b[0m, Batch:\u001b[1;36m0\u001b[0m -----------------------------\n"
  1160. ]
  1161. },
  1162. "metadata": {},
  1163. "output_type": "display_data"
  1164. },
  1165. {
  1166. "data": {
  1167. "text/html": [
  1168. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">{</span>\n",
  1169. " <span style=\"color: #000080; text-decoration-color: #000080; font-weight: bold\">\"prefix#suffix\"</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0.80625</span>\n",
  1170. "<span style=\"font-weight: bold\">}</span>\n",
  1171. "</pre>\n"
  1172. ],
  1173. "text/plain": [
  1174. "\u001b[1m{\u001b[0m\n",
  1175. " \u001b[1;34m\"prefix#suffix\"\u001b[0m: \u001b[1;36m0.80625\u001b[0m\n",
  1176. "\u001b[1m}\u001b[0m\n"
  1177. ]
  1178. },
  1179. "metadata": {},
  1180. "output_type": "display_data"
  1181. },
  1182. {
  1183. "data": {
  1184. "text/html": [
  1185. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
  1186. ],
  1187. "text/plain": []
  1188. },
  1189. "metadata": {},
  1190. "output_type": "display_data"
  1191. },
  1192. {
  1193. "data": {
  1194. "text/html": [
  1195. "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
  1196. "</pre>\n"
  1197. ],
  1198. "text/plain": [
  1199. "\n"
  1200. ]
  1201. },
  1202. "metadata": {},
  1203. "output_type": "display_data"
  1204. }
  1205. ],
  1206. "source": [
  1207. "trainer.run(num_eval_batch_per_dl=10)"
  1208. ]
  1209. },
  1210. {
  1211. "cell_type": "code",
  1212. "execution_count": null,
  1213. "id": "f1abfa0a",
  1214. "metadata": {},
  1215. "outputs": [],
  1216. "source": []
  1217. }
  1218. ],
  1219. "metadata": {
  1220. "kernelspec": {
  1221. "display_name": "Python 3 (ipykernel)",
  1222. "language": "python",
  1223. "name": "python3"
  1224. },
  1225. "language_info": {
  1226. "codemirror_mode": {
  1227. "name": "ipython",
  1228. "version": 3
  1229. },
  1230. "file_extension": ".py",
  1231. "mimetype": "text/x-python",
  1232. "name": "python",
  1233. "nbconvert_exporter": "python",
  1234. "pygments_lexer": "ipython3",
  1235. "version": "3.7.13"
  1236. },
  1237. "pycharm": {
  1238. "stem_cell": {
  1239. "cell_type": "raw",
  1240. "metadata": {
  1241. "collapsed": false
  1242. },
  1243. "source": []
  1244. }
  1245. }
  1246. },
  1247. "nbformat": 4,
  1248. "nbformat_minor": 5
  1249. }