
quickstart.ipynb 9.1 kB

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# Quick Start"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP.io import CSVLoader\n",
"\n",
"loader = CSVLoader(headers=('raw_sentence', 'label'), sep='\\t')\n",
"dataset = loader.load(\"./sample_data/tutorial_sample_dataset.csv\")\n",
"dataset[0]"
]
},
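{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Added sketch (not part of the original tutorial): a quick sanity check\n",
"# on the loaded data. fastNLP's DataSet is assumed to support len() and\n",
"# integer indexing, as used elsewhere in this notebook.\n",
"len(dataset)"
]
},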
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str,\n",
"'sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'words': ['a', 'series', 'of', 'escapades', 'demonstrating', 'the', 'adage', 'that', 'what', 'is', 'good', 'for', 'the', 'goose', 'is', 'also', 'good', 'for', 'the', 'gander', ',', 'some', 'of', 'which', 'occasionally', 'amuses', 'but', 'none', 'of', 'which', 'amounts', 'to', 'much', 'of', 'a', 'story', '.'] type=list}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Lowercase all letters and split each sentence into a word sequence\n",
"dataset.apply(lambda x: x['raw_sentence'].lower(), new_field_name='sentence')\n",
"dataset.apply(lambda x: x['sentence'].split(), new_field_name='words', is_input=True)\n",
"dataset[0]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str,\n",
"'sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'words': [4, 1, 6, 1, 1, 2, 1, 11, 153, 10, 28, 17, 2, 1, 10, 1, 28, 17, 2, 1, 5, 154, 6, 149, 1, 1, 23, 1, 6, 149, 1, 8, 30, 6, 4, 35, 3] type=list}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Vocabulary\n",
"\n",
"# Use the Vocabulary class to count words, then convert each word sequence into an index sequence\n",
"vocab = Vocabulary(min_freq=2).from_dataset(dataset, field_name='words')\n",
"vocab.index_dataset(dataset, field_name='words', new_field_name='words')\n",
"dataset[0]"
]
},
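{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Added sketch (not part of the original tutorial): inspect the mapping\n",
"# that produced the index sequence above. Vocabulary is assumed to expose\n",
"# len(), to_index() and to_word(); words rarer than min_freq all collapse\n",
"# to the unknown-word index, which is why index 1 repeats so often above.\n",
"len(vocab), vocab.to_index('the'), vocab.to_word(2)"
]
},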
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str,\n",
"'sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'words': [4, 1, 6, 1, 1, 2, 1, 11, 153, 10, 28, 17, 2, 1, 10, 1, 28, 17, 2, 1, 5, 154, 6, 149, 1, 1, 23, 1, 6, 149, 1, 8, 30, 6, 4, 35, 3] type=list,\n",
"'target': 1 type=int}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Convert the label to an integer and mark it as the target\n",
"dataset.apply(lambda x: int(x['label']), new_field_name='target', is_target=True)\n",
"dataset[0]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"CNNText(\n",
" (embed): Embedding(\n",
" 177, 50\n",
" (dropout): Dropout(p=0.0)\n",
" )\n",
" (conv_pool): ConvMaxpool(\n",
" (convs): ModuleList(\n",
" (0): Conv1d(50, 3, kernel_size=(3,), stride=(1,), padding=(2,))\n",
" (1): Conv1d(50, 4, kernel_size=(4,), stride=(1,), padding=(2,))\n",
" (2): Conv1d(50, 5, kernel_size=(5,), stride=(1,), padding=(2,))\n",
" )\n",
" )\n",
" (dropout): Dropout(p=0.1)\n",
" (fc): Linear(in_features=12, out_features=5, bias=True)\n",
")"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP.models import CNNText\n",
"model = CNNText((len(vocab), 50), num_classes=5, padding=2, dropout=0.1)\n",
"model"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(62, 15)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Split into a training set and a validation set\n",
"train_data, dev_data = dataset.split(0.2)\n",
"len(train_data), len(dev_data)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\twords: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 26]) \n",
"target fields after batch(if batch size is 2):\n",
"\ttarget: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n",
"training epochs started 2019-05-09-10-59-39\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(IntProgress(value=0, layout=Layout(flex='2'), max=20), HTML(value='')), layout=Layout(display='…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluation at Epoch 1/10. Step:2/20. AccuracyMetric: acc=0.333333\n",
"\n",
"Evaluation at Epoch 2/10. Step:4/20. AccuracyMetric: acc=0.533333\n",
"\n",
"Evaluation at Epoch 3/10. Step:6/20. AccuracyMetric: acc=0.533333\n",
"\n",
"Evaluation at Epoch 4/10. Step:8/20. AccuracyMetric: acc=0.533333\n",
"\n",
"Evaluation at Epoch 5/10. Step:10/20. AccuracyMetric: acc=0.6\n",
"\n",
"Evaluation at Epoch 6/10. Step:12/20. AccuracyMetric: acc=0.8\n",
"\n",
"Evaluation at Epoch 7/10. Step:14/20. AccuracyMetric: acc=0.8\n",
"\n",
"Evaluation at Epoch 8/10. Step:16/20. AccuracyMetric: acc=0.733333\n",
"\n",
"Evaluation at Epoch 9/10. Step:18/20. AccuracyMetric: acc=0.733333\n",
"\n",
"Evaluation at Epoch 10/10. Step:20/20. AccuracyMetric: acc=0.733333\n",
"\n",
"\n",
"In Epoch:6/Step:12, got best dev performance:AccuracyMetric: acc=0.8\n",
"Reloaded the best model.\n"
]
},
{
"data": {
"text/plain": [
"{'best_eval': {'AccuracyMetric': {'acc': 0.8}},\n",
" 'best_epoch': 6,\n",
" 'best_step': 12,\n",
" 'seconds': 0.22}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Trainer, CrossEntropyLoss, AccuracyMetric\n",
"\n",
"# Create a Trainer and run training\n",
"trainer = Trainer(model=model, train_data=train_data, dev_data=dev_data,\n",
" loss=CrossEntropyLoss(), metrics=AccuracyMetric())\n",
"trainer.train()"
]
}
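,
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Added sketch (not part of the original tutorial): re-evaluate the\n",
"# reloaded best model on the dev set. Assumes fastNLP's Tester accepts\n",
"# (data, model, metrics), mirroring the Trainer arguments above; check\n",
"# the installed version's signature before relying on this.\n",
"from fastNLP import Tester\n",
"\n",
"tester = Tester(data=dev_data, model=model, metrics=AccuracyMetric())\n",
"tester.test()"
]
}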
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 1
}