{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Parameter Initialization\n",
"Parameter initialization has a large impact on a model: different initialization schemes can lead to drastically different results. Fortunately, the pioneers of deep learning have already explored a wide range of initialization methods for us, so all we need to learn is how to assign those initial values to a model's parameters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"PyTorch's initialization mechanism is not all that obvious. If you build a model in the most low-level way, you define every parameter yourself, which makes it easy to choose an initialization for each variable; for complex models, however, this is impractical, and we prefer to define models with Sequential and Module. In that case we need to know how to customize the initialization."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initializing with NumPy\n",
"Because PyTorch is a very flexible framework that can, in principle, operate on any Tensor, we can initialize parameters simply by defining new Tensors. Let's look at the example below."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"from torch import nn"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# define a Sequential model\n",
"net1 = nn.Sequential(\n",
"    nn.Linear(30, 40),\n",
"    nn.ReLU(),\n",
"    nn.Linear(40, 50),\n",
"    nn.ReLU(),\n",
"    nn.Linear(50, 10)\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# access the parameters of the first layer\n",
"w1 = net1[0].weight\n",
"b1 = net1[0].bias"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parameter containing:\n",
"tensor([[ 0.0276, -0.1197, -0.0397, ..., 0.0759, -0.1630, 0.1599],\n",
" [ 0.1419, 0.0903, -0.1630, ..., -0.0615, 0.1502, 0.0596],\n",
" [-0.0451, 0.1103, 0.1070, ..., -0.1506, -0.1346, 0.1284],\n",
" ...,\n",
" [-0.0975, -0.1264, 0.0738, ..., -0.1058, -0.1396, 0.1800],\n",
" [-0.1352, 0.0287, 0.0779, ..., 0.1773, -0.1585, 0.1046],\n",
" [-0.1194, 0.1526, -0.0018, ..., 0.0946, -0.1453, -0.1512]],\n",
" requires_grad=True)\n"
]
}
],
"source": [
"print(w1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that this is a Parameter, a special kind of Tensor. We can reach the underlying data through its `.data` attribute and replace it with a newly defined Tensor. We can draw on PyTorch's own random generators, such as `torch.randn`; for distributions PyTorch does not provide, we can fall back on NumPy."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# define a new Tensor and replace the weight with it directly\n",
"# (note: torch.from_numpy yields a float64 tensor here; call .float() if you need float32)\n",
"net1[0].weight.data = torch.from_numpy(np.random.uniform(3, 5, size=(40, 30)))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parameter containing:\n",
"tensor([[3.0403, 4.7550, 4.9311, ..., 3.0626, 4.3593, 3.9823],\n",
" [4.4812, 4.5463, 4.4052, ..., 3.7669, 3.4201, 4.6582],\n",
" [3.7711, 3.3997, 4.1416, ..., 3.4086, 3.1681, 4.0410],\n",
" ...,\n",
" [4.4137, 4.1779, 4.8741, ..., 3.4678, 3.4457, 4.7489],\n",
" [3.8246, 4.2699, 4.9944, ..., 4.8576, 3.8945, 4.5525],\n",
" [3.4959, 3.6991, 4.4047, ..., 4.7308, 3.5796, 3.2013]],\n",
" dtype=torch.float64, requires_grad=True)\n"
]
}
],
"source": [
"print(net1[0].weight)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the parameter's values have changed; it now follows the initialization we wanted. If a single layer needs manual treatment, we can access it directly this way. More often, though, every layer of the same type should be initialized the same way, and then a more efficient approach is to loop over the layers, for example:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"for layer in net1:\n",
"    if isinstance(layer, nn.Linear):  # check whether this is a linear layer\n",
"        param_shape = layer.weight.shape\n",
"        # draw from a normal distribution with mean 0 and standard deviation 0.5\n",
"        layer.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=param_shape))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Exercise: a very popular scheme is Xavier initialization, from the 2010 paper [Understanding the difficulty of training deep feedforward neural networks](http://proceedings.mlr.press/v9/glorot10a.html). It shows, by a mathematical derivation, that this scheme keeps the output variance of every layer roughly equal. Interested readers should have a look at the paper.**\n",
"\n",
"The initialization formula is\n",
"\n",
"$$\n",
"w \\sim \\mathrm{Uniform}\\left[-\\frac{\\sqrt{6}}{\\sqrt{n_j + n_{j+1}}},\\ \\frac{\\sqrt{6}}{\\sqrt{n_j + n_{j+1}}}\\right]\n",
"$$\n",
"\n",
"where $n_j$ and $n_{j+1}$ are the number of inputs and outputs of the layer. Try implementing this initialization yourself; one possible sketch follows."
]
},
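{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible sketch, reusing the NumPy loop from above (this is our own illustrative solution to the exercise, not a reference implementation):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for layer in net1:\n",
"    if isinstance(layer, nn.Linear):\n",
"        n_out, n_in = layer.weight.shape  # nn.Linear stores the weight as (out_features, in_features)\n",
"        bound = np.sqrt(6) / np.sqrt(n_in + n_out)\n",
"        layer.weight.data = torch.from_numpy(\n",
"            np.random.uniform(-bound, bound, size=(n_out, n_in)))"
]
},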
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initializing the parameters of a Module is just as simple. To initialize a particular layer, you can redefine its Tensor directly, exactly as with Sequential. The only difference is that looping over the layers requires two methods we have not met yet, `children` and `modules`. The example below illustrates both."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"class sim_net(nn.Module):\n",
"    def __init__(self):\n",
"        super(sim_net, self).__init__()\n",
"        self.l1 = nn.Sequential(\n",
"            nn.Linear(30, 40),\n",
"            nn.ReLU()\n",
"        )\n",
"        \n",
"        self.l1[0].weight.data = torch.randn(40, 30)  # initialize one layer directly\n",
"        \n",
"        self.l2 = nn.Sequential(\n",
"            nn.Linear(40, 50),\n",
"            nn.ReLU()\n",
"        )\n",
"        \n",
"        self.l3 = nn.Sequential(\n",
"            nn.Linear(50, 10),\n",
"            nn.ReLU()\n",
"        )\n",
"        \n",
"    def forward(self, x):\n",
"        x = self.l1(x)\n",
"        x = self.l2(x)\n",
"        x = self.l3(x)\n",
"        return x"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"net2 = sim_net()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sequential(\n",
" (0): Linear(in_features=30, out_features=40)\n",
" (1): ReLU()\n",
")\n",
"Sequential(\n",
" (0): Linear(in_features=40, out_features=50)\n",
" (1): ReLU()\n",
")\n",
"Sequential(\n",
" (0): Linear(in_features=50, out_features=10)\n",
" (1): ReLU()\n",
")\n"
]
}
],
"source": [
"# iterate over children\n",
"for i in net2.children():\n",
"    print(i)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"sim_net(\n",
" (l1): Sequential(\n",
" (0): Linear(in_features=30, out_features=40)\n",
" (1): ReLU()\n",
" )\n",
" (l2): Sequential(\n",
" (0): Linear(in_features=40, out_features=50)\n",
" (1): ReLU()\n",
" )\n",
" (l3): Sequential(\n",
" (0): Linear(in_features=50, out_features=10)\n",
" (1): ReLU()\n",
" )\n",
")\n",
"Sequential(\n",
" (0): Linear(in_features=30, out_features=40)\n",
" (1): ReLU()\n",
")\n",
"Linear(in_features=30, out_features=40)\n",
"ReLU()\n",
"Sequential(\n",
" (0): Linear(in_features=40, out_features=50)\n",
" (1): ReLU()\n",
")\n",
"Linear(in_features=40, out_features=50)\n",
"ReLU()\n",
"Sequential(\n",
" (0): Linear(in_features=50, out_features=10)\n",
" (1): ReLU()\n",
")\n",
"Linear(in_features=50, out_features=10)\n",
"ReLU()\n"
]
}
],
"source": [
"# iterate over modules\n",
"for i in net2.modules():\n",
"    print(i)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Can you see the difference in the example above?\n",
"\n",
"`children` only visits the first level of the model definition: since the model defines three Sequential blocks, it yields exactly those three. `modules`, by contrast, recurses into the full structure: in the example it visits not only each Sequential but also the layers inside them. That makes it very convenient for initialization, for example:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# the same normal initialization as before, now applied to every submodule\n",
"for layer in net2.modules():\n",
"    if isinstance(layer, nn.Linear):\n",
"        param_shape = layer.weight.shape\n",
"        layer.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=param_shape))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This achieves the same initialization as in the Sequential case, and just as conveniently. PyTorch also has a built-in shortcut for this pattern, sketched below."
]
},
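{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, `nn.Module` also provides an `apply` method that calls a function recursively on every submodule, a common idiom for exactly this kind of initialization. A minimal sketch (the helper name `init_linear` is our own):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def init_linear(m):\n",
"    if isinstance(m, nn.Linear):\n",
"        m.weight.data.normal_(0, 0.5)  # in-place normal init, mean 0, std 0.5\n",
"\n",
"net2.apply(init_linear)  # apply() recurses through all submodules"
]
},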
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## torch.nn.init\n",
"Thanks to PyTorch's flexibility we can initialize parameters by operating on Tensors directly, but PyTorch also ships a collection of helper functions for quick initialization: `torch.nn.init`. These helpers still operate at the Tensor level, as the examples below show."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from torch.nn import init"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parameter containing:\n",
"tensor([[3.0403, 4.7550, 4.9311, ..., 3.0626, 4.3593, 3.9823],\n",
" [4.4812, 4.5463, 4.4052, ..., 3.7669, 3.4201, 4.6582],\n",
" [3.7711, 3.3997, 4.1416, ..., 3.4086, 3.1681, 4.0410],\n",
" ...,\n",
" [4.4137, 4.1779, 4.8741, ..., 3.4678, 3.4457, 4.7489],\n",
" [3.8246, 4.2699, 4.9944, ..., 4.8576, 3.8945, 4.5525],\n",
" [3.4959, 3.6991, 4.4047, ..., 4.7308, 3.5796, 3.2013]],\n",
" dtype=torch.float64, requires_grad=True)\n"
]
}
],
"source": [
"print(net1[0].weight)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Parameter containing:\n",
"tensor([[-0.0889, 0.2279, 0.1816, ..., 0.1091, 0.0207, -0.2063],\n",
" [ 0.0394, 0.1860, 0.1261, ..., 0.2250, -0.2881, 0.0727],\n",
" [-0.2252, -0.0639, 0.2077, ..., 0.0328, -0.0075, 0.0339],\n",
" ...,\n",
" [-0.0932, 0.2806, -0.2377, ..., -0.2087, 0.0325, 0.0504],\n",
" [-0.2305, 0.2866, -0.1872, ..., 0.2127, 0.1487, 0.0645],\n",
" [-0.0072, 0.2771, 0.0928, ..., -0.0234, -0.1238, 0.1197]],\n",
" dtype=torch.float64, requires_grad=True)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the Xavier method discussed above, built into PyTorch; the trailing underscore marks an\n",
"# in-place operation (the old spelling xavier_uniform without it is deprecated)\n",
"init.xavier_uniform_(net1[0].weight)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Parameter containing:\n",
"tensor([[-0.0889, 0.2279, 0.1816, ..., 0.1091, 0.0207, -0.2063],\n",
" [ 0.0394, 0.1860, 0.1261, ..., 0.2250, -0.2881, 0.0727],\n",
" [-0.2252, -0.0639, 0.2077, ..., 0.0328, -0.0075, 0.0339],\n",
" ...,\n",
" [-0.0932, 0.2806, -0.2377, ..., -0.2087, 0.0325, 0.0504],\n",
" [-0.2305, 0.2866, -0.1872, ..., 0.2127, 0.1487, 0.0645],\n",
" [-0.0072, 0.2771, 0.0928, ..., -0.0234, -0.1238, 0.1197]],\n",
" dtype=torch.float64, requires_grad=True)\n"
]
}
],
"source": [
"print(net1[0].weight)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the parameters have indeed been modified.\n",
"\n",
"`torch.nn.init` offers many more built-in initialization schemes, saving us from re-implementing the same operations over and over; a few of them are sketched below."
]
},
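{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, a few of the other helpers in `torch.nn.init` (all work in place, as the trailing underscore indicates; the choice of layers here is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"init.normal_(net1[0].weight, mean=0, std=0.5)  # Gaussian initialization\n",
"init.constant_(net1[0].bias, 0)                # fill the bias with zeros\n",
"init.kaiming_uniform_(net1[2].weight)          # He/Kaiming init, suited to ReLU networks"
]
},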
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We covered two initialization approaches above, and at heart they do the same thing: replace the actual values of a layer's parameters. `torch.nn.init` simply provides more mature schemes tailored to deep learning, which is very convenient.\n",
"\n",
"In the next lecture we will look at the popular gradient-based optimization algorithms."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
