{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Automatic Differentiation\n",
    "\n",
    "Automatic differentiation is one of the most important features of PyTorch: it spares us from working out complicated derivatives by hand, which greatly reduces the time it takes to build models. PyTorch's autograd module implements the backpropagation used in deep learning; for every operation on a tensor (the `Tensor` class), autograd can supply the derivative automatically, removing the tedium of manual calculus.\n",
    "\n",
    "Before version 0.4, PyTorch used the `Variable` class to compute all gradients automatically. A `Variable` has three main attributes:\n",
    "* data: the `Tensor` wrapped by the `Variable`;\n",
    "* grad: the gradient of data; it is itself a `Variable`, not a `Tensor`, and has the same shape as data;\n",
    "* grad_fn: a reference to a `Function` object that backpropagation uses to compute the gradients of the inputs.\n",
    "\n",
    "Since PyTorch 0.4, `Variable` has been merged into the `Tensor` class: the automatic differentiation that used to require wrapping tensors in a `Variable` is now built into `Tensor` itself. For backward compatibility you can still write `Variable(tensor)`, but the wrapping does nothing (a quick check of this follows the import below).\n",
    "\n",
    "New code should operate on `Tensor` directly, since the official documentation has marked `Variable` as deprecated."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch"
   ]
  },
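  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal check that the wrapping is a no-op (assuming PyTorch 0.4 or later): `Variable(tensor)` just hands back a plain `Tensor`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.autograd import Variable  # deprecated since PyTorch 0.4\n",
    "\n",
    "t = torch.ones(2)\n",
    "v = Variable(t)  # the wrapping does nothing\n",
    "print(type(v) is torch.Tensor)  # True: v is a plain Tensor"
   ]
  },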
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Automatic differentiation in the simple case\n",
    "Below we demonstrate automatic differentiation in some simple situations. \"Simple\" means that the result of the computation is a scalar, i.e. a single number, and we differentiate this scalar automatically."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([19.], grad_fn=<AddBackward0>)\n"
     ]
    }
   ],
   "source": [
    "x = torch.tensor([2.0], requires_grad=True)\n",
    "y = x + 2\n",
    "z = y ** 2 + 3\n",
    "print(z)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Through the sequence of operations above we obtained the final result z from x, which we can write as the formula\n",
    "\n",
    "$$\n",
    "z = (x + 2)^2 + 3\n",
    "$$\n",
    "\n",
    "so the derivative of z with respect to x is\n",
    "\n",
    "$$\n",
    "\\frac{\\partial z}{\\partial x} = 2 (x + 2) = 2 (2 + 2) = 8\n",
    "$$\n",
    "\n",
    "If you are rusty on derivatives, this [introduction to derivatives](https://baike.baidu.com/item/%E5%AF%BC%E6%95%B0#1) is a good refresher."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([8.])\n"
     ]
    }
   ],
   "source": [
    "# run automatic differentiation\n",
    "z.backward()\n",
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This simple example let us verify automatic differentiation, and it also shows how convenient autograd is. In a more complicated setting, taking derivatives by hand becomes very tedious, so the autograd machinery spares us that mathematical busywork. Let us look at a more complex example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 2.],\n",
      "        [3., 4.]], requires_grad=True)\n"
     ]
    }
   ],
   "source": [
    "# define the variables\n",
    "x = torch.tensor([1,2], dtype=torch.float, requires_grad=False)\n",
    "b = torch.tensor([5,6], dtype=torch.float, requires_grad=False)\n",
    "w = torch.tensor([[1,2],[3,4]], dtype=torch.float, requires_grad=True)\n",
    "print(w)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "z = torch.mean(torch.matmul(w, x) + b) # torch.matmul performs matrix multiplication\n",
    "z.backward()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are not familiar with matrix multiplication, this [explanation of matrix multiplication](https://baike.baidu.com/item/%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95/5446029?fr=aladdin) is a good refresher."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0.5000, 1.0000],\n",
      "        [0.5000, 1.0000]])\n"
     ]
    }
   ],
   "source": [
    "# the gradient of w\n",
    "print(w.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Written out explicitly, the computation is:\n",
    "$$\n",
    "z_1 = w_{11}*x_1 + w_{12}*x_2 + b_1 \\\\\n",
    "z_2 = w_{21}*x_1 + w_{22}*x_2 + b_2 \\\\\n",
    "z = \\frac{1}{2} (z_1 + z_2)\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "so the derivatives are:\n",
    "$$\n",
    "\\frac{\\partial z}{\\partial w_{11}} = \\frac{1}{2} x_1 \\\\\n",
    "\\frac{\\partial z}{\\partial w_{12}} = \\frac{1}{2} x_2 \\\\\n",
    "\\frac{\\partial z}{\\partial w_{21}} = \\frac{1}{2} x_1 \\\\\n",
    "\\frac{\\partial z}{\\partial w_{22}} = \\frac{1}{2} x_2\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In words: z is the mean of all elements of the matrix-vector product $w x$ plus the bias $b$, so the gradient of z with respect to each entry of `w` is the corresponding entry of $x$ divided by 2. With PyTorch's automatic differentiation we obtain the derivative with respect to `w` effortlessly. Deep learning is full of large matrix computations, and deriving all of these gradients by hand would cost a lot of time and effort, so automatic differentiation makes updating the network's parameters very convenient."
   ]
  },
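  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical check of the formulas above, every row of $\\frac{\\partial z}{\\partial w}$ should equal $x / 2$ (a minimal sketch; `expected` is a name introduced here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# every row of dz/dw should equal x / 2\n",
    "expected = (x / 2).expand_as(w)\n",
    "print(torch.allclose(w.grad, expected))  # True if autograd matches the formula"
   ]
  },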
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Automatic differentiation in more complex cases\n",
    "\n",
    "So far we have only differentiated scalars automatically. How do we automatically differentiate a vector or a matrix?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[2., 3.]], requires_grad=True)\n",
      "tensor([[0., 0.]])\n"
     ]
    }
   ],
   "source": [
    "m = torch.tensor([[2, 3]], dtype=torch.float, requires_grad=True) # build a 1 x 2 matrix\n",
    "n = torch.zeros(1, 2) # build a zero matrix of the same size\n",
    "print(m)\n",
    "print(n)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(2., grad_fn=<SelectBackward0>)\n",
      "tensor([[ 4., 27.]], grad_fn=<CopySlices>)\n"
     ]
    }
   ],
   "source": [
    "# compute the entries of n from the entries of m\n",
    "print(m[0,0])\n",
    "n[0, 0] = m[0, 0] ** 2\n",
    "n[0, 1] = m[0, 1] ** 3\n",
    "print(n)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Writing the assignments above as a formula, we get\n",
    "$$\n",
    "n = (n_0,\\ n_1) = (m_0^2,\\ m_1^3) = (2^2,\\ 3^3)\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we backpropagate through `n` directly, i.e. we take the derivative of `n` with respect to `m`.\n",
    "\n",
    "To do so we first have to be clear about what this derivative means, i.e. how to define\n",
    "\n",
    "$$\n",
    "\\frac{\\partial n}{\\partial m} = \\frac{\\partial (n_0,\\ n_1)}{\\partial (m_0,\\ m_1)}\n",
    "$$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In PyTorch, calling automatic differentiation on a non-scalar requires passing an argument to `backward()` with the same shape as n, say $(w_0,\\ w_1)$. The result of automatic differentiation is then:\n",
    "$$\n",
    "\\frac{\\partial n}{\\partial m_0} = w_0 \\frac{\\partial n_0}{\\partial m_0} + w_1 \\frac{\\partial n_1}{\\partial m_0}\n",
    "$$\n",
    "$$\n",
    "\\frac{\\partial n}{\\partial m_1} = w_0 \\frac{\\partial n_0}{\\partial m_1} + w_1 \\frac{\\partial n_1}{\\partial m_1}\n",
    "$$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "n.backward(torch.ones_like(n)) # take (w0, w1) to be (1, 1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 4., 27.]])\n"
     ]
    }
   ],
   "source": [
    "print(m.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Automatic differentiation gave us the gradients 4 and 27, which we can check by hand:\n",
    "$$\n",
    "\\frac{\\partial n}{\\partial m_0} = w_0 \\frac{\\partial n_0}{\\partial m_0} + w_1 \\frac{\\partial n_1}{\\partial m_0} = 2 m_0 + 0 = 2 \\times 2 = 4\n",
    "$$\n",
    "$$\n",
    "\\frac{\\partial n}{\\partial m_1} = w_0 \\frac{\\partial n_0}{\\partial m_1} + w_1 \\frac{\\partial n_1}{\\partial m_1} = 0 + 3 m_1^2 = 3 \\times 3^2 = 27\n",
    "$$\n",
    "The manual calculation agrees with autograd."
   ]
  },
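  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The weight vector need not be all ones. Repeating the computation with $(w_0,\\ w_1) = (1,\\ 2)$ should give $(1 \\cdot 2 m_0,\\ 2 \\cdot 3 m_1^2) = (4,\\ 54)$; a minimal sketch on fresh tensors (`m2` and `n2` are names introduced here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# the same computation, backpropagated with weights (w0, w1) = (1, 2)\n",
    "m2 = torch.tensor([[2.0, 3.0]], requires_grad=True)\n",
    "n2 = torch.zeros(1, 2)\n",
    "n2[0, 0] = m2[0, 0] ** 2\n",
    "n2[0, 1] = m2[0, 1] ** 3\n",
    "n2.backward(torch.tensor([[1.0, 2.0]]))\n",
    "print(m2.grad)  # expected: tensor([[ 4., 54.]])"
   ]
  },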
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Backpropagating more than once\n",
    "Calling backward runs automatic differentiation once. If we call backward a second time, the program raises an error: by default PyTorch discards the computation graph after one backward pass, so to backpropagate twice we must explicitly pass `retain_graph=True` on the first call. The small example below illustrates this."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([18.], grad_fn=<AddBackward0>)\n"
     ]
    }
   ],
   "source": [
    "x = torch.tensor([3], dtype=torch.float, requires_grad=True)\n",
    "y = x * 2 + x ** 2 + 3\n",
    "print(y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "y.backward(retain_graph=True) # set retain_graph to True to keep the computation graph"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([8.])\n"
     ]
    }
   ],
   "source": [
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "y.backward() # backpropagate again; this time the graph is not retained"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([16.])\n"
     ]
    }
   ],
   "source": [
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The gradient of x has become 16: backward ran twice, so the first gradient, 8, and the second gradient, also 8, were added together, giving 16."
   ]
  },
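  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In practice the accumulated gradient is usually cleared before the next backward pass, e.g. with `x.grad.zero_()` (or `optimizer.zero_grad()` in a training loop); a minimal sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "x.grad.zero_()  # reset the accumulated gradient in place\n",
    "print(x.grad)  # back to tensor([0.])"
   ]
  },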
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Exercise\n",
    "\n",
    "Define\n",
    "\n",
    "$$\n",
    "x = \n",
    "\\left[\n",
    "\\begin{matrix}\n",
    "x_0 \\\\\n",
    "x_1\n",
    "\\end{matrix}\n",
    "\\right] = \n",
    "\\left[\n",
    "\\begin{matrix}\n",
    "2 \\\\\n",
    "3\n",
    "\\end{matrix}\n",
    "\\right]\n",
    "$$\n",
    "\n",
    "$$\n",
    "k = (k_0,\\ k_1) = (x_0^2 + 3 x_1,\\ 2 x_0 + x_1^2)\n",
    "$$\n",
    "\n",
    "We want to compute the Jacobian\n",
    "\n",
    "$$\n",
    "j = \\left[\n",
    "\\begin{matrix}\n",
    "\\frac{\\partial k_0}{\\partial x_0} & \\frac{\\partial k_0}{\\partial x_1} \\\\\n",
    "\\frac{\\partial k_1}{\\partial x_0} & \\frac{\\partial k_1}{\\partial x_1}\n",
    "\\end{matrix}\n",
    "\\right]\n",
    "$$\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = torch.tensor([2, 3], dtype=torch.float, requires_grad=True)\n",
    "k = torch.zeros(2)\n",
    "\n",
    "k[0] = x[0] ** 2 + 3 * x[1]\n",
    "k[1] = x[1] ** 2 + 2 * x[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([13., 13.], grad_fn=<CopySlices>)\n",
      "tensor([4., 3.])\n",
      "tensor([2., 6.])\n"
     ]
    }
   ],
   "source": [
    "# gradient of k_0 with respect to (x_0, x_1)\n",
    "j = torch.zeros(2, 2)\n",
    "k.backward(torch.FloatTensor([1, 0]), retain_graph=True)\n",
    "print(k)\n",
    "j[0] = x.grad.data\n",
    "print(x.grad.data)\n",
    "\n",
    "x.grad.data.zero_() # zero out the gradient accumulated so far\n",
    "\n",
    "# gradient of k_1 with respect to (x_0, x_1)\n",
    "k.backward(torch.FloatTensor([0, 1]))\n",
    "j[1] = x.grad.data\n",
    "print(x.grad.data)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[4., 3.],\n",
      "        [2., 6.]])\n"
     ]
    }
   ],
   "source": [
    "print(j)"
   ]
  },
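  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In recent PyTorch versions the whole Jacobian can also be obtained in a single call with `torch.autograd.functional.jacobian`; a minimal sketch, assuming PyTorch 1.5 or later (`f` and `v` are names introduced here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# assumes PyTorch >= 1.5, where torch.autograd.functional.jacobian is available\n",
    "def f(v):\n",
    "    return torch.stack([v[0] ** 2 + 3 * v[1], 2 * v[0] + v[1] ** 2])\n",
    "\n",
    "print(torch.autograd.functional.jacobian(f, torch.tensor([2.0, 3.0])))"
   ]
  },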
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([2., 3., 4.], requires_grad=True)\n",
      "tensor([2., 0., 0.])\n"
     ]
    }
   ],
   "source": [
    "# demo of `.backward`: the weight vector [1, 0, 0] picks out the gradient of y[0] alone\n",
    "x = torch.tensor([2,3,4], dtype=torch.float, requires_grad=True)\n",
    "print(x)\n",
    "y = x*2\n",
    "\n",
    "y.backward(torch.tensor([1, 0, 0], dtype=torch.float))\n",
    "print(x.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## References\n",
    "* [PyTorch's Autograd](https://zhuanlan.zhihu.com/p/69294347)\n",
    "* [PyTorch study notes: automatic differentiation (AutoGrad)](https://zhuanlan.zhihu.com/p/102942725)\n",
    "* [PyTorch Autograd (the automatic differentiation mechanism)](https://www.cnblogs.com/wangqinze/p/13418291.html)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
