  1. {
  2. "cells": [
  3. {
  4. "cell_type": "markdown",
  5. "metadata": {},
  6. "source": [
  7. "# Tensor and Variable\n",
  8. "\n",
  9. "PyTorch's clean design makes it easy to get started. Before introducing PyTorch in depth, this section covers some basics so that readers get a general picture of the library and can use it to build a simple neural network. Some of the material may not be fully clear yet; it is fine to skip the details for now, as later lessons will revisit them in depth.\n",
  10. "\n",
  11. "This section is adapted from the official PyTorch tutorial[^1], with additions, deletions and edits so that the content matches the newer PyTorch API and is easier for beginners to pick up. The book also assumes basic familiarity with NumPy; for other background knowledge we recommend the CS231n tutorial[^2].\n",
  12. "\n",
  13. "[^1]: http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
  14. "[^2]: http://cs231n.github.io/python-numpy-tutorial/\n",
  15. "\n"
  16. ]
  17. },
  18. {
  19. "cell_type": "markdown",
  20. "metadata": {},
  21. "source": [
  22. "## Using PyTorch like NumPy\n",
  23. "\n",
  24. "PyTorch describes itself as a library of tensors with strong GPU acceleration and dynamic network construction. Its main building block is the tensor, so we can use PyTorch much like NumPy: many of its operations mirror NumPy's, but because they can run on the GPU they are often many times faster than NumPy. In this lesson you will learn to use PyTorch the way you use NumPy and get to know its basic elements, Tensor and Variable, and how to operate on them."
  25. ]
  26. },
  27. {
  28. "cell_type": "code",
  29. "execution_count": 5,
  30. "metadata": {},
  31. "outputs": [],
  32. "source": [
  33. "import torch\n",
  34. "import numpy as np"
  35. ]
  36. },
  37. {
  38. "cell_type": "code",
  39. "execution_count": 6,
  40. "metadata": {},
  41. "outputs": [],
  42. "source": [
  43. "# create a numpy ndarray\n",
  44. "numpy_tensor = np.random.randn(10, 20)"
  45. ]
  46. },
  47. {
  48. "cell_type": "markdown",
  49. "metadata": {},
  50. "source": [
  51. "We can convert a numpy ndarray to a tensor in either of the following two ways"
  52. ]
  53. },
  54. {
  55. "cell_type": "code",
  56. "execution_count": 7,
  57. "metadata": {},
  58. "outputs": [],
  59. "source": [
  60. "pytorch_tensor1 = torch.Tensor(numpy_tensor)\n",
  61. "pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
  62. ]
  63. },
  64. {
  65. "cell_type": "markdown",
  66. "metadata": {},
  67. "source": [
  68. "The two methods behave slightly differently with respect to data types: `torch.from_numpy` keeps the ndarray's data type and maps it to the corresponding PyTorch Tensor type (and shares the underlying memory), whereas `torch.Tensor` copies the data into the default `FloatTensor` type. The next cell makes this difference visible."
  69. ]
  70. },
  71. {
  72. "cell_type": "markdown",
  73. "metadata": {},
  74. "source": [
  75. "\n"
  76. ]
  77. },
  78. {
  79. "cell_type": "markdown",
  80. "metadata": {},
  81. "source": [
  82. "We can also convert a pytorch tensor back into a numpy ndarray using the methods below"
  83. ]
  84. },
  85. {
  86. "cell_type": "code",
  87. "execution_count": 8,
  88. "metadata": {},
  89. "outputs": [],
  90. "source": [
  91. "# if the pytorch tensor is on the cpu\n",
  92. "numpy_array = pytorch_tensor1.numpy()\n",
  93. "\n",
  94. "# if the pytorch tensor is on the gpu\n",
  95. "numpy_array = pytorch_tensor1.cpu().numpy()"
  96. ]
  97. },
  98. {
  99. "cell_type": "markdown",
  100. "metadata": {},
  101. "source": [
  102. "Note that a Tensor on the GPU cannot be converted to a NumPy ndarray directly; you first have to move it to the CPU with `.cpu()`"
  103. ]
  104. },
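{
"cell_type": "markdown",
"metadata": {},
"source": [
"The small sketch below shows the safe pattern; it only exercises the GPU branch when `torch.cuda.is_available()` returns True, and calling `.cpu()` on a tensor that already lives on the CPU simply returns it unchanged."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# safe pattern for converting a tensor that may live on the GPU\n",
"t = torch.randn(2, 3)\n",
"if torch.cuda.is_available():\n",
"    t = t.cuda()\n",
"arr = t.cpu().numpy()  # .cpu() is a no-op for a tensor that is already on the CPU\n",
"print(type(arr), arr.dtype)"
]
},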
  105. {
  106. "cell_type": "markdown",
  107. "metadata": {},
  108. "source": [
  109. "\n"
  110. ]
  111. },
  112. {
  113. "cell_type": "markdown",
  114. "metadata": {},
  115. "source": [
  116. "Using the GPU to accelerate PyTorch Tensors\n",
  117. "\n",
  118. "There are two ways to put a Tensor on the GPU"
  119. ]
  120. },
  121. {
  122. "cell_type": "code",
  123. "execution_count": null,
  124. "metadata": {},
  125. "outputs": [],
  126. "source": [
  127. "# the first way: define a cuda data type\n",
  128. "dtype = torch.cuda.FloatTensor # define the default GPU data type\n",
  129. "gpu_tensor = torch.randn(10, 20).type(dtype)\n",
  130. "\n",
  131. "# the second way is simpler and is the recommended one\n",
  132. "gpu_tensor = torch.randn(10, 20).cuda(0) # put the tensor on the first GPU\n",
  133. "gpu_tensor = torch.randn(10, 20).cuda(1) # put the tensor on the second GPU"
  134. ]
  135. },
  136. {
  137. "cell_type": "markdown",
  138. "metadata": {},
  139. "source": [
  140. "With the first method the tensor's data type is converted to the type you defined, whereas the second method moves the tensor to the GPU directly and keeps its original type; a short sketch of this difference follows below\n",
  141. "\n",
  142. "It is recommended to fix the data type when you create the tensor and then use the second method to move it to the GPU"
  143. ]
  144. },
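{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a short sketch of that difference (it only runs the GPU branch when a GPU is actually available): starting from a `DoubleTensor`, `.type(torch.cuda.FloatTensor)` casts to the cuda float type, while `.cuda()` keeps the tensor as a double."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# only meaningful on a machine with a GPU\n",
"if torch.cuda.is_available():\n",
"    d = torch.randn(10, 20).double()\n",
"    print(d.type(torch.cuda.FloatTensor).type())  # cast to the cuda type we defined\n",
"    print(d.cuda(0).type())  # stays a DoubleTensor, now on the GPU\n",
"else:\n",
"    print('no GPU available, skipping this check')"
]
},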
  145. {
  146. "cell_type": "markdown",
  147. "metadata": {},
  148. "source": [
  149. "Moving a tensor back to the CPU is just as simple"
  150. ]
  151. },
  152. {
  153. "cell_type": "code",
  154. "execution_count": null,
  155. "metadata": {
  156. "collapsed": true
  157. },
  158. "outputs": [],
  159. "source": [
  160. "cpu_tensor = gpu_tensor.cpu()"
  161. ]
  162. },
  163. {
  164. "cell_type": "markdown",
  165. "metadata": {},
  166. "source": [
  167. "We can also access some of a Tensor's attributes"
  168. ]
  169. },
  170. {
  171. "cell_type": "code",
  172. "execution_count": 5,
  173. "metadata": {},
  174. "outputs": [
  175. {
  176. "name": "stdout",
  177. "output_type": "stream",
  178. "text": [
  179. "torch.Size([10, 20])\n",
  180. "torch.Size([10, 20])\n"
  181. ]
  182. }
  183. ],
  184. "source": [
  185. "# the size of a tensor can be obtained in either of these two ways\n",
  186. "print(pytorch_tensor1.shape)\n",
  187. "print(pytorch_tensor1.size())"
  188. ]
  189. },
  190. {
  191. "cell_type": "code",
  192. "execution_count": 6,
  193. "metadata": {},
  194. "outputs": [
  195. {
  196. "name": "stdout",
  197. "output_type": "stream",
  198. "text": [
  199. "torch.FloatTensor\n"
  200. ]
  201. }
  202. ],
  203. "source": [
  204. "# get the data type of the tensor\n",
  205. "print(pytorch_tensor1.type())"
  206. ]
  207. },
  208. {
  209. "cell_type": "code",
  210. "execution_count": 7,
  211. "metadata": {},
  212. "outputs": [
  213. {
  214. "name": "stdout",
  215. "output_type": "stream",
  216. "text": [
  217. "2\n"
  218. ]
  219. }
  220. ],
  221. "source": [
  222. "# get the number of dimensions of the tensor\n",
  223. "print(pytorch_tensor1.dim())"
  224. ]
  225. },
  226. {
  227. "cell_type": "code",
  228. "execution_count": 8,
  229. "metadata": {},
  230. "outputs": [
  231. {
  232. "name": "stdout",
  233. "output_type": "stream",
  234. "text": [
  235. "200\n"
  236. ]
  237. }
  238. ],
  239. "source": [
  240. "# get the total number of elements in the tensor\n",
  241. "print(pytorch_tensor1.numel())"
  242. ]
  243. },
  244. {
  245. "cell_type": "markdown",
  246. "metadata": {},
  247. "source": [
  248. "**Exercise**\n",
  249. "\n",
  250. "Read the [documentation](http://pytorch.org/docs/0.3.0/tensors.html) to learn about tensor data types, create a randomly initialized float64 tensor of size 3 x 2, convert it to a numpy ndarray and print its data type\n",
  251. "\n",
  252. "Expected output: float64"
  253. ]
  254. },
  255. {
  256. "cell_type": "code",
  257. "execution_count": 6,
  258. "metadata": {},
  259. "outputs": [
  260. {
  261. "name": "stdout",
  262. "output_type": "stream",
  263. "text": [
  264. "float64\n"
  265. ]
  266. }
  267. ],
  268. "source": [
  269. "# solution\n",
  270. "x = torch.randn(3, 2)\n",
  271. "x = x.type(torch.DoubleTensor)\n",
  272. "x_array = x.numpy()\n",
  273. "print(x_array.dtype)"
  274. ]
  275. },
  276. {
  277. "cell_type": "markdown",
  278. "metadata": {},
  279. "source": [
  280. "\n"
  281. ]
  282. },
  283. {
  284. "cell_type": "markdown",
  285. "metadata": {},
  286. "source": [
  287. "## Tensor operations\n",
  288. "The tensor API is very similar to NumPy's: if you are familiar with NumPy operations, tensors will feel essentially the same. Below we go through some of these operations"
  289. ]
  290. },
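{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, a few of the most common NumPy constructors have direct counterparts: `torch.zeros`, `torch.arange` and `torch.eye` mirror `np.zeros`, `np.arange` and `np.eye` (a small illustrative cell, not an exhaustive list)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# NumPy-style constructors\n",
"print(torch.zeros(2, 3))   # like np.zeros((2, 3))\n",
"print(torch.arange(0, 5))  # like np.arange(0, 5)\n",
"print(torch.eye(3))        # like np.eye(3)"
]
},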
  291. {
  292. "cell_type": "code",
  293. "execution_count": 9,
  294. "metadata": {},
  295. "outputs": [
  296. {
  297. "name": "stdout",
  298. "output_type": "stream",
  299. "text": [
  300. "\n",
  301. " 1 1\n",
  302. " 1 1\n",
  303. "[torch.FloatTensor of size 2x2]\n",
  304. "\n"
  305. ]
  306. }
  307. ],
  308. "source": [
  309. "x = torch.ones(2, 2)\n",
  310. "print(x) # this is a float tensor"
  311. ]
  312. },
  313. {
  314. "cell_type": "code",
  315. "execution_count": 10,
  316. "metadata": {},
  317. "outputs": [
  318. {
  319. "name": "stdout",
  320. "output_type": "stream",
  321. "text": [
  322. "torch.FloatTensor\n"
  323. ]
  324. }
  325. ],
  326. "source": [
  327. "print(x.type())"
  328. ]
  329. },
  330. {
  331. "cell_type": "code",
  332. "execution_count": 11,
  333. "metadata": {},
  334. "outputs": [
  335. {
  336. "name": "stdout",
  337. "output_type": "stream",
  338. "text": [
  339. "\n",
  340. " 1 1\n",
  341. " 1 1\n",
  342. "[torch.LongTensor of size 2x2]\n",
  343. "\n"
  344. ]
  345. }
  346. ],
  347. "source": [
  348. "# convert it to an integer (long) tensor\n",
  349. "x = x.long()\n",
  350. "# x = x.type(torch.LongTensor)\n",
  351. "print(x)"
  352. ]
  353. },
  354. {
  355. "cell_type": "code",
  356. "execution_count": 12,
  357. "metadata": {},
  358. "outputs": [
  359. {
  360. "name": "stdout",
  361. "output_type": "stream",
  362. "text": [
  363. "\n",
  364. " 1 1\n",
  365. " 1 1\n",
  366. "[torch.FloatTensor of size 2x2]\n",
  367. "\n"
  368. ]
  369. }
  370. ],
  371. "source": [
  372. "# convert it back to float\n",
  373. "x = x.float()\n",
  374. "# x = x.type(torch.FloatTensor)\n",
  375. "print(x)"
  376. ]
  377. },
  378. {
  379. "cell_type": "code",
  380. "execution_count": 13,
  381. "metadata": {},
  382. "outputs": [
  383. {
  384. "name": "stdout",
  385. "output_type": "stream",
  386. "text": [
  387. "\n",
  388. "-0.8203 -0.0328 1.8283\n",
  389. "-0.1734 -0.1873 0.9818\n",
  390. "-1.8368 -2.2450 -0.4410\n",
  391. "-0.8005 -2.1132 0.7140\n",
  392. "[torch.FloatTensor of size 4x3]\n",
  393. "\n"
  394. ]
  395. }
  396. ],
  397. "source": [
  398. "x = torch.randn(4, 3)\n",
  399. "print(x)"
  400. ]
  401. },
  402. {
  403. "cell_type": "code",
  404. "execution_count": 14,
  405. "metadata": {
  406. "collapsed": true
  407. },
  408. "outputs": [],
  409. "source": [
  410. "# take the maximum along each row (dim=1)\n",
  411. "max_value, max_idx = torch.max(x, dim=1)"
  412. ]
  413. },
  414. {
  415. "cell_type": "code",
  416. "execution_count": 15,
  417. "metadata": {},
  418. "outputs": [
  419. {
  420. "data": {
  421. "text/plain": [
  422. "\n",
  423. " 1.8283\n",
  424. " 0.9818\n",
  425. "-0.4410\n",
  426. " 0.7140\n",
  427. "[torch.FloatTensor of size 4]"
  428. ]
  429. },
  430. "execution_count": 15,
  431. "metadata": {},
  432. "output_type": "execute_result"
  433. }
  434. ],
  435. "source": [
  436. "# the maximum value in each row\n",
  437. "max_value"
  438. ]
  439. },
  440. {
  441. "cell_type": "code",
  442. "execution_count": 16,
  443. "metadata": {},
  444. "outputs": [
  445. {
  446. "data": {
  447. "text/plain": [
  448. "\n",
  449. " 2\n",
  450. " 2\n",
  451. " 2\n",
  452. " 2\n",
  453. "[torch.LongTensor of size 4]"
  454. ]
  455. },
  456. "execution_count": 16,
  457. "metadata": {},
  458. "output_type": "execute_result"
  459. }
  460. ],
  461. "source": [
  462. "# the index of the maximum in each row\n",
  463. "max_idx"
  464. ]
  465. },
  466. {
  467. "cell_type": "code",
  468. "execution_count": 17,
  469. "metadata": {},
  470. "outputs": [
  471. {
  472. "name": "stdout",
  473. "output_type": "stream",
  474. "text": [
  475. "\n",
  476. " 0.9751\n",
  477. " 0.6212\n",
  478. "-4.5228\n",
  479. "-2.1997\n",
  480. "[torch.FloatTensor of size 4]\n",
  481. "\n"
  482. ]
  483. }
  484. ],
  485. "source": [
  486. "# sum x along each row (dim=1)\n",
  487. "sum_x = torch.sum(x, dim=1)\n",
  488. "print(sum_x)"
  489. ]
  490. },
  491. {
  492. "cell_type": "code",
  493. "execution_count": 18,
  494. "metadata": {},
  495. "outputs": [
  496. {
  497. "name": "stdout",
  498. "output_type": "stream",
  499. "text": [
  500. "torch.Size([4, 3])\n",
  501. "torch.Size([1, 4, 3])\n"
  502. ]
  503. }
  504. ],
  505. "source": [
  506. "# add or remove dimensions\n",
  507. "print(x.shape)\n",
  508. "x = x.unsqueeze(0) # insert a new first dimension\n",
  509. "print(x.shape)"
  510. ]
  511. },
  512. {
  513. "cell_type": "code",
  514. "execution_count": 19,
  515. "metadata": {},
  516. "outputs": [
  517. {
  518. "name": "stdout",
  519. "output_type": "stream",
  520. "text": [
  521. "torch.Size([1, 1, 4, 3])\n"
  522. ]
  523. }
  524. ],
  525. "source": [
  526. "x = x.unsqueeze(1) # insert a new second dimension\n",
  527. "print(x.shape)"
  528. ]
  529. },
  530. {
  531. "cell_type": "code",
  532. "execution_count": 20,
  533. "metadata": {},
  534. "outputs": [
  535. {
  536. "name": "stdout",
  537. "output_type": "stream",
  538. "text": [
  539. "torch.Size([1, 4, 3])\n"
  540. ]
  541. }
  542. ],
  543. "source": [
  544. "x = x.squeeze(0) # remove the first dimension\n",
  545. "print(x.shape)"
  546. ]
  547. },
  548. {
  549. "cell_type": "code",
  550. "execution_count": 21,
  551. "metadata": {},
  552. "outputs": [
  553. {
  554. "name": "stdout",
  555. "output_type": "stream",
  556. "text": [
  557. "torch.Size([4, 3])\n"
  558. ]
  559. }
  560. ],
  561. "source": [
  562. "x = x.squeeze() # remove all dimensions of size 1\n",
  563. "print(x.shape)"
  564. ]
  565. },
  566. {
  567. "cell_type": "code",
  568. "execution_count": 22,
  569. "metadata": {},
  570. "outputs": [
  571. {
  572. "name": "stdout",
  573. "output_type": "stream",
  574. "text": [
  575. "torch.Size([3, 4, 5])\n",
  576. "torch.Size([4, 3, 5])\n",
  577. "torch.Size([5, 3, 4])\n"
  578. ]
  579. }
  580. ],
  581. "source": [
  582. "x = torch.randn(3, 4, 5)\n",
  583. "print(x.shape)\n",
  584. "\n",
  585. "# use permute and transpose to rearrange dimensions\n",
  586. "x = x.permute(1, 0, 2) # permute reorders all of the tensor's dimensions at once\n",
  587. "print(x.shape)\n",
  588. "\n",
  589. "x = x.transpose(0, 2) # transpose swaps two dimensions of the tensor\n",
  590. "print(x.shape)"
  591. ]
  592. },
  593. {
  594. "cell_type": "code",
  595. "execution_count": 23,
  596. "metadata": {},
  597. "outputs": [
  598. {
  599. "name": "stdout",
  600. "output_type": "stream",
  601. "text": [
  602. "torch.Size([3, 4, 5])\n",
  603. "torch.Size([12, 5])\n",
  604. "torch.Size([3, 20])\n"
  605. ]
  606. }
  607. ],
  608. "source": [
  609. "# use view to reshape the tensor\n",
  610. "x = torch.randn(3, 4, 5)\n",
  611. "print(x.shape)\n",
  612. "\n",
  613. "x = x.view(-1, 5) # -1 lets this dimension be inferred, 5 makes the second dimension 5\n",
  614. "print(x.shape)\n",
  615. "\n",
  616. "x = x.view(3, 20) # reshape again to size (3, 20)\n",
  617. "print(x.shape)"
  618. ]
  619. },
  620. {
  621. "cell_type": "code",
  622. "execution_count": 24,
  623. "metadata": {
  624. "collapsed": true
  625. },
  626. "outputs": [],
  627. "source": [
  628. "x = torch.randn(3, 4)\n",
  629. "y = torch.randn(3, 4)\n",
  630. "\n",
  631. "# add the two tensors\n",
  632. "z = x + y\n",
  633. "# z = torch.add(x, y)"
  634. ]
  635. },
  636. {
  637. "cell_type": "markdown",
  638. "metadata": {},
  639. "source": [
  640. "In addition, most operations in pytorch have an inplace version, which modifies the tensor directly without allocating new memory. The convention is simple: append `_` to the name of the operation, for example"
  641. ]
  642. },
  643. {
  644. "cell_type": "code",
  645. "execution_count": 25,
  646. "metadata": {},
  647. "outputs": [
  648. {
  649. "name": "stdout",
  650. "output_type": "stream",
  651. "text": [
  652. "torch.Size([3, 3])\n",
  653. "torch.Size([1, 3, 3])\n",
  654. "torch.Size([3, 1, 3])\n"
  655. ]
  656. }
  657. ],
  658. "source": [
  659. "x = torch.ones(3, 3)\n",
  660. "print(x.shape)\n",
  661. "\n",
  662. "# inplace unsqueeze\n",
  663. "x.unsqueeze_(0)\n",
  664. "print(x.shape)\n",
  665. "\n",
  666. "# inplace transpose\n",
  667. "x.transpose_(1, 0)\n",
  668. "print(x.shape)"
  669. ]
  670. },
  671. {
  672. "cell_type": "code",
  673. "execution_count": null,
  674. "metadata": {
  675. "collapsed": true
  676. },
  677. "outputs": [],
  678. "source": [
  679. "x = torch.ones(3, 3)\n",
  680. "y = torch.ones(3, 3)\n",
  681. "print(x)\n",
  682. "\n",
  683. "# inplace add\n",
  684. "x.add_(y)\n",
  685. "print(x)"
  686. ]
  687. },
  688. {
  689. "cell_type": "markdown",
  690. "metadata": {},
  691. "source": [
  692. "**Exercise**\n",
  693. "\n",
  694. "Browse the [documentation](http://pytorch.org/docs/0.3.0/tensors.html) to learn more of the tensor api and implement the following\n",
  695. "\n",
  696. "Create a float32, 4 x 4 matrix filled with ones, then set the central 2 x 2 block to 2\n",
  697. "\n",
  698. "Expected output\n",
  699. "$$\n",
  700. "\\left[\n",
  701. "\\begin{matrix}\n",
  702. "1 & 1 & 1 & 1 \\\\\n",
  703. "1 & 2 & 2 & 1 \\\\\n",
  704. "1 & 2 & 2 & 1 \\\\\n",
  705. "1 & 1 & 1 & 1\n",
  706. "\\end{matrix}\n",
  707. "\\right] \\\\\n",
  708. "[torch.FloatTensor\\ of\\ size\\ 4x4]\n",
  709. "$$"
  710. ]
  711. },
  712. {
  713. "cell_type": "code",
  714. "execution_count": 10,
  715. "metadata": {},
  716. "outputs": [
  717. {
  718. "name": "stdout",
  719. "output_type": "stream",
  720. "text": [
  721. "\n",
  722. " 1 1 1 1\n",
  723. " 1 2 2 1\n",
  724. " 1 2 2 1\n",
  725. " 1 1 1 1\n",
  726. "[torch.FloatTensor of size 4x4]\n",
  727. "\n"
  728. ]
  729. }
  730. ],
  731. "source": [
  732. "# solution\n",
  733. "x = torch.ones(4, 4).float()\n",
  734. "x[1:3, 1:3] = 2\n",
  735. "print(x)"
  736. ]
  737. },
  738. {
  739. "cell_type": "markdown",
  740. "metadata": {},
  741. "source": [
  742. "## Variable\n",
  743. "Tensors are a solid building block of PyTorch, but on their own they are far from enough for building neural networks: we also need tensors that can take part in a computation graph, and that is what a Variable is. A Variable wraps a tensor and supports the same operations, but every Variable has three attributes: `.data`, the wrapped tensor itself; `.grad`, the gradient with respect to that tensor; and `.grad_fn`, which records the operation that produced this Variable. We will inspect these three attributes on a concrete Variable below."
  744. ]
  745. },
  746. {
  747. "cell_type": "code",
  748. "execution_count": 4,
  749. "metadata": {},
  750. "outputs": [],
  751. "source": [
  752. "# Variable is imported like this\n",
  753. "from torch.autograd import Variable"
  754. ]
  755. },
  756. {
  757. "cell_type": "code",
  758. "execution_count": 28,
  759. "metadata": {},
  760. "outputs": [],
  761. "source": [
  762. "x_tensor = torch.randn(10, 5)\n",
  763. "y_tensor = torch.randn(10, 5)\n",
  764. "\n",
  765. "# wrap the tensors in Variables\n",
  766. "x = Variable(x_tensor, requires_grad=True) # by default a Variable does not require gradients, so we explicitly ask for them\n",
  767. "y = Variable(y_tensor, requires_grad=True)"
  768. ]
  769. },
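{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before doing any computation we can already look at the three attributes of `x`: `.data` is the wrapped tensor, while `.grad` and `.grad_fn` are still `None`, because `x` is a leaf Variable created by us rather than by an operation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the three attributes of a freshly created Variable\n",
"print(x.data.size())  # the underlying tensor\n",
"print(x.grad)         # no gradient has been computed yet, so this is None\n",
"print(x.grad_fn)      # created by the user, not by an operation, so this is None"
]
},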
  770. {
  771. "cell_type": "code",
  772. "execution_count": 29,
  773. "metadata": {
  774. "collapsed": true
  775. },
  776. "outputs": [],
  777. "source": [
  778. "z = torch.sum(x + y)"
  779. ]
  780. },
  781. {
  782. "cell_type": "code",
  783. "execution_count": 30,
  784. "metadata": {},
  785. "outputs": [
  786. {
  787. "name": "stdout",
  788. "output_type": "stream",
  789. "text": [
  790. "\n",
  791. "-2.1379\n",
  792. "[torch.FloatTensor of size 1]\n",
  793. "\n",
  794. "<SumBackward0 object at 0x10da636a0>\n"
  795. ]
  796. }
  797. ],
  798. "source": [
  799. "print(z.data)\n",
  800. "print(z.grad_fn)"
  801. ]
  802. },
  803. {
  804. "cell_type": "markdown",
  805. "metadata": {},
  806. "source": [
  807. "Above we printed the tensor value stored in z, and `grad_fn` tells us that z was produced by a Sum operation"
  808. ]
  809. },
  810. {
  811. "cell_type": "code",
  812. "execution_count": 31,
  813. "metadata": {},
  814. "outputs": [
  815. {
  816. "name": "stdout",
  817. "output_type": "stream",
  818. "text": [
  819. "Variable containing:\n",
  820. " 1 1 1 1 1\n",
  821. " 1 1 1 1 1\n",
  822. " 1 1 1 1 1\n",
  823. " 1 1 1 1 1\n",
  824. " 1 1 1 1 1\n",
  825. " 1 1 1 1 1\n",
  826. " 1 1 1 1 1\n",
  827. " 1 1 1 1 1\n",
  828. " 1 1 1 1 1\n",
  829. " 1 1 1 1 1\n",
  830. "[torch.FloatTensor of size 10x5]\n",
  831. "\n",
  832. "Variable containing:\n",
  833. " 1 1 1 1 1\n",
  834. " 1 1 1 1 1\n",
  835. " 1 1 1 1 1\n",
  836. " 1 1 1 1 1\n",
  837. " 1 1 1 1 1\n",
  838. " 1 1 1 1 1\n",
  839. " 1 1 1 1 1\n",
  840. " 1 1 1 1 1\n",
  841. " 1 1 1 1 1\n",
  842. " 1 1 1 1 1\n",
  843. "[torch.FloatTensor of size 10x5]\n",
  844. "\n"
  845. ]
  846. }
  847. ],
  848. "source": [
  849. "# compute the gradients of x and y\n",
  850. "z.backward()\n",
  851. "\n",
  852. "print(x.grad)\n",
  853. "print(y.grad)"
  854. ]
  855. },
  856. {
  857. "cell_type": "markdown",
  858. "metadata": {},
  859. "source": [
  860. "Through `.grad` we obtain the gradients of x and y. Here we relied on the automatic differentiation that PyTorch provides, which is very convenient; the next section discusses autograd in detail. A quick sanity check of these gradients follows below."
  861. ]
  862. },
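{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since $z = \\sum_{i,j}(x_{ij} + y_{ij})$, every entry of x enters the sum linearly, so each entry of the gradient should be exactly 1; the quick check below confirms that `x.grad` is a matrix of ones."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# the gradient of z = sum(x + y) with respect to x is a matrix of ones\n",
"print(torch.equal(x.grad.data, torch.ones(10, 5)))  # True"
]
},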
  863. {
  864. "cell_type": "markdown",
  865. "metadata": {},
  866. "source": [
  867. "**Exercise**\n",
  868. "\n",
  869. "Build the function $y = x^2$ and compute its derivative at x = 2.\n",
  870. "\n",
  871. "Expected output: 4"
  872. ]
  873. },
  874. {
  875. "cell_type": "markdown",
  876. "metadata": {},
  877. "source": [
  878. "Hint:\n",
  879. "\n",
  880. "The graph of $y = x^2$ is shown below"
  881. ]
  882. },
  883. {
  884. "cell_type": "code",
  885. "execution_count": 1,
  886. "metadata": {},
  887. "outputs": [
  888. {
  889. "data": {
rMdJwW8auiBrj+5B4UVdTx0Xq5rVwIfzB3dR7l1fVcd3IP01FazO+K+sSUaPp3bc+L32XLvopC+Li6BievLMthVHJHhiR2MB2nxdwqaqVUpFLqfaVUplIqQyk12tPBPEUpxY3jU8gpPcDCTXtMxxFCeNBH6wvZU1HLjeNTTEc5Lu6eUT8NLNRa9wEGAhmei+R5Z/TrTHJ0GC98l4XWclYthC9yujQvfZdN/67tLb0fojuOWtRKqQjgJOBVAK31Qa11uYdzeZTdprju5GQ2FVawbHup6ThCCA/4cvMesksPcMPJKZbeD9Ed7pxRdwdKgNeVUuuVUq8opX5z6VQpNVMplaaUSispsf5KdZMHx9O5fTDPf5tlOooQopVprXn+2yySo8M4s39n03GOmztFHQAMAV7QWg8GDgB3HfokrfUsrfUwrfWwmJiYVo7Z+gIDbFwzrjsrs8tYl7fPdBwhRCv6PquUTYUVXHdyMnYvWnypOe4UdQFQoLVe1fT5+zQWt9ebNiKRyFAHL3y7w3QUIUQren7JDmLbB/E7L1t8qTlHLWqt9R4gXynVu+mhCcAWj6ZqI2FBAVw5OolFW4rYVlRpOo4QohWsz9vHiuy9XDsumaAA71p8qTnuzvq4BZitlNoADAIe8liiNjZjTBIhDjsvylm1ED7h+W93EBHiYNqIRNNRWo1bRa21Tm8afx6gtf6d1tpnBnU7hAUybUQiC37cRd5eWQJVCG+2dU8li7YUceWYJMKCAkzHaTV+d2fi4fx0weGF72QGiBDe7NnF2wkLtPP7sUmmo7QqKWogtn0wU4cn8P7aAtlYQAgvlVVcyWcbd3PlmCQiQ71vKdMjkaJucn3Tgi0vfidj1UJ4o+cWZxHisHONF+0u7i4p6iZxkSFcNDSBd9cUsGd/rek4QohjkFN6gI9/3MVlo7p55cYARyNF/Qs3ju+BS2s5qxbCy/xnSRYOu41rffBsGqSofyWhYygXDOnK3NV5FFfIWbUQ3iBvbzXz1xdy6chuxLQLMh3HI6SoD3HTKSk0uDSzlmabjiKEcMPz32b9vNCar5KiPkS3qDDOHxTH26tyKa2qMx1HCHEEBfuqeX9tAdOGJxDbPth0HI+Roj6Mm05Joa7BxcvL5KxaCCt74dsdKIVXb7PlDinqw+gRE86kAXG8tSKXvXJWLYQl7Sqv4b20Ai4elkBcZIjpOB4lRd2MWyekUFvv5CUZqxbCkp5dnIVGc+N43z6bBinqZqV0asfvBnXlzRU7Ka6UGSBCWEne3mreS8tn2ohE4juEmo7jcVLUR3DrhJ7UOzXPL5F51UJYyTOLt2O3KW46xbs3rXWXFPURJEWHcdGQeOasymNXeY3pOEIIYEdJFR+uK+CyUd18eqbHL0lRH8UtE1LQaJ5bIivrCWEFT3+9naAAOzf4wdj0T6SojyK+QyhThyfy7pp8Wa9aCMO27qnkkw27mDE2iehw37wL8XCkqN1w0ykp2GyKZxZvNx1FCL/25KJthAUGMNNH1/RojhS1GzpHBHP5qG58uK6A7JIq03GE8EubCvezcPMerj6xOx18cIW8I5GidtMN43sQFGDnqa/lrFoIE55YtI2IEAdXj+tuOkqbk6J2U3R4EDPGJvHJhl1k7K4wHUcIv7I2dx+LM4uZeVIy7YMdpuO0OSnqY3DdScm0CwrgsS+3mo4ihN/QWvPIF5lEhwdxlY/theguKepjEBkayA3jU1icWcyq7L2m4wjhF5ZsLWb1zjJum9iT0EDf2Vn8WEhRH6MZY5KIbR/Ewwsz0VqbjiOET3O6NI98sZWkqFCmDk8wHccYKepjFBJo5w8Te7E+r5yvthSZjiOET/tofSFbiyr50xm9cdj9t67898iPw0VD4+kRE8ajCzNpcLpMxxHCJ9XWO3li0TZO6BrB2f27mI5jlBR1CwTYbfz5jD7sKDnAB+sKTMcRwie9vTKXwvIa7jqrDzabMh3HKCnqFjqjXyyDEyN5ctF2auudpuMI4VMqaut5bkkW43pGMzYl2nQc46SoW0gpxZ1n9mFPRS1v/LDTdBwhfMqs77Ipr67nzjP7mI5iCVLUx2FUchSn9I7h+SVZlFcfNB1HCJ9QXFHLq9/nMGlgHP27RpiOYwlS1MfpzrP6UFXXwDPfyDKoQrSGf3+1lQaXiz+d3st0FMuQoj5OfTq355JhCby5Yqcs2CTEcdq8az/vrS3gytFJdIsKMx3HMqSoW8Edp/ciKMDGw19kmo4ihNfSWvPgZxlEhji4ZUJP03EsRYq6FXRqF8yNp6Tw1ZYiVuyQW8uFaIlvMor5Ycdebp/Yi4gQ/1t46UikqFvJ1Sd2Jy4imH9+tgWXS24tF+JY1DtdPPR5BskxYUwfmWg6juVIUbeSYIedO8/qw+ZdFXy4vtB0HCG8yuyVuWSXHuCes/v69a3izXH7/4hSyq6UWq+U+tSTgbzZeQPjGJQQyWNfZlJ9sMF0HCGsbfZsSEpC22ycfs4o/lyaxql9OplOZUnH8qPrNiDDU0F8gVKK/zu3L0UVdcxamm06jhDWNXs2zJwJubkorYnbX8wNcx5BzZljOpkluVXUSql44BzgFc/G8X5Du3XknAFdeOm7bHbvrzEdRwhruuceqK7+1UO2mprGx8VvuHtG/RTwF6DZpeKUUjOVUmlKqbSSkpLWyOa17jqzDy6t+dfnMl1PiMPKyzu2x/3cUYtaKXUuUKy1Xnuk52mtZ2mth2mth8XExLRaQG+U0DGU607uwcc/7mKl7AQjxG8lNjOzo7nH/Zw7Z9RjgfOUUjuBecCpSqm3PZrKB9xwcg+6RoZw/8ebZc1qIQ5x8B8PUOsI+vWDoaHw4INmAlncUYtaa/1XrXW81joJmAos1lpf5vFkXi4k0M7/nZtK5p5K3l6ZazqOEJYyK34UfznjZmrj4kEp6NYNZs2CSy81Hc2SZMKiB53RL5ZxPaN5fNE2SqvqTMcRwhIKy2t4bkkW9VOmEVyYDy4X7NwpJX0Ex1TUWutvtdbneiqMr1FKcd+kftQcdPLYwq2m4whhCQ991jjL955z+hpO4j3kjNrDUjqFc/WJ3XknLZ/0/HLTcYQwanlWKZ9t3M1N41OI7xBqOo7XkKJuA7dM6EmndkHct2CTrAMi/Fa908V9H28msWMo156UbDqOV5GibgPhQQHcfXZffizYz7w1+abjCGHE68tzyCqu4t5zUwl22E3H8SpS1G3k/EFxjEruyMNfZFBSKRcWhX8p2FfNk4u2M7FvJyb0lfU8jpUUdRtRSvHg5BOorXfxz8+2mI4jRJvRWnPvgs0oBX8/vz9KKdORvI4UdRvqERPODeN7sCB9F0u3+fdt9sJ/LNy0h8WZxdxxWi+6RoaYjuOVpKjb2A3je5AcHcbfPtpEbb3TdBwhPKqytp77P9lMapf2zBiTZDqO15KibmPBDjv/nNyfvLJqnlssO5cL3/b4V9sorqzjXxecQIBsCNBi8n/OgDE9orlgSFdeWrqDbUWVpuMI4RE/5pfz3xU7uWJUNwYmRJqO49WkqA255+y+hAUFcM/8jTK3WvicB
qeLv364kU7tgvjjGb1Nx/F6UtSGRIUHcffZfVmzcx9z18gavMK3vLY8hy27K7h/Uj/aB8uO4sdLitqgi4fGM6ZHFP/6PJPCctkNRviG7JIqHv9qGxP7xnJm/86m4/gEKWqDlFI8fMEAnC7NXz/ciNYyBCK8m9Ol+cv7GwgKsPHQZJkz3VqkqA1LjArlzjN7s3RbCe+vLTAdR4jj8uaKnaTl7uPeSf3o1D7YdByfIUVtAVeMTmJEUkce+HQLRRW1puMI0SK5ew/w6MKtjO8dw4VDupqO41OkqC3AZlM8ctEA6hpc3DNfhkCE93G5NHd+sIEAm+JfF5wgQx6tTIraIrpHh/HnM3rzdUYxC9J3mY4jxDGZvTqPldll3HNOX7pEyG3irU2K2kKuGtudIYmR3P/JZoorZQhEeIeCfdU8/HkG43pGM2V4guk4PkmK2kLsNsWjFw2k+qCTu2UWiPACLpfmz+9tAJAhDw+SoraYlE7h/KVpCEQ2GRBW9+r3OazI3su9k1Jlay0PkqK2oN+P7c7YlCge+HQLO0sPmI4jxGFl7K7gsS+3cnpqLJcMkyEPT5KitiCbTfHviwcSYFPc/k46DU6X6UhC/EptvZM/vJNO+xCHDHm0ASlqi+oSEcKDk08gPb+c/yzZYTqOEL/y+FdbydxTyWMXDSAqPMh0HJ8nRW1hkwbG8btBcTyzeDvr8/aZjiMEAD9klfLyshwuG5XIKX1k/8O2IEVtcX8/vz+x7YK4490fqT7YYDqO8HP7q+v543s/khwdxj1np5qO4zekqC0uIsTB45cMYufeA/zjE9kUV5ijtebujzZSUlnHk1MGERJoNx3Jb0hRe4HRPaK44eQezFuTz4L0QtNxhJ+aszqPzzbs5o7Te8mOLW1MitpL3HFaL4Z168DdH24ku6TKdBzhZ7bsquDvn2zhpF4xXH9SD9Nx/I4UtZcIsNt4ZtpgHAE2bpqzXnYwF22mqq6Bm+esIzLEwROXDMRmk6l4bU2K2ovERYbwxCUDydhdwT8/k/Fq4Xlaa/42fyM79x7gmWmDiZapeEZIUXuZU/vEMvOkZN5e2TheKIQnvZdWwEfpu7h9Yi9GJUeZjuO3pKi90J/P6M2ghEju+mADuXvlFnPhGduKKrn3402M6RHFTaekmI7j16SovZDDbuO56YNRCm54ex01B2W8WrSuytp6rn97LeFBATw1dRB2GZc2SoraS8V3COWpqYPI2FPBXz/cIEuiilbjcmnuePdHcvdW89z0IXRqJ3sfmiZF7cVO7RPLHRN78VH6Ll5fvtN0HOEjnluSxaItRfztnL4yLm0RRy1qpVSCUmqJUmqLUmqzUuq2tggm3HPTKSmcnhrLg59nsGLHXtNxhJf7JqOIJ7/exgWDuzJjTJLpOKKJO2fUDcAftdapwCjgJqWU3ORvETab4vFLBpIUFcrNc9ZRWF5jOpLwUtklVdw+L53ULu15SJYutZSjFrXWerfWel3TryuBDED2greQdsEOZl0xjLoGF9e/tVZuhhHHrKqugeveWkuAXfHS5UMJdsg6HlZyTGPUSqkkYDCw6jBfm6mUSlNKpZWUlLRSPOGuHjHhPDllEBsL98t+i+KYuFyaP76bzo6SKv4zfYhsqWVBbhe1Uioc+AC4XWtdcejXtdaztNbDtNbDYmJiWjOjcNNpqbHccVovPlxfyHOLs0zHEV7ikYWZfLm5iL+dk8qYlGjTccRhBLjzJKWUg8aSnq21/tCzkcTxuOXUFHaWHuDxRdtIjArl/EEySiWaN3d1Hi8tzebyUd24amyS6TiiGe7M+lDAq0CG1voJz0cSx0Mpxb8uPIER3Tvy5/c3sDa3zHQkYVHLtpfwt482Mb53DPdNSpWLhxbmztDHWOBy4FSlVHrTx9keziWOQ1CAnZcuG0rXyBCufXOt3GYufmNbUSU3vr2Onp3CeXbaYALsckuFlbkz6+N7rbXSWg/QWg9q+vi8LcKJlusQFshrM4bj0pqr3ljD/up605GERZRU1nHV62sIDrTz6ozhtAt2mI4kjkJ+jPqw7tFhvHTZUPLLqpn5VppM2xMcqGvgmjfT2HugjlevHEbXyBDTkYQbpKh93MjkKP598UBW5ZRx69z1NDhdpiMJQ+oanFz/9lo2Fe7n2WlDGBAfaTqScJMUtR84f1BX7p+UyldbivirzLH2S06X5o53fmTZ9lIeuXAAp6XGmo4kjoFb0/OE95sxtjv7qut5+pvtRIY6uPvsvnKV309orfnbR5v4bONu/nZOXy4aGm86kjhGUtR+5PaJPSmvPsjLy3LoEBbIjeNlMXh/8NiXW5m7Oo+bTunBNeOSTccRLSBF7UeUUtw3qR/lNfU8unArkSGBTB+ZaDqW8KCXl2bz/Lc7mD4ykT+d3tt0HNFCUtR+xmZT/PvigVTWNnDPRxsJsCkuGZ5gOpbwgNe+z+HBzzM4Z0AXHji/vwx1eTG5mOiHHHYbz186hJN6xvCXDzbwzpo805FEK3v1+xz+8ekWzurfmaemyFZa3k6K2k8FO+y8dPlQxveO4c4PNjJvtZS1r3hlWTYPfLqFs0/ozDPTBuOQuw69nnwH/Viww86Llw3llN4x3PXhRuaskrL2dq8sy+afn2VwzgldeHqqlLSvkO+inwt22Hnx8sayvnv+RmavyjUdSbTQy0v/V9JPTR0kJe1D5DspCApoLOtT+3TinvmbeP7bLLkpxotorXnsy8yfLxw+LSXtc+S7KYCmsr5sKOcNjOPRhVt54NMMXC4pa6trcLq464ON/GfJDqaNSOCZqbISni+S6XniZ4EBNp6aMoio8EBeW57D3gN1PHbRQAID5C++FdXWO7ll7noWbSni1lNT+MNpvWQKno+Soha/YrMp7j03lZh2QTy6cCv7qut54dIhhAXJHxUr2V9Tz7X/TWNNbhl/P68fV45JMh1JeJCcKonfUEpx4/gUHr1wAN9vL2H6yysprqg1HUs0KdhXzZSXVrA+fx/PThssJe0HpKhFsy4ZnsCsy4exvbiK855bzoaCctOR/N6anWWc/9xyCstreH3GCM4dEGc6kmgDUtTiiCamxvLBDWOw2xQXv7iCBemFpiP5rXmr85j+8koiQhx8dNNYTuwpO4b7CylqcVR9u7Tn45vHMjA+ktvmpfPYl5kyI6QNNThd3P/xZu76cCOjkqOYf+NYesSEm44l2pAUtXBLVHgQb18zkmkjEvjPkh3MfGst+2tkH0ZP21tVx1VvrOGNH3Zy9YndeX3GcCJCZY9DfyNFLdwWGGDjockncP+kVL7dWszZTy9jXd4+07F81g87Sjnr6WWsyinj0QsH8H/npsocaT8l33VxTJRSzBjbnfeuH41ScPGLK3jh2x0yFNKKGpwunli0jUtfWUV4cADzbxwjS9H6OSlq0SKDEzvw2a3jOLNfZx5ZmMmVr6+mpLLOdCyvt3t/DdNfXsUz32znwiHxfHLzifSLizAdSxgmRS1aLCLEwXPTB/PQ5BNYnVPGWU8vY+Gm3aZjeSWtNQvSCznr6WVs2rWfJ6cM5N8XD5QbjQQgRS2O
k1KK6SMT+fjmE+nULojr317HjbPXUlwpN8i4a1d5DVf/N43b5qWTFBXGp7ecyOTBsgGt+B/liVXShg0bptPS0lr9dYW11TtdzFqazdPfbCfEYef/zk3lwiFdZf2JZrhcmjmr83j4i0ycLs2fzujNjDFJshuLn1JKrdVaDzvs16SoRWvLKq7izg82sDZ3Hyf1iuH+Sakky7zfX9m6p5J7F2xiVU4ZY1Oi+NfkASRGhZqOJQySohZtzuXSvLUyl0cXZlLX4OLy0d24bUJPIkMDTUczqqSyjie/3sa81XmEBwVwzzl9uWRYgvyrQ0hRC3NKKut4YtE23lmTR7tgB7dO6Mnlo7r53dKptfVOXluew/NLdlBb7+SyUY0/uDqE+fcPLvE/UtTCuMw9FTz4WQbLtpfSPTqMW05NYdLAOJ/fiaSuwcn8dYU8uziLwvIaJvaN5a9n95FbwMVvSFELS9Ba8+22Eh75IpPMPZXEdwjhupN7cPHQeIIddtPxWlX1wQbmrs7n5aXZ7Kmo5YSuEdx1Vh/GpshCSuLwpKiFpWitWZxZzHNLslifV050eBDXjOvO1OEJXj+Gvbeqjjmr8nhteQ77qusZldyRG8enMK5ntIxDiyOSohaWpLVmZXYZz3+bxbLtpQQG2Dirf2emDEtgVHIUNi+ZpuZ0ab7PKuWdNXks2lJEvVMzoU8nbjylB0O7dTQdT3iJIxW13PYkjFFKMbpHFKN7RLFlVwXvrMlj/vpCFqTvIrFjKJcMi2fSwDi6RYWZjnpYO0qq+Dh9F++vLaCwvIYOoQ6uGJ3E1OEJ9IxtZzqe8CFyRi0spbbeycJNe5i3Jo+V2WUA9OwUzoS+sZyW2olBCR2M3RDS4HSxNncfX2cU8XVGMTmlBwAY1zOaKcMTOC01lqAA3xprF23nuIc+lFJnAk8DduAVrfXDR3q+FLVoDfll1SzaUsTXGUWszimjwaWJCgtkZHJHBiVEMjixA/3jIggJ9Ew5HqhrYGPhftbnlZOev49VOWWUV9fjsCtGJUdxWmosE/rG0jUyxCPvL/zLcRW1UsoObANOAwqANcA0rfWW5n6PFLVobftr6vluWwmLM4pYm7eP/LIaAOw2RZ/O7egV246EjqEkdAghoWMoiR1D6RgWSFCArdmLeFpr6hpclFbVkV9WQ35ZNfn7qskvqyZzTyXbiir5afXWblGhDO3WgYl9YxnXM5p2wbJ4v2hdxztGPQLI0lpnN73YPOB8oNmiFqK1RYQ4OG9gHOcNbNzMtbSqjvS8ctLzGz9W55TxUXohh5532BSEBgYQEmgnNNCO1lB90EnNwQZq6p0cuoy2TUGXiBCSY8I4PTWWwYkdGJgQSUe5MUUY5E5RdwXyf/F5ATDy0CcppWYCMwESExNbJZwQzYkOD2JiaiwTU2N/fuxgg4td5TXk76smr6ya8up6ag46qT7opPpgA9UHnSgFoYF2QhwBjf8NtNMxLJCEDo1n4V0ig33+JhzhfVpt1ofWehYwCxqHPlrrdYVwV2CAjaToMJKirTlLRIiWcufUoRD45T5A8U2PCSGEaAPuFPUaoKdSqrtSKhCYCnzs2VhCCCF+ctShD611g1LqZuBLGqfnvaa13uzxZEIIIQA3x6i11p8Dn3s4ixBCiMOQy9tCCGFxUtRCCGFxUtRCCGFxUtRCCGFxHlk9TylVAuS28LdHA6WtGMckXzkWXzkOkGOxIl85Dji+Y+mmtY453Bc8UtTHQymV1tzCJN7GV47FV44D5FisyFeOAzx3LDL0IYQQFidFLYQQFmfFop5lOkAr8pVj8ZXjADkWK/KV4wAPHYvlxqiFEEL8mhXPqIUQQvyCFLUQQlicJYtaKfWAUmqDUipdKfWVUirOdKaWUEo9ppTKbDqW+UqpSNOZWkopdbFSarNSyqWU8rqpVEqpM5VSW5VSWUqpu0znOR5KqdeUUsVKqU2msxwPpVSCUmqJUmpL05+t20xnaimlVLBSarVS6semY/l7q76+FceolVLttdYVTb++FUjVWl9vONYxU0qdDixuWir2EQCt9Z2GY7WIUqov4AJeAv6ktfaa3YtbskGzlSmlTgKqgDe11v1N52kppVQXoIvWep1Sqh2wFvidN35fVOMOymFa6yqllAP4HrhNa72yNV7fkmfUP5V0kzDAej9N3KC1/kpr3dD06Uoad8fxSlrrDK31VtM5WujnDZq11geBnzZo9kpa66VAmekcx0trvVtrva7p15VABo17tHod3aiq6VNH00er9ZYlixpAKfWgUiofuBS413SeVvB74AvTIfzU4TZo9spC8FVKqSRgMLDKcJQWU0rZlVLpQDGwSGvdasdirKiVUl8rpTYd5uN8AK31PVrrBGA2cLOpnEdztONoes49QAONx2JZ7hyLEK1NKRUOfADcfsi/pr2K1tqptR5E47+cRyilWm1YqtV2IT9WWuuJbj51No27y9znwTgtdrTjUErNAM4FJmgrXhD4hWP4nngb2aDZoprGcz8AZmutPzSdpzVorcuVUkuAM4FWueBryaEPpVTPX3x6PpBpKsvxUEqdCfwFOE9rXW06jx+TDZotqOkC3KtAhtb6CdN5jodSKuanWV1KqRAaL1y3Wm9ZddbHB0BvGmcZ5ALXa6297gxIKZUFBAF7mx5a6Y2zVwCUUpOBZ4EYoBxI11qfYTTUMVBKnQ08xf82aH7QbKKWU0rNBcbTuKRmEXCf1vpVo6FaQCl1IrAM2Ejj33WAu5v2aPUqSqkBwH9p/PNlA97VWv+j1V7fikUthBDifyw59CGEEOJ/pKiFEMLipKiFEMLipKiFEMLipKiFEMLipKiFEMLipKiFEMLi/h95yyGcg55E7QAAAABJRU5ErkJggg==\n",
  891. "text/plain": [
  892. "<Figure size 432x288 with 1 Axes>"
  893. ]
  894. },
  895. "metadata": {
  896. "needs_background": "light"
  897. },
  898. "output_type": "display_data"
  899. }
  900. ],
  901. "source": [
  902. "import numpy as np\n",
  903. "import matplotlib.pyplot as plt\n",
  904. "\n",
  905. "x = np.arange(-3, 3.01, 0.1)\n",
  906. "y = x ** 2\n",
  907. "plt.plot(x, y)\n",
  908. "plt.plot(2, 4, 'ro')\n",
  909. "plt.show()"
  910. ]
  911. },
  912. {
  913. "cell_type": "code",
  914. "execution_count": 6,
  915. "metadata": {},
  916. "outputs": [
  917. {
  918. "name": "stdout",
  919. "output_type": "stream",
  920. "text": [
  921. "tensor([4.])\n"
  922. ]
  923. }
  924. ],
  925. "source": [
  926. "import torch\n",
  927. "from torch.autograd import Variable\n",
  928. "\n",
  929. "# solution\n",
  930. "x = Variable(torch.FloatTensor([2]), requires_grad=True)\n",
  931. "y = x ** 2\n",
  932. "y.backward()\n",
  933. "print(x.grad)"
  934. ]
  935. },
  936. {
  937. "cell_type": "markdown",
  938. "metadata": {},
  939. "source": [
  940. "In the next lesson we will start from derivatives and look at PyTorch's automatic differentiation mechanism"
  941. ]
  942. }
  943. ],
  944. "metadata": {
  945. "kernelspec": {
  946. "display_name": "Python 3",
  947. "language": "python",
  948. "name": "python3"
  949. },
  950. "language_info": {
  951. "codemirror_mode": {
  952. "name": "ipython",
  953. "version": 3
  954. },
  955. "file_extension": ".py",
  956. "mimetype": "text/x-python",
  957. "name": "python",
  958. "nbconvert_exporter": "python",
  959. "pygments_lexer": "ipython3",
  960. "version": "3.6.9"
  961. }
  962. },
  963. "nbformat": 4,
  964. "nbformat_minor": 2
  965. }

Machine learning is increasingly applied to aircraft, robotics and related fields, with the goal of using computers to achieve human-like intelligence and so make equipment more intelligent and autonomous. This course aims to give students a grasp of the fundamentals, typical methods and techniques of machine learning, to spark interest in the field through concrete application cases, and to encourage students to analyze and solve the problems and challenges faced by aircraft and robots from the perspective of artificial intelligence. The main content includes Python programming basics, machine learning models, the fundamentals and implementation of unsupervised learning, supervised learning and deep learning, and how to use machine learning to solve practical problems, thereby improving students' overall competence.