  1. {
  2. "cells": [
  3. {
  4. "cell_type": "markdown",
  5. "metadata": {},
  6. "source": [
  7. "# Tensor and Variable\n",
  8. "\n",
  9. "PyTorch的简洁设计使得它入门很简单,在深入介绍PyTorch之前,本节将先介绍一些PyTorch的基础知识,使得读者能够对PyTorch有一个大致的了解,并能够用PyTorch搭建一个简单的神经网络。部分内容读者可能暂时不太理解,可先不予以深究,后续的课程将会对此进行深入讲解。\n",
  10. "\n",
  11. "本节内容参考了PyTorch官方教程[^1]并做了相应的增删修改,使得内容更贴合新版本的PyTorch接口,同时也更适合新手快速入门。另外本书需要读者先掌握基础的Numpy使用,其他相关知识推荐读者参考CS231n的教程[^2]。\n",
  12. "\n",
  13. "[^1]: http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
  14. "[^2]: http://cs231n.github.io/python-numpy-tutorial/\n",
  15. "\n"
  16. ]
  17. },
  18. {
  19. "cell_type": "markdown",
  20. "metadata": {},
  21. "source": [
  22. "## 把 PyTorch 当做 NumPy 用\n",
  23. "\n",
  24. "PyTorch 的官方介绍是一个拥有强力GPU加速的张量和动态构建网络的库,其主要构件是张量,所以我们可以把 PyTorch 当做 NumPy 来用,PyTorch 的很多操作好 NumPy 都是类似的,但是因为其能够在 GPU 上运行,所以有着比 NumPy 快很多倍的速度。通过本次课程,你能够学会如何像使用 NumPy 一样使用 PyTorch,了解到 PyTorch 中的基本元素 Tensor 和 Variable 及其操作方式。"
  25. ]
  26. },
  27. {
  28. "cell_type": "code",
  29. "execution_count": 1,
  30. "metadata": {},
  31. "outputs": [],
  32. "source": [
  33. "import torch\n",
  34. "import numpy as np"
  35. ]
  36. },
  37. {
  38. "cell_type": "code",
  39. "execution_count": 2,
  40. "metadata": {},
  41. "outputs": [],
  42. "source": [
  43. "# 创建一个 numpy ndarray\n",
  44. "numpy_tensor = np.random.randn(10, 20)"
  45. ]
  46. },
  47. {
  48. "cell_type": "markdown",
  49. "metadata": {},
  50. "source": [
  51. "我们可以使用下面两种方式将numpy的ndarray转换到tensor上"
  52. ]
  53. },
  54. {
  55. "cell_type": "code",
  56. "execution_count": 3,
  57. "metadata": {},
  58. "outputs": [],
  59. "source": [
  60. "pytorch_tensor1 = torch.Tensor(numpy_tensor)\n",
  61. "pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
  62. ]
  63. },
  64. {
  65. "cell_type": "markdown",
  66. "metadata": {},
  67. "source": [
  68. "使用以上两种方法进行转换的时候,会直接将 NumPy ndarray 的数据类型转换为对应的 PyTorch Tensor 数据类型"
  69. ]
  70. },
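{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check of the point above, here is a small sketch added while editing: it converts the same float64 ndarray (`numpy_tensor`, created earlier) with both methods and prints the resulting types. `torch.from_numpy` keeps the dtype, while `torch.Tensor` always produces a float32 tensor."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# from_numpy keeps float64 and maps it to torch.DoubleTensor\n",
"print(torch.from_numpy(numpy_tensor).type())\n",
"# torch.Tensor copies the data into a float32 FloatTensor\n",
"print(torch.Tensor(numpy_tensor).type())"
]
},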
  71. {
  72. "cell_type": "markdown",
  73. "metadata": {},
  74. "source": [
  75. "\n"
  76. ]
  77. },
  78. {
  79. "cell_type": "markdown",
  80. "metadata": {},
  81. "source": [
  82. "同时我们也可以使用下面的方法将 pytorch tensor 转换为 numpy ndarray"
  83. ]
  84. },
  85. {
  86. "cell_type": "code",
  87. "execution_count": 5,
  88. "metadata": {},
  89. "outputs": [],
  90. "source": [
  91. "# 如果 pytorch tensor 在 cpu 上\n",
  92. "numpy_array = pytorch_tensor1.numpy()\n",
  93. "\n",
  94. "# 如果 pytorch tensor 在 gpu 上\n",
  95. "numpy_array = pytorch_tensor1.cpu().numpy()"
  96. ]
  97. },
  98. {
  99. "cell_type": "markdown",
  100. "metadata": {},
  101. "source": [
  102. "需要注意 GPU 上的 Tensor 不能直接转换为 NumPy ndarray,需要使用`.cpu()`先将 GPU 上的 Tensor 转到 CPU 上"
  103. ]
  104. },
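{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged sketch of the safe conversion pattern, added while editing: it only takes the GPU path when CUDA is actually available."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if torch.cuda.is_available():\n",
"    t = torch.randn(2, 3).cuda()\n",
"    arr = t.cpu().numpy() # calling .numpy() directly on a GPU tensor raises an error\n",
"else:\n",
"    arr = torch.randn(2, 3).numpy()\n",
"print(arr.shape)"
]
},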
  105. {
  106. "cell_type": "markdown",
  107. "metadata": {},
  108. "source": [
  109. "\n"
  110. ]
  111. },
  112. {
  113. "cell_type": "markdown",
  114. "metadata": {},
  115. "source": [
  116. "PyTorch Tensor 使用 GPU 加速\n",
  117. "\n",
  118. "我们可以使用以下两种方式将 Tensor 放到 GPU 上"
  119. ]
  120. },
  121. {
  122. "cell_type": "code",
  123. "execution_count": 7,
  124. "metadata": {},
  125. "outputs": [],
  126. "source": [
  127. "# 第一种方式是定义 cuda 数据类型\n",
  128. "dtype = torch.cuda.FloatTensor # 定义默认 GPU 的 数据类型\n",
  129. "gpu_tensor = torch.randn(10, 20).type(dtype)\n",
  130. "\n",
  131. "# 第二种方式更简单,推荐使用\n",
  132. "gpu_tensor = torch.randn(10, 20).cuda(0) # 将 tensor 放到第一个 GPU 上\n",
  133. "gpu_tensor = torch.randn(10, 20).cuda(0) # 将 tensor 放到第二个 GPU 上"
  134. ]
  135. },
  136. {
  137. "cell_type": "markdown",
  138. "metadata": {},
  139. "source": [
  140. "使用第一种方式将 tensor 放到 GPU 上的时候会将数据类型转换成定义的类型,而是用第二种方式能够直接将 tensor 放到 GPU 上,类型跟之前保持一致\n",
  141. "\n",
  142. "推荐在定义 tensor 的时候就明确数据类型,然后直接使用第二种方法将 tensor 放到 GPU 上"
  143. ]
  144. },
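{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small illustration of that recommendation, added while editing (it assumes a CUDA device is present): fix the dtype when the tensor is created, then move it with `.cuda()`; the type is preserved."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = torch.randn(10, 20).type(torch.DoubleTensor) # choose the dtype up front\n",
"x_gpu = x.cuda(0) # move to the first GPU; it stays a double tensor\n",
"print(x_gpu.type()) # torch.cuda.DoubleTensor"
]
},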
  145. {
  146. "cell_type": "markdown",
  147. "metadata": {},
  148. "source": [
  149. "而将 tensor 放回 CPU 的操作非常简单"
  150. ]
  151. },
  152. {
  153. "cell_type": "code",
  154. "execution_count": 9,
  155. "metadata": {},
  156. "outputs": [],
  157. "source": [
  158. "cpu_tensor = gpu_tensor.cpu()"
  159. ]
  160. },
  161. {
  162. "cell_type": "markdown",
  163. "metadata": {},
  164. "source": [
  165. "我们也能够访问到 Tensor 的一些属性"
  166. ]
  167. },
  168. {
  169. "cell_type": "code",
  170. "execution_count": 10,
  171. "metadata": {},
  172. "outputs": [
  173. {
  174. "name": "stdout",
  175. "output_type": "stream",
  176. "text": [
  177. "torch.Size([10, 20])\n",
  178. "torch.Size([10, 20])\n"
  179. ]
  180. }
  181. ],
  182. "source": [
  183. "# 可以通过下面两种方式得到 tensor 的大小\n",
  184. "print(pytorch_tensor1.shape)\n",
  185. "print(pytorch_tensor1.size())"
  186. ]
  187. },
  188. {
  189. "cell_type": "code",
  190. "execution_count": 12,
  191. "metadata": {},
  192. "outputs": [
  193. {
  194. "name": "stdout",
  195. "output_type": "stream",
  196. "text": [
  197. "torch.FloatTensor\n",
  198. "torch.cuda.FloatTensor\n"
  199. ]
  200. }
  201. ],
  202. "source": [
  203. "# 得到 tensor 的数据类型\n",
  204. "print(pytorch_tensor1.type())\n",
  205. "print(gpu_tensor.type())"
  206. ]
  207. },
  208. {
  209. "cell_type": "code",
  210. "execution_count": 13,
  211. "metadata": {},
  212. "outputs": [
  213. {
  214. "name": "stdout",
  215. "output_type": "stream",
  216. "text": [
  217. "2\n"
  218. ]
  219. }
  220. ],
  221. "source": [
  222. "# 得到 tensor 的维度\n",
  223. "print(pytorch_tensor1.dim())"
  224. ]
  225. },
  226. {
  227. "cell_type": "code",
  228. "execution_count": 14,
  229. "metadata": {},
  230. "outputs": [
  231. {
  232. "name": "stdout",
  233. "output_type": "stream",
  234. "text": [
  235. "200\n"
  236. ]
  237. }
  238. ],
  239. "source": [
  240. "# 得到 tensor 的所有元素个数\n",
  241. "print(pytorch_tensor1.numel())"
  242. ]
  243. },
  244. {
  245. "cell_type": "markdown",
  246. "metadata": {},
  247. "source": [
  248. "**小练习**\n",
  249. "\n",
  250. "查阅以下[文档](http://pytorch.org/docs/0.3.0/tensors.html)了解 tensor 的数据类型,创建一个 float64、大小是 3 x 2、随机初始化的 tensor,将其转化为 numpy 的 ndarray,输出其数据类型\n",
  251. "\n",
  252. "参考输出: float64"
  253. ]
  254. },
  255. {
  256. "cell_type": "code",
  257. "execution_count": 6,
  258. "metadata": {},
  259. "outputs": [
  260. {
  261. "name": "stdout",
  262. "output_type": "stream",
  263. "text": [
  264. "float64\n"
  265. ]
  266. }
  267. ],
  268. "source": [
  269. "# 答案\n",
  270. "x = torch.randn(3, 2)\n",
  271. "x = x.type(torch.DoubleTensor)\n",
  272. "x_array = x.numpy()\n",
  273. "print(x_array.dtype)"
  274. ]
  275. },
  276. {
  277. "cell_type": "markdown",
  278. "metadata": {},
  279. "source": [
  280. "\n"
  281. ]
  282. },
  283. {
  284. "cell_type": "markdown",
  285. "metadata": {},
  286. "source": [
  287. "## Tensor的操作\n",
  288. "Tensor 操作中的 api 和 NumPy 非常相似,如果你熟悉 NumPy 中的操作,那么 tensor 基本是一致的,下面我们来列举其中的一些操作"
  289. ]
  290. },
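{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the correspondence concrete, here is a short side-by-side sketch added while editing: the same elementary operations written with NumPy and with PyTorch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.ones((2, 3)) # NumPy array\n",
"t = torch.ones(2, 3) # PyTorch tensor\n",
"\n",
"print(a.shape, t.shape) # shapes look the same\n",
"print(a.sum(), t.sum()) # reductions have the same names\n",
"print((a * 2).mean(), (t * 2).mean()) # elementwise arithmetic works the same way"
]
},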
  291. {
  292. "cell_type": "code",
  293. "execution_count": 16,
  294. "metadata": {},
  295. "outputs": [
  296. {
  297. "name": "stdout",
  298. "output_type": "stream",
  299. "text": [
  300. "tensor([[1., 1.],\n",
  301. " [1., 1.],\n",
  302. " [1., 1.]])\n"
  303. ]
  304. }
  305. ],
  306. "source": [
  307. "x = torch.ones(3, 2)\n",
  308. "print(x) # 这是一个float tensor"
  309. ]
  310. },
  311. {
  312. "cell_type": "code",
  313. "execution_count": 17,
  314. "metadata": {},
  315. "outputs": [
  316. {
  317. "name": "stdout",
  318. "output_type": "stream",
  319. "text": [
  320. "torch.FloatTensor\n"
  321. ]
  322. }
  323. ],
  324. "source": [
  325. "print(x.type())"
  326. ]
  327. },
  328. {
  329. "cell_type": "code",
  330. "execution_count": 18,
  331. "metadata": {},
  332. "outputs": [
  333. {
  334. "name": "stdout",
  335. "output_type": "stream",
  336. "text": [
  337. "tensor([[1, 1],\n",
  338. " [1, 1],\n",
  339. " [1, 1]])\n"
  340. ]
  341. }
  342. ],
  343. "source": [
  344. "# 将其转化为整形\n",
  345. "x = x.long()\n",
  346. "# x = x.type(torch.LongTensor)\n",
  347. "print(x)"
  348. ]
  349. },
  350. {
  351. "cell_type": "code",
  352. "execution_count": 19,
  353. "metadata": {},
  354. "outputs": [
  355. {
  356. "name": "stdout",
  357. "output_type": "stream",
  358. "text": [
  359. "tensor([[1., 1.],\n",
  360. " [1., 1.],\n",
  361. " [1., 1.]])\n"
  362. ]
  363. }
  364. ],
  365. "source": [
  366. "# 再将其转回 float\n",
  367. "x = x.float()\n",
  368. "# x = x.type(torch.FloatTensor)\n",
  369. "print(x)"
  370. ]
  371. },
  372. {
  373. "cell_type": "code",
  374. "execution_count": 20,
  375. "metadata": {},
  376. "outputs": [
  377. {
  378. "name": "stdout",
  379. "output_type": "stream",
  380. "text": [
  381. "tensor([[-1.8509, 0.5228, -1.3782],\n",
  382. " [ 0.9726, 1.7519, -0.3425],\n",
  383. " [-0.0131, 2.1198, -1.1388],\n",
  384. " [ 0.2897, 1.2477, -0.2862]])\n"
  385. ]
  386. }
  387. ],
  388. "source": [
  389. "x = torch.randn(4, 3)\n",
  390. "print(x)"
  391. ]
  392. },
  393. {
  394. "cell_type": "code",
  395. "execution_count": 21,
  396. "metadata": {},
  397. "outputs": [],
  398. "source": [
  399. "# 沿着行取最大值\n",
  400. "max_value, max_idx = torch.max(x, dim=1)"
  401. ]
  402. },
  403. {
  404. "cell_type": "code",
  405. "execution_count": 22,
  406. "metadata": {},
  407. "outputs": [
  408. {
  409. "data": {
  410. "text/plain": [
  411. "tensor([0.5228, 1.7519, 2.1198, 1.2477])"
  412. ]
  413. },
  414. "execution_count": 22,
  415. "metadata": {},
  416. "output_type": "execute_result"
  417. }
  418. ],
  419. "source": [
  420. "# 每一行的最大值\n",
  421. "max_value"
  422. ]
  423. },
  424. {
  425. "cell_type": "code",
  426. "execution_count": 23,
  427. "metadata": {},
  428. "outputs": [
  429. {
  430. "data": {
  431. "text/plain": [
  432. "tensor([1, 1, 1, 1])"
  433. ]
  434. },
  435. "execution_count": 23,
  436. "metadata": {},
  437. "output_type": "execute_result"
  438. }
  439. ],
  440. "source": [
  441. "# 每一行最大值的下标\n",
  442. "max_idx"
  443. ]
  444. },
  445. {
  446. "cell_type": "code",
  447. "execution_count": 24,
  448. "metadata": {},
  449. "outputs": [
  450. {
  451. "name": "stdout",
  452. "output_type": "stream",
  453. "text": [
  454. "tensor([-2.7063, 2.3820, 0.9679, 1.2512])\n"
  455. ]
  456. }
  457. ],
  458. "source": [
  459. "# 沿着行对 x 求和\n",
  460. "sum_x = torch.sum(x, dim=1)\n",
  461. "print(sum_x)"
  462. ]
  463. },
  464. {
  465. "cell_type": "code",
  466. "execution_count": 28,
  467. "metadata": {},
  468. "outputs": [
  469. {
  470. "name": "stdout",
  471. "output_type": "stream",
  472. "text": [
  473. "torch.Size([1, 1, 1, 4, 3])\n",
  474. "torch.Size([1, 1, 1, 1, 4, 3])\n",
  475. "tensor([[[[[[-1.8509, 0.5228, -1.3782],\n",
  476. " [ 0.9726, 1.7519, -0.3425],\n",
  477. " [-0.0131, 2.1198, -1.1388],\n",
  478. " [ 0.2897, 1.2477, -0.2862]]]]]])\n"
  479. ]
  480. }
  481. ],
  482. "source": [
  483. "# 增加维度或者减少维度\n",
  484. "print(x.shape)\n",
  485. "x = x.unsqueeze(0) # 在第一维增加\n",
  486. "print(x.shape)\n",
  487. "print(x)"
  488. ]
  489. },
  490. {
  491. "cell_type": "code",
  492. "execution_count": 19,
  493. "metadata": {},
  494. "outputs": [
  495. {
  496. "name": "stdout",
  497. "output_type": "stream",
  498. "text": [
  499. "torch.Size([1, 1, 4, 3])\n"
  500. ]
  501. }
  502. ],
  503. "source": [
  504. "x = x.unsqueeze(1) # 在第二维增加\n",
  505. "print(x.shape)"
  506. ]
  507. },
  508. {
  509. "cell_type": "code",
  510. "execution_count": 43,
  511. "metadata": {},
  512. "outputs": [
  513. {
  514. "name": "stdout",
  515. "output_type": "stream",
  516. "text": [
  517. "torch.Size([4, 3])\n",
  518. "tensor([[-1.8509, 0.5228, -1.3782],\n",
  519. " [ 0.9726, 1.7519, -0.3425],\n",
  520. " [-0.0131, 2.1198, -1.1388],\n",
  521. " [ 0.2897, 1.2477, -0.2862]])\n"
  522. ]
  523. }
  524. ],
  525. "source": [
  526. "x = x.squeeze(0) # 减少第一维\n",
  527. "print(x.shape)\n",
  528. "print(x)"
  529. ]
  530. },
  531. {
  532. "cell_type": "code",
  533. "execution_count": 44,
  534. "metadata": {},
  535. "outputs": [
  536. {
  537. "name": "stdout",
  538. "output_type": "stream",
  539. "text": [
  540. "torch.Size([4, 3])\n"
  541. ]
  542. }
  543. ],
  544. "source": [
  545. "x = x.squeeze() # 将 tensor 中所有的一维全部都去掉\n",
  546. "print(x.shape)"
  547. ]
  548. },
  549. {
  550. "cell_type": "code",
  551. "execution_count": 46,
  552. "metadata": {},
  553. "outputs": [
  554. {
  555. "name": "stdout",
  556. "output_type": "stream",
  557. "text": [
  558. "torch.Size([3, 4, 5])\n",
  559. "torch.Size([4, 3, 5])\n",
  560. "torch.Size([5, 3, 4])\n"
  561. ]
  562. }
  563. ],
  564. "source": [
  565. "x = torch.randn(3, 4, 5)\n",
  566. "print(x.shape)\n",
  567. "\n",
  568. "# 使用permute和transpose进行维度交换\n",
  569. "x = x.permute(1, 0, 2) # permute 可以重新排列 tensor 的维度\n",
  570. "print(x.shape)\n",
  571. "\n",
  572. "x = x.transpose(0, 2) # transpose 交换 tensor 中的两个维度\n",
  573. "print(x.shape)"
  574. ]
  575. },
  576. {
  577. "cell_type": "code",
  578. "execution_count": 47,
  579. "metadata": {},
  580. "outputs": [
  581. {
  582. "name": "stdout",
  583. "output_type": "stream",
  584. "text": [
  585. "torch.Size([3, 4, 5])\n",
  586. "torch.Size([12, 5])\n",
  587. "torch.Size([3, 20])\n"
  588. ]
  589. }
  590. ],
  591. "source": [
  592. "# 使用 view 对 tensor 进行 reshape\n",
  593. "x = torch.randn(3, 4, 5)\n",
  594. "print(x.shape)\n",
  595. "\n",
  596. "x = x.view(-1, 5) # -1 表示任意的大小,5 表示第二维变成 5\n",
  597. "print(x.shape)\n",
  598. "\n",
  599. "x = x.view(3, 20) # 重新 reshape 成 (3, 20) 的大小\n",
  600. "print(x.shape)"
  601. ]
  602. },
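{
"cell_type": "markdown",
"metadata": {},
"source": [
"One caveat worth knowing, added here as a note (it is not in the original text): `view` requires the tensor's memory to be contiguous, so it can fail on a tensor that has just been transposed or permuted. Calling `.contiguous()` first (or using `.reshape()` in newer PyTorch versions) avoids the error."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y = torch.randn(3, 4).transpose(0, 1) # not contiguous after transpose\n",
"# y.view(12) # this line would raise a RuntimeError\n",
"z = y.contiguous().view(12) # works once the memory is made contiguous\n",
"print(z.shape)"
]
},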
  603. {
  604. "cell_type": "code",
  605. "execution_count": 48,
  606. "metadata": {},
  607. "outputs": [],
  608. "source": [
  609. "x = torch.randn(3, 4)\n",
  610. "y = torch.randn(3, 4)\n",
  611. "\n",
  612. "# 两个 tensor 求和\n",
  613. "z = x + y\n",
  614. "# z = torch.add(x, y)"
  615. ]
  616. },
  617. {
  618. "cell_type": "markdown",
  619. "metadata": {},
  620. "source": [
  621. "另外,pytorch中大多数的操作都支持 inplace 操作,也就是可以直接对 tensor 进行操作而不需要另外开辟内存空间,方式非常简单,一般都是在操作的符号后面加`_`,比如"
  622. ]
  623. },
  624. {
  625. "cell_type": "code",
  626. "execution_count": 49,
  627. "metadata": {},
  628. "outputs": [
  629. {
  630. "name": "stdout",
  631. "output_type": "stream",
  632. "text": [
  633. "torch.Size([3, 3])\n",
  634. "torch.Size([1, 3, 3])\n",
  635. "torch.Size([3, 1, 3])\n"
  636. ]
  637. }
  638. ],
  639. "source": [
  640. "x = torch.ones(3, 3)\n",
  641. "print(x.shape)\n",
  642. "\n",
  643. "# unsqueeze 进行 inplace\n",
  644. "x.unsqueeze_(0)\n",
  645. "print(x.shape)\n",
  646. "\n",
  647. "# transpose 进行 inplace\n",
  648. "x.transpose_(1, 0)\n",
  649. "print(x.shape)"
  650. ]
  651. },
  652. {
  653. "cell_type": "code",
  654. "execution_count": 50,
  655. "metadata": {},
  656. "outputs": [
  657. {
  658. "name": "stdout",
  659. "output_type": "stream",
  660. "text": [
  661. "tensor([[1., 1., 1.],\n",
  662. " [1., 1., 1.],\n",
  663. " [1., 1., 1.]])\n",
  664. "tensor([[2., 2., 2.],\n",
  665. " [2., 2., 2.],\n",
  666. " [2., 2., 2.]])\n"
  667. ]
  668. }
  669. ],
  670. "source": [
  671. "x = torch.ones(3, 3)\n",
  672. "y = torch.ones(3, 3)\n",
  673. "print(x)\n",
  674. "\n",
  675. "# add 进行 inplace\n",
  676. "x.add_(y)\n",
  677. "print(x)"
  678. ]
  679. },
  680. {
  681. "cell_type": "markdown",
  682. "metadata": {},
  683. "source": [
  684. "**小练习**\n",
  685. "\n",
  686. "访问[文档](http://pytorch.org/docs/0.3.0/tensors.html)了解 tensor 更多的 api,实现下面的要求\n",
  687. "\n",
  688. "创建一个 float32、4 x 4 的全为1的矩阵,将矩阵正中间 2 x 2 的矩阵,全部修改成2\n",
  689. "\n",
  690. "参考输出\n",
  691. "$$\n",
  692. "\\left[\n",
  693. "\\begin{matrix}\n",
  694. "1 & 1 & 1 & 1 \\\\\n",
  695. "1 & 2 & 2 & 1 \\\\\n",
  696. "1 & 2 & 2 & 1 \\\\\n",
  697. "1 & 1 & 1 & 1\n",
  698. "\\end{matrix}\n",
  699. "\\right] \\\\\n",
  700. "[torch.FloatTensor\\ of\\ size\\ 4x4]\n",
  701. "$$"
  702. ]
  703. },
  704. {
  705. "cell_type": "code",
  706. "execution_count": 10,
  707. "metadata": {},
  708. "outputs": [
  709. {
  710. "name": "stdout",
  711. "output_type": "stream",
  712. "text": [
  713. "\n",
  714. " 1 1 1 1\n",
  715. " 1 2 2 1\n",
  716. " 1 2 2 1\n",
  717. " 1 1 1 1\n",
  718. "[torch.FloatTensor of size 4x4]\n",
  719. "\n"
  720. ]
  721. }
  722. ],
  723. "source": [
  724. "# 答案\n",
  725. "x = torch.ones(4, 4).float()\n",
  726. "x[1:3, 1:3] = 2\n",
  727. "print(x)"
  728. ]
  729. },
  730. {
  731. "cell_type": "markdown",
  732. "metadata": {},
  733. "source": [
  734. "## Variable\n",
  735. "tensor 是 PyTorch 中的完美组件,但是构建神经网络还远远不够,我们需要能够构建计算图的 tensor,这就是 Variable。Variable 是对 tensor 的封装,操作和 tensor 是一样的,但是每个 Variabel都有三个属性,Variable 中的 tensor本身`.data`,对应 tensor 的梯度`.grad`以及这个 Variable 是通过什么方式得到的`.grad_fn`"
  736. ]
  737. },
  738. {
  739. "cell_type": "code",
  740. "execution_count": 51,
  741. "metadata": {},
  742. "outputs": [],
  743. "source": [
  744. "# 通过下面这种方式导入 Variable\n",
  745. "from torch.autograd import Variable"
  746. ]
  747. },
  748. {
  749. "cell_type": "code",
  750. "execution_count": 52,
  751. "metadata": {},
  752. "outputs": [],
  753. "source": [
  754. "x_tensor = torch.randn(10, 5)\n",
  755. "y_tensor = torch.randn(10, 5)\n",
  756. "\n",
  757. "# 将 tensor 变成 Variable\n",
  758. "x = Variable(x_tensor, requires_grad=True) # 默认 Variable 是不需要求梯度的,所以我们用这个方式申明需要对其进行求梯度\n",
  759. "y = Variable(y_tensor, requires_grad=True)"
  760. ]
  761. },
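{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small sketch added while editing to show the three attributes described above on the freshly created `x`: before any backward pass `.grad` is still `None`, and for a leaf Variable created by the user `.grad_fn` is `None` as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(x.data.shape) # the wrapped tensor\n",
"print(x.grad) # None until backward() has been called\n",
"print(x.grad_fn) # None for a user-created leaf Variable"
]
},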
  762. {
  763. "cell_type": "code",
  764. "execution_count": 53,
  765. "metadata": {},
  766. "outputs": [],
  767. "source": [
  768. "z = torch.sum(x + y)"
  769. ]
  770. },
  771. {
  772. "cell_type": "code",
  773. "execution_count": 54,
  774. "metadata": {},
  775. "outputs": [
  776. {
  777. "name": "stdout",
  778. "output_type": "stream",
  779. "text": [
  780. "tensor(-6.3913)\n",
  781. "<SumBackward0 object at 0x7f8a18db6e10>\n"
  782. ]
  783. }
  784. ],
  785. "source": [
  786. "print(z.data)\n",
  787. "print(z.grad_fn)"
  788. ]
  789. },
  790. {
  791. "cell_type": "markdown",
  792. "metadata": {},
  793. "source": [
  794. "上面我们打出了 z 中的 tensor 数值,同时通过`grad_fn`知道了其是通过 Sum 这种方式得到的"
  795. ]
  796. },
  797. {
  798. "cell_type": "code",
  799. "execution_count": 55,
  800. "metadata": {},
  801. "outputs": [
  802. {
  803. "name": "stdout",
  804. "output_type": "stream",
  805. "text": [
  806. "tensor([[1., 1., 1., 1., 1.],\n",
  807. " [1., 1., 1., 1., 1.],\n",
  808. " [1., 1., 1., 1., 1.],\n",
  809. " [1., 1., 1., 1., 1.],\n",
  810. " [1., 1., 1., 1., 1.],\n",
  811. " [1., 1., 1., 1., 1.],\n",
  812. " [1., 1., 1., 1., 1.],\n",
  813. " [1., 1., 1., 1., 1.],\n",
  814. " [1., 1., 1., 1., 1.],\n",
  815. " [1., 1., 1., 1., 1.]])\n",
  816. "tensor([[1., 1., 1., 1., 1.],\n",
  817. " [1., 1., 1., 1., 1.],\n",
  818. " [1., 1., 1., 1., 1.],\n",
  819. " [1., 1., 1., 1., 1.],\n",
  820. " [1., 1., 1., 1., 1.],\n",
  821. " [1., 1., 1., 1., 1.],\n",
  822. " [1., 1., 1., 1., 1.],\n",
  823. " [1., 1., 1., 1., 1.],\n",
  824. " [1., 1., 1., 1., 1.],\n",
  825. " [1., 1., 1., 1., 1.]])\n"
  826. ]
  827. }
  828. ],
  829. "source": [
  830. "# 求 x 和 y 的梯度\n",
  831. "z.backward()\n",
  832. "\n",
  833. "print(x.grad)\n",
  834. "print(y.grad)"
  835. ]
  836. },
  837. {
  838. "cell_type": "markdown",
  839. "metadata": {},
  840. "source": [
  841. "通过`.grad`我们得到了 x 和 y 的梯度,这里我们使用了 PyTorch 提供的自动求导机制,非常方便,下一小节会具体讲自动求导。"
  842. ]
  843. },
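{
"cell_type": "markdown",
"metadata": {},
"source": [
"A short note added while editing on why every entry of the gradients above is 1: since\n",
"$$z = \\sum_{i,j}(x_{ij} + y_{ij}),$$\n",
"each element enters the sum linearly, so\n",
"$$\\frac{\\partial z}{\\partial x_{ij}} = \\frac{\\partial z}{\\partial y_{ij}} = 1.$$"
]
},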
  844. {
  845. "cell_type": "markdown",
  846. "metadata": {},
  847. "source": [
  848. "**小练习**\n",
  849. "\n",
  850. "尝试构建一个函数 $y = x^2 $,然后求 x=2 的导数。\n",
  851. "\n",
  852. "参考输出:4"
  853. ]
  854. },
  855. {
  856. "cell_type": "markdown",
  857. "metadata": {},
  858. "source": [
  859. "提示:\n",
  860. "\n",
  861. "$y = x^2$的图像如下"
  862. ]
  863. },
  864. {
  865. "cell_type": "code",
  866. "execution_count": 56,
  867. "metadata": {},
  868. "outputs": [
  869. {
  870. "data": {
  871. "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWoAAAD4CAYAAADFAawfAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy86wFpkAAAACXBIWXMAAAsTAAALEwEAmpwYAAAnB0lEQVR4nO3dd3hUVf7H8feZyaRDAkkIhCSEEFpAekcUBbvoYqPYcFXsZd2iq/tTd11dy9pdC9ZVKVbEiqKgINICRFoChIQ0IIUQkpBCMnN+fyS6igSGkMm5M/N9PU8eyWSY+VwDn1zOPfccpbVGCCGEddlMBxBCCHFkUtRCCGFxUtRCCGFxUtRCCGFxUtRCCGFxAZ540ejoaJ2UlOSJlxZCCJ+0du3aUq11zOG+5pGiTkpKIi0tzRMvLYQQPkkpldvc12ToQwghLE6KWgghLE6KWgghLE6KWgghLE6KWgghLE6KWgghLE6KWgghLM4yRV1b7+Tlpdn8sKPUdBQhhDhmSzKLeX15DgcbXK3+2pYp6gCb4uVl2by6LMd0FCGEOGYvfLeD//6wE4ddtfprW6eo7TYuGhrPkq3F7NlfazqOEEK4LbukitU5ZVwyPAGlfLioAS4ZloBLw/tr801HEUIIt72Tlo/dprhoSLxHXt9SRZ0UHcbo5CjeScvH5ZItwoQQ1lfvdPHB2gJO7dOJTu2DPfIelipqgKkjEsgvq2FF9l7TUYQQ4qi+ySimtOogU4cneOw9LFfUZ/TrTESIg7mr80xHEUKIo5q3Jo/O7YM5uddhVyhtFZYr6mCHncmDu/LV5iLKDhw0HUcIIZq1q7yG77aVcPGweALsnqtTyxU1wJThCRx0upi/vtB0FCGEaNZ7aQVo3TgRwpMsWdR9u7RnYEIk76zJQ2u5qCiEsB6nS/NuWj4npkST0DHUo+9lyaIGmDo8gW1FVazPLzcdRQghfmN5VimF5TVM8eBFxJ9YtqgnDYwjNNDOPLmoKISwoHlr8ugQ6uD0frEefy/LFnV4UACTBsTxyY+7qaytNx1HCCF+VlpVx6ItRVwwJJ6gALvH38+yRQ0wZUQCNfVOPvlxt+koQgjxs/nrCql36jYZ9gCLF/XghEh6x7Zj3hoZ/hBCWIPWmrmr8xiSGEmv2HZt8p6WLmqlFNNHJrKhYD8bCspNxxFCCFZk7yW79ACXjuzWZu9p6aIGmDykKyEOO3NWyVm1EMK82avyiAhxcM6ALm32npYv6vbBDs4bGMeC9F1UyEVFIYRBJZV1fLlpDxcNjSfY4fmLiD+xfFEDXDoqkZp6JwvkTkUhhEHvrc2nwaWZPjKxTd/XK4p6QHwk/bu2Z/YquVNRCGGGy6WZsyqPUckd6RET3qbv7VZRK6X+oJTarJTapJSaq5TyzKKrR3DpyG5k7qlkXd6+tn5rIYRg6fYSCvbVtOlFxJ8ctaiVUl2BW4FhWuv+gB2Y6ulghzpvYBzhQQHMXikXFYUQbW/2qjyiwgI5o1/nNn9vd4c+AoAQpVQAEArs8lykwwsLCmDy4K58unE35dWy/KkQou3s3l/D4sxiLhmeQGBA248YH/UdtdaFwL+BPGA3sF9r/dWhz1NKzVRKpSml0kpKSlo/KTB9ZCIHG1y8v7bAI68vhBCH886afJwuzbThbXsR8SfuDH10AM4HugNxQJhS6rJDn6e1nqW1Hqa1HhYT45mdDvp2ac+QxEjmyEVFIUQbaXC6mLc6n5N6xZAY5dnlTJvjzjn8RCBHa12ita4HPgTGeDZW8y4d2Y3s0gOyp6IQok0szixmT0Utl7bxlLxfcqeo84BRSqlQpZQCJgAZno3VvHMGdCEixCEXFYUQbWL2qjxi2wcxoU8nYxncGaNeBbwPrAM2Nv2eWR7O1axgh52Lh8bz5eY9FFXUmoohhPADO0sP8N22EqaNSPTonohH49Y7a63v01r30Vr311pfrrWu83SwI7lsVDecWsv6H0IIj3prZS4BNsX0EeaGPcBL7kw8VFJ0GCf3imHO6jwONrhMxxFC+KDqgw28l5bPmf0706l9m9/j9yteWdQAV4zu1rhAyuY9pqMIIXxQ40JwDVwxOsl0FO8t6pN7dSKxYyhvrcg1HUUI4WO01ry5Ipc+ndsxPKmD6TjeW9R2m+KyUYms3llGxu4K03GEED5kbe4+MnZXcMXoJBonu5nltUUNcMmwBIICbLwpZ9VCiFb03xW5tAsO4HeD40xHAby8qCNDAzl/UBwfrS9kf41sKiCEOH7FFbV8sXE3Fw9NIDQwwHQcwMuLGuCK0UnU1Dtl/Q8hRKuYu7pxc4DLR7f9cqbN8fqi7t81giGJkby9MheXS9b/EEK0XL3TxZzVuZzUK4bu0WGm4/zM64sa4MoxSeSUHmBZVqnpKEIIL/bV5iKKKuq40kJn0+AjRX1m/85Ehwfy5g87TUcRQnixN1fsJL5DCON7m1vX43B8oqiDAuxMH5HI4q3F7Cw9YDqOEMILbd61n1U5ZVwxuht2m/kpeb/kE0UNjet/BNgUb8hZtRCiBV5fvpPQQDtThpld1+NwfKaoO7UP5twBcbyXlk9FrUzVE0K4r6Syjo/Td3HhkHgiQh2m4/yGzxQ1wO/HdufAQSfvpclUPSGE++asyuOg08WMsUmmoxyWTxX1CfERDE/qwBs/5OCUqXpCCDfUNTh5a2Uup/SOoUdMuOk4h+VTRQ1w1dju5JfV8HVGkekoQggv8OmPuymtquOqsd1NR2mWzxX16amxdI0M4fXlOaajCCEsTmvNa8tzSOkUzrie0abjNMvnijrAbuOK0d1YmV3G5l37TccRQljYmp372LyrgqvGWmOVvOb4XFEDTB2eSIjDzhvLd5qOIoSwsNeX5xAR4uCCwfGmoxyRTxZ1RKiDi4bGsyB9F6VVRrd3FEJYVH5ZNV9u3sP0kYmEBNpNxzkinyxqgBljkzjodDF7pWyAK4T4rTdX7EQpxeWjrLWux+H4bFH3iAlnfO8Y3lqZS22903QcIYSFVNU1MG9N48a1cZEhpuMclc8WNcC145IprapjQXqh6ShCCAuZtzqPytoGZo5LNh3FLT5d1GN6RJHapT0vL8uRtaqFEAA0OF28vnwnI7p3ZGBCpOk4bvHpolZKMfOkZLKKq/huW4npOEIIC/h80x4Ky2u85mwafLyoAc4Z0IUuEcHMWpptOooQwjCtNbOW7iA5JoxT+1hrzekj8fmidtht/H5sd1Zk72VjgdwAI4Q/W5ldxqbCCq4dl4zNYmtOH4nPFzXA1BEJtAsK4OVlclYthD97eVk20eGBTB7c1XSUY+IXRd0u2MG0kYl8tnE3BfuqTccRQhiwvaiSxZnFXDE6iWCHtW9wOZRfFDXAjDFJKBp3cRBC+J9XluUQ7LBxmRfc4HIovynquMgQJg2MY97qPPbXyA4wQviT4spa5q8v5OKhCXQMCzQd55j5TVEDXDOucQeYuavltnIh/MmbP+RS73Jx9YnWXXP6SPyqqPvFRTA2JYrXl+dQ1yC3lQvhDw7UNfD2qlxOT40lKT
rMdJwW8auiBrj+5B4UVdTx0Xq5rVwIfzB3dR7l1fVcd3IP01FazO+K+sSUaPp3bc+L32XLvopC+Li6BievLMthVHJHhiR2MB2nxdwqaqVUpFLqfaVUplIqQyk12tPBPEUpxY3jU8gpPcDCTXtMxxFCeNBH6wvZU1HLjeNTTEc5Lu6eUT8NLNRa9wEGAhmei+R5Z/TrTHJ0GC98l4XWclYthC9yujQvfZdN/67tLb0fojuOWtRKqQjgJOBVAK31Qa11uYdzeZTdprju5GQ2FVawbHup6ThCCA/4cvMesksPcMPJKZbeD9Ed7pxRdwdKgNeVUuuVUq8opX5z6VQpNVMplaaUSispsf5KdZMHx9O5fTDPf5tlOooQopVprXn+2yySo8M4s39n03GOmztFHQAMAV7QWg8GDgB3HfokrfUsrfUwrfWwmJiYVo7Z+gIDbFwzrjsrs8tYl7fPdBwhRCv6PquUTYUVXHdyMnYvWnypOe4UdQFQoLVe1fT5+zQWt9ebNiKRyFAHL3y7w3QUIUQren7JDmLbB/E7L1t8qTlHLWqt9R4gXynVu+mhCcAWj6ZqI2FBAVw5OolFW4rYVlRpOo4QohWsz9vHiuy9XDsumaAA71p8qTnuzvq4BZitlNoADAIe8liiNjZjTBIhDjsvylm1ED7h+W93EBHiYNqIRNNRWo1bRa21Tm8afx6gtf6d1tpnBnU7hAUybUQiC37cRd5eWQJVCG+2dU8li7YUceWYJMKCAkzHaTV+d2fi4fx0weGF72QGiBDe7NnF2wkLtPP7sUmmo7QqKWogtn0wU4cn8P7aAtlYQAgvlVVcyWcbd3PlmCQiQ71vKdMjkaJucn3Tgi0vfidj1UJ4o+cWZxHisHONF+0u7i4p6iZxkSFcNDSBd9cUsGd/rek4QohjkFN6gI9/3MVlo7p55cYARyNF/Qs3ju+BS2s5qxbCy/xnSRYOu41rffBsGqSofyWhYygXDOnK3NV5FFfIWbUQ3iBvbzXz1xdy6chuxLQLMh3HI6SoD3HTKSk0uDSzlmabjiKEcMPz32b9vNCar5KiPkS3qDDOHxTH26tyKa2qMx1HCHEEBfuqeX9tAdOGJxDbPth0HI+Roj6Mm05Joa7BxcvL5KxaCCt74dsdKIVXb7PlDinqw+gRE86kAXG8tSKXvXJWLYQl7Sqv4b20Ai4elkBcZIjpOB4lRd2MWyekUFvv5CUZqxbCkp5dnIVGc+N43z6bBinqZqV0asfvBnXlzRU7Ka6UGSBCWEne3mreS8tn2ohE4juEmo7jcVLUR3DrhJ7UOzXPL5F51UJYyTOLt2O3KW46xbs3rXWXFPURJEWHcdGQeOasymNXeY3pOEIIYEdJFR+uK+CyUd18eqbHL0lRH8UtE1LQaJ5bIivrCWEFT3+9naAAOzf4wdj0T6SojyK+QyhThyfy7pp8Wa9aCMO27qnkkw27mDE2iehw37wL8XCkqN1w0ykp2GyKZxZvNx1FCL/25KJthAUGMNNH1/RojhS1GzpHBHP5qG58uK6A7JIq03GE8EubCvezcPMerj6xOx18cIW8I5GidtMN43sQFGDnqa/lrFoIE55YtI2IEAdXj+tuOkqbk6J2U3R4EDPGJvHJhl1k7K4wHUcIv7I2dx+LM4uZeVIy7YMdpuO0OSnqY3DdScm0CwrgsS+3mo4ihN/QWvPIF5lEhwdxlY/theguKepjEBkayA3jU1icWcyq7L2m4wjhF5ZsLWb1zjJum9iT0EDf2Vn8WEhRH6MZY5KIbR/Ewwsz0VqbjiOET3O6NI98sZWkqFCmDk8wHccYKepjFBJo5w8Te7E+r5yvthSZjiOET/tofSFbiyr50xm9cdj9t67898iPw0VD4+kRE8ajCzNpcLpMxxHCJ9XWO3li0TZO6BrB2f27mI5jlBR1CwTYbfz5jD7sKDnAB+sKTMcRwie9vTKXwvIa7jqrDzabMh3HKCnqFjqjXyyDEyN5ctF2auudpuMI4VMqaut5bkkW43pGMzYl2nQc46SoW0gpxZ1n9mFPRS1v/LDTdBwhfMqs77Ipr67nzjP7mI5iCVLUx2FUchSn9I7h+SVZlFcfNB1HCJ9QXFHLq9/nMGlgHP27RpiOYwlS1MfpzrP6UFXXwDPfyDKoQrSGf3+1lQaXiz+d3st0FMuQoj5OfTq355JhCby5Yqcs2CTEcdq8az/vrS3gytFJdIsKMx3HMqSoW8Edp/ciKMDGw19kmo4ihNfSWvPgZxlEhji4ZUJP03EsRYq6FXRqF8yNp6Tw1ZYiVuyQW8uFaIlvMor5Ycdebp/Yi4gQ/1t46UikqFvJ1Sd2Jy4imH9+tgWXS24tF+JY1DtdPPR5BskxYUwfmWg6juVIUbeSYIedO8/qw+ZdFXy4vtB0HCG8yuyVuWSXHuCes/v69a3izXH7/4hSyq6UWq+U+tSTgbzZeQPjGJQQyWNfZlJ9sMF0HCGsbfZsSEpC22ycfs4o/lyaxql9OplOZUnH8qPrNiDDU0F8gVKK/zu3L0UVdcxamm06jhDWNXs2zJwJubkorYnbX8wNcx5BzZljOpkluVXUSql44BzgFc/G8X5Du3XknAFdeOm7bHbvrzEdRwhruuceqK7+1UO2mprGx8VvuHtG/RTwF6DZpeKUUjOVUmlKqbSSkpLWyOa17jqzDy6t+dfnMl1PiMPKyzu2x/3cUYtaKXUuUKy1Xnuk52mtZ2mth2mth8XExLRaQG+U0DGU607uwcc/7mKl7AQjxG8lNjOzo7nH/Zw7Z9RjgfOUUjuBecCpSqm3PZrKB9xwcg+6RoZw/8ebZc1qIQ5x8B8PUOsI+vWDoaHw4INmAlncUYtaa/1XrXW81joJmAos1lpf5vFkXi4k0M7/nZtK5p5K3l6ZazqOEJYyK34UfznjZmrj4kEp6NYNZs2CSy81Hc2SZMKiB53RL5ZxPaN5fNE2SqvqTMcRwhIKy2t4bkkW9VOmEVyYDy4X7NwpJX0Ex1TUWutvtdbneiqMr1FKcd+kftQcdPLYwq2m4whhCQ991jjL955z+hpO4j3kjNrDUjqFc/WJ3XknLZ/0/HLTcYQwanlWKZ9t3M1N41OI7xBqOo7XkKJuA7dM6EmndkHct2CTrAMi/Fa908V9H28msWMo156UbDqOV5GibgPhQQHcfXZffizYz7w1+abjCGHE68tzyCqu4t5zUwl22E3H8SpS1G3k/EFxjEruyMNfZFBSKRcWhX8p2FfNk4u2M7FvJyb0lfU8jpUUdRtRSvHg5BOorXfxz8+2mI4jRJvRWnPvgs0oBX8/vz9KKdORvI4UdRvqERPODeN7sCB9F0u3+fdt9sJ/LNy0h8WZxdxxWi+6RoaYjuOVpKjb2A3je5AcHcbfPtpEbb3TdBwhPKqytp77P9lMapf2zBiTZDqO15KibmPBDjv/nNyfvLJqnlssO5cL3/b4V9sorqzjXxecQIBsCNBi8n/OgDE9orlgSFdeWrqDbUWVpuMI4RE/5pfz3xU7uWJUNwYmRJqO49WkqA255+y+hAUFcM/8jTK3WvicB
qeLv364kU7tgvjjGb1Nx/F6UtSGRIUHcffZfVmzcx9z18gavMK3vLY8hy27K7h/Uj/aB8uO4sdLitqgi4fGM6ZHFP/6PJPCctkNRviG7JIqHv9qGxP7xnJm/86m4/gEKWqDlFI8fMEAnC7NXz/ciNYyBCK8m9Ol+cv7GwgKsPHQZJkz3VqkqA1LjArlzjN7s3RbCe+vLTAdR4jj8uaKnaTl7uPeSf3o1D7YdByfIUVtAVeMTmJEUkce+HQLRRW1puMI0SK5ew/w6MKtjO8dw4VDupqO41OkqC3AZlM8ctEA6hpc3DNfhkCE93G5NHd+sIEAm+JfF5wgQx6tTIraIrpHh/HnM3rzdUYxC9J3mY4jxDGZvTqPldll3HNOX7pEyG3irU2K2kKuGtudIYmR3P/JZoorZQhEeIeCfdU8/HkG43pGM2V4guk4PkmK2kLsNsWjFw2k+qCTu2UWiPACLpfmz+9tAJAhDw+SoraYlE7h/KVpCEQ2GRBW9+r3OazI3su9k1Jlay0PkqK2oN+P7c7YlCge+HQLO0sPmI4jxGFl7K7gsS+3cnpqLJcMkyEPT5KitiCbTfHviwcSYFPc/k46DU6X6UhC/EptvZM/vJNO+xCHDHm0ASlqi+oSEcKDk08gPb+c/yzZYTqOEL/y+FdbydxTyWMXDSAqPMh0HJ8nRW1hkwbG8btBcTyzeDvr8/aZjiMEAD9klfLyshwuG5XIKX1k/8O2IEVtcX8/vz+x7YK4490fqT7YYDqO8HP7q+v543s/khwdxj1np5qO4zekqC0uIsTB45cMYufeA/zjE9kUV5ijtebujzZSUlnHk1MGERJoNx3Jb0hRe4HRPaK44eQezFuTz4L0QtNxhJ+aszqPzzbs5o7Te8mOLW1MitpL3HFaL4Z168DdH24ku6TKdBzhZ7bsquDvn2zhpF4xXH9SD9Nx/I4UtZcIsNt4ZtpgHAE2bpqzXnYwF22mqq6Bm+esIzLEwROXDMRmk6l4bU2K2ovERYbwxCUDydhdwT8/k/Fq4Xlaa/42fyM79x7gmWmDiZapeEZIUXuZU/vEMvOkZN5e2TheKIQnvZdWwEfpu7h9Yi9GJUeZjuO3pKi90J/P6M2ghEju+mADuXvlFnPhGduKKrn3402M6RHFTaekmI7j16SovZDDbuO56YNRCm54ex01B2W8WrSuytp6rn97LeFBATw1dRB2GZc2SoraS8V3COWpqYPI2FPBXz/cIEuiilbjcmnuePdHcvdW89z0IXRqJ3sfmiZF7cVO7RPLHRN78VH6Ll5fvtN0HOEjnluSxaItRfztnL4yLm0RRy1qpVSCUmqJUmqLUmqzUuq2tggm3HPTKSmcnhrLg59nsGLHXtNxhJf7JqOIJ7/exgWDuzJjTJLpOKKJO2fUDcAftdapwCjgJqWU3ORvETab4vFLBpIUFcrNc9ZRWF5jOpLwUtklVdw+L53ULu15SJYutZSjFrXWerfWel3TryuBDED2greQdsEOZl0xjLoGF9e/tVZuhhHHrKqugeveWkuAXfHS5UMJdsg6HlZyTGPUSqkkYDCw6jBfm6mUSlNKpZWUlLRSPOGuHjHhPDllEBsL98t+i+KYuFyaP76bzo6SKv4zfYhsqWVBbhe1Uioc+AC4XWtdcejXtdaztNbDtNbDYmJiWjOjcNNpqbHccVovPlxfyHOLs0zHEV7ikYWZfLm5iL+dk8qYlGjTccRhBLjzJKWUg8aSnq21/tCzkcTxuOXUFHaWHuDxRdtIjArl/EEySiWaN3d1Hi8tzebyUd24amyS6TiiGe7M+lDAq0CG1voJz0cSx0Mpxb8uPIER3Tvy5/c3sDa3zHQkYVHLtpfwt482Mb53DPdNSpWLhxbmztDHWOBy4FSlVHrTx9keziWOQ1CAnZcuG0rXyBCufXOt3GYufmNbUSU3vr2Onp3CeXbaYALsckuFlbkz6+N7rbXSWg/QWg9q+vi8LcKJlusQFshrM4bj0pqr3ljD/up605GERZRU1nHV62sIDrTz6ozhtAt2mI4kjkJ+jPqw7tFhvHTZUPLLqpn5VppM2xMcqGvgmjfT2HugjlevHEbXyBDTkYQbpKh93MjkKP598UBW5ZRx69z1NDhdpiMJQ+oanFz/9lo2Fe7n2WlDGBAfaTqScJMUtR84f1BX7p+UyldbivirzLH2S06X5o53fmTZ9lIeuXAAp6XGmo4kjoFb0/OE95sxtjv7qut5+pvtRIY6uPvsvnKV309orfnbR5v4bONu/nZOXy4aGm86kjhGUtR+5PaJPSmvPsjLy3LoEBbIjeNlMXh/8NiXW5m7Oo+bTunBNeOSTccRLSBF7UeUUtw3qR/lNfU8unArkSGBTB+ZaDqW8KCXl2bz/Lc7mD4ykT+d3tt0HNFCUtR+xmZT/PvigVTWNnDPRxsJsCkuGZ5gOpbwgNe+z+HBzzM4Z0AXHji/vwx1eTG5mOiHHHYbz186hJN6xvCXDzbwzpo805FEK3v1+xz+8ekWzurfmaemyFZa3k6K2k8FO+y8dPlQxveO4c4PNjJvtZS1r3hlWTYPfLqFs0/ozDPTBuOQuw69nnwH/Viww86Llw3llN4x3PXhRuaskrL2dq8sy+afn2VwzgldeHqqlLSvkO+inwt22Hnx8sayvnv+RmavyjUdSbTQy0v/V9JPTR0kJe1D5DspCApoLOtT+3TinvmbeP7bLLkpxotorXnsy8yfLxw+LSXtc+S7KYCmsr5sKOcNjOPRhVt54NMMXC4pa6trcLq464ON/GfJDqaNSOCZqbISni+S6XniZ4EBNp6aMoio8EBeW57D3gN1PHbRQAID5C++FdXWO7ll7noWbSni1lNT+MNpvWQKno+Soha/YrMp7j03lZh2QTy6cCv7qut54dIhhAXJHxUr2V9Tz7X/TWNNbhl/P68fV45JMh1JeJCcKonfUEpx4/gUHr1wAN9vL2H6yysprqg1HUs0KdhXzZSXVrA+fx/PThssJe0HpKhFsy4ZnsCsy4exvbiK855bzoaCctOR/N6anWWc/9xyCstreH3GCM4dEGc6kmgDUtTiiCamxvLBDWOw2xQXv7iCBemFpiP5rXmr85j+8koiQhx8dNNYTuwpO4b7CylqcVR9u7Tn45vHMjA+ktvmpfPYl5kyI6QNNThd3P/xZu76cCOjkqOYf+NYesSEm44l2pAUtXBLVHgQb18zkmkjEvjPkh3MfGst+2tkH0ZP21tVx1VvrOGNH3Zy9YndeX3GcCJCZY9DfyNFLdwWGGDjockncP+kVL7dWszZTy9jXd4+07F81g87Sjnr6WWsyinj0QsH8H/npsocaT8l33VxTJRSzBjbnfeuH41ScPGLK3jh2x0yFNKKGpwunli0jUtfWUV4cADzbxwjS9H6OSlq0SKDEzvw2a3jOLNfZx5ZmMmVr6+mpLLOdCyvt3t/DdNfXsUz32znwiHxfHLzifSLizAdSxgmRS1aLCLEwXPTB/PQ5BNYnVPGWU8vY+Gm3aZjeSWtNQvSCznr6WVs2rWfJ6cM5N8XD5QbjQQgRS2O
k1KK6SMT+fjmE+nULojr317HjbPXUlwpN8i4a1d5DVf/N43b5qWTFBXGp7ecyOTBsgGt+B/liVXShg0bptPS0lr9dYW11TtdzFqazdPfbCfEYef/zk3lwiFdZf2JZrhcmjmr83j4i0ycLs2fzujNjDFJshuLn1JKrdVaDzvs16SoRWvLKq7izg82sDZ3Hyf1iuH+Sakky7zfX9m6p5J7F2xiVU4ZY1Oi+NfkASRGhZqOJQySohZtzuXSvLUyl0cXZlLX4OLy0d24bUJPIkMDTUczqqSyjie/3sa81XmEBwVwzzl9uWRYgvyrQ0hRC3NKKut4YtE23lmTR7tgB7dO6Mnlo7r53dKptfVOXluew/NLdlBb7+SyUY0/uDqE+fcPLvE/UtTCuMw9FTz4WQbLtpfSPTqMW05NYdLAOJ/fiaSuwcn8dYU8uziLwvIaJvaN5a9n95FbwMVvSFELS9Ba8+22Eh75IpPMPZXEdwjhupN7cPHQeIIddtPxWlX1wQbmrs7n5aXZ7Kmo5YSuEdx1Vh/GpshCSuLwpKiFpWitWZxZzHNLslifV050eBDXjOvO1OEJXj+Gvbeqjjmr8nhteQ77qusZldyRG8enMK5ntIxDiyOSohaWpLVmZXYZz3+bxbLtpQQG2Dirf2emDEtgVHIUNi+ZpuZ0ab7PKuWdNXks2lJEvVMzoU8nbjylB0O7dTQdT3iJIxW13PYkjFFKMbpHFKN7RLFlVwXvrMlj/vpCFqTvIrFjKJcMi2fSwDi6RYWZjnpYO0qq+Dh9F++vLaCwvIYOoQ6uGJ3E1OEJ9IxtZzqe8CFyRi0spbbeycJNe5i3Jo+V2WUA9OwUzoS+sZyW2olBCR2M3RDS4HSxNncfX2cU8XVGMTmlBwAY1zOaKcMTOC01lqAA3xprF23nuIc+lFJnAk8DduAVrfXDR3q+FLVoDfll1SzaUsTXGUWszimjwaWJCgtkZHJHBiVEMjixA/3jIggJ9Ew5HqhrYGPhftbnlZOev49VOWWUV9fjsCtGJUdxWmosE/rG0jUyxCPvL/zLcRW1UsoObANOAwqANcA0rfWW5n6PFLVobftr6vluWwmLM4pYm7eP/LIaAOw2RZ/O7egV246EjqEkdAghoWMoiR1D6RgWSFCArdmLeFpr6hpclFbVkV9WQ35ZNfn7qskvqyZzTyXbiir5afXWblGhDO3WgYl9YxnXM5p2wbJ4v2hdxztGPQLI0lpnN73YPOB8oNmiFqK1RYQ4OG9gHOcNbNzMtbSqjvS8ctLzGz9W55TxUXohh5532BSEBgYQEmgnNNCO1lB90EnNwQZq6p0cuoy2TUGXiBCSY8I4PTWWwYkdGJgQSUe5MUUY5E5RdwXyf/F5ATDy0CcppWYCMwESExNbJZwQzYkOD2JiaiwTU2N/fuxgg4td5TXk76smr6ya8up6ag46qT7opPpgA9UHnSgFoYF2QhwBjf8NtNMxLJCEDo1n4V0ig33+JhzhfVpt1ofWehYwCxqHPlrrdYVwV2CAjaToMJKirTlLRIiWcufUoRD45T5A8U2PCSGEaAPuFPUaoKdSqrtSKhCYCnzs2VhCCCF+ctShD611g1LqZuBLGqfnvaa13uzxZEIIIQA3x6i11p8Dn3s4ixBCiMOQy9tCCGFxUtRCCGFxUtRCCGFxUtRCCGFxHlk9TylVAuS28LdHA6WtGMckXzkWXzkOkGOxIl85Dji+Y+mmtY453Bc8UtTHQymV1tzCJN7GV47FV44D5FisyFeOAzx3LDL0IYQQFidFLYQQFmfFop5lOkAr8pVj8ZXjADkWK/KV4wAPHYvlxqiFEEL8mhXPqIUQQvyCFLUQQlicJYtaKfWAUmqDUipdKfWVUirOdKaWUEo9ppTKbDqW+UqpSNOZWkopdbFSarNSyqWU8rqpVEqpM5VSW5VSWUqpu0znOR5KqdeUUsVKqU2msxwPpVSCUmqJUmpL05+t20xnaimlVLBSarVS6semY/l7q76+FceolVLttdYVTb++FUjVWl9vONYxU0qdDixuWir2EQCt9Z2GY7WIUqov4AJeAv6ktfaa3YtbskGzlSmlTgKqgDe11v1N52kppVQXoIvWep1Sqh2wFvidN35fVOMOymFa6yqllAP4HrhNa72yNV7fkmfUP5V0kzDAej9N3KC1/kpr3dD06Uoad8fxSlrrDK31VtM5WujnDZq11geBnzZo9kpa66VAmekcx0trvVtrva7p15VABo17tHod3aiq6VNH00er9ZYlixpAKfWgUiofuBS413SeVvB74AvTIfzU4TZo9spC8FVKqSRgMLDKcJQWU0rZlVLpQDGwSGvdasdirKiVUl8rpTYd5uN8AK31PVrrBGA2cLOpnEdztONoes49QAONx2JZ7hyLEK1NKRUOfADcfsi/pr2K1tqptR5E47+cRyilWm1YqtV2IT9WWuuJbj51No27y9znwTgtdrTjUErNAM4FJmgrXhD4hWP4nngb2aDZoprGcz8AZmutPzSdpzVorcuVUkuAM4FWueBryaEPpVTPX3x6PpBpKsvxUEqdCfwFOE9rXW06jx+TDZotqOkC3KtAhtb6CdN5jodSKuanWV1KqRAaL1y3Wm9ZddbHB0BvGmcZ5ALXa6297gxIKZUFBAF7mx5a6Y2zVwCUUpOBZ4EYoBxI11qfYTTUMVBKnQ08xf82aH7QbKKWU0rNBcbTuKRmEXCf1vpVo6FaQCl1IrAM2Ejj33WAu5v2aPUqSqkBwH9p/PNlA97VWv+j1V7fikUthBDifyw59CGEEOJ/pKiFEMLipKiFEMLipKiFEMLipKiFEMLipKiFEMLipKiFEMLi/h95yyGcg55E7QAAAABJRU5ErkJggg==\n",
  872. "text/plain": [
  873. "<Figure size 432x288 with 1 Axes>"
  874. ]
  875. },
  876. "metadata": {
  877. "needs_background": "light"
  878. },
  879. "output_type": "display_data"
  880. }
  881. ],
  882. "source": [
  883. "import numpy as np\n",
  884. "import matplotlib.pyplot as plt\n",
  885. "\n",
  886. "x = np.arange(-3, 3.01, 0.1)\n",
  887. "y = x ** 2\n",
  888. "plt.plot(x, y)\n",
  889. "plt.plot(2, 4, 'ro')\n",
  890. "plt.show()"
  891. ]
  892. },
  893. {
  894. "cell_type": "code",
  895. "execution_count": 57,
  896. "metadata": {},
  897. "outputs": [
  898. {
  899. "name": "stdout",
  900. "output_type": "stream",
  901. "text": [
  902. "tensor([4.])\n"
  903. ]
  904. }
  905. ],
  906. "source": [
  907. "import torch\n",
  908. "from torch.autograd import Variable\n",
  909. "\n",
  910. "# 答案\n",
  911. "x = Variable(torch.FloatTensor([2]), requires_grad=True)\n",
  912. "y = x ** 2\n",
  913. "y.backward()\n",
  914. "print(x.grad)"
  915. ]
  916. },
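{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check added while editing: for $y = x^2$ the analytic derivative is $dy/dx = 2x$, so at $x = 2$ it equals 4, which matches the gradient printed above."
]
},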
  917. {
  918. "cell_type": "markdown",
  919. "metadata": {},
  920. "source": [
  921. "下一次课程我们将会从导数展开,了解 PyTorch 的自动求导机制"
  922. ]
  923. }
  924. ],
  925. "metadata": {
  926. "kernelspec": {
  927. "display_name": "Python 3",
  928. "language": "python",
  929. "name": "python3"
  930. },
  931. "language_info": {
  932. "codemirror_mode": {
  933. "name": "ipython",
  934. "version": 3
  935. },
  936. "file_extension": ".py",
  937. "mimetype": "text/x-python",
  938. "name": "python",
  939. "nbconvert_exporter": "python",
  940. "pygments_lexer": "ipython3",
  941. "version": "3.6.9"
  942. }
  943. },
  944. "nbformat": 4,
  945. "nbformat_minor": 2
  946. }

Machine learning is increasingly applied to aircraft, robots, and related fields, with the goal of using computers to achieve human-like intelligence and thereby make such equipment intelligent and autonomous. This course guides students through the basic concepts, typical methods, and techniques of machine learning, uses concrete application cases to spark interest in the subject, and encourages students to analyze and solve the problems and challenges faced by aircraft and robots from the perspective of artificial intelligence. The main topics include Python programming fundamentals, machine learning models, the basics and implementation of unsupervised learning, supervised learning, and deep learning, and how to apply machine learning to real problems, thereby comprehensively strengthening students' overall abilities.