
1-Tensor-and-Variable.ipynb 35 kB

  1. {
  2. "cells": [
  3. {
  4. "cell_type": "markdown",
  5. "metadata": {},
  6. "source": [
  7. "# Tensor and Variable\n",
  8. "\n",
  9. "\n",
  10. "A tensor is a specialized data structure, very similar to arrays and matrices. In PyTorch we use tensors to encode a model's inputs and outputs, as well as the model's parameters.\n",
  11. "\n",
  12. "Tensors are similar to NumPy's `ndarray`, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see Bridge with NumPy). Tensors are also optimized for automatic differentiation; the Autograd section covers this in more detail.\n",
  13. "\n",
  14. "A `variable` is a value that keeps changing, with the properties needed for backpropagation and parameter updates. PyTorch's `variable` is a memory location holding a changing value: think of a box of candy, where the candy is the data (a tensor) and the amount of candy keeps changing. PyTorch computes with tensors, and the parameters inside a tensor take the form of variables.\n"
  15. ]
  16. },
  17. {
  18. "cell_type": "markdown",
  19. "metadata": {},
  20. "source": [
  21. "## 1. Basic Tensor usage\n",
  22. "\n",
  23. "PyTorch's fundamental data structure is the tensor. Many PyTorch operations are similar to NumPy's, but because they can run on the GPU they can be many times faster. In this lesson you will learn to use PyTorch much like NumPy and get to know its basic elements, Tensor and Variable, and how to operate on them."
  24. ]
  25. },
  26. {
  27. "cell_type": "markdown",
  28. "metadata": {},
  29. "source": [
  30. "### 1.1 Defining and creating Tensors"
  31. ]
  32. },
  33. {
  34. "cell_type": "code",
  35. "execution_count": 1,
  36. "metadata": {},
  37. "outputs": [],
  38. "source": [
  39. "import torch\n",
  40. "import numpy as np"
  41. ]
  42. },
  43. {
  44. "cell_type": "code",
  45. "execution_count": 2,
  46. "metadata": {},
  47. "outputs": [],
  48. "source": [
  49. "# create a NumPy ndarray\n",
  50. "numpy_tensor = np.random.randn(10, 20)"
  51. ]
  52. },
  53. {
  54. "cell_type": "markdown",
  55. "metadata": {},
  56. "source": [
  57. "We can convert a NumPy ndarray to a tensor in either of the following two ways"
  58. ]
  59. },
  60. {
  61. "cell_type": "code",
  62. "execution_count": 3,
  63. "metadata": {},
  64. "outputs": [],
  65. "source": [
  66. "pytorch_tensor1 = torch.Tensor(numpy_tensor)\n",
  67. "pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
  68. ]
  69. },
  70. {
  71. "cell_type": "markdown",
  72. "metadata": {},
  73. "source": [
  74. "Note that the two conversions behave differently: `torch.from_numpy` keeps the ndarray's data type (and shares its memory), while `torch.Tensor` converts the data to PyTorch's default float32 type"
  75. ]
  76. },
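To make the dtype and memory behavior concrete, here is a minimal sketch (assuming `torch` and `numpy` are installed) comparing the two conversion paths:

```python
import numpy as np
import torch

arr = np.random.randn(10, 20)     # NumPy defaults to float64

t_cast = torch.Tensor(arr)        # copies and casts to torch's default float32
t_shared = torch.from_numpy(arr)  # keeps float64 and shares memory with arr

print(t_cast.dtype)               # torch.float32
print(t_shared.dtype)             # torch.float64

# from_numpy shares the underlying buffer, so writes to arr show up in t_shared
arr[0, 0] = 42.0
print(t_shared[0, 0].item())      # 42.0
```

Because of the shared buffer, `from_numpy` is the better choice when you want zero-copy interop and the original dtype preserved.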
  77. {
  78. "cell_type": "markdown",
  79. "metadata": {},
  80. "source": [
  81. "\n"
  82. ]
  83. },
  84. {
  85. "cell_type": "markdown",
  86. "metadata": {},
  87. "source": [
  88. "We can also convert a PyTorch tensor back to a NumPy ndarray as follows"
  89. ]
  90. },
  91. {
  92. "cell_type": "code",
  93. "execution_count": 4,
  94. "metadata": {},
  95. "outputs": [],
  96. "source": [
  97. "# if the PyTorch tensor is on the CPU\n",
  98. "numpy_array = pytorch_tensor1.numpy()\n",
  99. "\n",
  100. "# if the PyTorch tensor is on the GPU\n",
  101. "numpy_array = pytorch_tensor1.cpu().numpy()"
  102. ]
  103. },
  104. {
  105. "cell_type": "markdown",
  106. "metadata": {},
  107. "source": [
  108. "Note that a tensor on the GPU cannot be converted to a NumPy ndarray directly; use `.cpu()` to move it to the CPU first"
  109. ]
  110. },
  111. {
  112. "cell_type": "markdown",
  113. "metadata": {},
  114. "source": [
  115. "\n"
  116. ]
  117. },
  118. {
  119. "cell_type": "markdown",
  120. "metadata": {},
  121. "source": [
  122. "### 1.2 Accelerating PyTorch Tensors with the GPU\n",
  123. "\n",
  124. "We can put a tensor on the GPU in either of the following two ways"
  125. ]
  126. },
  127. {
  128. "cell_type": "code",
  129. "execution_count": 7,
  130. "metadata": {},
  131. "outputs": [],
  132. "source": [
  133. "# the first way: define a cuda data type\n",
  134. "dtype = torch.cuda.FloatTensor # the data type on the default GPU\n",
  135. "gpu_tensor = torch.randn(10, 20).type(dtype)\n",
  136. "\n",
  137. "# the second way is simpler and recommended\n",
  138. "gpu_tensor = torch.randn(10, 20).cuda(0) # put the tensor on the first GPU\n",
  139. "gpu_tensor = torch.randn(10, 20).cuda(1) # put the tensor on the second GPU"
  140. ]
  141. },
  142. {
  143. "cell_type": "markdown",
  144. "metadata": {},
  145. "source": [
  146. "The first way converts the tensor to the defined data type while moving it to the GPU, whereas the second way moves the tensor to the GPU and keeps its original type\n",
  147. "\n",
  148. "We recommend setting the data type when the tensor is defined and then using the second way to move it to the GPU"
  149. ]
  150. },
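In current PyTorch the device-agnostic `torch.device`/`.to()` API covers both directions; a small sketch that also runs on CPU-only machines:

```python
import torch

# pick a GPU if available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.randn(10, 20)
x = x.to(device)    # move to the chosen device; the dtype is unchanged
print(x.device)

x = x.cpu()         # moving back to the CPU works from either device
print(x.type())     # torch.FloatTensor
```

Writing code against a `device` variable like this lets the same script run with or without a GPU.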
  151. {
  152. "cell_type": "markdown",
  153. "metadata": {},
  154. "source": [
  155. "Moving a tensor back to the CPU is just as simple"
  156. ]
  157. },
  158. {
  159. "cell_type": "code",
  160. "execution_count": 8,
  161. "metadata": {},
  162. "outputs": [],
  163. "source": [
  164. "cpu_tensor = gpu_tensor.cpu()"
  165. ]
  166. },
  167. {
  168. "cell_type": "markdown",
  169. "metadata": {},
  170. "source": [
  171. "We can also access some of a Tensor's attributes"
  172. ]
  173. },
  174. {
  175. "cell_type": "code",
  176. "execution_count": 9,
  177. "metadata": {},
  178. "outputs": [
  179. {
  180. "name": "stdout",
  181. "output_type": "stream",
  182. "text": [
  183. "torch.Size([10, 20])\n",
  184. "torch.Size([10, 20])\n"
  185. ]
  186. }
  187. ],
  188. "source": [
  189. "# two ways to get a tensor's size\n",
  190. "print(pytorch_tensor1.shape)\n",
  191. "print(pytorch_tensor1.size())"
  192. ]
  193. },
  194. {
  195. "cell_type": "code",
  196. "execution_count": 10,
  197. "metadata": {},
  198. "outputs": [
  199. {
  200. "name": "stdout",
  201. "output_type": "stream",
  202. "text": [
  203. "torch.FloatTensor\n",
  204. "torch.cuda.FloatTensor\n"
  205. ]
  206. }
  207. ],
  208. "source": [
  209. "# get the tensor's data type\n",
  210. "print(pytorch_tensor1.type())\n",
  211. "print(gpu_tensor.type())"
  212. ]
  213. },
  214. {
  215. "cell_type": "code",
  216. "execution_count": 11,
  217. "metadata": {},
  218. "outputs": [
  219. {
  220. "name": "stdout",
  221. "output_type": "stream",
  222. "text": [
  223. "2\n"
  224. ]
  225. }
  226. ],
  227. "source": [
  228. "# get the tensor's number of dimensions\n",
  229. "print(pytorch_tensor1.dim())"
  230. ]
  231. },
  232. {
  233. "cell_type": "code",
  234. "execution_count": 12,
  235. "metadata": {},
  236. "outputs": [
  237. {
  238. "name": "stdout",
  239. "output_type": "stream",
  240. "text": [
  241. "200\n"
  242. ]
  243. }
  244. ],
  245. "source": [
  246. "# get the tensor's total number of elements\n",
  247. "print(pytorch_tensor1.numel())"
  248. ]
  249. },
  250. {
  251. "cell_type": "markdown",
  252. "metadata": {},
  253. "source": [
  254. "### 1.3 Exercise\n",
  255. "\n",
  256. "Consult the [docs](http://pytorch.org/docs/0.3.0/tensors.html) to learn about tensor data types, create a randomly initialized float64 tensor of size 3 x 2, convert it to a NumPy ndarray, and print its data type\n",
  257. "\n",
  258. "Expected output: float64"
  259. ]
  260. },
  261. {
  262. "cell_type": "code",
  263. "execution_count": 6,
  264. "metadata": {},
  265. "outputs": [
  266. {
  267. "name": "stdout",
  268. "output_type": "stream",
  269. "text": [
  270. "float64\n"
  271. ]
  272. }
  273. ],
  274. "source": [
  275. "# answer\n",
  276. "x = torch.randn(3, 2)\n",
  277. "x = x.type(torch.DoubleTensor)\n",
  278. "x_array = x.numpy()\n",
  279. "print(x_array.dtype)"
  280. ]
  281. },
  282. {
  283. "cell_type": "markdown",
  284. "metadata": {},
  285. "source": [
  286. "\n"
  287. ]
  288. },
  289. {
  290. "cell_type": "markdown",
  291. "metadata": {},
  292. "source": [
  293. "## 2. Tensor operations\n",
  294. "The tensor API is very similar to NumPy's: if you are familiar with NumPy operations, tensors work essentially the same way. Below we list some of these operations"
  295. ]
  296. },
  297. {
  298. "cell_type": "markdown",
  299. "metadata": {},
  300. "source": [
  301. "### 2.1 Basic operations"
  302. ]
  303. },
  304. {
  305. "cell_type": "code",
  306. "execution_count": 13,
  307. "metadata": {},
  308. "outputs": [
  309. {
  310. "name": "stdout",
  311. "output_type": "stream",
  312. "text": [
  313. "tensor([[1., 1.],\n",
  314. " [1., 1.],\n",
  315. " [1., 1.]])\n"
  316. ]
  317. }
  318. ],
  319. "source": [
  320. "x = torch.ones(3, 2)\n",
  321. "print(x) # this is a float tensor"
  322. ]
  323. },
  324. {
  325. "cell_type": "code",
  326. "execution_count": 14,
  327. "metadata": {},
  328. "outputs": [
  329. {
  330. "name": "stdout",
  331. "output_type": "stream",
  332. "text": [
  333. "torch.FloatTensor\n"
  334. ]
  335. }
  336. ],
  337. "source": [
  338. "print(x.type())"
  339. ]
  340. },
  341. {
  342. "cell_type": "code",
  343. "execution_count": 15,
  344. "metadata": {},
  345. "outputs": [
  346. {
  347. "name": "stdout",
  348. "output_type": "stream",
  349. "text": [
  350. "tensor([[1, 1],\n",
  351. " [1, 1],\n",
  352. " [1, 1]])\n"
  353. ]
  354. }
  355. ],
  356. "source": [
  357. "# convert it to an integer tensor\n",
  358. "x = x.long()\n",
  359. "# x = x.type(torch.LongTensor)\n",
  360. "print(x)"
  361. ]
  362. },
  363. {
  364. "cell_type": "code",
  365. "execution_count": 16,
  366. "metadata": {},
  367. "outputs": [
  368. {
  369. "name": "stdout",
  370. "output_type": "stream",
  371. "text": [
  372. "tensor([[1., 1.],\n",
  373. " [1., 1.],\n",
  374. " [1., 1.]])\n"
  375. ]
  376. }
  377. ],
  378. "source": [
  379. "# convert it back to float\n",
  380. "x = x.float()\n",
  381. "# x = x.type(torch.FloatTensor)\n",
  382. "print(x)"
  383. ]
  384. },
  385. {
  386. "cell_type": "code",
  387. "execution_count": 17,
  388. "metadata": {},
  389. "outputs": [
  390. {
  391. "name": "stdout",
  392. "output_type": "stream",
  393. "text": [
  394. "tensor([[-1.2200, 0.9769, -2.3477],\n",
  395. " [ 1.0125, -1.3236, -0.2626],\n",
  396. " [-0.3501, 0.5753, 1.5657],\n",
  397. " [ 0.4823, -0.4008, -1.3442]])\n"
  398. ]
  399. }
  400. ],
  401. "source": [
  402. "x = torch.randn(4, 3)\n",
  403. "print(x)"
  404. ]
  405. },
  406. {
  407. "cell_type": "code",
  408. "execution_count": 18,
  409. "metadata": {},
  410. "outputs": [],
  411. "source": [
  412. "# take the maximum along each row\n",
  413. "max_value, max_idx = torch.max(x, dim=1)"
  414. ]
  415. },
  416. {
  417. "cell_type": "code",
  418. "execution_count": 19,
  419. "metadata": {},
  420. "outputs": [
  421. {
  422. "data": {
  423. "text/plain": [
  424. "tensor([0.9769, 1.0125, 1.5657, 0.4823])"
  425. ]
  426. },
  427. "execution_count": 19,
  428. "metadata": {},
  429. "output_type": "execute_result"
  430. }
  431. ],
  432. "source": [
  433. "# the maximum of each row\n",
  434. "max_value"
  435. ]
  436. },
  437. {
  438. "cell_type": "code",
  439. "execution_count": 20,
  440. "metadata": {},
  441. "outputs": [
  442. {
  443. "data": {
  444. "text/plain": [
  445. "tensor([1, 0, 2, 0])"
  446. ]
  447. },
  448. "execution_count": 20,
  449. "metadata": {},
  450. "output_type": "execute_result"
  451. }
  452. ],
  453. "source": [
  454. "# the index of each row's maximum\n",
  455. "max_idx"
  456. ]
  457. },
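As a small side note (a sketch, not from the original lesson): `torch.argmax` returns the same indices on their own, and `gather` can recover the values from them:

```python
import torch

x = torch.randn(4, 3)
max_value, max_idx = torch.max(x, dim=1)

# torch.argmax returns the indices alone, and gather recovers the values
idx = torch.argmax(x, dim=1)
vals = x.gather(1, idx.unsqueeze(1)).squeeze(1)
print(torch.equal(idx, max_idx))        # True
print(torch.allclose(vals, max_value))  # True
```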
  458. {
  459. "cell_type": "code",
  460. "execution_count": 21,
  461. "metadata": {},
  462. "outputs": [
  463. {
  464. "name": "stdout",
  465. "output_type": "stream",
  466. "text": [
  467. "tensor([-2.5908, -0.5736, 1.7909, -1.2627])\n"
  468. ]
  469. }
  470. ],
  471. "source": [
  472. "# sum x along each row\n",
  473. "sum_x = torch.sum(x, dim=1)\n",
  474. "print(sum_x)"
  475. ]
  476. },
  477. {
  478. "cell_type": "code",
  479. "execution_count": 22,
  480. "metadata": {},
  481. "outputs": [
  482. {
  483. "name": "stdout",
  484. "output_type": "stream",
  485. "text": [
  486. "torch.Size([4, 3])\n",
  487. "torch.Size([1, 4, 3])\n",
  488. "tensor([[[-1.2200, 0.9769, -2.3477],\n",
  489. " [ 1.0125, -1.3236, -0.2626],\n",
  490. " [-0.3501, 0.5753, 1.5657],\n",
  491. " [ 0.4823, -0.4008, -1.3442]]])\n"
  492. ]
  493. }
  494. ],
  495. "source": [
  496. "# add or remove dimensions\n",
  497. "print(x.shape)\n",
  498. "x = x.unsqueeze(0) # add a dimension at position 0\n",
  499. "print(x.shape)\n",
  500. "print(x)"
  501. ]
  502. },
  503. {
  504. "cell_type": "code",
  505. "execution_count": 23,
  506. "metadata": {},
  507. "outputs": [
  508. {
  509. "name": "stdout",
  510. "output_type": "stream",
  511. "text": [
  512. "torch.Size([1, 1, 4, 3])\n"
  513. ]
  514. }
  515. ],
  516. "source": [
  517. "x = x.unsqueeze(1) # add a dimension at position 1\n",
  518. "print(x.shape)"
  519. ]
  520. },
  521. {
  522. "cell_type": "code",
  523. "execution_count": 24,
  524. "metadata": {},
  525. "outputs": [
  526. {
  527. "name": "stdout",
  528. "output_type": "stream",
  529. "text": [
  530. "torch.Size([1, 4, 3])\n",
  531. "tensor([[[-1.2200, 0.9769, -2.3477],\n",
  532. " [ 1.0125, -1.3236, -0.2626],\n",
  533. " [-0.3501, 0.5753, 1.5657],\n",
  534. " [ 0.4823, -0.4008, -1.3442]]])\n"
  535. ]
  536. }
  537. ],
  538. "source": [
  539. "x = x.squeeze(0) # remove the first dimension\n",
  540. "print(x.shape)\n",
  541. "print(x)"
  542. ]
  543. },
  544. {
  545. "cell_type": "code",
  546. "execution_count": 25,
  547. "metadata": {},
  548. "outputs": [
  549. {
  550. "name": "stdout",
  551. "output_type": "stream",
  552. "text": [
  553. "torch.Size([4, 3])\n"
  554. ]
  555. }
  556. ],
  557. "source": [
  558. "x = x.squeeze() # remove all dimensions of size 1\n",
  559. "print(x.shape)"
  560. ]
  561. },
  562. {
  563. "cell_type": "code",
  564. "execution_count": 26,
  565. "metadata": {},
  566. "outputs": [
  567. {
  568. "name": "stdout",
  569. "output_type": "stream",
  570. "text": [
  571. "torch.Size([3, 4, 5])\n",
  572. "torch.Size([4, 3, 5])\n",
  573. "torch.Size([5, 3, 4])\n"
  574. ]
  575. }
  576. ],
  577. "source": [
  578. "x = torch.randn(3, 4, 5)\n",
  579. "print(x.shape)\n",
  580. "\n",
  581. "# use permute and transpose to swap dimensions\n",
  582. "x = x.permute(1, 0, 2) # permute reorders all of the tensor's dimensions\n",
  583. "print(x.shape)\n",
  584. "\n",
  585. "x = x.transpose(0, 2) # transpose swaps two of the tensor's dimensions\n",
  586. "print(x.shape)"
  587. ]
  588. },
  589. {
  590. "cell_type": "code",
  591. "execution_count": 27,
  592. "metadata": {},
  593. "outputs": [
  594. {
  595. "name": "stdout",
  596. "output_type": "stream",
  597. "text": [
  598. "torch.Size([3, 4, 5])\n",
  599. "torch.Size([12, 5])\n",
  600. "torch.Size([3, 20])\n"
  601. ]
  602. }
  603. ],
  604. "source": [
  605. "# use view to reshape the tensor\n",
  606. "x = torch.randn(3, 4, 5)\n",
  607. "print(x.shape)\n",
  608. "\n",
  609. "x = x.view(-1, 5) # -1 lets that dimension be inferred; the second dimension becomes 5\n",
  610. "print(x.shape)\n",
  611. "\n",
  612. "x = x.view(3, 20) # reshape to size (3, 20)\n",
  613. "print(x.shape)"
  614. ]
  615. },
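One caveat worth knowing (a small sketch): `view` only works on contiguous tensors, so after `transpose` or `permute` you may need `contiguous()` first (or `reshape`, which handles this for you):

```python
import torch

x = torch.randn(3, 4, 5)
y = x.transpose(0, 2)        # a non-contiguous view of the same storage
print(y.is_contiguous())     # False

# view needs contiguous memory, so make a contiguous copy first
z = y.contiguous().view(-1, 3)
print(z.shape)               # torch.Size([20, 3])
```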
  616. {
  617. "cell_type": "code",
  618. "execution_count": 32,
  619. "metadata": {},
  620. "outputs": [
  621. {
  622. "name": "stdout",
  623. "output_type": "stream",
  624. "text": [
  625. "tensor([[-3.1321, -0.9734, 0.5307, 0.4975],\n",
  626. " [ 0.8537, 1.3424, 0.2630, -1.6658],\n",
  627. " [-1.0088, -2.2100, -1.9233, -0.3059]])\n"
  628. ]
  629. }
  630. ],
  631. "source": [
  632. "x = torch.randn(3, 4)\n",
  633. "y = torch.randn(3, 4)\n",
  634. "\n",
  635. "# sum two tensors\n",
  636. "z = x + y\n",
  637. "# z = torch.add(x, y)\n",
  638. "print(z)"
  639. ]
  640. },
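Like NumPy, tensor arithmetic also broadcasts across mismatched shapes; a minimal sketch (not from the original lesson) with shapes (3, 4) and (4,):

```python
import torch

# tensor arithmetic broadcasts like NumPy: shapes (3, 4) and (4,) align
x = torch.ones(3, 4)
b = torch.arange(4).float()   # tensor([0., 1., 2., 3.])
z = x + b                     # b is broadcast across the 3 rows
print(z.shape)                # torch.Size([3, 4])
print(z[0])                   # tensor([1., 2., 3., 4.])
```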
  641. {
  642. "cell_type": "markdown",
  643. "metadata": {},
  644. "source": [
  645. "### 2.2 `inplace` operations\n",
  646. "In addition, most operations in PyTorch have an `inplace` version that modifies the tensor directly without allocating new memory. The convention is simple: append `_` to the operation's name, for example"
  647. ]
  648. },
  649. {
  650. "cell_type": "code",
  651. "execution_count": 33,
  652. "metadata": {},
  653. "outputs": [
  654. {
  655. "name": "stdout",
  656. "output_type": "stream",
  657. "text": [
  658. "torch.Size([3, 3])\n",
  659. "torch.Size([1, 3, 3])\n",
  660. "torch.Size([3, 1, 3])\n"
  661. ]
  662. }
  663. ],
  664. "source": [
  665. "x = torch.ones(3, 3)\n",
  666. "print(x.shape)\n",
  667. "\n",
  668. "# inplace unsqueeze\n",
  669. "x.unsqueeze_(0)\n",
  670. "print(x.shape)\n",
  671. "\n",
  672. "# inplace transpose\n",
  673. "x.transpose_(1, 0)\n",
  674. "print(x.shape)"
  675. ]
  676. },
  677. {
  678. "cell_type": "code",
  679. "execution_count": 34,
  680. "metadata": {},
  681. "outputs": [
  682. {
  683. "name": "stdout",
  684. "output_type": "stream",
  685. "text": [
  686. "tensor([[1., 1., 1.],\n",
  687. " [1., 1., 1.],\n",
  688. " [1., 1., 1.]])\n",
  689. "tensor([[2., 2., 2.],\n",
  690. " [2., 2., 2.],\n",
  691. " [2., 2., 2.]])\n"
  692. ]
  693. }
  694. ],
  695. "source": [
  696. "x = torch.ones(3, 3)\n",
  697. "y = torch.ones(3, 3)\n",
  698. "print(x)\n",
  699. "\n",
  700. "# inplace add\n",
  701. "x.add_(y)\n",
  702. "print(x)"
  703. ]
  704. },
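The memory claim can be verified directly; a minimal sketch comparing an inplace op with its out-of-place counterpart via `data_ptr()`:

```python
import torch

x = torch.ones(3, 3)
before = x.data_ptr()           # address of x's underlying storage

x.add_(1)                       # inplace: the same storage is modified
print(x.data_ptr() == before)   # True

y = x + 1                       # out-of-place: new storage is allocated
print(y.data_ptr() == before)   # False
```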
  705. {
  706. "cell_type": "markdown",
  707. "metadata": {},
  708. "source": [
  709. "### 2.3 **Exercise**\n",
  710. "\n",
  711. "Browse the [docs](http://pytorch.org/docs/tensors.html) to learn more of the tensor API, then implement the following\n",
  712. "\n",
  713. "Create a float32, 4 x 4 matrix of all ones and set its central 2 x 2 block to 2\n",
  714. "\n",
  715. "Expected output\n",
  716. "$$\n",
  717. "\\left[\n",
  718. "\\begin{matrix}\n",
  719. "1 & 1 & 1 & 1 \\\\\n",
  720. "1 & 2 & 2 & 1 \\\\\n",
  721. "1 & 2 & 2 & 1 \\\\\n",
  722. "1 & 1 & 1 & 1\n",
  723. "\\end{matrix}\n",
  724. "\\right] \\\\\n",
  725. "[torch.FloatTensor\\ of\\ size\\ 4x4]\n",
  726. "$$"
  727. ]
  728. },
  729. {
  730. "cell_type": "code",
  731. "execution_count": 10,
  732. "metadata": {},
  733. "outputs": [
  734. {
  735. "name": "stdout",
  736. "output_type": "stream",
  737. "text": [
  738. "\n",
  739. " 1 1 1 1\n",
  740. " 1 2 2 1\n",
  741. " 1 2 2 1\n",
  742. " 1 1 1 1\n",
  743. "[torch.FloatTensor of size 4x4]\n",
  744. "\n"
  745. ]
  746. }
  747. ],
  748. "source": [
  749. "# answer\n",
  750. "x = torch.ones(4, 4).float()\n",
  751. "x[1:3, 1:3] = 2\n",
  752. "print(x)"
  753. ]
  754. },
  755. {
  756. "cell_type": "markdown",
  757. "metadata": {},
  758. "source": [
  759. "## 3. Variable\n",
  760. "The tensor is PyTorch's basic data type, but on its own it is not enough to build neural networks; for that we need tensors that can build a computational graph, which is what Variable provides. A Variable wraps a tensor and supports the same operations, but every Variable has three attributes:\n",
  761. "* `.data`, the tensor held by the Variable,\n",
  762. "* `.grad`, the gradient of the corresponding tensor\n",
  763. "* `.grad_fn`, the operation that produced the Variable"
  764. ]
  765. },
  766. {
  767. "cell_type": "markdown",
  768. "metadata": {},
  769. "source": [
  770. "### 3.1 Basic Variable operations"
  771. ]
  772. },
  773. {
  774. "cell_type": "code",
  775. "execution_count": 4,
  776. "metadata": {},
  777. "outputs": [],
  778. "source": [
  779. "import torch\n",
  780. "from torch.autograd import Variable"
  781. ]
  782. },
  783. {
  784. "cell_type": "code",
  785. "execution_count": 5,
  786. "metadata": {},
  787. "outputs": [],
  788. "source": [
  789. "x_tensor = torch.randn(3, 4)\n",
  790. "y_tensor = torch.randn(3, 4)\n",
  791. "\n",
  792. "# turn the tensors into Variables\n",
  793. "x = Variable(x_tensor, requires_grad=True) # Variables do not require gradients by default, so we explicitly declare that gradients are needed\n",
  794. "y = Variable(y_tensor, requires_grad=True)"
  795. ]
  796. },
  797. {
  798. "cell_type": "code",
  799. "execution_count": 6,
  800. "metadata": {},
  801. "outputs": [],
  802. "source": [
  803. "z = torch.sum(x + y)"
  804. ]
  805. },
  806. {
  807. "cell_type": "code",
  808. "execution_count": 7,
  809. "metadata": {},
  810. "outputs": [
  811. {
  812. "name": "stdout",
  813. "output_type": "stream",
  814. "text": [
  815. "tensor(-7.7018)\n",
  816. "<SumBackward0 object at 0x7f0d79305810>\n"
  817. ]
  818. }
  819. ],
  820. "source": [
  821. "print(z.data)\n",
  822. "print(z.grad_fn)"
  823. ]
  824. },
  825. {
  826. "cell_type": "markdown",
  827. "metadata": {},
  828. "source": [
  829. "Above we printed the tensor value held by z, and `grad_fn` tells us that it was produced by a Sum operation"
  830. ]
  831. },
  832. {
  833. "cell_type": "code",
  834. "execution_count": 8,
  835. "metadata": {},
  836. "outputs": [
  837. {
  838. "name": "stdout",
  839. "output_type": "stream",
  840. "text": [
  841. "tensor([[1., 1., 1., 1.],\n",
  842. " [1., 1., 1., 1.],\n",
  843. " [1., 1., 1., 1.]])\n",
  844. "tensor([[1., 1., 1., 1.],\n",
  845. " [1., 1., 1., 1.],\n",
  846. " [1., 1., 1., 1.]])\n"
  847. ]
  848. }
  849. ],
  850. "source": [
  851. "# compute the gradients of x and y\n",
  852. "z.backward()\n",
  853. "\n",
  854. "print(x.grad)\n",
  855. "print(y.grad)"
  856. ]
  857. },
  858. {
  859. "cell_type": "markdown",
  860. "metadata": {},
  861. "source": [
  862. "Through `.grad` we obtained the gradients of x and y. Here we relied on PyTorch's automatic differentiation, which is very convenient; the next section covers autograd in detail."
  863. ]
  864. },
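Note that in PyTorch 0.4 and later the Variable class was merged into Tensor, so the same computation can be written without the wrapper; a sketch of the modern equivalent:

```python
import torch

# tensors carry requires_grad themselves; no Variable wrapper is needed
x = torch.randn(3, 4, requires_grad=True)
y = torch.randn(3, 4, requires_grad=True)

z = torch.sum(x + y)
z.backward()

print(x.grad)   # a 3 x 4 tensor of ones, since dz/dx = 1 elementwise
print(y.grad)
```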
  865. {
  866. "cell_type": "markdown",
  867. "metadata": {},
  868. "source": [
  869. "### 3.2 **Exercise**\n",
  870. "\n",
  871. "Build the function $y = x^2$ and compute its derivative at x = 2.\n",
  872. "\n",
  873. "Expected output: 4"
  874. ]
  875. },
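One possible solution sketch, using autograd directly (since dy/dx = 2x, the expected value at x = 2 is 4):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x ** 2
y.backward()            # y has a single element, so no grad argument is needed
print(x.grad.item())    # 4.0
```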
  876. {
  877. "cell_type": "markdown",
  878. "metadata": {},
  879. "source": [
  880. "Hint:\n",
  881. "\n",
  882. "the graph of $y = x^2$ looks like this"
  883. ]
  884. },
  885. {
  886. "cell_type": "code",
  887. "execution_count": 46,
  888. "metadata": {},
  889. "outputs": [
  890. {
  891. "data": {
  892. "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWoAAAD4CAYAAADFAawfAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAAAnB0lEQVR4nO3dd3hUVf7H8feZyaRDAkkIhCSEEFpAekcUBbvoYqPYcFXsZd2iq/tTd11dy9pdC9ZVKVbEiqKgINICRFoChIQ0IIUQkpBCMnN+fyS6igSGkMm5M/N9PU8eyWSY+VwDn1zOPfccpbVGCCGEddlMBxBCCHFkUtRCCGFxUtRCCGFxUtRCCGFxUtRCCGFxAZ540ejoaJ2UlOSJlxZCCJ+0du3aUq11zOG+5pGiTkpKIi0tzRMvLYQQPkkpldvc12ToQwghLE6KWgghLE6KWgghLE6KWgghLE6KWgghLE6KWgghLE6KWgghLM4yRV1b7+Tlpdn8sKPUdBQhhDhmSzKLeX15DgcbXK3+2pYp6gCb4uVl2by6LMd0FCGEOGYvfLeD//6wE4ddtfprW6eo7TYuGhrPkq3F7NlfazqOEEK4LbukitU5ZVwyPAGlfLioAS4ZloBLw/tr801HEUIIt72Tlo/dprhoSLxHXt9SRZ0UHcbo5CjeScvH5ZItwoQQ1lfvdPHB2gJO7dOJTu2DPfIelipqgKkjEsgvq2FF9l7TUYQQ4qi+ySimtOogU4cneOw9LFfUZ/TrTESIg7mr80xHEUKIo5q3Jo/O7YM5uddhVyhtFZYr6mCHncmDu/LV5iLKDhw0HUcIIZq1q7yG77aVcPGweALsnqtTyxU1wJThCRx0upi/vtB0FCGEaNZ7aQVo3TgRwpMsWdR9u7RnYEIk76zJQ2u5qCiEsB6nS/NuWj4npkST0DHUo+9lyaIGmDo8gW1FVazPLzcdRQghfmN5VimF5TVM8eBFxJ9YtqgnDYwjNNDOPLmoKISwoHlr8ugQ6uD0frEefy/LFnV4UACTBsTxyY+7qaytNx1HCCF+VlpVx6ItRVwwJJ6gALvH38+yRQ0wZUQCNfVOPvlxt+koQgjxs/nrCql36jYZ9gCLF/XghEh6x7Zj3hoZ/hBCWIPWmrmr8xiSGEmv2HZt8p6WLmqlFNNHJrKhYD8bCspNxxFCCFZk7yW79ACXjuzWZu9p6aIGmDykKyEOO3NWyVm1EMK82avyiAhxcM6ALm32npYv6vbBDs4bGMeC9F1UyEVFIYRBJZV1fLlpDxcNjSfY4fmLiD+xfFEDXDoqkZp6JwvkTkUhhEHvrc2nwaWZPjKxTd/XK4p6QHwk/bu2Z/YquVNRCGGGy6WZsyqPUckd6RET3qbv7VZRK6X+oJTarJTapJSaq5TyzKKrR3DpyG5k7qlkXd6+tn5rIYRg6fYSCvbVtOlFxJ8ctaiVUl2BW4FhWuv+gB2Y6ulghzpvYBzhQQHMXikXFYUQbW/2qjyiwgI5o1/nNn9vd4c+AoAQpVQAEArs8lykwwsLCmDy4K58unE35dWy/KkQou3s3l/D4sxiLhmeQGBA248YH/UdtdaFwL+BPGA3sF9r/dWhz1NKzVRKpSml0kpKSlo/KTB9ZCIHG1y8v7bAI68vhBCH886afJwuzbThbXsR8SfuDH10AM4HugNxQJhS6rJDn6e1nqW1Hqa1HhYT45mdDvp2ac+QxEjmyEVFIUQbaXC6mLc6n5N6xZAY5dnlTJvjzjn8RCBHa12ita4HPgTGeDZW8y4d2Y3s0gOyp6IQok0szixmT0Utl7bxlLxfcqeo84BRSqlQpZQCJgAZno3VvHMGdCEixCEXFYUQbWL2qjxi2wcxoU8nYxncGaNeBbwPrAM2Nv2eWR7O1axgh52Lh8bz5eY9FFXUmoohhPADO0sP8N22EqaNSPTonohH49Y7a63v01r30Vr311pfrrWu83SwI7lsVDecWsv6H0I
Ij3prZS4BNsX0EeaGPcBL7kw8VFJ0GCf3imHO6jwONrhMxxFC+KDqgw28l5bPmf0706l9m9/j9yteWdQAV4zu1rhAyuY9pqMIIXxQ40JwDVwxOsl0FO8t6pN7dSKxYyhvrcg1HUUI4WO01ry5Ipc+ndsxPKmD6TjeW9R2m+KyUYms3llGxu4K03GEED5kbe4+MnZXcMXoJBonu5nltUUNcMmwBIICbLwpZ9VCiFb03xW5tAsO4HeD40xHAby8qCNDAzl/UBwfrS9kf41sKiCEOH7FFbV8sXE3Fw9NIDQwwHQcwMuLGuCK0UnU1Dtl/Q8hRKuYu7pxc4DLR7f9cqbN8fqi7t81giGJkby9MheXS9b/EEK0XL3TxZzVuZzUK4bu0WGm4/zM64sa4MoxSeSUHmBZVqnpKEIIL/bV5iKKKuq40kJn0+AjRX1m/85Ehwfy5g87TUcRQnixN1fsJL5DCON7m1vX43B8oqiDAuxMH5HI4q3F7Cw9YDqOEMILbd61n1U5ZVwxuht2m/kpeb/kE0UNjet/BNgUb8hZtRCiBV5fvpPQQDtThpld1+NwfKaoO7UP5twBcbyXlk9FrUzVE0K4r6Syjo/Td3HhkHgiQh2m4/yGzxQ1wO/HdufAQSfvpclUPSGE++asyuOg08WMsUmmoxyWTxX1CfERDE/qwBs/5OCUqXpCCDfUNTh5a2Uup/SOoUdMuOk4h+VTRQ1w1dju5JfV8HVGkekoQggv8OmPuymtquOqsd1NR2mWzxX16amxdI0M4fXlOaajCCEsTmvNa8tzSOkUzrie0abjNMvnijrAbuOK0d1YmV3G5l37TccRQljYmp372LyrgqvGWmOVvOb4XFEDTB2eSIjDzhvLd5qOIoSwsNeX5xAR4uCCwfGmoxyRTxZ1RKiDi4bGsyB9F6VVRrd3FEJYVH5ZNV9u3sP0kYmEBNpNxzkinyxqgBljkzjodDF7pWyAK4T4rTdX7EQpxeWjrLWux+H4bFH3iAlnfO8Y3lqZS22903QcIYSFVNU1MG9N48a1cZEhpuMclc8WNcC145IprapjQXqh6ShCCAuZtzqPytoGZo5LNh3FLT5d1GN6RJHapT0vL8uRtaqFEAA0OF28vnwnI7p3ZGBCpOk4bvHpolZKMfOkZLKKq/huW4npOEIIC/h80x4Ky2u85mwafLyoAc4Z0IUuEcHMWpptOooQwjCtNbOW7iA5JoxT+1hrzekj8fmidtht/H5sd1Zk72VjgdwAI4Q/W5ldxqbCCq4dl4zNYmtOH4nPFzXA1BEJtAsK4OVlclYthD97eVk20eGBTB7c1XSUY+IXRd0u2MG0kYl8tnE3BfuqTccRQhiwvaiSxZnFXDE6iWCHtW9wOZRfFDXAjDFJKBp3cRBC+J9XluUQ7LBxmRfc4HIovynquMgQJg2MY97qPPbXyA4wQviT4spa5q8v5OKhCXQMCzQd55j5TVEDXDOucQeYuavltnIh/MmbP+RS73Jx9YnWXXP6SPyqqPvFRTA2JYrXl+dQ1yC3lQvhDw7UNfD2qlxOT40lKTrMdJwW8auiBrj+5B4UVdTx0Xq5rVwIfzB3dR7l1fVcd3IP01FazO+K+sSUaPp3bc+L32XLvopC+Li6BievLMthVHJHhiR2MB2nxdwqaqVUpFLqfaVUplIqQyk12tPBPEUpxY3jU8gpPcDCTXtMxxFCeNBH6wvZU1HLjeNTTEc5Lu6eUT8NLNRa9wEGAhmei+R5Z/TrTHJ0GC98l4XWclYthC9yujQvfZdN/67tLb0fojuOWtRKqQjgJOBVAK31Qa11uYdzeZTdprju5GQ2FVawbHup6ThCCA/4cvMesksPcMPJKZbeD9Ed7pxRdwdKgNeVUuuVUq8opX5z6VQpNVMplaaUSispsf5KdZMHx9O5fTDPf5tlOooQopVprXn+2yySo8M4s39n03GOmztFHQAMAV7QWg8GDgB3HfokrfUsrfUwrfWwmJiYVo7Z+gI
DbFwzrjsrs8tYl7fPdBwhRCv6PquUTYUVXHdyMnYvWnypOe4UdQFQoLVe1fT5+zQWt9ebNiKRyFAHL3y7w3QUIUQren7JDmLbB/E7L1t8qTlHLWqt9R4gXynVu+mhCcAWj6ZqI2FBAVw5OolFW4rYVlRpOo4QohWsz9vHiuy9XDsumaAA71p8qTnuzvq4BZitlNoADAIe8liiNjZjTBIhDjsvylm1ED7h+W93EBHiYNqIRNNRWo1bRa21Tm8afx6gtf6d1tpnBnU7hAUybUQiC37cRd5eWQJVCG+2dU8li7YUceWYJMKCAkzHaTV+d2fi4fx0weGF72QGiBDe7NnF2wkLtPP7sUmmo7QqKWogtn0wU4cn8P7aAtlYQAgvlVVcyWcbd3PlmCQiQ71vKdMjkaJucn3Tgi0vfidj1UJ4o+cWZxHisHONF+0u7i4p6iZxkSFcNDSBd9cUsGd/rek4QohjkFN6gI9/3MVlo7p55cYARyNF/Qs3ju+BS2s5qxbCy/xnSRYOu41rffBsGqSofyWhYygXDOnK3NV5FFfIWbUQ3iBvbzXz1xdy6chuxLQLMh3HI6SoD3HTKSk0uDSzlmabjiKEcMPz32b9vNCar5KiPkS3qDDOHxTH26tyKa2qMx1HCHEEBfuqeX9tAdOGJxDbPth0HI+Roj6Mm05Joa7BxcvL5KxaCCt74dsdKIVXb7PlDinqw+gRE86kAXG8tSKXvXJWLYQl7Sqv4b20Ai4elkBcZIjpOB4lRd2MWyekUFvv5CUZqxbCkp5dnIVGc+N43z6bBinqZqV0asfvBnXlzRU7Ka6UGSBCWEne3mreS8tn2ohE4juEmo7jcVLUR3DrhJ7UOzXPL5F51UJYyTOLt2O3KW46xbs3rXWXFPURJEWHcdGQeOasymNXeY3pOEIIYEdJFR+uK+CyUd18eqbHL0lRH8UtE1LQaJ5bIivrCWEFT3+9naAAOzf4wdj0T6SojyK+QyhThyfy7pp8Wa9aCMO27qnkkw27mDE2iehw37wL8XCkqN1w0ykp2GyKZxZvNx1FCL/25KJthAUGMNNH1/RojhS1GzpHBHP5qG58uK6A7JIq03GE8EubCvezcPMerj6xOx18cIW8I5GidtMN43sQFGDnqa/lrFoIE55YtI2IEAdXj+tuOkqbk6J2U3R4EDPGJvHJhl1k7K4wHUcIv7I2dx+LM4uZeVIy7YMdpuO0OSnqY3DdScm0CwrgsS+3mo4ihN/QWvPIF5lEhwdxlY/theguKepjEBkayA3jU1icWcyq7L2m4wjhF5ZsLWb1zjJum9iT0EDf2Vn8WEhRH6MZY5KIbR/Ewwsz0VqbjiOET3O6NI98sZWkqFCmDk8wHccYKepjFBJo5w8Te7E+r5yvthSZjiOET/tofSFbiyr50xm9cdj9t67898iPw0VD4+kRE8ajCzNpcLpMxxHCJ9XWO3li0TZO6BrB2f27mI5jlBR1CwTYbfz5jD7sKDnAB+sKTMcRwie9vTKXwvIa7jqrDzabMh3HKCnqFjqjXyyDEyN5ctF2auudpuMI4VMqaut5bkkW43pGMzYl2nQc46SoW0gpxZ1n9mFPRS1v/LDTdBwhfMqs77Ipr67nzjP7mI5iCVLUx2FUchSn9I7h+SVZlFcfNB1HCJ9QXFHLq9/nMGlgHP27RpiOYwlS1MfpzrP6UFXXwDPfyDKoQrSGf3+1lQaXiz+d3st0FMuQoj5OfTq355JhCby5Yqcs2CTEcdq8az/vrS3gytFJdIsKMx3HMqSoW8Edp/ciKMDGw19kmo4ihNfSWvPgZxlEhji4ZUJP03EsRYq6FXRqF8yNp6Tw1ZYiVuyQW8uFaIlvMor5Ycdebp/Yi4gQ/1t46UikqFvJ1Sd2Jy4imH9+tgWXS24tF+JY1DtdPPR5BskxYUwfmWg6juVIUbeSYIedO8/qw+ZdFXy4vtB0HCG8yuyVuWSXHuCes/v69a3izXH7/4hSyq6UWq+U+tS
TgbzZeQPjGJQQyWNfZlJ9sMF0HCGsbfZsSEpC22ycfs4o/lyaxql9OplOZUnH8qPrNiDDU0F8gVKK/zu3L0UVdcxamm06jhDWNXs2zJwJubkorYnbX8wNcx5BzZljOpkluVXUSql44BzgFc/G8X5Du3XknAFdeOm7bHbvrzEdRwhruuceqK7+1UO2mprGx8VvuHtG/RTwF6DZpeKUUjOVUmlKqbSSkpLWyOa17jqzDy6t+dfnMl1PiMPKyzu2x/3cUYtaKXUuUKy1Xnuk52mtZ2mth2mth8XExLRaQG+U0DGU607uwcc/7mKl7AQjxG8lNjOzo7nH/Zw7Z9RjgfOUUjuBecCpSqm3PZrKB9xwcg+6RoZw/8ebZc1qIQ5x8B8PUOsI+vWDoaHw4INmAlncUYtaa/1XrXW81joJmAos1lpf5vFkXi4k0M7/nZtK5p5K3l6ZazqOEJYyK34UfznjZmrj4kEp6NYNZs2CSy81Hc2SZMKiB53RL5ZxPaN5fNE2SqvqTMcRwhIKy2t4bkkW9VOmEVyYDy4X7NwpJX0Ex1TUWutvtdbneiqMr1FKcd+kftQcdPLYwq2m4whhCQ991jjL955z+hpO4j3kjNrDUjqFc/WJ3XknLZ/0/HLTcYQwanlWKZ9t3M1N41OI7xBqOo7XkKJuA7dM6EmndkHct2CTrAMi/Fa908V9H28msWMo156UbDqOV5GibgPhQQHcfXZffizYz7w1+abjCGHE68tzyCqu4t5zUwl22E3H8SpS1G3k/EFxjEruyMNfZFBSKRcWhX8p2FfNk4u2M7FvJyb0lfU8jpUUdRtRSvHg5BOorXfxz8+2mI4jRJvRWnPvgs0oBX8/vz9KKdORvI4UdRvqERPODeN7sCB9F0u3+fdt9sJ/LNy0h8WZxdxxWi+6RoaYjuOVpKjb2A3je5AcHcbfPtpEbb3TdBwhPKqytp77P9lMapf2zBiTZDqO15KibmPBDjv/nNyfvLJqnlssO5cL3/b4V9sorqzjXxecQIBsCNBi8n/OgDE9orlgSFdeWrqDbUWVpuMI4RE/5pfz3xU7uWJUNwYmRJqO49WkqA255+y+hAUFcM/8jTK3WvicBqeLv364kU7tgvjjGb1Nx/F6UtSGRIUHcffZfVmzcx9z18gavMK3vLY8hy27K7h/Uj/aB8uO4sdLitqgi4fGM6ZHFP/6PJPCctkNRviG7JIqHv9qGxP7xnJm/86m4/gEKWqDlFI8fMEAnC7NXz/ciNYyBCK8m9Ol+cv7GwgKsPHQZJkz3VqkqA1LjArlzjN7s3RbCe+vLTAdR4jj8uaKnaTl7uPeSf3o1D7YdByfIUVtAVeMTmJEUkce+HQLRRW1puMI0SK5ew/w6MKtjO8dw4VDupqO41OkqC3AZlM8ctEA6hpc3DNfhkCE93G5NHd+sIEAm+JfF5wgQx6tTIraIrpHh/HnM3rzdUYxC9J3mY4jxDGZvTqPldll3HNOX7pEyG3irU2K2kKuGtudIYmR3P/JZoorZQhEeIeCfdU8/HkG43pGM2V4guk4PkmK2kLsNsWjFw2k+qCTu2UWiPACLpfmz+9tAJAhDw+SoraYlE7h/KVpCEQ2GRBW9+r3OazI3su9k1Jlay0PkqK2oN+P7c7YlCge+HQLO0sPmI4jxGFl7K7gsS+3cnpqLJcMkyEPT5KitiCbTfHviwcSYFPc/k46DU6X6UhC/EptvZM/vJNO+xCHDHm0ASlqi+oSEcKDk08gPb+c/yzZYTqOEL/y+FdbydxTyWMXDSAqPMh0HJ8nRW1hkwbG8btBcTyzeDvr8/aZjiMEAD9klfLyshwuG5XIKX1k/8O2IEVtcX8/vz+x7YK4490fqT7YYDqO8HP7q+v543s/khwdxj1np5qO4zekqC0uIsTB45cMYufeA/zjE9kUV5ijtebujzZSUlnHk1MGERJoNx3Jb0hRe4HRPaK44eQezFuTz4L0QtNxhJ+aszqPzzbs5o7Te8m
OLW1MitpL3HFaL4Z168DdH24ku6TKdBzhZ7bsquDvn2zhpF4xXH9SD9Nx/I4UtZcIsNt4ZtpgHAE2bpqzXnYwF22mqq6Bm+esIzLEwROXDMRmk6l4bU2K2ovERYbwxCUDydhdwT8/k/Fq4Xlaa/42fyM79x7gmWmDiZapeEZIUXuZU/vEMvOkZN5e2TheKIQnvZdWwEfpu7h9Yi9GJUeZjuO3pKi90J/P6M2ghEju+mADuXvlFnPhGduKKrn3402M6RHFTaekmI7j16SovZDDbuO56YNRCm54ex01B2W8WrSuytp6rn97LeFBATw1dRB2GZc2SoraS8V3COWpqYPI2FPBXz/cIEuiilbjcmnuePdHcvdW89z0IXRqJ3sfmiZF7cVO7RPLHRN78VH6Ll5fvtN0HOEjnluSxaItRfztnL4yLm0RRy1qpVSCUmqJUmqLUmqzUuq2tggm3HPTKSmcnhrLg59nsGLHXtNxhJf7JqOIJ7/exgWDuzJjTJLpOKKJO2fUDcAftdapwCjgJqWU3ORvETab4vFLBpIUFcrNc9ZRWF5jOpLwUtklVdw+L53ULu15SJYutZSjFrXWerfWel3TryuBDED2greQdsEOZl0xjLoGF9e/tVZuhhHHrKqugeveWkuAXfHS5UMJdsg6HlZyTGPUSqkkYDCw6jBfm6mUSlNKpZWUlLRSPOGuHjHhPDllEBsL98t+i+KYuFyaP76bzo6SKv4zfYhsqWVBbhe1Uioc+AC4XWtdcejXtdaztNbDtNbDYmJiWjOjcNNpqbHccVovPlxfyHOLs0zHEV7ikYWZfLm5iL+dk8qYlGjTccRhBLjzJKWUg8aSnq21/tCzkcTxuOXUFHaWHuDxRdtIjArl/EEySiWaN3d1Hi8tzebyUd24amyS6TiiGe7M+lDAq0CG1voJz0cSx0Mpxb8uPIER3Tvy5/c3sDa3zHQkYVHLtpfwt482Mb53DPdNSpWLhxbmztDHWOBy4FSlVHrTx9keziWOQ1CAnZcuG0rXyBCufXOt3GYufmNbUSU3vr2Onp3CeXbaYALsckuFlbkz6+N7rbXSWg/QWg9q+vi8LcKJlusQFshrM4bj0pqr3ljD/up605GERZRU1nHV62sIDrTz6ozhtAt2mI4kjkJ+jPqw7tFhvHTZUPLLqpn5VppM2xMcqGvgmjfT2HugjlevHEbXyBDTkYQbpKh93MjkKP598UBW5ZRx69z1NDhdpiMJQ+oanFz/9lo2Fe7n2WlDGBAfaTqScJMUtR84f1BX7p+UyldbivirzLH2S06X5o53fmTZ9lIeuXAAp6XGmo4kjoFb0/OE95sxtjv7qut5+pvtRIY6uPvsvnKV309orfnbR5v4bONu/nZOXy4aGm86kjhGUtR+5PaJPSmvPsjLy3LoEBbIjeNlMXh/8NiXW5m7Oo+bTunBNeOSTccRLSBF7UeUUtw3qR/lNfU8unArkSGBTB+ZaDqW8KCXl2bz/Lc7mD4ykT+d3tt0HNFCUtR+xmZT/PvigVTWNnDPRxsJsCkuGZ5gOpbwgNe+z+HBzzM4Z0AXHji/vwx1eTG5mOiHHHYbz186hJN6xvCXDzbwzpo805FEK3v1+xz+8ekWzurfmaemyFZa3k6K2k8FO+y8dPlQxveO4c4PNjJvtZS1r3hlWTYPfLqFs0/ozDPTBuOQuw69nnwH/Viww86Llw3llN4x3PXhRuaskrL2dq8sy+afn2VwzgldeHqqlLSvkO+inwt22Hnx8sayvnv+RmavyjUdSbTQy0v/V9JPTR0kJe1D5DspCApoLOtT+3TinvmbeP7bLLkpxotorXnsy8yfLxw+LSXtc+S7KYCmsr5sKOcNjOPRhVt54NMMXC4pa6trcLq464ON/GfJDqaNSOCZqbISni+S6XniZ4EBNp6aMoio8EBeW57D3gN1PHbRQAID5C++FdXWO7ll7noWbSni1lNT+MNpvWQKno+Soha/YrMp7j03lZh2QTy
6cCv7qut54dIhhAXJHxUr2V9Tz7X/TWNNbhl/P68fV45JMh1JeJCcKonfUEpx4/gUHr1wAN9vL2H6yysprqg1HUs0KdhXzZSXVrA+fx/PThssJe0HpKhFsy4ZnsCsy4exvbiK855bzoaCctOR/N6anWWc/9xyCstreH3GCM4dEGc6kmgDUtTiiCamxvLBDWOw2xQXv7iCBemFpiP5rXmr85j+8koiQhx8dNNYTuwpO4b7CylqcVR9u7Tn45vHMjA+ktvmpfPYl5kyI6QNNThd3P/xZu76cCOjkqOYf+NYesSEm44l2pAUtXBLVHgQb18zkmkjEvjPkh3MfGst+2tkH0ZP21tVx1VvrOGNH3Zy9YndeX3GcCJCZY9DfyNFLdwWGGDjockncP+kVL7dWszZTy9jXd4+07F81g87Sjnr6WWsyinj0QsH8H/npsocaT8l33VxTJRSzBjbnfeuH41ScPGLK3jh2x0yFNKKGpwunli0jUtfWUV4cADzbxwjS9H6OSlq0SKDEzvw2a3jOLNfZx5ZmMmVr6+mpLLOdCyvt3t/DdNfXsUz32znwiHxfHLzifSLizAdSxgmRS1aLCLEwXPTB/PQ5BNYnVPGWU8vY+Gm3aZjeSWtNQvSCznr6WVs2rWfJ6cM5N8XD5QbjQQgRS2Ok1KK6SMT+fjmE+nULojr317HjbPXUlwpN8i4a1d5DVf/N43b5qWTFBXGp7ecyOTBsgGt+B/liVXShg0bptPS0lr9dYW11TtdzFqazdPfbCfEYef/zk3lwiFdZf2JZrhcmjmr83j4i0ycLs2fzujNjDFJshuLn1JKrdVaDzvs16SoRWvLKq7izg82sDZ3Hyf1iuH+Sakky7zfX9m6p5J7F2xiVU4ZY1Oi+NfkASRGhZqOJQySohZtzuXSvLUyl0cXZlLX4OLy0d24bUJPIkMDTUczqqSyjie/3sa81XmEBwVwzzl9uWRYgvyrQ0hRC3NKKut4YtE23lmTR7tgB7dO6Mnlo7r53dKptfVOXluew/NLdlBb7+SyUY0/uDqE+fcPLvE/UtTCuMw9FTz4WQbLtpfSPTqMW05NYdLAOJ/fiaSuwcn8dYU8uziLwvIaJvaN5a9n95FbwMVvSFELS9Ba8+22Eh75IpPMPZXEdwjhupN7cPHQeIIddtPxWlX1wQbmrs7n5aXZ7Kmo5YSuEdx1Vh/GpshCSuLwpKiFpWitWZxZzHNLslifV050eBDXjOvO1OEJXj+Gvbeqjjmr8nhteQ77qusZldyRG8enMK5ntIxDiyOSohaWpLVmZXYZz3+bxbLtpQQG2Dirf2emDEtgVHIUNi+ZpuZ0ab7PKuWdNXks2lJEvVMzoU8nbjylB0O7dTQdT3iJIxW13PYkjFFKMbpHFKN7RLFlVwXvrMlj/vpCFqTvIrFjKJcMi2fSwDi6RYWZjnpYO0qq+Dh9F++vLaCwvIYOoQ6uGJ3E1OEJ9IxtZzqe8CFyRi0spbbeycJNe5i3Jo+V2WUA9OwUzoS+sZyW2olBCR2M3RDS4HSxNncfX2cU8XVGMTmlBwAY1zOaKcMTOC01lqAA3xprF23nuIc+lFJnAk8DduAVrfXDR3q+FLVoDfll1SzaUsTXGUWszimjwaWJCgtkZHJHBiVEMjixA/3jIggJ9Ew5HqhrYGPhftbnlZOev49VOWWUV9fjsCtGJUdxWmosE/rG0jUyxCPvL/zLcRW1UsoObANOAwqANcA0rfWW5n6PFLVobftr6vluWwmLM4pYm7eP/LIaAOw2RZ/O7egV246EjqEkdAghoWMoiR1D6RgWSFCArdmLeFpr6hpclFbVkV9WQ35ZNfn7qskvqyZzTyXbiir5afXWblGhDO3WgYl9YxnXM5p2wbJ4v2hdxztGPQLI0lpnN73YPOB8oNmiFqK1RYQ4OG9gHOcNbNzMtbSqjvS8ctLzGz9W55TxUXohh5532BSEBgYQEmgnNNCO1lB90EnNwQZq6p0cuoy2TUG
XiBCSY8I4PTWWwYkdGJgQSUe5MUUY5E5RdwXyf/F5ATDy0CcppWYCMwESExNbJZwQzYkOD2JiaiwTU2N/fuxgg4td5TXk76smr6ya8up6ag46qT7opPpgA9UHnSgFoYF2QhwBjf8NtNMxLJCEDo1n4V0ig33+JhzhfVpt1ofWehYwCxqHPlrrdYVwV2CAjaToMJKirTlLRIiWcufUoRD45T5A8U2PCSGEaAPuFPUaoKdSqrtSKhCYCnzs2VhCCCF+ctShD611g1LqZuBLGqfnvaa13uzxZEIIIQA3x6i11p8Dn3s4ixBCiMOQy9tCCGFxUtRCCGFxUtRCCGFxUtRCCGFxHlk9TylVAuS28LdHA6WtGMckXzkWXzkOkGOxIl85Dji+Y+mmtY453Bc8UtTHQymV1tzCJN7GV47FV44D5FisyFeOAzx3LDL0IYQQFidFLYQQFmfFop5lOkAr8pVj8ZXjADkWK/KV4wAPHYvlxqiFEEL8mhXPqIUQQvyCFLUQQlicJYtaKfWAUmqDUipdKfWVUirOdKaWUEo9ppTKbDqW+UqpSNOZWkopdbFSarNSyqWU8rqpVEqpM5VSW5VSWUqpu0znOR5KqdeUUsVKqU2msxwPpVSCUmqJUmpL05+t20xnaimlVLBSarVS6semY/l7q76+FceolVLttdYVTb++FUjVWl9vONYxU0qdDixuWir2EQCt9Z2GY7WIUqov4AJeAv6ktfaa3YtbskGzlSmlTgKqgDe11v1N52kppVQXoIvWep1Sqh2wFvidN35fVOMOymFa6yqllAP4HrhNa72yNV7fkmfUP5V0kzDAej9N3KC1/kpr3dD06Uoad8fxSlrrDK31VtM5WujnDZq11geBnzZo9kpa66VAmekcx0trvVtrva7p15VABo17tHod3aiq6VNH00er9ZYlixpAKfWgUiofuBS413SeVvB74AvTIfzU4TZo9spC8FVKqSRgMLDKcJQWU0rZlVLpQDGwSGvdasdirKiVUl8rpTYd5uN8AK31PVrrBGA2cLOpnEdztONoes49QAONx2JZ7hyLEK1NKRUOfADcfsi/pr2K1tqptR5E47+cRyilWm1YqtV2IT9WWuuJbj51No27y9znwTgtdrTjUErNAM4FJmgrXhD4hWP4nngb2aDZoprGcz8AZmutPzSdpzVorcuVUkuAM4FWueBryaEPpVTPX3x6PpBpKsvxUEqdCfwFOE9rXW06jx+TDZotqOkC3KtAhtb6CdN5jodSKuanWV1KqRAaL1y3Wm9ZddbHB0BvGmcZ5ALXa6297gxIKZUFBAF7mx5a6Y2zVwCUUpOBZ4EYoBxI11qfYTTUMVBKnQ08xf82aH7QbKKWU0rNBcbTuKRmEXCf1vpVo6FaQCl1IrAM2Ejj33WAu5v2aPUqSqkBwH9p/PNlA97VWv+j1V7fikUthBDifyw59CGEEOJ/pKiFEMLipKiFEMLipKiFEMLipKiFEMLipKiFEMLipKiFEMLi/h95yyGcg55E7QAAAABJRU5ErkJggg==\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"x = np.arange(-3, 3.01, 0.1)\n",
"y = x ** 2\n",
"plt.plot(x, y)\n",
"plt.plot(2, 4, 'ro')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([4.])\n"
]
}
],
"source": [
"import torch\n",
"from torch.autograd import Variable\n",
"\n",
"# Answer (note: Variable is deprecated since PyTorch 0.4; torch.tensor([2.0], requires_grad=True) behaves the same)\n",
"x = Variable(torch.FloatTensor([2]), requires_grad=True)\n",
"y = x ** 2\n",
"y.backward()\n",
"print(x.grad)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next lesson we will start from derivatives and explore PyTorch's automatic differentiation (autograd) mechanism"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"* http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
"* http://cs231n.github.io/python-numpy-tutorial/"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
