{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyTorch\n",
"\n",
"PyTorch is a Python-based scientific computing package that serves two purposes:\n",
"* a replacement for NumPy that can harness the power of GPUs\n",
"* a deep learning platform offering great flexibility and speed\n",
"\n",
"PyTorch's clean design makes it easy to get started. Before diving deeper, this section introduces some PyTorch fundamentals so that you get a general feel for the library and can build a simple neural network with it; later we will study in detail how to implement various network architectures in PyTorch. If some parts are hard to follow at first, don't dwell on them: later lessons will cover them in depth.\n",
"\n",
"\n",
"\n",
"![PyTorch Demo](imgs/PyTorch.png)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Tensor Basics\n",
"\n",
"A tensor is a specialized data structure, very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model's parameters.\n",
"\n",
"Tensors are similar to NumPy's `ndarray`, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see the section on bridging with NumPy). Tensors are also optimized for automatic differentiation; we will see more about this in the Autograd section.\n",
"\n",
"A `Variable` holds a value that keeps changing during training, supporting backpropagation and parameter updates. Think of it as a box of candy (the candy being the data, i.e. a tensor) where the amount of candy keeps changing. All computation in PyTorch is done with tensors, whose values were historically wrapped as `Variable`s; since PyTorch 0.4, `Variable` has been merged into `Tensor`, so plain tensors play this role directly.\n",
"\n",
"The basic data object in PyTorch is the tensor. Many tensor operations mirror NumPy's, but because tensors can run on the GPU, they can be many times faster than NumPy. This section covers PyTorch's basic elements, Tensor and Variable, and how to work with them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.1 Defining and Creating Tensors"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import torch\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# create a NumPy ndarray\n",
"numpy_tensor = np.random.randn(10, 20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A NumPy ndarray can be converted to a tensor in either of the following two ways:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"pytorch_tensor1 = torch.tensor(numpy_tensor)     # copies the data\n",
"pytorch_tensor2 = torch.from_numpy(numpy_tensor) # shares memory with the ndarray"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Both conversions preserve the NumPy ndarray's data type, mapping it to the corresponding PyTorch tensor dtype. The difference is that `torch.tensor()` always copies the data, while `torch.from_numpy()` shares the underlying memory with the ndarray, so modifying one also modifies the other."
]
},
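{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this difference (the variable names below are just for illustration): an in-place change to the ndarray shows up in the tensor created with `from_numpy`, but not in the one created with `torch.tensor`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.zeros(3)\n",
"t_copy = torch.tensor(a)      # independent copy\n",
"t_share = torch.from_numpy(a) # shares memory with a\n",
"\n",
"a += 1          # modify the ndarray in place\n",
"print(t_copy)   # still all zeros\n",
"print(t_share)  # all ones, because the memory is shared"
]
},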
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A `PyTorch Tensor` can likewise be converted back to a `NumPy ndarray`:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# if the pytorch tensor is on the cpu\n",
"numpy_array = pytorch_tensor1.numpy()\n",
"\n",
"# if the pytorch tensor is on the gpu\n",
"numpy_array = pytorch_tensor1.cpu().numpy()"
]
},
  120. },
  121. {
  122. "cell_type": "markdown",
  123. "metadata": {},
  124. "source": [
  125. "需要注意 GPU 上的 Tensor 不能直接转换为 NumPy ndarray,需要使用`.cpu()`先将 GPU 上的 Tensor 转到 CPU 上"
  126. ]
  127. },
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.2 Accelerating PyTorch Tensors with the GPU\n",
"\n",
"A tensor can be placed on the GPU in either of the following two ways:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# the first way is to define a cuda data type\n",
"dtype = torch.cuda.FloatTensor # define the default GPU data type\n",
"gpu_tensor = torch.randn(10, 20).type(dtype)\n",
"\n",
"# the second way is simpler and recommended\n",
"gpu_tensor = torch.randn(10, 20).cuda(0) # put the tensor on the first GPU\n",
"gpu_tensor = torch.randn(10, 20).cuda(1) # put the tensor on the second GPU"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first way converts the tensor to the defined data type as it moves it to the GPU, whereas the second way moves the tensor to the GPU directly while keeping its original type.\n",
"\n",
"It is recommended to set the data type explicitly when creating a tensor and then use the second method to move it to the GPU."
]
},
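{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, code written for PyTorch 0.4 and later usually uses `torch.device` together with `.to()` rather than the `.cuda(n)` style above; a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pick the GPU if one is available, otherwise fall back to the CPU\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"\n",
"# create the tensor directly on that device ...\n",
"gpu_tensor = torch.randn(10, 20, device=device)\n",
"\n",
"# ... or move an existing tensor to it\n",
"gpu_tensor = torch.randn(10, 20).to(device)"
]
},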
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Moving a tensor back to the CPU works as follows:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"cpu_tensor = gpu_tensor.cpu()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Accessing tensor attributes:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([10, 20])\n",
"torch.Size([10, 20])\n"
]
}
],
"source": [
"# the size of a tensor can be obtained in the following two ways\n",
"print(pytorch_tensor1.shape)\n",
"print(pytorch_tensor1.size())"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.FloatTensor\n",
"torch.cuda.FloatTensor\n"
]
}
],
"source": [
"# get the data type of a tensor\n",
"print(pytorch_tensor1.type())\n",
"print(gpu_tensor.type())"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2\n"
]
}
],
"source": [
"# get the number of dimensions of a tensor\n",
"print(pytorch_tensor1.dim())"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"200\n"
]
}
],
"source": [
"# get the total number of elements in a tensor\n",
"print(pytorch_tensor1.numel())"
]
},
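{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same information is also exposed as plain attributes, which is the more common style in recent PyTorch versions; a brief sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(pytorch_tensor1.dtype)   # data type, e.g. torch.float64\n",
"print(pytorch_tensor1.device)  # device the tensor lives on, e.g. cpu\n",
"print(pytorch_tensor1.ndim)    # number of dimensions, same as .dim()"
]
},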
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Tensor Operations\n",
"The tensor API is very similar to NumPy's; if you are familiar with NumPy, the basic tensor operations work much the same way. Some of them are listed below."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Basic Operations"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1.],\n",
"        [1., 1.],\n",
"        [1., 1.]])\n"
]
}
],
"source": [
"x = torch.ones(3, 2)\n",
"print(x) # this is a float tensor"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.FloatTensor\n"
]
}
],
"source": [
"print(x.type())"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1, 1],\n",
"        [1, 1],\n",
"        [1, 1]])\n"
]
}
],
"source": [
"# convert it to an integer tensor\n",
"x = x.long()\n",
"# x = x.type(torch.LongTensor)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1.],\n",
"        [1., 1.],\n",
"        [1., 1.]])\n"
]
}
],
"source": [
"# convert it back to float\n",
"x = x.float()\n",
"# x = x.type(torch.FloatTensor)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[-1.2200,  0.9769, -2.3477],\n",
"        [ 1.0125, -1.3236, -0.2626],\n",
"        [-0.3501,  0.5753,  1.5657],\n",
"        [ 0.4823, -0.4008, -1.3442]])\n"
]
}
],
"source": [
"x = torch.randn(4, 3)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# take the maximum along dim 1, i.e. the max of each row\n",
"max_value, max_idx = torch.max(x, dim=1)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([0.9769, 1.0125, 1.5657, 0.4823])"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the maximum value of each row\n",
"max_value"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([1, 0, 2, 0])"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the index of the maximum value in each row\n",
"max_idx"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([-2.5908, -0.5736,  1.7909, -1.2627])\n"
]
}
],
"source": [
"# sum x along dim 1 (the sum of each row)\n",
"sum_x = torch.sum(x, dim=1)\n",
"print(sum_x)"
]
},
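{
"cell_type": "markdown",
"metadata": {},
"source": [
"A brief aside: reductions such as `max` and `sum` drop the reduced dimension by default. Passing `keepdim=True` keeps it as a size-1 dimension, which is often convenient for broadcasting; a small sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sum_keep = torch.sum(x, dim=1, keepdim=True)\n",
"print(sum_keep.shape)  # torch.Size([4, 1]) instead of torch.Size([4])"
]
},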
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([4, 3])\n",
"torch.Size([1, 4, 3])\n",
"tensor([[[-1.2200,  0.9769, -2.3477],\n",
"         [ 1.0125, -1.3236, -0.2626],\n",
"         [-0.3501,  0.5753,  1.5657],\n",
"         [ 0.4823, -0.4008, -1.3442]]])\n"
]
}
],
"source": [
"# add or remove dimensions\n",
"print(x.shape)\n",
"x = x.unsqueeze(0) # add a dimension at position 0\n",
"print(x.shape)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([1, 1, 4, 3])\n"
]
}
],
"source": [
"x = x.unsqueeze(1) # add a dimension at position 1\n",
"print(x.shape)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([1, 4, 3])\n",
"tensor([[[-1.2200,  0.9769, -2.3477],\n",
"         [ 1.0125, -1.3236, -0.2626],\n",
"         [-0.3501,  0.5753,  1.5657],\n",
"         [ 0.4823, -0.4008, -1.3442]]])\n"
]
}
],
"source": [
"x = x.squeeze(0) # remove dimension 0 (only possible because it has size 1)\n",
"print(x.shape)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([4, 3])\n"
]
}
],
"source": [
"x = x.squeeze() # remove all size-1 dimensions from the tensor\n",
"print(x.shape)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([3, 4, 5])\n",
"torch.Size([4, 3, 5])\n",
"torch.Size([5, 3, 4])\n"
]
}
],
"source": [
"x = torch.randn(3, 4, 5)\n",
"print(x.shape)\n",
"\n",
"# use permute and transpose to rearrange dimensions\n",
"x = x.permute(1, 0, 2) # permute reorders all dimensions of the tensor\n",
"print(x.shape)\n",
"\n",
"x = x.transpose(0, 2) # transpose swaps two dimensions of the tensor\n",
"print(x.shape)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([3, 4, 5])\n",
"torch.Size([12, 5])\n",
"torch.Size([3, 20])\n"
]
}
],
"source": [
"# use view to reshape a tensor\n",
"x = torch.randn(3, 4, 5)\n",
"print(x.shape)\n",
"\n",
"x = x.view(-1, 5) # -1 lets PyTorch infer that dimension; the second dimension becomes 5\n",
"print(x.shape)\n",
"\n",
"x = x.view(3, 20) # reshape to size (3, 20)\n",
"print(x.shape)"
]
},
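{
"cell_type": "markdown",
"metadata": {},
"source": [
"One caveat worth knowing: `view` requires the tensor's memory to be contiguous, and operations such as `transpose` and `permute` can break contiguity. In that case call `.contiguous()` first, or use `.reshape()`, which copies only when necessary; a small sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y = torch.randn(3, 4).transpose(0, 1) # no longer contiguous\n",
"# y.view(12)                # would raise a RuntimeError\n",
"z = y.contiguous().view(12) # works: make the memory contiguous first\n",
"z = y.reshape(12)           # works: reshape copies automatically if needed"
]
},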
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[-3.1321, -0.9734,  0.5307,  0.4975],\n",
"        [ 0.8537,  1.3424,  0.2630, -1.6658],\n",
"        [-1.0088, -2.2100, -1.9233, -0.3059]])\n"
]
}
],
"source": [
"x = torch.randn(3, 4)\n",
"y = torch.randn(3, 4)\n",
"\n",
"# add two tensors\n",
"z = x + y\n",
"# z = torch.add(x, y)\n",
"print(z)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.2 `inplace` Operations\n",
"In addition, most operations in PyTorch have `inplace` variants that modify a tensor directly instead of allocating new memory. The convention is simple: append `_` to the name of the operation, for example:"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([3, 3])\n",
"torch.Size([1, 3, 3])\n",
"torch.Size([3, 1, 3])\n"
]
}
],
"source": [
"x = torch.ones(3, 3)\n",
"print(x.shape)\n",
"\n",
"# in-place unsqueeze\n",
"x.unsqueeze_(0)\n",
"print(x.shape)\n",
"\n",
"# in-place transpose\n",
"x.transpose_(1, 0)\n",
"print(x.shape)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1., 1.],\n",
"        [1., 1., 1.],\n",
"        [1., 1., 1.]])\n",
"tensor([[2., 2., 2.],\n",
"        [2., 2., 2.],\n",
"        [2., 2., 2.]])\n"
]
}
],
"source": [
"x = torch.ones(3, 3)\n",
"y = torch.ones(3, 3)\n",
"print(x)\n",
"\n",
"# in-place add\n",
"x.add_(y)\n",
"print(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercises\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* Consult the [PyTorch Tensor documentation](http://pytorch.org/docs/tensors.html) to learn about tensor data types. Create a randomly initialized float64 tensor of size 3 x 2, convert it to a NumPy ndarray, and print its data type.\n",
"* Consult the [PyTorch Tensor documentation](http://pytorch.org/docs/tensors.html) to learn more of the tensor API. Create a float32, 4 x 4 matrix of all ones, then set its central 2 x 2 block to all 2s. (One possible solution sketch follows below.)"
]
},
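{
"cell_type": "markdown",
"metadata": {},
"source": [
"One possible solution sketch for the two exercises (other approaches are equally valid):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# exercise 1: random float64 tensor -> NumPy ndarray\n",
"t = torch.randn(3, 2, dtype=torch.float64)\n",
"arr = t.numpy()\n",
"print(arr.dtype)  # float64\n",
"\n",
"# exercise 2: 4 x 4 ones, central 2 x 2 block set to 2\n",
"m = torch.ones(4, 4, dtype=torch.float32)\n",
"m[1:3, 1:3] = 2\n",
"print(m)"
]
},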
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"* [Official PyTorch documentation](https://pytorch.org/docs/stable/)\n",
"* http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
"* http://cs231n.github.io/python-numpy-tutorial/"
]
}
],
  715. "metadata": {
  716. "kernelspec": {
  717. "display_name": "Python 3",
  718. "language": "python",
  719. "name": "python3"
  720. },
  721. "language_info": {
  722. "codemirror_mode": {
  723. "name": "ipython",
  724. "version": 3
  725. },
  726. "file_extension": ".py",
  727. "mimetype": "text/x-python",
  728. "name": "python",
  729. "nbconvert_exporter": "python",
  730. "pygments_lexer": "ipython3",
  731. "version": "3.5.4"
  732. }
  733. },
  734. "nbformat": 4,
  735. "nbformat_minor": 2
  736. }

Machine learning is increasingly applied in fields such as aircraft and robotics, with the goal of using computers to achieve human-like intelligence and thereby make equipment intelligent and autonomous. This course guides students through the fundamental concepts, typical methods, and techniques of machine learning, uses concrete application cases to spark interest in the discipline, and encourages students to analyze and solve the problems and challenges faced by aircraft and robots from the perspective of artificial intelligence. The main content covers Python programming fundamentals, machine learning models, the basics and implementation of unsupervised learning, supervised learning, and deep learning, and how to apply machine learning to real-world problems, thereby comprehensively improving students' overall competence.