You can not select more than 25 topics Topics must start with a chinese character,a letter or number, can include dashes ('-') and can be up to 35 characters long.

1-tensor.ipynb 17 kB

123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269270271272273274275276277278279280281282283284285286287288289290291292293294295296297298299300301302303304305306307308309310311312313314315316317318319320321322323324325326327328329330331332333334335336337338339340341342343344345346347348349350351352353354355356357358359360361362363364365366367368369370371372373374375376377378379380381382383384385386387388389390391392393394395396397398399400401402403404405406407408409410411412413414415416417418419420421422423424425426427428429430431432433434435436437438439440441442443444445446447448449450451452453454455456457458459460461462463464465466467468469470471472473474475476477478479480481482483484485486487488489490491492493494495496497498499500501502503504505506507508509510511512513514515516517518519520521522523524525526527528529530531532533534535536537538539540541542543544545546547548549550551552553554555556557558559560561562563564565566567568569570571572573574575576577578579580581582583584585586587588589590591592593594595596597598599600601602603604605606607608609610611612613614615616617618619620621622623624625626627628629630631632633634635636637638639640641642643644645646647648649650651652653654655656657658659660661662663664665666667668669670671672673674675676677678679680681682683684685686687688689690691692693694695696697698699700701702703704705706707708709710711712713714715716717718719720721722723724
  1. {
  2. "cells": [
  3. {
  4. "cell_type": "markdown",
  5. "metadata": {},
  6. "source": [
  7. "# PyTorch\n",
  8. "\n",
  9. "PyTorch是基于Python的科学计算包,其旨在服务两类场合:\n",
  10. "* 替代NumPy发挥GPU潜能\n",
  11. "* 提供了高度灵活性和效率的深度学习平台\n",
  12. "\n",
  13. "PyTorch的简洁设计使得它入门很简单,本部分内容在深入介绍PyTorch之前,先介绍一些PyTorch的基础知识,让大家能够对PyTorch有一个大致的了解,并能够用PyTorch搭建一个简单的神经网络,然后在深入学习如何使用PyTorch实现各类网络结构。在学习过程,可能部分内容暂时不太理解,可先不予以深究,后续的课程将会对此进行深入讲解。\n",
  14. "\n",
  15. "\n",
  16. "\n",
  17. "![PyTorch Demo](imgs/PyTorch.png)\n"
  18. ]
  19. },
  20. {
  21. "cell_type": "markdown",
  22. "metadata": {},
  23. "source": [
  24. "## 1. Tensor基本用法\n",
  25. "\n",
  26. "张量(Tensor)是一种专门的数据结构,非常类似于数组和矩阵。在PyTorch中,我们使用张量来编码模型的输入和输出,以及模型的参数。\n",
  27. "\n",
  28. "张量类似于`NumPy`的`ndarray`,不同之处在于张量可以在GPU或其他硬件加速器上运行。事实上,张量和NumPy数组通常可以共享相同的底层内存,从而消除了复制数据的需要(请参阅使用NumPy的桥接)。张量还针对自动微分进行了优化,在Autograd部分中看到更多关于这一点的内介绍。\n",
  29. "\n",
  30. "`variable`是一种可以不断变化的变量,符合反向传播,参数更新的属性。PyTorch的`variable`是一个存放会变化值的内存位置,里面的值会不停变化,像装糖果(糖果就是数据,即tensor)的盒子,糖果的数量不断变化。pytorch都是由tensor计算的,而tensor里面的参数是variable形式。\n",
  31. "\n",
  32. "PyTorch基础的数据是张量(Tensor),PyTorch 的很多操作好 NumPy 都是类似的,但是因为其能够在 GPU 上运行,所以有着比 NumPy 快很多倍的速度。本节内容主要包括 PyTorch 中的基本元素 Tensor 和 Variable 及其操作方式。"
  33. ]
  34. },
  35. {
  36. "cell_type": "markdown",
  37. "metadata": {},
  38. "source": [
  39. "### 1.1 Tensor定义与生成"
  40. ]
  41. },
  42. {
  43. "cell_type": "code",
  44. "execution_count": 2,
  45. "metadata": {},
  46. "outputs": [],
  47. "source": [
  48. "import torch\n",
  49. "import numpy as np"
  50. ]
  51. },
  52. {
  53. "cell_type": "code",
  54. "execution_count": 3,
  55. "metadata": {},
  56. "outputs": [],
  57. "source": [
  58. "# 创建一个 numpy ndarray\n",
  59. "numpy_tensor = np.random.randn(10, 20)"
  60. ]
  61. },
  62. {
  63. "cell_type": "markdown",
  64. "metadata": {},
  65. "source": [
  66. "可以使用下面两种方式将numpy的ndarray转换到tensor上"
  67. ]
  68. },
  69. {
  70. "cell_type": "code",
  71. "execution_count": 9,
  72. "metadata": {},
  73. "outputs": [],
  74. "source": [
  75. "pytorch_tensor1 = torch.tensor(numpy_tensor)\n",
  76. "pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
  77. ]
  78. },
  79. {
  80. "cell_type": "markdown",
  81. "metadata": {},
  82. "source": [
  83. "使用以上两种方法进行转换的时候,会直接将 NumPy ndarray 的数据类型转换为对应的 PyTorch Tensor 数据类型"
  84. ]
  85. },
  86. {
  87. "cell_type": "markdown",
  88. "metadata": {},
  89. "source": [
  90. "\n"
  91. ]
  92. },
  93. {
  94. "cell_type": "markdown",
  95. "metadata": {},
  96. "source": [
  97. "同时也可以使用下面的方法将 `PyTorch Tensor` 转换为 `NumPy ndarray`"
  98. ]
  99. },
  100. {
  101. "cell_type": "code",
  102. "execution_count": 5,
  103. "metadata": {},
  104. "outputs": [],
  105. "source": [
  106. "# 如果 pytorch tensor 在 cpu 上\n",
  107. "numpy_array = pytorch_tensor1.numpy()\n",
  108. "\n",
  109. "# 如果 pytorch tensor 在 gpu 上\n",
  110. "numpy_array = pytorch_tensor1.cpu().numpy()"
  111. ]
  112. },
  113. {
  114. "cell_type": "markdown",
  115. "metadata": {},
  116. "source": [
  117. "需要注意 GPU 上的 Tensor 不能直接转换为 NumPy ndarray,需要使用`.cpu()`先将 GPU 上的 Tensor 转到 CPU 上"
  118. ]
  119. },
  120. {
  121. "cell_type": "markdown",
  122. "metadata": {},
  123. "source": [
  124. "### 1.2 PyTorch Tensor 使用 GPU 加速\n",
  125. "\n",
  126. "我们可以使用以下两种方式将 Tensor 放到 GPU 上"
  127. ]
  128. },
  129. {
  130. "cell_type": "code",
  131. "execution_count": 7,
  132. "metadata": {},
  133. "outputs": [],
  134. "source": [
  135. "# 第一种方式是定义 cuda 数据类型\n",
  136. "dtype = torch.cuda.FloatTensor # 定义默认 GPU 的 数据类型\n",
  137. "gpu_tensor = torch.randn(10, 20).type(dtype)\n",
  138. "\n",
  139. "# 第二种方式更简单,推荐使用\n",
  140. "gpu_tensor = torch.randn(10, 20).cuda(0) # 将 tensor 放到第一个 GPU 上\n",
  141. "gpu_tensor = torch.randn(10, 20).cuda(1) # 将 tensor 放到第二个 GPU 上"
  142. ]
  143. },
  144. {
  145. "cell_type": "markdown",
  146. "metadata": {},
  147. "source": [
  148. "使用第一种方式将 tensor 放到 GPU 上的时候会将数据类型转换成定义的类型,而是用第二种方式能够直接将 tensor 放到 GPU 上,类型跟之前保持一致\n",
  149. "\n",
  150. "推荐在定义 tensor 的时候就明确数据类型,然后直接使用第二种方法将 tensor 放到 GPU 上"
  151. ]
  152. },
  153. {
  154. "cell_type": "markdown",
  155. "metadata": {},
  156. "source": [
  157. "而将 tensor 放回 CPU 的操作如下"
  158. ]
  159. },
  160. {
  161. "cell_type": "code",
  162. "execution_count": 8,
  163. "metadata": {},
  164. "outputs": [],
  165. "source": [
  166. "cpu_tensor = gpu_tensor.cpu()"
  167. ]
  168. },
  169. {
  170. "cell_type": "markdown",
  171. "metadata": {},
  172. "source": [
  173. "Tensor 属性的访问方式"
  174. ]
  175. },
  176. {
  177. "cell_type": "code",
  178. "execution_count": 9,
  179. "metadata": {},
  180. "outputs": [
  181. {
  182. "name": "stdout",
  183. "output_type": "stream",
  184. "text": [
  185. "torch.Size([10, 20])\n",
  186. "torch.Size([10, 20])\n"
  187. ]
  188. }
  189. ],
  190. "source": [
  191. "# 可以通过下面两种方式得到 tensor 的大小\n",
  192. "print(pytorch_tensor1.shape)\n",
  193. "print(pytorch_tensor1.size())"
  194. ]
  195. },
  196. {
  197. "cell_type": "code",
  198. "execution_count": 10,
  199. "metadata": {},
  200. "outputs": [
  201. {
  202. "name": "stdout",
  203. "output_type": "stream",
  204. "text": [
  205. "torch.FloatTensor\n",
  206. "torch.cuda.FloatTensor\n"
  207. ]
  208. }
  209. ],
  210. "source": [
  211. "# 得到 tensor 的数据类型\n",
  212. "print(pytorch_tensor1.type())\n",
  213. "print(gpu_tensor.type())"
  214. ]
  215. },
  216. {
  217. "cell_type": "code",
  218. "execution_count": 11,
  219. "metadata": {},
  220. "outputs": [
  221. {
  222. "name": "stdout",
  223. "output_type": "stream",
  224. "text": [
  225. "2\n"
  226. ]
  227. }
  228. ],
  229. "source": [
  230. "# 得到 tensor 的维度\n",
  231. "print(pytorch_tensor1.dim())"
  232. ]
  233. },
  234. {
  235. "cell_type": "code",
  236. "execution_count": 12,
  237. "metadata": {},
  238. "outputs": [
  239. {
  240. "name": "stdout",
  241. "output_type": "stream",
  242. "text": [
  243. "200\n"
  244. ]
  245. }
  246. ],
  247. "source": [
  248. "# 得到 tensor 的所有元素个数\n",
  249. "print(pytorch_tensor1.numel())"
  250. ]
  251. },
  252. {
  253. "cell_type": "markdown",
  254. "metadata": {},
  255. "source": [
  256. "\n"
  257. ]
  258. },
  259. {
  260. "cell_type": "markdown",
  261. "metadata": {},
  262. "source": [
  263. "## 2. Tensor的操作\n",
  264. "Tensor 操作中的 API 和 NumPy 非常相似,如果熟悉 NumPy 中的操作,那么 tensor 基本操作是一致的,下面列举其中的一些操作"
  265. ]
  266. },
  267. {
  268. "cell_type": "markdown",
  269. "metadata": {},
  270. "source": [
  271. "### 2.1 基本操作"
  272. ]
  273. },
  274. {
  275. "cell_type": "code",
  276. "execution_count": 13,
  277. "metadata": {},
  278. "outputs": [
  279. {
  280. "name": "stdout",
  281. "output_type": "stream",
  282. "text": [
  283. "tensor([[1., 1.],\n",
  284. " [1., 1.],\n",
  285. " [1., 1.]])\n"
  286. ]
  287. }
  288. ],
  289. "source": [
  290. "x = torch.ones(3, 2)\n",
  291. "print(x) # 这是一个float tensor"
  292. ]
  293. },
  294. {
  295. "cell_type": "code",
  296. "execution_count": 14,
  297. "metadata": {},
  298. "outputs": [
  299. {
  300. "name": "stdout",
  301. "output_type": "stream",
  302. "text": [
  303. "torch.FloatTensor\n"
  304. ]
  305. }
  306. ],
  307. "source": [
  308. "print(x.type())"
  309. ]
  310. },
  311. {
  312. "cell_type": "code",
  313. "execution_count": 15,
  314. "metadata": {},
  315. "outputs": [
  316. {
  317. "name": "stdout",
  318. "output_type": "stream",
  319. "text": [
  320. "tensor([[1, 1],\n",
  321. " [1, 1],\n",
  322. " [1, 1]])\n"
  323. ]
  324. }
  325. ],
  326. "source": [
  327. "# 将其转化为整形\n",
  328. "x = x.long()\n",
  329. "# x = x.type(torch.LongTensor)\n",
  330. "print(x)"
  331. ]
  332. },
  333. {
  334. "cell_type": "code",
  335. "execution_count": 16,
  336. "metadata": {},
  337. "outputs": [
  338. {
  339. "name": "stdout",
  340. "output_type": "stream",
  341. "text": [
  342. "tensor([[1., 1.],\n",
  343. " [1., 1.],\n",
  344. " [1., 1.]])\n"
  345. ]
  346. }
  347. ],
  348. "source": [
  349. "# 再将其转回 float\n",
  350. "x = x.float()\n",
  351. "# x = x.type(torch.FloatTensor)\n",
  352. "print(x)"
  353. ]
  354. },
  355. {
  356. "cell_type": "code",
  357. "execution_count": 17,
  358. "metadata": {},
  359. "outputs": [
  360. {
  361. "name": "stdout",
  362. "output_type": "stream",
  363. "text": [
  364. "tensor([[-1.2200, 0.9769, -2.3477],\n",
  365. " [ 1.0125, -1.3236, -0.2626],\n",
  366. " [-0.3501, 0.5753, 1.5657],\n",
  367. " [ 0.4823, -0.4008, -1.3442]])\n"
  368. ]
  369. }
  370. ],
  371. "source": [
  372. "x = torch.randn(4, 3)\n",
  373. "print(x)"
  374. ]
  375. },
  376. {
  377. "cell_type": "code",
  378. "execution_count": 18,
  379. "metadata": {
  380. "collapsed": true
  381. },
  382. "outputs": [],
  383. "source": [
  384. "# 沿着行取最大值\n",
  385. "max_value, max_idx = torch.max(x, dim=1)"
  386. ]
  387. },
  388. {
  389. "cell_type": "code",
  390. "execution_count": 19,
  391. "metadata": {},
  392. "outputs": [
  393. {
  394. "data": {
  395. "text/plain": [
  396. "tensor([0.9769, 1.0125, 1.5657, 0.4823])"
  397. ]
  398. },
  399. "execution_count": 19,
  400. "metadata": {},
  401. "output_type": "execute_result"
  402. }
  403. ],
  404. "source": [
  405. "# 每一行的最大值\n",
  406. "max_value"
  407. ]
  408. },
  409. {
  410. "cell_type": "code",
  411. "execution_count": 20,
  412. "metadata": {},
  413. "outputs": [
  414. {
  415. "data": {
  416. "text/plain": [
  417. "tensor([1, 0, 2, 0])"
  418. ]
  419. },
  420. "execution_count": 20,
  421. "metadata": {},
  422. "output_type": "execute_result"
  423. }
  424. ],
  425. "source": [
  426. "# 每一行最大值的下标\n",
  427. "max_idx"
  428. ]
  429. },
  430. {
  431. "cell_type": "code",
  432. "execution_count": 21,
  433. "metadata": {},
  434. "outputs": [
  435. {
  436. "name": "stdout",
  437. "output_type": "stream",
  438. "text": [
  439. "tensor([-2.5908, -0.5736, 1.7909, -1.2627])\n"
  440. ]
  441. }
  442. ],
  443. "source": [
  444. "# 沿着行对 x 求和\n",
  445. "sum_x = torch.sum(x, dim=1)\n",
  446. "print(sum_x)"
  447. ]
  448. },
  449. {
  450. "cell_type": "code",
  451. "execution_count": 22,
  452. "metadata": {},
  453. "outputs": [
  454. {
  455. "name": "stdout",
  456. "output_type": "stream",
  457. "text": [
  458. "torch.Size([4, 3])\n",
  459. "torch.Size([1, 4, 3])\n",
  460. "tensor([[[-1.2200, 0.9769, -2.3477],\n",
  461. " [ 1.0125, -1.3236, -0.2626],\n",
  462. " [-0.3501, 0.5753, 1.5657],\n",
  463. " [ 0.4823, -0.4008, -1.3442]]])\n"
  464. ]
  465. }
  466. ],
  467. "source": [
  468. "# 增加维度或者减少维度\n",
  469. "print(x.shape)\n",
  470. "x = x.unsqueeze(0) # 在第一维增加\n",
  471. "print(x.shape)\n",
  472. "print(x)"
  473. ]
  474. },
  475. {
  476. "cell_type": "code",
  477. "execution_count": 23,
  478. "metadata": {},
  479. "outputs": [
  480. {
  481. "name": "stdout",
  482. "output_type": "stream",
  483. "text": [
  484. "torch.Size([1, 1, 4, 3])\n"
  485. ]
  486. }
  487. ],
  488. "source": [
  489. "x = x.unsqueeze(1) # 在第二维增加\n",
  490. "print(x.shape)"
  491. ]
  492. },
  493. {
  494. "cell_type": "code",
  495. "execution_count": 24,
  496. "metadata": {},
  497. "outputs": [
  498. {
  499. "name": "stdout",
  500. "output_type": "stream",
  501. "text": [
  502. "torch.Size([1, 4, 3])\n",
  503. "tensor([[[-1.2200, 0.9769, -2.3477],\n",
  504. " [ 1.0125, -1.3236, -0.2626],\n",
  505. " [-0.3501, 0.5753, 1.5657],\n",
  506. " [ 0.4823, -0.4008, -1.3442]]])\n"
  507. ]
  508. }
  509. ],
  510. "source": [
  511. "x = x.squeeze(0) # 减少第一维\n",
  512. "print(x.shape)\n",
  513. "print(x)"
  514. ]
  515. },
  516. {
  517. "cell_type": "code",
  518. "execution_count": 25,
  519. "metadata": {},
  520. "outputs": [
  521. {
  522. "name": "stdout",
  523. "output_type": "stream",
  524. "text": [
  525. "torch.Size([4, 3])\n"
  526. ]
  527. }
  528. ],
  529. "source": [
  530. "x = x.squeeze() # 将 tensor 中所有的一维全部都去掉\n",
  531. "print(x.shape)"
  532. ]
  533. },
  534. {
  535. "cell_type": "code",
  536. "execution_count": 26,
  537. "metadata": {},
  538. "outputs": [
  539. {
  540. "name": "stdout",
  541. "output_type": "stream",
  542. "text": [
  543. "torch.Size([3, 4, 5])\n",
  544. "torch.Size([4, 3, 5])\n",
  545. "torch.Size([5, 3, 4])\n"
  546. ]
  547. }
  548. ],
  549. "source": [
  550. "x = torch.randn(3, 4, 5)\n",
  551. "print(x.shape)\n",
  552. "\n",
  553. "# 使用permute和transpose进行维度交换\n",
  554. "x = x.permute(1, 0, 2) # permute 可以重新排列 tensor 的维度\n",
  555. "print(x.shape)\n",
  556. "\n",
  557. "x = x.transpose(0, 2) # transpose 交换 tensor 中的两个维度\n",
  558. "print(x.shape)"
  559. ]
  560. },
  561. {
  562. "cell_type": "code",
  563. "execution_count": 27,
  564. "metadata": {},
  565. "outputs": [
  566. {
  567. "name": "stdout",
  568. "output_type": "stream",
  569. "text": [
  570. "torch.Size([3, 4, 5])\n",
  571. "torch.Size([12, 5])\n",
  572. "torch.Size([3, 20])\n"
  573. ]
  574. }
  575. ],
  576. "source": [
  577. "# 使用 view 对 tensor 进行 reshape\n",
  578. "x = torch.randn(3, 4, 5)\n",
  579. "print(x.shape)\n",
  580. "\n",
  581. "x = x.view(-1, 5) # -1 表示任意的大小,5 表示第二维变成 5\n",
  582. "print(x.shape)\n",
  583. "\n",
  584. "x = x.view(3, 20) # 重新 reshape 成 (3, 20) 的大小\n",
  585. "print(x.shape)"
  586. ]
  587. },
  588. {
  589. "cell_type": "code",
  590. "execution_count": 32,
  591. "metadata": {},
  592. "outputs": [
  593. {
  594. "name": "stdout",
  595. "output_type": "stream",
  596. "text": [
  597. "tensor([[-3.1321, -0.9734, 0.5307, 0.4975],\n",
  598. " [ 0.8537, 1.3424, 0.2630, -1.6658],\n",
  599. " [-1.0088, -2.2100, -1.9233, -0.3059]])\n"
  600. ]
  601. }
  602. ],
  603. "source": [
  604. "x = torch.randn(3, 4)\n",
  605. "y = torch.randn(3, 4)\n",
  606. "\n",
  607. "# 两个 tensor 求和\n",
  608. "z = x + y\n",
  609. "# z = torch.add(x, y)\n",
  610. "print(z)"
  611. ]
  612. },
  613. {
  614. "cell_type": "markdown",
  615. "metadata": {},
  616. "source": [
  617. "### 2.2 `inplace`操作\n",
  618. "另外,pytorch中大多数的操作都支持 `inplace` 操作,也就是可以直接对 tensor 进行操作而不需要另外开辟内存空间,方式非常简单,一般都是在操作的符号后面加`_`,比如"
  619. ]
  620. },
  621. {
  622. "cell_type": "code",
  623. "execution_count": 33,
  624. "metadata": {},
  625. "outputs": [
  626. {
  627. "name": "stdout",
  628. "output_type": "stream",
  629. "text": [
  630. "torch.Size([3, 3])\n",
  631. "torch.Size([1, 3, 3])\n",
  632. "torch.Size([3, 1, 3])\n"
  633. ]
  634. }
  635. ],
  636. "source": [
  637. "x = torch.ones(3, 3)\n",
  638. "print(x.shape)\n",
  639. "\n",
  640. "# unsqueeze 进行 inplace\n",
  641. "x.unsqueeze_(0)\n",
  642. "print(x.shape)\n",
  643. "\n",
  644. "# transpose 进行 inplace\n",
  645. "x.transpose_(1, 0)\n",
  646. "print(x.shape)"
  647. ]
  648. },
  649. {
  650. "cell_type": "code",
  651. "execution_count": 34,
  652. "metadata": {},
  653. "outputs": [
  654. {
  655. "name": "stdout",
  656. "output_type": "stream",
  657. "text": [
  658. "tensor([[1., 1., 1.],\n",
  659. " [1., 1., 1.],\n",
  660. " [1., 1., 1.]])\n",
  661. "tensor([[2., 2., 2.],\n",
  662. " [2., 2., 2.],\n",
  663. " [2., 2., 2.]])\n"
  664. ]
  665. }
  666. ],
  667. "source": [
  668. "x = torch.ones(3, 3)\n",
  669. "y = torch.ones(3, 3)\n",
  670. "print(x)\n",
  671. "\n",
  672. "# add 进行 inplace\n",
  673. "x.add_(y)\n",
  674. "print(x)"
  675. ]
  676. },
  677. {
  678. "cell_type": "markdown",
  679. "metadata": {},
  680. "source": [
  681. "## 练习题\n"
  682. ]
  683. },
  684. {
  685. "cell_type": "markdown",
  686. "metadata": {},
  687. "source": [
  688. "* 查阅[PyTorch的Tensor文档](http://pytorch.org/docs/tensors.html)了解 tensor 的数据类型,创建一个 float64、大小是 3 x 2、随机初始化的 tensor,将其转化为 numpy 的 ndarray,输出其数据类型\n",
  689. "* 查阅[PyTorch的Tensor文档](http://pytorch.org/docs/tensors.html)了解 tensor 更多的 API,创建一个 float32、4 x 4 的全为1的矩阵,将矩阵正中间 2 x 2 的矩阵,全部修改成2"
  690. ]
  691. },
  692. {
  693. "cell_type": "markdown",
  694. "metadata": {},
  695. "source": [
  696. "## 参考\n",
  697. "* [PyTorch官方说明文档](https://pytorch.org/docs/stable/)\n",
  698. "* http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
  699. "* http://cs231n.github.io/python-numpy-tutorial/"
  700. ]
  701. }
  702. ],
  703. "metadata": {
  704. "kernelspec": {
  705. "display_name": "Python 3",
  706. "language": "python",
  707. "name": "python3"
  708. },
  709. "language_info": {
  710. "codemirror_mode": {
  711. "name": "ipython",
  712. "version": 3
  713. },
  714. "file_extension": ".py",
  715. "mimetype": "text/x-python",
  716. "name": "python",
  717. "nbconvert_exporter": "python",
  718. "pygments_lexer": "ipython3",
  719. "version": "3.7.9"
  720. }
  721. },
  722. "nbformat": 4,
  723. "nbformat_minor": 2
  724. }

机器学习越来越多应用到飞行器、机器人等领域,其目的是利用计算机实现类似人类的智能,从而实现装备的智能化与无人化。本课程旨在引导学生掌握机器学习的基本知识、典型方法与技术,通过具体的应用案例激发学生对该学科的兴趣,鼓励学生能够从人工智能的角度来分析、解决飞行器、机器人所面临的问题和挑战。本课程主要内容包括Python编程基础,机器学习模型,无监督学习、监督学习、深度学习基础知识与实现,并学习如何利用机器学习解决实际问题,从而全面提升自我的《综合能力》。