{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tensor and Variable\n",
"\n",
"\n",
  10. "张量(Tensor)是一种专门的数据结构,非常类似于数组和矩阵。在PyTorch中,我们使用张量来编码模型的输入和输出,以及模型的参数。\n",
  11. "\n",
  12. "张量类似于`NumPy`的`ndarray`,不同之处在于张量可以在GPU或其他硬件加速器上运行。事实上,张量和NumPy数组通常可以共享相同的底层内存,从而消除了复制数据的需要(请参阅使用NumPy的桥接)。张量还针对自动微分进行了优化,在Autograd部分中看到更多关于这一点的内介绍。\n",
  13. "\n",
  14. "`variable`是一种可以不断变化的变量,符合反向传播,参数更新的属性。PyTorch的`variable`是一个存放会变化值的内存位置,里面的值会不停变化,像装糖果(糖果就是数据,即tensor)的盒子,糖果的数量不断变化。pytorch都是由tensor计算的,而tensor里面的参数是variable形式。\n"
  15. ]
  16. },
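{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this idea, assuming PyTorch >= 0.4 where `Variable` is merged into `Tensor`: a tensor created with `requires_grad=True` plays the role the old `Variable` used to.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# a tensor with requires_grad=True stores data and accumulates gradients,\n",
"# which is what Variable provided in older versions of PyTorch\n",
"w = torch.ones(2, 2, requires_grad=True)\n",
"loss = (w * 3).sum()\n",
"loss.backward()\n",
"print(w.grad) # d(loss)/dw, a 2 x 2 tensor filled with 3"
]
},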
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Basic tensor usage\n",
"\n",
"The fundamental data object in PyTorch is the tensor. Many PyTorch operations mirror NumPy's, but because tensors can run on a GPU they can be many times faster than NumPy. This section covers PyTorch's basic elements, Tensor and Variable, and how to work with them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.1 Defining and creating tensors"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import torch\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# create a NumPy ndarray\n",
"numpy_tensor = np.random.randn(10, 20)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A NumPy ndarray can be converted into a tensor in the following two ways:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"pytorch_tensor1 = torch.Tensor(numpy_tensor)\n",
"pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the difference between the two: `torch.from_numpy` keeps the ndarray's data type and shares its underlying memory, while `torch.Tensor` always produces a `FloatTensor` and copies the data."
]
},
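{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick check of that difference (a small sketch; the variable names are ours):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.arange(6).reshape(2, 3) # an int64 ndarray\n",
"t1 = torch.Tensor(a)           # always float32, copies the data\n",
"t2 = torch.from_numpy(a)       # keeps int64, shares memory with a\n",
"print(t1.type(), t2.type())\n",
"a[0, 0] = 100\n",
"print(t2[0, 0])                # the shared tensor sees the change"
]
},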
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A `PyTorch Tensor` can likewise be converted back to a `NumPy ndarray`:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# if the pytorch tensor is on the cpu\n",
"numpy_array = pytorch_tensor1.numpy()\n",
"\n",
"# if the pytorch tensor is on the gpu\n",
"numpy_array = pytorch_tensor1.cpu().numpy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that a Tensor on the GPU cannot be converted to a NumPy ndarray directly; call `.cpu()` first to move it back to the CPU."
]
},
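{
"cell_type": "markdown",
"metadata": {},
"source": [
"Relatedly, on PyTorch >= 0.4 a tensor that tracks gradients must also be detached from the autograd graph before conversion; a minimal sketch:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = torch.randn(3, requires_grad=True)\n",
"# x.numpy() would raise a RuntimeError because x tracks gradients,\n",
"# so detach it first; this works for CPU and GPU tensors alike\n",
"arr = x.detach().cpu().numpy()\n",
"print(arr.dtype)"
]
},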
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.2 Accelerating PyTorch tensors on the GPU\n",
"\n",
"A tensor can be moved onto the GPU in the following two ways"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# the first way is to define a cuda data type\n",
"dtype = torch.cuda.FloatTensor # the data type for the default GPU\n",
"gpu_tensor = torch.randn(10, 20).type(dtype)\n",
"\n",
"# the second way is simpler and is the recommended one\n",
"gpu_tensor = torch.randn(10, 20).cuda(0) # put the tensor on the first GPU\n",
"gpu_tensor = torch.randn(10, 20).cuda(1) # put the tensor on the second GPU"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the first method, moving the tensor to the GPU also converts it to the data type you defined, whereas the second method moves the tensor to the GPU while keeping its original type.\n",
"\n",
"It is recommended to fix the data type when the tensor is defined and then use the second method to move it to the GPU."
]
},
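{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a side note, PyTorch 0.4 introduced a device-based API that subsumes the `.cuda(n)` calls above; a minimal sketch, written so it also runs on a machine without CUDA:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pick the GPU if one is available, otherwise fall back to the CPU\n",
"device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\n",
"x = torch.randn(10, 20, device=device) # create directly on the device\n",
"y = torch.randn(10, 20).to(device) # or move an existing tensor\n",
"print(y.device)"
]
},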
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Moving a tensor back to the CPU works as follows"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"cpu_tensor = gpu_tensor.cpu()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tensor attributes can be accessed as follows"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([10, 20])\n",
"torch.Size([10, 20])\n"
]
}
],
"source": [
"# the size of a tensor can be obtained in either of these two ways\n",
"print(pytorch_tensor1.shape)\n",
"print(pytorch_tensor1.size())"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.FloatTensor\n",
"torch.cuda.FloatTensor\n"
]
}
],
"source": [
"# get the data type of a tensor\n",
"print(pytorch_tensor1.type())\n",
"print(gpu_tensor.type())"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2\n"
]
}
],
"source": [
"# get the number of dimensions of a tensor\n",
"print(pytorch_tensor1.dim())"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"200\n"
]
}
],
"source": [
"# get the total number of elements in a tensor\n",
"print(pytorch_tensor1.numel())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Tensor operations\n",
"The tensor API closely mirrors NumPy's: if you are familiar with NumPy operations, basic tensor operations are essentially the same. Some of them are listed below, and a short side-by-side sketch follows."
]
},
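{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the parallel concrete, a small sketch running the same broadcasting operation through both APIs (the example values are ours):\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.ones((2, 3))\n",
"b = np.arange(3)\n",
"print(a + b) # NumPy broadcasting\n",
"\n",
"ta = torch.ones(2, 3)\n",
"tb = torch.arange(3, dtype=torch.float32)\n",
"print(ta + tb) # the tensor API behaves the same way"
]
},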
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Basic operations"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1.],\n",
" [1., 1.],\n",
" [1., 1.]])\n"
]
}
],
"source": [
"x = torch.ones(3, 2)\n",
"print(x) # this is a float tensor"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.FloatTensor\n"
]
}
],
"source": [
"print(x.type())"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1, 1],\n",
" [1, 1],\n",
" [1, 1]])\n"
]
}
],
"source": [
"# convert it to an integer (long) tensor\n",
"x = x.long()\n",
"# x = x.type(torch.LongTensor)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1.],\n",
" [1., 1.],\n",
" [1., 1.]])\n"
]
}
],
"source": [
"# convert it back to float\n",
"x = x.float()\n",
"# x = x.type(torch.FloatTensor)\n",
"print(x)"
]
},
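{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since PyTorch 0.4, `.to()` can perform the same dtype casts; a brief sketch:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"x = torch.ones(3, 2)\n",
"print(x.to(torch.long).type()) # the same cast as x.long()\n",
"print(x.to(torch.float64).type()) # the same cast as x.double()"
]
},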
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[-1.2200, 0.9769, -2.3477],\n",
" [ 1.0125, -1.3236, -0.2626],\n",
" [-0.3501, 0.5753, 1.5657],\n",
" [ 0.4823, -0.4008, -1.3442]])\n"
]
}
],
"source": [
"x = torch.randn(4, 3)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# take the maximum along dim 1, i.e. the maximum of each row\n",
"max_value, max_idx = torch.max(x, dim=1)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([0.9769, 1.0125, 1.5657, 0.4823])"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the maximum value of each row\n",
"max_value"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"tensor([1, 0, 2, 0])"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the index of each row's maximum\n",
"max_idx"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([-2.5908, -0.5736, 1.7909, -1.2627])\n"
]
}
],
"source": [
"# sum x along dim 1, i.e. over each row\n",
"sum_x = torch.sum(x, dim=1)\n",
"print(sum_x)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([4, 3])\n",
"torch.Size([1, 4, 3])\n",
"tensor([[[-1.2200, 0.9769, -2.3477],\n",
" [ 1.0125, -1.3236, -0.2626],\n",
" [-0.3501, 0.5753, 1.5657],\n",
" [ 0.4823, -0.4008, -1.3442]]])\n"
]
}
],
"source": [
"# adding and removing dimensions\n",
"print(x.shape)\n",
"x = x.unsqueeze(0) # insert a new first dimension\n",
"print(x.shape)\n",
"print(x)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([1, 1, 4, 3])\n"
]
}
],
"source": [
"x = x.unsqueeze(1) # insert a new second dimension\n",
"print(x.shape)"
]
},
  495. },
  496. {
  497. "cell_type": "code",
  498. "execution_count": 24,
  499. "metadata": {},
  500. "outputs": [
  501. {
  502. "name": "stdout",
  503. "output_type": "stream",
  504. "text": [
  505. "torch.Size([1, 4, 3])\n",
  506. "tensor([[[-1.2200, 0.9769, -2.3477],\n",
  507. " [ 1.0125, -1.3236, -0.2626],\n",
  508. " [-0.3501, 0.5753, 1.5657],\n",
  509. " [ 0.4823, -0.4008, -1.3442]]])\n"
  510. ]
  511. }
  512. ],
  513. "source": [
  514. "x = x.squeeze(0) # 减少第一维\n",
  515. "print(x.shape)\n",
  516. "print(x)"
  517. ]
  518. },
  519. {
  520. "cell_type": "code",
  521. "execution_count": 25,
  522. "metadata": {},
  523. "outputs": [
  524. {
  525. "name": "stdout",
  526. "output_type": "stream",
  527. "text": [
  528. "torch.Size([4, 3])\n"
  529. ]
  530. }
  531. ],
  532. "source": [
  533. "x = x.squeeze() # 将 tensor 中所有的一维全部都去掉\n",
  534. "print(x.shape)"
  535. ]
  536. },
  537. {
  538. "cell_type": "code",
  539. "execution_count": 26,
  540. "metadata": {},
  541. "outputs": [
  542. {
  543. "name": "stdout",
  544. "output_type": "stream",
  545. "text": [
  546. "torch.Size([3, 4, 5])\n",
  547. "torch.Size([4, 3, 5])\n",
  548. "torch.Size([5, 3, 4])\n"
  549. ]
  550. }
  551. ],
  552. "source": [
  553. "x = torch.randn(3, 4, 5)\n",
  554. "print(x.shape)\n",
  555. "\n",
  556. "# 使用permute和transpose进行维度交换\n",
  557. "x = x.permute(1, 0, 2) # permute 可以重新排列 tensor 的维度\n",
  558. "print(x.shape)\n",
  559. "\n",
  560. "x = x.transpose(0, 2) # transpose 交换 tensor 中的两个维度\n",
  561. "print(x.shape)"
  562. ]
  563. },
  564. {
  565. "cell_type": "code",
  566. "execution_count": 27,
  567. "metadata": {},
  568. "outputs": [
  569. {
  570. "name": "stdout",
  571. "output_type": "stream",
  572. "text": [
  573. "torch.Size([3, 4, 5])\n",
  574. "torch.Size([12, 5])\n",
  575. "torch.Size([3, 20])\n"
  576. ]
  577. }
  578. ],
  579. "source": [
  580. "# 使用 view 对 tensor 进行 reshape\n",
  581. "x = torch.randn(3, 4, 5)\n",
  582. "print(x.shape)\n",
  583. "\n",
  584. "x = x.view(-1, 5) # -1 表示任意的大小,5 表示第二维变成 5\n",
  585. "print(x.shape)\n",
  586. "\n",
  587. "x = x.view(3, 20) # 重新 reshape 成 (3, 20) 的大小\n",
  588. "print(x.shape)"
  589. ]
  590. },
  591. {
  592. "cell_type": "code",
  593. "execution_count": 32,
  594. "metadata": {},
  595. "outputs": [
  596. {
  597. "name": "stdout",
  598. "output_type": "stream",
  599. "text": [
  600. "tensor([[-3.1321, -0.9734, 0.5307, 0.4975],\n",
  601. " [ 0.8537, 1.3424, 0.2630, -1.6658],\n",
  602. " [-1.0088, -2.2100, -1.9233, -0.3059]])\n"
  603. ]
  604. }
  605. ],
  606. "source": [
  607. "x = torch.randn(3, 4)\n",
  608. "y = torch.randn(3, 4)\n",
  609. "\n",
  610. "# 两个 tensor 求和\n",
  611. "z = x + y\n",
  612. "# z = torch.add(x, y)\n",
  613. "print(z)"
  614. ]
  615. },
  616. {
  617. "cell_type": "markdown",
  618. "metadata": {},
  619. "source": [
  620. "### 2.2 `inplace`操作\n",
  621. "另外,pytorch中大多数的操作都支持 `inplace` 操作,也就是可以直接对 tensor 进行操作而不需要另外开辟内存空间,方式非常简单,一般都是在操作的符号后面加`_`,比如"
  622. ]
  623. },
  624. {
  625. "cell_type": "code",
  626. "execution_count": 33,
  627. "metadata": {},
  628. "outputs": [
  629. {
  630. "name": "stdout",
  631. "output_type": "stream",
  632. "text": [
  633. "torch.Size([3, 3])\n",
  634. "torch.Size([1, 3, 3])\n",
  635. "torch.Size([3, 1, 3])\n"
  636. ]
  637. }
  638. ],
  639. "source": [
  640. "x = torch.ones(3, 3)\n",
  641. "print(x.shape)\n",
  642. "\n",
  643. "# unsqueeze 进行 inplace\n",
  644. "x.unsqueeze_(0)\n",
  645. "print(x.shape)\n",
  646. "\n",
  647. "# transpose 进行 inplace\n",
  648. "x.transpose_(1, 0)\n",
  649. "print(x.shape)"
  650. ]
  651. },
  652. {
  653. "cell_type": "code",
  654. "execution_count": 34,
  655. "metadata": {},
  656. "outputs": [
  657. {
  658. "name": "stdout",
  659. "output_type": "stream",
  660. "text": [
  661. "tensor([[1., 1., 1.],\n",
  662. " [1., 1., 1.],\n",
  663. " [1., 1., 1.]])\n",
  664. "tensor([[2., 2., 2.],\n",
  665. " [2., 2., 2.],\n",
  666. " [2., 2., 2.]])\n"
  667. ]
  668. }
  669. ],
  670. "source": [
  671. "x = torch.ones(3, 3)\n",
  672. "y = torch.ones(3, 3)\n",
  673. "print(x)\n",
  674. "\n",
  675. "# add 进行 inplace\n",
  676. "x.add_(y)\n",
  677. "print(x)"
  678. ]
  679. },
  680. {
  681. "cell_type": "markdown",
  682. "metadata": {},
  683. "source": [
  684. "## 练习题\n"
  685. ]
  686. },
  687. {
  688. "cell_type": "markdown",
  689. "metadata": {},
  690. "source": [
  691. "* 查阅[PyTorch的Tensor文档](http://pytorch.org/docs/tensors.html)了解 tensor 的数据类型,创建一个 float64、大小是 3 x 2、随机初始化的 tensor,将其转化为 numpy 的 ndarray,输出其数据类型\n",
  692. "* 查阅[PyTorch的Tensor文档](http://pytorch.org/docs/tensors.html)了解 tensor 更多的 API,创建一个 float32、4 x 4 的全为1的矩阵,将矩阵正中间 2 x 2 的矩阵,全部修改成2"
  693. ]
  694. },
  695. {
  696. "cell_type": "markdown",
  697. "metadata": {},
  698. "source": [
  699. "## 参考\n",
  700. "* http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
  701. "* http://cs231n.github.io/python-numpy-tutorial/"
  702. ]
  703. }
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
