## Perceptron

The perceptron is a linear model for binary classification. Its input is the feature vector of an instance and its output is the instance's class, taking the values +1 and -1. A perceptron corresponds to a separating hyperplane that divides the instances in the input space into two classes. Perceptron learning aims to find this hyperplane; to that end it introduces a loss function based on misclassification and minimizes this loss with gradient descent. The perceptron learning algorithm is simple and easy to implement, and comes in a primal form and a dual form. Perceptron prediction applies the learned model to classify new instances, so the perceptron is a discriminative model. Proposed by Rosenblatt in 1957, it is the foundation of neural networks and support vector machines.

The perceptron imitates a neuron in a biological nervous system: it accepts signal inputs from multiple sources, converts them into a signal suitable for propagation, and outputs it (in living organisms this takes the form of an electrical signal).

![neuron](images/neuron.png)

* dendrites
* nucleus
* axon

The psychologist Rosenblatt conceived the perceptron as a simplified mathematical model of how a brain neuron works: it takes a set of binary inputs (from nearby neurons), multiplies each input by a continuous-valued weight (the synaptic strength of each nearby neuron), and sets a threshold; if the weighted sum of the inputs exceeds the threshold it outputs 1, otherwise 0 (analogous to whether or not a neuron fires). For a perceptron, most inputs are either raw data or the outputs of other perceptrons.

Donald Hebb put forward a surprising and far-reaching idea: that knowledge and learning occur in the brain mainly through the formation and modification of synapses between neurons, summarized as Hebb's rule:

> When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

The perceptron does not follow this idea exactly, **but by adjusting the weights on its inputs it admits a very simple and intuitive learning scheme: given a training set of input-output examples, the perceptron should "learn" a function: for each example, if the perceptron's output is too low relative to the example's label, increase its weights; if the output is too high, decrease them.**
## 1. The Perceptron Model

Assume the input space (of feature vectors) is $X \subseteq R^n$ and the output space is $Y=\{-1, +1\}$. An input $x \in X$ is the feature vector of an instance, corresponding to a point in the input space; an output $y \in Y$ is the class of the instance. The function from input space to output space

$$
f(x) = \mathrm{sign}(w \cdot x + b)
$$

is called a perceptron. The parameter $w$ is the weight vector and $b$ is the bias; $w \cdot x$ denotes the inner product of $w$ and $x$. $\mathrm{sign}$ is the sign function,

$$
\mathrm{sign}(x) =
\begin{cases}
+1, & x \geq 0 \\
-1, & x < 0
\end{cases}
$$

![sign_function](images/sign.png)

### 1.1 Geometric Interpretation

The perceptron is a linear classification model whose hypothesis space is the set of all linear classifiers defined on the feature space, i.e. the function set $\{f \mid f(x) = w \cdot x + b\}$. The linear equation $w \cdot x + b = 0$ corresponds to a hyperplane $S$ in the feature space $R^n$, where $w$ is the normal vector of the hyperplane and $b$ is its intercept. This hyperplane divides the feature space into two parts, and the points in the two parts form the positive and negative classes. $S$ is called the separating hyperplane, as shown below:

![perceptron_geometry_def](images/perceptron_geometry_def.png)

### 1.2 Biological Analogy

![perceptron_2](images/perceptron_2.PNG)
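Since the model is just a thresholded inner product, it can be written in a few lines. Below is a minimal sketch of the decision function (the names `decision_function` and `predict`, and the example hyperplane, are illustrative, not from this notebook):

```python
import numpy as np

def decision_function(w, b, x):
    """Signed activation w·x + b."""
    return np.dot(w, x) + b

def predict(w, b, x):
    """Perceptron output sign(w·x + b), mapped to {+1, -1}."""
    return 1 if decision_function(w, b, x) >= 0 else -1

# Example: the hyperplane x1 + x2 - 3 = 0 in R^2
w = np.array([1.0, 1.0])
b = -3.0
print(predict(w, b, np.array([2.0, 2.0])))  # 1: lies on the positive side
print(predict(w, b, np.array([0.0, 1.0])))  # -1: lies on the negative side
```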
## 2. Perceptron Learning Strategy

Assume the training data set is linearly separable. The goal of perceptron learning is to find a separating hyperplane that completely separates the positive and negative instances of the training data, i.e. to determine the parameters $w$ and $b$. This requires a learning strategy: define an (empirical) loss function and minimize it.

A natural choice of loss function is the total number of misclassified points. However, that loss is not a continuous, differentiable function of $w$ and $b$ and is hard to optimize. Another choice is the total distance from the misclassified points to the separating hyperplane.

First, the distance from an arbitrary point $x_0$ to the hyperplane is

$$
\frac{1}{\|w\|} | w \cdot x_0 + b |
$$

Second, for a misclassified point $(x_i, y_i)$ we have $-y_i(w \cdot x_i + b) > 0$.

Therefore, if $M$ is the set of points misclassified by the hyperplane $S$, the total distance from the misclassified points to $S$ is

$$
-\frac{1}{\|w\|} \sum_{x_i \in M} y_i (w \cdot x_i + b)
$$

Dropping the factor $1/\|w\|$ yields the loss function of perceptron learning.

### Empirical Risk Function

Given a data set $T = \{(x_1,y_1), (x_2, y_2), \dots, (x_N, y_N)\}$ (where $x_i \in R^n$, $y_i \in \{-1, +1\}$, $i=1,2,\dots,N$), the loss function for learning the perceptron $\mathrm{sign}(w \cdot x + b)$ is defined as

$$
L(w, b) = - \sum_{x_i \in M} y_i (w \cdot x_i + b)
$$

where $M$ is the set of misclassified points. This loss function is the [empirical risk function](https://blog.csdn.net/zhzhx1204/article/details/70163099) of perceptron learning.

Clearly $L(w,b)$ is non-negative: it is 0 when there are no misclassified points, and the fewer the misclassified points, the smaller its value. For any particular misclassified point the loss is a linear function of $w$ and $b$, and the point contributes 0 once it is correctly classified. Therefore, for a given training set $T$, the loss function $L(w,b)$ is a continuous, differentiable function of $w$ and $b$.
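To make the definition concrete, here is a minimal sketch that evaluates $L(w, b)$ on a toy data set (the helper name `perceptron_loss` and the numbers are assumptions for illustration):

```python
import numpy as np

def perceptron_loss(w, b, X, y):
    """L(w, b) = -sum of y_i * (w·x_i + b) over misclassified points."""
    margins = y * (X @ w + b)              # y_i * (w·x_i + b) for every point
    return -np.sum(margins[margins <= 0])  # only misclassified points contribute

X = np.array([[1.0, 3.0], [3.0, 3.0]])
y = np.array([1.0, -1.0])
w = np.array([0.0, 1.0])
b = -2.0
print(perceptron_loss(w, b, X, y))  # 1.0: the second point is misclassified
```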
## 3. The Perceptron Learning Algorithm

Optimization problem: given a data set $T = \{(x_1,y_1), (x_2, y_2), \dots, (x_N, y_N)\}$ (where $x_i \in R^n$, $y_i \in \{-1, +1\}$, $i=1,2,\dots,N$), find the parameters $w, b$ that minimize the loss function ($M$ is the set of misclassified points):

$$
\min_{w,b} L(w, b) = - \sum_{x_i \in M} y_i (w \cdot x_i + b)
$$

Perceptron learning is driven by misclassification; concretely it uses [stochastic gradient descent](https://blog.csdn.net/zbc1090549839/article/details/38149561). First choose arbitrary initial values $w_0$ and $b_0$, then minimize the objective iteratively. Rather than performing one gradient step over all the misclassified points in $M$ at once, each step randomly selects a single misclassified point and descends along its gradient.

Assuming the misclassified set $M$ is fixed, the gradient of the loss function $L(w,b)$ is

$$
\nabla_w L(w, b) = - \sum_{x_i \in M} y_i x_i \\
\nabla_b L(w, b) = - \sum_{x_i \in M} y_i
$$

Randomly select a misclassified point $(x_i,y_i)$ and update $w, b$:

$$
w = w + \eta y_i x_i \\
b = b + \eta y_i
$$

Here $\eta$ ($0 < \eta \leq 1$) is the step size, known in statistics as the learning rate. The larger the step size, the faster gradient descent moves toward the minimum; if the step size is too large, the iterates may overshoot the minimum and diverge, while if it is too small, reaching the minimum may take a very long time.

Intuitive interpretation: when an instance is misclassified, adjust $w$ and $b$ to move the separating hyperplane toward that point, reducing the distance between them, until the hyperplane passes the point and classifies it correctly.

Algorithm (a single update step is shown in the sketch after this list):

Input: $T=\{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$ (where $x_i \in X = R^n$, $y_i \in Y = \{-1, +1\}$, $i=1,2,\dots,N$) and the learning rate $\eta$

Output: $w, b$; the perceptron model $f(x)=\mathrm{sign}(w \cdot x + b)$

1. Initialize $w_0$, $b_0$
2. Select a sample $(x_i, y_i)$ from the training set
3. If $y_i(w \cdot x_i + b) \leq 0$:

   $w = w + \eta y_i x_i$

   $b = b + \eta y_i$

4. If every sample is correctly classified, or the number of iterations exceeds a preset limit, stop
5. Otherwise, go to step 2
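To see one update concretely, here is a minimal sketch of a single step of the rule above; the values of $\eta$, $w$, $b$, and the sample are assumed toy numbers, not taken from this notebook:

```python
import numpy as np

eta = 1.0                         # learning rate
w, b = np.array([0.0, 0.0]), 0.0

# A positive sample that the current model misclassifies:
# y * (w·x + b) = 0 <= 0 triggers an update.
x, y = np.array([3.0, 3.0]), 1.0
if y * (np.dot(w, x) + b) <= 0:
    w = w + eta * y * x           # w <- w + η y_i x_i
    b = b + eta * y               # b <- b + η y_i

print(w, b)  # [3. 3.] 1.0
```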
## 4. Example Program
```python
import random
import numpy as np

# sign function
def sign(v):
    if v > 0: return 1
    else: return -1

def perceptron_train(train_data, eta=0.5, n_iter=100):
    weight = [0, 0]          # weight vector
    bias = 0                 # bias
    learning_rate = eta      # learning rate

    train_num = n_iter       # number of iterations

    for i in range(train_num):
        # FIXME: picking one random sample per iteration converges slowly
        train = random.choice(train_data)
        x1, x2, y = train
        predict = sign(weight[0] * x1 + weight[1] * x2 + bias)  # model output

        if y * predict <= 0:  # misclassified point (y and predict are both +1/-1)
            weight[0] = weight[0] + learning_rate * y * x1      # update weights
            weight[1] = weight[1] + learning_rate * y * x2
            bias = bias + learning_rate * y                     # update bias
            print("update weight and bias: ", weight[0], weight[1], bias)

    return weight, bias

def perceptron_pred(data, w, b):
    y_pred = []
    for d in data:
        x1, x2, y = d
        yi = sign(w[0] * x1 + w[1] * x2 + b)
        y_pred.append(yi)

    return y_pred

# set training data
train_data = np.array([[1, 3, 1], [2, 5, 1], [3, 8, 1], [2, 6, 1],
                       [3, 1, -1], [4, 1, -1], [6, 2, -1], [7, 3, -1]])

# do training
w, b = perceptron_train(train_data)
print("w = ", w)
print("b = ", b)

# predict
y_pred = perceptron_pred(train_data, w, b)

print("ground_truth: ", list(train_data[:, 2]))
print("predicted: ", y_pred)
```

Output:

```
update weight and bias: 1.0 3.0 0.5
update weight and bias: -2.5 1.5 0.0
w = [-2.5, 1.5]
b = 0.0
ground_truth: [1, 1, 1, 1, -1, -1, -1, -1]
predicted: [1, 1, 1, 1, -1, -1, -1, -1]
```
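The fixed iteration count above means training can stop before convergence or keep looping after it. A variant that follows step 4 of the algorithm literally, stopping as soon as a full pass over the data makes no update, could look like this sketch (the function name `perceptron_train_until_converged` is an assumption, not part of the original notebook):

```python
def perceptron_train_until_converged(train_data, eta=0.5, max_epochs=100):
    """Train until one full pass makes no update, or max_epochs is reached."""
    weight, bias = [0, 0], 0
    for epoch in range(max_epochs):
        updated = False
        for x1, x2, y in train_data:
            if y * (weight[0] * x1 + weight[1] * x2 + bias) <= 0:  # misclassified
                weight[0] += eta * y * x1
                weight[1] += eta * y * x2
                bias += eta * y
                updated = True
        if not updated:  # every sample correctly classified: stop (step 4)
            break
    return weight, bias

# e.g. w, b = perceptron_train_until_converged(train_data)
```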
## Reference

* [Perceptron (Python implementation)](http://www.cnblogs.com/kaituorensheng/p/3561091.html)
* [Programming a Perceptron in Python](https://blog.dbrgn.ch/2013/3/26/perceptrons-in-python/)
* [Loss function, risk function, empirical risk minimization, and structural risk minimization](https://blog.csdn.net/zhzhx1204/article/details/70163099)

Machine learning is increasingly applied in fields such as aircraft and robotics, with the goal of using computers to achieve human-like intelligence and thereby make equipment intelligent and unmanned. This course guides students through the basic concepts, typical methods, and techniques of machine learning, and uses concrete application cases to spark interest in the discipline, encouraging students to analyze and solve the problems and challenges faced by aircraft and robots from the perspective of artificial intelligence. The main content covers Python programming fundamentals, machine learning models, the foundations and implementation of unsupervised learning, supervised learning, and deep learning, and how to apply machine learning to real problems, thereby comprehensively improving students' overall abilities.