

# Backpropagation

In the previous sections we introduced three models. The overall workflow was the same each time: define the model, load the data, specify a loss function $f$, and update the parameters with gradient descent. PyTorch provides very convenient automatic differentiation that computes derivatives for us. For fairly simple models we can also derive the parameter gradients by hand, but for a very complex model, say a 100-layer network, how could we possibly compute those gradients by hand efficiently? This is where the backpropagation algorithm comes in; automatic differentiation is, at its core, backpropagation.

Backpropagation is an efficient algorithm for computing gradients. In essence it is just an application of the chain rule, yet this simple and seemingly obvious method was only invented and popularized nearly 30 years after Rosenblatt proposed the perceptron. As Bengio put it: "Many seemingly obvious ideas only become obvious in hindsight."

Below we explain in detail what the backpropagation algorithm is.
  20. "## 链式法则\n",
  21. "\n",
  22. "首先来简单地介绍一下链式法则,考虑一个简单的函数,比如\n",
  23. "$$f(x, y, z) = (x + y)z$$\n",
  24. "\n",
  25. "我们当然可以直接求出这个函数的微分,但是这里我们要使用链式法则,令\n",
  26. "$$q=x+y$$\n",
  27. "\n",
  28. "那么\n",
  29. "\n",
  30. "$$f = qz$$\n",
  31. "\n",
  32. "对于这两个式子,我们可以分别求出他们的微分 \n",
  33. "\n",
  34. "$$\\frac{\\partial f}{\\partial q} = z, \\frac{\\partial f}{\\partial z}=q$$\n",
  35. "\n",
  36. "同时$q$是$x$和$y$的求和,所以我们能够得到\n",
  37. "\n",
  38. "$$\\frac{\\partial q}{x} = 1, \\frac{\\partial q}{y} = 1$$\n",
  39. "\n",
  40. "我们关心的问题是\n",
  41. "\n",
  42. "$$\\frac{\\partial f}{\\partial x}, \\frac{\\partial f}{\\partial y}, \\frac{\\partial f}{\\partial z}$$\n",
  43. "\n",
  44. "链式法则告诉我们如何来计算出他们的值\n",
  45. "\n",
  46. "$$\n",
  47. "\\frac{\\partial f}{\\partial x} = \\frac{\\partial f}{\\partial q}\\frac{\\partial q}{\\partial x}\n",
  48. "$$\n",
  49. "$$\n",
  50. "\\frac{\\partial f}{\\partial y} = \\frac{\\partial f}{\\partial q}\\frac{\\partial q}{\\partial y}\n",
  51. "$$\n",
  52. "$$\n",
  53. "\\frac{\\partial f}{\\partial z} = q\n",
  54. "$$\n",
  55. "\n",
  56. "通过链式法则我们知道如果我们需要对其中的元素求导,那么我们可以一层一层求导然后将结果乘起来,这就是链式法则的核心,也是反向传播算法的核心,更多关于链式法则的算法,可以访问这个[文档](https://zh.wikipedia.org/wiki/%E9%93%BE%E5%BC%8F%E6%B3%95%E5%88%99)"
  57. ]
  58. },
  59. {
  60. "cell_type": "markdown",
  61. "metadata": {},
  62. "source": [
  63. "## 反向传播算法\n",
  64. "\n",
  65. "了解了链式法则,我们就可以开始介绍反向传播算法了,本质上反向传播算法只是链式法则的一个应用。我们还是使用之前那个相同的例子$q=x+y, f=qz$,通过计算图可以将这个计算过程表达出来\n",
  66. "\n",
  67. "![](https://ws1.sinaimg.cn/large/006tNc79ly1fmiozcinyzj30c806vglk.jpg)\n",
  68. "\n",
  69. "上面绿色的数字表示其数值,下面红色的数字表示求出的梯度,我们可以一步一步看看反向传播算法的实现。首先从最后开始,梯度当然是1,然后计算\n",
  70. "\n",
  71. "$$\\frac{\\partial f}{\\partial q} = z = -4,\\ \\frac{\\partial f}{\\partial z} = q = 3$$\n",
  72. "\n",
  73. "接着我们计算\n",
  74. "$$\\frac{\\partial f}{\\partial x} = \\frac{\\partial f}{\\partial q} \\frac{\\partial q}{\\partial x} = -4 \\times 1 = -4,\\ \\frac{\\partial f}{\\partial y} = \\frac{\\partial f}{\\partial q} \\frac{\\partial q}{\\partial y} = -4 \\times 1 = -4$$\n",
  75. "\n",
  76. "这样一步一步我们就求出了$\\nabla f(x, y, z)$。\n",
  77. "\n",
  78. "直观上看反向传播算法是一个优雅的局部过程,每次求导只是对当前的运算求导,求解每层网络的参数都是通过链式法则将前面的结果求出不断迭代到这一层,所以说这是一个传播过程\n",
  79. "\n",
  80. "### Sigmoid函数举例\n",
  81. "\n",
  82. "下面我们通过Sigmoid函数来演示反向传播过程在一个复杂的函数上是如何进行的。\n",
  83. "\n",
  84. "$$\n",
  85. "f(w, x) = \\frac{1}{1+e^{-(w_0 x_0 + w_1 x_1 + w_2)}}\n",
  86. "$$\n",
  87. "\n",
  88. "我们需要求解出\n",
  89. "$$\\frac{\\partial f}{\\partial w_0}, \\frac{\\partial f}{\\partial w_1}, \\frac{\\partial f}{\\partial w_2}$$\n",
  90. "\n",
  91. "首先我们将这个函数抽象成一个计算图来表示,即\n",
  92. "$$\n",
  93. " f(x) = \\frac{1}{x} \\\\\n",
  94. " f_c(x) = 1 + x \\\\\n",
  95. " f_e(x) = e^x \\\\\n",
  96. " f_w(x) = -(w_0 x_0 + w_1 x_1 + w_2)\n",
  97. "$$\n",
  98. "\n",
  99. "这样我们就能够画出下面的计算图\n",
  100. "\n",
  101. "![](https://ws1.sinaimg.cn/large/006tNc79ly1fmip1va5qjj30lb08e0t0.jpg)\n",
  102. "\n",
  103. "同样上面绿色的数子表示数值,下面红色的数字表示梯度,我们从后往前计算一下各个参数的梯度。首先最后面的梯度是1,,然后经过$\\frac{1}{x}$这个函数,这个函数的梯度是$-\\frac{1}{x^2}$,所以往前传播的梯度是$1 \\times -\\frac{1}{1.37^2} = -0.53$,然后是$+1$这个操作,梯度不变,接着是$e^x$这个运算,它的梯度就是$-0.53 \\times e^{-1} = -0.2$,这样不断往后传播就能够求得每个参数的梯度。"
  104. ]
  105. }
  106. ],
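To make the walkthrough concrete, here is a short plain-Python sketch of the same forward and backward pass, one graph node at a time. The inputs $w = (2, -3, -3)$ and $x = (-1, -2)$ are an assumption chosen to reproduce the intermediate values $1.37$ and $-0.53$ quoted above:

```python
import math

w0, w1, w2 = 2.0, -3.0, -3.0  # assumed weights
x0, x1 = -1.0, -2.0           # assumed inputs

# forward pass, one graph node at a time
s = w0 * x0 + w1 * x1 + w2    # weighted sum        ->  1
a = -s                        # f_w: negation       -> -1
e = math.exp(a)               # f_e: exponential    ->  0.37
c = 1.0 + e                   # f_c: add one        ->  1.37
f = 1.0 / c                   # 1/x                 ->  0.73

# backward pass: multiply local gradients along the graph
df_dc = -1.0 / c**2           # d(1/x)/dx = -1/x^2  -> -0.53
df_de = df_dc * 1.0           # +1 node: gradient unchanged
df_da = df_de * e             # d(e^x)/dx = e^x     -> -0.20
df_ds = df_da * -1.0          # negation flips sign ->  0.20

# gradients with respect to the weights
df_dw0 = df_ds * x0           # -> -0.20
df_dw1 = df_ds * x1           # -> -0.39
df_dw2 = df_ds * 1.0          # ->  0.20
print(df_dw0, df_dw1, df_dw2)
```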
  107. "metadata": {
  108. "kernelspec": {
  109. "display_name": "Python 3",
  110. "language": "python",
  111. "name": "python3"
  112. },
  113. "language_info": {
  114. "codemirror_mode": {
  115. "name": "ipython",
  116. "version": 3
  117. },
  118. "file_extension": ".py",
  119. "mimetype": "text/x-python",
  120. "name": "python",
  121. "nbconvert_exporter": "python",
  122. "pygments_lexer": "ipython3",
  123. "version": "3.5.2"
  124. }
  125. },
  126. "nbformat": 4,
  127. "nbformat_minor": 2
  128. }

Machine learning is increasingly applied in fields such as aircraft and robotics, with the goal of using computers to achieve human-like intelligence and thereby make equipment intelligent and autonomous. This course aims to guide students through the basic concepts, typical methods, and techniques of machine learning, to spark interest in the field through concrete application cases, and to encourage students to analyze and solve the problems and challenges faced by aircraft and robots from the perspective of artificial intelligence. The main content includes Python programming fundamentals; machine learning models; the foundations and implementation of unsupervised learning, supervised learning, and deep learning; and how to apply machine learning to practical problems, thereby comprehensively strengthening students' overall abilities.