{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Softmax & 交叉熵代价函数\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`Softmax`经常被添加在分类任务的神经网络中的输出层,神经网络的反向传播中关键的步骤就是求导,从这个过程也可以更深刻地理解反向传播的过程,还可以对梯度传播的问题有更多的思考。\n", "\n", "## 1. softmax 函数\n", "\n", "`softmax`(柔性最大值)函数,一般在神经网络中, `softmax`可以作为分类任务的输出层。其实可以认为`softmax`输出的是几个类别选择的概率,比如有一个分类任务,要分为三个类,softmax函数可以根据它们相对的大小,输出三个类别选取的概率,并且概率和为1。\n", "\n", "Softmax从字面上来说,可以分成`soft`和`max`两个部分。`max`故名思议就是最大值的意思。Softmax的核心在于`soft`,而`soft`有软的含义,与之相对的是`hard`硬。很多场景中需要找出数组所有元素中值最大的元素,实质上都是求的`hardmax`。下面使用`Numpy`模块实现hardmax。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "5\n" ] } ], "source": [ "import numpy as np\n", "\n", "a = np.array([1, 2, 3, 4, 5]) # 创建ndarray数组\n", "a_max = np.max(a)\n", "print(a_max) # 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "通过上面的例子可以看出hardmax最大的特点就是只选出其中一个最大的值,即非黑即白。但是往往在实际中这种方式是不合情理的,比如对于文本分类来说,一篇文章或多或少包含着各种主题信息,我们更期望得到文章对于每个可能的文本类别的概率值(置信度),可以简单理解成属于对应类别的可信度。所以此时用到了soft的概念,**Softmax的含义就在于不再唯一的确定某一个最大值,而是为每个输出分类的结果都赋予一个概率值,表示属于每个类别的可能性。**\n", "\n", "softmax函数的公式是这种形式:\n", "\n", "$$\n", "S_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n", "$$\n", "\n", "* $S_i$是经过softmax的类别概率输出\n", "* $z_k$是神经元的输出\n", "\n", "\n", "更形象的如下图表示:\n", "\n", "![softmax_demo](images/softmax_demo.png)\n", "\n", "softmax直白来说就是将原来输出是$[3,1,-3]$通过softmax函数作用,就映射成为(0,1)的值,而这些值的累和为1(满足概率的性质),那么我们就可以将它理解成概率,在最后选取输出结点的时候,选取概率最大(也就是值对应最大的)结点,作为预测目标!\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "首先是神经元的输出,一个神经元如下图:\n", "\n", "![softmax_neuron](images/softmax_neuron.png)\n", "\n", "神经元的输出设为:\n", "\n", "$$\n", "z_i = \\sum_{j} w_{ij} x_{j} + w_b\n", "$$\n", "\n", "其中$W_{ij}$是第$i$个神经元的第$j$个权重,$w_b$是偏置。$z_i$表示该网络的第$i$个输出。**请注意这里没有使用sigmoid等激活函数。**\n", "\n", "给这个网络输出加上一个softmax函数,那就变成了这样:\n", "\n", "$$\n", "a_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n", "$$\n", "\n", "$a_i$代表softmax的第$i$个输出值,右侧套用了softmax函数。\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 交叉熵损失函数\n", "\n", "在神经网络反向传播中,需要设计一个损失函数,这个损失函数表示真实值与网络的估计值的误差,知道误差了,才能知道怎样去修改网络中的权重。\n", "\n", "神经网络的设计目的之一是为了使机器可以像人一样学习知识。**人在学习分析新事物时,当发现自己犯的错误越大时,改正的力度就越大**。比如投篮:当运动员发现自己的投篮方向离正确方向越远,那么他调整的投篮角度就应该越大,篮球就更容易投进篮筐。同理,我们希望:ANN在训练时,如果预测值与实际值的误差越大,那么在反向传播训练的过程中,各种参数调整的幅度就要更大,从而使训练更快收敛。然而,**如果使用二次代价函数训练ANN,看到的实际效果是,如果误差越大,参数调整的幅度可能更小,训练更缓慢。**\n", " \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "以一个神经元的二类分类训练为例,进行两次实验(神经网络常用的激活函数为`sigmoid`函数,该实验也采用该函数):输入一个相同的样本数据$x=1.0$(该样本对应的实际分类$y=0$);两次实验各自随机初始化参数,从而在各自的第一次前向传播后得到不同的输出值,形成不同的代价(误差):\n", "\n", "![cross_entropy_loss_1](images/cross_entropy_loss_1.png)\n", "实验1:第一次输出值为0.82\n", "\n", "![cross_entropy_loss_2](images/cross_entropy_loss_2.png)\n", "实验2:第一次输出值为0.98\n", "\n", "\n", "在实验1中,随机初始化参数,使得第一次输出值为0.82(该样本对应的实际值为0);经过300次迭代训练后,输出值由0.82降到0.09,逼近实际值。而在实验2中,第一次输出值为0.98,同样经过300迭代训练,输出值只降到了0.20。\n", "\n", "\n", "神经网络常用的激活函数为sigmoid函数,该函数的曲线如下所示:\n", "![cross_entropy_loss_sigmod.png](images/cross_entropy_loss_sigmod.png)\n", "\n", "如图所示,实验2的初始输出值(0.98)对应的梯度明显小于实验1的输出值(0.82),因此实验2的参数梯度下降得比实验1慢。这就是初始的代价(误差)越大,导致训练越慢的原因。与我们的期望不符,即:不能像人一样,错误越大,改正的幅度越大,从而学习得越快。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "损失函数可以有很多形式,这里用的是交叉熵函数,主要是由于这个求导结果比较简单,易于计算,并且交叉熵解决某些损失函数学习缓慢的问题。**[交叉熵函数](https://blog.csdn.net/u014313009/article/details/51043064)**是这样的:\n", "\n", "$$\n", "C = - \\sum_i y_i ln a_i\n", "$$\n", "\n", "其中$y_i$表示真实的分类结果。\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. 推导过程\n", "\n", "首先,我们要明确一下我们要求什么,我们要求的是我们的$loss$对于神经元输出($z_i$)的梯度,即:\n", "\n", "$$\n", "\\frac{\\partial C}{\\partial z_i}\n", "$$\n", "\n", "根据复合函数求导法则:\n", "\n", "$$\n", "\\frac{\\partial C}{\\partial z_i} = \\frac{\\partial C}{\\partial a_j} \\frac{\\partial a_j}{\\partial z_i}\n", "$$\n", "\n", "有个人可能有疑问了,这里为什么是$a_j$而不是$a_i$,这里要看一下$softmax$的公式了,因为$softmax$公式的特性,它的分母包含了所有神经元的输出,所以,对于不等于$i$的其他输出里面,也包含着$z_i$,所有的$a$都要纳入到计算范围中,并且后面的计算可以看到需要分为$i = j$和$i \\ne j$两种情况求导。\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.1 针对$a_j$的偏导\n", "\n", "$$\n", "\\frac{\\partial C}{\\partial a_j} = \\frac{(\\partial -\\sum_j y_j ln a_j)}{\\partial a_j} = -\\sum_j y_j \\frac{1}{a_j}\n", "$$\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 针对$z_i$的偏导\n", "\n", "如果 $i=j$ :\n", "\n", "\\begin{eqnarray}\n", "\\frac{\\partial a_i}{\\partial z_i} & = & \\frac{\\partial (\\frac{e^{z_i}}{\\sum_k e^{z_k}})}{\\partial z_i} \\\\\n", " & = & \\frac{\\sum_k e^{z_k} e^{z_i} - (e^{z_i})^2}{\\sum_k (e^{z_k})^2} \\\\\n", " & = & (\\frac{e^{z_i}}{\\sum_k e^{z_k}} ) (1 - \\frac{e^{z_i}}{\\sum_k e^{z_k}} ) \\\\\n", " & = & a_i (1 - a_i)\n", "\\end{eqnarray}\n", "\n", "如果 $i \\ne j$:\n", "\\begin{eqnarray}\n", "\\frac{\\partial a_j}{\\partial z_i} & = & \\frac{\\partial (\\frac{e^{z_j}}{\\sum_k e^{z_k}})}{\\partial z_i} \\\\\n", " & = & \\frac{0 \\cdot \\sum_k e^{z_k} - e^{z_j} \\cdot e^{z_i} }{(\\sum_k e^{z_k})^2} \\\\\n", " & = & - \\frac{e^{z_j}}{\\sum_k e^{z_k}} \\cdot \\frac{e^{z_i}}{\\sum_k e^{z_k}} \\\\\n", " & = & -a_j a_i\n", "\\end{eqnarray}\n", "\n", "当$u$,$v$都是变量的函数时的导数推导公式:\n", "$$\n", "(\\frac{u}{v})' = \\frac{u'v - uv'}{v^2} \n", "$$\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.3 整体的推导\n", "\n", "\\begin{eqnarray}\n", "\\frac{\\partial C}{\\partial z_i} & = & (-\\sum_j y_j \\frac{1}{a_j} ) \\frac{\\partial a_j}{\\partial z_i} \\\\\n", " & = & - \\frac{y_i}{a_i} a_i ( 1 - a_i) + \\sum_{j \\ne i} \\frac{y_j}{a_j} a_i a_j \\\\\n", " & = & -y_i + y_i a_i + \\sum_{j \\ne i} y_j a_i \\\\\n", " & = & -y_i + a_i \\sum_{j} y_j \\\\\n", " & = & -y_i + a_i\n", "\\end{eqnarray}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.4 参数更新\n", "\n", "误差与参数的偏导为:\n", "$$\n", "\\frac{\\partial C}{\\partial w_{ij}} = (-y_i + a_i) x_i\n", "$$\n", "\n", "误差项为:\n", "$$\n", "\\delta_i = -(-y_i + a_i)\n", "$$\n", "\n", "参数跟新公式为:\n", "$$\n", "w_{ij} = w_{ij} + \\eta \\delta_i x_i\n", "$$\n", "\n", "\n", "其中\n", "$$\n", "a_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n", "$$\n", "\n", "$$\n", "z_i = \\sum_{j} w_{ij} x_{j} + w_b\n", "$$\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.4 二次代价函数的更行方程\n", "\n", "最为对比,使用二次代价函数的更新方程为:\n", "\n", "$$\n", "\\delta_i = a_i (1-a_i) (y_i - a_i)\n", "$$\n", "\n", "$$\n", "w_{ji} = w_{ji} + \\eta \\delta_j x_{ji}\n", "$$\n", "\n", "需要注意这里 $w_{ji}$ 和上面的定义不太一样!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 问题\n", "如何将本节所讲的softmax,交叉熵代价函数应用到上节所讲的BP方法中?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 参考资料\n", "\n", "* [一文详解Softmax函数](https://zhuanlan.zhihu.com/p/105722023)\n", "* [损失函数:交叉熵详解](https://zhuanlan.zhihu.com/p/115277553)\n", "* [交叉熵代价函数(作用及公式推导)](https://blog.csdn.net/u014313009/article/details/51043064)\n", "* [手打例子一步一步带你看懂softmax函数以及相关求导过程](https://www.jianshu.com/p/ffa51250ba2e)\n", "* [简单易懂的softmax交叉熵损失函数求导](https://www.jianshu.com/p/c02a1fbffad6)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.9" } }, "nbformat": 4, "nbformat_minor": 2 }