# Softmax & Cross-Entropy Cost Function

Softmax is often used as the output layer of a neural network for classification tasks, and the key step in backpropagating through it is deriving its gradient. Working through this derivation also gives a deeper understanding of backpropagation and of how gradients flow through the network.
- "\n",
- "## 1. softmax function\n",
- "\n",
- "Softmax(Flexible maximum) function, usually in neural network, can work as the output layer of classification assignment. Actually we can think of softmax output as the probability of selecting several categories. For example, If I have a classification task that is divided into three classes, the Softmax function can output the probability of the selection of the three classes based on their relative size, and the probability sum is 1.\n",
- "\n",
- "The form of softmax function is:\n",
- "\n",
- "$$\n",
- "S_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n",
- "$$\n",
- "\n",
- "* $S_i$ is the class probability output that pass through the softmax\n",
- "* $z_k$ is the output of neuron\n",
- "\n",
- "More vivid expression is shown as the following graph:\n",
- "\n",
- "\n",
- "\n",
- "Softmax straightforward is the original output is $[3, 1, 3] $by softmax function role, is mapping the value of (0, 1), and these values are tired and 1 (meet the properties of probability), then we can understand it into probability, in the final selection of the output nodes, we can choose most probability (that is, value corresponding to the largest) node, as we predict the goal.\n",
- "softm\n",
- "\n",
- "First is the output of neuron, the following graph shows a neuron:\n",
- "\n",
- "\n",
- "\n",
- "we assume that the output of neuron is:\n",
- "\n",
- "$$\n",
- "z_i = \\sum_{j} w_{ij} x_{j} + b\n",
- "$$\n",
- "\n",
- "Among them $W_{ij}$ is the $jth$ weight of $ith$ neuron and $b$ is the bias. $z_i$ represent the $ith$ output of this network.\n",
- "\n",
- "Add a softmax function to the outpur we have:\n",
- "\n",
- "$$\n",
- "a_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n",
- "$$\n",
- "\n",
- "$a_i$ represent the $ith$ output value of softmax, while the right side uses softmax function.\n",
- "\n",
- "\n",
- "### 1.1 loss function\n",
- "\n",
- "In the propagation of neural networks, we need to calculate a loss function, this loss function is actually the error between the true value and the estimation of network. Only when we get the error, it is possible to know how to change the weight in the network.\n",
- "\n",
- "There are many form of loss function, what we used here is the cross entropy function, it is mainly because that the derivation reasult is quiet easy and convenient to calculate, and cross entropy can solve some lower learning rate problem**[Cross entropy function](https://blog.csdn.net/u014313009/article/details/51043064)**is this:\n",
- "\n",
- "$$\n",
- "C = - \\sum_i y_i ln a_i\n",
- "$$\n",
- "\n",
- "Among them $y_i$ represent the truly classification result.\n",
- "\n"

## 2. Derivation process

First, we need to be clear about what we want: the gradient of the loss $C$ with respect to the neuron outputs $z_i$, that is:
- "\n",
- "$$\n",
- "\\frac{\\partial C}{\\partial z_i}\n",
- "$$\n",
- "\n",
- "According to the derivation rule of composite function:\n",
- "\n",
- "$$\n",
- "\\frac{\\partial C}{\\partial z_i} = \\frac{\\partial C}{\\partial a_j} \\frac{\\partial a_j}{\\partial z_i}\n",
- "$$\n",
- "\n",
- "Someone may have question, why we have $a_j$ instead of $a_i$. We need to check the formula of $softmax$ here, because of the special characteristcs, its denominatorc contains all the output of neurons. Therefore, for the other output which do not equal to i, it also contains $z_i$, all the $a$ are needed to be included into the calcultaion range and the calcultaion backwards need to be divide into two parts, which is $i = j$ and $i\\ne j$.\n",
- "\n",
- "### 2.1 The partial derviation of $a_j$\n",
- "\n",
- "$$\n",
- "\\frac{\\partial C}{\\partial a_j} = \\frac{(\\partial -\\sum_j y_j ln a_j)}{\\partial a_j} = -\\sum_j y_j \\frac{1}{a_j}\n",
- "$$\n",
- "\n",
- "### 2.2 The partial derviation of $z_i$\n",
- "\n",
- "If $i=j$ :\n",
- "\n",
- "\\begin{eqnarray}\n",
- "\\frac{\\partial a_i}{\\partial z_i} & = & \\frac{\\partial (\\frac{e^{z_i}}{\\sum_k e^{z_k}})}{\\partial z_i} \\\\\n",
- " & = & \\frac{\\sum_k e^{z_k} e^{z_i} - (e^{z_i})^2}{\\sum_k (e^{z_k})^2} \\\\\n",
- " & = & (\\frac{e^{z_i}}{\\sum_k e^{z_k}} ) (1 - \\frac{e^{z_i}}{\\sum_k e^{z_k}} ) \\\\\n",
- " & = & a_i (1 - a_i)\n",
- "\\end{eqnarray}\n",
- "\n",
- "IF $i \\ne j$:\n",
- "\\begin{eqnarray}\n",
- "\\frac{\\partial a_j}{\\partial z_i} & = & \\frac{\\partial (\\frac{e^{z_j}}{\\sum_k e^{z_k}})}{\\partial z_i} \\\\\n",
- " & = & \\frac{0 \\cdot \\sum_k e^{z_k} - e^{z_j} \\cdot e^{z_i} }{(\\sum_k e^{z_k})^2} \\\\\n",
- " & = & - \\frac{e^{z_j}}{\\sum_k e^{z_k}} \\cdot \\frac{e^{z_i}}{\\sum_k e^{z_k}} \\\\\n",
- " & = & -a_j a_i\n",
- "\\end{eqnarray}\n",
- "\n",
- "When u, v are the dependent variable the derivation formula of derivative:\n",
- "$$\n",
- "(\\frac{u}{v})' = \\frac{u'v - uv'}{v^2} \n",
- "$$\n",
- "\n",
- "### 2.3 Derivation of the whole\n",
- "\n",
- "\\begin{eqnarray}\n",
- "\\frac{\\partial C}{\\partial z_i} & = & (-\\sum_j y_j \\frac{1}{a_j} ) \\frac{\\partial a_j}{\\partial z_i} \\\\\n",
- " & = & - \\frac{y_i}{a_i} a_i ( 1 - a_i) + \\sum_{j \\ne i} \\frac{y_j}{a_j} a_i a_j \\\\\n",
- " & = & -y_i + y_i a_i + \\sum_{j \\ne i} y_j a_i \\\\\n",
- " & = & -y_i + a_i \\sum_{j} y_j \\\\\n",
- " & = & -y_i + a_i\n",
- "\\end{eqnarray}"

## 3. Question

How can the softmax and cross-entropy cost function from this section be applied to the backpropagation (BP) method from the previous section?

## References

* Softmax & cross entropy
  * [Cross-entropy cost function (purpose and derivation)](https://blog.csdn.net/u014313009/article/details/51043064)
  * [A hand-worked example that walks you step by step through the softmax function and its derivation](https://www.jianshu.com/p/ffa51250ba2e)
  * [An easy-to-understand derivation of the softmax cross-entropy loss function](https://www.jianshu.com/p/c02a1fbffad6)