@@ -3,3 +3,4 @@ | |||
*.tar.gz | |||
*.pth | |||
__pycache__ | |||
fig-res |
@@ -36,6 +36,11 @@ Python is an easy-to-learn, powerful, general-purpose scripting language.
## References
### Video tutorials
* [《90分钟学会Python》](https://www.bilibili.com/video/BV1Uz4y167xY)
* [《零基础入门学习Python》video tutorial](https://www.bilibili.com/video/BV1c4411e77t)
### Tutorials
* [Installing the Python environment](../references_tips/InstallPython.md)
* [IPython Notebooks to learn Python](https://github.com/rajathkmp/Python-Lectures)
* [Liao Xuefeng's Python tutorial](https://www.liaoxuefeng.com/wiki/1016959663602400)
@@ -1,4 +1,4 @@ | |||
# numpy, matplotlib, scipy and other common libraries
# Numpy, Matplotlib, Scipy and other common libraries
## Contents
* [numpy tutorial](1-numpy_tutorial.ipynb)
@@ -1,3 +1,3 @@ | |||
0.81041 0.69606 0.42944 | |||
0.99033 0.60317 0.82435 | |||
0.70689 0.05605 0.53930 | |||
0.34743 0.34666 0.67796 | |||
0.37776 0.74529 0.44639 | |||
0.70970 0.54722 0.96401 |
@@ -0,0 +1,14 @@ | |||
# kNN classification
The k-Nearest Neighbor (kNN) classification algorithm is a theoretically mature method and one of the simplest machine learning algorithms. Its idea: **if the majority of the k most similar samples (i.e., the nearest neighbors in feature space) of a sample belong to one category, then the sample belongs to that category as well**.
kNN can be used not only for classification but also for regression: find the `k` nearest neighbors of a sample and assign the average of their attribute values to it. A more useful variant weights each neighbor's influence by its distance, e.g. with weights inversely proportional to distance.

## Contents
* [knn_classification](knn_classification.ipynb)
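As a minimal illustration of the idea above (a sketch with made-up toy data, not the notebook's code), a distance-weighted kNN classifier fits in a few lines of NumPy:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, eps=1e-8):
    """Classify x by a distance-weighted vote of its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k nearest neighbors
    weights = 1.0 / (dists[nearest] + eps)        # closer neighbors get larger weights
    votes = {}
    for idx, w in zip(nearest, weights):
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + w
    return max(votes, key=votes.get)

# Toy data: two small clusters
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.8, 0.9])))    # -> 1
```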
@@ -322,7 +322,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 3, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import numpy as np\n", | |||
@@ -479,7 +481,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 9, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# split train / test data\n", | |||
@@ -570,7 +574,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -16,7 +16,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 1, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"%matplotlib inline\n", | |||
@@ -206,7 +208,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -1,8 +1,19 @@ | |||
## k-Means
k-Means is one of the most classic algorithms in unsupervised learning. It clusters n data points into k clusters so that the total distance from each data point to the center of its cluster is minimized. In practice this problem is NP-hard, so many heuristic methods are used to solve it while trying to avoid poor local minima.

## Contents
TODO: add a Bag-of-Words explanation and example program (https://blog.csdn.net/wsj998689aa/article/details/47089153)
* [k-Means: principle and algorithm](1-k-means.ipynb)
* [Application: image compression](2-kmeans-color-vq.ipynb)
* [Comparison of clustering algorithms](3-ClusteringAlgorithms.ipynb)
## References
* [如何使用 Keras 实现无监督聚类](http://m.sohu.com/a/236221126_717210)
* [Bag-of-words模型入门](https://blog.csdn.net/wsj998689aa/article/details/47089153)
@@ -0,0 +1,17 @@ | |||
# Logistic regression
The Logistic Regression (LR) model simply applies a logistic function on top of linear regression, but it is precisely this logistic function that allows the model to output class probabilities. The essence of logistic regression: assume the data follow this distribution, then estimate the parameters by maximum likelihood.

## Contents
* [Linear regression: least squares](1-Least_squares.ipynb)
* [Logistic regression](2-Logistic_regression.ipynb)
* [Feature processing: dimensionality reduction](3-PCA_and_Logistic_Regression.ipynb)
@@ -719,7 +719,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 9, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import numpy as np\n", | |||
@@ -1024,7 +1026,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -11,13 +11,13 @@ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"softmax经常被添加在分类任务的神经网络中的输出层,神经网络的反向传播中关键的步骤就是求导,从这个过程也可以更深刻地理解反向传播的过程,还可以对梯度传播的问题有更多的思考。\n", | |||
"`Softmax`经常被添加在分类任务的神经网络中的输出层,神经网络的反向传播中关键的步骤就是求导,从这个过程也可以更深刻地理解反向传播的过程,还可以对梯度传播的问题有更多的思考。\n", | |||
"\n", | |||
"## 1. softmax 函数\n", | |||
"\n", | |||
"softmax(柔性最大值)函数,一般在神经网络中, softmax可以作为分类任务的输出层。其实可以认为softmax输出的是几个类别选择的概率,比如我有一个分类任务,要分为三个类,softmax函数可以根据它们相对的大小,输出三个类别选取的概率,并且概率和为1。\n", | |||
"`softmax`(柔性最大值)函数,一般在神经网络中, `softmax`可以作为分类任务的输出层。其实可以认为`softmax`输出的是几个类别选择的概率,比如有一个分类任务,要分为三个类,softmax函数可以根据它们相对的大小,输出三个类别选取的概率,并且概率和为1。\n", | |||
"\n", | |||
"Softmax从字面上来说,可以分成`soft`和`max`两个部分。`max`故名思议就是最大值的意思。Softmax的核心在于`soft`,而`soft`有软的含义,与之相对的是`hard`硬。很多场景中需要我们找出数组所有元素中值最大的元素,实质上都是求的`hardmax`。下面使用`Numpy`模块实现hardmax。" | |||
"Softmax从字面上来说,可以分成`soft`和`max`两个部分。`max`故名思议就是最大值的意思。Softmax的核心在于`soft`,而`soft`有软的含义,与之相对的是`hard`硬。很多场景中需要找出数组所有元素中值最大的元素,实质上都是求的`hardmax`。下面使用`Numpy`模块实现hardmax。" | |||
] | |||
}, | |||
{ | |||
@@ -62,7 +62,7 @@ | |||
"\n", | |||
"\n", | |||
"\n", | |||
"softmax直白来说就是将原来输出是$[3,1,-3]$通过softmax函数作用,就映射成为(0,1)的值,而这些值的累和为1(满足概率的性质),那么我们就可以将它理解成概率,在最后选取输出结点的时候,我们就可以选取概率最大(也就是值对应最大的)结点,作为我们的预测目标!\n" | |||
"softmax直白来说就是将原来输出是$[3,1,-3]$通过softmax函数作用,就映射成为(0,1)的值,而这些值的累和为1(满足概率的性质),那么我们就可以将它理解成概率,在最后选取输出结点的时候,选取概率最大(也就是值对应最大的)结点,作为预测目标!\n" | |||
] | |||
}, | |||
{ | |||
@@ -78,12 +78,12 @@ | |||
"神经元的输出设为:\n", | |||
"\n", | |||
"$$\n", | |||
"z_i = sigmoid( \\sum_{j} w_{ij} x_{j} + b )\n", | |||
"z_i = \\sum_{j} w_{ij} x_{j} + w_b\n", | |||
"$$\n", | |||
"\n", | |||
"其中$W_{ij}$是第$i$个神经元的第$j$个权重,$b$是偏置。$z_i$表示该网络的第$i$个输出。\n", | |||
"其中$W_{ij}$是第$i$个神经元的第$j$个权重,$w_b$是偏置。$z_i$表示该网络的第$i$个输出。**请注意这里没有使用sigmoid等激活函数。**\n", | |||
"\n", | |||
"给这个输出加上一个softmax函数,那就变成了这样:\n", | |||
"给这个网络输出加上一个softmax函数,那就变成了这样:\n", | |||
"\n", | |||
"$$\n", | |||
"a_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n", | |||
@@ -108,7 +108,7 @@ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"以一个神经元的二类分类训练为例,进行两次实验(神经网络常用的激活函数为`sigmoid`函数,该实验也采用该函数):输入一个相同的样本数据x=1.0(该样本对应的实际分类y=0);两次实验各自随机初始化参数,从而在各自的第一次前向传播后得到不同的输出值,形成不同的代价(误差):\n", | |||
"以一个神经元的二类分类训练为例,进行两次实验(神经网络常用的激活函数为`sigmoid`函数,该实验也采用该函数):输入一个相同的样本数据$x=1.0$(该样本对应的实际分类$y=0$);两次实验各自随机初始化参数,从而在各自的第一次前向传播后得到不同的输出值,形成不同的代价(误差):\n", | |||
"\n", | |||
"\n", | |||
"实验1:第一次输出值为0.82\n", | |||
@@ -143,7 +143,7 @@ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"## 2. 推导过程\n", | |||
"## 3. 推导过程\n", | |||
"\n", | |||
"首先,我们要明确一下我们要求什么,我们要求的是我们的$loss$对于神经元输出($z_i$)的梯度,即:\n", | |||
"\n", | |||
@@ -158,14 +158,26 @@ | |||
"$$\n", | |||
"\n", | |||
"有个人可能有疑问了,这里为什么是$a_j$而不是$a_i$,这里要看一下$softmax$的公式了,因为$softmax$公式的特性,它的分母包含了所有神经元的输出,所以,对于不等于$i$的其他输出里面,也包含着$z_i$,所有的$a$都要纳入到计算范围中,并且后面的计算可以看到需要分为$i = j$和$i \\ne j$两种情况求导。\n", | |||
"\n", | |||
"### 2.1 针对$a_j$的偏导\n", | |||
"\n" | |||
] | |||
}, | |||
{ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"### 3.1 针对$a_j$的偏导\n", | |||
"\n", | |||
"$$\n", | |||
"\\frac{\\partial C}{\\partial a_j} = \\frac{(\\partial -\\sum_j y_j ln a_j)}{\\partial a_j} = -\\sum_j y_j \\frac{1}{a_j}\n", | |||
"$$\n", | |||
"\n", | |||
"### 2.2 针对$z_i$的偏导\n", | |||
"\n" | |||
] | |||
}, | |||
{ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"### 3.2 针对$z_i$的偏导\n", | |||
"\n", | |||
"如果 $i=j$ :\n", | |||
"\n", | |||
@@ -188,8 +200,14 @@ | |||
"$$\n", | |||
"(\\frac{u}{v})' = \\frac{u'v - uv'}{v^2} \n", | |||
"$$\n", | |||
"\n", | |||
"### 2.3 整体的推导\n", | |||
"\n" | |||
] | |||
}, | |||
{ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"### 3.3 整体的推导\n", | |||
"\n", | |||
"\\begin{eqnarray}\n", | |||
"\\frac{\\partial C}{\\partial z_i} & = & (-\\sum_j y_j \\frac{1}{a_j} ) \\frac{\\partial a_j}{\\partial z_i} \\\\\n", | |||
@@ -211,7 +229,7 @@ | |||
"\n", | |||
"其中\n", | |||
"$$\n", | |||
"z_i = \\sum_{j} w_{ij} x_{j} + b\n", | |||
"z_i = \\sum_{j} w_{ij} x_{j} + w_b\n", | |||
"$$\n" | |||
] | |||
}, | |||
@@ -219,7 +237,7 @@ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"对于使用二次代价函数的更新方程为:\n", | |||
"最为对比,使用二次代价函数的更新方程为:\n", | |||
"\n", | |||
"$$\n", | |||
"\\delta_i = a_i (1-a_i) (y_i - a_i)\n", | |||
@@ -234,7 +252,7 @@ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"## 3. 问题\n", | |||
"## 4. 问题\n", | |||
"如何将本节所讲的softmax,交叉熵代价函数应用到上节所讲的BP方法中?" | |||
] | |||
}, | |||
@@ -245,6 +263,7 @@ | |||
"## 参考资料\n", | |||
"\n", | |||
"* [一文详解Softmax函数](https://zhuanlan.zhihu.com/p/105722023)\n", | |||
"* [损失函数:交叉熵详解](https://zhuanlan.zhihu.com/p/115277553)\n", | |||
"* [交叉熵代价函数(作用及公式推导)](https://blog.csdn.net/u014313009/article/details/51043064)\n", | |||
"* [手打例子一步一步带你看懂softmax函数以及相关求导过程](https://www.jianshu.com/p/ffa51250ba2e)\n", | |||
"* [简单易懂的softmax交叉熵损失函数求导](https://www.jianshu.com/p/c02a1fbffad6)" | |||
@@ -267,7 +286,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -1,4 +1,21 @@ | |||
# Neural networks
An artificial neural network (ANN), or simply neural network (NN), is a mathematical or computational model that mimics the structure and function of biological neural networks. A neural network computes through a large number of interconnected artificial neurons. In most cases it can change its internal structure based on external information, making it an adaptive system. Modern neural networks are nonlinear statistical data-modeling tools, commonly used to model complex relationships between inputs and outputs, or to explore patterns in data.

## Contents
* [Perceptron](1-Perceptron.ipynb)
* [Multilayer networks and backpropagation](2-mlp_bp.ipynb)
* [Softmax and cross-entropy](3-softmax_ce.ipynb)
## References | |||
* https://iamtrask.github.io/2015/07/12/basic-python-network/ | |||
* http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/ | |||
@@ -42,7 +42,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 2, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import torch\n", | |||
@@ -52,7 +54,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 3, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 创建一个 numpy ndarray\n", | |||
@@ -69,7 +73,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 9, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"pytorch_tensor1 = torch.tensor(numpy_tensor)\n", | |||
@@ -100,7 +106,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 5, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 如果 pytorch tensor 在 cpu 上\n", | |||
@@ -129,7 +137,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 7, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 第一种方式是定义 cuda 数据类型\n", | |||
@@ -160,7 +170,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 8, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"cpu_tensor = gpu_tensor.cpu()" | |||
@@ -716,7 +728,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -119,7 +119,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 4, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"z = torch.mean(torch.matmul(w, x) + b) # torch.matmul 是做矩阵乘法\n", | |||
@@ -275,7 +277,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 8, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"n.backward(torch.ones_like(n)) # 将 (w0, w1) 取成 (1, 1)" | |||
@@ -349,7 +353,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 18, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"y.backward(retain_graph=True) # 设置 retain_graph 为 True 来保留计算图" | |||
@@ -375,7 +381,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 20, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"y.backward() # 再做一次自动求导,这次不保留计算图" | |||
@@ -455,7 +463,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 10, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"x = torch.tensor([2, 3], dtype=torch.float, requires_grad=True)\n", | |||
@@ -553,7 +563,7 @@ | |||
], | |||
"metadata": { | |||
"kernelspec": { | |||
"display_name": "Python 3 (ipykernel)", | |||
"display_name": "Python 3", | |||
"language": "python", | |||
"name": "python3" | |||
}, | |||
@@ -567,7 +577,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.9.7" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -151,7 +151,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 3, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 转换成 Tensor\n", | |||
@@ -166,7 +168,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 4, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 构建线性回归模型\n", | |||
@@ -180,7 +184,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 5, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"y_ = linear_model(x_train)" | |||
@@ -275,7 +281,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 8, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 自动求导\n", | |||
@@ -305,7 +313,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 10, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 更新一次参数\n", | |||
@@ -542,7 +552,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 17, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 构建数据 x 和 y\n", | |||
@@ -582,7 +594,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 19, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 定义参数\n", | |||
@@ -670,7 +684,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 22, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 自动求导\n", | |||
@@ -702,7 +718,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 24, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"# 更新一下参数\n", | |||
@@ -853,7 +871,7 @@ | |||
], | |||
"metadata": { | |||
"kernelspec": { | |||
"display_name": "Python 3 (ipykernel)", | |||
"display_name": "Python 3", | |||
"language": "python", | |||
"name": "python3" | |||
}, | |||
@@ -867,7 +885,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.9.7" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -11,8 +11,20 @@ PyTorch's concise design makes it very easy to get started. This part, before going deeper into P
 | |||
## Contents
- [Tensor](1-tensor.ipynb) | |||
- [autograd](2-autograd.ipynb) | |||
- [linear-regression](3-linear-regression.ipynb) | |||
- [logistic-regression](4-logistic-regression.ipynb) | |||
- [nn-sequential-module](5-nn-sequential-module.ipynb) | |||
- [deep-nn](6-deep-nn.ipynb) | |||
- [param_initialize](7-param_initialize.ipynb) | |||
- [optim/sgd](optimizer/6_1-sgd.ipynb) | |||
- [optim/adam](optimizer/6_6-adam.ipynb) | |||
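For a first taste of the API these notebooks cover, the toy snippet below creates a tensor and differentiates through it with autograd (an illustrative example, not taken from any one notebook):

```python
import torch

# differentiate y = sum(x^2) at x = [1, 2, 3]
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)  # tensor([2., 4., 6.]), i.e. dy/dx = 2x
```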
## References | |||
* [code of book "Learn Deep Learning with PyTorch"](https://github.com/L1aoXingyu/code-of-learn-deep-learning-with-pytorch) | |||
* [PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation](https://github.com/chenyuntc/pytorch-book) | |||
* [Awesome-Pytorch-list](https://github.com/bharathgs/Awesome-pytorch-list) | |||
@@ -47,7 +47,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 1, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"def adam(parameters, vs, sqrs, lr, t, beta1=0.9, beta2=0.999):\n", | |||
@@ -63,7 +65,9 @@ | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 2, | |||
"metadata": {}, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import numpy as np\n", | |||
@@ -267,7 +271,7 @@ | |||
], | |||
"metadata": { | |||
"kernelspec": { | |||
"display_name": "Python 3 (ipykernel)", | |||
"display_name": "Python 3", | |||
"language": "python", | |||
"name": "python3" | |||
}, | |||
@@ -281,7 +285,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.9.7" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
@@ -0,0 +1,157 @@ | |||
{ | |||
"cells": [ | |||
{ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"# LeNet5\n", | |||
"\n", | |||
"LeNet 诞生于 1994 年,是最早的卷积神经网络之一,并且推动了深度学习领域的发展。自从 1988 年开始,在多次迭代后这个开拓性成果被命名为 LeNet5。LeNet5 的架构的提出是基于如下的观点:图像的特征分布在整张图像上,通过带有可学习参数的卷积,从而有效的减少了参数数量,能够在多个位置上提取相似特征。\n", | |||
"\n", | |||
"在LeNet5提出的时候,没有 GPU 帮助训练,甚至 CPU 的速度也很慢,因此,LeNet5的规模并不大。其包含七个处理层,每一层都包含可训练参数(权重),当时使用的输入数据是 $32 \\times 32$ 像素的图像。LeNet-5 这个网络虽然很小,但是它包含了深度学习的基本模块:卷积层,池化层,全连接层。它是其他深度学习模型的基础,这里对LeNet5进行深入分析和讲解,通过实例分析,加深对与卷积层和池化层的理解。" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 1, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import sys\n", | |||
"sys.path.append('..')\n", | |||
"\n", | |||
"import numpy as np\n", | |||
"import torch\n", | |||
"from torch import nn\n", | |||
"from torch.autograd import Variable\n", | |||
"from torchvision.datasets import CIFAR10\n", | |||
"from torchvision import transforms as tfs" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 1, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import torch\n", | |||
"from torch import nn\n", | |||
"\n", | |||
"lenet5 = nn.Sequential(\n", | |||
" nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Sigmoid(),\n", | |||
" nn.AvgPool2d(kernel_size=2, stride=2),\n", | |||
" nn.Conv2d(6, 16, kernel_size=5), nn.Sigmoid(),\n", | |||
" nn.AvgPool2d(kernel_size=2, stride=2),\n", | |||
" nn.Flatten(),\n", | |||
" nn.Linear(16 * 5 * 5, 120), nn.Sigmoid(),\n", | |||
" nn.Linear(120, 84), nn.Sigmoid(),\n", | |||
" nn.Linear(84, 10) )" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": null, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"from utils import train\n", | |||
"\n", | |||
"# 使用数据增强\n", | |||
"def train_tf(x):\n", | |||
" im_aug = tfs.Compose([\n", | |||
" tfs.Resize(224),\n", | |||
" tfs.ToTensor(),\n", | |||
" tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])\n", | |||
" ])\n", | |||
" x = im_aug(x)\n", | |||
" return x\n", | |||
"\n", | |||
"def test_tf(x):\n", | |||
" im_aug = tfs.Compose([\n", | |||
" tfs.Resize(224),\n", | |||
" tfs.ToTensor(),\n", | |||
" tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])\n", | |||
" ])\n", | |||
" x = im_aug(x)\n", | |||
" return x\n", | |||
" \n", | |||
"train_set = CIFAR10('../../data', train=True, transform=train_tf)\n", | |||
"train_data = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)\n", | |||
"test_set = CIFAR10('../../data', train=False, transform=test_tf)\n", | |||
"test_data = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)\n", | |||
"\n", | |||
"net = lenet5\n", | |||
"optimizer = torch.optim.SGD(net.parameters(), lr=1e-1)\n", | |||
"criterion = nn.CrossEntropyLoss()" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": null, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"(l_train_loss, l_train_acc, l_valid_loss, l_valid_acc) = train(net, \n", | |||
" train_data, test_data, \n", | |||
" 20, \n", | |||
" optimizer, criterion,\n", | |||
" use_cuda=False)" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": null, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import matplotlib.pyplot as plt\n", | |||
"%matplotlib inline\n", | |||
"\n", | |||
"plt.plot(l_train_loss, label='train')\n", | |||
"plt.plot(l_valid_loss, label='valid')\n", | |||
"plt.xlabel('epoch')\n", | |||
"plt.legend(loc='best')\n", | |||
"plt.savefig('fig-res-lenet5-train-validate-loss.pdf')\n", | |||
"plt.show()\n", | |||
"\n", | |||
"plt.plot(l_train_acc, label='train')\n", | |||
"plt.plot(l_valid_acc, label='valid')\n", | |||
"plt.xlabel('epoch')\n", | |||
"plt.legend(loc='best')\n", | |||
"plt.savefig('fig-res-lenet5-train-validate-acc.pdf')\n", | |||
"plt.show()" | |||
] | |||
} | |||
], | |||
"metadata": { | |||
"kernelspec": { | |||
"display_name": "Python 3", | |||
"language": "python", | |||
"name": "python3" | |||
}, | |||
"language_info": { | |||
"codemirror_mode": { | |||
"name": "ipython", | |||
"version": 3 | |||
}, | |||
"file_extension": ".py", | |||
"mimetype": "text/x-python", | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
"nbformat_minor": 2 | |||
} |
@@ -0,0 +1,99 @@ | |||
{ | |||
"cells": [ | |||
{ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"# AlexNet\n", | |||
"\n", | |||
"\n", | |||
"第一个典型的卷积神经网络是 LeNet5 ,但是第一个开启深度学习的网络却是 AlexNet,这个网络在2012年的ImageNet竞赛中取得冠军。这网络提出了深度学习常用的技术:ReLU和Dropout。AlexNet网络结构在整体上类似于LeNet,都是先卷积然后在全连接,但在细节上有很大不同,AlexNet更为复杂,Alexnet模型由5个卷积层和3个池化Pooling层,其中还有3个全连接层构成,共有$6 \\times 10^7$个参数和65000个神经元,最终的输出层是1000通道的Softmax。AlexNet 跟 LeNet 结构类似,但使⽤了更多的卷积层和更⼤的参数空间来拟合⼤规模数据集 ImageNet,它是浅层神经⽹络和深度神经⽹络的分界线。\n" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": null, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import torch.nn as nn\n", | |||
"import torch\n", | |||
"\n", | |||
"class AlexNet(nn.Module):\n", | |||
" def __init__(self, num_classes=1000, init_weights=False): \n", | |||
" super(AlexNet, self).__init__()\n", | |||
" self.features = nn.Sequential( \n", | |||
" nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), \n", | |||
" nn.ReLU(inplace=True), #inplace 可以载入更大模型\n", | |||
" nn.MaxPool2d(kernel_size=3, stride=2), \n", | |||
"\n", | |||
" nn.Conv2d(96, 256, kernel_size=5, padding=2),\n", | |||
" nn.ReLU(inplace=True),\n", | |||
" nn.MaxPool2d(kernel_size=3, stride=2),\n", | |||
"\n", | |||
" nn.Conv2d(256, 384, kernel_size=3, padding=1),\n", | |||
" nn.ReLU(inplace=True),\n", | |||
"\n", | |||
" nn.Conv2d(384, 384, kernel_size=3, padding=1),\n", | |||
" nn.ReLU(inplace=True),\n", | |||
"\n", | |||
" nn.Conv2d(384, 256, kernel_size=3, padding=1),\n", | |||
" nn.ReLU(inplace=True),\n", | |||
" nn.MaxPool2d(kernel_size=3, stride=2),\n", | |||
" )\n", | |||
" self.classifier = nn.Sequential(\n", | |||
" nn.Dropout(p=0.5),\n", | |||
" nn.Linear(256*6*6, 4096), #全链接\n", | |||
" nn.ReLU(inplace=True),\n", | |||
" nn.Dropout(p=0.5),\n", | |||
" nn.Linear(4096, 4096),\n", | |||
" nn.ReLU(inplace=True),\n", | |||
" nn.Linear(4096, num_classes),\n", | |||
" )\n", | |||
" if init_weights:\n", | |||
" self._initialize_weights()\n", | |||
"\n", | |||
" def forward(self, x):\n", | |||
" x = self.features(x)\n", | |||
" x = torch.flatten(x, start_dim=1) #展平或者view()\n", | |||
" x = self.classifier(x)\n", | |||
" return x\n", | |||
"\n", | |||
" def _initialize_weights(self):\n", | |||
" for m in self.modules():\n", | |||
" if isinstance(m, nn.Conv2d):\n", | |||
" #何教授方法\n", | |||
" nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') \n", | |||
" if m.bias is not None:\n", | |||
" nn.init.constant_(m.bias, 0)\n", | |||
" elif isinstance(m, nn.Linear):\n", | |||
" #正态分布赋值\n", | |||
" nn.init.normal_(m.weight, 0, 0.01) \n", | |||
" nn.init.constant_(m.bias, 0)" | |||
] | |||
} | |||
], | |||
"metadata": { | |||
"kernelspec": { | |||
"display_name": "Python 3", | |||
"language": "python", | |||
"name": "python3" | |||
}, | |||
"language_info": { | |||
"codemirror_mode": { | |||
"name": "ipython", | |||
"version": 3 | |||
}, | |||
"file_extension": ".py", | |||
"mimetype": "text/x-python", | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, | |||
"nbformat_minor": 2 | |||
} |
@@ -48,7 +48,7 @@ | |||
"VGG网络的特点:\n", | |||
"* 小卷积核和连续的卷积层: VGG中使用的都是3×3卷积核,并且使用了连续多个卷积层。这样做的好处主要有,\n", | |||
" - 使用连续的的多个小卷积核(3×3),来代替一个大的卷积核(例如(5×5)。使用小的卷积核的问题是,其感受野必然变小。所以,VGG中就使用连续的3×3卷积核,来增大感受野。VGG认为2个连续的3×3卷积核能够替代一个5×5卷积核,三个连续的3×3能够代替一个7×7。\n", | |||
" - 小卷积核的参数较少。3个3×3的卷积核参数为3×3×=27,而一个7×7的卷积核参数为7×7=49\n", | |||
" - 小卷积核的参数较少。3个3×3的卷积核参数为3×3×3=27,而一个7×7的卷积核参数为7×7=49\n", | |||
" - 由于每个卷积层都有一个非线性的激活函数,多个卷积层增加了非线性映射。\n", | |||
"* 小池化核,使用的是2×2\n", | |||
"* 通道数更多,特征度更宽: 每个通道代表着一个FeatureMap,更多的通道数表示更丰富的图像特征。VGG网络第一层的通道数为64,后面每层都进行了翻倍,最多到512个通道,通道数的增加,使得更多的信息可以被提取出来。\n", | |||
@@ -64,7 +64,7 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 2, | |||
"execution_count": 1, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:01:51.296457Z", | |||
@@ -81,7 +81,8 @@ | |||
"import torch\n", | |||
"from torch import nn\n", | |||
"from torch.autograd import Variable\n", | |||
"from torchvision.datasets import CIFAR10" | |||
"from torchvision.datasets import CIFAR10\n", | |||
"from torchvision import transforms as tfs" | |||
] | |||
}, | |||
{ | |||
@@ -98,7 +99,7 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 3, | |||
"execution_count": 2, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:01:51.312500Z", | |||
@@ -108,10 +109,8 @@ | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"def vgg_block(num_convs, in_channels, out_channels):\n", | |||
" net = [nn.Conv2d(in_channels, out_channels, \n", | |||
" kernel_size=3, padding=1), \n", | |||
" nn.ReLU(True)] # 定义第一层\n", | |||
"def VGG_Block(num_convs, in_channels, out_channels):\n", | |||
" net = [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1), nn.ReLU(True)] # 定义第一层\n", | |||
"\n", | |||
" for i in range(num_convs-1): # 定义后面的很多层\n", | |||
" net.append(nn.Conv2d(out_channels, out_channels, \n", | |||
@@ -131,7 +130,7 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 4, | |||
"execution_count": 3, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T08:20:40.819497Z", | |||
@@ -156,13 +155,13 @@ | |||
} | |||
], | |||
"source": [ | |||
"block_demo = vgg_block(3, 64, 128)\n", | |||
"block_demo = VGG_Block(3, 64, 128)\n", | |||
"print(block_demo)" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 5, | |||
"execution_count": 4, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T07:52:04.632406Z", | |||
@@ -196,7 +195,7 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 6, | |||
"execution_count": 5, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:01:54.497712Z", | |||
@@ -206,12 +205,12 @@ | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"def vgg_stack(num_convs, channels):\n", | |||
"def VGG_Stack(num_convs, channels):\n", | |||
" net = []\n", | |||
" for n, c in zip(num_convs, channels):\n", | |||
" in_c = c[0]\n", | |||
" out_c = c[1]\n", | |||
" net.append(vgg_block(n, in_c, out_c))\n", | |||
" net.append(VGG_Block(n, in_c, out_c))\n", | |||
" return nn.Sequential(*net)" | |||
] | |||
}, | |||
@@ -224,7 +223,7 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 7, | |||
"execution_count": 6, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:01:55.149378Z", | |||
@@ -283,7 +282,7 @@ | |||
} | |||
], | |||
"source": [ | |||
"vgg_net = vgg_stack((2, 2, 3, 3, 3), ((3, 64), (64, 128), (128, 256), (256, 512), (512, 512)))\n", | |||
"vgg_net = VGG_Stack((2, 2, 3, 3, 3), ((3, 64), (64, 128), (128, 256), (256, 512), (512, 512)))\n", | |||
"print(vgg_net)" | |||
] | |||
}, | |||
@@ -291,12 +290,12 @@ | |||
"cell_type": "markdown", | |||
"metadata": {}, | |||
"source": [ | |||
"可以看到网络结构中有个 5 个 最大池化,说明图片的大小会减少 5 倍。可以验证一下,输入一张 256 x 256 的图片看看结果是什么" | |||
"可以看到网络结构中有个 5 个 最大池化,说明图片的大小会减少 5 倍。可以验证一下,输入一张 224 x 224 的图片看看结果是什么" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 8, | |||
"execution_count": 7, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T08:52:44.049650Z", | |||
@@ -308,12 +307,12 @@ | |||
"name": "stdout", | |||
"output_type": "stream", | |||
"text": [ | |||
"torch.Size([1, 512, 8, 8])\n" | |||
"torch.Size([1, 512, 7, 7])\n" | |||
] | |||
} | |||
], | |||
"source": [ | |||
"test_x = Variable(torch.zeros(1, 3, 256, 256))\n", | |||
"test_x = Variable(torch.zeros(1, 3, 224, 224))\n", | |||
"test_y = vgg_net(test_x)\n", | |||
"print(test_y.shape)" | |||
] | |||
@@ -327,7 +326,7 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 9, | |||
"execution_count": 8, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:01:57.323034Z", | |||
@@ -337,14 +336,14 @@ | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"class vgg(nn.Module):\n", | |||
"class VGG_Net(nn.Module):\n", | |||
" def __init__(self):\n", | |||
" super(vgg, self).__init__()\n", | |||
" self.feature = vgg_net\n", | |||
" super(VGG_Net, self).__init__()\n", | |||
" self.feature = VGG_Stack((2, 2, 3, 3, 3), ((3, 64), (64, 128), (128, 256), (256, 512), (512, 512)))\n", | |||
" self.fc = nn.Sequential(\n", | |||
" nn.Linear(512, 100),\n", | |||
" nn.Linear(512*7*7, 4096),\n", | |||
" nn.ReLU(True),\n", | |||
" nn.Linear(100, 10)\n", | |||
" nn.Linear(4096, 10)\n", | |||
" )\n", | |||
" def forward(self, x):\n", | |||
" x = self.feature(x)\n", | |||
@@ -362,74 +361,88 @@ | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 6, | |||
"execution_count": 9, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:01:59.921373Z", | |||
"start_time": "2017-12-22T09:01:58.709531Z" | |||
}, | |||
"collapsed": true | |||
} | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"from utils import train\n", | |||
"\n", | |||
"def data_tf(x):\n", | |||
" x = np.array(x, dtype='float32') / 255\n", | |||
" x = (x - 0.5) / 0.5 # 标准化,这个技巧之后会讲到\n", | |||
" x = x.transpose((2, 0, 1)) # 将 channel 放到第一维,只是 pytorch 要求的输入方式\n", | |||
" x = torch.from_numpy(x)\n", | |||
"# 使用数据增强\n", | |||
"def train_tf(x):\n", | |||
" im_aug = tfs.Compose([\n", | |||
" tfs.Resize(224),\n", | |||
" tfs.ToTensor(),\n", | |||
" tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])\n", | |||
" ])\n", | |||
" x = im_aug(x)\n", | |||
" return x\n", | |||
"\n", | |||
"def test_tf(x):\n", | |||
" im_aug = tfs.Compose([\n", | |||
" tfs.Resize(224),\n", | |||
" tfs.ToTensor(),\n", | |||
" tfs.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])\n", | |||
" ])\n", | |||
" x = im_aug(x)\n", | |||
" return x\n", | |||
" \n", | |||
"train_set = CIFAR10('../../data', train=True, transform=data_tf)\n", | |||
"train_set = CIFAR10('../../data', train=True, transform=train_tf)\n", | |||
"train_data = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)\n", | |||
"test_set = CIFAR10('../../data', train=False, transform=data_tf)\n", | |||
"test_set = CIFAR10('../../data', train=False, transform=test_tf)\n", | |||
"test_data = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)\n", | |||
"\n", | |||
"net = vgg()\n", | |||
"net = VGG_Net()\n", | |||
"optimizer = torch.optim.SGD(net.parameters(), lr=1e-1)\n", | |||
"criterion = nn.CrossEntropyLoss()" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": 7, | |||
"execution_count": null, | |||
"metadata": { | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T09:12:46.868967Z", | |||
"start_time": "2017-12-22T09:01:59.924086Z" | |||
} | |||
}, | |||
"outputs": [ | |||
{ | |||
"name": "stdout", | |||
"output_type": "stream", | |||
"text": [ | |||
"Epoch 0. Train Loss: 2.303118, Train Acc: 0.098186, Valid Loss: 2.302944, Valid Acc: 0.099585, Time 00:00:32\n", | |||
"Epoch 1. Train Loss: 2.303085, Train Acc: 0.096907, Valid Loss: 2.302762, Valid Acc: 0.100969, Time 00:00:33\n", | |||
"Epoch 2. Train Loss: 2.302916, Train Acc: 0.097287, Valid Loss: 2.302740, Valid Acc: 0.099585, Time 00:00:33\n", | |||
"Epoch 3. Train Loss: 2.302395, Train Acc: 0.102042, Valid Loss: 2.297652, Valid Acc: 0.108782, Time 00:00:32\n", | |||
"Epoch 4. Train Loss: 2.079523, Train Acc: 0.202026, Valid Loss: 1.868179, Valid Acc: 0.255736, Time 00:00:31\n", | |||
"Epoch 5. Train Loss: 1.781262, Train Acc: 0.307625, Valid Loss: 1.735122, Valid Acc: 0.323279, Time 00:00:31\n", | |||
"Epoch 6. Train Loss: 1.565095, Train Acc: 0.400975, Valid Loss: 1.463914, Valid Acc: 0.449565, Time 00:00:31\n", | |||
"Epoch 7. Train Loss: 1.360450, Train Acc: 0.495225, Valid Loss: 1.374488, Valid Acc: 0.490803, Time 00:00:31\n", | |||
"Epoch 8. Train Loss: 1.144470, Train Acc: 0.585758, Valid Loss: 1.384803, Valid Acc: 0.524624, Time 00:00:31\n", | |||
"Epoch 9. Train Loss: 0.954556, Train Acc: 0.659287, Valid Loss: 1.113850, Valid Acc: 0.609968, Time 00:00:32\n", | |||
"Epoch 10. Train Loss: 0.801952, Train Acc: 0.718131, Valid Loss: 1.080254, Valid Acc: 0.639933, Time 00:00:31\n", | |||
"Epoch 11. Train Loss: 0.665018, Train Acc: 0.765945, Valid Loss: 0.916277, Valid Acc: 0.698972, Time 00:00:31\n", | |||
"Epoch 12. Train Loss: 0.547411, Train Acc: 0.811241, Valid Loss: 1.030948, Valid Acc: 0.678896, Time 00:00:32\n", | |||
"Epoch 13. Train Loss: 0.442779, Train Acc: 0.846228, Valid Loss: 0.869791, Valid Acc: 0.732496, Time 00:00:32\n", | |||
"Epoch 14. Train Loss: 0.357279, Train Acc: 0.875440, Valid Loss: 1.233777, Valid Acc: 0.671677, Time 00:00:31\n", | |||
"Epoch 15. Train Loss: 0.285171, Train Acc: 0.900096, Valid Loss: 0.852879, Valid Acc: 0.765131, Time 00:00:32\n", | |||
"Epoch 16. Train Loss: 0.222431, Train Acc: 0.923374, Valid Loss: 1.848096, Valid Acc: 0.614023, Time 00:00:31\n", | |||
"Epoch 17. Train Loss: 0.174834, Train Acc: 0.939478, Valid Loss: 1.137286, Valid Acc: 0.728639, Time 00:00:31\n", | |||
"Epoch 18. Train Loss: 0.144375, Train Acc: 0.950587, Valid Loss: 0.907310, Valid Acc: 0.776800, Time 00:00:31\n", | |||
"Epoch 19. Train Loss: 0.115332, Train Acc: 0.960878, Valid Loss: 1.009886, Valid Acc: 0.761175, Time 00:00:31\n" | |||
] | |||
} | |||
], | |||
"outputs": [], | |||
"source": [ | |||
"train(net, train_data, test_data, 20, optimizer, criterion)" | |||
"(l_train_loss, l_train_acc, l_valid_loss, l_valid_acc) = train(net, \n", | |||
" train_data, test_data, \n", | |||
" 20, \n", | |||
" optimizer, criterion,\n", | |||
" use_cuda=False)" | |||
] | |||
}, | |||
{ | |||
"cell_type": "code", | |||
"execution_count": null, | |||
"metadata": { | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"import matplotlib.pyplot as plt\n", | |||
"%matplotlib inline\n", | |||
"\n", | |||
"plt.plot(l_train_loss, label='train')\n", | |||
"plt.plot(l_valid_loss, label='valid')\n", | |||
"plt.xlabel('epoch')\n", | |||
"plt.legend(loc='best')\n", | |||
"plt.savefig('fig-res-vgg-train-validate-loss.pdf')\n", | |||
"plt.show()\n", | |||
"\n", | |||
"plt.plot(l_train_acc, label='train')\n", | |||
"plt.plot(l_valid_acc, label='valid')\n", | |||
"plt.xlabel('epoch')\n", | |||
"plt.legend(loc='best')\n", | |||
"plt.savefig('fig-res-vgg-train-validate-acc.pdf')\n", | |||
"plt.show()" | |||
] | |||
}, | |||
{ | |||
@@ -465,7 +478,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, |
@@ -47,7 +47,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T12:56:06.772059Z", | |||
"start_time": "2017-12-22T12:56:06.766027Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -69,7 +70,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T12:47:49.222432Z", | |||
"start_time": "2017-12-22T12:47:49.217940Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -85,7 +87,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T13:14:02.429145Z", | |||
"start_time": "2017-12-22T13:14:02.383322Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -203,13 +206,14 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T13:27:46.099404Z", | |||
"start_time": "2017-12-22T13:27:45.986235Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"class ResNet(nn.Module):\n", | |||
" def __init__(self, in_channel, num_classes, verbose=False):\n", | |||
" super(resnet, self).__init__()\n", | |||
" super(ResNet, self).__init__()\n", | |||
" self.verbose = verbose\n", | |||
" \n", | |||
" self.block1 = nn.Conv2d(in_channel, 64, 7, 2)\n", | |||
@@ -290,7 +294,7 @@ | |||
} | |||
], | |||
"source": [ | |||
"test_net = resnet(3, 10, True)\n", | |||
"test_net = ResNet(3, 10, True)\n", | |||
"test_x = Variable(torch.zeros(1, 3, 96, 96))\n", | |||
"test_y = test_net(test_x)\n", | |||
"print('output: {}'.format(test_y.shape))" | |||
@@ -414,7 +418,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, |
@@ -45,7 +45,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T15:38:31.113030Z", | |||
"start_time": "2017-12-22T15:38:30.612922Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -73,11 +74,12 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T15:38:31.121249Z", | |||
"start_time": "2017-12-22T15:38:31.115369Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"def conv_block(in_channel, out_channel):\n", | |||
"def Conv_Block(in_channel, out_channel):\n", | |||
" layer = nn.Sequential(\n", | |||
" nn.BatchNorm2d(in_channel),\n", | |||
" nn.ReLU(True),\n", | |||
@@ -100,17 +102,18 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T15:38:31.145274Z", | |||
"start_time": "2017-12-22T15:38:31.123363Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
"class dense_block(nn.Module):\n", | |||
"class Dense_Block(nn.Module):\n", | |||
" def __init__(self, in_channel, growth_rate, num_layers):\n", | |||
" super(dense_block, self).__init__()\n", | |||
" super(Dense_Block, self).__init__()\n", | |||
" block = []\n", | |||
" channel = in_channel\n", | |||
" for i in range(num_layers):\n", | |||
" block.append(conv_block(channel, growth_rate))\n", | |||
" block.append(Conv_Block(channel, growth_rate))\n", | |||
" channel += growth_rate\n", | |||
" \n", | |||
" self.net = nn.Sequential(*block)\n", | |||
@@ -170,7 +173,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T15:38:31.222120Z", | |||
"start_time": "2017-12-22T15:38:31.215770Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -234,7 +238,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T15:38:31.318822Z", | |||
"start_time": "2017-12-22T15:38:31.236857Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -305,7 +310,8 @@ | |||
"ExecuteTime": { | |||
"end_time": "2017-12-22T15:38:32.894729Z", | |||
"start_time": "2017-12-22T15:38:31.656356Z" | |||
} | |||
}, | |||
"collapsed": true | |||
}, | |||
"outputs": [], | |||
"source": [ | |||
@@ -403,7 +409,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, |
@@ -601,7 +601,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, |
@@ -403,7 +403,7 @@ | |||
"name": "python", | |||
"nbconvert_exporter": "python", | |||
"pygments_lexer": "ipython3", | |||
"version": "3.7.9" | |||
"version": "3.5.4" | |||
} | |||
}, | |||
"nbformat": 4, |
@@ -13,16 +13,22 @@ def get_acc(output, label): | |||
return num_correct / total | |||
def train(net, train_data, valid_data, num_epochs, optimizer, criterion): | |||
if torch.cuda.is_available(): | |||
def train(net, train_data, valid_data, num_epochs, optimizer, criterion, use_cuda=True): | |||
if use_cuda and torch.cuda.is_available(): | |||
net = net.cuda() | |||
l_train_loss = [] | |||
l_train_acc = [] | |||
l_valid_loss = [] | |||
l_valid_acc = [] | |||
prev_time = datetime.now() | |||
for epoch in range(num_epochs): | |||
train_loss = 0 | |||
train_acc = 0 | |||
net = net.train() | |||
for im, label in train_data: | |||
if torch.cuda.is_available(): | |||
if use_cuda and torch.cuda.is_available(): | |||
im = Variable(im.cuda()) # (bs, 3, h, w) | |||
label = Variable(label.cuda()) # (bs, h, w) | |||
else: | |||
@@ -50,7 +56,7 @@ def train(net, train_data, valid_data, num_epochs, optimizer, criterion): | |||
valid_acc = 0 | |||
net = net.eval() | |||
for im, label in valid_data: | |||
if torch.cuda.is_available(): | |||
if use_cuda and torch.cuda.is_available(): | |||
im = Variable(im.cuda(), volatile=True) | |||
label = Variable(label.cuda(), volatile=True) | |||
else: | |||
@@ -65,13 +71,21 @@ def train(net, train_data, valid_data, num_epochs, optimizer, criterion): | |||
% (epoch, train_loss / len(train_data), | |||
train_acc / len(train_data), valid_loss / len(valid_data), | |||
valid_acc / len(valid_data))) | |||
l_valid_acc.append(valid_acc / len(valid_data)) | |||
l_valid_loss.append(valid_loss / len(valid_data)) | |||
else: | |||
epoch_str = ("Epoch %d. Train Loss: %f, Train Acc: %f, " % | |||
(epoch, train_loss / len(train_data), | |||
train_acc / len(train_data))) | |||
l_train_acc.append(train_acc / len(train_data)) | |||
l_train_loss.append(train_loss / len(train_data)) | |||
prev_time = cur_time | |||
print(epoch_str + time_str) | |||
return (l_train_loss, l_train_acc, l_valid_loss, l_valid_acc) | |||
def conv3x3(in_channel, out_channel, stride=1): | |||
return nn.Conv2d( | |||
@@ -19,7 +19,36 @@ | |||
 | |||
## Contents
- CNN | |||
- [CNN Introduction](1_CNN/CNN_Introduction.pptx) | |||
- [CNN simple demo](../demo_code/3_CNN_MNIST.py) | |||
- [Basic of Conv](1_CNN/01-basic_conv.ipynb) | |||
- [LeNet5](1_CNN/02-LeNet5.ipynb) | |||
- [AlexNet](1_CNN/03-AlexNet.ipynb) | |||
- [VGG Network](1_CNN/04-vgg.ipynb) | |||
- [GoogleNet](1_CNN/05-googlenet.ipynb) | |||
- [ResNet](1_CNN/06-resnet.ipynb) | |||
- [DenseNet](1_CNN/07-densenet.ipynb) | |||
- [Batch Normalization](1_CNN/08-batch-normalization.ipynb) | |||
- [Learning Rate Decay](1_CNN/09-lr-decay.ipynb) | |||
- [Regularization](1_CNN/10-regularization.ipynb) | |||
- [Data Augumentation](1_CNN/11-data-augumentation.ipynb) | |||
- RNN | |||
- [rnn/pytorch-rnn](2_RNN/pytorch-rnn.ipynb) | |||
- [rnn/rnn-for-image](2_RNN/rnn-for-image.ipynb) | |||
- [rnn/lstm-time-series](2_RNN/time-series/lstm-time-series.ipynb) | |||
- GAN | |||
- [gan/autoencoder](3_GAN/autoencoder.ipynb) | |||
- [gan/vae](3_GAN/vae.ipynb) | |||
- [gan/gan](3_GAN/gan.ipynb) | |||
## References
* [深度学习 – Deep learning](https://easyai.tech/ai-definition/deep-learning/) | |||
* [深度学习](https://www.jiqizhixin.com/graph/technologies/01946acc-d031-4c0e-909c-f062643b7273) | |||
@@ -52,15 +52,17 @@ | |||
- CNN | |||
- [CNN Introduction](7_deep_learning/1_CNN/CNN_Introduction.pptx) | |||
- [CNN simple demo](demo_code/3_CNN_MNIST.py) | |||
- [Basic of Conv](7_deep_learning/1_CNN/1-basic_conv.ipynb) | |||
- [VGG Network](7_deep_learning/1_CNN/2-vgg.ipynb) | |||
- [GoogleNet](7_deep_learning/1_CNN/3-googlenet.ipynb) | |||
- [ResNet](7_deep_learning/1_CNN/4-resnet.ipynb) | |||
- [DenseNet](7_deep_learning/1_CNN/5-densenet.ipynb) | |||
- [Batch Normalization](7_deep_learning/1_CNN/6-batch-normalization.ipynb) | |||
- [Learning Rate Decay](7_deep_learning/2_CNN/7-lr-decay.ipynb) | |||
- [Regularization](7_deep_learning/1_CNN/8-regularization.ipynb) | |||
- [Data Augumentation](7_deep_learning/1_CNN/9-data-augumentation.ipynb) | |||
- [Basic of Conv](7_deep_learning/1_CNN/01-basic_conv.ipynb) | |||
- [LeNet5](7_deep_learning/1_CNN/02-LeNet5.ipynb) | |||
- [AlexNet](7_deep_learning/1_CNN/03-AlexNet.ipynb) | |||
- [VGG Network](7_deep_learning/1_CNN/04-vgg.ipynb) | |||
- [GoogleNet](7_deep_learning/1_CNN/05-googlenet.ipynb) | |||
- [ResNet](7_deep_learning/1_CNN/06-resnet.ipynb) | |||
- [DenseNet](7_deep_learning/1_CNN/07-densenet.ipynb) | |||
- [Batch Normalization](7_deep_learning/1_CNN/08-batch-normalization.ipynb) | |||
- [Learning Rate Decay](7_deep_learning/1_CNN/09-lr-decay.ipynb) | |||
- [Regularization](7_deep_learning/1_CNN/10-regularization.ipynb) | |||
- [Data Augumentation](7_deep_learning/1_CNN/11-data-augumentation.ipynb) | |||
- RNN | |||
- [rnn/pytorch-rnn](7_deep_learning/2_RNN/pytorch-rnn.ipynb) | |||
- [rnn/rnn-for-image](7_deep_learning/2_RNN/rnn-for-image.ipynb) | |||
@@ -81,21 +83,19 @@ | |||
## 3. References
## 3. [References](References.md)
* [Tutorials and code](References.md)
* Quick-reference materials
  * [Collected learning references](References.md)
  * [Some cheat sheets](references_tips/cheatsheet)
* Machine learning tips and tricks
  * [Confusion Matrix](references_tips/confusion_matrix.ipynb)
  * [Datasets](references_tips/datasets.ipynb)
  * [Practical advice for building deep neural networks](references_tips/构建深度神经网络的一些实战建议.md)
  * [Intro to Deep Learning](references_tips/Intro_to_Deep_Learning.pdf)
* Python tips
  * [Installing the Python environment](references_tips/InstallPython.md)
  * [Python tips](references_tips/python)
* [Git tutorial](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/git/README.md)
* [Markdown tutorial](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/markdown/README.md)
@@ -1,11 +1,27 @@ | |||
# References
You can find learning materials that suit you in the list below. Although quite a few are listed, it is best to pick one for in-depth reading and practice; once you have practiced to a certain level, look at the other materials to make up for the gaps any single resource may have.
A longer list is maintained at https://gitee.com/pi-lab/pilab_research_fields/blob/master/references/ML_References.md
## 1. Tutorials and code
## References
### 1.1 Tutorials
* [《动手学深度学习》- PyTorch版本](https://tangshusen.me/Dive-into-DL-PyTorch/#/)
* [Introduction — Neuromatch Academy: Deep Learning](https://deeplearning.neuromatch.io/tutorials/intro.html)
### 1.2 Code
* [Code for 《统计学习方法》](https://gitee.com/afishoutis/MachineLearning)
* [PyTorch implementation of 《统计学习方法》](https://github.com/fengdu78/lihang-code)
* [pytorch-cifar100](https://github.com/weiaicunzai/pytorch-cifar100): implementations of ResNet, DenseNet, VGG, GoogleNet, InceptionV3, InceptionV4, Inception-ResNetv2, Xception, Resnet In Resnet, ResNext, ShuffleNet, ShuffleNetv2, MobileNet, MobileNetv2, SqueezeNet, NasNet, Residual Attention Network, SENet, WideResNet
* [Attention: xmu-xiaoma666/External-Attention-pytorch: Pytorch implementation of various Attention Mechanisms, MLP, Re-parameter, Convolution, which is helpful to further understand papers.⭐⭐⭐ (github.com)](https://github.com/xmu-xiaoma666/External-Attention-pytorch): attention mechanisms, MLPs, re-parameterization.
* [Python TheAlgorithms/Python: All Algorithms implemented in Python (github.com)](https://github.com/TheAlgorithms/Python)
* PyTorch training handbook: https://github.com/zergtant/pytorch-handbook
## 2. Tools and tricks
* [形象直观了解谷歌大脑新型优化器LAMB](https://www.toutiao.com/i6687162064395305475/)
* [梯度下降方法的视觉解释(动量,AdaGrad,RMSProp,Adam)](https://www.toutiao.com/i6836422484028293640/)
@@ -35,10 +51,8 @@ | |||
## Course & Code | |||
* [《统计学习方法》的代码](https://gitee.com/afishoutis/MachineLearning) | |||
## Exercise | |||
## 3. Exercises
* http://sofasofa.io/competitions.php?type=practice | |||
* https://www.kaggle.com/competitions | |||
* Machine learning project ideas | |||
@@ -50,10 +64,12 @@ | |||
* Titanic: notebooks/data-science-ipython-notebooks/kaggle/titanic.ipynb | |||
* Solving jigsaw puzzles with a neural network: https://www.toutiao.com/a6855437347463365133/
* [Sudoku-Solver](https://github.com/shivaverma/Sudoku-Solver)
* Small Python projects: https://github.com/kyclark/tiny_python_projects
## Method | |||
## 4. Machine learning methods
### 4.1 Classic machine learning methods
* Programming Multiclass Logistic Regression | |||
notebooks/MachineLearningNotebooks/05.%20Logistic%20Regression.ipynb | |||
@@ -74,7 +90,7 @@ http://localhost:8889/notebooks/machineLearning/10_digits_classification.ipynb | |||
http://localhost:8889/notebooks/machineLearning/notebooks/01%20-%20Model%20Selection%20and%20Assessment.ipynb | |||
## NN | |||
### 4.2 NN | |||
* Neural networks: gradient descent & backpropagation: https://blog.csdn.net/skullfang/article/details/78634317
* Deep learning from scratch (3): neural networks and the backpropagation algorithm: https://www.zybuluo.com/hanbingtao/note/476663
* How to explain the backpropagation algorithm intuitively? https://www.zhihu.com/question/27239198
@@ -85,10 +101,10 @@ http://localhost:8889/notebooks/machineLearning/notebooks/01%20-%20Model%20Selec | |||
* https://www.python-course.eu/neural_networks_with_python_numpy.php | |||
## k-Means | |||
### 4.3 k-Means | |||
* [如何使用 Keras 实现无监督聚类](http://m.sohu.com/a/236221126_717210) | |||
## AutoEncoder (unsupervised learning)
### 4.4 AutoEncoder (unsupervised learning)
* https://morvanzhou.github.io/tutorials/machine-learning/torch/4-04-autoencoder/ | |||
* https://github.com/MorvanZhou/PyTorch-Tutorial/blob/master/tutorial-contents/404_autoencoder.py | |||
* pytorch AutoEncoder: https://www.jianshu.com/p/f0929f427d03