
Pre Merge pull request !7 from yuhui liu/master

pull/7/MERGE
yuhui liu (Gitee), 3 years ago
commit 08bcc6148f
23 changed files with 666 additions and 567 deletions
  1. +447 -426  1_numpy_matplotlib_scipy_sympy/1-numpy_tutorial.ipynb
  2.   +3   -3  1_numpy_matplotlib_scipy_sympy/random-matrix.csv
  3.  BIN       1_numpy_matplotlib_scipy_sympy/random-matrix.npy
  4.   +7   -3  2_knn/knn_classification.ipynb
  5.  +45  -45  3_kmeans/1-k-means.ipynb
  6.   +4   -2  3_kmeans/2-kmeans-color-vq.ipynb
  7.  BIN       3_kmeans/k-means_data.pdf
  8.  BIN       3_kmeans/k-means_groundtruth.pdf
  9.  BIN       3_kmeans/k-means_predict.pdf
 10.  BIN       3_kmeans/k-means_silhouette_coef.pdf
 11.  +70  -41  5_nn/1-Perceptron.ipynb
 12.   +4   -2  5_nn/2-mlp_bp.ipynb
 13.  +37  -19  5_nn/3-softmax_ce.ipynb
 14.  BIN       5_nn/images/figures.pptx
 15.  BIN       5_nn/images/softmax_neuron.0.png
 16.  BIN       5_nn/images/softmax_neuron.png
 17.  BIN       5_nn/perceptron_sample_data.pdf
 18.   +4   -4  7_deep_learning/1_CNN/8-regularization.ipynb
 19.  BIN       7_deep_learning/1_CNN/CNN_Introduction.pptx
 20.   +8   -0  7_deep_learning/README.md
 21.  BIN       7_deep_learning/imgs/resnet-development.png
 22.  +12  -13  README.md
 23.  +25   -9  References.md

+447 -426  1_numpy_matplotlib_scipy_sympy/1-numpy_tutorial.ipynb
(file diff suppressed because it is too large)


+3 -3  1_numpy_matplotlib_scipy_sympy/random-matrix.csv

@@ -1,3 +1,3 @@
-0.81041 0.69606 0.42944
-0.99033 0.60317 0.82435
-0.70689 0.05605 0.53930
+0.34743 0.34666 0.67796
+0.37776 0.74529 0.44639
+0.70970 0.54722 0.96401
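All three rows change at once, which is consistent with the matrix simply being regenerated on a fresh run of the tutorial. As a hedged sketch (the tutorial's actual code is not shown in this diff), a 3×3 random matrix saved to both of the changed files might look like:

```python
import numpy as np

# Assumed reconstruction: regenerate a 3x3 random matrix; every value
# changes on each run, which would explain the full rewrite in this commit.
m = np.random.rand(3, 3)

# Text form matches random-matrix.csv: space-separated, 5 decimals.
np.savetxt("random-matrix.csv", m, fmt="%.5f")
# Binary form matches the BIN change to random-matrix.npy below.
np.save("random-matrix.npy", m)
```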

BIN  1_numpy_matplotlib_scipy_sympy/random-matrix.npy


+7 -3  2_knn/knn_classification.ipynb

@@ -322,7 +322,9 @@
 {
  "cell_type": "code",
  "execution_count": 3,
- "metadata": {},
+ "metadata": {
+  "collapsed": true
+ },
  "outputs": [],
  "source": [
   "import numpy as np\n",
@@ -479,7 +481,9 @@
 {
  "cell_type": "code",
  "execution_count": 9,
- "metadata": {},
+ "metadata": {
+  "collapsed": true
+ },
  "outputs": [],
  "source": [
   "# split train / test data\n",
@@ -570,7 +574,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.7.9"
+  "version": "3.5.4"
  }
 },
 "nbformat": 4,


+45 -45  3_kmeans/1-k-means.ipynb
(file diff suppressed because it is too large)


+4 -2  3_kmeans/2-kmeans-color-vq.ipynb

@@ -16,7 +16,9 @@
 {
  "cell_type": "code",
  "execution_count": 1,
- "metadata": {},
+ "metadata": {
+  "collapsed": true
+ },
  "outputs": [],
  "source": [
   "%matplotlib inline\n",
@@ -206,7 +208,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.7.9"
+  "version": "3.5.4"
  }
 },
 "nbformat": 4,


BIN  3_kmeans/k-means_data.pdf

BIN  3_kmeans/k-means_groundtruth.pdf

BIN  3_kmeans/k-means_predict.pdf

BIN  3_kmeans/k-means_silhouette_coef.pdf


+70 -41  5_nn/1-Perceptron.ipynb
(file diff suppressed because it is too large)


+4 -2  5_nn/2-mlp_bp.ipynb

@@ -719,7 +719,9 @@
 {
  "cell_type": "code",
  "execution_count": 9,
- "metadata": {},
+ "metadata": {
+  "collapsed": true
+ },
  "outputs": [],
  "source": [
   "import numpy as np\n",
@@ -1024,7 +1026,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.7.9"
+  "version": "3.5.4"
  }
 },
 "nbformat": 4,


+37 -19  5_nn/3-softmax_ce.ipynb

@@ -11,13 +11,13 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "softmax is often added as the output layer of neural networks for classification tasks; the key step in backpropagation is taking derivatives, and working through this process gives a deeper understanding of backpropagation and more insight into gradient-propagation issues.\n",
+  "`Softmax` is often added as the output layer of neural networks for classification tasks; the key step in backpropagation is taking derivatives, and working through this process gives a deeper understanding of backpropagation and more insight into gradient-propagation issues.\n",
   "\n",
   "## 1. The softmax function\n",
   "\n",
-  "The softmax (soft maximum) function generally serves as the output layer for classification tasks in a neural network. Its output can be read as the probability of choosing each class: for a three-class task, the softmax function maps the relative magnitudes of the inputs to the probability of selecting each of the three classes, with the probabilities summing to 1.\n",
+  "The `softmax` (soft maximum) function generally serves as the output layer for classification tasks in a neural network. Its output can be read as the probability of choosing each class: for a three-class task, `softmax` maps the relative magnitudes of the inputs to the probability of selecting each of the three classes, with the probabilities summing to 1.\n",
   "\n",
-  "Literally, Softmax splits into `soft` and `max`. `max` means, as the name suggests, the maximum value. The heart of Softmax is `soft`, the opposite of `hard`. Many scenarios require us to find the largest element of an array, which is really computing a `hardmax`. Below we implement hardmax with the `Numpy` module.\n"
+  "Literally, Softmax splits into `soft` and `max`. `max` means, as the name suggests, the maximum value. The heart of Softmax is `soft`, the opposite of `hard`. Many scenarios require finding the largest element of an array, which is really computing a `hardmax`. Below we implement hardmax with the `Numpy` module.\n"
  ]
 },
 {
@@ -62,7 +62,7 @@
   "\n",
   "![softmax_demo](images/softmax_demo.png)\n",
   "\n",
-  "Plainly put, softmax maps the original outputs $[3,1,-3]$ into values in (0,1) whose sum is 1 (satisfying the properties of a probability), so we can read them as probabilities; when picking the output node at the end, we can pick the node with the largest probability (i.e. the largest value) as our prediction target!\n"
+  "Plainly put, softmax maps the original outputs $[3,1,-3]$ into values in (0,1) whose sum is 1 (satisfying the properties of a probability), so we can read them as probabilities; when picking the output node at the end, pick the node with the largest probability (i.e. the largest value) as the prediction target!\n"
  ]
 },
 {
@@ -78,12 +78,12 @@
   "Let the neuron's output be:\n",
   "\n",
   "$$\n",
-  "z_i = sigmoid( \\sum_{j} w_{ij} x_{j} + b )\n",
+  "z_i = \\sum_{j} w_{ij} x_{j} + w_b\n",
   "$$\n",
   "\n",
-  "where $W_{ij}$ is the $j$-th weight of the $i$-th neuron, $b$ is the bias, and $z_i$ is the network's $i$-th output.\n",
+  "where $W_{ij}$ is the $j$-th weight of the $i$-th neuron, $w_b$ is the bias, and $z_i$ is the network's $i$-th output. **Note that no activation function such as sigmoid is used here.**\n",
   "\n",
-  "Adding a softmax function to this output gives:\n",
+  "Adding a softmax function to this network output gives:\n",
   "\n",
   "$$\n",
   "a_i = \\frac{e^{z_i}}{\\sum_k e^{z_k}}\n",
@@ -108,7 +108,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "Take the two-class training of a single neuron as an example and run two experiments (the activation commonly used in neural networks is `sigmoid`, which this experiment also uses): feed the same sample x=1.0 (whose true class is y=0); each experiment randomly initializes its own parameters, so the first forward pass yields different outputs and hence different costs (errors):\n",
+  "Take the two-class training of a single neuron as an example and run two experiments (the activation commonly used in neural networks is `sigmoid`, which this experiment also uses): feed the same sample $x=1.0$ (whose true class is $y=0$); each experiment randomly initializes its own parameters, so the first forward pass yields different outputs and hence different costs (errors):\n",
   "\n",
   "![cross_entropy_loss_1](images/cross_entropy_loss_1.png)\n",
   "Experiment 1: the first output is 0.82\n",
@@ -143,7 +143,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "## 2. The derivation\n",
+  "## 3. The derivation\n",
   "\n",
   "First, let's be clear about what we are after: the gradient of the $loss$ with respect to the neuron outputs ($z_i$), that is:\n",
   "\n",
@@ -158,14 +158,26 @@
   "$$\n",
   "\n",
   "Some may wonder why it is $a_j$ here rather than $a_i$. Look at the $softmax$ formula: because its denominator contains the outputs of all neurons, the outputs with index other than $i$ also contain $z_i$, so every $a$ must enter the computation, and as the derivation below shows we must treat the cases $i = j$ and $i \\ne j$ separately.\n",
-  "\n",
-  "### 2.1 Partial derivative with respect to $a_j$\n",
+  "\n"
+ ]
+},
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "### 3.1 Partial derivative with respect to $a_j$\n",
   "\n",
   "$$\n",
   "\\frac{\\partial C}{\\partial a_j} = \\frac{(\\partial -\\sum_j y_j ln a_j)}{\\partial a_j} = -\\sum_j y_j \\frac{1}{a_j}\n",
   "$$\n",
-  "\n",
-  "### 2.2 Partial derivative with respect to $z_i$\n",
+  "\n"
+ ]
+},
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "### 3.2 Partial derivative with respect to $z_i$\n",
   "\n",
   "If $i=j$:\n",
   "\n",
@@ -188,8 +200,14 @@
   "$$\n",
   "(\\frac{u}{v})' = \\frac{u'v - uv'}{v^2} \n",
   "$$\n",
-  "\n",
-  "### 2.3 The full derivation\n",
+  "\n"
+ ]
+},
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "### 3.3 The full derivation\n",
   "\n",
   "\\begin{eqnarray}\n",
   "\\frac{\\partial C}{\\partial z_i} & = & (-\\sum_j y_j \\frac{1}{a_j} ) \\frac{\\partial a_j}{\\partial z_i} \\\\\n",
@@ -211,7 +229,7 @@
   "\n",
   "where\n",
   "$$\n",
-  "z_i = \\sum_{j} w_{ij} x_{j} + b\n",
+  "z_i = \\sum_{j} w_{ij} x_{j} + w_b\n",
   "$$\n"
  ]
 },
@@ -219,7 +237,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "For the quadratic cost function, the update equation is:\n",
+  "For comparison, the update equation with the quadratic cost function is:\n",
   "\n",
   "$$\n",
   "\\delta_i = a_i (1-a_i) (y_i - a_i)\n",
@@ -234,7 +252,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "## 3. Question\n",
+  "## 4. Question\n",
   "How can the softmax and cross-entropy cost function of this section be applied to the BP method of the previous section?"
  ]
 },
@@ -267,7 +285,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.7.9"
+  "version": "3.5.4"
  }
 },
 "nbformat": 4,


BIN  5_nn/images/figures.pptx

BIN  5_nn/images/softmax_neuron.0.png  (907 × 328, 25 kB)

BIN  5_nn/images/softmax_neuron.png  (before: 907 × 328, 25 kB → after: 1258 × 425, 51 kB)

BIN  5_nn/perceptron_sample_data.pdf


+4 -4  7_deep_learning/1_CNN/8-regularization.ipynb

@@ -20,7 +20,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "If we take the derivative of the new loss function f and run gradient descent, we get\n",
+  "If we take the derivative of the new loss function $f$ and run gradient descent, we get\n",
   "\n",
   "$$\n",
   "\\frac{\\partial f}{\\partial p_j} = \\frac{\\partial loss}{\\partial p_j} + 2 \\lambda p_j\n",
@@ -37,9 +37,9 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "You can see that $p_j - \\eta \\frac{\\partial loss}{\\partial p_j}$ is the same update as without the regularization term, while the trailing $2\\eta \\lambda p_j$ is the regularization term's effect: with it, the parameters receive a larger update, which is also called weight decay, in pytorch the regularization term is added in exactly this way — to use regularization, i.e. weight decay, with stochastic gradient descent, `torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=1e-4)` is all it takes, and this `weight_decay` coefficient is the $\\lambda$ in the formula above, very convenient\n",
+  "You can see that $p_j - \\eta \\frac{\\partial loss}{\\partial p_j}$ is the same update as without the regularization term, while the trailing $2\\eta \\lambda p_j$ is the regularization term's effect: with it, the parameters receive a larger update, which is also called weight decay. In PyTorch the regularization term is added in exactly this way — to use regularization, i.e. weight decay, with stochastic gradient descent, `torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=1e-4)` is all it takes, and this `weight_decay` coefficient is the $\\lambda$ in the formula above, very convenient\n",
   "\n",
-  "Note that the size of the regularization coefficient matters a great deal: if it is too large it strongly suppresses parameter updates and causes underfitting, if it is too small the regularization term contributes almost nothing, so choosing a suitable weight-decay coefficient is important and takes some experimentation; `1e-4` or `1e-3` is a reasonable first try \n",
+  "Note that the size of the regularization coefficient matters a great deal: if it is too large it strongly suppresses parameter updates and causes underfitting; if it is too small the regularization term contributes almost nothing. So choosing a suitable weight-decay coefficient is important and takes some experimentation; `1e-4` or `1e-3` is a reasonable first try \n",
   "\n",
   "Below we add the regularization term while training on cifar 10"
  ]
@@ -159,7 +159,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.5.4"
+  "version": "3.7.9"
  }
 },
 "nbformat": 4,


BIN  7_deep_learning/1_CNN/CNN_Introduction.pptx


+8 -0  7_deep_learning/README.md

@@ -11,6 +11,14 @@
 Typical deep-learning models include [convolutional neural networks](1_CNN), Deep Belief Networks (DBN), stacked auto-encoder networks, Recurrent Neural Networks, and Generative Adversarial Networks (GAN).
 
 
+
+## The development of deep learning
+
+The figure below shows how common deep-learning networks have developed over time
+
+![resnet-development.png](imgs/resnet-development.png)
+
+
 ## References
 * [Deep learning – Deep learning](https://easyai.tech/ai-definition/deep-learning/)
 * [Deep learning](https://www.jiqizhixin.com/graph/technologies/01946acc-d031-4c0e-909c-f062643b7273)

BIN  7_deep_learning/imgs/resnet-development.png  (1542 × 886, 150 kB)

+12 -13  README.md

@@ -52,14 +52,15 @@
 - CNN
   - [CNN Introduction](7_deep_learning/1_CNN/CNN_Introduction.pptx)
   - [CNN simple demo](demo_code/3_CNN_MNIST.py)
-  - [cnn/basic_conv](7_deep_learning/1_CNN/1-basic_conv.ipynb)
-  - [cnn/batch-normalization](7_deep_learning/1_CNN/2-batch-normalization.ipynb)
-  - [cnn/lr-decay](7_deep_learning/2_CNN/1-lr-decay.ipynb)
-  - [cnn/regularization](7_deep_learning/1_CNN/4-regularization.ipynb)
-  - [cnn/vgg](7_deep_learning/1_CNN/6-vgg.ipynb)
-  - [cnn/googlenet](7_deep_learning/1_CNN/7-googlenet.ipynb)
-  - [cnn/resnet](7_deep_learning/1_CNN/8-resnet.ipynb)
-  - [cnn/densenet](7_deep_learning/1_CNN/9-densenet.ipynb)
+  - [Basic of Conv](7_deep_learning/1_CNN/1-basic_conv.ipynb)
+  - [VGG Network](7_deep_learning/1_CNN/2-vgg.ipynb)
+  - [GoogleNet](7_deep_learning/1_CNN/3-googlenet.ipynb)
+  - [ResNet](7_deep_learning/1_CNN/4-resnet.ipynb)
+  - [DenseNet](7_deep_learning/1_CNN/5-densenet.ipynb)
+  - [Batch Normalization](7_deep_learning/1_CNN/6-batch-normalization.ipynb)
+  - [Learning Rate Decay](7_deep_learning/2_CNN/7-lr-decay.ipynb)
+  - [Regularization](7_deep_learning/1_CNN/8-regularization.ipynb)
+  - [Data Augumentation](7_deep_learning/1_CNN/9-data-augumentation.ipynb)
 - RNN
   - [rnn/pytorch-rnn](7_deep_learning/2_RNN/pytorch-rnn.ipynb)
   - [rnn/rnn-for-image](7_deep_learning/2_RNN/rnn-for-image.ipynb)
@@ -72,7 +73,7 @@
 
 
 ## 2. Study suggestions
-1. To learn this course well, you need to build up your Python programming skills, using a good number of exercises and small projects to develop a Python programming mindset and lay a solid foundation for the machine-learning theory and practice that follows.
+1. To learn this course well, you need to build up your [Python programming](0_python) skills, using a good number of exercises and small projects to develop a Python programming mindset and lay a solid foundation for the machine-learning theory and practice that follows.
 2. The first half of each lesson is theory, the second half is code. To learn more solidly, implement each method's code yourself. When problems come up, try as far as possible to work out a solution on your own, because the most important goal is not the code itself but the ability to analyze and solve problems.
 3. **Do not copy existing programs or other people's programs.** If you don't know how, think it through, search for a solution, or ask. Copying someone else's code makes the exercise pointless. **If it feels too hard, go slower, but keep thinking for yourself and writing your own practice code.**
 4. **First walk through all the folders to see what content and materials there are.** Each directory contains many explanatory documents; if you are stuck, look for a document first, and if no suitable one turns up, search online. Use this process to train your ability to find literature and resources.
@@ -80,21 +81,19 @@
 
 
 
-## 3. References
-* [Tutorials, code](References.md)
+## 3. [References](References.md)
 * Quick reference
   * [Collected learning references](References.md)
   * [Some cheat sheets](references_tips/cheatsheet)
-
 * Machine-learning tips
   * [Confusion Matrix](references_tips/confusion_matrix.ipynb)
   * [Datasets](references_tips/datasets.ipynb)
   * [Practical advice for building deep neural networks](references_tips/构建深度神经网络的一些实战建议.md)
   * [Intro to Deep Learning](references_tips/Intro_to_Deep_Learning.pdf)
 
 * Python tips
   * [Installing the Python environment](references_tips/InstallPython.md)
   * [Python tips](references_tips/python)
 
 * [Git tutorial](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/git/README.md)
 * [Markdown tutorial](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/markdown/README.md)




+25 -9  References.md

@@ -1,11 +1,27 @@
-# References
+# 参考资料
 You can look through the list below for learning materials that suit you. Although quite a lot is listed, it is best to pick one resource for deep reading and practice; once you have practiced enough, look at the others to cover the gaps any single resource may leave.
 
 The larger list lives at https://gitee.com/pi-lab/pilab_research_fields/blob/master/references/ML_References.md
 
 
-## References
+## 1. Tutorials & code
+
+### 1.1 Tutorials
+
+* [《动手学深度学习》 — Dive into Deep Learning 2.0.0-alpha2 documentation](https://zh-v2.d2l.ai/index.html)
+* [Introduction — Neuromatch Academy: Deep Learning](https://deeplearning.neuromatch.io/tutorials/intro.html)
+
+### 1.2 Code
+
+* [Code for 《统计学习方法》](https://gitee.com/afishoutis/MachineLearning)
+* [PyTorch implementation of 《统计学习方法》](https://github.com/fengdu78/lihang-code)
+* [pytorch-cifar100](https://github.com/weiaicunzai/pytorch-cifar100): implements ResNet, DenseNet, VGG, GoogleNet, InceptionV3, InceptionV4, Inception-ResNetv2, Xception, Resnet In Resnet, ResNext, ShuffleNet, ShuffleNetv2, MobileNet, MobileNetv2, SqueezeNet, NasNet, Residual Attention Network, SENet, WideResNet
+* [xmu-xiaoma666/External-Attention-pytorch: Pytorch implementation of various Attention Mechanisms, MLP, Re-parameter, Convolution, which is helpful to further understand papers.](https://github.com/xmu-xiaoma666/External-Attention-pytorch) Attention mechanisms, MLPs, re-parameterization.
+* [TheAlgorithms/Python: All Algorithms implemented in Python](https://github.com/TheAlgorithms/Python)
+* PyTorch training handbook: https://github.com/zergtant/pytorch-handbook
+
+## 2. Tools & tips
+
 * [A visual, intuitive look at Google Brain's new LAMB optimizer](https://www.toutiao.com/i6687162064395305475/)
 * [A visual explanation of gradient-descent methods (momentum, AdaGrad, RMSProp, Adam)](https://www.toutiao.com/i6836422484028293640/)
@@ -35,10 +51,8 @@
 
 
 
-## Course & Code
-* [Code for 《统计学习方法》](https://gitee.com/afishoutis/MachineLearning)
-
-## Exercise
+## 3. Exercises
 * http://sofasofa.io/competitions.php?type=practice
 * https://www.kaggle.com/competitions
 * Machine learning project ideas
@@ -50,10 +64,12 @@
 * Titanic: notebooks/data-science-ipython-notebooks/kaggle/titanic.ipynb
 * Solving jigsaw puzzles with a neural network: https://www.toutiao.com/a6855437347463365133/
 * [Sudoku-Solver](https://github.com/shivaverma/Sudoku-Solver)
+* Small Python projects: https://github.com/kyclark/tiny_python_projects
 
 
-## Method
+## 4. Machine-learning methods
+
+### 4.1 Classic machine-learning methods
 * Programming Multiclass Logistic Regression
   notebooks/MachineLearningNotebooks/05.%20Logistic%20Regression.ipynb
@@ -74,7 +90,7 @@ http://localhost:8889/notebooks/machineLearning/10_digits_classification.ipynb
 http://localhost:8889/notebooks/machineLearning/notebooks/01%20-%20Model%20Selection%20and%20Assessment.ipynb
 
 
-## NN
+### 4.2 NN
 * Neural networks — gradient descent & backpropagation: https://blog.csdn.net/skullfang/article/details/78634317
 * Deep learning from scratch (3): neural networks and the backpropagation algorithm: https://www.zybuluo.com/hanbingtao/note/476663
 * How to explain the backpropagation algorithm intuitively? https://www.zhihu.com/question/27239198
@@ -85,10 +101,10 @@ http://localhost:8889/notebooks/machineLearning/notebooks/01%20-%20Model%20Selec
 * https://www.python-course.eu/neural_networks_with_python_numpy.php
 
 
-## k-Means
+### 4.3 k-Means
 * [Unsupervised clustering with Keras](http://m.sohu.com/a/236221126_717210)
 
-## AutoEncoder (unsupervised learning)
+### 4.4 AutoEncoder (unsupervised learning)
 * https://morvanzhou.github.io/tutorials/machine-learning/torch/4-04-autoencoder/
 * https://github.com/MorvanZhou/PyTorch-Tutorial/blob/master/tutorial-contents/404_autoencoder.py
 * pytorch AutoEncoder: https://www.jianshu.com/p/f0929f427d03
