
Add course index

pull/10/MERGE
bushuhui committed 3 years ago
commit 05f9ca85ac
9 changed files with 106 additions and 3 deletions
  1. 0_python/README.md (+5, -0)
  2. 1_numpy_matplotlib_scipy_sympy/README.md (+1, -1)
  3. 2_knn/README.md (+14, -0)
  4. 3_kmeans/README.md (+12, -1)
  5. 4_logistic_regression/README.md (+17, -0)
  6. 5_nn/README.md (+17, -0)
  7. 6_pytorch/README.md (+12, -0)
  8. 7_deep_learning/README.md (+27, -0)
  9. README.md (+1, -1)

0_python/README.md (+5, -0)

@@ -36,6 +36,11 @@ Python is an easy-to-learn, powerful, general-purpose scripting language.


## References
### Video tutorials
* [Learn Python in 90 Minutes](https://www.bilibili.com/video/BV1Uz4y167xY)
* [Learning Python from Scratch (video series)](https://www.bilibili.com/video/BV1c4411e77t)

### Tutorials
* [Installing the Python environment](../references_tips/InstallPython.md)
* [IPython Notebooks to learn Python](https://github.com/rajathkmp/Python-Lectures)
* [Liao Xuefeng's Python tutorial](https://www.liaoxuefeng.com/wiki/1016959663602400)


1_numpy_matplotlib_scipy_sympy/README.md (+1, -1)

@@ -1,4 +1,4 @@
# numpy, matplotlib, scipy and other common libraries
# Numpy, Matplotlib, Scipy and other common libraries

## Contents
* [numpy tutorial](1-numpy_tutorial.ipynb)


2_knn/README.md (+14, -0)

@@ -0,0 +1,14 @@
# kNN Classification

The k-Nearest Neighbor (kNN) classification algorithm is a theoretically well-established method and one of the simplest machine learning algorithms. Its idea is: **if the majority of the k samples most similar to a given sample (i.e., its nearest neighbors in feature space) belong to a certain class, then the sample belongs to that class as well**.

kNN can be used not only for classification but also for regression: find the `k` nearest neighbors of a sample and assign it the average of those neighbors' values. A more useful variant weights each neighbor's contribution by its distance, for example using inverse-distance weights so that closer neighbors count more.

![knn](images/knn.png)
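To make the idea concrete, here is a minimal illustrative sketch of kNN classification in NumPy (not part of the course notebooks; the toy data and `k=3` are made up for the example):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# toy data: two clusters in 2-D
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.1, 0.8]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.0, 0.9])))   # -> 1
```

For weighted kNN regression, the same neighbor search applies, but the neighbors' values are averaged with weights that decrease with distance.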



## Contents

* [knn_classification](knn_classification.ipynb)


3_kmeans/README.md (+12, -1)

@@ -1,8 +1,19 @@
## k-Means

k-Means is one of the most classic algorithms in unsupervised learning. It partitions n data points into k clusters so that the distance from each data point to its cluster center is minimized. In practice this problem is NP-hard, so many heuristic methods are used to solve it while trying to avoid poor local minima.

![cluster illustration](images/kmeans-illustration.jpeg)
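As an illustration of the heuristic described above (not the course code; the toy data and `k=2` are invented for the example), a bare-bones Lloyd's-style k-Means loop in NumPy:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Alternate between assigning points to the nearest center and recomputing centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial centers
    for _ in range(n_iter):
        # assignment step: nearest center for every point
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each center to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5])   # two well-separated blobs
centers, labels = kmeans(X, k=2)
print(centers)   # roughly [0, 0] and [5, 5]
```

Because each run only reaches a local minimum, practical implementations restart from several random initializations and keep the best result.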

## Contents

TODO: add an explanation and an example program for Bag of Words (https://blog.csdn.net/wsj998689aa/article/details/47089153); a minimal illustrative sketch follows the list below.
* [k-Means: theory and algorithm](1-k-means.ipynb)
* [Application: image compression](2-kmeans-color-vq.ipynb)
* [Comparison of clustering algorithms](3-ClusteringAlgorithms.ipynb)
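Following up on that note, a minimal Bag-of-Words sketch (toy corpus invented for illustration; the vocabulary is built directly from the documents):

```python
from collections import Counter

corpus = ["the cat sat on the mat", "the dog sat on the log"]

# vocabulary = all distinct words in the corpus, in a fixed order
vocab = sorted({word for doc in corpus for word in doc.split()})

def bow_vector(doc):
    """Represent a document as word counts over the shared vocabulary."""
    counts = Counter(doc.split())
    return [counts[word] for word in vocab]

for doc in corpus:
    print(doc, "->", bow_vector(doc))
```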



## References

* [How to implement unsupervised clustering with Keras](http://m.sohu.com/a/236221126_717210)

* [Introduction to the Bag-of-Words model](https://blog.csdn.net/wsj998689aa/article/details/47089153)

4_logistic_regression/README.md (+17, -0)

@@ -0,0 +1,17 @@
# Logistic Regression

The logistic regression (LR) model is simply linear regression with a logistic function applied on top, but it is precisely this logistic function that lets the model output class probabilities. At its core, logistic regression assumes the data follow this distribution and then estimates the parameters by maximum likelihood.

![theory](images/linear_logistic_regression.png)
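To make the "linear model plus logistic function" idea concrete, here is an illustrative sketch (toy 1-D data invented for the example; plain gradient ascent on the log-likelihood, i.e. maximum likelihood estimation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy 1-D data: class 1 tends to have larger x
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])
X_b = np.hstack([X, np.ones((len(X), 1))])    # append a bias column

w = np.zeros(X_b.shape[1])
lr = 0.1
for _ in range(1000):
    p = sigmoid(X_b @ w)          # linear model + logistic function = probability of class 1
    w += lr * X_b.T @ (y - p)     # gradient ascent on the log-likelihood

print(float(sigmoid(np.array([1.0, 1.0]) @ w)))   # P(y=1 | x=1.0), well below 0.5
```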



## Contents

* [Linear regression: least squares](1-Least_squares.ipynb)
* [Logistic regression](2-Logistic_regression.ipynb)
* [Feature processing: dimensionality reduction](3-PCA_and_Logistic_Regression.ipynb)


5_nn/README.md (+17, -0)

@@ -1,4 +1,21 @@
# Neural Networks

An artificial neural network (ANN), or simply neural network (NN), is a mathematical or computational model that mimics the structure and function of biological neural networks. A neural network performs its computation through a large number of interconnected artificial neurons. In most cases it can change its internal structure based on external information, making it an adaptive system. Modern neural networks are nonlinear statistical modeling tools, commonly used to model complex relationships between inputs and outputs or to discover patterns in data.

![mlp_theory](images/mlp_theory.gif)
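In the spirit of the iamtrask post listed under References below, a tiny two-layer network trained with backpropagation can be written in a few lines of NumPy (an illustrative sketch, not the notebook code; the task is XOR of the first two inputs, with a constant third input acting as a bias):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

np.random.seed(1)
W1 = 2 * np.random.random((3, 4)) - 1   # input  -> hidden weights
W2 = 2 * np.random.random((4, 1)) - 1   # hidden -> output weights

for _ in range(60000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: deltas = error * derivative of the sigmoid
    out_delta = (y - out) * out * (1 - out)
    h_delta = (out_delta @ W2.T) * h * (1 - h)
    # full-batch gradient-descent update on the squared error
    W2 += h.T @ out_delta
    W1 += X.T @ h_delta

print(out.round(2))   # approaches [[0], [1], [1], [0]]
```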

## Contents

* [Perceptron](1-Perceptron.ipynb)
* [Multilayer networks and backpropagation](2-mlp_bp.ipynb)
* [Softmax and cross-entropy](3-softmax_ce.ipynb)


## References

* https://iamtrask.github.io/2015/07/12/basic-python-network/
* http://www.wildml.com/2015/09/implementing-a-neural-network-from-scratch/


6_pytorch/README.md (+12, -0)

@@ -11,8 +11,20 @@ PyTorch's concise design makes it very easy to get started; before going deeper into PyTorch, this section

![PyTorch Demo](imgs/PyTorch.png)

## Contents

- [Tensor](1-tensor.ipynb)
- [autograd](2-autograd.ipynb)
- [linear-regression](3-linear-regression.ipynb)
- [logistic-regression](4-logistic-regression.ipynb)
- [nn-sequential-module](5-nn-sequential-module.ipynb)
- [deep-nn](6-deep-nn.ipynb)
- [param_initialize](7-param_initialize.ipynb)
- [optim/sgd](optimizer/6_1-sgd.ipynb)
- [optim/adam](optimizer/6_6-adam.ipynb)
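For orientation before the notebooks, a short illustrative snippet (not from the course material; the toy regression data are invented) that touches tensors, autograd, and an SGD optimizer:

```python
import torch

# toy data for y = 2x + 1 with a little noise
x = torch.linspace(0, 1, 20).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)

# parameters tracked by autograd
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

optimizer = torch.optim.SGD([w, b], lr=0.5)
for _ in range(200):
    pred = x * w + b                  # forward pass
    loss = ((pred - y) ** 2).mean()   # mean squared error
    optimizer.zero_grad()
    loss.backward()                   # autograd fills w.grad and b.grad
    optimizer.step()

print(w.item(), b.item())             # roughly 2 and 1
```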

## References

* [code of book "Learn Deep Learning with PyTorch"](https://github.com/L1aoXingyu/code-of-learn-deep-learning-with-pytorch)
* [PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation](https://github.com/chenyuntc/pytorch-book)
* [Awesome-Pytorch-list](https://github.com/bharathgs/Awesome-pytorch-list)


7_deep_learning/README.md (+27, -0)

@@ -19,7 +19,34 @@
![resnet-development.png](imgs/resnet-development.png)



## Contents

- CNN
  - [CNN Introduction](1_CNN/CNN_Introduction.pptx)
  - [CNN simple demo](../demo_code/3_CNN_MNIST.py)
  - [Basics of Convolution](1_CNN/1-basic_conv.ipynb)
  - [VGG Network](1_CNN/2-vgg.ipynb)
  - [GoogLeNet](1_CNN/3-googlenet.ipynb)
  - [ResNet](1_CNN/4-resnet.ipynb)
  - [DenseNet](1_CNN/5-densenet.ipynb)
  - [Batch Normalization](1_CNN/6-batch-normalization.ipynb)
  - [Learning Rate Decay](1_CNN/7-lr-decay.ipynb)
  - [Regularization](1_CNN/8-regularization.ipynb)
  - [Data Augmentation](1_CNN/9-data-augumentation.ipynb)
- RNN
  - [rnn/pytorch-rnn](2_RNN/pytorch-rnn.ipynb)
  - [rnn/rnn-for-image](2_RNN/rnn-for-image.ipynb)
  - [rnn/lstm-time-series](2_RNN/time-series/lstm-time-series.ipynb)
- GAN
  - [gan/autoencoder](3_GAN/autoencoder.ipynb)
  - [gan/vae](3_GAN/vae.ipynb)
  - [gan/gan](3_GAN/gan.ipynb)
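As a quick taste of the CNN material (an illustrative sketch, not the course code; the layer sizes and the fake input batch are made up), a typical convolution / batch-norm / pooling block in PyTorch:

```python
import torch
import torch.nn as nn

# convolution -> batch normalization -> ReLU -> pooling
block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

x = torch.randn(8, 3, 32, 32)   # a batch of 8 fake 32x32 RGB images
print(block(x).shape)           # torch.Size([8, 16, 16, 16])
```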



## References

* [Deep learning (easyai.tech)](https://easyai.tech/ai-definition/deep-learning/)
* [Deep learning (jiqizhixin.com)](https://www.jiqizhixin.com/graph/technologies/01946acc-d031-4c0e-909c-f062643b7273)


README.md (+1, -1)

@@ -58,7 +58,7 @@
- [ResNet](7_deep_learning/1_CNN/4-resnet.ipynb)
- [DenseNet](7_deep_learning/1_CNN/5-densenet.ipynb)
- [Batch Normalization](7_deep_learning/1_CNN/6-batch-normalization.ipynb)
- [Learning Rate Decay](7_deep_learning/2_CNN/7-lr-decay.ipynb)
- [Learning Rate Decay](7_deep_learning/1_CNN/7-lr-decay.ipynb)
- [Regularization](7_deep_learning/1_CNN/8-regularization.ipynb)
- [Data Augmentation](7_deep_learning/1_CNN/9-data-augumentation.ipynb)
- RNN

