
Update autograd description

pull/5/head
bushuhui · 3 years ago · commit 281702859e
6 changed files with 130 additions and 116 deletions
  1. 6_pytorch/0_basic/2-autograd.ipynb (+35, -37)
  2. 6_pytorch/1_NN/1-linear-regression-gradient-descend.ipynb (+33, -33)
  3. 6_pytorch/1_NN/2-logistic-regression.ipynb (+59, -43)
  4. 6_pytorch/2_CNN/1-basic_conv.ipynb (+1, -1)
  5. 6_pytorch/2_CNN/2-batch-normalization.ipynb (+1, -1)
  6. README.md (+1, -1)

6_pytorch/0_basic/2-autograd.ipynb (+35, -37)

@@ -10,7 +10,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -28,7 +28,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {},
"outputs": [
{
@@ -67,7 +67,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -93,7 +93,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 4,
"metadata": {},
"outputs": [
{
@@ -115,7 +115,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -127,12 +127,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are not familiar with matrix multiplication, you can review it at the [following link](https://baike.baidu.com/item/%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95/5446029?fr=aladdin)"
"If you are not familiar with matrix multiplication, review it with this [guide to matrix multiplication](https://baike.baidu.com/item/%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95/5446029?fr=aladdin)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -326,7 +326,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 16,
"metadata": {},
"outputs": [
{
@@ -345,7 +345,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
@@ -354,7 +354,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 18,
"metadata": {},
"outputs": [
{
@@ -371,7 +371,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
@@ -413,7 +413,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4 Exercises\n",
"## 4. Exercises\n",
"\n",
"定义\n",
"\n",
@@ -462,7 +462,7 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
@@ -475,18 +475,31 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 29,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([2., 3., 4.], requires_grad=True)\n",
"tensor([2., 0., 0.])\n"
]
}
],
"source": [
"#k.backward(torch.ones_like(k)) \n",
"#print(x.grad)\n",
"# The difference from the previous cell is that this computes the summed derivative, not each component separately."
"# demo to show how to use `.backward`\n",
"x = torch.tensor([2,3,4], dtype=torch.float, requires_grad=True)\n",
"print(x)\n",
"y = x*2\n",
"\n",
"y.backward(torch.tensor([1, 0, 0], dtype=torch.float))\n",
"print(x.grad)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 26,
"metadata": {},
"outputs": [
{
@@ -500,6 +513,7 @@
}
],
"source": [
"# calc k_0 -> (x_0, x_1)\n",
"j = torch.zeros(2, 2)\n",
"k.backward(torch.FloatTensor([1, 0]), retain_graph=True)\n",
"print(k)\n",
@@ -508,6 +522,7 @@
"\n",
"x.grad.data.zero_() # zero out the previously accumulated gradient\n",
"\n",
"# calc k_1 -> (x_0, x_1)\n",
"k.backward(torch.FloatTensor([0, 1]))\n",
"j[1] = x.grad.data\n",
"print(x.grad.data)\n"
@@ -515,24 +530,7 @@
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([13., 13.], grad_fn=<CopySlices>)\n"
]
}
],
"source": [
"print(k)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 30,
"metadata": {},
"outputs": [
{

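The hunks above build a 2×2 Jacobian row by row with one-hot `backward` calls. A self-contained sketch of the same technique, assuming PyTorch is installed; the definition of `k` below is an assumption, reconstructed so that `k` evaluates to `tensor([13., 13.])` as printed in the diff (the notebook's actual definition of `k` is not shown here):

```python
import torch

# Assumed input and function: x = (2, 3); k_0 = x_0^2 + 3*x_1, k_1 = x_1^2 + 2*x_0
x = torch.tensor([2.0, 3.0], requires_grad=True)
k = torch.zeros(2)
k[0] = x[0] ** 2 + 3 * x[1]   # 4 + 9 = 13
k[1] = x[1] ** 2 + 2 * x[0]   # 9 + 4 = 13

# backward(v) accumulates v^T * J into x.grad, so a one-hot v extracts one
# Jacobian row: j[i][m] = d k_i / d x_m.
j = torch.zeros(2, 2)
k.backward(torch.FloatTensor([1, 0]), retain_graph=True)  # keep graph for the 2nd pass
j[0] = x.grad.data

x.grad.data.zero_()  # gradients accumulate, so clear before the next row

k.backward(torch.FloatTensor([0, 1]))
j[1] = x.grad.data
print(j)  # [[2*x_0, 3], [2, 2*x_1]] = [[4., 3.], [2., 6.]]
```

Newer PyTorch versions expose this computation directly as `torch.autograd.functional.jacobian`, which avoids the manual one-hot loop and the `retain_graph` bookkeeping.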

6_pytorch/1_NN/1-linear-regression-gradient-descend.ipynb (+33, -33)
File diff suppressed because it is too large


6_pytorch/1_NN/2-logistic-regression.ipynb (+59, -43)
File diff suppressed because it is too large


6_pytorch/2_CNN/1-basic_conv.ipynb (+1, -1)

@@ -355,7 +355,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
"version": "3.7.9"
}
},
"nbformat": 4,


6_pytorch/2_CNN/2-batch-normalization.ipynb (+1, -1)

@@ -572,7 +572,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
"version": "3.7.9"
}
},
"nbformat": 4,


README.md (+1, -1)

@@ -4,7 +4,7 @@

Since **this course requires a large amount of programming practice to achieve good results**, you should carefully complete the assignments in [《机器学习-作业和报告》](https://gitee.com/pi-lab/machinelearning_homework). You may consult online materials while working on them, but do not copy directly; think through the problems and write the code on your own.

To help everyone self-study this course, the lecture videos will be uploaded to [《B站 - 机器学习》](https://www.bilibili.com/video/BV1oZ4y1N7ei/) over time; you are welcome to watch and learn from them.
To help everyone self-study this course, the lecture videos are available at [《B站 - 机器学习》](https://www.bilibili.com/video/BV1oZ4y1N7ei/); you are welcome to watch and learn from them.

![Machine Learning Cover](images/machine_learning_1.png)


