{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 参数初始化\n", "参数初始化对模型具有较大的影响,不同的初始化方式可能会导致截然不同的结果,所幸的是很多深度学习的先驱们已经帮我们探索了各种各样的初始化方式,所以我们只需要学会如何对模型的参数进行初始化的赋值即可。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "PyTorch 的初始化方式并没有那么显然,如果你使用最原始的方式创建模型,那么你需要定义模型中的所有参数,当然这样你可以非常方便地定义每个变量的初始化方式,但是对于复杂的模型,这并不容易,而且我们推崇使用 Sequential 和 Module 来定义模型,所以这个时候我们就需要知道如何来自定义初始化方式" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 使用 NumPy 来初始化\n", "因为 PyTorch 是一个非常灵活的框架,理论上能够对所有的 Tensor 进行操作,所以我们能够通过定义新的 Tensor 来初始化,直接看下面的例子" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import torch\n", "from torch import nn" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# 定义一个 Sequential 模型\n", "net1 = nn.Sequential(\n", " nn.Linear(30, 40),\n", " nn.ReLU(),\n", " nn.Linear(40, 50),\n", " nn.ReLU(),\n", " nn.Linear(50, 10)\n", ")" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# 访问第一层的参数\n", "w1 = net1[0].weight\n", "b1 = net1[0].bias" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Parameter containing:\n", "tensor([[ 0.0276, -0.1197, -0.0397, ..., 0.0759, -0.1630, 0.1599],\n", " [ 0.1419, 0.0903, -0.1630, ..., -0.0615, 0.1502, 0.0596],\n", " [-0.0451, 0.1103, 0.1070, ..., -0.1506, -0.1346, 0.1284],\n", " ...,\n", " [-0.0975, -0.1264, 0.0738, ..., -0.1058, -0.1396, 0.1800],\n", " [-0.1352, 0.0287, 0.0779, ..., 0.1773, -0.1585, 0.1046],\n", " [-0.1194, 0.1526, -0.0018, ..., 0.0946, -0.1453, -0.1512]],\n", " requires_grad=True)\n" ] } ], "source": [ "print(w1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "注意,这是一个 Parameter,也就是一个特殊的 Variable,我们可以访问其 `.data`属性得到其中的数据,然后直接定义一个新的 Tensor 对其进行替换,我们可以使用 PyTorch 中的一些随机数据生成的方式,比如 `torch.randn`,如果要使用更多 PyTorch 中没有的随机化方式,可以使用 numpy" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# 定义一个 Tensor 直接对其进行替换\n", "net1[0].weight.data = torch.from_numpy(np.random.uniform(3, 5, size=(40, 30)))" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Parameter containing:\n", "tensor([[3.0403, 4.7550, 4.9311, ..., 3.0626, 4.3593, 3.9823],\n", " [4.4812, 4.5463, 4.4052, ..., 3.7669, 3.4201, 4.6582],\n", " [3.7711, 3.3997, 4.1416, ..., 3.4086, 3.1681, 4.0410],\n", " ...,\n", " [4.4137, 4.1779, 4.8741, ..., 3.4678, 3.4457, 4.7489],\n", " [3.8246, 4.2699, 4.9944, ..., 4.8576, 3.8945, 4.5525],\n", " [3.4959, 3.6991, 4.4047, ..., 4.7308, 3.5796, 3.2013]],\n", " dtype=torch.float64, requires_grad=True)\n" ] } ], "source": [ "print(net1[0].weight)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "可以看到这个参数的值已经被改变了,也就是说已经被定义成了我们需要的初始化方式,如果模型中某一层需要我们手动去修改,那么我们可以直接用这种方式去访问,但是更多的时候是模型中相同类型的层都需要初始化成相同的方式,这个时候一种更高效的方式是使用循环去访问,比如" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "for layer in net1:\n", " if isinstance(layer, nn.Linear): # 判断是否是线性层\n", " param_shape = layer.weight.shape\n", " layer.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=param_shape)) \n", " # 定义为均值为 0,方差为 0.5 的正态分布" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**小练习:一种非常流行的初始化方式叫 Xavier,方法来源于 2010 年的一篇论文 [Understanding the difficulty of training deep feedforward neural 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Parameter initialization for a Module is just as simple. To initialize a particular layer, you can reassign its Tensor directly, exactly as with Sequential. The only difference is that iterating over the layers requires two methods we have not introduced yet, `children` and `modules`. Let's illustrate with an example." ] },
{ "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true }, "outputs": [], "source": [ "class sim_net(nn.Module):\n", "    def __init__(self):\n", "        super(sim_net, self).__init__()\n", "        self.l1 = nn.Sequential(\n", "            nn.Linear(30, 40),\n", "            nn.ReLU()\n", "        )\n", "\n", "        self.l1[0].weight.data = torch.randn(40, 30) # initialize a single layer directly\n", "\n", "        self.l2 = nn.Sequential(\n", "            nn.Linear(40, 50),\n", "            nn.ReLU()\n", "        )\n", "\n", "        self.l3 = nn.Sequential(\n", "            nn.Linear(50, 10),\n", "            nn.ReLU()\n", "        )\n", "\n", "    def forward(self, x):\n", "        x = self.l1(x)\n", "        x = self.l2(x)\n", "        x = self.l3(x)\n", "        return x" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "net2 = sim_net()" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sequential(\n", "  (0): Linear(in_features=30, out_features=40)\n", "  (1): ReLU()\n", ")\n", "Sequential(\n", "  (0): Linear(in_features=40, out_features=50)\n", "  (1): ReLU()\n", ")\n", "Sequential(\n", "  (0): Linear(in_features=50, out_features=10)\n", "  (1): ReLU()\n", ")\n" ] } ], "source": [ "# iterate over children\n", "for i in net2.children():\n", "    print(i)" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sim_net(\n", "  (l1): Sequential(\n", "    (0): Linear(in_features=30, out_features=40)\n", "    (1): ReLU()\n", "  )\n", "  (l2): Sequential(\n", "    (0): Linear(in_features=40, out_features=50)\n", "    (1): ReLU()\n", "  )\n", "  (l3): Sequential(\n", "    (0): Linear(in_features=50, out_features=10)\n", "    (1): ReLU()\n", "  )\n", ")\n", "Sequential(\n", "  (0): Linear(in_features=30, out_features=40)\n", "  (1): ReLU()\n", ")\n", "Linear(in_features=30, out_features=40)\n", "ReLU()\n", "Sequential(\n", "  (0): Linear(in_features=40, out_features=50)\n", "  (1): ReLU()\n", ")\n", "Linear(in_features=40, out_features=50)\n", "ReLU()\n", "Sequential(\n", "  (0): Linear(in_features=50, out_features=10)\n", "  (1): ReLU()\n", ")\n", "Linear(in_features=50, out_features=10)\n", "ReLU()\n" ] } ], "source": [ "# iterate over modules\n", "for i in net2.modules():\n", "    print(i)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Do you see the difference in the example above?\n", "\n", "`children` visits only the direct sub-modules of the model: since the model above defines three Sequential blocks, only those three are visited. `modules`, in contrast, traverses the whole structure recursively: in the example it visits not only each Sequential but also the layers inside each Sequential. This makes initialization very convenient, for example:" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "for layer in net2.modules():\n", "    if isinstance(layer, nn.Linear):\n", "        param_shape = layer.weight.shape\n", "        layer.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=param_shape))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This achieves the same initialization as in the Sequential case, and is just as simple." ] },
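{ "cell_type": "markdown", "metadata": {}, "source": [ "As an aside, the same recursive traversal can also be written with `nn.Module.apply`, which applies a function to every submodule (the same set that `modules` visits). A minimal sketch, where the function name `init_weights` is our own:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a sketch using nn.Module.apply; `init_weights` is our own name\n", "def init_weights(m):\n", "    if isinstance(m, nn.Linear):\n", "        m.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=m.weight.shape))\n", "\n", "net2.apply(init_weights) # apply returns the module itself" ] },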
{ "cell_type": "markdown", "metadata": {}, "source": [ "## torch.nn.init\n", "Because of PyTorch's flexibility we can initialize parameters by operating on Tensors directly, but PyTorch also provides helper functions that make initialization quicker: the `torch.nn.init` module. It, too, operates at the Tensor level. Let's look at an example." ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from torch.nn import init" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Parameter containing:\n", "tensor([[3.0403, 4.7550, 4.9311,  ..., 3.0626, 4.3593, 3.9823],\n", "        [4.4812, 4.5463, 4.4052,  ..., 3.7669, 3.4201, 4.6582],\n", "        [3.7711, 3.3997, 4.1416,  ..., 3.4086, 3.1681, 4.0410],\n", "        ...,\n", "        [4.4137, 4.1779, 4.8741,  ..., 3.4678, 3.4457, 4.7489],\n", "        [3.8246, 4.2699, 4.9944,  ..., 4.8576, 3.8945, 4.5525],\n", "        [3.4959, 3.6991, 4.4047,  ..., 4.7308, 3.5796, 3.2013]],\n", "       dtype=torch.float64, requires_grad=True)\n" ] } ], "source": [ "print(net1[0].weight)" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Parameter containing:\n", "tensor([[-0.0889,  0.2279,  0.1816,  ...,  0.1091,  0.0207, -0.2063],\n", "        [ 0.0394,  0.1860,  0.1261,  ...,  0.2250, -0.2881,  0.0727],\n", "        [-0.2252, -0.0639,  0.2077,  ...,  0.0328, -0.0075,  0.0339],\n", "        ...,\n", "        [-0.0932,  0.2806, -0.2377,  ..., -0.2087,  0.0325,  0.0504],\n", "        [-0.2305,  0.2866, -0.1872,  ...,  0.2127,  0.1487,  0.0645],\n", "        [-0.0072,  0.2771,  0.0928,  ..., -0.0234, -0.1238,  0.1197]],\n", "       dtype=torch.float64, requires_grad=True)" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "init.xavier_uniform_(net1[0].weight) # the Xavier initialization discussed above, built into PyTorch; the trailing underscore marks an in-place function" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Parameter containing:\n", "tensor([[-0.0889,  0.2279,  0.1816,  ...,  0.1091,  0.0207, -0.2063],\n", "        [ 0.0394,  0.1860,  0.1261,  ...,  0.2250, -0.2881,  0.0727],\n", "        [-0.2252, -0.0639,  0.2077,  ...,  0.0328, -0.0075,  0.0339],\n", "        ...,\n", "        [-0.0932,  0.2806, -0.2377,  ..., -0.2087,  0.0325,  0.0504],\n", "        [-0.2305,  0.2866, -0.1872,  ...,  0.2127,  0.1487,  0.0645],\n", "        [-0.0072,  0.2771,  0.0928,  ..., -0.0234, -0.1238,  0.1197]],\n", "       dtype=torch.float64, requires_grad=True)\n" ] } ], "source": [ "print(net1[0].weight)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "You can see that the parameters have indeed been modified.\n", "\n", "`torch.nn.init` offers many more built-in initialization schemes, saving us from re-implementing the same common operations ourselves." ] },
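{ "cell_type": "markdown", "metadata": {}, "source": [ "For instance, here are two other commonly used functions from `torch.nn.init` (a sketch; the in-place variants all end with an underscore):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# a couple more built-in schemes from torch.nn.init (a sketch)\n", "init.kaiming_normal_(net1[0].weight) # He initialization, well suited to ReLU networks\n", "init.constant_(net1[0].bias, 0) # fill the bias with a constant" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The two approaches above are essentially the same: both directly modify the actual values of a layer's parameters. `torch.nn.init` simply packages more mature, deep-learning-oriented initialization schemes, which is very convenient.\n", "\n", "In the next section, we will look at the various popular gradient-based optimization algorithms." ] }
], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 2 }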