- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# 参数初始化\n",
- "参数初始化对模型具有较大的影响,不同的初始化方式可能会导致截然不同的结果,所幸的是很多深度学习的先驱们已经帮我们探索了各种各样的初始化方式,所以我们只需要学会如何对模型的参数进行初始化的赋值即可。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "PyTorch 的初始化方式并没有那么显然,如果你使用最原始的方式创建模型,那么需要定义模型中的所有参数,当然这样可以非常方便地定义每个变量的初始化方式。但是对于复杂的模型,这并不容易,而且推荐使用 Sequential 和 Module 来定义模型,所以这个时候就需要知道如何来自定义初始化方式。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 1. 使用 NumPy 来初始化\n",
- "因为 PyTorch 是一个非常灵活的框架,理论上能够对所有的 Tensor 进行操作,所以我们能够通过定义新的 Tensor 来初始化,直接看下面的例子"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "import numpy as np\n",
- "import torch\n",
- "from torch import nn"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "# 定义一个 Sequential 模型\n",
- "net1 = nn.Sequential(\n",
- " nn.Linear(30, 40),\n",
- " nn.ReLU(),\n",
- " nn.Linear(40, 50),\n",
- " nn.ReLU(),\n",
- " nn.Linear(50, 10)\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "# 访问第一层的参数\n",
- "w1 = net1[0].weight\n",
- "b1 = net1[0].bias"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Parameter containing:\n",
- "tensor([[-0.0784, 0.1559, 0.0451, ..., 0.0432, 0.0325, -0.0626],\n",
- " [ 0.0436, 0.0976, 0.1529, ..., -0.1601, -0.1227, -0.0831],\n",
- " [ 0.0890, 0.0343, 0.1744, ..., -0.0332, 0.0897, 0.0002],\n",
- " ...,\n",
- " [-0.1447, -0.0411, -0.0851, ..., 0.0117, 0.1457, 0.0585],\n",
- " [ 0.1642, 0.0744, -0.1118, ..., 0.0623, -0.0591, 0.0512],\n",
- " [-0.1610, 0.0070, 0.0184, ..., -0.1529, -0.0314, 0.1748]],\n",
- " requires_grad=True)\n"
- ]
- }
- ],
- "source": [
- "print(w1)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "注意,这是一个 Parameter,也就是一个特殊的 Variable,我们可以访问其 `.data`属性得到其中的数据,然后直接定义一个新的 Tensor 对其进行替换,我们可以使用 PyTorch 中的一些随机数据生成的方式,比如 `torch.randn`,如果要使用更多 PyTorch 中没有的随机化方式,可以使用 numpy"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "# 定义一个 Tensor 直接对其进行替换\n",
- "net1[0].weight.data = torch.from_numpy(np.random.uniform(3, 5, size=(40, 30)))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Parameter containing:\n",
- "tensor([[3.5493, 3.2984, 4.3041, ..., 4.5181, 3.7561, 4.5633],\n",
- " [4.4523, 3.7956, 3.7448, ..., 3.5031, 3.9477, 4.8617],\n",
- " [3.5174, 4.1082, 4.6358, ..., 3.5759, 4.5291, 3.9545],\n",
- " ...,\n",
- " [3.6757, 4.2100, 3.9763, ..., 3.2017, 3.4422, 4.0191],\n",
- " [3.0283, 3.8147, 3.1705, ..., 3.9442, 4.1054, 4.9491],\n",
- " [3.5879, 3.7237, 4.0656, ..., 3.2279, 3.1818, 4.7489]],\n",
- " dtype=torch.float64, requires_grad=True)\n"
- ]
- }
- ],
- "source": [
- "print(net1[0].weight)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "可以看到这个参数的值已经被改变了,也就是说已经被定义成了我们需要的初始化方式,如果模型中某一层需要我们手动去修改,那么我们可以直接用这种方式去访问,但是更多的时候是模型中相同类型的层都需要初始化成相同的方式,这个时候一种更高效的方式是使用循环去访问,比如"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "for layer in net1:\n",
- " if isinstance(layer, nn.Linear): # 判断是否是线性层\n",
- " param_shape = layer.weight.shape\n",
- " layer.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=param_shape)) \n",
- " # 定义为均值为 0,方差为 0.5 的正态分布"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "一种非常流行的初始化方式叫 Xavier,方法来源于 2010 年的一篇论文 [Understanding the difficulty of training deep feedforward neural networks](http://proceedings.mlr.press/v9/glorot10a.html),其通过数学的推到,证明了这种初始化方式可以使得每一层的输出方差是尽可能相等。这种初始化的公式为:\n",
- "\n",
- "$$\n",
- "w\\ \\sim \\ Uniform[- \\frac{\\sqrt{6}}{\\sqrt{n_j + n_{j+1}}}, \\frac{\\sqrt{6}}{\\sqrt{n_j + n_{j+1}}}]\n",
- "$$\n",
- "\n",
- "其中 $n_j$ 和 $n_{j+1}$ 表示该层的输入和输出数目,所以请尝试实现以下这种初始化方式"
- ]
- },
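- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Below is a minimal sketch of this initialization implemented by hand with NumPy and applied to the linear layers of `net1`. The helper name `xavier_uniform_np` is our own, purely for illustration."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# A hand-rolled sketch of Xavier uniform initialization using NumPy\n",
- "def xavier_uniform_np(shape):\n",
- "    n_out, n_in = shape # nn.Linear stores its weight as (out_features, in_features)\n",
- "    bound = np.sqrt(6) / np.sqrt(n_in + n_out)\n",
- "    return np.random.uniform(-bound, bound, size=shape)\n",
- "\n",
- "for layer in net1:\n",
- "    if isinstance(layer, nn.Linear):\n",
- "        layer.weight.data = torch.from_numpy(xavier_uniform_np(layer.weight.shape))"
- ]
- },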
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "对于 Module 的参数初始化,其实也非常简单,如果想对其中的某层进行初始化,可以直接像 Sequential 一样对其 Tensor 进行重新定义,其唯一不同的地方在于,如果要用循环的方式访问,需要介绍两个属性,children 和 modules,下面我们举例来说明"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "class sim_net(nn.Module):\n",
- " def __init__(self):\n",
- " super(sim_net, self).__init__()\n",
- " self.l1 = nn.Sequential(\n",
- " nn.Linear(30, 40),\n",
- " nn.ReLU()\n",
- " )\n",
- " \n",
- " self.l1[0].weight.data = torch.randn(40, 30) # 直接对某一层初始化\n",
- " \n",
- " self.l2 = nn.Sequential(\n",
- " nn.Linear(40, 50),\n",
- " nn.ReLU()\n",
- " )\n",
- " \n",
- " self.l3 = nn.Sequential(\n",
- " nn.Linear(50, 10),\n",
- " nn.ReLU()\n",
- " )\n",
- " \n",
- " def forward(self, x):\n",
- " x = self.l1(x)\n",
- " x =self.l2(x)\n",
- " x = self.l3(x)\n",
- " return x"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "net2 = sim_net()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Sequential(\n",
- " (0): Linear(in_features=30, out_features=40, bias=True)\n",
- " (1): ReLU()\n",
- ")\n",
- "Sequential(\n",
- " (0): Linear(in_features=40, out_features=50, bias=True)\n",
- " (1): ReLU()\n",
- ")\n",
- "Sequential(\n",
- " (0): Linear(in_features=50, out_features=10, bias=True)\n",
- " (1): ReLU()\n",
- ")\n"
- ]
- }
- ],
- "source": [
- "# 访问 children\n",
- "for i in net2.children():\n",
- " print(i)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "sim_net(\n",
- " (l1): Sequential(\n",
- " (0): Linear(in_features=30, out_features=40, bias=True)\n",
- " (1): ReLU()\n",
- " )\n",
- " (l2): Sequential(\n",
- " (0): Linear(in_features=40, out_features=50, bias=True)\n",
- " (1): ReLU()\n",
- " )\n",
- " (l3): Sequential(\n",
- " (0): Linear(in_features=50, out_features=10, bias=True)\n",
- " (1): ReLU()\n",
- " )\n",
- ")\n",
- "Sequential(\n",
- " (0): Linear(in_features=30, out_features=40, bias=True)\n",
- " (1): ReLU()\n",
- ")\n",
- "Linear(in_features=30, out_features=40, bias=True)\n",
- "ReLU()\n",
- "Sequential(\n",
- " (0): Linear(in_features=40, out_features=50, bias=True)\n",
- " (1): ReLU()\n",
- ")\n",
- "Linear(in_features=40, out_features=50, bias=True)\n",
- "ReLU()\n",
- "Sequential(\n",
- " (0): Linear(in_features=50, out_features=10, bias=True)\n",
- " (1): ReLU()\n",
- ")\n",
- "Linear(in_features=50, out_features=10, bias=True)\n",
- "ReLU()\n"
- ]
- }
- ],
- "source": [
- "# 访问 modules\n",
- "for i in net2.modules():\n",
- " print(i)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "通过上面的例子,看到区别了吗?\n",
- "\n",
- "children 只会访问到模型定义中的第一层,因为上面的模型中定义了三个 Sequential,所以只会访问到三个 Sequential,而 modules 会访问到最后的结构,比如上面的例子,modules 不仅访问到了 Sequential,也访问到了 Sequential 里面,这就对我们做初始化非常方便,比如"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "for layer in net2.modules():\n",
- " if isinstance(layer, nn.Linear):\n",
- " param_shape = layer.weight.shape\n",
- " layer.weight.data = torch.from_numpy(np.random.normal(0, 0.5, size=param_shape)) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "这上面实现了和 Sequential 相同的初始化,同样非常简便"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 2. `torch.nn.init`\n",
- "因为 PyTorch 灵活的特性,可以直接对 Tensor 进行操作从而初始化,PyTorch 也提供了初始化的函数帮助我们快速初始化,就是 `torch.nn.init`,其操作层面仍然在 Tensor 上,下面我们举例说明"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {
- "collapsed": true
- },
- "outputs": [],
- "source": [
- "from torch.nn import init"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Parameter containing:\n",
- "tensor([[ 0.2725, -0.2262, -0.4229, ..., -0.2451, 0.2344, 0.1583],\n",
- " [ 0.1886, 0.3226, -0.5023, ..., -0.2228, 0.5089, -0.6994],\n",
- " [-0.4689, 0.2612, 0.3464, ..., -0.0423, -0.2999, -0.5813],\n",
- " ...,\n",
- " [ 0.4200, 0.2091, -0.3690, ..., 0.4142, 0.1120, 0.0771],\n",
- " [ 0.6540, 0.0475, 0.0594, ..., 0.1726, -0.2264, 0.1510],\n",
- " [-1.0729, -0.2862, 0.4953, ..., 0.4702, 0.5555, -0.2246]],\n",
- " dtype=torch.float64, requires_grad=True)\n"
- ]
- }
- ],
- "source": [
- "print(net1[0].weight)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "Parameter containing:\n",
- "tensor([[ 0.1173, -0.0864, 0.1008, ..., -0.1053, 0.2642, -0.1045],\n",
- " [-0.0244, 0.1722, 0.1330, ..., 0.2443, -0.2385, 0.1613],\n",
- " [-0.1767, 0.0678, 0.1282, ..., 0.1033, -0.2423, -0.0864],\n",
- " ...,\n",
- " [-0.1673, -0.1338, -0.0839, ..., 0.0267, 0.1693, -0.2911],\n",
- " [ 0.2146, 0.0194, 0.2873, ..., 0.1486, 0.2775, 0.2740],\n",
- " [-0.0400, 0.2231, 0.0800, ..., 0.2804, 0.2121, 0.2764]],\n",
- " dtype=torch.float64, requires_grad=True)"
- ]
- },
- "execution_count": 16,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "init.xavier_uniform_(net1[0].weight) # 这就是上面我们讲过的 Xavier 初始化方法,PyTorch 直接内置了其实现"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 17,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Parameter containing:\n",
- "tensor([[ 0.1173, -0.0864, 0.1008, ..., -0.1053, 0.2642, -0.1045],\n",
- " [-0.0244, 0.1722, 0.1330, ..., 0.2443, -0.2385, 0.1613],\n",
- " [-0.1767, 0.0678, 0.1282, ..., 0.1033, -0.2423, -0.0864],\n",
- " ...,\n",
- " [-0.1673, -0.1338, -0.0839, ..., 0.0267, 0.1693, -0.2911],\n",
- " [ 0.2146, 0.0194, 0.2873, ..., 0.1486, 0.2775, 0.2740],\n",
- " [-0.0400, 0.2231, 0.0800, ..., 0.2804, 0.2121, 0.2764]],\n",
- " dtype=torch.float64, requires_grad=True)\n"
- ]
- }
- ],
- "source": [
- "print(net1[0].weight)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "可以看到参数已经被修改了\n",
- "\n",
- "`torch.nn.init` 提供了更多的内置初始化方式,避免了重复去实现一些相同的操作。"
- ]
- },
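- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As an illustrative sketch, `torch.nn.init` combines nicely with `Module.apply`, which calls a function on every submodule recursively. The particular choices below (Kaiming normal for weights, zeros for biases) are just one sensible option, not the only one."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# A minimal sketch: apply built-in initializers across all linear layers of net2\n",
- "def init_weights(m):\n",
- "    if isinstance(m, nn.Linear):\n",
- "        init.kaiming_normal_(m.weight) # He initialization, well suited to ReLU\n",
- "        init.zeros_(m.bias)\n",
- "\n",
- "_ = net2.apply(init_weights) # apply() runs init_weights on every submodule recursively"
- ]
- },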
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "上面讲了两种初始化方式,其实它们的本质都是一样的,就是去修改某一层参数的实际值,而 `torch.nn.init` 提供了更多成熟的深度学习相关的初始化方式。\n"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.7"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
- }