- # -*- coding: utf-8 -*-
- # ---
- # jupyter:
- # jupytext_format_version: '1.2'
- # kernelspec:
- # display_name: Python 3
- # language: python
- # name: python3
- # language_info:
- # codemirror_mode:
- # name: ipython
- # version: 3
- # file_extension: .py
- # mimetype: text/x-python
- # name: python
- # nbconvert_exporter: python
- # pygments_lexer: ipython3
- # version: 3.5.2
- # ---
-
- # # Least squares
- #
- # Least squares is a mathematical procedure for finding the best-fitting curve for a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve. The sum of squared offsets is used instead of the absolute offsets because it makes the residuals a continuously differentiable quantity. However, because the offsets are squared, outlying points can have a disproportionate effect on the fit, a property which may or may not be desirable depending on the problem at hand.
- #
-
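- # As a quick illustration of the outlier sensitivity mentioned above, a minimal sketch with made-up residuals: under the squared loss a single outlier dominates the total far more than under the absolute loss.
- 
- # +
- import numpy as np
- 
- res = np.array([0.5, -0.3, 0.2, 5.0])  # the last residual is an outlier
- print("sum of squares:  ", np.sum(res**2))      # the outlier contributes 25.0 of 25.38
- print("sum of absolutes:", np.sum(np.abs(res)))  # the outlier contributes 5.0 of 6.0
- # -
- 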
- # ### Show the data
- #
-
- # +
- # %matplotlib inline
-
- import matplotlib.pyplot as plt
- import numpy as np
- import sklearn
- from sklearn import datasets
-
- # load data
- d = datasets.load_diabetes()
-
- X = d.data[:, 2]
- Y = d.target
-
- # draw original data
- plt.scatter(X, Y)
- plt.xlabel("X")
- plt.ylabel("Y")
- plt.show()
-
- # -
-
- # ### Theory
- # Given $N$ observations:
- # $$
- # \mathbf{X} = \{x_1, x_2, ..., x_N \} \\
- # \mathbf{Y} = \{y_1, y_2, ..., y_N \}
- # $$
- #
- # We want to find a model that can predict the data. The simplest is the linear model, which has the form
- # $$
- # y = ax + b
- # $$
- #
- # The purpose is to find parameters $a, b$ which best fit the model to the observation data.
- #
- # We use the sum of squared differences between the model's predictions and the observed data as the loss function:
- # $$
- # L = \sum_{i=1}^{N} (y_i - a x_i - b)^2
- # $$
- #
- # To minimize the loss function, we take its partial derivatives with respect to the parameters:
- # $$
- # \frac{\partial L}{\partial a} = -2 \sum_{i=1}^{N} (y_i - a x_i - b) x_i \\
- # \frac{\partial L}{\partial b} = -2 \sum_{i=1}^{N} (y_i - a x_i - b)
- # $$
- # At the minimum of the loss, the partial derivatives are zero, which gives:
- # $$
- # -2 \sum_{i=1}^{N} (y_i - a x_i - b) x_i = 0 \\
- # -2 \sum_{i=1}^{N} (y_i - a x_i - b) = 0 \\
- # $$
- #
- # Rearranging the terms gives:
- # $$
- # a \sum x_i^2 + b \sum x_i = \sum y_i x_i \\
- # a \sum x_i + b N = \sum y_i
- # $$
- # Solving this linear system gives the model parameters.
-
- # ### Program
-
- # +
- N = X.shape[0]
-
- S_X2 = np.sum(X*X)
- S_X = np.sum(X)
- S_XY = np.sum(X*Y)
- S_Y = np.sum(Y)
-
- A1 = np.array([[S_X2, S_X],
-                [S_X,  N]])
- B1 = np.array([S_XY, S_Y])
-
- # solve the 2x2 normal equations (more stable than forming the inverse explicitly)
- coeff = np.linalg.solve(A1, B1)
-
- print('a = %f, b = %f' % (coeff[0], coeff[1]))
-
- x_min = np.min(X)
- x_max = np.max(X)
- y_min = coeff[0] * x_min + coeff[1]
- y_max = coeff[0] * x_max + coeff[1]
-
- plt.scatter(X, Y, label='original data')
- plt.plot([x_min, x_max], [y_min, y_max], 'r', label='model')
- plt.legend()
- plt.show()
- # -
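- 
- # As a sanity check (a minimal sketch), `np.polyfit` solves the same least-squares problem; with degree 1 it should return nearly the same coefficients, ordered highest degree first.
- 
- # +
- print(np.polyfit(X, Y, 1))  # should be close to the a, b printed above
- # -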
-
- # ## How to estimate the model parameters iteratively
- #
- # When there is a lot of data, or the model is complex, the parameters cannot be obtained analytically. A more common approach is therefore to approximate the model parameters step by step through iteration.
- #
- # ### Gradient descent
- # For many supervised learning models, we first construct a loss function for the model, and then optimize this loss function with an optimization algorithm in order to find the best parameters. Among the optimization algorithms used to fit machine learning models, gradient descent (GD) is one of the most widely used.
- #
- # Gradient descent has several advantages; in particular, each step only requires the first derivative of the loss function, so the computational cost is low, which makes it applicable to many large-scale datasets. The idea of gradient descent is to use the gradient direction at the current point to find the next iterate.
- #
- # The basic idea of gradient descent can be compared to walking down a mountain. Imagine a person stuck on a mountain who needs to get down (i.e. find the lowest point of the mountain, the valley), but the fog is so thick that visibility is very low. The path down cannot be seen, so he must use the local information around him to find it. He can apply the gradient descent strategy: taking his current position as the reference, find the steepest direction at that position and walk in the direction in which the height decreases. (Likewise, if the goal were to reach the summit, he would walk uphill along the steepest direction.) Repeating the same procedure after each stretch of walking, he eventually reaches the valley.
- #
- #
- # We can further assume that the steepest direction cannot be seen at a glance, but must be measured with a sophisticated instrument, which this person happens to carry. Each measurement takes time, so to reach the bottom before sunset he must keep the number of measurements as small as possible. This is a trade-off: measuring frequently keeps the descent direction accurate but costs a lot of time, while measuring rarely risks drifting off course. He therefore needs a measurement frequency that keeps the direction correct without costing too much time.
- #
- #
- # 
- #
- # As shown in the figure above, the descent reaches a local optimum. The x and y axes represent $\theta_0$ and $\theta_1$, and the z axis represents the loss function. Clearly, different starting points may lead to different convergence points; if the surface is bowl-shaped (convex), the convergence point is the same regardless of the start.
- #
- # For the loss function
- # $$
- # L = \sum_{i=1}^{N} (y_i - a x_i - b)^2
- # $$
- #
- # the update rule is:
- # $$
- # \theta^1 = \theta^0 - \alpha \nabla L(\theta)
- # $$
- # where $\theta$ stands for the model parameters, e.g. $a$ and $b$.
- #
- # The meaning of this formula is: $L$ is a function of $\theta$; our current position is the point $\theta^0$, and we want to move from this point to the minimum of $L$, i.e. the bottom of the mountain. We first determine the direction of travel, which is the opposite of the gradient, and then take a step of length $\alpha$, arriving at the point $\theta^1$.
- #
- # Two common questions about this formula:
- #
- # * **What does $\alpha$ mean?**
- # In gradient descent, $\alpha$ is called the learning rate or step size: it controls the distance covered by each step. It must be neither too large nor too small: too small, and we may never reach the lowest point in time; too large, and we may step over and miss the lowest point. Choosing $\alpha$ well is therefore important in gradient descent (see the sketch after this list).
- # 
- #
- # * **Why is the gradient multiplied by a minus sign?**
- # The minus sign means moving in the direction opposite to the gradient. As mentioned above, the gradient points in the direction in which the function increases fastest at the current point; since we want to move in the direction of fastest decrease, we move along the negative gradient, hence the minus sign.
- #
- #
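- 
- # A minimal sketch of the learning-rate effect on a toy one-dimensional loss $L(\theta) = \theta^2$ (not the regression loss above), whose gradient is $2\theta$: a small $\alpha$ converges slowly, a moderate one converges quickly, and a too large one diverges.
- 
- # +
- for alpha in [0.01, 0.1, 1.1]:
-     theta = 1.0  # starting point
-     for _ in range(20):
-         theta = theta - alpha * 2 * theta  # one gradient descent step
-     print("alpha = %.2f: theta after 20 steps = %g" % (alpha, theta))
- # -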
-
- # ### Program
-
- # +
- n_epoch = 3000   # number of epochs
- a, b = 1, 1      # initial parameters
- epsilon = 0.001  # learning rate
- 
- for i in range(n_epoch):
-     # stochastic update: one sample at a time
-     for j in range(N):
-         a = a + epsilon*2*(Y[j] - a*X[j] - b)*X[j]
-         b = b + epsilon*2*(Y[j] - a*X[j] - b)
- 
-     # loss over all samples after this epoch
-     L = 0
-     for j in range(N):
-         L = L + (Y[j] - a*X[j] - b)**2
-     print("epoch %4d: loss = %f, a = %f, b = %f" % (i, L, a, b))
-
- x_min = np.min(X)
- x_max = np.max(X)
- y_min = a * x_min + b
- y_max = a * x_max + b
-
- plt.scatter(X, Y, label='original data')
- plt.plot([x_min, x_max], [y_min, y_max], 'r', label='model')
- plt.legend()
- plt.show()
- # -
-
- # ## How to show the iterative process
-
- # +
- # %matplotlib nbagg
-
- import matplotlib.pyplot as plt
- import matplotlib.animation as animation
-
- n_epoch = 3000 # epoch size
- a, b = 1, 1 # initial parameters
- epsilon = 0.001 # learning rate
-
- fig = plt.figure()
- imgs = []
-
- for i in range(n_epoch):
-     for j in range(N):
-         a = a + epsilon*2*(Y[j] - a*X[j] - b)*X[j]
-         b = b + epsilon*2*(Y[j] - a*X[j] - b)
- 
-     L = 0
-     for j in range(N):
-         L = L + (Y[j] - a*X[j] - b)**2
-     #print("epoch %4d: loss = %f, a = %f, b = %f" % (i, L, a, b))
- 
-     # take a snapshot of the current model every 50 epochs
-     if i % 50 == 0:
-         x_min = np.min(X)
-         x_max = np.max(X)
-         y_min = a * x_min + b
-         y_max = a * x_max + b
- 
-         img_scatter = plt.scatter(X, Y, label='original data')
-         img_line, = plt.plot([x_min, x_max], [y_min, y_max], 'r', label='model')
-         # each frame passed to ArtistAnimation must be a list of artists
-         imgs.append([img_scatter, img_line])
-
- ani = animation.ArtistAnimation(fig, imgs)
- plt.show()
- # -
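- 
- # Note: `%matplotlib nbagg` is a legacy interactive backend; on newer Jupyter setups `%matplotlib notebook` or `%matplotlib widget` may be needed instead. Alternatively the animation can be written to a file, e.g. `ani.save("fit.gif", writer="pillow")` (the filename here is just an example).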
-
- # ## How to use the batch update method?
- #
- # If some samples are outliers, updating the parameters with one sample at a time can make learning inaccurate and slow. A common alternative is to update with the gradient averaged over a small batch of samples (mini-batch gradient descent), as sketched below.
- #
- #
- # * [Several forms of gradient descent (in Chinese)](https://blog.csdn.net/u010402786/article/details/51188876)
-
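- # A minimal mini-batch sketch for the linear model $y = ax + b$ on the same data; `batch_size` and the epoch count are illustrative choices, not tuned values.
- 
- # +
- n_epoch = 3000   # number of epochs
- batch_size = 32  # an illustrative batch size
- a, b = 1, 1      # initial parameters
- epsilon = 0.001  # learning rate
- 
- for i in range(n_epoch):
-     idx = np.random.permutation(N)  # shuffle the sample order each epoch
-     for k in range(0, N, batch_size):
-         batch = idx[k:k + batch_size]
-         err = Y[batch] - a*X[batch] - b          # residuals of this batch
-         a = a + epsilon*2*np.mean(err*X[batch])  # gradient averaged over the batch
-         b = b + epsilon*2*np.mean(err)
- 
- print("a = %f, b = %f" % (a, b))
- # -
- 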
- # ## How to fit polynomial function?
- #
- # If we observe a missile at several points in time, how can we estimate its trajectory? According to physics, the trajectory can be formulated as:
- # $$
- # y = at^2 + bt + c
- # $$
- # Then we need at least three data points to compute the parameters $a, b, c$.
- #
- # $$
- # L = \sum_{i=1}^N (y_i - a t_i^2 - b t_i - c)^2
- # $$
- #
-
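- # A minimal sketch of the three-point case: with exactly three observations $(t_i, y_i)$, the parameters follow from a 3x3 linear system (the sample times below are hypothetical, generated with the same true parameters).
- 
- # +
- t3 = np.array([1.0, 2.0, 3.0])
- y3 = -20*t3**2 + 90*t3 + 800
- 
- A = np.vstack([t3**2, t3, np.ones(3)]).T  # rows are [t_i^2, t_i, 1]
- print(np.linalg.solve(A, y3))             # should recover [-20, 90, 800]
- # -
- 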
- # +
- t = np.array([2, 4, 6, 8])
- #t = np.linspace(0, 10)
-
- pa = -20
- pb = 90
- pc = 800
-
- y = pa*t**2 + pb*t + pc
-
-
- plt.scatter(t, y)
- plt.show()
- # -
-
- # ### How to get the update terms?
- #
- # $$
- # L = \sum_{i=1}^N (y_i - a t_i^2 - b t_i - c)^2
- # $$
- #
- # \begin{eqnarray}
- # \frac{\partial L}{\partial a} & = & - 2\sum_{i=1}^N (y_i - a t_i^2 - b t_i - c) t_i^2 \\
- # \frac{\partial L}{\partial b} & = & - 2\sum_{i=1}^N (y_i - a t_i^2 - b t_i - c) t_i \\
- # \frac{\partial L}{\partial c} & = & - 2\sum_{i=1}^N (y_i - a t_i^2 - b t_i - c)
- # \end{eqnarray}
-
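- # A minimal sketch of batch gradient descent using the three update terms above, applied to the $(t, y)$ points generated earlier. The learning rate and the iteration count are illustrative; because the raw features $(1, t, t^2)$ are badly scaled, convergence is slow and the loop takes a few seconds.
- 
- # +
- a, b, c = 0.0, 0.0, 0.0
- eta = 1e-4  # learning rate
- 
- for i in range(500000):
-     resid = y - a*t**2 - b*t - c      # residuals under the current parameters
-     a = a + eta*2*np.sum(resid*t**2)  # step along -dL/da
-     b = b + eta*2*np.sum(resid*t)     # step along -dL/db
-     c = c + eta*2*np.sum(resid)       # step along -dL/dc
- 
- print("a = %f, b = %f, c = %f" % (a, b, c))  # should approach -20, 90, 800
- # -
- 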
- # ## How to use sklearn to solve the linear problem?
- #
- #
-
- # +
- from sklearn import linear_model
-
- # load data
- d = datasets.load_diabetes()
-
- X = d.data[:, np.newaxis, 2]
- Y = d.target
-
- # create regression model
- regr = linear_model.LinearRegression()
- regr.fit(X, Y)
-
- a, b = regr.coef_[0], regr.intercept_  # coef_ holds one entry per feature
- print("a = %f, b = %f" % (a, b))
-
- x_min = np.min(X)
- x_max = np.max(X)
- y_min = a * x_min + b
- y_max = a * x_max + b
-
- plt.scatter(X, Y)
- plt.plot([x_min, x_max], [y_min, y_max], 'r')
- plt.show()
- # -
-
- # ## How to use sklearn to fit polynomial function?
-
- # +
- # Fitting polynomial functions
-
- from sklearn.preprocessing import PolynomialFeatures
- from sklearn.linear_model import LinearRegression
- from sklearn.pipeline import Pipeline
-
- t = np.array([2, 4, 6, 8])
-
- pa = -20
- pb = 90
- pc = 800
-
- y = pa*t**2 + pb*t + pc
-
- model = Pipeline([('poly', PolynomialFeatures(degree=2)),
-                   ('linear', LinearRegression(fit_intercept=False))])
- model = model.fit(t[:, np.newaxis], y)
- print(model.named_steps['linear'].coef_)  # coefficients for the features [1, t, t^2]
-
- # -
-
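- # Since `PolynomialFeatures(degree=2)` generates the features $[1, t, t^2]$ and the intercept of `LinearRegression` is disabled, the fitted coefficient vector printed above should be close to `[pc, pb, pa] = [800, 90, -20]`.
- 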
- # ## How to estimate unseen values with the model?
- #
-
- # +
- from sklearn import metrics
- 
- # load data
- d = datasets.load_diabetes()
-
- N = d.target.shape[0]
- N_train = int(N*0.9)
- N_test = N - N_train
-
- X = d.data[:N_train, np.newaxis, 2]
- Y = d.target[:N_train]
-
- X_test = d.data[N_train:, np.newaxis, 2]
- Y_test = d.target[N_train:]
-
- # create regression model
- regr = linear_model.LinearRegression()
- regr.fit(X, Y)
-
- Y_est = regr.predict(X_test)
- print("Y_est = ", Y_est)
- print("Y_test = ", Y_test)
- err = (Y_est - Y_test)**2
- err2 = metrics.mean_squared_error(Y_test, Y_est)
- score = regr.score(X_test, Y_test)  # coefficient of determination R^2
- print("RMSE = %f (%f), score = %f" % (np.sqrt(np.sum(err)/N_test), np.sqrt(err2), score))
-
-
- # plot data
- a, b = regr.coef_[0], regr.intercept_  # coef_ holds one entry per feature
- print("a = %f, b = %f" % (a, b))
-
- x_min = np.min(X)
- x_max = np.max(X)
- y_min = a * x_min + b
- y_max = a * x_max + b
-
-
- plt.scatter(X, Y, label='train data')
- plt.scatter(X_test, Y_test, label='test data')
- plt.plot([x_min, x_max], [y_min, y_max], 'r', label='model')
- plt.legend()
- plt.show()
- # -
-