Homework 1
MSA 8150 - Machine Learning for Analytics (Spring 2021)
Instructor: Alireza Aghasi
Due date: See iCollege Deadline January 16, 2021

Please revise the homework guidelines reviewed in the first lecture. Specifically note that:
• Start working on the homework early.
• Late homework is not accepted and will receive zero credit.
• Each student must write up and turn in their own solutions.
• (IMPORTANT) If you solve a question together with another colleague, each of you needs to write up your own solution and list the names of the people you discussed the problem with on the first page of the material turned in.
• The homework is a blend of theory and programming. Please submit all answers as a single Word document, as explained in class.

Q1. The goal is to find the optimal values of β1 and β2 which fit the model

    y = β1 x1 + β2 x2    (1)

to some data points. The data points are in the form (x1,1, x2,1, y1), (x1,2, x2,2, y2), ..., (x1,n, x2,n, yn), where x1,i and x2,i are our input features for sample i, and yi is the response variable for sample i.

(a) Write the RSS formulation for this problem.

(b) Let's do a quick review of basic linear algebra. Consider the following system of two equations, where β1 and β2 are the unknowns:

    a β1 + b β2 = c
    d β1 + e β2 = f.    (2)

Show that if ae − bd ≠ 0, then simultaneously solving the system above for β1 and β2 yields

    β1 = (ce − bf) / (ae − bd),    β2 = (af − cd) / (ae − bd).

(c) Minimize the RSS you obtained in part (a) and conclude that the optimal values of β1 and β2 are

    β̂1 = [(Σ_{i=1}^n yi x1,i)(Σ_{i=1}^n x2,i²) − (Σ_{i=1}^n yi x2,i)(Σ_{i=1}^n x1,i x2,i)] / [(Σ_{i=1}^n x1,i²)(Σ_{i=1}^n x2,i²) − (Σ_{i=1}^n x2,i x1,i)²],    (3)

    β̂2 = [(Σ_{i=1}^n yi x2,i)(Σ_{i=1}^n x1,i²) − (Σ_{i=1}^n yi x1,i)(Σ_{i=1}^n x1,i x2,i)] / [(Σ_{i=1}^n x1,i²)(Σ_{i=1}^n x2,i²) − (Σ_{i=1}^n x2,i x1,i)²].    (4)

Hint: At some point you will find the result in part (b) useful.

(d) Assume that the data follows the model yi = β*1 x1,i + εi, where ε is a zero-mean noise. In other words, the original God's model (regression function) is β*1 x1 and does not use x2, but for whatever reason we have also included x2 in our model. Show that, despite this wrong inclusion, β̂1 is still an unbiased estimate of β*1, i.e., E(β̂1) = β*1.

(e) Suppose that we want to minimize the RSS, but at the same time want to enforce that β1 and β2 are close to each other. So we consider the following modified RSS:

    RSS̃ = (β1 − β2)² + Σ_{i=1}^n (yi − β1 x1,i − β2 x2,i)².

Show that minimizing RSS̃ yields the following optimal estimates for β1 and β2:

    β̃1 = [(Σ_{i=1}^n yi x1,i)(1 + Σ_{i=1}^n x2,i²) − (Σ_{i=1}^n yi x2,i)(−1 + Σ_{i=1}^n x1,i x2,i)] / [(1 + Σ_{i=1}^n x1,i²)(1 + Σ_{i=1}^n x2,i²) − (−1 + Σ_{i=1}^n x2,i x1,i)²],

    β̃2 = [(Σ_{i=1}^n yi x2,i)(1 + Σ_{i=1}^n x1,i²) − (Σ_{i=1}^n yi x1,i)(−1 + Σ_{i=1}^n x1,i x2,i)] / [(1 + Σ_{i=1}^n x1,i²)(1 + Σ_{i=1}^n x2,i²) − (−1 + Σ_{i=1}^n x2,i x1,i)²].

Hint: It is totally fine to take a similar strategy as in part (c), but if you think a little outside the box, there might be an easier way of getting to these results based on the results of part (c).

(f) Let's use an example and see if the result you got in part (c) can also be obtained in R via the lm function (feel free to use Python for linear regression if you are more comfortable). For this purpose assume that n = 10 and our data are as follows:

    x1        x2        y
    89.09     78.48     113.27
    84.24     70.56     109.77
    98.77     93.52     130.08
    95.44     86.72     120.45
    90.98     79.20     115.09
    97.39     91.36     125.37
    89.27     80.00     116.22
    88.51     76.96     112.08
    97.06     92.56     127.85
    84.45     66.40     107.61

Write a program that takes the data as indicated in the table and calculates the values of β̂1 and β̂2 as suggested in part (c). Attach the code and results.

(g) Write a program that takes the data as indicated in the table and calculates the values of β̂1 and β̂2 using the linear model function in R (or Python). If you write your program correctly, you should get identical results as in part (f). Attach the code and results.
Hint: When you use the lm command in R, you need to somehow enforce the intercept to be zero.

Q2. The goal of this question is to mathematically show that for linear models the expected mean squared error (MSE) for the training data is always less than the expected MSE for the test data. While the idea of the proof is in general similar to what we will do, to avoid complications, let's work with a simple model.

(a) Suppose that the God's model is y = β*x and our observations are in the form y = β*x + ε, where ε is a zero-mean noise with variance σ². Consider a training set of size n, as (x1, y1), (x2, y2), ..., (xn, yn). We define the training MSE function as

    M(α) = (1/n) Σ_{i=1}^n (yi − α xi)².

Mathematically show that

    E(M(β*)) = σ².    (5)

(b) Suppose that β̂ is obtained by minimizing the MSE associated with the training data. Mathematically, or in simple words, discuss why we should have

    M(β̂) ≤ M(β*).    (6)

(-) I do this part for you. We know that if for two random variables u and v we always have u ≤ v, then we also have E(u) ≤ E(v). Using the result of part (b), this fact implies that

    E(M(β̂)) ≤ E(M(β*)).    (7)

(c) Consider a test set of size n, as (x̃1, ỹ1), (x̃2, ỹ2), ..., (x̃n, ỹn). We define the test MSE function as

    M̃(α) = (1/n) Σ_{i=1}^n (ỹi − α x̃i)².

Mathematically show that

    E(M̃(β̂)) = σ² + (1/n) Σ_{i=1}^n (β̂ x̃i − β* x̃i)².    (8)

(d) Now, by comparing (5), (7) and (8), explain why you can immediately conclude that

    E(M(β̂)) ≤ E(M̃(β̂)).

Q3. To discover a physical law, we have collected 240 data samples, where p1, p2 and d are the input parameters, and F is the response variable. You can access the data in the homework folder in a file named PhysicalLaw.csv.
– Read the data file and split it into two sets. Set 1 includes the first 200 rows of the data (do not count the row associated with the feature/response names), and set 2 includes the last 40 rows of the data. Name the first set train and the second set test.

(a) Using the training data, fit a linear regression model as

    F = β0 + β1 p1 + β2 p2 + β3 d,    (9)

report the fitted parameters, the 95% confidence interval for each estimated parameter, the p-values and the R² statistic. Explain what the R² statistic tells you.

(b) Based on the p-values and α = 0.05, do you see any significance problem with any of the features?

(c) Use the fitted model and pass the features in your test file to generate the corresponding response F_pred (a vector of length 40). Now compare this quantity with the original responses in your test file, F_test, using the test root mean squared error (RMSE) defined as

    RMSE = sqrt( (1/40) Σ_{i=1}^{40} (F_test,i − F_pred,i)² ).

Q4. Read the data file ModelSelection.csv, which contains 1500 pairs of (x, y). These data are acquired from a model of the form

    y = β0 + β1 x^{n1} + β2 x^{n2} + β3 x^{n3}.    (10)

We neither know the quantities β0, β1, β2, β3, nor the exponents n1, n2, n3. All we know is the following:
• β0, β1, β2, β3 are real numbers;
• n1, n2, n3 are integers not less than 1 and not greater than 10; in other words, ni ∈ {1, 2, 3, ..., 10}, i = 1, 2, 3.

Read the data and split it into two sets. Set 1 includes the first 1000 rows of the data (do not count the row associated with the x, y names), and set 2 includes the last 500 rows of the data. Name the first set train and the second set test. Using the linear model function in R (or the counterpart in Python), write a program that explores all the models of the form (10), trains them on train and tests them on test. The output of your program should be the values β0, β1, β2, β3 and n1, n2, n3 which correspond to the model with the best (i.e., minimum) MSE value. Please provide your code and the values β0, β1, β2, β3 and n1, n2, n3 that your code returns.
Answer (Mohd, Jan 27, 2021):
Q1(a). The model to fit is yi = β1 x1,i + β2 x2,i (note there is no intercept term). The residual for sample i is ei = yi − (β1 x1,i + β2 x2,i), and the residual sum of squares is

    RSS(β1, β2) = e1² + e2² + ... + en² = Σ_{i=1}^n (yi − β1 x1,i − β2 x2,i)².
Q1(b). Multiply the first equation by e and the second by b, then subtract: (ae − bd) β1 = ce − bf, so β1 = (ce − bf)/(ae − bd). Similarly, multiply the second equation by a and the first by d, then subtract: (ae − bd) β2 = af − cd, so β2 = (af − cd)/(ae − bd). Both divisions are valid because ae − bd ≠ 0.
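A quick numerical sanity check of these formulas in R (the coefficient values below are arbitrary; solve() provides the reference answer):

a <- 2; b <- 1; c0 <- 5; d <- 1; e <- 3; f0 <- 10   # c0, f0 play the roles of c and f
stopifnot(a * e - b * d != 0)                       # the formulas require ae - bd != 0
beta1 <- (c0 * e - b * f0) / (a * e - b * d)        # closed form from part (b)
beta2 <- (a * f0 - c0 * d) / (a * e - b * d)
A <- matrix(c(a, b, d, e), nrow = 2, byrow = TRUE)
solve(A, c(c0, f0))                                 # reference: should equal c(beta1, beta2)
c(beta1, beta2)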
Q1(c). Setting the partial derivatives of the RSS from part (a) with respect to β1 and β2 to zero gives the normal equations

    (Σ_{i=1}^n x1,i²) β1 + (Σ_{i=1}^n x1,i x2,i) β2 = Σ_{i=1}^n yi x1,i,
    (Σ_{i=1}^n x1,i x2,i) β1 + (Σ_{i=1}^n x2,i²) β2 = Σ_{i=1}^n yi x2,i.

This is exactly the system of part (b) with a = Σ x1,i², b = d = Σ x1,i x2,i, e = Σ x2,i², c = Σ yi x1,i and f = Σ yi x2,i; substituting these quantities into the part (b) solution yields formulas (3) and (4). (Note that minimizing the RSS is equivalent to minimizing the MSE, since argmin RSS(θ) = argmin (1/n)·RSS(θ) = argmin MSE(θ).)
Q2. Let β̂ minimize the training risk R_tr(β) = (1/N) Σ_{k=1}^N (Yk − βᵀXk)², and let β* minimize the expected squared error. Since β̂ minimizes the training RSS,

    Σ_{k=1}^N (Yk − β̂ᵀXk)² ≤ Σ_{k=1}^N (Yk − β*ᵀXk)².

Because the training pairs are identically distributed,

    (1/N) Σ_{k=1}^N E[(Yk − β*ᵀXk)²] = E[(Yi − β*ᵀXi)²].

Since β* minimizes the expected squared error and β̂ is independent of the test data (X̃, Ỹ),

    E[(Yi − β*ᵀXi)²] = E[(Ỹi − β*ᵀX̃i)²] ≤ E[(Ỹi − β̂ᵀX̃i)²].

Combining the three displays, for a test set of size M,

    (1/N) Σ_{k=1}^N E[(Yk − β̂ᵀXk)²] ≤ (1/M) Σ_{k=1}^M E[(Ỹk − β̂ᵀX̃k)²],

that is, E[R_tr(β̂)] ≤ E[R_te(β̂)].
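A quick Monte Carlo check of this inequality under the simple setup of Q2 (a sketch; the choices beta_star = 2, sigma = 1 and n = 20 are arbitrary):

set.seed(1)
beta_star <- 2; sigma <- 1; n <- 20; reps <- 5000
mse_tr <- numeric(reps); mse_te <- numeric(reps)
for (r in seq_len(reps)) {
  x  <- runif(n); y  <- beta_star * x  + rnorm(n, sd = sigma)   # training data
  xt <- runif(n); yt <- beta_star * xt + rnorm(n, sd = sigma)   # test data
  beta_hat  <- sum(x * y) / sum(x^2)         # minimizes the training MSE for y = alpha*x
  mse_tr[r] <- mean((y  - beta_hat * x)^2)   # training MSE, M(beta_hat)
  mse_te[r] <- mean((yt - beta_hat * xt)^2)  # test MSE, M~(beta_hat)
}
mean(mse_tr)   # comes out slightly below sigma^2 = 1
mean(mse_te)   # comes out slightly above sigma^2 = 1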
Q1(f). Write a program that takes the data as indicated in the table and calculates the values of β̂1 and β̂2 as suggested in part (c).
library(readxl)
Q_2 <- read_excel("Q_2.xlsx")
summary(Q_2)
## X_1 X_2 Y
## Min. :84.24 Min. :66.40 Min. :107.6
## 1st Qu.:88.66 1st Qu.:77.34 1st Qu.:112.4
## Median :90.12 Median :79.60 Median :115.7
## Mean :91.52 Mean :81.58 Mean :117.8
## 3rd Qu.:96.66 3rd Qu.:90.20 3rd Qu.:124.1
## Max. :98.77 Max. :93.52 Max. :130.1
model_F<-lm(Y~X_1+X_2,data=Q_2)
summary(model_F)
##
## Call:
## lm(formula = Y ~ X_1 + X_2, data = Q_2)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.9228 -1.5469 -0.1664 1.3899 2.3130
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 34.1760 29.1716 1.172 0.280
## X_1 0.3663 0.6315 0.580 0.580
## X_2 0.6139 0.3629 1.691 0.135
##
## Residual standard error: 1.823 on 7 degrees of freedom
## Multiple R-squared: 0.9575, Adjusted R-squared: 0.9454
## F-statistic: 78.86 on 2 and 7 DF, p-value: 1.582e-05
deviance(model_F)
## [1] 23.2684
sum(resid(model_F)^2)
## [1] 23.2684
anova(model_F)["Residuals", "Sum Sq"]
## [1] 23.2684
anova(model_F)
## Analysis of Variance Table
##
## Response: Y
## Df Sum Sq Mean Sq F value Pr(>F)
## X_1 1 514.74 514.74 154.854 4.981e-06 ***
## X_2 1 9.51 9.51 2.861 0.1346
## Residuals 7 23.27 3.32
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
with(summary(model_F), df[2] * sigma^2)
## [1] 23.2684
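Note that model_F above includes an intercept, while the model in Q1 has none; part (f) can also be answered by plugging the data directly into the closed-form expressions (3) and (4) from part (c). A minimal sketch, assuming the Q_2 data frame loaded above with columns X_1, X_2 and Y:

x1 <- Q_2$X_1; x2 <- Q_2$X_2; y <- Q_2$Y
s11 <- sum(x1^2); s22 <- sum(x2^2); s12 <- sum(x1 * x2)   # building blocks of (3) and (4)
sy1 <- sum(y * x1); sy2 <- sum(y * x2)
den <- s11 * s22 - s12^2                                  # common denominator
beta1_hat <- (sy1 * s22 - sy2 * s12) / den
beta2_hat <- (sy2 * s11 - sy1 * s12) / den
c(beta1_hat, beta2_hat)   # should match the zero-intercept lm fit in part (g)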
Q1(g). Write a program that takes the data as indicated in the table and calculates the values of β̂1 and β̂2 using the linear model function in R (or Python). If you write your program correctly, you should get identical results as in part (f).
model_g<-lm(Y~0+X_1+X_2,data=Q_2)
summary(model_g)
##
## Call:
## lm(formula = Y ~ 0 + X_1 + X_2, data = Q_2)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.71757 -1.33676 -0.03497 1.54760 2.35368
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## X_1 1.0935 0.1189 9.196 1.58e-05 ***
## X_2 0.2168 0.1328 1.632 0.141
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.865 on 8 degrees of freedom
## Multiple R-squared: 0.9998, Adjusted R-squared: 0.9998
## F-statistic: 2.001e+04 on 2 and 8 DF, p-value: 1.595e-15
Q3. To discover a physical law, we have collected 240 data samples, where p1, p2 and d are the input parameters, and F is the response variable. You can access the data in the homework folder in a file named PhysicalLaw.csv. Read the data file and split it into two sets. Set 1 includes the first 200 rows of the data (do not count the row associated with the feature/response names), and set 2 includes the last 40 rows. Name the first set train and the second set test.
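A minimal sketch of the requested Q3 workflow (assuming PhysicalLaw.csv is in the working directory with columns p1, p2, d and F):

physical <- read.csv("PhysicalLaw.csv")
train <- physical[1:200, ]    # set 1: first 200 rows
test  <- physical[201:240, ]  # set 2: last 40 rows
model_q3 <- lm(F ~ p1 + p2 + d, data = train)   # (a) fit F = b0 + b1*p1 + b2*p2 + b3*d
summary(model_q3)                 # estimates, p-values and the R^2 statistic
confint(model_q3, level = 0.95)   # 95% confidence intervals for the parameters
F_pred <- predict(model_q3, newdata = test)     # (c) predictions on the 40 test rows
sqrt(mean((test$F - F_pred)^2))                 # test RMSE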
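Q4 calls for an exhaustive search over the exponents (n1, n2, n3). A minimal sketch of one way to carry it out (assuming ModelSelection.csv has columns x and y; the loops allow repeated exponents, in which case lm simply drops the duplicated column):

ms <- read.csv("ModelSelection.csv")
train <- ms[1:1000, ]    # set 1: first 1000 rows
test  <- ms[1001:1500, ] # set 2: last 500 rows
best <- list(mse = Inf)
for (n1 in 1:10) for (n2 in n1:10) for (n3 in n2:10) {
  fit  <- lm(y ~ I(x^n1) + I(x^n2) + I(x^n3), data = train)
  pred <- predict(fit, newdata = test)
  mse  <- mean((test$y - pred)^2)                 # test MSE for this exponent triple
  if (!is.na(mse) && mse < best$mse) {
    best <- list(mse = mse, n = c(n1, n2, n3), coef = coef(fit))
  }
}
best   # exponents and coefficients of the minimum test-MSE model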