Worksheet for CISC 3440, Fall 2021 Lab 3


Resources:
- handson-ml2/04_training_linear_models.ipynb at master · ageron/handson-ml2 · GitHub
- 4 - Linear Regression - Colaboratory (google.com)
- https://docs.google.com/presentation/d/e/2PACX-1vRAfBxFIawKf7c735_30GO66ccIbnckvCu15BpQC9DpPlkcgPRgw02FjsIdGmGqXIa0lI-TXNZyuveg/pub?start=false&loop=false





Answered 1 day after Oct 01, 2021

Answer To: Worksheet for CISC 3440, Fall 2021 Lab 3

Dinesh answered on Oct 02 2021
2-linear-regression/lab3-u3lx51ud.pdf
CISC 3440, Fall 2021 Lab 3
INSTRUCTIONS: Please fill out this form with a PDF form editor to retain Acroform metadata and save the file with a
name containing the substring lab3 in all lower case. Submit a copy to the correct Dropbox link by the due date listed on the
course calendar. This worksheet is meant for students of Brooklyn College CISC 3440 to complete on their own. Contents are
created and copyrighted.
Name: Emplid: Term:
1. (5 points) Training models. Practice training models by setting parameters and hyperparameters for the regression algorithms
available in [1], then fill in the table below. An example is provided on the next page.
Parameters & Hyperparameters            Running time    Intercept    Coefficient (first 3)    Training Score
# use defaults
SGDRegressor().fit(x_train, y_train)
References
[1] Scikit Learn API Reference https://scikit-learn.org/stable/modules/classes.html
2. (5 points) Which combination of parameters and hyperparameters from Question 1 gives the best model? Explain briefly in
your own words.
Listing 1: Sample
import time  # start Python file with imports
import numpy as np
from numpy import genfromtxt
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import *

def print_row(item, t, i, s):
    ii = (i.astype('float')[0] if type(i) == np.ndarray else i)
    print("{}\t{:.4f}\t\t{:>10.4f}\t\t{:.4f}".format(item, t, ii, s))

# Get Data
data = genfromtxt('data.csv', delimiter=',', skip_header=1)

# Clean Data
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
data_imputed = imp.fit_transform(data)  # fill NaN values
y = data_imputed[:, 8]  # median household price, 9th column
x = np.delete(data_imputed, 8, 1)  # drop column before modeling

# Split into train and test samples
x_train, x_test, y_train, y_test = train_test_split(x, y)

print("Item\t\tTime\t\tIntercept\t\tScore")
print("-------------- -------- ----------------- ------")
s1 = time.perf_counter()
model1 = SGDRegressor(loss='huber').fit(x_train, y_train)
print_row("SGD Regression", (time.perf_counter() - s1), model1.intercept_, model1.score(x_train, y_train))

s2 = time.perf_counter()
model2 = LinearRegression().fit(x_train, y_train)
print_row("Linear Reg.", time.perf_counter() - s2, model2.intercept_, model2.score(x_train, y_train))
Listing 2: Sample Output
Item Time Intercept Score
-------------- -------- ----------------- ------
SGD Regression 0.2251 10.2277 0.0856
Linear Reg. 0.0037 -3568476.5740 0.6354
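
Listing 1 reports the running time, intercept, and score, but not the first three coefficients that the table in Question 1 asks for. A minimal sketch of how they could be read off the fitted models, assuming the variables from Listing 1:

# Sketch only: model1 and model2 are the fitted estimators from Listing 1
print(model1.coef_[:3])  # SGDRegressor stores coef_ as a 1-D array of shape (n_features,)
print(model2.coef_[:3])  # LinearRegression fitted on a 1-D y also exposes a 1-D coef_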
        name:
        emplid:
        semester: Fall 2021
        sgd1code: SGDRegressor()  # use defaults
        sgd1time: 0.00433
        sgd1intercept: 8.6171
        sgd1coef: [0.8868736, 0.12427902, 0.16103357]
        sgd1score: 0.6744
        sgd2code: LinearRegression()
        sgd2time: 0.0371
        sgd2intercept: 10.4681
        sgd2coef: [0.88270615, 0.0902069, 0.13382475]
        sgd2score: 0.7017
        sgd3code: SGDRegressor(penalty="l1", max_iter=10000, random_state=42, shuffle=True)
        sgd3time: 0.0802
        sgd3intercept: 8.6286
        sgd3coef: [0.87986317, 0.11726851, 0.15804908]
        sgd3score: 0.6757
        sgd4code: SGDRegressor(penalty="elasticnet", alpha=0.005, l1_ratio=0.5, max_iter=10000, random_state=42, shuffle=True)
        sgd4time: 0.0102
        sgd4intercept: 8.6286
        sgd4coef: [0.84211057, 0.10631453, 0.15566113]
        sgd4score: 0.6758
        sgd5code: Ridge(alpha=0.05, solver="cholesky")
        sgd5time: 0.0025
        sgd5intercept: 10.4683
        sgd5coef: [0.88236792, 0.09015322, 0.13390973]
        sgd5score: 0.7017
        sgd6code: Lasso(alpha=0.05, random_state=42)
        sgd6time: 0.00374
        sgd6intercept: 10.518
        sgd6coef: [0.57224566, -0., 0.11522988]
        sgd6score: 0.64427
        reflection: The red wine dataset was used to generate the models. The target variable is the alcohol level of the wine, predicted from the other variables. The linear models Ridge and LinearRegression performed well compared to the other models. The Ridge model used the alpha and solver parameters: in ridge regression the loss function is linear least squares and the regularization is given by the l2-norm. The alpha parameter tunes the regularization strength; it improves the conditioning of the problem and reduces the variance of the estimates. solver="cholesky" uses the standard scipy.linalg.solve function to obtain a closed-form solution. The dataset was split in an 80/20 ratio into training and test sets, a standard scaler was applied to the features, and a pipeline was created to generate each model.
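
A minimal sketch of the workflow the reflection describes (80/20 split, standard scaling inside a pipeline, Ridge with the Cholesky solver), using the file and column names from the notebook below and skipping, for brevity, the one-hot encoding of quality that the notebook applies:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

wine = pd.read_csv("wineQualityReds.csv")
X = wine.drop(["alcohol", "Unnamed: 0"], axis=1)  # predict alcohol from the other variables
y = wine["alcohol"]

# 80/20 train/test split, as described in the reflection
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)

# Scale the features, then fit Ridge: alpha tunes the l2 regularization strength,
# and solver="cholesky" obtains the closed-form solution via scipy.linalg.solve
model = Pipeline([
    ("scaler", StandardScaler()),
    ("ridge", Ridge(alpha=0.05, solver="cholesky")),
])
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on the held-out 20%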
2-linear-regression/linear_regression_california_house-price.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "a71b7c46-aa50-4a28-ae07-916360985b7b",
"metadata": {},
"outputs": [],
"source": [
"# import packages\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import seaborn as sns"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "711053ef-8fd1-4228-a55e-d8bae02a380b",
"metadata": {},
"outputs": [],
"source": [
"file = \"wineQualityReds.csv\""
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "241ab54d-482f-4570-8ff7-192c26e9ad5d",
"metadata": {},
"outputs": [],
"source": [
"wine = pd.read_csv(file)"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "9a64a766-ae85-4d91-8f74-8fd01ba14984",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"
Unnamed: 0fixed.acidityvolatile.aciditycitric.acidresidual.sugarchloridesfree.sulfur.dioxidetotal.sulfur.dioxidedensitypHsulphatesalcoholquality
017.40.700.001.90.07611.034.00.99783.510.569.45
127.80.880.002.60.09825.067.00.99683.200.689.85
237.80.760.042.30.09215.054.00.99703.260.659.85
3411.20.280.561.90.07517.060.00.99803.160.589.86
457.40.700.001.90.07611.034.00.99783.510.569.45
\n",
"
"
],
"text/plain": [
" Unnamed: 0 fixed.acidity volatile.acidity citric.acid residual.sugar \\\n",
"0 1 7.4 0.70 0.00 1.9 \n",
"1 2 7.8 0.88 0.00 2.6 \n",
"2 3 7.8 0.76 0.04 2.3 \n",
"3 4 11.2 0.28 0.56 1.9 \n",
"4 5 7.4 0.70 0.00 1.9 \n",
"\n",
" chlorides free.sulfur.dioxide total.sulfur.dioxide density pH \\\n",
"0 0.076 11.0 34.0 0.9978 3.51 \n",
"1 0.098 25.0 67.0 0.9968 3.20 \n",
"2 0.092 15.0 54.0 0.9970 3.26 \n",
"3 0.075 17.0 60.0 0.9980 3.16 \n",
"4 0.076 11.0 34.0 0.9978 3.51 \n",
"\n",
" sulphates alcohol quality \n",
"0 0.56 9.4 5 \n",
"1 0.68 9.8 5 \n",
"2 0.65 9.8 5 \n",
"3 0.58 9.8 6 \n",
"4 0.56 9.4 5 "
]
},
"execution_count": 51,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"wine.head()"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "266a2e72-3ff4-494f-a02d-b2ae13f65820",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Index(['Unnamed: 0', 'fixed.acidity', 'volatile.acidity', 'citric.acid',\n",
" 'residual.sugar', 'chlorides', 'free.sulfur.dioxide',\n",
" 'total.sulfur.dioxide', 'density', 'pH', 'sulphates', 'alcohol',\n",
" 'quality'],\n",
" dtype='object')"
]
},
"execution_count": 52,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"wine.columns"
]
},
{
"cell_type": "code",
"execution_count": 53,
"id": "804fc802-a485-4ce9-99dd-743b8c7845ed",
"metadata": {},
"outputs": [],
"source": [
"# Split features and labels\n",
"\n",
"X = wine.drop([\"alcohol\", \"Unnamed: 0\"], axis = 1)\n",
"y = wine[[\"alcohol\"]]"
]
},
{
"cell_type": "code",
"execution_count": 54,
"id": "015eb31d-3cb4-408c-9bed-23b0262add7e",
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import train_test_split"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "80a9fb48-2a82-4d25-a7f2-cb672f6a41c7",
"metadata": {},
"outputs": [],
"source": [
"X_train, X_test, y_train, y_test = train_test_split(X,y, \n",
" test_size = 0.2, \n",
" shuffle = True, \n",
" random_state = 12)"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "b251c17d-9d22-4184-a29e-4c836a76e7a2",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"
fixed.acidityvolatile.aciditycitric.acidresidual.sugarchloridesfree.sulfur.dioxidetotal.sulfur.dioxidedensitypHsulphatesquality
13266.70.4600.241.70.07718.034.00.994803.390.606
4368.00.6700.302.00.06038.062.00.995803.260.566
1887.90.5000.332.00.08415.0143.00.996803.200.555
10348.90.7450.182.50.07715.048.00.997393.200.476
14139.90.5700.252.00.10412.089.00.996303.040.905
\n",
"
"
],
"text/plain": [
" fixed.acidity volatile.acidity citric.acid residual.sugar chlorides \\\n",
"1326 6.7 0.460 0.24 1.7 0.077 \n",
"436 8.0 0.670 0.30 2.0 0.060 \n",
"188 7.9 0.500 0.33 2.0 0.084 \n",
"1034 8.9 0.745 0.18 2.5 0.077 \n",
"1413 9.9 0.570 0.25 2.0 0.104 \n",
"\n",
" free.sulfur.dioxide total.sulfur.dioxide density pH sulphates \\\n",
"1326 18.0 34.0 0.99480 3.39 0.60 \n",
"436 38.0 62.0 0.99580 3.26 0.56 \n",
"188 15.0 143.0 0.99680 3.20 0.55 \n",
"1034 15.0 48.0 0.99739 3.20 0.47 \n",
"1413 12.0 89.0 0.99630 3.04 0.90 \n",
"\n",
" quality \n",
"1326 6 \n",
"436 6 \n",
"188 5 \n",
"1034 6 \n",
"1413 5 "
]
},
"execution_count": 58,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X_train.head()"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "4b880251-5efd-4749-88e3-cf7c29c64671",
"metadata": {},
"outputs": [],
"source": [
"# Removing categorical variable)\n",
"cat_attribs = ['quality']\n",
"num_attribs = list(X_train.drop(cat_attribs, axis=1))"
]
},
{
"cell_type": "code",
"execution_count": 60,
"id": "81ec704f-2361-44d2-a140-bbaee0a48f2c",
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.preprocessing import OneHotEncoder\n",
"from sklearn.base import BaseEstimator, TransformerMixin\n",
"from sklearn.pipeline import FeatureUnion\n",
"from sklearn.linear_model import LinearRegression\n",
"\n",
"class DFSelector(BaseEstimator, TransformerMixin):\n",
" def __init__(self, attribute_names):\n",
" self.attribute_names = attribute_names\n",
" def fit(self, X, y=None):\n",
" return self\n",
" def transform(self, X):\n",
" return X[self.attribute_names].values"
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "3b2d08fe-4b2e-4c7c-9d2f-e9118f8b453b",
"metadata": {},
"outputs": [],
"source": [
"num_pipe = Pipeline([\n",
" ('DFSelector', DFSelector(num_attribs)),\n",
" ('scaler', StandardScaler()) # Feature Scaling\n",
"\n",
"])\n",
"\n",
"cat_pipe = Pipeline([\n",
" ('DFSelector', DFSelector(cat_attribs)),\n",
" ('OneHot', OneHotEncoder(sparse = False)) #OneHotEncoding of Categorical Attributes\n",
"])\n",
"\n",
"\n",
"full_pipeline = FeatureUnion(transformer_list =[\n",
" (\"num_pipeline\", num_pipe),\n",
" (\"cat_pipeline\", cat_pipe)\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 62,
"id": "d5dab068-d653-4431-9011-ac3ea5cd7601",
"metadata": {},
"outputs": [],
"source": [
"# Preprocessing of the training set\n",
"X_train_prepared = full_pipeline.fit_transform(X_train)"
]
},
{
"cell_type": "code",
"execution_count": 114,
"id": "bcb9ef56-c904-4750-8f38-d7f0fe4fe059",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" SGD Regression 0.0037141999996492814 [10.46811739] 0.7017543335280096\n"
]
}
],
"source": [
"from sklearn.linear_model import LinearRegression\n",
"\n",
"\n",
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter()\n",
"model_1 = LinearRegression()\n",
"model_1.fit(X_train_prepared, y_train)\n",
"print(\" Linear Regression \", (time.perf_counter()-s1), model_1.intercept_ , model_1.score (X_train_prepared,y_train))"
]
},
{
"cell_type": "code",
"execution_count": 119,
"id": "9d59079c-d844-4381-9685-17b37c75d748",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 0.88270615, 0.0902069 , 0.13382475, 0.37794551, -0.0197104 ,\n",
" -0.02669471, -0.04297927, -1.09997347, 0.57315103, 0.17116825,\n",
" -0.396002 , -0.3328361 , -0.20984539, 0.06136907, 0.28487665,\n",
" 0.59243777]])"
]
},
"execution_count": 119,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model_1.coef_[:3]"
]
},
{
"cell_type": "code",
"execution_count": 90,
"id": "88d2b901-29b5-4bf0-800d-72b3ff3e84b0",
"metadata": {},
"outputs": [],
"source": [
"# Transform y_train for cross val score. It works witout, but an error occurs\n",
"\n",
"y_train_rs = y_train.values"
]
},
{
"cell_type": "code",
"execution_count": 108,
"id": "f7864cf0-858e-4151-837d-9980ff412f23",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" SGD Regression 0.004333699999733653 [8.61719143] 0.6744331809841367\n"
]
}
],
"source": [
"from sklearn.linear_model import SGDRegressor\n",
"import time\n",
"\n",
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter ()\n",
"model_2 = SGDRegressor()\n",
"model_2.fit(X_train_prepared, y_train_rs.ravel())\n",
"print(\" SGD Regression \", (time.perf_counter()-s1), model_2.intercept_ , model_2.score (X_train_prepared,y_train_rs.ravel()))"
]
},
{
"cell_type": "code",
"execution_count": 113,
"id": "c06862fa-2e2c-4800-afcc-672f463e0b16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.8868736 , 0.12427902, 0.16103357])"
]
},
"execution_count": 113,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model1.coef_[:3]"
]
},
{
"cell_type": "code",
"execution_count": 121,
"id": "817aa3c0-ab26-467b-a519-bcb51236aa02",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" SGD Regression 0.005118800000218471 [8.63085173] 0.6759119390329273\n"
]
},
{
"data": {
"text/plain": [
"array([0.87976688, 0.11744654, 0.1584008 ])"
]
},
"execution_count": 121,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter()\n",
"model_3 = SGDRegressor(penalty=\"l2\", max_iter = 1000, random_state = 42)\n",
"model_3.fit(X_train_prepared, y_train_rs.ravel())\n",
"print(\" SGD Regression \", (time.perf_counter()-s1), model_3.intercept_ , model_3.score (X_train_prepared,y_train_rs.ravel()))\n",
"\n",
"model_3.coef_[:3]"
]
},
{
"cell_type": "code",
"execution_count": 123,
"id": "e7329da3-2d51-4e29-8537-d14149f9476c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" SGD Regression 0.00802109999995082 [8.62860819] 0.6757754267217662\n"
]
},
{
"data": {
"text/plain": [
"array([0.87986317, 0.11726851, 0.15804908])"
]
},
"execution_count": 123,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter()\n",
"model_4 = SGDRegressor(penalty=\"l1\", max_iter = 10000, random_state = 42, shuffle=True)\n",
"model_4.fit(X_train_prepared, y_train_rs.ravel())\n",
"print(\" SGD Regression \", (time.perf_counter()-s1), model_4.intercept_ , model_4.score (X_train_prepared,y_train_rs.ravel()))\n",
"\n",
"model_4.coef_[:3]"
]
},
{
"cell_type": "code",
"execution_count": 125,
"id": "696b35f7-ac28-4415-a1b1-db2429c554b5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" SGD Regression 0.010291599999618484 [8.62860819] 0.6757754267217662\n"
]
},
{
"data": {
"text/plain": [
"array([0.84211057, 0.10631453, 0.15566113])"
]
},
"execution_count": 125,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.linear_model import SGDRegressor\n",
"\n",
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter()\n",
"model_5 = SGDRegressor(penalty=\"elasticnet\", alpha = 0.005, l1_ratio = 0.5, max_iter = 10000, random_state = 42, shuffle=True)\n",
"model_5.fit(X_train_prepared, y_train_rs.ravel())\n",
"print(\" SGD Regression \", (time.perf_counter()-s1), model_4.intercept_ , model_4.score(X_train_prepared,y_train_rs.ravel()))\n",
"\n",
"model_5.coef_[:3]\n"
]
},
{
"cell_type": "code",
"execution_count": 126,
"id": "caa59cf6-22f8-4510-9bda-11df26a45a8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" SGD Regression 0.0025458999998591025 [10.46833313] 0.701754237415733\n"
]
},
{
"data": {
"text/plain": [
"array([[ 0.88236792, 0.09015322, 0.13390973, 0.37788314, -0.01981216,\n",
" -0.02666937, -0.04303708, -1.0997814 , 0.57296433, 0.17119437,\n",
" -0.39356499, -0.33257821, -0.21005571, 0.06113941, 0.28459845,\n",
" 0.59046105]])"
]
},
"execution_count": 126,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.linear_model import Ridge\n",
"\n",
"\n",
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter()\n",
"model_6 = Ridge(alpha=0.05, solver=\"cholesky\")\n",
"model_6.fit(X_train_prepared, y_train)\n",
"print(\" Ridge \", (time.perf_counter()-s1), model_6.intercept_ , model_6.score(X_train_prepared,y_train))\n",
"\n",
"model_6.coef_[:3]\n"
]
},
{
"cell_type": "code",
"execution_count": 127,
"id": "9693857c-b37d-406b-9306-cb58c9b0cb0e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Item \t\\ tTime \t\t Intercept \t\\ tScore \n",
" -------------- -------- ----------------- ------\n",
" Ridge 0.0037471000005098176 [10.51891754] 0.6442795561092307\n"
]
},
{
"data": {
"text/plain": [
"array([ 0.57224566, -0. , 0.11522988])"
]
},
"execution_count": 127,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.linear_model import Lasso\n",
"\n",
"\n",
"\n",
"print (\" Item \\t\\ tTime \\t\\t Intercept \\t\\ tScore \")\n",
"print (\" -------------- -------- ----------------- ------\")\n",
"s1 = time.perf_counter()\n",
"model_7 = Lasso(alpha=0.05, random_state = 42)\n",
"model_7.fit(X_train_prepared, y_train)\n",
"print(\" Ridge \", (time.perf_counter()-s1), model_7.intercept_ , model_7.score(X_train_prepared,y_train))\n",
"\n",
"model_7.coef_[:3]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8817a60-7d2c-4104-8798-8bf6c46a5405",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
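
The notebook converts y_train to a NumPy array "for cross val score" but never calls cross_val_score. A minimal sketch of that missing step, assuming the X_train_prepared and y_train_rs arrays defined in the notebook above:

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import SGDRegressor

# 5-fold cross-validated R^2 for the default SGDRegressor on the prepared training data
scores = cross_val_score(SGDRegressor(random_state=42),
                         X_train_prepared, y_train_rs.ravel(),
                         cv=5, scoring="r2")
print(scores.mean(), scores.std())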
2-linear-regression/wineQualityInfo.txt
Citation Request:
This dataset is publicly available for research. The details are described in [Cortez et al., 2009].
Please include this citation if you plan to use this database:
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
Modeling wine preferences by data mining from physicochemical properties.
In Decision Support Systems, Elsevier, 47(4):547-553. ISSN: 0167-9236.
Available at: [@Elsevier] http://dx.doi.org/10.1016/j.dss.2009.05.016
[Pre-press (pdf)] http://www3.dsi.uminho.pt/pcortez/winequality09.pdf
[bib] http://www3.dsi.uminho.pt/pcortez/dss09.bib
1. Title: Wine Quality
2. Sources
Created by: Paulo Cortez (Univ. Minho), Antonio Cerdeira, Fernando Almeida, Telmo Matos and Jose Reis (CVRVV) @ 2009

3. Past Usage:
P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
Modeling wine preferences by data mining from physicochemical properties.
In Decision Support Systems, Elsevier, 47(4):547-553. ISSN: 0167-9236.
In the above reference, two datasets were created, using red and white wine samples.
The inputs include objective tests (e.g. PH values) and the output is based on sensory data
(median of at least 3 evaluations made by wine experts). Each expert graded the wine quality
between 0 (very bad) and 10 (very excellent). Several data mining methods were applied to model
these datasets under a regression approach. The support vector machine model achieved the
best results. Several metrics were computed: MAD, confusion matrix for a fixed error tolerance (T),
etc. Also, we plot the relative importances of the input variables (as measured by a sensitivity
analysis procedure).

4. Relevant Information:
The two datasets are related to red and white variants of the Portuguese "Vinho Verde" wine.
For more details, consult: http://www.vinhoverde.pt/en/ or the reference [Cortez et al., 2009].
Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables
are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.).
These datasets can be viewed as classification or regression tasks.
The classes are ordered and not balanced (e.g. there are many more normal wines than
excellent or poor ones). Outlier detection algorithms could be used to detect the few excellent
or poor wines. Also, we are not sure if all input variables are relevant, so
it could be interesting to test feature selection methods.
5. Number of Instances: red wine - 1599; white wine - 4898.
6. Number of Attributes: 11 + output attribute

Note: several of the attributes may be correlated, thus it makes sense to apply some sort of
feature selection.
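
A minimal sketch of one way to check those correlations, assuming the wineQualityReds.csv file used in the notebook above:

import pandas as pd

wine = pd.read_csv("wineQualityReds.csv")
# Pairwise Pearson correlations between attributes; strongly correlated
# inputs are natural candidates for feature selection
print(wine.corr().round(2))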
7. Attribute information:
For more information, read [Cortez et al., 2009].
Input variables (based on physicochemical tests):
1 - fixed acidity (tartaric acid - g / dm^3)
2 - volatile acidity (acetic acid - g / dm^3)
3 - citric acid (g / dm^3)
4 - residual sugar (g / dm^3)
5 - chlorides (sodium chloride - g / dm^3)
6 - free sulfur dioxide (mg / dm^3)
7 - total sulfur dioxide (mg / dm^3)
8 - density (g / cm^3)
9 - pH
10 - sulphates (potassium sulphate - g / dm3)
11 - alcohol (% by volume)
Output variable (based on sensory data):
12 - quality (score between 0 and 10)
8. Missing Attribute Values: None
9. Description of attributes:
1 - fixed acidity: most acids involved with wine are fixed or nonvolatile (do not evaporate readily)
2 - volatile acidity: the amount of acetic acid in wine, which at too high of levels can lead to an unpleasant, vinegar taste
3 - citric acid: found in small quantities, citric acid can add 'freshness' and flavor to wines
4 - residual sugar: the amount of sugar remaining after fermentation stops, it's rare to find wines with less than 1 gram/liter and wines with greater than 45 grams/liter are considered sweet
5 - chlorides: the amount of salt in the wine
6 - free sulfur dioxide: the free form of SO2 exists in equilibrium between molecular SO2 (as a dissolved gas) and bisulfite ion; it prevents microbial growth and the oxidation of wine
7 - total sulfur dioxide: amount of free and bound forms of S02; in low concentrations, SO2 is mostly undetectable in wine, but at free SO2 concentrations over 50 ppm, SO2 becomes evident in the nose and taste of wine
8 - density: the density of wine is close to that of water depending on the percent alcohol and sugar content
9 - pH: describes how acidic or basic a wine is on a scale from 0 (very acidic) to 14 (very basic); most wines are between 3-4 on the pH scale
10 - sulphates: a wine additive which can contribute to sulfur dioxide gas (S02) levels, which acts as an antimicrobial and antioxidant
11 - alcohol: the percent alcohol content of the wine
Output variable (based on sensory data):
12 - quality (score between 0 and 10)