
I have attached the instructions file describing what is required. Thank you.


Background
In Australia, we experienced extreme heat in 2019. With the inevitable rise of extreme weather events, it is crucial that we better understand their potential impact on our everyday life. Some consequences of extreme weather events and climate change were captured in this article: https://australasiantransportresearchforum.org.au/wp-content/uploads/2022/03/2007_Rowland_Davey_Freeman_Wishart.pdf
Various weather events may affect road safety. In this assignment, you will use a dataset based on publicly available data to understand the relationship between weather patterns and the number and severity of road traffic accidents. Your analysis could provide crucial knowledge for resource planning of emergency services. Assignment 1 focuses on the analysis of road traffic accident data.

Task 1: Road traffic accident dataset (16 points)
The dataset is attached to the assignment. Please download it and place it in a folder that your RStudio can access.
· How many rows and columns are in the data? (2 points)
· How many regions are in the data? (2 points)
· What data types are in the data? (Use the data type selection tree and provide a detailed explanation.) (2 points for data types, 2 points for explanations)
· What time period does the data cover? (2 points)
· What do the variables FATAL, SERIOUS, … represent? (2 points)
· What's the difference between "FATAL" and "SERIOUS" accidents? (3 points)

Task 2: Tidy data (20 points)
Task 2.1 Cleaning up columns
You may notice that the road traffic accidents CSV file has two rows of headings. This is quite common in data generated by BI reporting tools. Let's clean up the column names.

cav_data_link <- 'car_accidents_victoria.csv'
top_row <- read_csv(cav_data_link, col_names = FALSE, n_max = 1)
second_row <- read_csv(cav_data_link, n_max = 1)
column_names <- second_row %>%
    unlist(., use.names = FALSE) %>%
    make.unique(., sep = "__")  # double underscore
column_names[2:5] <- str_c(column_names[2:5], '0', sep = '__')
daily_accidents <- read_csv(cav_data_link, skip = 2, col_names = column_names)

Now print out a list of regions in the data set. (1 point)

Task 2.2 Tidying data
1. Now we have a data frame. Answer the following questions for this data frame.
· Does each variable have its own column? (1 point)
· Does each observation have its own row? (1 point)
· Does each value have its own cell? (1 point)
2. Use spreading and/or gathering (or their pivot_wider and pivot_longer new equivalents) to transform the data frame into tidy data (6 points). The key is to put data from the same measurement source in a column and to put each observation in a row. Please answer the following questions.
· How many spreading (or pivot_wider) operations do you need? (1 point)
· How many gathering (or pivot_longer) operations do you need? (1 point)
· Explain the steps in detail. (3 points)
3. Do the variables have the expected variable types in R? Clean up the data types. (3 points)
4. Are there any missing values? Fix the missing data and justify your actions. (2 points)

Task 3: Exploratory data analysis (20 points)
It is often a good idea to visually check your data before fitting a model. The purpose is to understand the distribution of the different measurements and the relations between them.
Task 3.1 Select a region
Select a region and create a dataset for only the selected region. (1 point) Print out the name of the chosen region (1 point), the number of serious road accidents (1 point), and the total number of road accidents in the region (2 points). Add a "total_accidents" column to the dataset for the selected region. (1 point)
Task 3.2
For the selected region, if we want to compare the number of road accidents across the year, which plot can we use? Show your plot and explain what it shows. (3 points)
Task 3.3
How do the road accident numbers change during a week? Show it visually using violin plots (2 points), describe the results (2 points) and provide your interpretation (2 points).
Task 3.4
Use the skimr and fitdistrplus libraries to answer the following questions. Which distributions are appropriate for modelling the number of accidents? (1 point) Which variables meet the assumptions for the Poisson distribution, and why? (2 points) To reduce the dependence between consecutive days, randomly sample 200 records out of the whole dataset (all records for the selected region) for modelling. (2 points)

Task 4: Fitting distributions (20 points)
As you may have seen in the previous step, although we are dealing with count data, a Poisson distribution may not provide a good fit. In fact, the unconditional Poisson distribution is too restrictive for most real-world applications. In this task, we will fit a couple of distributions to the total_accidents data using the same sample as in Task 3.4.
Task 4.1: Fitting distributions (4 points)
Fit a Poisson distribution and a negative binomial distribution on total_accidents. You may use functions provided by the package fitdistrplus.
Task 4.2: Compare distributions (6 points)
Compare the log-likelihood of the two fitted distributions. Which distribution fits the data better? Why?
Task 4.3: Try other distributions (research question 1) (10 points)
Find out which distributions the R stats library includes. Try to fit some of them to different accident types. Analyse and explain the results. Write a short report (200 words).

Task 5: Research question 2 (15 points)
There is more than one way to fit a distribution to a set of numbers. Produce a short literature review on different distribution fitting methods, showing the pros and cons of each method. 5 points will be given for relevance of the literature, 7 points for the quality of the comparative analysis of distribution fitting methods, and 3 points for the quality of presentation.

Task 6: Ethics question (7 points)
During your work, have you identified any issues that have ethical implications? (2 points) Do they concern security or privacy? (2 points) Was the risk mitigated? (3 points)

Task 7: Reflection (2 points)
Answer the following questions:
1. What help did you receive from other students? What did you learn from them? (1 point)
2. Please estimate the mark that you will receive for Assignment 1. Provide both a point estimate and an interval estimate (a confidence interval). You don't need to provide a mathematical model, but please explain how you used conditional information to reach the estimates. Based on the conditional information, explain what you would have done differently to improve that mark. (1 point)

What to submit
By the due date, you are required to submit the following files to the assignment:
1. An MS Word or PDF file containing your answers to all the assignment questions.
2. An R notebook file assignment1_submission.rmd filled in with the script for your calculations. The file should be able to run.
Include sufficient comments so that the script can be understood by the marker. Indicate all the packages that need to be installed separately.
(2="" points) to="" reduce="" the="" dependence="" between="" consecutive="" days,="" randomly="" sample="" 200="" records="" out="" of="" the="" whole="" dataset="" (all="" records="" for="" the="" selected="" region)="" for="" modelling (2="" points).="" task="" 4:="" fitting="" distributions="" (20="" points)="" as="" you="" may="" have="" seen="" in="" the="" previous="" step,="" although="" we="" are="" dealing="" with="" count="" data,="" a="" poisson="" distribution="" may="" not="" provide="" a="" good="" fit.="" actually,="" unconditional="" poisson="" distribution="" is="" too="" restrictive="" for="" most="" real-world="" applications.="" in="" this="" task,="" we="" will="" fit="" a="" couple="" of="" distributions="" to="" the="" total_accidents="" data="" using="" the="" same="" sample="" of="" task="" 3.4.="" task="" 4.1:="" fitting="" distributions (4="" points)="" fit="" a="" poisson="" distribution="" and="" a="" negative="" binomial="" distribution="" on total_accidents.="" you="" may="" use="" functions="" provided="" by="" the="" package fitdistrplus.="" task="" 4.2:="" compare="" distributions (6="" points)="" compare="" the="" log-likelihood="" of="" two="" fitted="" distributions.="" which="" distribution="" fits="" the="" data="" better?="" why?="" task="" 4.3:="" try="" other="" distributions="" (research="" question="" 1) (10="" points)="" find="" which="" distributions="" r="" stats="" library="" includes.="" try="" to="" fit="" some="" of="" them="" to="" different="" accident="" types.="" analyse="" and="" explain="" the="" results.="" write="" a="" short="" report="" (200="" words).="" task="" 5:="" research="" question="" 2="" (15="" points)="" there="" is="" more="" than="" one="" way="" to="" fit="" a="" distribution="" to="" a="" set="" of="" numbers.="" produce="" a="" short="" literature="" review="" on="" different="" distribution="" fitting="" methods,="" showing="" the="" pros="" and="" cons="" of="" each="" method. 5="" points will="" be="" given="" to="" relevance="" of="" the="" literature. 7 points will="" be="" given="" for="" the="" quality="" of="" comparative="" analysis="" of="" distribution="" fitting="" methods. 3 points will="" be="" given="" for="" the="" quality="" of="" presentation.="" task="" 6:="" ethics="" question="" (7="" points)="" during="" your="" work,="" have="" you="" identified="" any="" issues="" that="" have="" ethical="" implications? (2="" points) does="" it="" concern="" security="" or="" privacy? (2="" points) was="" the="" risk="" mitigated? (3="" points)="" task="" 7:="" reflection="" (2="" points)="" answer="" the="" following="" questions:="" 1.="" what="" help="" did="" you="" receive="" from="" other="" students?="" what="" did="" you="" learn="" from="" them? (1="" point)="" 2.="" please="" estimate="" the="" mark="" that="" you="" will="" receive="" for="" assignment="" 1.="" please="" provide="" both="" a="" point="" estimate="" and="" an="" interval="" estimate="" (a="" confidence="" interval).="" you="" don’t="" need="" to="" provide="" a="" mathematical="" model,="" but="" please="" explain="" how="" do="" you="" use="" conditional="" information="" to="" reach="" the="" estimates.="" based="" on="" the="" conditional="" information,="" explain="" what="" you="" would="" have="" done="" differently="" to="" improve="" that="" mark? 
(1="" point)="" what="" to="" submit="" by="" the="" due="" date,="" you="" are="" required="" to="" submit="" the="" following="" files="" to="" the="" assignment="" 1.="" an="" ms="" word="" or="" pdf="" file="" containing="" your="" answers="" to="" all="" the="" assignment="" questions.="" 2.="" an="" r="" notebook="" file assignment1_submission.rmd filled="" in="" with="" the="" script="" for="" your="" calculations.="" the="" file="" should="" be="" able="" to="" run.="" include="" sufficient="" comments="" so="" that="" the="" script="" can="" be="" understood="" by="" the="" marker.="" indicate="" all="" the="" packages="" that="" need="" to="" be="" installed="">
Answered 3 days after Aug 23, 2022


Mohd answered on Aug 26 2022
Assignment 1
2022-08-26
Importing required packages
library(dplyr)     # data manipulation
library(markdown)  # Markdown rendering
library(knitr)     # dynamic report generation
library(magrittr)  # pipe operator
library(ggplot2)   # plotting
library(skimr)     # compact data summaries
library(readr)     # reading CSV files (read_csv)
library(stringr)   # string manipulation (str_c)
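The assignment also asks you to indicate which packages need to be installed separately. A minimal one-off sketch (assuming none of them are present yet, and noting that tidyr and fitdistrplus are used in the later tidying and fitting tasks):

# One-off installation of every package used in this notebook (CRAN names);
# tidyr and fitdistrplus are required later for Task 2.2 and Task 4.
install.packages(c("dplyr", "markdown", "knitr", "magrittr", "ggplot2",
                   "skimr", "readr", "stringr", "tidyr", "fitdistrplus"))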
Importing the required data
# Initial import: the first (unnamed) column holds dates in day/month/year format.
caraccidentsvictoria <- read_csv("caraccidentsvictoria.csv",
    col_types = cols(...1 = col_date(format = "%d/%m/%Y")))
# View(caraccidentsvictoria)
Cleaning up the double header rows to build unique column names
csv_data_link <- 'caraccidentsvictoria.csv'
top_row <- read_csv(csv_data_link, col_names = FALSE, n_max = 1)   # first heading row: region names
second_row <- read_csv(csv_data_link, n_max = 1)                   # second heading row: accident types
column_names <- second_row %>%
    unlist(., use.names = FALSE) %>%
    make.unique(., sep = "__")   # double underscore to disambiguate repeated accident-type names
column_names[2:5] <- str_c(column_names[2:5], '0', sep = '__')     # give the first region's columns the "__0" suffix too
daily_accidents <- read_csv(csv_data_link, skip = 2, col_names = column_names,
    col_types = cols(DATE = col_date(format = "%d/%m/%Y")))
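Task 2.1 also asks for a list of the regions. Since the region labels sit in the first heading row captured in top_row, one hedged sketch for printing them (assuming each region name appears once in that row, above its block of accident-type columns, with the remaining cells empty) is:

# Sketch: pull the distinct, non-missing region names out of the first heading row.
regions <- top_row %>%
    unlist(use.names = FALSE) %>%
    na.omit() %>%
    unique()
print(regions)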
First look at the data
skim(daily_accidents)
Data summary
    Name                     daily_accidents
    Number of rows           1827
    Number of columns        29
    Column type frequency:
        Date                 1
        numeric              28
    Group variables          None
Variable type: Date
    skim_variable  n_missing  complete_rate  min         max         median      n_unique
    DATE           0          1              2015-07-01  2020-06-30  2017-12-30  1827
Variable type: numeric
    skim_variable  n_missing  complete_rate  mean  sd    p0  p25  p50  p75  p100  hist
    FATAL__0       0          1              0.08  0.29  0   0    0    0    2     ▇▁▁▁▁
    SERIOUS__0     0          1              0.79  0.92  0   0    1    1    8     ▇▂▁▁▁
    NOINJURY__0    0          1              0.00  0.00  0   0    0    0    0     ▁▁▇▁▁
    OTHER__0       0          1              1.49  1.32  0   1    1    2    9     ▇▅▁▁▁
    FATAL__1       0          1              0.16  0.40  0   0    0    0    3     ▇▁▁▁▁
    SERIOUS__1     0          1              4.48  2.79  0   2    4    6    17    ▇▇▅▁▁
    NOINJURY__1    1          1              0.00  0.00  0   0    0    0    0     ▁▁▇▁▁
    OTHER__1       1          1              9.50  4.54  0   6    9    12   33    ▃▇▂▁▁
    FATAL__2       0          1              0.23  0.48  0   0    0    0    3     ▇▂▁▁▁
    SERIOUS__2     0          1              5.32  3.02  0   3    5    7    18    ▅▇▃▁▁
    NOINJURY__2    0          1              0.00  0.03  0   0    0    0    1     ▇▁▁▁▁
    OTHER__2       0          1              8.80  4.16  0   6    8    11   30    ▅▇▂▁▁
    FATAL__3       1          1              0.10  0.32  0   0    0    0    2     ▇▁▁▁▁
    SERIOUS__3     0          1              0.93  1.09  0   0    1    1    8     ▇▂▁▁▁
    NOINJURY__3    0          1              0.00  0.00  0   0    0    0    0     ▁▁▇▁▁
    OTHER__3       0          1              1.34  1.29  0   0    1    2    9     ▇▅▁▁▁
    FATAL__4       0          1              0.08  0.28  0   0    0    0    2     ▇▁▁▁▁
    SERIOUS__4     0          1              0.88  0.98  0   0    1    1    6     ▇▂▁▁▁
    NOINJURY__4    0          1              0.00  0.00  0   0    0    0    0     ▁▁▇▁▁
    OTHER__4       0          1              1.33  1.22  0   0    1    2    9     ▇▅▁▁▁
    FATAL__5       0          1              0.10  0.32  0   0    0    0    2     ▇▁▁▁▁
    SERIOUS__5     1          1              1.37  1.28  0   0    1    2    7     ▇▃▂▁▁
    NOINJURY__5    0          1              0.00  0.00  0   0    0    0    0     ▁▁▇▁▁
    OTHER__5       1          1              1.65  1.42  0   1    1    2    9     ▇▆▁▁▁
    FATAL__6       0          1              0.10  0.32  0   0    0    0    2     ▇▁▁▁▁
    SERIOUS__6     0          1              0.72  0.86  0   0    1    1    5     ▇▁▁▁▁
    NOINJURY__6    0          1              0.00  0.00  0   0    0    0    0     ▁▁▇▁▁
    OTHER__6       0          1              1.44  1.29  0   0    1    2    8     ▇▅▁▁▁
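The skim output already hints at over-dispersion in several of the count columns (for example OTHER__1 has mean 9.50 but sd 4.54, so its variance is roughly double its mean), which matters for the Poisson assumption in Task 3.4. A minimal hedged check, comparing mean and variance for every count column, could be:

# Sketch: Poisson assumes mean ≈ variance, so compare the two for each numeric column.
daily_accidents %>%
    summarise(across(where(is.numeric),
                     list(mean = ~ mean(.x, na.rm = TRUE),
                          var  = ~ var(.x, na.rm = TRUE))))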
Applying pivot_longer to reshape the data into a tidier form
library(tidyr)
daily_accidents_1 <- pivot_longer(daily_accidents, cols = FATAL__0:OTHER__0, names_to = "Eastern", values_to = "Value_0")
daily_accidents_2 <- pivot_longer(daily_accidents, cols = FATAL__1:OTHER__1, names_to = "north_west", values_to = "Value_1")
daily_accidents_3 <- pivot_longer(daily_accidents, cols = FATAL__2:OTHER__2, names_to = "SOUTH_EAST", values_to = "Value_2")
daily_accidents_4 <- pivot_longer(daily_accidents, cols = FATAL__3:OTHER__3, names_to = "NORTH_EASTERN", values_to = "Value_3")
daily_accidents_5 <- pivot_longer(daily_accidents, cols = FATAL__4:OTHER__4, names_to = ...
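The excerpt is cut off here, before the exploratory analysis and the fitting tasks. For reference, a minimal hedged sketch of Tasks 4.1 and 4.2 (assuming a data frame region_data for the chosen region with the total_accidents column from Task 3.1, plus the 200-record sample required by Task 3.4; both names are placeholders, not part of the answer above) could look like:

library(fitdistrplus)

# Hypothetical sample of 200 daily totals for the selected region (Task 3.4);
# region_data and total_accidents are assumed to exist from Task 3.1.
set.seed(2022)
sample_accidents <- sample(region_data$total_accidents, size = 200)

# Task 4.1: fit a Poisson and a negative binomial distribution to the counts.
fit_pois   <- fitdist(sample_accidents, distr = "pois")
fit_nbinom <- fitdist(sample_accidents, distr = "nbinom")

# Task 4.2: compare log-likelihoods; the larger (less negative) value fits better.
c(poisson = fit_pois$loglik, nbinom = fit_nbinom$loglik)

Because both models are fitted to the same sample, their log-likelihoods are directly comparable; gofstat() from fitdistrplus additionally reports AIC and BIC, which penalise the extra dispersion parameter of the negative binomial.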