By the early years of the 18th century, astronomers had fairly well determined the relative dimensions of the solar system, the relative distances between planetary orbits and between the planets and the sun. However, they lacked precise information on the absolute dimensions of the solar system, and were eager to determine even one such distance in miles, in particular the mean distance from the earth to the sun, from which all others could be found. Actually, the quantity that 18th century astronomers chose to pursue was the parallax of the sun: the angle subtended by the earth's radius, as if viewed and measured from the surface of the sun. From this angle and available knowledge of the physical dimensions of the earth, the mean distance from earth to sun (or astronomical unit) could be easily determined.

The astronomer Edmund Halley (1656-1742) is generally credited with having been the first to suggest that the parallax of the sun could be determined by observing a "transit of Venus," the apparent passage of the planet across the face of the sun, as viewed from earth. If observers were dispatched to all corners of the globe from which this transit would be visible, and they carefully recorded their positions and the elapsed time of the transit (on the order of 5.5 hours), then each pair of observers would furnish one determination of the parallax of the sun. Unfortunately for the implementation of Halley's plan, transits of Venus are quite rare, owing to the 3°36′ inclination between the orbits of Venus and Earth. The first recorded transit of Venus occurred in 1639 and was observed only in England; the next transits were due in 1761 and 1769, with later transits due in 1874, 1882, 2004 and 2012.

By 1761, interest in the forthcoming transit was high, and observations were made at the Cape of Good Hope and in Calcutta, Rome, Stockholm and most other European observatories. A good account of the transits of 1761 and 1769 can be found in Woolf (1959); an excellent discussion of the data generated by these transits is given by Newcomb (1891).

James Short was a maker of optical instruments, particularly telescopes. He was born in Edinburgh, Scotland, and entered the University of Edinburgh in 1726, where he attended lectures by the mathematician Colin Maclaurin. Short is perhaps best remembered for his observations of the transit of Venus on June 6, 1761, one of the two most important astronomical events of the mid-eighteenth century (the other being the transit of Venus in 1769). Short recorded 53 observations and analysed them in a "robust" manner: he took the mean of all n = 53 determinations (8.61), then rejected all results differing from 8.61 by more than 1.00 and obtained the mean of the remainder (8.55), then rejected all results differing from 8.61 by more than 0.50 and obtained the mean of the remainder (8.57); finally he took the mean of 8.61, 8.55 and 8.57 to obtain the sun's parallax as 8.58. He applied similar analyses to other data sets of the same phenomenon.

The parallax.txt file contains James Short's measurements of the parallax of the sun (in seconds of a degree), based on the 1761 transit of Venus. The data were originally published in the Philosophical Transactions of the Royal Society of London. The table with the observations and the descriptive information here are adapted from an article by Stephen Stigler in the Annals of Statistics. The number of cases is 53 and the true value is 8.798.
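Short's trimming procedure is easy to express in code. The following is a minimal R sketch, assuming the 53 parallax determinations have already been loaded into a numeric vector x (the file layout in the commented read line is an assumption):

# Short's three-step trimmed-mean procedure; the values in the
# comments are the ones he reported (8.61, 8.55, 8.57, then 8.58).
# x <- read.table("parallax.txt", header = TRUE)[, 1]  # assumed layout
m1 <- mean(x)                          # mean of all 53 determinations (8.61)
m2 <- mean(x[abs(x - m1) <= 1.00])     # drop results more than 1.00 from m1 (8.55)
m3 <- mean(x[abs(x - m1) <= 0.50])     # drop results more than 0.50 from m1 (8.57)
short_estimate <- mean(c(m1, m2, m3))  # Short's reported parallax: 8.58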
The parallax of the sun is the angle subtended by the earth, as seen from the surface of the sun.

Michelson's Velocity of Light Data (measurements 1-100, read left to right, ten per row):

 850  740  900 1070  930  850  950  980  980  880
1000  980  930  650  760  810 1000 1000  960  960
 960  940  960  940  880  800  850  880  900  840
 830  790  810  880  880  830  800  790  760  800
 880  880  880  860  720  720  620  860  970  950
 880  910  850  870  840  840  850  840  840  840
 890  810  810  820  800  770  760  740  750  760
 910  920  890  860  880  720  840  850  850  780
 890  840  780  810  760  810  790  810  820  850
 870  870  810  740  810  940  950  800  810  870

Suppose that you are James Short's data analyst at that time and you are asked to analyse his data in a Bayesian way. Note that small α (shape) and β (scale) values have a small impact on the conditional distribution of σ².

(a) Identify and define the parameters of the model.
(b) Specify the prior distributions. Justify your answer.
(c) Implement a Gibbs sampling algorithm. Comment on the MCMC specifications and the effective sample sizes (ESS). Generate trace plots of the posterior of the parameters and comment on them.
(d) Generate a 95% credible interval for µ. Interpret your results and propose a point estimate for this parameter. Justify your answer.
(e) We now know that the true value of the parallax of the sun is 8.798. What do you conclude about Short's and the Bayesian estimates? Justify your answer; don't forget to consider the relative error.
(f) Calculate the 95% confidence interval for µ and interpret it in the context of this problem. Compare it to the credible interval calculated previously and discuss their difference.
(g) Write down the JAGS model (a sketch is given after this list).
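For part (g), one possible JAGS model is sketched below, assuming a normal likelihood for the parallax measurements with vague priors; the specific hyperparameter values (1.0E-4, 0.01) are illustrative assumptions, not the only defensible choices:

model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu, tau)    # normal likelihood; JAGS parameterizes by precision
  }
  mu  ~ dnorm(0, 1.0E-4)     # vague normal prior on the mean parallax
  tau ~ dgamma(0.01, 0.01)   # vague gamma prior on the precision
  sigma2 <- 1 / tau          # implied variance, for reporting
}

Here y (Short's 53 observations) and n = 53 are supplied as data.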
MCMC methods

MCMC
Markov chain Monte Carlo (MCMC) is a simulation algorithm that generates a dependent sample θ1, . . . , θn from the target distribution f(θ).

Use
An expected value can be approximated as follows:

    E(h(θ)) = ∫_Θ h(θ) f(θ) dθ ≈ (1/n) Σ_{i=1}^{n} h(θ_i),

where θ_i ∼ f(θ) for i = 1, . . . , n.

Use: Bayesian statistics
Bayes' theorem is given by

    p(θ | M, X) = L(X | M, θ) π(θ | M) / z,

where θ ∈ Θ is the parameter vector, L the likelihood, π the prior, p the posterior, X the data set, M the model, and z the marginal likelihood, given by z = ∫_Θ L(X | M, θ) π(θ | M) dθ. We use MCMC methods to:
• study the posterior distribution,
• estimate z.

History
• MCMC was introduced by physicists: N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines", Journal of Chemical Physics, 21(6):1087-1092, 1953.
• The Metropolis algorithm was generalized in the 1970s by the statistician Hastings.
• The applied mathematicians Geman and Geman (1984) used the Gibbs sampler in image restoration problems.
• Since the early 1990s, MCMC methods have revolutionized Bayesian statistics.

Metropolis-Hastings algorithm
A proposal θ∗ ∼ q(θ∗ | θn) is accepted according to the acceptance ratio

    α(θ∗ | θn) = min{ 1, [f(θ∗)/f(θn)] · [q(θn | θ∗)/q(θ∗ | θn)] }.

Otherwise, the chain stays at the point θn.
• f is the target distribution.
• q is the proposal distribution.
• θn is the current position.
• θ∗ is the proposed position.

Metropolis algorithm
If q is symmetric, i.e., q(θn | θ∗) = q(θ∗ | θn), the ratio reduces to

    α(θ∗ | θn) = min{ 1, f(θ∗)/f(θn) }.

Note that the M-H algorithm only requires f up to a (normalization) constant: if f(θ) = g(θ)/c, then f(θ∗)/f(θn) = g(θ∗)/g(θn), so we only need to know g(θ). This is why z (the marginal likelihood in Bayes' theorem) is generally ignored in Bayesian parameter inference.

MH proposal
Consider α(θ∗ | θn) = min{ 1, f(θ∗)/f(θn) } in the following cases.
[Figure: a target density f(θ) with the current position θn and two proposal positions θ∗: θ̂, where the density is lower than at θn, and θ̃, where it is higher.]
• θ̂ is accepted with probability f(θ̂)/f(θn).
• θ̃ is accepted with probability 1.

MH algorithm
1. Initialize θ0.
2. For n = 0, . . . , N − 1:
   i. Sample θ∗ from q(θ | θn).
   ii. Sample u from Uniform(0, 1).
   iii. If u ≤ α(θ∗ | θn), set θn+1 = θ∗; otherwise set θn+1 = θn.

Exercise 8.1
Sample from N(0, 1) using proposals from Uniform(θn − 2, θn + 2). Specifications:
• Starting value = 0.
• Starting value = −100.
• Starting value = −100, using log-densities.

A simple R implementation of the MH algorithm:

mh = function(n, starting_value, logden = FALSE){
  out = NULL                          # to store the sampled points
  current_value = starting_value
  current_den = dnorm(current_value, log = logden)
  for(i in 1:n){
    proposal_value = runif(1, min = current_value - 2, max = current_value + 2)
    proposal_den = dnorm(proposal_value, log = logden)
    if(logden == TRUE){
      alpha = proposal_den - current_den   # log acceptance ratio
      u = log(runif(1))
    }else{
      alpha = proposal_den / current_den   # acceptance ratio
      u = runif(1)
    }
    if(u <= alpha){                   # the proposal is accepted
      out[i] = proposal_value
      current_value = proposal_value
      current_den = proposal_den
    }else{                            # the proposal is rejected
      out[i] = current_value
    }
  }                                   # end loop
  return(out)
}

> x = mh(n = 5000, starting_value = 0, logden = FALSE)
[Figure: density histogram of the 5000 draws, roughly standard normal, and the trace plot of x over iterations 1-5000.]

> x = mh(n = 5000, starting_value = -100, logden = FALSE)
Error in if (u <= alpha) { : missing value where TRUE/FALSE needed

Problem: far in the tails, dnorm(proposal_value, log = FALSE) == 0 and dnorm(current_value, log = FALSE) == 0, so alpha = proposal_den / current_den = 0/0 = NaN, and the comparison u <= alpha yields the missing value. In applications such as phylogenetics the likelihood values are so small that the log-likelihood is used instead.
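A usage sketch for the third specification (log-densities); the call mirrors the failing one above, and the set.seed line is an illustrative addition for reproducibility:

set.seed(8)  # illustrative assumption: any seed works
x = mh(n = 5000, starting_value = -100, logden = TRUE)
# With log-densities the acceptance ratio is a difference of finite
# log values (dnorm(-100, log = TRUE) is about -5000.9) rather than
# the 0/0 ratio, so the chain can walk back from the far tail
# toward the bulk of N(0, 1).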
Answered 4 days after May 21, 2021

Answer To: By the early years of the 18th century, astronomers had fairly well determined the relative...

Saravana answered on May 25 2021
134 Votes
#######################################
# This function is a Gibbs sampler
## Args:
# start.a: initial value for a
# start.b: initial value for b
# n.sims: number of iterations to run
# data: observed data, should be in a
#       data frame with one column
## Returns:
# A two-column matrix with samples
# for a in the first column and
# samples for b in the second column
#######################################
rm(list = ls())
library(invgamma)

## data is read from parallax.txt
data <- read.csv("~/Downloads/parallax.txt", header = TRUE)

sampleGibbs <- ...
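The posted sampleGibbs is truncated at this point. As a stand-in, here is a minimal self-contained sketch of a Gibbs sampler for the model the question suggests, y_i ~ N(mu, sigma2) with a normal prior on mu and an inverse-gamma prior on sigma2; the function signature and hyperparameter defaults are assumptions, not the answerer's actual code:

# Gibbs sampler for y_i ~ N(mu, sigma2), with priors
# mu ~ N(mu0, tau20) and sigma2 ~ Inv-Gamma(a, b) (b is the IG scale).
sampleGibbs <- function(y, n.sims = 10000,
                        mu0 = 0, tau20 = 100, a = 0.01, b = 0.01) {
  n    <- length(y)
  ybar <- mean(y)
  draws <- matrix(NA, n.sims, 2, dimnames = list(NULL, c("mu", "sigma2")))
  mu <- ybar; sigma2 <- var(y)          # starting values
  for (s in 1:n.sims) {
    # full conditional of mu given sigma2: normal
    v  <- 1 / (n / sigma2 + 1 / tau20)
    m  <- v * (n * ybar / sigma2 + mu0 / tau20)
    mu <- rnorm(1, m, sqrt(v))
    # full conditional of sigma2 given mu: inverse-gamma,
    # drawn as the reciprocal of a gamma variate
    sigma2 <- 1 / rgamma(1, a + n / 2, b + sum((y - mu)^2) / 2)
    draws[s, ] <- c(mu, sigma2)
  }
  draws
}

# usage (assumes the parallax values are in the first column):
# post <- sampleGibbs(data[, 1])
# quantile(post[, "mu"], c(0.025, 0.975))  # 95% credible interval for mu

The small values a = b = 0.01 echo the hint in the question that small shape and scale values have little impact on the conditional distribution of σ².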