Questions are in attached files


Problem 4 (20 points)

Suppose $(Y_1, I_1), \dots, (Y_n, I_n)$ are i.i.d. observations defined as follows. With probability $\gamma$, $Y_i$ is drawn from a $N(\mu_1, 1)$ distribution, and with probability $1 - \gamma$, $Y_i$ is drawn from a $N(\mu_2, 1)$ distribution. The variable $I_i$ indicates which distribution $Y_i$ is drawn from, i.e.

$I_i = \begin{cases} 1, & \text{if } Y_i \sim N(\mu_1, 1) \\ 0, & \text{otherwise,} \end{cases}$

where $\gamma$, $\mu_1$, $\mu_2$ are unknown parameters. Define the following statistics:

$Z = \sum_{i=1}^n I_i$,

$\bar Y_1 = \begin{cases} \sum_{i=1}^n Y_i I_i / Z, & \text{if } Z > 0 \\ 0, & \text{if } Z = 0, \end{cases} \qquad \bar Y_2 = \begin{cases} \sum_{i=1}^n Y_i (1 - I_i) / (n - Z), & \text{if } Z < n \\ 0, & \text{if } Z = n. \end{cases}$

Now consider a single observation $y$ from the model $y \mid \theta_1, \theta_2 \sim N(\theta_1 + \theta_2, 1)$, with independent priors $\theta_i \sim N(a_i, b_i)$, $i = 1, 2$ (where $b_i$ denotes the prior variance), and define $\mu = \theta_1 + \theta_2$.

1. Show that the full conditional distributions $p(\theta_1 \mid \theta_2, y)$ and $p(\theta_2 \mid \theta_1, y)$ are available in closed form. Derive these distributions.
2. Now derive the marginal posterior distributions $p(\theta_1 \mid y)$ and $p(\theta_2 \mid y)$. Do the data update the prior distributions for these parameters?
3. Set $a_1 = a_2 = 50$, $b_1 = b_2 = 1000$, and suppose we observe $y = 0$. Run the Gibbs sampler defined in part 1 for $t = 100$ iterations, starting your chains near the prior mean (say, between 40 and 50), and monitoring the progress of $\theta_1$, $\theta_2$, and $\mu$. Does this algorithm "converge" in any sense? Estimate the posterior mean of $\mu$. Does your answer change using $t = 1000$ iterations? (A sketch of this experiment appears after the problem statements below.)
4. Now keep the same values for $a_1$ and $a_2$, but set $b_1 = b_2 = 10$. Again run 100 iterations using the same starting values as in part 3. What is the effect on convergence? Repeat for $t = 1000$ iterations; is your estimate of $E(\mu \mid y)$ unchanged?

Problem 2 (15 points)

For the following data we will consider two competing models, $H_1$: a linear regression, and $H_2$: a quadratic regression.

$x_i$: -1.9, -0.39, 0.79, -0.20, 0.42, -0.35, 0.67, 0.63, -0.024, 1.2
$y_i$: -1.7, -0.23, 0.50, -0.66, 1.97, 0.10, 0.60, 1.13, 0.943, 2.6

Model $H_1$: $Y_i = \beta_1 + \beta_2 x_i + \epsilon_i$, $i = 1, \dots, n$, with $\epsilon_i \sim N(0, 1)$ i.i.d., $\beta_1 \sim N(0, 1)$, $\beta_2 \sim N(1, 1)$, and $\beta_1$, $\beta_2$ a priori independent.

Model $H_2$: $Y_i = \gamma_1 + \gamma_2 x_i + \gamma_3 x_i^2 + \epsilon_i$, $i = 1, \dots, n$, with $\epsilon_i \sim N(0, 1)$ i.i.d., $\gamma_1 \sim N(0, 1)$, $\gamma_2 \sim N(1, 1)$, $\gamma_3 \sim N(0, 1)$, and $\gamma_1$, $\gamma_2$, $\gamma_3$ a priori independent.

1. Find the marginal distributions $p(y \mid H_1) = \int p(y \mid \beta)\, p(\beta)\, d\beta$ and $p(y \mid H_2) = \int p(y \mid \gamma)\, p(\gamma)\, d\gamma$. (Hint: writing the regression models in matrix form makes the marginal distributions easier to derive.)
2. Write down the Bayes factor $B = p(y \mid H_2) / p(y \mid H_1)$ for comparing model $H_1$ vs. model $H_2$, and evaluate it for the given data set (a numerical sketch follows the problem statements).
3. We now replace the prior distributions by improper constant priors: $p(\beta) = c_1$ in model $H_1$ and $p(\gamma) = c_2$ in model $H_2$. We can still evaluate the integrals $\int p(y \mid \beta)\, p(\beta)\, d\beta$ and $\int p(y \mid \gamma)\, p(\gamma)\, d\gamma$ and define a Bayes factor

$B = \dfrac{\int p(y \mid \gamma)\, p(\gamma)\, d\gamma}{\int p(y \mid \beta)\, p(\beta)\, d\beta}.$

Show that the value of this Bayes factor depends on the arbitrarily chosen constants $c_1$ and $c_2$.
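For Problem 4, parts 3 and 4, the following is a minimal sketch of the experiment in Python. It assumes the model as reconstructed above ($y \sim N(\theta_1 + \theta_2, 1)$, $\theta_i \sim N(a_i, b_i)$ with $b_i$ a variance, $\mu = \theta_1 + \theta_2$); under those assumptions each full conditional is normal, e.g. $\theta_1 \mid \theta_2, y \sim N\big((b_1(y - \theta_2) + a_1)/(b_1 + 1),\; b_1/(b_1 + 1)\big)$. The function name and starting values are illustrative, not part of the assignment.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs(y, a1, a2, b1, b2, t, theta1=45.0, theta2=45.0):
    # Alternate draws from the two normal full conditionals:
    # theta1 | theta2, y ~ N((b1*(y - theta2) + a1)/(b1 + 1), b1/(b1 + 1)),
    # and symmetrically for theta2.
    draws = np.empty((t, 3))  # columns: theta1, theta2, mu = theta1 + theta2
    for s in range(t):
        v1 = b1 / (b1 + 1.0)
        theta1 = rng.normal(v1 * ((y - theta2) + a1 / b1), np.sqrt(v1))
        v2 = b2 / (b2 + 1.0)
        theta2 = rng.normal(v2 * ((y - theta1) + a2 / b2), np.sqrt(v2))
        draws[s] = (theta1, theta2, theta1 + theta2)
    return draws

# Part 3: vague priors (b = 1000). The chain for mu settles quickly, while
# theta1 and theta2 wander individually: the data only inform their sum.
out = gibbs(y=0.0, a1=50.0, a2=50.0, b1=1000.0, b2=1000.0, t=100)
print("E(mu | y) estimate:", out[:, 2].mean())
# Part 4: rerun with b1 = b2 = 10 (and t = 1000) to compare convergence.
```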
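For Problem 2, part 2, a hedged numerical sketch: writing $y = Xb + \epsilon$ with $\epsilon \sim N(0, I_n)$ and $b \sim N(m, V)$ and integrating out the coefficients gives $y \mid H \sim N(Xm,\; XVX^\top + I_n)$, so the Bayes factor is a ratio of two multivariate normal densities evaluated at the observed $y$. The helper name `log_marginal` is illustrative; SciPy is assumed to be available.

```python
import numpy as np
from scipy.stats import multivariate_normal

x = np.array([-1.9, -0.39, 0.79, -0.20, 0.42, -0.35, 0.67, 0.63, -0.024, 1.2])
y = np.array([-1.7, -0.23, 0.50, -0.66, 1.97, 0.10, 0.60, 1.13, 0.943, 2.6])

def log_marginal(X, m, V):
    # Coefficients integrated out: y | H ~ N(X m, X V X^T + I_n).
    return multivariate_normal(mean=X @ m, cov=X @ V @ X.T + np.eye(len(y))).logpdf(y)

X1 = np.column_stack([np.ones_like(x), x])        # H1: intercept + linear term
X2 = np.column_stack([np.ones_like(x), x, x**2])  # H2: adds a quadratic term
log_m1 = log_marginal(X1, np.array([0.0, 1.0]), np.eye(2))       # beta ~ N((0,1), I2)
log_m2 = log_marginal(X2, np.array([0.0, 1.0, 0.0]), np.eye(3))  # gamma ~ N((0,1,0), I3)
print("Bayes factor B = p(y|H2)/p(y|H1):", np.exp(log_m2 - log_m1))
```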
Answered 1 day after Nov 13, 2022


Banasree answered on Nov 14 2022
40 Votes
Problem 4.
Ans.
1. Likelihood

Splitting the sample according to the indicators,

$L_1 = \prod_{i : I_i = 1} \gamma\, (2\pi)^{-1/2} \exp\!\big[-(y_i - \mu_1)^2 / 2\big] \quad \dots (1)$

$L_2 = \prod_{i : I_i = 0} (1 - \gamma)\, (2\pi)^{-1/2} \exp\!\big[-(y_i - \mu_2)^2 / 2\big] \quad \dots (2)$

Simplifying (1) and (2):

$L = L_1 L_2 = \gamma^Z (1 - \gamma)^{n - Z} (2\pi)^{-n/2} \exp\!\Big[-\tfrac{1}{2}\sum_i I_i (y_i - \mu_1)^2 - \tfrac{1}{2}\sum_i (1 - I_i)(y_i - \mu_2)^2\Big]$

$\propto \gamma^Z (1 - \gamma)^{n - Z} \exp\!\big[-\tfrac{Z}{2}(\mu_1 - \bar Y_1)^2\big] \exp\!\big[-\tfrac{n - Z}{2}(\mu_2 - \bar Y_2)^2\big],$

using the decomposition $\sum_i I_i (y_i - \mu_1)^2 = \sum_i I_i (y_i - \bar Y_1)^2 + Z(\mu_1 - \bar Y_1)^2$ (and its analogue for the second component) and dropping factors that do not involve $(\gamma, \mu_1, \mu_2)$.
2. Conjugate priors for $\mu_1$, $\mu_2$, and $\gamma$

As a function of $(\gamma, \mu_1)$, the likelihood above contributes the kernel

$\gamma^Z \exp\!\big[-\tfrac{Z}{2}(\mu_1 - \bar Y_1)^2\big],$

and as a function of $(\gamma, \mu_2)$ it contributes

$(1 - \gamma)^{n - Z} \dots$
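The two kernels above say that, conditionally, the likelihood is Beta-shaped in $\gamma$ and normal-shaped in each $\mu_j$, so a Beta prior on $\gamma$ and normal priors on $\mu_1, \mu_2$ give closed-form Gibbs updates. Below is a minimal sketch of one such pass; the hyperparameters $\alpha, \beta, m_j, v_j$ are hypothetical placeholders, since the answer preview stops before any priors are specified.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_pass(y, I, alpha=1.0, beta=1.0, m=(0.0, 0.0), v=(100.0, 100.0)):
    # One sweep of the conditionally conjugate updates implied by the kernels.
    n, Z = len(y), int(I.sum())
    ybar1 = y[I == 1].mean() if Z > 0 else 0.0
    ybar2 = y[I == 0].mean() if Z < n else 0.0
    # gamma^Z (1 - gamma)^(n - Z) times a Beta(alpha, beta) prior is Beta again:
    gamma = rng.beta(alpha + Z, beta + n - Z)
    # exp[-Z/2 (mu1 - ybar1)^2] times a N(m1, v1) prior is a normal kernel:
    prec1 = Z + 1.0 / v[0]
    mu1 = rng.normal((Z * ybar1 + m[0] / v[0]) / prec1, np.sqrt(1.0 / prec1))
    prec2 = (n - Z) + 1.0 / v[1]
    mu2 = rng.normal(((n - Z) * ybar2 + m[1] / v[1]) / prec2, np.sqrt(1.0 / prec2))
    return gamma, mu1, mu2

# Toy usage with six observations and known labels:
y_demo = np.array([-0.1, 0.2, 0.0, 2.9, 3.1, 3.0])
I_demo = np.array([1, 1, 1, 0, 0, 0])
print(gibbs_pass(y_demo, I_demo))
```

Here the indicators $I_i$ are treated as observed, matching the problem statement, so they are not redrawn within the sweep.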

Answer To This Question Is Available To Download
