conditional posterior distribution. That is, from the posterior distribution for
the linear regression:
$$ w_{ij} - r_i = x_i^T\beta_j + \sum_{g=1}^{G} p_{jg}\,\mu_{jg} + \epsilon_i, \qquad \text{for } i = 1,\dots,n, $$
with response $w_{ij} - r_i$. The residuals are $\epsilon_1,\dots,\epsilon_n \sim N(0,\sigma^2_\epsilon)$ and the prior
is the multivariate normal distribution resulting from the product of (2.10)
and the prior distribution of $(\mu_{j1},\dots,\mu_{jG})$ when marginalizing with respect to
$\phi$. The latter is a multivariate normal distribution of dimension $G$ with mean
zero and covariance matrix $\sigma^2_\mu I_G + \sigma^2_\phi 1_G 1_G^T$, where $I_G$ is the identity matrix of
dimension $G$ and $1_G$ is the column vector of length $G$ with all entries equal to
1. (A code sketch of this conjugate update follows step (c) below.)
(b) For each $i = 1,\dots,n$, we sample $r_i$ from the complete conditional posterior dis-
tribution, or, equivalently, from the posterior distribution for a linear regression
with response $y_j^* = w_{ij} - \big(x_i^T\beta_j + \sum_{g=1}^{G} p_{jg}\,\mu_{jg}\big)$,
$$ y_j^* = r_i + \epsilon_j, \qquad \text{for } j = 1,\dots,J, $$
with $\epsilon_1,\dots,\epsilon_J \sim N(0,\sigma^2_\epsilon)$ and prior $r_i \sim N(0,\sigma^2_r)$ (also sketched after step (c)).
(c) We update $\beta_j$ with a random walk Metropolis-Hastings transition probability. We
generate $\tilde{\beta} \sim N(\beta_j, c^2)$, where $c > 0$, and compute the posterior distribution of
$\beta_j$ (marginalizing with respect to $w$):
$$ p(\beta_j \mid \mu, r, p, z) \;\propto\; f_{\beta_j}(\beta_j) \prod_{i=1}^{n} \Big[ \Phi_{x_i^T\beta_j,\sigma^2_\epsilon}\Big(\theta_{z_{ij}+1,j} - r_i - \sum_{g=1}^{G} p_{jg}\,\mu_{jg}\Big) - \Phi_{x_i^T\beta_j,\sigma^2_\epsilon}\Big(\theta_{z_{ij},j} - r_i - \sum_{g=1}^{G} p_{jg}\,\mu_{jg}\Big) \Big], $$
where $\Phi_{\mu,\sigma^2}(x)$ represents the normal cdf with mean $\mu$ and variance $\sigma^2$ evaluated
at $x$. Let $A = p(\tilde{\beta} \mid \mu, r, p, z)/p(\beta_j \mid \mu, r, p, z)$. With probability $\min\{1, A\}$ we
replace $\beta_j$ by $\tilde{\beta}$.
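The update in step (a) is a standard conjugate draw from a multivariate normal. The following is a minimal sketch of how such a draw can be coded, assuming the reconstruction of the regression in step (a); the function and argument names are illustrative rather than taken from the text, the $x_i^T\beta_j$ term is treated as a known offset, and only the $N(0, \sigma^2_\mu I_G + \sigma^2_\phi 1_G 1_G^T)$ component of the prior is included (the factor coming from (2.10) is omitted).

```python
import numpy as np

def sample_mu_j(w_col, r, xbeta_j, P, sig2_eps, sig2_mu, sig2_phi, rng):
    """Draw (mu_j1, ..., mu_jG) from its Gaussian full conditional (step (a)).

    w_col   : length-n vector of latent variables w_ij for column j
    r       : length-n vector of effects r_i
    xbeta_j : length-n vector with entries x_i^T beta_j, treated as a known offset
    P       : n x G design matrix whose i-th row holds the weights multiplying
              (mu_j1, ..., mu_jG) in the regression of step (a)
    """
    y = w_col - r - xbeta_j                                     # response minus known offset
    G = P.shape[1]
    prior_cov = sig2_mu * np.eye(G) + sig2_phi * np.ones((G, G))
    post_prec = P.T @ P / sig2_eps + np.linalg.inv(prior_cov)   # posterior precision
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (P.T @ y) / sig2_eps                 # prior mean is zero
    return rng.multivariate_normal(post_mean, post_cov)
```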
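Step (b) is an analogous conjugate draw, now scalar. A minimal sketch under the same caveats on notation; here $J$ stands for the number of columns $j$, a symbol not recoverable from this excerpt.

```python
import numpy as np

def sample_r_i(w_row, xbeta_row, offset_row, sig2_eps, sig2_r, rng):
    """Draw r_i from its Gaussian full conditional (step (b)).

    w_row      : latent variables w_ij for row i across the J columns
    xbeta_row  : x_i^T beta_j for each column j
    offset_row : sum_g p_jg mu_jg for each column j
    """
    y_star = w_row - (xbeta_row + offset_row)          # responses y_j^*
    J = len(y_star)
    post_var = 1.0 / (J / sig2_eps + 1.0 / sig2_r)     # combine J likelihood terms and prior
    post_mean = post_var * y_star.sum() / sig2_eps     # prior mean is zero
    return rng.normal(post_mean, np.sqrt(post_var))
```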
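Step (c) is a random walk Metropolis-Hastings move, conveniently coded on the log scale. The sketch below assumes the reconstruction of the marginal posterior displayed in step (c), namely a difference of normal cdfs at the cutoffs $\theta_{z_{ij},j}$ and $\theta_{z_{ij}+1,j}$, and uses a generic multivariate normal prior as a stand-in for $f_{\beta_j}$, whose form is not given in this excerpt; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def log_post_beta_j(beta_j, X, r, offset, theta_lo, theta_up, sig_eps, prior_cov):
    """log p(beta_j | mu, r, p, z) up to an additive constant (w marginalized)."""
    m = X @ beta_j + r + offset                        # r_i + x_i^T beta_j + sum_g p_jg mu_jg
    probs = (norm.cdf((theta_up - m) / sig_eps)
             - norm.cdf((theta_lo - m) / sig_eps))     # cdf difference for each i
    loglik = np.sum(np.log(np.clip(probs, 1e-300, None)))
    logprior = multivariate_normal.logpdf(beta_j, mean=np.zeros(len(beta_j)),
                                          cov=prior_cov)   # placeholder for f_{beta_j}
    return loglik + logprior

def update_beta_j(beta_j, c, rng, **post_args):
    """One random walk Metropolis-Hastings update of beta_j (step (c))."""
    proposal = beta_j + c * rng.standard_normal(len(beta_j))  # draw from N(beta_j, c^2 I)
    log_A = (log_post_beta_j(proposal, **post_args)
             - log_post_beta_j(beta_j, **post_args))
    if np.log(rng.uniform()) < log_A:                  # accept with probability min{1, A}
        return proposal
    return beta_j
```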