If (i) E[ω1 | θ, ω2] = 0 and (ii) ω2 is independent of θ, then the density of θ can be expressed
in terms of observable quantities as:
$$
p_\theta(\theta) = (2\pi)^{-L} \int_{\mathbb{R}^L} e^{-i\chi\cdot\theta} \exp\left( \int_0^{\chi} \frac{i\,E\!\left[W_1 e^{i\zeta\cdot W_2}\right]}{E\!\left[e^{i\zeta\cdot W_2}\right]} \cdot d\zeta \right) d\chi,
$$
where in this expression i = √−1, provided that all the requisite expectations exist and
E[e^{iζ·W_2}] is nonvanishing. Note that the innermost integral is the integral of a vector-valued
field along a piecewise smooth path joining the origin and the point χ ∈ R^L, while the
outermost integral is over the whole space R^L. If θ does not admit a density with respect to the
Lebesgue measure, p_θ(θ) can be interpreted within the context of the theory of distributions.
If some elements of θ are perfectly measured, one may simply set the corresponding elements
of W1 and W2 to be equal. In this way, the joint distribution of mismeasured and perfectly
measured variables is identified.
Proof. See Web Appendix, Part 3.1.16
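As a numerical sanity check of this formula, the following sketch implements the scalar case (L = 1), assuming the measurement structure W1 = θ + ω1 and W2 = θ + ω2 with a standard normal θ and independent normal errors; all distributional choices here are illustrative, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta = rng.normal(0.0, 1.0, n)        # latent factor, theta ~ N(0, 1) (illustrative)
w1 = theta + rng.normal(0.0, 0.5, n)   # W1 = theta + omega1, with E[omega1 | theta, omega2] = 0
w2 = theta + rng.normal(0.0, 0.5, n)   # W2 = theta + omega2, omega2 independent of theta

# Characteristic function of theta implied by the inner integral (scalar case):
# phi_theta(chi) = exp( integral_0^chi  i * E[W1 e^{i zeta W2}] / E[e^{i zeta W2}]  d zeta )
chi = 1.0
zeta = np.linspace(0.0, chi, 201)
num = np.array([(w1 * np.exp(1j * z * w2)).mean() for z in zeta])  # sample analog of E[W1 e^{i zeta W2}]
den = np.array([np.exp(1j * z * w2).mean() for z in zeta])         # sample analog of E[e^{i zeta W2}]
integrand = 1j * num / den

# trapezoidal rule for the path integral from 0 to chi
log_phi = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zeta))
phi_hat = np.exp(log_phi)

phi_true = np.exp(-0.5 * chi**2)       # characteristic function of N(0, 1) at chi
print(abs(phi_hat - phi_true))         # small Monte Carlo / discretization error
```

Inverting the estimated characteristic function via the outer Fourier integral would then recover p_θ; the check above stops at the characteristic-function stage, where the estimate can be compared directly against the known normal benchmark.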
The striking improvement of this analysis over that of Cunha and Heckman (2008)
is that identification is achieved under much weaker conditions on the measurement
errors: far fewer independence assumptions are needed. The asymmetric treatment of
ω1 and ω2 generalizes previous analyses, which treat these terms symmetrically. It gives the
analyst a more flexible toolkit for the analysis of factor models. For example, our analysis
allows analysts to accommodate heteroscedasticity in the distribution of ω1 that may depend
on ω2 and θ. It also allows for potential correlation of components within the vectors ω1 and
ω2 , thus permitting serial correlation within a given set of measurements.
The intuition for identification in this paper, as in all factor analyses, is that the signal is
common to multiple measurements but the noise is not. In order to extract the noise from the
signal, the disturbances have to satisfy some form of orthogonality with respect to the signal
and with respect to each other. These conditions are various uncorrelatedness assumptions,
conditional mean assumptions, or conditional independence assumptions. They are used in
various combinations in Theorem 1, in Theorem 2 below, and in other results in this paper.
3.3 The Identification of a General Measurement Error Model
In this section, we extend the previous analysis of linear factor models to consider a
measurement model of the general form
Z_j = a_j(θ, ε_j) for j ∈ {1, ..., M},    (3.7)
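To see what equation (3.7) allows, the sketch below simulates one hypothetical measurement system in which the functions a_j range from linear to non-separable in the disturbance; the specific functional forms are invented for illustration and are not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
theta = rng.normal(0.0, 1.0, n)        # latent factor (illustrative distribution)

# Three measurements Z_j = a_j(theta, eps_j), j = 1, ..., M with M = 3 (hypothetical forms):
eps = rng.normal(0.0, 0.3, (3, n))
z1 = theta + eps[0]                    # linear factor model as a special case
z2 = np.exp(0.5 * theta) + eps[1]      # nonlinear in theta, additively separable error
z3 = theta * np.exp(eps[2])            # non-separable in the disturbance

Z = np.stack([z1, z2, z3])
print(Z.shape)                         # (3, 1000): M measurements on n observations
```

The point of the generality is that a_j need not be linear, nor additively separable in ε_j, which is exactly the flexibility the identification analysis that follows must accommodate.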
16The results of Theorem 1 are sketched informally in Schennach (2004a, footnote 11).