Introduction
Between 1989 and 1991 I was closely involved in the development of national
curriculum assessments for 14 year-olds in English, mathematics, science and
technology, and since then I have written on the problems that I believe afflict our
national curriculum assessment system (and indeed our examinations at GCSE and A-
level). However, I am aware that I have not drawn the various threads of these
arguments together, and so I am particularly grateful that Paul Newton has taken the
time to trawl through many of these papers (both published and unpublished) and
constructed a critique of the ideas I have advanced (Newton, this issue).
In responding to his critique, the first thing to say is that his characterisation of my work
is eminently fair and accurate. He has in some places highlighted areas where my thesis
was ambiguous or unclear, and therefore in this response I will try to clarify what I was
trying to say. He is also right to point out that some of my ideas have been laid out with
more rhetorical force than supporting empirical evidence. In most cases, this is because
the evidence does not yet exist, lending support to Newton’s argument that more
evidence is needed. In other cases I have tried to support the thesis, either by additional
argument or by citing empirical studies which, while not conducted specifically in the
context of the national curriculum of England, may nevertheless be regarded as
suggestive.
As far as possible, I have tried to adopt the same sequencing of topics as used by
Newton in his paper, although there are places where I have deviated from this in order
to avoid repetition.
The validity of national curriculum assessments
As Newton states, I have argued that national curriculum assessments assess only a part
of the domain which they are purported to represent. This is partly by design, and partly
by accident. By design, the national curriculum tests at key stages 2 and 3 do not assess
the first attainment target in mathematics and science nor do they assess Speaking and
Listening in English. By accident, or at least, I think, without being planned, the items
that do test particular aspects of the national curriculum do so in a distinctive way. For
example, in the national curriculum for mathematics, there are requirements for students
to collect and interpret discrete and continuous data, which are impossible to assess
adequately in a two-hour written test. It is also clear that teachers are able to predict
which aspects of a subject do come up in the tests, and which do not (Wiliam, 1993).
Whether the fact that teachers can predict which aspects of a subject are not going to be
tested subsequently results in these aspects not being taught is, as Newton notes, an
empirical question, but in the absence of appropriate research evidence, I would suggest
that the, admittedly less rigorous, evidence from the teaching unions and from school
inspection reports presents, at the very least, a case to answer. Indeed, it could be argued
that, given the way that narrow targets have distorted performance in the National
Health Service and on the railway network, it would be extraordinary were school
teaching not so affected.
The third link in the argument is, of course, that if these aspects of a subject are not
taught, then the related competences are not developed. This is again an empirical
question, and ideally such an investigation would need to be undertaken for each
national curriculum subject.