As I explained at the start of the chapter, I was lucky enough to be undamaged by
supposed research methods development of the kind now compulsory for publicly-
funded new researchers in the UK. Or perhaps I was critical and confident enough to
query what methods experts were saying and writing. Methods texts, courses, and
resources are replete with errors and misinformation, such that many do more
damage than good. Some mistakes are relatively trivial. I remember clearly being
told by international experts that triangulation was based on having three points of
view, or that the finite population correction meant that a sample must be smaller
than proposed, for example. I have heard colleagues co-teaching in my own modules
tell our students that regression is a test of causation (see also Robinson et al., 2007),
or that software like NVivo will analyse textual data for them. Some examples are
more serious. A widespread error in methods texts implicitly equates the
probability of a hypothesis given the data with the
probability of the data given that the hypothesis is true. However, probably the most
serious mistakes currently made in researcher development are the lack of awareness
of design, and the suggestion that methods imply values, and are a matter of personal
preference rather than a consequence of the problems to be overcome via research.
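The inverse-probability confusion mentioned above can be shown numerically. Below is a minimal sketch of Bayes' theorem using hypothetical figures (the probabilities are invented for illustration, not drawn from any study): data that are very likely under a hypothesis can still leave that hypothesis very unlikely, because the answer also depends on the prior plausibility of the hypothesis and on how likely the data are when it is false.

```python
# Hypothetical illustration: P(data | H) is not P(H | data).

def posterior(p_data_given_h, prior_h, p_data_given_not_h):
    """P(H | data) via Bayes' theorem."""
    p_data = (p_data_given_h * prior_h
              + p_data_given_not_h * (1 - prior_h))
    return p_data_given_h * prior_h / p_data

# Suppose the data are very likely if H is true...
p_d_h = 0.95
# ...but H is rare a priori, and the data are still fairly
# likely when H is false.
prior = 0.01
p_d_not_h = 0.10

print(posterior(p_d_h, prior, p_d_not_h))  # about 0.088, not 0.95
```

With these (invented) numbers, the probability of the data given the hypothesis is 0.95, yet the probability of the hypothesis given the data is below 0.09 — the two quantities are not interchangeable.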
Much research methods training in social science is predicated on the notion that
there are distinct categories of methods such as ‘qualitative’ or ‘quantitative’.
Methods are then generally taught to researchers in an isolated way, and this isolation
is reinforced by sessions and resources on researcher identities, paradigms, and
values. The schism between qualitative and quantitative work is very confusing for
student researchers (Ercikan and Roth, 2006). It is rightly confusing
because it does not make sense. These artificial categories of data collection and
analysis are not paradigms. Both kinds of methods involve subjective judgements
about less than perfect evidence. Both involve consideration of quantity and of
quality, of type and frequency. Nothing is gained by the schism, and I have been
wrong in allowing publishers to use the q-words in the title of some of my books
(altering ‘The role of number made easy’ to ‘Quantitative methods’, for example).
Subsequently, many of the same methods training programmes taken by new
researchers refer to the value of mixing methods, such as those deemed ‘qualitative’
or ‘quantitative’. Perhaps unsurprisingly, this leads to further confusion. Better to