These features of e-learning courses represent a challenge to the way quality
assurance and enhancement are managed, and in particular to the collection of
student feedback. A review of 129 institutional audit reports produced by the QAA
between 2003 and 2006 (Jara and Mellar, 2008) showed that modifications to on-
campus strategies for collecting feedback from students in e-learning courses were
reported by just 11% of the institutions. A number of audit reports acknowledged
that student feedback on e-learning courses was not always collected methodically;
where it was collected, two main modifications were applied to the standard
procedures:
■ adaptation of forms to suit the special features of the e-learning courses (i.e.
adding or modifying questions)
■ a move to online surveys and the creation of discussion forums as strategies for
collecting feedback - changes intended to address the low response rates of
traditionally administered questionnaires.
Although no modifications to the procedures for student representation were
mentioned, several of the audit reports acknowledged the difficulties encountered
in implementing student representation in e-learning courses.
This review of audit reports showed that although higher education institutions may
be aware of the need to adapt current quality assurance and enhancement
procedures for their e-learning courses, changes to existing practice - at least in the
case of the strategies for establishing student views - are not widespread. So, in
order to get a clearer picture of the relationship between the features of e-learning
courses and the effectiveness of these procedures as mechanisms for assuring and
enhancing course quality, we carried out a series of case studies.
4. Case studies
Four case studies were carried out of online or mixed-mode courses that formed part
of the academic portfolio of four different dual-mode UK higher education institutions
and had been subject to quality assurance processes.
Large amounts of documentation, most of it confidential, were collected at each site,
and face-to-face interviews with staff and students were carried out. All sites were
generous in allowing access, though they varied in their ability to trace relevant
and up-to-date documentation.
For each case study two sets of data were gathered:
a) the quality assurance documentation related to the particular courses selected.
The documentation collected for each case study varied in size and content, as
the different institutions organised and presented their records in different ways
(e.g. module evaluations might or might not include the questionnaire used,
reports of its results, and reports of one-off consultation events carried out with
students). As a result of this variation, considerable effort was needed to identify
the data that would allow comparability between sites.
b) interviews with a group of participants for each course, including academic staff,
tutors, administrators, students, support staff and developers/designers. To
cover as many roles as possible, we aimed to interview at least four staff and
four students per course. This target was not always possible to