The Semiotic Resources of the Modes of Movement and Gesture
As discussed in Chapter One, when people use computers they point, click, hold the
mouse, lean forwards and back, and move through screens, often without a spoken
word. This highlights the importance of a multimodal approach to analysing
technology-mediated learning and the role of action within this multimodal ensemble.
The semiotics of movement and gesture (a term used here to refer specifically to hand
movements) are, for many people, neither as close to awareness nor as explicit as the
mode of image. For some communities, however, the semiotic resources of gesture are
fully articulated modes. The sign languages of the hearing impaired and, to a lesser
extent, sign-systems such as semaphore are two examples of articulated gestural
modes within specific communities. Outside of these communities, movement and
gesture offer semiotic resources that are constantly brought into sign making, that
possess regularities 'grammatical' enough to be contravened, and that are organised
along the lines of meta-functions.
Movement and gesture realise ideational meanings about the world, interpersonal
meanings concerned with different kinds of engagement and interest, and textual
meaning. The main dimensions that I use to analyse students' interaction with the
computer and with one another via the modes of action (movement) and gesture, and
via the movement of elements on the computer screen, are discussed below. My use of
these dimensions in this thesis draws on the semiotics of action developed by Martinec (2000).
Martinec draws on systemic functional grammar and social semiotics to classify
movements in action into different kinds of patterns on the basis of their observable
realisations. These dimensions also draw on research into movement and gesture
undertaken by the author and others in the Science classroom (Franks and Jewitt,
2001; Jewitt et al., 2001; Jewitt and Kress, 2002b) and the English classroom (SEP,
2002).