pants to familiarize themselves with the task, that participants do not have previous knowledge of the theory in question, and that at least two researchers code the data. Biggs, Rosman, and Sergenian (1993) as well as Payne, Braunstein, and Carroll (1978) also recommend combining AIS and thinking aloud in order to construct detailed models of decision-making behavior. Combining the C-AIS with the thinking-aloud technique allowed us to collect data on the importance of the information that the participants requested (Williamson et al., 2000). Therefore, the instruction to think aloud should increase the validity of the verbal data as an indicator of the thinking process.
Finally, in accordance with Williamson et al. (2000) as well as Huber and Huber (2003), a post-decision interview was conducted to assess the validity and internal consistency of the data collected during the decision process. To this end, participants were asked to give a retrospective report on their decision-making process (Ericsson & Simon, 1980). At the end of the study, they were asked to complete a questionnaire in which they described in their own words, for each of the scenarios, how they had made their decision. This information was used to determine the relevance of the different categories of information for the decision maker. Demographic data (age, sex, profession) were also collected.
Procedure
Interviews were conducted individually in a quiet room, in most cases at the Institute of Psychology at the University of Heidelberg, but some participants were interviewed at home. All interviews were recorded with a digital voice recorder and transcribed afterwards. Breaks and disturbances (e.g., by mobile phones) were reduced to a minimum. If they could not be avoided completely, the interview was briefly interrupted and continued after a short break. None of the participants cancelled the interview or refused to answer the final questionnaire. The interviews normally took approximately 45 min.
After participants were welcomed, the procedure for the study was explained, and written consent to the recording and subsequent anonymous analysis of their interviews was obtained. Participants received specific information about the procedure in written form and were asked to read the description of the first scenario, to obtain further information by posing questions and receiving answers from the interviewer, and to verbalize the statements important for their decision making using the thinking-aloud technique. A sample scenario called “railway club” was used as a warm-up task so that participants could become acquainted with the interview situation and practice asking questions and thinking aloud. The sample scenario was not used to collect data. Voice recording started as soon as participants stated that they had read the text and were ready to ask questions and think aloud. After participants had given their final decision concerning the central issue of the scenario (see Appendix for an example), voice recording was stopped and the next scenario was presented. Finally, participants were asked to describe in their own words how they had arrived at their decision for each of the three scenarios. They were also asked to inform the experimenter about any previous knowledge they had had about the scenarios. At the end of the interviews, participants were given the chance to ask questions about the aims of our study.
Quantitative Content Analysis
The first step was the development of a category system to transform the interview data into quantitative data for analysis. Our category system was based on Huber et al. (1997) and Huber et al. (2001). Their system consists of the following categories: "situation," "probabilities," "secure/insecure consequences," "evaluation," "long-term plans," "RDO," and "information about RDO." This system was used as a basic framework and was modified in the course of our analysis (see Table 2). We added the categories "background knowledge," "experience," and "attitude/rules/principles." We did not use the categories "long-term plans" and "evaluation" because they were irrelevant to our research question and could also be represented in the attitude categories. In the coding system used by Huber and his colleagues, a differentiation was made between secure and insecure consequences. We renamed this category "negative and positive consequences," which allowed us to measure the advantages and disadvantages connected with the risk. All participants' questions and statements during the interviews as well as all written explanations from the post-decision interview were coded according to this category system.
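As a purely schematic sketch of this coding step (the category labels follow the modified system described above; the example statements, their assigned codes, and all identifiers are hypothetical), per-category frequencies could be tallied as follows:

```python
from collections import Counter

# Category labels adapted from the modified system described in the text (see Table 2).
CATEGORIES = [
    "situation",
    "probabilities",
    "negative and positive consequences",
    "RDO",
    "information about RDO",
    "background knowledge",
    "experience",
    "attitude/rules/principles",
]

# Hypothetical coded statements from one participant for one scenario:
# (statement, assigned category).
coded_statements = [
    ("How often does the complication occur?", "probabilities"),
    ("Is there a treatment that would prevent the damage?", "RDO"),
    ("I would never take that risk with my health.", "attitude/rules/principles"),
]

# Tally the frequency of each category; categories that were never used are kept
# with a count of 0 so that every cell of the later contingency table is represented.
counts = Counter(category for _, category in coded_statements)
frequencies = {category: counts.get(category, 0) for category in CATEGORIES}
print(frequencies)
```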
Reliability Check
Sixty interviews were randomly chosen to determine interrater reliability. There were two coders per interview. The results showed high interrater reliability (κ = 0.94), as calculated using the conventional method (Cohen, 1960). This demonstrated that both the categories of questions used and the thinking-aloud data had been classified with high reliability. The remaining 60 interviews were coded by two coders (30 interviews each).
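In the conventional notation, Cohen's (1960) kappa corrects the observed proportion of agreement between the two coders for the agreement expected by chance:

\kappa = \frac{p_o - p_e}{1 - p_e},

where p_o denotes the observed proportion of agreement and p_e the proportion of agreement expected by chance from the coders' marginal category frequencies; values close to 1 indicate near-perfect agreement.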
Statistical Data Analysis
Data analysis of the effects of type of risk and risk domain as well as their interaction was carried out simultaneously by means of logit analysis. Logit models are special cases of log-linear models that are used for multivariate analysis of nominal-scale or categorical data. Natural logarithms of the observed frequencies in the different cells of a multidimensional contingency table were computed and expressed as a linear combination of main and interaction effects.
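As an illustration of this model structure (a standard log-linear formulation rather than the exact parameterization used in the analysis), the expected frequency F_{ijk} in the cell defined by type of risk i, risk domain j, and response category k can be written as

\ln F_{ijk} = \mu + \lambda^{A}_{i} + \lambda^{B}_{j} + \lambda^{C}_{k} + \lambda^{AB}_{ij} + \lambda^{AC}_{ik} + \lambda^{BC}_{jk} + \lambda^{ABC}_{ijk},

where the λ terms represent the main and interaction effects. In the corresponding logit model, the response category is treated as the dependent variable, so that the log odds of a given response are expressed as a function of type of risk, risk domain, and their interaction.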