Comparison of Key Evaluation Models in Health Informatics

Create graphic representations of four evaluation models. Use MS Word to create your graphic. Your graphic should include:

  • Descriptions of each overall model
  • Key components of each overall model
  • An important figure or figures in the development of each model
  • Significant ways each model has been used
  • Potential uses of each model in health informatics

These representations will be for your use in your upcoming course project, so the greater the detail, the more useful these representations will be to you.

Potential formats could include, but are not limited to, tables, mind maps, Venn diagrams, or concept maps.

References attached.

HANDBOOK OF EVALUATION METHODS

6 Overview of Assessment Methods

In principle all aspects of a system are candidates for assessment in all phases of the system's development. In practice, some aspects are more prominent in some of the phases than in others. During its life cycle, the assessment may change in nature from being prognostic (during planning), to screening and diagnosing (prior to switching over to daily operation), to treating (in the handling of known error situations or shortcomings). Be aware, therefore, that even if a method is not listed under a specific phase, an information need may arise that requires inspiration from the methods listed under other phases.

Note that few of the references given include a discussion of the weaknesses, perils, and pitfalls of the method described.

6.1 Overview of Assessment Methods: Explorative Phase

The methods included in this section are particularly relevant to the assessment of issues raised during the establishment of a User Requirements Specification, such as objectives, requirements, and expectations.

Method (Handbook page): Areas of application

Analysis of Work Procedures (p. 73): Elucidation of how things are actually carried out within an organization.

Assessment of Bids (p. 78): Comparative assessment of a number of offers from one or more bidders/vendors.

Balanced Scorecard (p. 85): Ongoing optimization of the outcome of a development project by balancing focus areas by means of a set of indicators for a set of strategic objectives.

BIKVA (p. 88): Critical, subjective assessment of an existing practice.

Brender, Jytte. Handbook of Evaluation Methods for Health Informatics. Elsevier Science & Technology, 2006. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/waldenu/detail.action?docID=306691.


Delphi (p. 106):
  • (Qualitative) assessment of an effect, for instance where the solution space is otherwise too big to handle
  • Exploration of development trends
  • Elucidation of a problem area, for instance prior to strategic planning

Field Study (p. 111): Observation of an organization to identify its practice and to clarify mechanisms controlling change.

Focus Group Interview (p. 116): In principle used for the same purposes as other interview methods. In practice, the method is most relevant during the early Explorative Phase, for instance where attitudes or problems of social groups need elucidation or when a model solution is being established.

Future Workshop (p. 125): Evaluation and analysis of an (existing) situation in order to identify and focus on areas for change, that is, aiming at designing future practices.

Grounded Theory (p. 128): Supportive analytical method for data acquisition methods that generate textual data, such as some open questionnaire methods and interviews (individual and group interviews).

Heuristic Evaluation (p. 132): Used when no other realizable possibilities exist, for instance when:
  • The organization does not have the necessary time or expertise
  • There are no formalized methods
  • There is nothing tangible to assess yet

Interview (nonstandardized) (p. 142): Particularly suited for elucidation of individuals' opinions, attitudes, and perceptions regarding phenomena and observations.

KUBI (p. 147): Optimization of the outcome of a long-term development project, based on a set of user- or customer/client-defined value norms and objectives.

Logical Framework Approach (p. 149): Situation analysis to support the choice of focus for a development, and at the same time a simple technique for incorporating risk handling within project planning.

Organizational Readiness (p. 154): Assessment of the readiness of a healthcare organization for a clinical information system.


Pardizipp (p. 156): Preparation of future scenarios.

Questionnaire (nonstandardized) (p. 163): Questionnaires are used to answer a wide range of questions, but their main area of application is (qualitative) investigation of subjective aspects requiring a high level of accuracy.

Requirements Assessment (p. 180): Within the European culture the User Requirements Specification is the basis for purchasing an IT-based solution or engaging in a development project. Consequently, the User Requirements Specification is a highly significant legal document that needs thorough assessment.

Risk Assessment (p. 185): Identification and subsequent monitoring of risk factors, making it possible to take preemptive action.

Social Network Analysis (p. 190): Assessment of relations between elements within an organization (such as individuals, professions, departments, or other organizations) that influence the acceptance and use of an IT-based solution.

Stakeholder Analysis (p. 192): Assessment of stakeholder features and their inner dynamics, aiming to identify participants for the completion of a given task, problem-solving activity, or project.

SWOT (p. 196): Situation analysis: establishment of a holistic view of a situation or a model solution.

Usability (p. 207): Assessment of user friendliness in terms of ergonomic and cognitive aspects of the interaction (dialogue) between an IT system and its users. In this phase the concern is a planning or purchasing situation.

Videorecording (p. 219): Monitoring and documenting as a means of analyzing how work procedures or user activities are actually carried out, or for investigating complex patterns of interaction.

WHO: Framework for Assessment of Strategies (p. 222): Assessment of different (development) strategies, either individually or as a comparative analysis.


6.2 Overview of Assessment Methods: Technical Development Phase

The methods listed in this section are particularly suited to user activities during the development and installation of an IT-based solution and may be used to provide feedback for the technical development.

Assessment in this phase is typically carried out under experimental conditions and not during real operation. The phase is usually completed with a technical verification to make certain that all necessary functions and features are present and work properly in compliance with the established agreement.

Method (Handbook page): Areas of application

Balanced Scorecard (p. 85): Ongoing optimization of the outcome of a development project by balancing focus areas by means of a set of indicators for a set of strategic objectives.

Clinical/Diagnostic Performance (p. 91): Measurement of diagnostic 'correctness' (for instance, measures of accuracy and precision) of IT-based expert systems and decision-support systems. (A worked sketch of such measures follows this table.)

Cognitive Assessment (p. 96): Assessment of cognitive aspects of the interaction between an IT system and its users, for instance:
  • Identification of where and why operational errors occur
  • Identification of areas to be focused on for improvement in user friendliness

Cognitive Walkthrough (p. 102): Assessment of user 'friendliness' on the basis of system design, from specifications, mock-ups, or prototypes, aimed at judging how well the system complies with the users' way of thinking, for instance:
  • Identification of where and why operational errors occur
  • Identification of causes behind problems with respect to user friendliness and, consequently, identification of areas for improvement


Heuristic Evaluation (p. 132): Used when no other realizable possibilities exist, for instance when:
  • The organization does not have the necessary time or expertise
  • There are no formalized methods
  • There is nothing tangible to assess yet

Risk Assessment (p. 185): Identification and subsequent monitoring of risk factors, making it possible to take preemptive action.

SWOT (p. 196): Situation analysis: establishment of a holistic view of a situation or a model solution.

Technical Verification (p. 199): Verification that the agreed functions are present and work correctly in compliance with the agreement. This may take place, for instance, in connection with delivery of an IT system or prior to daily operation, and at any subsequent change of the IT system (releases, versions, and patches).

Think Aloud (p. 204): An instrument for gaining insight into cognitive processes as feedback to the implementation and adaptation of IT-based systems.

Usability (p. 207): Assessment of user friendliness in terms of ergonomic and cognitive aspects of the interaction (dialogue) between an IT system and its users.
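The Clinical/Diagnostic Performance entry above turns on measures of diagnostic 'correctness' such as accuracy and precision. Below is a minimal sketch of how those measures fall out of a 2x2 comparison of system output against a gold standard; the counts are hypothetical and not drawn from the Handbook.

    # Sketch: diagnostic performance measures for a binary decision-support
    # output, computed from a 2x2 confusion matrix (hypothetical counts).
    def diagnostic_measures(tp, fp, fn, tn):
        """Return common performance measures against a gold standard."""
        total = tp + fp + fn + tn
        return {
            "accuracy":    (tp + tn) / total,  # overall agreement
            "precision":   tp / (tp + fp),     # positive predictive value
            "sensitivity": tp / (tp + fn),     # true-positive rate (recall)
            "specificity": tn / (tn + fp),     # true-negative rate
        }

    # Hypothetical tallies: 80 true positives, 10 false positives,
    # 20 false negatives, 90 true negatives.
    print(diagnostic_measures(tp=80, fp=10, fn=20, tn=90))

For the counts above this prints accuracy 0.85, precision about 0.89, sensitivity 0.80, and specificity 0.90.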

6.3 Overview of Assessment Methods: Adaptation Phase

In this phase, evaluation has the purpose of providing support for the modification or refinement of the IT-based solution, work procedures, and functions implemented within the IT system to make them work optimally as a whole during daily operations. This phase should be fairly short, provided that the implemented solution is functioning well from the beginning.


Now that real operational assessment can take place, ergonomic, cognitive, and functionality assessment will gain much more focus, as potential inadequacies or shortcomings will show themselves as operational errors, misuse, or the like.

Method (Handbook page): Areas of application

Analysis of Work Procedures (p. 73): Elucidation of how things are actually carried out, in comparison with the expected. This includes the actual use of the IT system in relation to its anticipated use.

BIKVA (p. 88): Critical, subjective assessment of an existing practice.

Clinical/Diagnostic Performance (p. 91): Measurement of diagnostic 'correctness' (for instance, measures of accuracy and precision) in IT-based expert systems and decision-support systems.

Cognitive Assessment (p. 96): Assessment of cognitive aspects of the interaction between an IT system and its users, for instance:
  • Identification of where and why operational errors occur
  • Identification of areas to be focused on for improvement in user friendliness

Cognitive Walkthrough (p. 102): Assessment of user 'friendliness' on the basis of system design, from specifications, mock-ups, or prototypes, aimed at judging how well the system complies with the users' way of thinking, for instance:
  • Identification of where and why operational errors occur
  • Identification of causes behind problems with respect to user friendliness and, consequently, identification of areas for improvement

Equity Implementation Model (p. 109): Examination of users' reactions to the implementation of a new system, focusing on the impact of the changes such a system brings about for the users.

Field Study (p. 111): Observation of an organization to identify its practices and to expose mechanisms that control change.

Focus Group Interview (p. 116): In principle used for the same purposes as other interview methods. In practice, the method is most relevant during the early Explorative Phase, for instance where the attitudes or problems of social groups need elucidation or when a model solution is being established.


Functionality Assessment (p. 120):
  1. Validation of fulfillment of objectives (realization of objectives), that is, the degree of compliance between the desired effect and the actual solution
  2. Impact assessment (also called effect assessment)
  3. Identification of problems in the relationship between work procedures and the IT system's functional solution
The method will expose severe ergonomic and cognitive problems, but it is not dedicated to capturing details of this type.

Grounded Theory (p. 128): Supportive analytical method for data acquisition methods that generate textual data, such as some open questionnaire methods and interviews (individual and group interviews).

Heuristic Evaluation (p. 132): Used when no other realizable possibilities exist, for instance when:
  • The organization does not have the necessary time or expertise
  • There are no formalized methods
  • There is nothing tangible to assess yet

Interview (nonstandardized) (p. 142): Particularly suited for the elucidation of individual opinions, attitudes, and perceptions regarding phenomena and observations.

Prospective Time Series (p. 159): Measurement of development trends, including the effect of an intervention.

Questionnaire (nonstandardized) (p. 163): Questionnaires are used to answer a wide range of questions, but their main area of application is (qualitative) investigation of subjective aspects requiring a high level of accuracy.

RCT, Randomized Controlled Trial (p. 172): Verification of efficacy, that is, that the IT system, under ideal conditions, makes a difference to patient care. Particularly used in studies of decision-support systems and expert systems. (See the sample-size sketch after this table.)

Risk Assessment (p. 185): Identification and subsequent monitoring of risk factors, making it possible to take preemptive action.

Root Causes Analysis (p. 188): Exploration of what, how, and why a given incident occurred in order to identify the root cause of undesirable events.


Social Network Analysis (p. 190): Assessment of relations between elements within an organization (such as individuals, professions, departments, or other organizations) that influence the acceptance and use of an IT-based solution.

SWOT (p. 196): Situation analysis: establishment of a holistic view of a situation or a model solution.

Technical Verification (p. 199): Verification that the agreed functions are present and work correctly in compliance with the agreement. This may take place, for instance, in connection with delivery of an IT system or prior to daily operation, and at any subsequent change of the IT system (releases, versions, and patches).

Think Aloud (p. 204): An instrument for gaining insight into cognitive processes as feedback to the implementation and adaptation of IT-based systems.

Usability (p. 207): Assessment of user friendliness in terms of ergonomic and cognitive aspects of the interaction (dialogue) between an IT system and its users.

User Acceptance and Satisfaction (p. 215): Assessment of user opinions, attitudes, and perceptions of an IT system during daily operation.

Videorecording (p. 219): Monitoring and documenting as a means of analyzing how work procedures and user activities, respectively, are actually carried out, or for investigating complex patterns of interaction.
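The RCT entry above verifies efficacy under ideal conditions, which in practice begins with deciding how many participants are needed. Below is a minimal sketch using the standard sample-size formula for comparing two proportions; the baseline and target error rates, significance level, and power are illustrative assumptions, not values from the Handbook.

    # Sketch: participants per arm for a two-proportion comparison,
    # using only the Python standard library.
    from statistics import NormalDist
    from math import sqrt, ceil

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        """Sample size per arm to detect a change from rate p1 to rate p2."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
        z_b = NormalDist().inv_cdf(power)          # desired power
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(num / (p1 - p2) ** 2)

    # Hypothetical: a 10% medication-error rate under usual care versus
    # 6% with decision support requires roughly 720 patients per arm.
    print(n_per_arm(0.10, 0.06))

This is the conventional normal-approximation calculation; a real trial protocol would also account for clustering by provider and for attrition.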

6.4 Overview of Assessment Methods: Evolution Phase

The starting point in time of this phase is usually considered to be when the entire IT-based solution has reached a state of sufficient stability with respect to bugs and corrections and when evolutionary activities are started. Consequently, the shift between this and the previous phase may be fluid.


Method (Handbook page): Areas of application

Analysis of Work Procedures (p. 73): Elucidation of how things are actually carried out, in comparison with the expected. This includes its use in relation to measures of effect.

Balanced Scorecard (p. 85): Ongoing optimization of the outcome of a development project by balancing focus areas by means of a set of indicators for a set of strategic objectives.

BIKVA (p. 88): Critical, subjective assessment of an existing practice.

Clinical/Diagnostic Performance (p. 91): Measurement of diagnostic 'correctness' (for instance, measures of accuracy and precision) of IT-based expert systems and decision-support systems.

Cognitive Assessment (p. 96): Assessment of the cognitive aspects of the interaction between an IT system and its users, for instance:
  • Identification of where and why operational errors occur
  • Identification of areas to be focused on for improvement in user friendliness

Cognitive Walkthrough (p. 102): Assessment of the user 'friendliness' on the basis of system design, from specifications, mock-ups, or prototypes of the system, aimed at judging how well the system complies with the users' way of thinking, for instance:
  • Identification of where and why operational errors occur
  • Identification of causes behind problems with respect to user friendliness and, consequently, identification of areas for improvement

Delphi (p. 106):
  1. (Qualitative) assessment of an effect, for instance where the solution space is otherwise too big to handle
  2. Exploration of development trends
  3. Elucidation of a problem area, for instance prior to strategic planning

Equity Implementation Model (p. 109): Examination of users' reactions to the implementation of a new system, focusing on the impact of the changes such a system brings about for the users.

Field Study (p. 111): Observation of an organization to identify its practices and to expose mechanisms that control change.


Focus Group Interview (p. 116): In principle used for the same purposes as other interview methods. In practice, the method is most relevant during the early analysis stage, for instance where attitudes or problems of social groups need clarification or elucidation or when a model solution is being established.

Functionality Assessment (p. 120):
  1. Validation of fulfillment of objectives (realization of objectives), that is, the degree of compliance between the desired effect and the actual solution
  2. Impact assessment (also called effect assessment)
  3. Identification of problems in the relationship between work procedures and the IT system's functional solution
The method will expose severe ergonomic and cognitive problems, but it is not dedicated to capturing details of this type.

Grounded Theory (p. 128): Supportive analytical method for data acquisition methods that generate textual data, such as some open questionnaire methods and interviews (individual and group interviews).

Heuristic Evaluation (p. 132): Used when no other realizable possibilities exist, for instance when:
  • The organization does not have the necessary time or expertise
  • There are no formalized methods
  • There is nothing tangible to assess yet

Impact Assessment (p. 135): Measurement of the effect, that is, the consequence or impact in its broadest sense, of an IT-based solution, with or without the original objective as a frame of reference.

Interview (nonstandardized) (p. 142): Particularly suited for elucidation of individual opinions, attitudes, and perceptions regarding phenomena and observations.

KUBI (p. 147): Optimization of the outcome of a long-term development project, based on a set of user- or customer/client-defined value norms and objectives.

Prospective Time Series (p. 159): Measurement of development trends, including the effect of an intervention.


Questionnaire (nonstandardized) (p. 163): Questionnaires are used to answer a wide range of questions, but their main area of application is (qualitative) investigation of subjective aspects requiring a high level of accuracy.

RCT, Randomized Controlled Trial (p. 172): Verification of efficacy, that is, that the IT system, under ideal conditions, makes a difference to patient care. In particular used for studies of decision-support systems and expert systems.

Risk Assessment (p. 185): Identification and subsequent monitoring of risk factors, making it possible to take preemptive action.

Root Causes Analysis (p. 188): Exploration of what, how, and why a given incident occurred in order to identify the root cause of undesirable events.

Social Network Analysis (p. 190): Assessment of relations between elements within an organization (such as individuals, professions, departments, or other organizations) that influence the acceptance and use of an IT-based solution.

Stakeholder Analysis (p. 192): Assessment of stakeholder features and their inner dynamics, aiming to identify participants for the completion of a given task, problem-solving activity, or project.

SWOT (p. 196): Situation analysis: establishment of a holistic view of a situation or a model solution.

Technical Verification (p. 199): Verification that the agreed functions are present, work correctly, and are in compliance with the agreement. This may take place, for instance, in connection with delivery of an IT system or prior to daily operation, and at any subsequent change of the IT system (releases, versions, and patches).

Think Aloud (p. 204): An instrument for gaining insight into cognitive processes as feedback to the implementation and adaptation of IT-based systems.

Usability (p. 207): Assessment of user friendliness in terms of ergonomic and cognitive aspects of the interaction (dialogue) between an IT system and its users.

User Acceptance and Satisfaction (p. 215): Assessment of users' opinions, attitudes, and perceptions of an IT system during daily operation.


Videorecording (p. 219): Monitoring and documenting as a means of analyzing how work procedures and user activities, respectively, are actually carried out, or for investigating complex patterns of interaction.

WHO: Framework for Assessment of Strategies (p. 222): Assessment of different (development) strategies, either individually or as a comparative analysis.

6.5 Other Useful Information

There is certain information that cannot be categorized under 'methods' but that should be included nevertheless, because an understanding of these issues is valuable. In general, the areas of application outlined in the table below are valid for all phases within the life cycle.

Information (Handbook page): Areas of application

Documentation in an Accreditation Situation (p. 227): Planning of assessment activities in connection with the purchase of a 'standard' IT system when the user organization is, or considers becoming, certified or accredited.

Measures and Metrics (p. 232): Measures and metrics are used throughout evaluation, irrespective of whether it is constructive or summative. Planning of an assessment/evaluation study includes converting an evaluation purpose into specific measures and subsequently establishing metrics for their measurement. (A small sketch of this conversion follows this table.)

Standards (p. 238): A number of de facto and de jure standards exist, each defining a series of issues such as the contents of a User Requirements Specification, verification of an IT system, quality aspects of an IT system, and the roles and relations between a user organization and a vendor in connection with assessment.
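As a minimal sketch of the conversion described under Measures and Metrics, the fragment below records one hypothetical evaluation purpose together with the measures chosen for it and the metric by which each measure would be quantified; every entry is illustrative rather than taken from the Handbook.

    # Sketch: purpose -> measures -> metrics, as a plain data structure.
    evaluation_plan = {
        "purpose": "Assess whether e-prescribing reduces medication errors",
        "measures": [
            {"measure": "prescribing error rate",
             "metric": "errors per 100 prescriptions, by manual chart review"},
            {"measure": "user satisfaction",
             "metric": "mean score on a 5-point questionnaire"},
            {"measure": "system uptake",
             "metric": "share of prescriptions written electronically per month"},
        ],
    }

    for m in evaluation_plan["measures"]:
        print(f'{m["measure"]}: measured as {m["metric"]}')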


The Triangle Model for evaluating the effect of health information technology on healthcare quality and safety

Jessica S Ancker,1,2,3 Lisa M Kern,2,3,4 Erika Abramson,1,2,3,5 Rainu Kaushal1,2,3,4,5

ABSTRACT

With the proliferation of relatively mature health information technology (IT) systems with large numbers of users, it becomes increasingly important to evaluate the effect of these systems on the quality and safety of healthcare. Previous research on the effectiveness of health IT has had mixed results, which may be in part attributable to the evaluation frameworks used. The authors propose a model for evaluation, the Triangle Model, developed for designing studies of quality and safety outcomes of health IT. This model identifies structure-level predictors, including characteristics of: (1) the technology itself; (2) the provider using the technology; (3) the organizational setting; and (4) the patient population. In addition, the model outlines process predictors, including (1) usage of the technology, (2) organizational support for and customization of the technology, and (3) organizational policies and procedures about quality and safety. The Triangle Model specifies the variables to be measured, but is flexible enough to accommodate both qualitative and quantitative approaches to capturing them. The authors illustrate this model, which integrates perspectives from both health services research and biomedical informatics, with examples from evaluations of electronic prescribing, but it is also applicable to a variety of types of health IT systems.

INTRODUCTION

The potential for health information technology (health IT) to improve the quality and safety of healthcare is the primary impetus behind the federal electronic health record (EHR) incentive program.1 2 However, previous research on the effects of health IT on healthcare delivery has had mixed results, with some studies finding improvements and others showing no effect or adverse effects on quality or safety.3-8

Mixed findings such as these may be in part due to the evaluation frameworks that have been used to assess associations between the quality and safety outcomes and the predictor variable, that is, the health IT itself. For example, several of these studies were before-after studies, which examined the outcomes of interest before and after the introduction of a technology. However, it may not be sufficient simply to categorize a study period by whether or not a specific technology was present. For example, two similar healthcare delivery settings with EHRs or computerized provider order entry (CPOE) systems may be very different from each other, because even the same product will be customized with site-specific configuration of features such as order sets and interfaces with other clinical systems. Training and implementation procedures also differ between institutions and time periods. Furthermore, the technology is not only a predictor variable but also a confounder that can interact with other variables. Technology alters clinical workflow, staffing levels, and user perceptions and attitudes; conversely, organizations can customize technologies to support specific organizational priorities, such as quality measurement or patient safety.

Many of these factors may be potential explanations for the observed differences in quality and safety outcomes for health IT. However, unfortunately, we cannot necessarily be sure of the role of any of these factors unless they are measured reliably and validly. We suggest that research on the impact of health IT on the delivery of healthcare will be stronger if potential predictor variables such as these are captured systematically and prospectively during the evaluation process.

In this paper, we outline the Triangle Evaluation Model, an evaluation model designed to capture the dimensions of assessment necessary to explain the quality and safety effects of health IT, and describe examples of how this model has informed our evaluation work.

MODEL FORMULATION AND THEORETICAL GROUNDING

The rapid acceleration in use of health IT nationwide, fueled by the federal ‘meaningful use’ policy,1 has resulted in an increased desire to understand how these systems are affecting the quality, safety, and efficiency of healthcare across a variety of healthcare delivery settings. In our view, a joint evaluation approach combining informatics and health services research is the most effective way to answer these research questions.

In developing an evaluation model, we reviewed the literature on published studies evaluating the effects of health IT on quality and safety as well as both evaluation and implementation models specific to health IT. We identified excellent guidance from previous evaluation models and implementation researchers about evaluating a number of aspects of health IT, including technical operations,9 diffusion, adoption, and fit,9-13 cognitive effects,14 15 social, organizational, and workflow impacts,9 16-20 and the general concept of ‘information systems success’.21 In addition, we drew from our own experience conducting quality and safety research in the field of health IT, and conducted iterative discussions within our research team (which contains both health services researchers and informatics researchers) about constructs to be measured and potential operationalization of those measurements in the context of our ongoing and planned research studies.

We accomplished this by mapping elements and processes from health IT models on to the dominant theoretical model in health services research, the Donabedian Model, which emphasizes a systems-level perspective on the determinants of healthcare quality.22 23 According to this model, the quality of a system of healthcare can be defined along three dimensions. ‘Structure’ is the system’s material, organizational, and human resources. ‘Processes’ are the activities performed by the system and its people, such as healthcare delivery methods. ‘Outcomes’ are the measurable end results, such as mortality, patient health status, and medical error rates. When applying these concepts to health IT, we were influenced not only by the evaluation literature cited above but also by a second theoretical source, sociotechnical theory, which describes how technology is interconnected with social structure.16 17 20 24 Introducing technology into an organization changes both the organization and the technology; there is ‘a process of mutual transformation; the organization and the technology transform each other’.24

We adapted the Donabedian Model by identifying structure and process factors with the potential to affect quality and safety outcomes of health IT. Four structural variables are depicted in figure 1 and described in additional detail in the next section: (1) the technology; (2) the provider using it; (3) the organizational setting; and (4) the involved patient population. We also identified three categories of processes that connect pairs of structural variables: (1) the use of the technology by the provider; (2) the organizational implementation of the technology; and (3) organizational policies affecting providers (table 1).

MODEL DESCRIPTION

In developing the model, we identified elements of healthcare structure and processes that should be assessed concurrently with the outcome variables of quality and safety. In addition, we incorporated the sociotechnical perspective that the organization, technology, and users would influence and change each other, especially through the processes. In this section, we describe the constructs that constitute the model, without specifying how they should be assessed. Assessment methods can be selected according to the resources of the researcher and to the research question at hand.

Structure

In the Triangle Model, the relevant elements of structure are: (A) the technology; (B) the healthcare organization; (C) the healthcare provider user; and (D) the patients receiving care. In Figure 1, these elements are represented by the three points of the triangle and the central circle.

The technology

In order to assess impact, it is first necessary to inventory the functional capabilities that could affect quality or safety. These would include issues such as the usability of the user interface and the availability of clinical decision support, electronic (e)-prescribing, or interfaces with other systems. Hardware issues and system reliability are also relevant to technology performance.

The provider

The healthcare provider who uses the system has attributes that may affect quality and safety outcomes, such as years in practice, training, and attitude toward quality improvement. In addition, some provider attributes such as specialty, typing skills, EHR training and experience, and age may influence how much they use the technology.

Table 1 Dimensions of evaluation in the Triangle Model

Structure
  • Organization. Quantitative: size; type of healthcare organization. Qualitative: group-level workflow and communication.
  • Provider. Quantitative: specialty; computer skills; hours spent in EHR training. Qualitative: attitudes toward health IT or quality improvement.
  • Technology. Quantitative: inventory of features; hardware and software performance. Qualitative: usability.
  • Patients. Quantitative: demographics; insurance status; severity of illness. Qualitative: attitudes toward health, healthcare, or health IT.

Process
  • Organization-technology. Quantitative: time and resources spent on implementation, training, and support. Qualitative: institutional procedures for implementation, training, and support; user perceptions of implementation, training, and support.
  • Provider-technology. Quantitative: individuals’ usage of system and of specific features. Qualitative: task-technology fit; perceived workflow integration; user satisfaction.
  • Organization-provider. Quantitative: time and resources directed to quality or safety initiatives. Qualitative: perceptions of organizational quality and safety initiatives.

Outcomes
  • Patient safety. Quantitative: prescribing errors; adverse drug events. Qualitative: perceived patient safety culture.
  • Healthcare quality. Quantitative: performance on nationally recognized quality metrics. Qualitative: patient and provider perceptions of quality.

EHR, electronic health record; IT, information technology.

Figure 1 The Triangle Evaluation Model proposes simultaneous measurement of structure, process, and outcome variables in all evaluations of the impact of health information technology on healthcare quality and safety.

The organization

Organizational mission, resources, and policies affect quality outcomes directly and also influence how well a technology is used to pursue these outcomes. For example, organizations may or may not create usable EHR configurations in patient examination rooms, devote sufficient resources to EHR training, or make good choices about system configuration. Small medical practices are likely to have different resources and needs for health IT than are large medical centers.

The patients

An organization that treats sicker patients will perform poorly on patient outcome measures unless comparisons are adjusted for the population’s burden of illness. A variety of methods for risk adjustment, such as case mix adjustment and comorbidity indices, have been developed and validated to more appropriately compare quality outcomes across physicians or healthcare organizations.25-27 Other patient-specific characteristics such as health literacy, patient engagement, and attitudes toward health IT may also be relevant.
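As a minimal sketch of the comorbidity-index style of risk adjustment cited here, such as the Charlson index,25 26 the fragment below sums condition weights into a patient-level score. Only an illustrative subset of the published weights is shown, and the patient condition lists are hypothetical.

    # Sketch: Charlson-style comorbidity score (illustrative subset of weights).
    CHARLSON_WEIGHTS = {
        "myocardial_infarction": 1,
        "congestive_heart_failure": 1,
        "diabetes": 1,
        "renal_disease": 2,
        "any_malignancy": 2,
        "moderate_severe_liver_disease": 3,
        "metastatic_solid_tumor": 6,
    }

    def comorbidity_score(conditions):
        """Sum the weights of a patient's documented conditions."""
        return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

    print(comorbidity_score(["diabetes", "renal_disease"]))            # 3
    print(comorbidity_score(["metastatic_solid_tumor", "diabetes"]))   # 7

A score like this can then enter an outcome model as a covariate, so that organizations treating sicker patients are compared fairly.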

Processes

In the Triangle Model, processes with the potential to affect quality and safety outcomes of health IT connect the points of the triangle.

Provider-technology processes

Only when a technology is used as intended can relevant quality outcomes be expected. It is thus important to assess the actual usage of the relevant features, which is likely to vary at the level of the individual physician according to usability and perceived usefulness,28-30 integration into clinical workflow and task-technology fit,12 31 and training on the system.

Organization-technology processes

Organizational decisions affect which technologies are implemented, system configuration, implementation procedures, and resources allocated to hardware and technical infrastructure, technical support, and training. As recognized by the DeLone and McLean model of information system ‘success’, these organization-level factors affect the quality of the IT system as implemented in a specific setting, which has a strong impact on use as well as user satisfaction.21

Organization-provider processes

Finally, organizational policies, culture, and workflow all have a direct effect on provider activities and the quality-related outcomes of these activities. For example, an organization may opt into a voluntary quality improvement initiative or pursue a care model transformation such as patient-centered medical home accreditation.32
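Pulling the structural and process constructs together, the sketch below shows one way an evaluation team might record a Triangle Model protocol as a data structure. The field names paraphrase the model; nothing about this representation is prescribed by the authors.

    # Sketch: recording a Triangle Model evaluation plan.
    from dataclasses import dataclass, field

    @dataclass
    class TriangleEvaluationPlan:
        # Structure (the points of the triangle plus the patient population)
        technology: list = field(default_factory=list)
        provider: list = field(default_factory=list)
        organization: list = field(default_factory=list)
        patients: list = field(default_factory=list)
        # Processes (the sides of the triangle)
        provider_technology: list = field(default_factory=list)
        organization_technology: list = field(default_factory=list)
        organization_provider: list = field(default_factory=list)
        # Outcomes (quality and safety)
        outcomes: list = field(default_factory=list)

    plan = TriangleEvaluationPlan(
        technology=["inventory of decision-support features"],
        patients=["age", "insurance status", "case mix"],
        provider_technology=["share of prescriptions written electronically"],
        outcomes=["prescribing errors per 100 prescriptions"],
    )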

MODEL APPLICATION AND VALIDATION BY EXAMPLE

The Triangle Model specifies the predictor variables that should be captured in order to explain quality or safety outcomes of health IT, but it does not specify how these variables should be measured. In different situations, provider usage of a particular EHR feature might be captured by usage logs, by a researcher making field observations, or by self-reported survey; a sketch of the usage-log option follows below. These approaches each have strengths and weaknesses. In some situations, researchers may prefer intensive qualitative studies to produce a rich and in-depth understanding of a particular situation, whereas in others, researchers may exploit data available from the electronic system itself. The study sample size may limit the number of quantitative predictors that can be included in a regression model, whereas resources may limit the amount of qualitative research that can be performed, creating a need to balance the number of quantitative predictors and qualitative ones included in any particular study.

Two examples presented here illustrate how we have used the Triangle Model to inform our research. Although these both pertain to e-prescribing, the model can be applied to a variety of health IT evaluations.
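As a sketch of the usage-log option mentioned above, the fragment below derives a per-provider usage rate for one feature from a flat event log. The log format, provider identifiers, and feature name are hypothetical; real EHR audit logs vary by vendor.

    # Sketch: provider-level feature usage from a hypothetical audit log.
    from collections import Counter

    log = [  # (provider_id, action) pairs
        ("dr_a", "e_prescribe"), ("dr_a", "e_prescribe"), ("dr_a", "paper_rx"),
        ("dr_b", "paper_rx"), ("dr_b", "paper_rx"), ("dr_b", "e_prescribe"),
    ]

    totals = Counter(pid for pid, _ in log)
    electronic = Counter(pid for pid, act in log if act == "e_prescribe")

    for pid in sorted(totals):
        rate = electronic[pid] / totals[pid]
        print(f"{pid}: {rate:.0%} of prescriptions written electronically")

A rate computed this way can serve as the provider-technology process variable in a regression model of the quality outcome.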

Electronic prescribing improves medication safety in community-based office practices

Our prospective, controlled study of a stand-alone e-prescribing technology was one of the first to demonstrate that e-prescribing was highly effective at reducing prescribing error rates in community-based office practices.33 The primary comparison in the study was users and non-users of this e-prescribing system.

In terms of structural variables from the Triangle Model, we first inventoried the features available in the technology. The inventory suggested that this technology did have the potential to reduce prescribing error rates, as it provided clinical decision support with a wide variety of alert types as well as additional reference resources. At the provider level, we controlled for variables that may have affected prescribing error rates, including years in practice, training, and specialty. Among the patient population, we limited inclusion to adults and collected age, gender, and medications. We studied a single independent practice association (organization), all of whom had access to the same e-prescribing technology and received relatively intensive implementation and technical support (organization-technology processes). For provider-technology processes, we did not quantify usage frequency as a continuous variable because all providers were incentivized to use the system for 100% of prescriptions and thus had very high usage rates. Instead, we minimized variability in our dataset by limiting the study to providers who had used the e-prescribing system to write a minimum of 75 prescriptions.

The outcome variable, prescribing errors, was assessed using a rigorously controlled and previously validated manual review process in which research nurses used a standardized methodology to evaluate paper and electronic prescriptions.

The results of this study were striking. Among providers who adopted e-prescribing, error rates decreased from 42.5 to 6.6 per 100 prescriptions; among non-adopters, error rates remained nearly unchanged (37.3 to 38.4 per 100 prescriptions).33 Capturing the structural elements associated with technology, provider, and patient population allowed us to perform appropriate adjustment in the statistical model, and designing the study to control the variability in the remaining structural and process elements simplified the analyses.
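Reading the reported rates side by side (back-of-the-envelope arithmetic on the published figures, not an analysis reported by the authors):

    \[
    \begin{aligned}
    \Delta_{\text{adopters}} &= 6.6 - 42.5 = -35.9 \text{ errors per 100 prescriptions} \\
    \Delta_{\text{non-adopters}} &= 38.4 - 37.3 = +1.1 \\
    \Delta_{\text{adopters}} - \Delta_{\text{non-adopters}} &= -35.9 - 1.1 = -37.0
    \end{aligned}
    \]

That is, adopters saw a relative reduction of roughly 35.9/42.5, or about 84%, while non-adopters were essentially flat.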

Ambulatory prescribing safety in two EHRs

In this pre-post study, the outcome of interest was also prescribing errors, but the primary comparison was between use of two different EHR systems, an in-house system that was replaced at the institutional level by a commercial EHR system (Abramson EL, Patel V, Malhotra S, et al; unpublished data).34


We applied the Triangle Model by inventorying the features available in each technology. The locally developed e-prescribing system provided very little clinical decision support, whereas the commercial system provided a wide variety of clinical decision-support alerts and default dosing with the potential to reduce prescribing error rates. At the provider level, we adjusted for demographics and years in practice, and, among the patients, we restricted eligibility to adults and adjusted for age, sex, and insurance status. This study was conducted in a single organization, where the locally developed system was replaced by the commercial system institution-wide, all physicians underwent the required training, and use of the new system was mandatory (organization-technology and organization-provider processes). The study showed that implementation of the commercial system was associated with a marked fall in the rate of prescribing errors in the short term, with a further decrease at 1 year. However, when inappropriate abbreviations were excluded from the analysis, the rate of errors increased immediately after the transition to the new system, and at 1 year returned to baseline.

Concurrently, we sought additional insight into the provider-technology processes through a survey, semistructured interviews, and field observations; this qualitative data collection was performed concurrently with our quantitative data collection. Among other findings, the results suggested that physicians perceived the locally developed system as faster and easier to use, that the clinical decision-support alerts in the new system led to ‘alert fatigue’ and were often over-ridden, and that few users knew how to use system shortcuts to increase efficiency. These findings provided additional insight into the potential reasons behind the observed spike in certain types of prescribing errors during the transition from one system to the other (Abramson EL, Patel V, Malhotra S, et al; unpublished data). By considering these factors in the design of the research, and conducting qualitative and quantitative evaluation simultaneously, we increased the explanatory power of our study.

DISCUSSION AND IMPLICATIONS

Research on the effects of health IT may oversimplify complex issues if health IT is treated as a simple categorical variable (present or absent, or before or after). Capturing more detailed predictor variables about the technology, users, and the surrounding context increases the ability to interpret findings and compare studies, while minimizing the need to cite unmeasured variables as potential explanations of results. In this paper, we have proposed a more comprehensive evaluation model specifically designed for studies of the quality or safety effects of health IT. The Triangle Model specifies that research studies should assess structural elements (the technology, the provider using it, and the organizational setting) and process variables (provider-technology processes such as usage, organization-technology processes such as infrastructure support, and organization-provider processes such as quality improvement initiatives), and that evaluations should adjust for characteristics of the patient population.

The Triangle Model carries the implication that unmeasured structure and process variables may account for why field studies of the effects of health IT on quality, safety, and efficiency have had mixed results, with some showing the expected improvements, others failing to find any effect, and others revealing adverse effects.3-7

As an example, we can apply the Triangle Model to understanding two high-profile studies of a commercial CPOE system. Although this system had been shown to reduce prescribing errors and adverse drug events,7 Han et al found that the system was associated with a mortality increase in a pediatric intensive care unit,5 whereas Del Beccaro and colleagues found no such association in a similar setting.8 A critical review of these two papers shows that both sets of researchers came up with plausible potential explanations for their results, but since data were not collected in a systematic fashion as part of the evaluation, no definitive conclusions can be drawn.

Han et al, as well as other commentators,35 attribute their findings to a number of variables they did not measure. They describe some aspects of the system that they felt may have presented usability barriers (interrupting provider-technology processes), as well as problems with the hospital’s technical infrastructure such as lack of order sets and an overloaded wireless network (organization-technology processes), and perceived negative effects on workflow and communication (organization factors).5 Similarly, Del Beccaro et al list several unmeasured factors that may have played a role at the level of the technology and the organization, as well as in the interactions between them. These included the organization’s construction of order sets and perceived emphasis on encouraging good communication processes among healthcare providers.8 Han et al suggested that the effect of the unmeasured factors was to render the CPOE system slower and less reliable than the old paper-based system and thus less safe in critical care; Del Beccaro et al dispute this interpretation and suggest it was based on an underestimate of the true time needed to place paper orders. However, neither study actually measured the speed of ordering or the other factors cited as potential explanations for the differences between the results. We propose that an evaluation of the system conducted under the Triangle Model might have more systematically captured factors such as these that may have contributed to the healthcare quality outcomes, reducing the need for speculation about the causes of the differences.

Comparisons

In the Triangle Model, we have attempted to summarize and categorize elements from other evaluation models while emphasizing the relationship between technology and healthcare quality and safety outcomes. This creates some similarities to some other evaluation frameworks. Like the SEIPS Model of Carayon et al,19 the Triangle Model adapts the Donabedian Model for use in health IT evaluations. However, SEIPS considers primarily healthcare delivery processes, whereas in the Triangle Model, additional processes of interest include the interactions between the individual user and the technology, and between the organization and the technology. This reflects our emphasis on capturing technology usage patterns as potential predictors of quality outcomes. Our focus on individual usage of technology also distinguishes the Triangle Model from another evaluation model, Westbrook’s multi-method evaluation model,20 which encourages the study of organizational-level factors.

Limitations

The Triangle Model is not intended to be a model of diffusion, adoption, or implementation, nor a framework to study outcomes such as successful technology adoption, satisfaction, or workflow. Rather, it is designed to guide evaluations that seek to assess the effect of health IT systems on healthcare delivery, specifically the quality and safety of healthcare. It is thus most appropriate for summative evaluations of relatively mature health IT systems with good adoption rates. As others have noted, summative evaluations are less appropriate for systems in development or in the process of implementation.36 An additional limitation of this model is that we have not specified the measurement instruments or the level of measurement for the various predictor variables we have identified. However, we believe that the resulting flexibility may make this model more generalizable and widely applicable than it otherwise would be. Finally, this model has not been formally validated, and it is possible that additional dimensions could be determined to be useful.

Conclusions

This paper proposes a general model for conducting evaluations of the impact of health IT systems on the outcomes of healthcare quality and safety. This model outlines the domains and constructs that should be assessed, but does not specify whether the methods should be quantitative, qualitative, or hybrid. In our experience, we have found value in applying a variety of different methods, sometimes with the purpose of producing rich qualitative data to explain results, and other times taking advantage of the capabilities of electronic systems to obtain quantitative datasets that allow statistical modeling. We have provided illustrative examples from the domain of medication safety in the ambulatory setting, but the model is broadly applicable to a variety of health IT applications. An evaluation approach that integrates perspectives from health services research and biomedical informatics has the potential to capture the quality and safety effects of the health IT systems that are currently transforming the US healthcare system.

Funding The investigators are supported by the New York State Department of Health (NYS contract number C023699).

Competing interests None.

Provenance and peer review Not commissioned; externally peer reviewed.

REFERENCES

1. Blumenthal D, Tavenner M. The "meaningful use" regulation for electronic health records. N Engl J Med 2010;363:501-4.
2. Centers for Medicare & Medicaid Services (CMS), HHS. Medicare and Medicaid programs; electronic health record incentive program. Final rule. Fed Regist 2010;75:44313-588.
3. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409-16.
4. Chaudhry B, Jerome W, Shinyi W, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742-52.
5. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116:1506-12.
6. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197-203.
7. Upperman JS, Staley P, Friend K, et al. The impact of hospitalwide computerized physician order entry on medical errors in a pediatric hospital. J Pediatr Surg 2005;40:57-9.
8. Del Beccaro MA, Jeffries HE, Eisenberg MA, et al. Computerized provider order entry implementation: no association with increased mortality rates in an intensive care unit. Pediatrics 2006;118:290-5.
9. Anderson JG, Aydin CE, eds. Evaluating the Organizational Impact of Healthcare Information Systems. 2nd edn. New York: Springer Science, 2005. Health Informatics Series.
10. Greenhalgh T, Stramer K, Bratan T, et al. Introduction of shared electronic records: multi-site case study using diffusion of innovation theory. BMJ 2008;337:a1786.
11. Rogers E. Diffusion of Innovations. 4th edn. New York: Free Press, 1995.
12. Ammenwerth E, Iller C, Mahler C. IT adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak 2006;6:3.
13. Kaplan B. Evaluating informatics applications – some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001;64:39-56.
14. Patel VL, Kushniruk AW, Yang S, et al. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc 2000;7:569-85.
15. Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition for medical informatics. J Am Med Inform Assoc 2001;8:324-43.
16. Aarts J, Ash J, Berg M. Extending the understanding of computerized provider order entry: implications for professional collaboration, workflow and quality of care. Int J Med Inform 2007;76:S4-13.
17. Berg M, Aarts J, van der Lei J. ICT in health care: sociotechnical approaches. Methods Inf Med 2003;42:297-301.
18. Ash JS, Bates DW. Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc 2005;12:8-12.
19. Carayon P, Schoofs Hundt S, Karsh BT, et al. Work system design for patient safety: the SEIPS model. Qual Saf Health Care 2006;15(Suppl 1):i50-8.
20. Westbrook JI, Braithwaite J, Georgiou A, et al. Multimethod evaluation of information and communication technologies in health in the context of wicked problems and sociotechnical theory. J Am Med Inform Assoc 2007;14:746-55.
21. DeLone WH, McLean ER. The DeLone and McLean model of information systems success: a 10-year update. J Manag Inform Syst 2003;19:9-30.
22. Donabedian A. The quality of care. How can it be assessed? JAMA 1988;260:1743-8.
23. Mitchell PH, Ferketich S, Jennings BM. Quality health outcomes model. Health Policy 1998;30:43-6.
24. Berg M. Implementing information systems in health care organizations: myths and challenges. Int J Med Inform 2001;64:143-56.
25. Charlson ME, Pompei P, Ales K, et al. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis 1987;40:373-83.
26. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD-9-CM administrative databases. J Clin Epidemiol 1992;45:613-19.
27. Weiner J, Starfield B, Steinwachs D, et al. Development and application of a population-oriented measure of ambulatory care case-mix. Med Care 1991;29:452-72.
28. Rogers EM. Diffusion of Innovations. New York: Free Press, 1962.
29. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 1989;13:319-40.

Having Trouble Meeting Your Deadline?

Get your assignment on Comparison of Key Evaluation Models in Health Informatics completed on time. avoid delay and – ORDER NOW

information technology. MIS Quarterly 1989;13:319e40. 30. Davis F. User acceptance of information technology: system characteristics,

user perceptions and behavioral impacts. Int J Man Mach Stud 1993;38:475e87. 31. Goodhue D, Thompson R. Task-technology fit and individual performance. MIS

Quarterly 1995;19:213e36. 32. National Committee for Quality Assurance. Physician Practice

ConnectionsdPatient-Centered Medical HomeTM. Washington DC: National Committee for Quality Assurance. http://www.ncqa.org/tabid/631/Default.aspx (accessed 21 Jul 2011).

33. Kaushal R, Kern LM, Barron Y, et al. Electronic prescribing improves medication safety in community-based office practices. J Gen Intern Med 2010;25:530e6.

34. Abramson EL, Malhotra S, Fischer K, et al. Transitioning between electronic health records: effects on ambulatory prescribing safety. J Gen Intern Med 2011;26:868e74.

35. Sittig DF, Ash JS, Zhang J, et al. Lessons from “unexpected increased mortality after implementation of a commercially sold computerized physician order entry system”. Pediatrics 2006;118:797e801.

36. Friedman CP. “Smallball” evaluation: a prescription for studying community-based information interventions. J Med Libr Assoc 2005;93(4 Suppl):S43e8.


Sociotechnical Analysis of a Neonatal ICU

Leanne CURRIE,a,b,e Barbara SHEEHAN,a Phillip L. GRAHAM III,c,e Peter STETSON,b,d,e Kenrick CATO,a and Adam WILCOX b,d,e

aSchool of Nursing, bDept of Biomedical Informatics, cDept of Pediatrics, dDept of Medicine, Columbia University; and eNew York Presbyterian Hospital, New York, N.Y., USA

Abstract: Sociotechnical theory has been used to inform the development of computer systems in the complex and dynamic environment of healthcare. The key components of the sociotechnical system are the workers, their practices, their mental models, their interactions, and the tools used in the work process. We conducted a sociotechnical analysis of a neonatal intensive care unit towards the development of decision support for antimicrobial prescribing. We found that the core task was to save the baby in the face of complex and often incomplete information. Organizational climate characteristics included pride in clinical and educational practice. In addition, analysis of the structure of work identified interdisciplinary teamwork with some communication breakdowns in an interruptive work environment. Overall, sociotechnical analysis provided a solid method for understanding the work environment during the decision support development process.

Keywords: Sociotechnical analysis, Clinical decision support

1. Introduction

Clinical information systems are gradually being integrated into the healthcare environment. Many systems have been shown to prevent medical errors [1]; however, some systems have increased medical errors [2]. One of the main reasons attributed to failed systems is a lack of understanding of the sociotechnical environment before, during, and after system implementation [2, 3]. Sociotechnical theory has been used to inform the development of systems in the complex and dynamic environment of healthcare [4]. The key components of the sociotechnical system are the workers, their practices, their mental models, their interactions, and the tools (or artefacts) that are used in the work process. Proponents of sociotechnical theory posit that with a deep understanding of the work processes and work environment of the workers, technologies can be developed to support the work, rather than having technologies replicate poorly designed non-technical systems.

2. Objective

The purpose of this study was to examine the sociotechnical environment of the neonatal intensive care unit prior to the development of a clinical decision support system for antibiotic prescribing and management. Ethnographic observations, focus groups and key informant interviews were conducted with clinicians responsible for antimicrobial prescribing with the goal of understanding the sociotechnical environment.

3. Background

Safety scientists have used sociotechnical theory to understand complex systems such as work related to nuclear reactors, chemical industries, and others. Complex sociotechnical systems are socially constructed and dynamic cultures that are defined by their stories and rituals [5]. In contrast to an open systems model of organizational behaviour, in which activities are thought to be rational and orderly, activities in socially constructed systems may be irrational or disorderly. In this context, there is a 'continual and collective reality building process' that provides meaning to the work. In order to understand sociotechnical culture, one must understand the meaning of the work. However, one cannot grasp the meaning of work without knowing the core task.

The components of the core task are the characteristics of work, the objective of work and other external influences. In order to understand organizational culture, the structure of work (including tools, technologies and other artefacts), the organizational climate, and conceptions about work demands must be identified. The collective components of the core task provide the structure for sense making (i.e., internal understanding of core processes). The components of organizational culture provide the foundation for responses to the task at hand. Using this model, the relationship between the core task and organizational culture can be used to model dynamic clinical activities and to understand inherent social constructs.

Applying sociotechnical theory to the problem of antimicrobial resistance can facilitate an understanding of where and when technology can be used to support antibiotic prescribing decision making. The problem of antimicrobial resistance is growing in the acute care setting. Because of potentially lethal sequelae, babies in neonatal intensive care units (NICUs) who are suspected of having an infection are aggressively treated with antibiotics, often despite incomplete clinical information to guide a decision [6]. Several groups have encouraged efforts to promote practices that will decrease antimicrobial resistance; however, adherence to these guidelines has been documented to be inconsistent. The use of decision support for guidelines related to antibiotic prescribing has been examined by several researchers; however, none of the studies to date have examined the NICU setting [7]. To date, high-level studies reporting on decision support in the NICU have been limited to those examining total parenteral nutrition (TPN) and physiologic monitoring [8]. Researchers have examined clinical decision support and computerized provider order entry systems for the management of antibiotic prescribing [9]; however, such systems are not ubiquitous [2].

4. Materials and Methods

This study was conducted in two NICUs in an academic medical center in a large metropolitan city in the northeastern United States. Both NICUs are situated in quaternary care centers and thus receive patients from lower-level local and regional care centers. Both units are affiliated with medical and nursing schools where medical interns, medical residents, neonatology fellows, nurses, and nurse practitioners receive their training. Both units have nurse practitioner teams. A team of neonatologists functions as the physicians in charge and is responsible for overseeing the day-to-day care of all patients in the unit.

4.1. Data Collection and Analysis

We conducted focus groups and key informant interviews with medical residents, pharmacists, nurses, nurse practitioners, neonatology fellows, and neonatal attending physicians. IRB approval was received and informed consent was obtained prior to all interviews. Data were collected between January and June 2008. Focus groups and key informant interviews lasted approximately one hour each and were audio recorded. Audio recordings were transcribed by a professional transcription service and were verified by the researchers. Data were analyzed for themes related to the organizational core task and organizational culture.

5. Results

Thirty-three clinicians participated in the focus groups and key informant interviews. Forty-eight hours of ethnographic observations were carried out at varying time points during the day. Using the Reiman and Oedewald framework, the following characteristics of the NICU environment were identified. Figure 1 shows the sociotechnical model with factors contributing to the core task and factors contributing to organizational culture.

Figure 1. Culture Task Continuum in the Neonatal ICU

5.1. Organizational Core Task

Factors contributing to the organizational core task were the objective of work, characteristics of work, and external influences. The objective of work involves saving the baby while, if possible, preventing morbidity. If a baby was stable, the objective of the work was for the baby to 'feed and grow.' These themes were identified across focus groups and interviews. The characteristics of the work are that the babies are complex, their presentation with infection is vague, and the overall work is variable. The theme of managing antibiotics in the face of vague signs and symptoms was evident across provider groups. The following quote from a seasoned nurse practitioner captures the organizational core task:

“If you’ve ever seen a preterm infant die of gram negative sepsis where in two hours they go from being fine, eating, looking around, active and within, dead within hours. It’s really, really horrendous. So any time these kids do anything, they don’t run fevers the way pediatric patients do. They don’t give you a clear sense that they’re septic. Sepsis looks like NEC looks like, whatever they’re doing, it all presents the same way.”


5.2. Collective Sense Making

According to Reiman & Oedewald, the organizational core task 'creates constraints and requirements for activities' [5] such that the activities will make sense to the clinicians during their work process. Policies and procedures were verbalized by all participants, reflecting their mental model of the processes associated with achieving the core task. For example, when asked about types of technologies that might support decision making around antibiotic prescribing, this attending neonatologist replied as follows:

“This is a fairly conservative unit. I think we’re fairly minimal in the antibiotics that we are using in our population. I think, you’re sort of looking for an algorithm of approach which we sort of drilled into our residents already. I’m not sure if they actually need it written down.”

5.3. Organizational Culture

The components of organizational culture included multidisciplinary rounds with multiple interruptions using highly technical tools. Other artefacts included paper copies of policies related to treating neonatal sepsis, paper copies of the neonatal medication 'bible' Neofax, and information managed across different technologies. The organizational climate components included mentoring and education as well as collaboration. Clinicians from both NICUs expressed a certain pride in practice in describing the uniqueness of the NICU clinical environment. The following quote from a neonatology fellow exemplifies the conception about the work and work demands:

“….we’re very conservative here…… the population we deal with tend to be infection prone, tend to have lines following preop time, tend to need long-term TPN, tend to be with us for months and are considered immunosuppressed when they are premature.”

5.4. Ways of Responding to Tasks

According to sociotechnical theory, the core task, mental model, and organizational climate will create clinician behaviours. Several examples of this process were present in our data. For example, the following statement from a medical resident (low man on the totem pole) was in response to the question "How do you decide which antibiotics to prescribe?"

“Usually we do the ordering, the actual eclipsys ordering and figuring out which dose to use based on usually Neofax, but most of the time the decision of which antibiotic we use comes from a higher level.”

6. Discussion

We conducted a sociotechnical analysis of a neonatal intensive care unit and found that the core task was to save the baby in the face of vague symptoms, complex problems, and the use of multiple technologies. This core task was present in the focus group and interview data from all participant groups, which is consistent with Reiman and Oedewald's assertion that cultures are socially constructed. The core task was the primary driver for all protocols and plans of care that were described by our participants. Thus, the clinicians were making sense of their activities and building mental models about their work activities. The characteristics of organizational culture included teaching and learning, multitasking, and adapting based on the 'higher level' decision makers. The latter is important to understand since decision support via CPOE targets the order writer, rather than the senior decision maker. Decision support via CPOE may be the incorrect target in this setting.

7. Conclusion

In conclusion, we found that the NICU system was a very dynamic setting that was indeed socially constructed. Future research into the best methods of providing decision support to key decision makers will be critical in this setting.

References
1. Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc. 1999;6(4):313-21.
2. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293(10):1197-203.
3. Kaplan B. Evaluating informatics applications – some alternative approaches: theory, social interactionism, and call for methodological pluralism. International Journal of Medical Informatics. 2001;64(1):39-56.
4. Westbrook JI, Braithwaite J, Georgiou A, Ampt A, Creswick N, Coiera E, et al. Multimethod evaluation of information and communication technologies in health in the context of wicked problems and sociotechnical theory. J Am Med Inform Assoc. 2007;14(6):746-55.
5. Reiman T, Oedewald P. Assessment of complex sociotechnical systems: theoretical issues concerning use of organizational culture and organizational core task concepts. Safety Science. 2007;45(7):745-68.
6. Jordan JA, Durso MB, Butchko AR, Jones JG, Brozanski BS. Evaluating the near-term infant for early onset sepsis: progress and challenges to consider with 16S rDNA polymerase chain reaction testing. J Mol Diagn. 2006;8(3):357-63.
7. Thursky KA. Use of computerized decision support systems to improve antibiotic prescribing. Expert Review of Anti-infective Therapy. 2006;4(3):491-507.
8. Tan K, Dear PRF, Newell SJ. Clinical decision support systems for neonatal care. Cochrane Database of Systematic Reviews. 2005;(2).
9. Shojania KG, Yokoe D, Platt R, Fiskio J, Ma'luf N, Bates DW. Reducing vancomycin use utilizing a computer guideline: results of a randomized controlled trial. J Am Med Inform Assoc. 1998;5(6):554-62.

Email address for correspondence: [email protected]


7. Descriptions of Methods and Techniques

(Shown in alphabetical order by name of method or technique)

Analysis of Work Procedures

Areas of Application
When assessing what actually happens compared to the expectations of what should happen, for instance, with respect to:

• Analysis of the extent to which an IT system is being used as expected (during the implementation phase)

• Impact assessment (during the Evolution Phase)

Description
The methods listed here are not dedicated or designed specifically for evaluation studies, but they may clearly have a role as part of an evaluation study. The intention thus is to provide an overview of some well-known options:

• The Learning Organization: This paradigm theory and its strategies focus on analysis of work procedures to ascertain whether the organization has the ability consciously to incorporate experiences, whereby new or changed structures and work processes are continuously established (see Argyris and Schön 1996).

• Enterprise modeling: A number of diagramming techniques are used for enterprise modeling. A recent overview of some of these is provided by Shen et al. (2004).

• Business Process Reengineering: Here the focus of the work process analysis is aimed at radical innovation and changes in work processes within the organization (see, for instance, the review in Willcocks 1996).

• Use Cases and scenarios: Use Cases describe the expectations of what the IT system should accomplish in a given situation but not how – that is, the method aims at capturing the essence of the activities in a purpose-oriented way. Some variations of Use Cases use a combination of present and future cases. For further details, see, for instance, Alexander and Maiden (2004). A scenario is another way of capturing the essence of activities. See, for instance, Carroll (1995) and Alexander and Maiden (2004).

• Total Quality Management: This focuses on flaws and shortcomings in an organization with the specific aim of redressing them.

• Health Technology Assessment (HTA): Here it is the framework for assessment rather than the actual methods and techniques that are used. Goodman and Ahn (1999) provide a basic overview of HTA principles and methods.

• Computer-Supported Cooperative Work (CSCW): The focus of the work process analysis here is on the interaction between cooperating partners. As Monk et al. (1996) state, "[A]ll work is cooperative, and so any work supported is computer-supported cooperative work (CSCW)". These can be either person-to-person or person-to-machine relationships. An important aspect of this form of analysis is to define tasks and delegate responsibility according to competencies. See, for instance, Andriessen (1996) and other chapters in the same book.

• Cognitive Task Analysis: This approach, which addresses the cognitive aspects of work procedures, is an emerging approach to the evaluation of health informatics applications. Kushniruk and Patel (2004) review this field.

Assumptions for Application
The use of diagramming techniques and other forms of graphical modeling requires experience, including an understanding of the principles and procedures involved and their pitfalls, perils, and assumptions. Often it also requires the ability to conceptualize, as it is impossible to include all the details and variations of the organizational work processes. On the other hand, users have special means of interpretation and conception, thereby capturing what goes on and what is and is not important.

Perspectives
Virtually all traditional diagramming techniques or scenarios for the description of work processes are based on the assumption that work processes are either accomplished sequentially with little variation or that all relevant aspects can, nevertheless, be safely represented by means of the diagramming technique in question. Variations of work processes are exactly the aspects that are often not shown in diagram techniques and scenarios:

"[W]orkprocesses can usually be described in two ways: the way things are supposed to work and the way they do w o r k . . . People know when the 'spirit o f the law' takes precedence over the 'letter o f the law'"

(Gruding 1991)

It is still outside the capability of diagramming techniques to distinguish and incorporate ergonomic and cognitive aspects of the work processes, including the difference between how beginners and experts work and think. See, for instance, the discussion in (Dreyfus 1.997) or the rrsum6 in (Brender 1999).

Frame of Reference for Interpretation
If one wishes to measure the effect of an IT system from a work process point of view, it is a prerequisite that there be an explicit and unambiguous frame of reference, also called baseline values. Depending on the questions of the study, the frame of reference could, for instance, be the description of the work process for the 'old' system or for the intended new system (from a preliminary analysis or from the User Requirements Specification). The frame of reference could also be what was prescribed for the new or changed work processes prepared before the 'new' system was put into operation.

A frame of reference is normally not used during the Explorative Phase of an investigative study of work processes.

Perils and Pitfalls
Pitfalls in connection with descriptions of work procedures are indirectly evident from the aspects mentioned under Perspectives above. In other words, the most important thing is to find the method that best matches the information needs you have in the given context.

As the methods described are so different, careful attention should focus on the objective of the study before choosing a method, while searching for descriptions of advantages and disadvantages.

Advice and Comments
Methods normally used for analysis and design work during systems development could potentially be used for assessment studies in cases of qualitative and explorative assessments. However, it often requires some lateral thinking, and maybe the process needs to be adjusted to the project in hand.

In principle many of the systems analysis methods can be used for this purpose. It could certainly be an advantage to employ a technique that has already been used in the organization. This way you will draw on existing experience of the method while getting the opportunity to apply useful existing material as a frame of reference for evaluating the effect. See also under Functionality Assessment, for example.

References

Alexander IF, Maiden N. Scenarios, stories, use cases: through the systems development life-cycle. Chichester: John Wiley & Sons Ltd.; 2004.

Andriessen JHE. The why, how and what to evaluate of interaction technology: a review and proposed integration. In: Thomas P, editor. CSCW requirements and evaluation. London: Springer; 1996. p. 107-24.

Argyris C, Schön DA. Organisational learning II: theory, method, and practice. Reading: Addison-Wesley Publishing Company; 1996.

Brender J. Methodology for constructive assessment of IT-based systems in an organisational context. Int J Med Informatics 1999;56:67-86.

Carroll JM, editor. Scenario-based design, envisioning work and technology in system development. New York: John Wiley & Sons, Inc.; 1995.

Dreyfus HL. Intuitive, deliberative and calculative models of expert performance. In: Zsambok CE, Klein G, editors. Naturalistic decision making. Mahwah: Lawrence Erlbaum Associates, Publishers; 1997. p. 17-28.

Goodman CS, Ahn R. Methodological approaches of health technology assessment. Int J Med Inform 1999;56(1-3):97-105.

Grudin J. Groupware and social dynamics: eight challenges for developers. Scientific American 1991;Sept:762-74.


Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.

Monk A, McCarthy J, Watts L, Daly-Jones O. Measures of process. In: Thomas P, editor. CSCW requirements and evaluation. London: Springer; 1996. p. 125-39.

The publication deals with features of conversation and is thereby a valuable contribution for all assessments that address conversations.

Shen H, Wall B, Zaremba M, Chen Y, Browne J. Integration of business modeling methods for enterprise information system gathering and user requirements gathering. Comput Industry 2004;54(3):307-23.

Willcocks L. Does IT-enabled business process re-engineering pay off? Recent findings on economics and impacts. In: Willcocks L, editor. Investing in information systems, evaluation and management. London: Chapman & Hall; 1996. p. 171-92.

Supplementary Reading

Eason K, Olphert W. Early evaluation of the organisational implications of CSCW systems. In: Thomas P, editor. CSCW requirements and evaluation. London: Springer; 1996. p. 75-89.

Provides an arbitrary rating approach to assess cost and benefit for the different user groups and stakeholders.


Assessment of Bids

Areas of Application
Comparative assessment of a number of offers from one or more bidders/vendors.

Description
There is not yet a really good method for the assessment of bids from one or more vendors that covers the variety of bid procedures. Thus, the existing approaches need to be adapted to the requirements specification upon which the bids have been made. The consequence is that all the procedures described use a multimethod approach. Therefore, the following points are meant only to be inspirational examples and food for thought.

A. (Brender and McNair 2002; Brender et al. 2002): The method described uses a spreadsheet-based bid table, where each single requirement is numbered and given a value in accordance with the level of need (compulsory, desirable) and with the bidder's indication of fulfillment in their offer (0, 1, ..., 3). Initially an analysis is made of whether the offers overall indicate that there could be problems with one or more of the requirements. Then, each bid is evaluated overall against the client's interpretation of the same contingency table, based on a demo and a hands-on test, and so forth. Finally, an assessment of all deviations is made, specifically with regard to an evaluation of the overall potential to meet the objectives and the consequences of any shortfalls.

B. (Beuscart-Zéphir et al. 2002, 2003, and 2005): The usability aspects (see description in separate section) are used as a point of focus for a practical assessment of the offer, combined with a heuristic evaluation or a multimethod approach to assess other aspects, both emanating from a week-long, on-site, hands-on demo test.

C. Rating & Weighting Techniques: The procedure o f techniques for rating & weighting is to give each requirement a priority, then to multiply its degree o f fulfillment in the bid solution with the priority. Finally, all numbers are summed into a single figure for all r e q u i r e m e n t s – the objectives fulfillment. See, for instance, (Jeanrenaud and Romanazzi 1994; and Celli et al. 1998).


D. Balanced Scorecard: See the separate description of this method.

E. Structured product demonstrations (Stausberg 1999): This methodology entails a gradual selection based on a strategic objective and technical functionality requirements, followed by a combination of three activities: (1) demos with quantitative assessments, (2) benefit analysis, and (3) test installation.

Assumptions for Application
Before the preparation of a requirements specification, it is important to be clear on which assessment method will be used. This is because the chosen method has to comply with the type of the requirements specification and with the principles used to prepare it.

Several of the methods need full or partial installation of the systems and products that are to be assessed. This necessitates that the supplier is willing to install the product and also that it is possible to do so in the organization, as well as within its technical infrastructure. In itself, this may place a substantial demand on resources.

Method A assumes a stringent and formalized bid, which either is or can be converted to distinct response categories, as in '0' for "not included and not possible" ... to '3' for "fully achieved by the offer". It needs to be possible for the categories to be converted into objects for either mathematical or logical operations in a spreadsheet.

Method B assumes (1) that the bids are based on ready-made products; (2) that the aspects of usability in the requirements specification are either operational or that professional assessors can assist with the rating; and (3) that the task and activity descriptions of the organization are included as part of the domain description in a requirements specification and therefore can be part of the frame of reference for the assessment.

Method C above assumes (1) that the requirements are mutually independent; (2) that different, equal, and equally desired solutions for a (partial) function will obtain the same outcome; and (3) that the assessment of the degree to which each requirement fulfills the evaluation criteria can be converted into mathematical calculations.

Of the methods mentioned above, only method B requires special experience with the methods involved.


Perspectives
Cultural factors are significant not only for the choice of the actual IT solution and how users at different levels within the organization get involved, but also for organizing the procedure and thereby the methods applied for the acquisition of IT systems. As a matter of fact, within the EU tenders are required for the procurement of purchases in excess of a certain financial limit, and there are regulations, for instance in Denmark, for the involvement of personnel with regard to technological developments. This is markedly different in other parts of the world, as, for instance, in the United States and Asia. These differences have a decisive influence on the choice of method for the preparation of a requirements specification and for a bid assessment. The methods described are all useful within Western culture.

It is rare that an offer from a vendor meets all the demands from the start and at a reasonable price. In other words, making a choice will involve some 'give and take'.

All the methods are based on the notion that one cannot rely completely on vendors' statements, so one has to check the offered solution's suitability for its actual use to make sure that the bidder cannot get around any problems (Rind 1997). This is not necessarily a lack of honesty but could be due to too little understanding of the actual use – hence ignorance.

In the same way there may be hidden agendas in the user organization, or individuals among the stakeholders involved may have formed certain preconceptions. The choice of method should help to ensure objectivity.

Frame of Reference for Interpretation
The formal frame of reference for the assessment is the basis of the offer, that is, a requirements specification or similar, in which, according to European regulations (in the case of tendering), the assessment model and its criteria have to be specified.

Perils and Pitfalls
It is important not just to select the bid with the highest score and 'jump to it', but to optimize the objectives fulfillment (at strategic and tactical levels) and to minimize the potential problems and risks arising from deviations between the requirements and the future solution. Beuscart-Zéphir et al. (2002) document how easily one can be duped by first impressions.

One pitfall that all the outlined methods have in common is that there can be decisive cultural differences – for example, in how users can or cannot be involved (at various levels within the organization) when selecting the future solution.

Advice and Comments
There is not much literature about methods for choosing bids, probably because (1) it is very difficult to validate their suitability, so they are rarely published, and (2) it is inherently difficult to judge whether one has actually chosen the right solution. To further complicate matters, the method publications found on the subject in the literature do not follow up with a causal analysis of unexpected and unintentional occurrences in the future system against the method of choice. For example, the Rating & Weighting method is mentioned several times in the literature, but the consequence of the demand for mutual independence of the requirement items is not sufficiently discussed. And common to the three methods A, B, and E is the fact that the conclusions, and thereby their suitability, have not been verified by a third party.

In order to optimize the objectives fulfillment, it is important to acquire a thorough understanding of the solution by means of demos, hands-on sessions, conversations with the vendor and reference clients, as well as site visits, and so forth, because the vendor's sales and marketing people are trained to camouflage or avoid possible shortcomings in the solution offered (Rind 1997).

Regarding method B, the week-long hands-on test of the system turned out to be decisive for the final choice of bid because it was not until then that the nitty-gritty details were discussed.

Bevan (2000) deals with human error in the use of IT systems. It is a valuable source of inspiration – for instance, as a checklist when assessing system design, and thus also when assessing a vendor's bid.

See also under the description of Standards.


References

Beuscart-Zéphir M-C, Menu H, Evrard F, Guerlinger S, Watbled L, Anceaux F. Multidimensional evaluation of a clinical information system for anaesthesiology: quality management, usability, and performances. In: Baud R, Fieschi M, Le Beux P, Ruch P, editors. The new navigators: from professionals to patients. Proceedings of the MIE2003; 2003 May; St. Malo, France. Amsterdam: IOS Press. Stud Health Technol Inform 2003;95:649-54.

Beuscart-Zéphir MC, Watbled L, Carpentier AM, Degroisse M, Alao O. A rapid usability assessment methodology to support the choice of clinical information systems: a case study. In: Kohane I, editor. Proc AMIA 2002 Symp on Bio*medical Informatics: One Discipline; 2002 Nov; San Antonio, Texas; 2002. p. 46-50.

Beuscart-Zéphir M-C, Anceaux F, Menu H, Guerlinger S, Watbled L, Evrard F. User-centred, multidimensional assessment method of clinical information systems: a case study in anaesthesiology. Int J Med Inform 2005;74(2-4):179-89.

Bevan N. Cost effective user centred design. London: Serco Ltd. 2000. (Available from http://www.usability.serco.com/trump/. The website was last visited 15.6.2005.)

Brender J, Schou-Christensen J, McNair P. A case study on constructive assessment of bids to a call for tender. In: Surján G, Engelbrecht R, McNair P, editors. Health data in the information society. Proceedings of the MIE2002 Congress; 2002; Budapest, Hungary. Amsterdam: IOS Press. Stud Health Technol Inform 2002;90:533-38.

Brender J, McNair P. Tools for constructive assessment of bids to a call for tender – some experiences. In: Surján G, Engelbrecht R, McNair P, editors. Health data in the information society. Proceedings of the MIE2002 Congress; 2002; Budapest, Hungary. Amsterdam: IOS Press. Stud Health Technol Inform 2002;90:527-32.

Celli M, Ryberg DE, Leaderman AV. Supporting CPR development with the commercial off-the-shelf systems evaluation technique: defining requirements, setting priorities, and evaluating choices. J Healthc Inf Manag 1998;12(4):11-9.

A fairly thorough case study using the rating & weighting method.

Jeanrenaud A, Romanazzi P. Software product evaluation metrics: a methodological approach. In: Ross M, Brebbia CA, Staples G, Stapleton J, editors. Proceedings of the Software Quality Management II Conference "Building Quality into Software". Southampton: Comp. Mech. Publications; 1994. p. 59-69.

Rind DM. Evaluating commercial computing systems. [Editorial]. M.D. Computing 1997;14(1):6-7.

Editorial comment pointing out the bias in vendors' sales talk.

Stausberg J. Selection of hospital information systems: user participation. In: Kokol P, Zupan B, Stare J, Premik M, Engelbrecht R, editors. Medical Informatics Europe '99. Amsterdam: IOS Press. Stud Health Technol Inform 1999;68:106-9.

Supplementary Reading
Please note the legislative difference between the United States and the European Union. In the EU all purchases over a certain value have to be submitted to tender. Nevertheless, the references all contain inspiration and good advice for one's own evaluation.

Beebe J. The request for proposal and vendor selection process. Top Health Inform Manag 1992;13(1):11-9.

A case study using a simple rating & weighting technique to preselect vendors and products.

Einbinder LH, Remz JB, Cochran D. Mapping clinical scenarios to functional requirements: a tool for evaluating clinical information systems. Proc AMIA Annu Fall Symp 1996;747-51.

A case study using a simple rating technique, but instead of a weighted summation it simply uses a summary principle.

Feltham RKA. Procurement of information systems effectively (POISE): using the new UK guidelines to purchase an integrated clinical laboratory system. In: Greenes RA, Peterson HE, Protti DJ, editors. Medinfo'95. Proceedings of the Eighth World Congress on Medical Informatics; 1995 July 23-27; Vancouver, Canada. Edmonton: Healthcare Computing & Communications Canada Inc; 1995. p. 549-53.

Brief description of a standardized procedure regarding IT acquisition based on a weighting score mechanism.

Friedman BA, Mitchell W. An analysis of the relationship between a pathology department and its laboratory information system vendor. Am J Clin Path 1992;97(3):363-68.

Discusses perils in the different types of relationships between a vendor of an IT system and the client.

Gell G, Madjaric M, Leodolter W, Köle W, Leitner H. HIS purchase projects in public hospitals of Styria, Austria. Int J Med Inform 2000;58-9:147-55.

Outlines an approach to select one from a group of bids by means of a combination of established criteria, functional requirements, and on-site trials.

Madjaric M, Leodolter W, Leitner H, Gell G. HIS purchase project: preliminary report. In: Kokol P, Zupan B, Stare J, Premik M, Engelbrecht R, editors. Medical Informatics Europe '99. Amsterdam: IOS Press. Stud Health Technol Inform 1999;68:115-20.

A case study and some of their experiences.


Balanced Scorecard

Areas of Application

Ongoing optimization of the outcome of a development project by balancing focus areas by means of a set of indicators for a set of strategic objectives.

Description
Balanced Scorecard (BSC) is based on a principle of critical success factors in terms of a limited set of performance indices. These are continuously monitored and are balanced against each other in an attempt to create an appropriate relation between their results (Shulkin and Joshi 2000; Protti 2002). Normally the success factors relate to the strategic objectives at management level, hence the method is classified as a strategic management tool. There is, however, nothing wrong in having the performance-related indices focus on other areas (Shulkin and Joshi 2000) – for instance, to obtain a better service profile toward service users, or even taking it down to a personal level to assess performance.

The philosophy behind BSC is that of constructive assessment, as the method can be used through group dynamics to create understanding and awareness of how certain aspects of the organization work externally and internally (Protti 1999 and 2002).

Assumptions for Application
The method requires some experience before it can be used in a larger context because, for instance:

1. Good results from this method depend on a good and detailed understanding of the causal relationships relating to the success factors in the model, and thereby also on practical insight into the organization.


2. It requires the ability to handle the method in a dynamic project situation and in an organization undergoing change. This is because development projects involve constant changes in the object being monitored. In other words, the object under investigation is a moving target or, as expressed by Gordon and Geiger (1999), the organization is in double-loop learning.

Perspectives
The value norms of the project will inevitably become apparent from the discussion of how to prioritize the strategic success factors. Scandinavian culture is particularly able to harmonize this with the intentions behind broad user involvement because of its national regulations, and the method can consequently be used to create openness, commitment, and motivation through mutual understanding. For the same reason, this method is less valid in some other cultures.

Frame of Reference for Interpretation (Not applicable)

Perils and Pitfalls
Within cultures and organizations where there is a tradition of top-heavy management, a schism may occur between the management style and the need to get a detailed understanding of work processes and motivation from staff. Hence, it may in certain situations be difficult to get the necessary support and understanding from staff to carry the method through without hampering the outcome from a management perspective.

Advice and Comments

It is important to be aware that this is a project management tool to obtain a basis for decision making in a constructive development process, rather than a method that can stand completely on its own as an assessment tool.

References

Gordon D, Geiger G. Strategic management of an electronic patient record project using the balanced scorecard. J Healthc Inf Manag 1999;13:113-23.

Quite a good article to learn from should you wish to use this method.


Protti D. An assessment of the state of readiness and a suggested approach to evaluating 'information for health': an information strategy for the modern NHS (1998-2005). University of Victoria; 1999.

Protti D. A proposal to use a balanced scorecard to evaluate Information for Health: an information strategy for the modern NHS (1998-2005). Comput Biol Med 2002;32:221-36.

Protti argues in favor of the use of BSC and shows how to do it – in both short and long versions.

Shulkin D, Joshi M. Quality management at the University of Pennsylvania health system. In: Kimberly JR, Minvielle E, editors. The quality imperative, measurement and management of quality in healthcare. London: Imperial College Press; 2000. p. 113-38.

The book is an anthology on quality management, and the chapter referenced gives a good description of BSC (Scorecard Measurement and Improvement System) as a method.

Supplementary Reading

Kaplan R, Norton D. The balanced scorecard – measures that drive performance. Harvard Business Review 1992;70(1):71-9.

Kaplan R, Norton D. Strategic learning & the balanced scorecard. Strategy & Leadership 1996;24(5):18.

These authors are the originators of the method.

Niss K. The use of the Balanced ScoreCard (BSC) in the model for investment and evaluation of medical information systems. In: Kokol P, Zupan B, Stare J, Premik M, Engelbrecht R, editors. Medical Informatics Europe '99. Amsterdam: IOS Press. Stud Health Technol Inform 1999;68:110-14.


BIKVA (from Danish: "BrugerInddragelse i KVAlitetsudvikling"; translated: User Involvement in Quality Development)

Areas of Application

The method is used as a tool for making critical, subjective decisions about an existing practice.

Description
Like KUBI, this method originates from the social sector, where it has been developed as a constructive tool to improve conditions within the socio-medical sector with regard to specific action or problem areas (Krogstrup 2003; Dahler-Larsen and Krogstrup 2003).

The method is based on a reflective process in the sense of getting food for thought. Users of an initiative, which can, for instance, be service functions or an IT-based solution, are the source of information. Its procedure consists of the following steps:

1. During group interviews, users are asked to indicate and justify "why they are either satisfied or not satisfied" with a given service function or an IT function. The users, who are the primary stakeholders in connection with the service function or the IT function, will typically describe incidents and interactions.

2. Summarizing the information obtained during the group interview.

3. Field workers (in IT projects, probably the project management or the vendor of the service or IT function) are confronted with the users' statements and are encouraged to identify the reasons behind the incidents and interactions referred to. This is incorporated into the observations previously summarized.

4. In a second group interview, management goes through the connections between the observations and the explanations in order to get the points of view of the user group. This may lead to an iterative process with the objective of clarifying all the disparities that relate to the issues of quality identified by the users.

5. The overall conclusion is then presented to the decision makers in preparation for an assessment of the conclusion and possible initiatives.

Assumptions for Application
It can be a particularly unpleasant experience for an individual or an organization to uncover all its skeletons. It requires a psychologically and socially skilled moderator to facilitate the various processes to avoid being unnecessarily hard on anybody.

Perspectives
The built-in value norm of the BIKVA method is the users' attitudes and perceptions. However, this should not be seen as though user attitudes alone determine the criteria. The philosophy behind it is to force the parties concerned to speak to one another in a formalized way to gather material as input for a process of change and learning.

Quite clearly, this method is primarily of use in cultures with a strong tradition of user participation and openness, as is the case in the Scandinavian countries. The principle of user participation would not work in, say, Asian cultures. However, you don't have to go far south in Europe before user participation is far less common than in Scandinavia (Fröhlich et al. 1993) and before openness stops at management's (or others') desire for a good image and self-respect, 'la belle figure'. In other words, cultural circumstances can make it impossible to apply this method, even in Denmark.

The undisputed advantage of this method is that it gives the user on the ground a formal voice to express that something does not work without this being understood as negativism or lack of constructive cooperation.


Frame of Reference for Interpretation
The frame of reference is the users' view of where the problems lie. Having said that, it is of course also those who wear the shoes who can best feel if the shoes pinch.

Perils and Pitfalls
The pitfall is the representativeness of those involved, technically and functionally, as it is not guaranteed that everybody can participate. However, the most pressing problems in an organization will, under all circumstances, become evident from the start. Therefore, a minor bias in the representativeness does not necessarily have consequences in the long term. Consideration should, however, also be given to psychological factors in the representation of an organization with a broad user participation. Should the organization be large and complex, a Stakeholder Analysis method could be used to assess the issues regarding participation.

Advice and Comments
The amount of data resulting from this method can turn out to be overwhelming.

References

Dahler-Larsen P, Krogstrup HK. Nye veje i evaluering. Århus: Systime; 2003. (in Danish)

Fröhlich D, Gill C, Krieger H. Workplace involvement in technological innovation in the European Community, vol I: roads to participation. Dublin: European Foundation for the Improvement of Living and Working Conditions; 1993.

Krogstrup HK. Evalueringsmodeller. Århus: Systime; 2003. (in Danish)

Even if this reference and (Dahler-Larsen and Krogstrup 2003) are both in Danish, the description above should enable experienced evaluators to apply the method, or a version of their own, with benefit.


Clinical/Diagnostic Performance

Areas of Application
• Measurement of the diagnostic performance (as in measures of accuracy and precision, etc.) of IT-based expert systems and decision-support systems.

This type of assessment falls under the Technical Development Phase – before an expert system or decision-support system is implemented, but it can continue during the operational phase (the Adaptation Phase and eventually also the Evolution Phase).

Description
The clinical performance of the systems (for diagnostic, prognostic, screening tasks, etc.) is typically measured with the help of a number of traditional measures from medicine, such as accuracy, precision, sensitivity, specificity, and predictive values. Also see Measures and Metrics for some specific concepts and measures.
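By way of illustration only – a minimal sketch, not taken from the handbook, of how these customary measures are computed for a binary diagnostic task once the system's assertions have been tabulated against the frame of reference; the 2×2 counts below are hypothetical:

```python
# Minimal sketch for a binary diagnostic task; the counts are hypothetical.
# tp/fp/fn/tn come from tabulating the system's assertions against the
# frame of reference (e.g., a gold standard or expert-panel consensus).
tp, fp, fn, tn = 45, 5, 10, 140

sensitivity = tp / (tp + fn)                # true-positive rate
specificity = tn / (tn + fp)                # true-negative rate
ppv = tp / (tp + fp)                        # positive predictive value
npv = tn / (tn + fn)                        # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall agreement

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}, accuracy={accuracy:.2f}")
```

Note that for the multiple categorical and continuous decision problems mentioned below, these binary measures must be replaced by the generalized metrics reviewed in the references.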

There are thousands of references to clinical performance of an IT-based decision-support system, including case studies. Smith et al. (2003) provide a valuable overview of the literature, although it is limited to neural networks and to the use of purely statistical tools. The advantage of this reference is the authors' summary of metrics and measures for binary as well as multiple categorical and continuous decision problems – that is, when the diagnostic problem consists of two or more diagnoses or diagnoses on a continuous scale.

Assumptions for Application
This depends on the actual measures and (statistical) tools applied. Refer, therefore, to their original literature.

Perspectives
One of the perspectives of decision-support systems and specialist systems is that of being able to measure the diagnostic performance of an IT system just as if the IT system were a human being expressing the same.

(Footnote 5: If looked upon as an independent activity, expenditure can be quite large, but if the elements of the study can be included in the usual clinical work, one can equally have the chance of it becoming close to cost-free, apart from the planning stage and data analysis.)

The measures can without doubt give a significant statement of the value of an expert system's assertions (such as conclusions, recommendations, or similar). But IT systems are characterized by not knowing their own limitations in such a way that they could adjust their assertions in accordance with the actual situation. Similarly, these systems are unable to take a holistic view of the clinical object (the patient) in the way that a human being can. In other words, such systems may possibly be able to function with very high clinical, diagnostic performance within a narrow range of cases, but they cannot sensibly know when to abstain in all other cases.

Frame of Reference for Interpretation
The frame of reference is typically either a (predefined) 'gold standard' or a consensus reached by a panel of experts.

Perils and Pitfalls
The sources of error and pitfalls include partly general errors related to the use of concrete methods or metrics and partly problems related to the technology used for the development of clinical decision aids.

The first type of problem in assessing diagnostic performance is closely related to the problem of getting a study design that precludes a number of the pitfalls mentioned in Part III, such as the carryover effect (contamination), co-intervention, the Hawthorne effect, and so on. Friedman and Wyatt (1996) give a number of useful, overriding instructions for the organization of a clinical diagnostic performance study in order to avoid a number of the perils and pitfalls mentioned in Part III.

The other type of error is related to the technology used for the development of the decision-support system. All technology has its assumptions and limitations. When we deal with decision-support systems, there may be a requirement for mutual independence of the variables included, or there must be an adequate number of learning cases to avoid overfitting. Schwartzer et al. (2002) explore a number of these problems for neural networks; some of the problems are general and also valid for other types of decision-support systems.
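To make the overfitting concern concrete, here is a minimal sketch under stated assumptions (synthetic data; the scikit-learn library is assumed to be available): a model whose performance on its own learning cases far exceeds its performance on held-out cases is one warning sign that the number of learning cases is inadequate for the model's complexity.

```python
# Minimal sketch: comparing performance on learning cases versus held-out
# cases as a crude overfitting check. Synthetic data; scikit-learn assumed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=120, n_features=25, random_state=0)
X_learn, X_test, y_learn, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_learn, y_learn)
print("accuracy on learning cases:", model.score(X_learn, y_learn))  # ~1.00
print("accuracy on held-out cases:", model.score(X_test, y_test))    # lower
```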


Advice and Comments
The fact that an article like that of Kaplan (2001) only finds one single publication on decision-support systems and system performance illustrates how difficult it is to capture all relevant articles through a literature search and how difficult it is to go beyond one's own professional background (in this case the healthcare sector). The article together with other references is recommended as an introduction to the literature on clinical diagnostic performance.

References

Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer-Verlag; 1996.

Contains a really good introduction to assessments.

Kaplan B. Evaluating informatics applications – clinical decision support systems literature review. Int J Med Inform 2001;64:15-37.

Kaplan reviews the literature (although only for the years 1997–1998) on clinical performance of decision-support systems for the period and includes other review articles on the subject.

Schwartzer G, Vach W, Schumacher M. On the misuses of artificial neural networks for prognostic and diagnostic classifications in oncology. In: Haux R, Kulikowski C, editors. Yearbook of Medical Informatics 2002:501-21.

Smith AE, Nugent CD, McClean SI. Evaluation of inherent performance of intelligent medical decision support systems: utilizing neural networks as an example. Int J Med Inform 2003;27:1-27.

Supplementary Reading

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Includes a review of the literature relating to assessment of medical knowledge-based decision-support systems and expert systems.

Brender J, Talmon J, Egmont-Petersen M, McNair P. Measuring quality of medical knowledge. In: Barahona P, Veloso M, Bryant J, editors. MIE 94 Proceedings of the Twelfth International Congress of the European Federation for Medical Informatics; 1994 May; Lisbon, Portugal. 1994. p. 69-74.


Discusses concepts of correctness from traditional medicine, when used for an n * n + 1 contingency table. The authors suggest generalized metrics for, among others, dispersion (random distribution) and bias (systematic variance) together with kappa values.

Egmont-Petersen M, Talmon J, Brender J, McNair P. On the quality of neural net classifiers. AIM 1994;6(5):359-81.

Uses similar metrics as in (Brender et al. 1994), but specifically designed for neural networks.

Egmont-Petersen M, Talmon JL, Hasman A. Robustness metrics for measuring the influence of additive noise on the performance of statistical classifiers. Int J Med Inform 1997;46(2):103-12.

It is important to keep in mind that (changes in) variations of one's data source influence the results of one's own studies. That is the topic of this article.

Ellenius J, Groth T. Transferability of neural network-based decision support algorithms for early assessment of chest-pain patients. Int J Med Inform 2000;60:1-20.

A thorough paper providing measures and corrective functions to obtain transferability of decision-support systems.

Elstein AS, Friedman CP, Wolf FM, Murphy G, Miller J, Fine P, Heckerling P, Maisiak RS, Berner ES. Comparison of measures to assess change in diagnostic performance due to a decision support system. Proc Annu Symp Comput Appl Med Care 2000:532-36.

The authors make an empirical comparative investigation of ten different measures for diagnostic performance.

Jaeschke R, Sackett DL. Research methods for obtaining primary evidence. Int J Technology Assessment 1989;5:503-19.

Some of the methods (and the rationale for their use) for evaluation studies of therapeutic and diagnostic technologies are discussed with regard to the handling of the placebo effect, confounders, and biases. Advantages and risks in using RCT, case-control, and cohort studies are discussed.

Kors JA, Sittig AC, van Bemmel JH. The Delphi Method to validate diagnostic knowledge in computerized ECG interpretation. Methods Inf Med 1990;29: 44-50.

A case study using the Delphi method to validate diagnostic performance.

Malchow-Møller A. An evaluation of computer-aided differential


diagnostic models in jaundice [Doctoral dissertation]. Copenhagen: Lægeforeningens Forlag; 1994.

Describes the development and gives a comparative evaluation of diagnostic performances with three different types of decision-support systems.

Miller T, Sisson J, Barlas S, Biolsi K, Ng M, Mei X, Franz T, Capitano A. Effects of a decision support system on diagnostic accuracy of users: a preliminary report. JAMIA 1996;6(3):422-28.

A pilot case study.

Nolan J, McNair P, Brender J. Factors influencing transferability of knowledge-based systems. Int J Biomed Comput 1991;27:7-26.

A study of factors that influence the clinical diagnostic performance of decision-support systems and expert systems.

O'Moore R, Clarke K, Smeets R, Brender J, Nykänen P, McNair P, Grimson J, Barber B. Items of relevance for evaluation of knowledge-based systems and influence from domain characteristics. Public report. Dublin: KAVAS (A1021) AIM Project; 1990. Report No.: EM-1.1.

The report, which is a freely available public technical report, contains a detailed description of which aspects of knowledge-based systems have been assessed in the literature.

O'Moore R, Clarke K, Brender J, McNair P, Nykänen P, Smeets R, Talmon J, Grimson J, Barber B. Methodology for evaluation of knowledge-based systems. Public report. Dublin: KAVAS (A1021) AIM Project; 1990. Report No.: EM-1.2.

The report, which is a publicly available technical report, contains an in-depth description of how to assess knowledge-based systems.

Tusch G. Evaluation of partial classification algorithms using ROC curves. In: Greenes RA, Peterson HE, Protti DJ, editors. Medinfo'95. Proceedings of the Eighth World Congress on Medical Informatics; 1995 Jul; Vancouver, Canada. Edmonton: Healthcare Computing & Communications Canada Inc; 1995. p. 904-8.

A methodological study of the use of ROC curves for decision-support systems.

Wyatt J, Spiegelhalter D. Field trials of medical decision-aids: potential problems and solutions. In: Clayton P, editor. Proc Annu Symp Comput Appl Med Care 1991:3-7.

Based on the literature, the authors describe a number of biases related to the assessment of medical decision-support systems.


Cognitive Assessment

Areas of Application
Assessment of the cognitive aspects of the interaction between an IT system and its users, such as:

• Identification of where and why operational errors occur
• Identification of focus areas requiring improvement in user friendliness

(See also under Cognitive Walkthrough and Usability.)

Description
Cognitive aspects deal with the extent to which the system functions in accordance with the way the users think and work, in respect of user interaction with the IT system in general and with the user dialogue in particular. In plain language, this deals with the last '8 cm' – that is, the process from the eye registering the visual input until the input has been processed by the brain and converted into a new instruction. Thus it is closely related to and very much overlaps with the measurement of the ergonomic aspects (see under Usability). The focus below is on the cognitive aspects that are not covered by the concept of usability.

This type of test is task-oriented – that is, the user simulates (or even tries) to complete an actual departmental work process by means of the IT system. All these tests are recorded on video (because there is no time to digest the information during the session). Kushniruk and Patel (2004) provide a review of methods in this respect.

One of the methods used for analyzing video recordings is a diagram of the procedure (Beuscart-Zéphir, personal communication), where the course of the dialogue with the IT system is noted down in symbolic form as shown in the figure below. The example could illustrate a doctor trying to make notes in the electronic patient record while talking to the patient.


[Figure: example of a symbolic dialogue trace, with an explanation of symbols: transaction; one step backward; closed activity; dead-end; shift to a new subactivity; shift in search tactics]

Symbols could, for instance, be designed for the following aspects of the dialogue (a tallying sketch follows the list):

• How often did the user search in vain for information or for actual data fields where they believed them to be?
• How often was data input rejected simply because the user had misunderstood how the data had to be presented or what the field should be used for?
• How often did the user have to ask or to rack his or her brain to find out what to do? And how quickly did they find out, once they had tried it once or twice?
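A minimal sketch of how such event counts might be tallied once a video recording has been coded; the event codes and the coded trace here are hypothetical illustrations, not part of the method description:

```python
# Minimal sketch: tallying cognitively relevant events from a coded
# dialogue trace. The coding scheme and the trace itself are hypothetical.
from collections import Counter

coded_trace = [
    "transaction", "search_in_vain", "transaction", "input_rejected",
    "hesitation", "transaction", "search_in_vain", "step_backward",
]

tally = Counter(coded_trace)
total = len(coded_trace)
for event, count in tally.most_common():
    print(f"{event:>15}: {count}  ({count / total:.0%} of actions)")
```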

In an analogous approach, building on analytical constructs and techniques of Conversation Analysis, Meira and Peres (2004) suggest an evaluative approach that identifies gaps or breakdowns in user dialogues and maps the mismatches between user actions and software behavior.

Assumptions for Application
Assumptions for application depend entirely on what the results of the study are to be used for. It normally requires an experienced cognitive psychologist to assist in the investigation, as there is still no standardized and thoroughly tested method within the current literature on IT assessment. The difficulty with the use of this method is finding a direction for a synthesis and a conclusion on


the functionality of the IT system if one does not have a cognitive psychology background. Depending on the objective of the study, a suitable person with relevant experience in the field should be engaged. It is also a prerequisite that there are detailed activity descriptions or scenarios for the user tasks within the department.

Perspectives
The difference between this method and Cognitive Walkthrough (below) is the approach: Cognitive Walkthrough starts with a technical description or a physical realization of the system and transforms this into scenarios of its use, whereas Cognitive Assessment starts by using the users' practical everyday tasks to develop scenarios to test the IT-based system.

During interaction with an IT system, an understanding of what goes on in the system is important for user friendliness and user satisfaction, and for the user to know what he or she has to do next or where he or she can find the relevant information. The cognitive aspect is not only of great importance when learning the system, but it is indeed of particular importance for the number of operator errors during its life cycle. This could be a reason why EHR seems to work perfectly well in general practice but less successfully in clinical departments: Once you have learned to use the system and you use it a great deal every day, it becomes very easy, even though the system might be a bit clumsy. However, when there are competing activities for the many and varying staff, who only use the systems for short periods of time, cognitive aspects demand much more attention.

The scarcity of this type of assessment means that operational requirements, criteria, or recommendations have yet to be established. There is also a lack of established practice for incorporating the cognitive aspects into a requirements specification. Hence, cognitive aspects do not normally form an explicit part of a frame of reference in a traditional agreement with a vendor.

Frame of Reference for Interpretation
This method is normally used for explorative studies – that is, for studies intended to give a picture of a situation. Thus, there is no frame of reference in the normal sense.

However, there is a need for detailed descriptions of activities or scenarios of user activities in a department. What is an activity – in


other words, what is its objective? What is its premise? What decision elements does it entail? And in what context (physically, socially, mentally) does it take place? Who performs the activity? What is the consequence of inadequate or delayed completion of the activity? This information is then used as a basis for comparison for the assessment of the cognitive aspects of an IT system. An example is the preoperative patient record taking by an anaesthesiologist (see Beuscart-Zéphir et al. 1997) – for example, where are the records physically kept during the operation (during the study in question, the paper-based record used to be placed on the patient's stomach)? And how does the doctor quickly find the information he or she needs during a crisis?

Perils and Pitfalls
Studies within this area require specialist knowledge. However, the experience and level of expertise of the user is a pitfall that even the experienced performer can fall into. There is a difference in how novices and experts work and think (see, for instance, Dreyfus 1997 and Beuscart-Zéphir et al. 2000), and experts can hardly discern between what they themselves say they do and what they actually do (Beuscart-Zéphir et al. 1997; Kushniruk et al. 1997). See also the discussion in (Brender 1997a and 1999) and Part III of this handbook.

Advice and Comments
The questions raised in Description (above) can – initially – assist in finding out whether a system has serious cognitive problems, as can the analysis of operational errors in Functionality Analysis or the Usability methods.

References

Beuscart-Zéphir MC. Personal communication; 1997.

Beuscart-Zéphir MC, Anceaux F, Renard JM. Integrating users' activity analysis in the design and assessment of medical software applications: the example of anesthesia. In: Hasman A, Blobel B, Dudeck J, Gell G, Prokosch H-U, editors. Medical Infobahn Europe. Amsterdam: IOS Press. Stud Health Technol Inform 2000;77:234-8.

Beuscart-Zéphir MC, Brender J, Beuscart R, Ménager-Depriester I. Cognitive evaluation: how to assess the usability of information technology in healthcare. Comput Methods Programs Biomed 1997;54(1-2):19-28.


Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Brender J. Methodology for constructive assessment of IT-based systems in an organisational context. Int J Med Inform 1999;56:67-86.

This is a shortened version of the previous reference and more accessible regarding those aspects referred to in the text.

Dreyfus HL. Intuitive, deliberative and calculative models of expert performance. In: Zsambok CE, Klein G, editors. Naturalistic Decision Making. Mahwah: Lawrence Erlbaum Associates, Publishers; 1997. p. 17-28.

Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.

Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: cognitive approaches to evaluation of information systems and user interfaces. Proc AMIA Annu Fall Symp 1997:218-22.

The article clearly illustrates the difference between the users' subjective understanding and the objective measure of the same.

Meira L, Peres F. A dialogue-based approach for evaluating educational software. Interacting Comput 2004;16(4):615-33.

Supplementary Reading

Carroll JM, editor. Scenario-based design, envisioning work and technology in system development. New York: John Wiley & Sons, Inc.; 1995.

This book contains a number of articles written by professionals in system development. It can therefore be on the heavy side, but there is a lot of inspiration to be had regarding scenarios. The different articles deal with different forms of scenarios, their use, as well as a discussion of advantages, disadvantages, and assumptions.

Demeester M, Gruselle M, Beuscart R, Dorangeville L, Souf A, Ménager-Depriester, et al. Common evaluation methodology. Brussels: ISAR-T (HC 1027) Health Care Telematics Project; 1997. Report No.: Deliverable 5.1.

This publicly available technical report can be obtained by


contacting the author. It describes all the assessment methods of the project, including the ergonomic and cognitive assessments.

Jaspers MWM, Steen T, van den Bos C, Genen M. The think aloud method: a guide to user interface design. Int J Med Inform 2005;73:781-95.

The Think Aloud method combined with video recording was used as a means to guide the design of a user interface in order to combat the traditional problems with usability.

Kushniruk A, Patel V, Cimino JJ, Barrows RA. Cognitive evaluation of the user interface and vocabulary of an outpatient information system. Proc AMIA Annu Fall Symp 1996:22-6.

Kushniruk AW, Patel VL. Cognitive computer-based video analysis: its application in assessing the usability of medical systems. In: Greenes RA, Peterson HE, Protti DJ, editors. Medinfo'95. Proceedings of the Eighth World Congress on Medical Informatics; 1995 Jul; Vancouver, Canada. Edmonton: Healthcare Computing & Communications Canada Inc; 1995. p. 1566-9.

Kushniruk AW, Patel VL. Cognitive evaluation of decision making processes and assessment of information technology in medicine. Int J Med Inform 1998;51:83-90.

Keep these two authors in mind with regard to studies of cognitive aspects. Although some of their articles are conference proceedings, they contain many good details.

Patel VL, Kushniruk AW. Understanding, navigating and communicating knowledge: issues and challenges. Methods Inf Med 1998;37:460-70.

Describes how they carry out assessments using video recordings and think-aloud techniques.


Cognitive Walkthrough

Areas of Application
Assessment of user 'friendliness' on the basis of a system design, from specification, to mock-ups and early prototypes of the system, aimed at judging how well the system complies with the users' way of thinking, for instance:

• Identification of where and why operational errors occur
• Identification of causes behind problems with respect to user friendliness and consequently identification of areas for improvement (Patel et al. 1995)

. . . and thereby also

• The assessment of a demo IT system as a step in the process of choosing a system in a bid situation

The advantage of this method compared to many others is that it can be carried out at an early stage and with just a system specification as a basis. Therefore, the method can also be used to assess immature prototypes (Beuscart-Zéphir et al. 2002).

See also under Cognitive Assessment.

Description
Every single task or activity in an organization requires a combination of cognitive (i.e., intellectual) functions and physical actions. Cognitive Walkthrough is an analytical method designed to evaluate usability. The method addresses the question of whether a user with a certain degree of system and domain knowledge can perform a task that the system is designed to support. See (Horsky et al. 2003), which provides an overview of theories and approaches used for studying usability aspects as well as a case study. The method focuses on the cooperation between the cognitive functions and the parallel physical actions, while the ergonomic aspects are closely related to the organization of, and the interaction with, each of the individual fields on the screen images. There is a gradual transition and quite an overlap between cognitive and ergonomic aspects (see Usability and Cognitive Assessment) of the user interface of an IT system.

IO"2

Brender, McNair, Jytte, and Jytte Brender. Handbook of Evaluation Methods for Health Informatics, Elsevier Science & Technology, 2006. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/waldenu/detail.action?docID=306691. Created from waldenu on 2022-03-04 00:22:29.

C o p yr

ig h t ©

2 0 0 6 . E

ls e vi

e r

S ci

e n ce

& T

e ch

n o lo

g y.

A ll

ri g h ts

r e

se rv

e d .

~ANI}8OOI< OF–EVALUATION IvIIZTIdODS

Cognitive Walkthrough is a review method for which a group of experts prepare task scenarios either from specifications or from earlier prototypes (could be video-based) (Kushniruk et al. 1996). The experts investigate the scenarios either on their own or together with a user. The one acting as a typical user goes through the tasks with the aid of the interface of the scenario – "walking through the interface" – as they pretend that the interface actually exists and works. Each step of the process is analyzed from the point of view of the objective of the task, with the intention of identifying obstacles in the interface that make it either impossible or difficult to complete the task. Complicated or roundabout ways through the system's functions indicate that the interface needs a new function or simplification of an existing one.
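The following is a minimal sketch of how such a walkthrough might be recorded step by step; the per-step questions are one common formulation from the usability literature (not the handbook's own wording), and the step and finding shown are illustrative assumptions:

```python
# Minimal sketch: one record per walkthrough step, with a judgement and a
# note per question. Question wording and the data are illustrative only.
from dataclasses import dataclass, field

QUESTIONS = (
    "Will the user know what to try to do at this step?",
    "Will the user notice that the correct action is available?",
    "Will the user connect the action with the intended effect?",
    "Will the user see from the feedback that progress was made?",
)

@dataclass
class WalkthroughStep:
    action: str                                   # what the scenario says the user does
    findings: dict = field(default_factory=dict)  # question -> (ok, note)

step = WalkthroughStep("Open the patient's record from the worklist")
step.findings[QUESTIONS[1]] = (False, "the worklist icon carries no label")

for question, (ok, note) in step.findings.items():
    if not ok:
        print(f"Obstacle at '{step.action}': {question} -> {note}")
```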

Assumptions for Application
The method is designed to systematically go through all the user's possibilities of interaction with every screen of the IT system. The analyst is required to perform a manual simulation of the cognitive processes that form part of the successful execution of a task. In practice it could, however, be impossible to carry out a systematic cognitive walkthrough if the system is very complex.

It is necessary to know how the user interface will look; therefore, the description of the system needs to meet a certain level of detail.

It is a prerequisite that there is a task and activity analysis. See, for instance, under Analysis of Work Procedures, and see (Beuscart-Zéphir et al. 2002).

Normally, users of a (future) IT system do not have the experience to prospectively and methodically assess the cognitive aspects themselves, as analyzing one's own activities from a different point of view requires special awareness and some lateral thinking.

Perspectives
The difference between this method and Cognitive Assessment is the point of view. As a starting point, Cognitive Walkthrough uses a technical description, or a physical realization of it, and transforms it into scenarios of its use, while Cognitive Assessment starts by using the practical daily tasks of the user to prepare scenarios to test the IT-based system.

A character-based system can have excellent cognitive qualities, and


systems with a graphical interface can have terrible cognitive qualities, so one should get rid of a priori prejudices. What is important is whether the system and the way the user thinks work together – not whether the interface is esthetically pretty to look at. A person who knows the system can easily circumvent cognitive problems because one can learn how to navigate. Thus, thorough training can compensate for cognitive and ergonomic problems in IT-based systems.

Ordinary users and user organizations will typically experience and notice cognitive errors as operational errors with inadequacies in specific details or in the interaction between the IT system and the work-processes of the organization.

The method's systematic approach is based on the assumption that all screen actions bring discrete changes between states of the system.

Frame of Reference for Interpretation
The frame of reference is purely based on the experience of what will normally cause problems in terms of gaps in the understanding and breakdowns in actions.

Perils and Pitfalls
Development of user scenarios based on specifications or on an early prototype cannot become more complete and correct than the base upon which it has been developed. In other words, if this material is wanting, then the assessment will also be wanting, but this does not preclude the method from being quite a valuable (constructive) assessment tool during the early development phase.

Advice and Comments
The method is suited for use, for example, during a demo of an existing system, although it is easier if you get to the keyboard yourself. Alternatively, one can stipulate that the vendor or bidder goes through one or more actual scenarios.

References

Beuscart-Zéphir MC, Watbled L, Carpentier AM, Degroisse M, Alao O. A rapid usability assessment methodology to support the choice of clinical information systems: a case study. In: Kohane I, editor. Proc AMIA 2002 Symp on Biomedical Informatics: One Discipline; 2002


Nov; San Antonio, Texas; 2002. p. 46-50.

Horsky J, Kaufman DR, Oppenheim MI, Patel VL. A framework for analyzing the cognitive complexity of computer-assisted clinical ordering. J Biomed Inform 2003;36:4-22.

Kushniruk AW, Kaufman DR, Patel VL, Lévesque Y, Lottin P. Assessment of a computerized patient record system: a cognitive approach to evaluating medical technology. MD Comput 1996;13(5):406-15.

Patel VL, Kaufman DR, Arocha JA, Kushniruk AW. Bridging theory and practice: cognitive science and medical informatics. In: Greenes RA, Peterson HE, Protti DJ, editors. Medinfo'95. Proceedings of the Eighth World Congress on Medical Informatics; 1995 Jul; Vancouver, Canada. Edmonton: Healthcare Computing & Communications Canada Inc; 1995. p. 1278-82.

Supplementary Reading

Carroll JM, editor. Scenario-based design, envisioning work and technology in system development. New York: John Wiley & Sons, Inc.; 1995.

This book contains a number of articles written by system development professionals, so it can be somewhat heavy to read, but there is a lot of inspiration regarding scenarios. The different articles deal with various forms of scenarios, their use, as well as a discussion of advantages, disadvantages, and assumptions.

Huart J, Kolski C, Sagar M. Evaluation of multimedia applications using inspection methods: the Cognitive Walkthrough case. Interacting Comput 2004; 16:183-215.

An exhaustive case study that also reviews usability methods.

Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.

A thorough description of a number of usability inspection methods, including the walkthrough method.

http://jthom.best.vwh.net/usability/

Contains many method overviews with links and references. (Last visited 31.05.2005)


Delphi

Areas of Application
• (Qualitative) assessment of an effect – for instance, where the solution space is otherwise too big to handle
• Elucidation of a problem area – for instance, prior to strategic planning
• Exploration of development trends

Description
The Delphi method is a consensus method for the prediction of the future (Dalkey 1969), developed by the Rand Corporation for the American military in the 1950s.

One characteristic feature of the Delphi method is that a central team collaborates with and interrogates a panel of experts in an iterative way to formulate the experts' knowledge of a predefined topic. See, for instance, the references (Dalkey 1969; Linstone and Turoff 1975; Adler and Ziglio 1996; and Crisp et al. 1997). Being experts within a specific domain means that they have the capability to assess development trends and directions within that domain – in other words, they are able to extrapolate into the future. Another characteristic feature is that the central group remains neutral during the entire process and lets the experts' opinions and statements guide the outcome of the process in an iterative way.

The approach depends on the topic, as the basis for interrogation may be predefined (for instance, in some quantitative investigations) or may be the result of an open opening question (for instance, in certain qualitative investigations). An example of the latter is seen in (Brender et al. 2000):

1. Brainstorming phase, based on an open question of what is relevant to address the problem area with.

2. Evaluation phase, where the individual experts comment on (prioritize, elaborate, refine) each other's contributions from the first phase.

3. Feedback to the authors on their fellow experts' comments,


providing an opportunity for them to refine their original intention. Following this refinement, the topics are fixed.

4. Preparation of a questionnaire from the previous material.

5. Collection and analysis of the expert panel's rating of the individual topics (a minimal sketch of such an analysis follows the list).
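A minimal sketch (hypothetical ratings and threshold, not from the handbook) of how a core team might monitor consensus across the rating rounds, using the interquartile range of the panel's ratings per topic and stopping when it no longer changes:

```python
# Minimal sketch: interquartile range (IQR) of the panel's ratings as a
# crude consensus indicator per round. Data and threshold are hypothetical.
import statistics

def iqr(ratings):
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartile cut points
    return q3 - q1

# ratings per topic: one list of expert ratings (1-9 scale) per round
rounds = {
    "topic A": [[2, 5, 9, 7, 3], [4, 5, 7, 6, 4], [5, 5, 6, 6, 5]],
    "topic B": [[1, 9, 2, 8, 5], [2, 8, 3, 7, 5], [2, 8, 3, 7, 5]],
}

for topic, per_round in rounds.items():
    spreads = [iqr(r) for r in per_round]
    stable = abs(spreads[-1] - spreads[-2]) < 0.5  # results no longer change
    print(topic, "IQR per round:",
          [round(s, 2) for s in spreads], "| stable:", stable)
```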

Assumptions for Application
The method is of medium difficulty, primarily because it assumes experience and cautiousness with the synthesis and with the preparation of a questionnaire, including the perils and pitfalls associated with a Questionnaire approach (see this elsewhere).

It is an assumption for a reliable outcome that the core team remains neutral toward the material and in dealing with the outcome.

Perspectives
The professional perspective behind the design of this method is that a collection of experts is capable of inspiring and correcting each other to predict the future or the nature of something that none of them is able to judge accurately on their own. Cooperation makes them adjust and compensate for each other's weak points in an iterative process controlled by the core team, so that the end results become fairly accurate. Thereby, a topic may be exhaustively investigated, while inaccuracy, nonsense, hidden agendas, and hobby horses vanish in the iterative process. It is difficult to predict anything about the future, but experience shows a fair amount of success using the Delphi method.

Frame of Reference for Interpretation
(Not applicable)

Perils and Pitfalls
When the experts know each other, there is a risk that they become emotional or stick to the official view of the topic rather than responding according to their best convictions. Therefore, it is best to keep the contributions anonymous. This sometimes also extends to the composition of the expert panel, at least as long as the process is ongoing.

Advice and Comments
Depending on the need for precision in the outcome, one may continue iterating the last phases until the results no longer


change (i.e., have converged), indicating that the maximum level of accuracy and precision has been reached for that specific combination of experts.

References

Adler M, Ziglio E. Gazing into the oracle – the Delphi method and its application to social policy and public health. London: Jessica Kingsley Publishers; 1996.

Brender J, McNair P, Nohr C. Research needs and priorities in health informatics. Int J Med Inform 2000;58-9(1):257-89.

Crisp J, Pelletier D, Duffield C, Adams A, Nagy S. The Delphi method? Nurs Res 1997;46(2):116-8.

Dalkey NC. The Delphi method: an experimental study of group opinion – prepared for United States Air Force project. Santa Monica (CA): Rand; 1969.

Linstone HA, Turoff M. The Delphi method, techniques and applications. Reading, MA: Addison-Wesley Publishing Company; 1975.

Supplementary Reading

Kors JA, Sittig AC, van Bemmel JH. The Delphi method to validate diagnostic knowledge in computerized ECG interpretation. Methods Inf Med 1990;29:44-50.

A case study using the Delphi method for validating diagnostic performance.

O'Loughlin R, Kelly A. Equity in resource allocation in the Irish health service, a policy Delphi study. Health Policy 2004;67:271-80.

A case study using the Delphi method to explore policy issues in resource allocation.

Snyder-Halpern R. Indicators of organizational readiness for clinical information technology/systems innovation: a Delphi study. Int J Med Inform 2001;63:179-204. Erratum in Int J Med Inform 2002;65(3):243.

A case study using the Delphi method for a specific evaluation purpose.


Equity Implementation Model

Areas of Application
Examine users' reactions to the implementation of a new system, focusing on the impact of the changes such a system brings about for the users.

Description
The Equity Implementation Model is used to retrospectively investigate and understand user reactions to the implementation of an IT-based system, based on incidents in an organization. The focus of this method, which originates in social science, is the effect of the changes that such a system brings about (Lauer et al. 2000).

The method consists of three steps (Lauer et al. 2000):

1. Deals with the changes from the perspective of a user to identify possible stresses and benefits that may be affected by the new IT system, both of which may be positive or negative in a picture of the 'perceived benefit'.

2. Examines the fairness in relation to the employer in sharing the gains or losses brought about by the change, thereby comparing the perceived benefits of the two parties.

3. Compares changes perceived by individual users with those of other users or user groups.

Assumptions for Application
There is a need for insight into behavioral aspects and social science to avoid the pitfalls in studying humans.

Perspectives
Implementation of an IT-based solution implies organizational change. Many attempts at implementing IT-based systems in organizations have resulted in failures, of which only a fraction are on account of technical problems, and many researchers have demonstrated that critical issues in the implementation of such


systems reside with the soft human aspects of an organization.

The method provides a means for explanation of system implementation events, leading to an understanding of user resistance to or acceptance of the new technology. It is based on the perspective that there is no fundamental or irrational resistance to change (Lauer et al. 2000): Each change is evaluated as being favorable or unfavorable by each individual affected by it.

Further, the method emphasizes that it is important to pay attention to the fairness concerns of all users, a viewpoint that is highly culturally dependent.

Frame of Reference for Interpretation
(Not applicable)

Perils and Pitfalls
The sources of bias come from the method's origin in social science, and hence it is concerned with the pitfalls in observing and interrogating humans (see Part III in general).

Advice and Comments
Although the objective of Lauer et al. (2000) is that of research, it may certainly in some situations be useful for more down-to-earth investigations as well, particularly in cases of special user-satisfaction problems.

As the method in general deals with equity, it may be applied as a measuring instrument to provide the decision-making basis in relation to policy planning and priority setting, as well as for the examination of barriers and drivers in the implementation of change. Through its focus on equity, the method may also prospectively provide a means for pointing out other problem areas that may subsequently be addressed by preventive initiatives.

References

Lauer TW, Joshi K, Browdy T. Use of the Equity Implementation Model to review clinical system implementation effort, a case report. JAMIA 2000;7:91-102.


Field Study
(In the sense of observational studies)

Areas of Application
Observation of an organization to identify its practices and to elucidate mechanisms that control change.

Description
In short, observational studies are used to diagnose various conditions in an organization (Harrison 1994). This method is widely used in psychology, sociology, anthropology, and so on – that is, in professions that deal with different perspectives of human factors – to identify what goes on and how. The studies encompass all sorts of elements from social interaction and the influence of specialist cultures on work practices, through to work process analysis, as seen in the informatics part of the organizational spectrum observed. The studies can be transverse or in depth, exploring or verifying.

Assumptions for Application
This type of study normally requires professional psychological, sociological, or specialist knowledge of organizational theory.

Perspectives
It is an important aspect of field studies to understand that the reality of the actors is a determining frame for acting and maneuvering. Therefore, the outcome of the study depends on prior knowledge of the users' conditions and an understanding of their way of thinking in different contexts. The specialist culture is an acquired one.


One of the reasons why this type of study has to be performed by specialists is that the observer must be able to free himself from his perspectives and thereby from his prejudices, expectations, and assumptions, as well as from established principles and understanding. Any kind of preconception will influence the process of observation and introduce bias. Professionals in the healthcare sector, including IT users, are subconsciously trained to think in a specific way (a profession-oriented culture and politics) to such a degree that it is difficult for them to disregard their own way of thinking. Informatics specialists are trained to observe the movements of information units and processes of change in an organization, but not to observe (profession-oriented) cultural, social, and psychological phenomena, so even informatics specialists fall short.

There are two basic models behind field studies: Reductionist ('diffusion model') and holistic ('translation model').

The reductionistic understanding presupposes that things can be taken apart and individual items observed in isolation, after which they are put back together as an entirety where the conclusion equals the sum of the component items. It is also an assumption in this perspective that the results from one organization will work in the same way in other organizations and that the organization(s) can be more or less suited but can adapt to the technology. Specialists of informatics, the natural sciences, or technically trained people are trained to work with the reductionistic model, while there is an evolving tendency to try to incorporate holistic aspects when developing new methodologies in order to compensate for known deficiencies.

The holistic view is based on the understanding that an organization or a person comprises many different factors and units that work in complex unity and that any external influence will cause them to mutually affect each other. Specialists with a background in the humanities use a holistic view to a far greater extent than informatics specialists. However, they often have to draw very strict boundaries, as the totality otherwise becomes too large.

Frame of Reference for Interpretation
(Not applicable)


Perils and Pitfalls
Professionals from the social and behavioral sciences are well aware of the many serious pitfalls and take the organizational and behavioral circumstances as their methodological and methodical starting point. This is why field studies require a specialist background.

One must be aware that it takes time for a person from outside the domain to reach a stage of observation enabling him to pose the right questions to the right people. This is because, without prior knowledge of the domain, one cannot know anything about what should and shouldn't happen. Even though they are trained to observe, it takes time to get to the core of the matter.

Interview studies are often used to facilitate field studies, but as they are unable to elicit tacit knowledge, they embrace severe pitfalls, inasmuch as the users in an organization are unable to account for what or how they do things⁶ (see the review in Brender 1997a and 1999). Instead they have a tendency to describe prescriptive procedures in the way that they would use to tell a colleague being trained in the department (Beuscart-Zéphir et al. 1997; and Beuscart-Zéphir, personal communication).

The Hawthorne effect may be significant in studies of this kind: For psychological reasons, people tend to change behavior and performance when under observation (see Part III). This points to the need for addressing the complex dynamics of the wholeness and its mechanisms of change, rather than single and specific variables before and after the process of change.

Advice and Comments
This method is very time consuming.

Before you even start to consider undertaking a field study yourself, you need to read Baklien (2000), who examines a number of aspects of field studies particularly concerned with constructive assessment purposes.

⁶ This is a known phenomenon in the development of knowledge-based systems and has resulted in the establishment of a special profession, called knowledge engineering, with methods dedicated to the purpose. They still struggle to develop methods to elicit the users' knowledge of how they do things in real life and why. The focus is particularly aimed at expert knowledge in the area of diagnostics/therapy. But system analysts/informatics people struggle equally to develop efficient and reliable methods to elicit the entirety of relevant aspects in work processes (Brender 1999).


Svenningsen (2003) is a field study of an exploratory character and therefore has no element of constructive assessment. However, the study is highly timely in its subject, as it focuses on the clinical processes of change relating to the introduction of EHR into the Danish healthcare system, and it encompasses a study of the activities and accounts of the causal explanations in order to identify the patterns of change.

References

Baklien B. Evalueringsforskning for og om forvaltningen. In: Foss O, Mønnesland J, editors. Evaluering av offentlig virksomhet, metoder og vurderinger. Oslo: NIBR; 2000. Report No.: NIBRs PLUSS-SERIE 4-2000. p. 53-77. (in Norwegian)

Beuscart-Zéphir M-C. Personal communication 1996.

Beuscart-Zéphir MC, Brender J, Beuscart R, Ménager-Depriester I. Cognitive evaluation: how to assess the usability of information technology in healthcare. Comput Methods Programs Biomed 1997;54(1-2):19-28.

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Brender J. Methodology for constructive assessment of IT-based systems in an organisational context. Int J Med Inform 1999;56:67-86.

This is a shortened version of the previous reference and is more accessible with regard to this subject.

Harrison MI. Diagnosing organizations: methods, models, and processes. 2nd ed. Thousand Oaks: Sage Publications. Applied Social Research Methods Series 1994. vol. 8.

The book discusses the advantages and disadvantages of various methods, including field studies, for the analysis of a number of aspects within an organization.

Svenningsen S. Electronic patient records and medical practice – reorganization of roles, responsibilities, and risks [PhD thesis]. Copenhagen: Samfundslitteratur; 2003. Report No.: Ph.D.-Series 10.2003.

A thorough Danish case study of processes of change in a clinic upon implementation of EHR.


Supplementary Reading

Jorgensen DL. Participant observation, a methodology for human studies. Newbury Park: Sage Publications. Applied Social Research Methods Series 1989. vol. 15.

Describes what field studies can be used for and how one can observe people in an organization. However, the book avoids discussing the pitfalls.

Kuniavsky M. Observing the user experience, a practitioner's guide to user research. San Francisco: Morgan Kaufmann Publishers; 2003.

Chapter 8 of this book is dedicated to the method of Contextual Inquiry, a data-collection technique that studies a few carefully selected individuals in depth in order to arrive at an understanding of the work practice.


Focus Group Interview

Areas of Application
In principle this group interview method can be used for the same purposes as other Interview methods, but in practice it is most relevant for the following tasks (see, for instance, Halkier 2002 and Kuniavsky 2003):

• During the early analysis phases (the Explorative Phase) – for instance, when eliciting the attitudes, perceptions, and problems of social groups, or when a model solution is being established

• During operations – for instance, to identify a pattern in user attitudes to the system, norms, and behavior, or to identify problem areas in the system functionality or in operations in general

Description
The method is a variation on other interview methods, with individual interviews replaced by a group interview process; see also the general aspects under Interviews.

Focus Group Interviews can be carried out during workshops or other dedicated meeting activities, where one of the purposes of the introductory steps is to get the group dynamics working. Depending on the topic discussed, a skilled moderator can use a variety of data-capture techniques (such as mind-maps, Post-it notes on a whiteboard, or similar brainstorming techniques) to ensure that the written output is generated as part of the whole process.

Stewart and Shamdasani (1990) and Halkier (2002) meticulously investigate the various methodical issues of each step, from preparation, through completion of the interview, to transcription and analysis of the data. Halkier (2002) and Kuniavsky (2003) recommend that the whole procedure be videotaped, as it can be virtually impossible to take notes that are sufficiently thorough and accurate.

In contrast to interviews of individuals, where one is able to take firm control of the process while taking notes, group dynamics will invariably make the participants all speak eagerly at the same time.


This creates a lot of overlapping communication, and small, parallel groups that communicate informally may form from time to time. It therefore becomes very difficult for the moderator to take notes and capture the whole context while simultaneously moderating the process.

Assumptions for Application
The method requires an experienced moderator and facilitator (Halkier 2002; Kuniavsky 2003), as well as practice in transcribing and analyzing this type of data.

Perspectives
When you gather a number of individuals under informal conditions, you get the opportunity to start a group dialog or debate across the whole group. The advantages of this method over individual interviews are that the participants can mutually inspire as well as adjust and moderate each other during a brainstorming process, stimulating the creativity of the group and, even more importantly, kindling the individual's memory and perception.

During the process, a synthesis of new knowledge will frequently occur as a direct result of the interaction between the participants – knowledge that the interviewer could not possibly bring out alone, due to the lack of prior contextual knowledge of the subject (Halkier 2002). At times it is easier to glean certain types of information from a group, perhaps through group pressure, as it is precisely the social interaction that is the source of the data. For instance, members of a group may have insider information that the interviewer would not be able to draw from an individual. In this way, information is often revealed that would not be communicated during an individual interview. At the same time, one can iterate the process until the required or attainable level of precision is reached.

However, there are topics – culturally defined – that will be difficult to discuss in a group, just as there is a tendency to put forward the official version of a topic rather than the real one.

Frame of Reference for Interpretation
The method is primarily used to explore a subject. Therefore, studies based on this method normally do not have a frame of reference.


Perils and Pitfalls
The psychological factors are prominent, as in Interview methods in general (see the Interview section), but here the social factors must additionally be contended with. The interaction between participants can have a powerful impact on the reliability of the result, as seen, for instance, in the domination of key persons (or the opposite). In other words, the result depends on social control, the representativeness of stakeholders, the homogeneous or heterogeneous composition of the group, internal power structures (Stewart and Shamdasani 1990; Harrison 1994), and the confidentiality and sensitivity of topics or taboos.

This form of interview makes it difficult to get at the finer details or variances in individual practice, or at atypical understandings (Stewart and Shamdasani 1990; Halkier 2002), partly because the philosophy of the method is one of consensus seeking, and partly because it is difficult to achieve representativeness in the composition of the groups and the resulting group dynamics.

Advice and Comments

References

Halkier B. Fokusgrupper. Frederiksberg: Samfundslitteratur og Roskilde Universitetsforlag; 2002. (in Danish)

This is a very thorough handbook of methods dedicated to Focus Group Interviews, but it has the disadvantage that the error sources and pitfalls are only implicitly described.

Harrison MI. Diagnosing organizations: methods, models, and processes. 2nd ed. Thousand Oaks: Sage Publications. Applied Social Research Methods Series 1994. vol. 8.

The book deals with the analysis of a number of aspects of an organization and describes the advantages and disadvantages of different methods.

Kuniavsky M. Observing the user experience, a practitioner's guide to user research. San Francisco: Morgan Kaufmann Publishers; 2003.

Chapter 9 of this book is dedicated to Focus Group Interviews; it outlines four different types thereof and describes what the method is and is not suited for in Web applications.


Stewart DW, Shamdasani PN. Focus groups: theory and practice. Newbury Park: Sage Publications. Applied Social Research Methods Series 1990. vol. 20.

Describes a number of aspects of group dynamics, strengths, weaknesses, and the risk of bias in applying the method, all of which are very important when using this method.

Supplementary Reading

Robson C. Real world research, a resource for social scientists and practitioner-researchers. 2nd ed. Oxford: Blackwell Publishers Inc; 2002. p. 269-91.

This book gives a good description of different types of interviews, with details and ideas regarding their use, including the advantages and disadvantages of each.

http://www.icbl.hw.ac.uk/ltdi/cookbook/focus_groups/index.html. The home page gives additional advice and guidelines on how to carry out the various phases of a Focus Group Interview. (Last visited 31.05.2005.)

http://jthom.best.vwh.net/usability. The home page contains a lot of method overviews, links, and references. Two of the better links from this page that are relevant for Focus Groups are those of George Silverman and Thomas Greenbaum (both last visited on 31.05.2005).


Functionality Assessment

Areas of Application
• Validation of objectives fulfillment (realization of objectives) – as opposed to a pure verification of the requirements specification – that is, assessment of the degree of compliance between the desired effect and the solution realized
• Impact Assessment (also called effect assessment)
• Identification of problems in the relation between the work procedures and the functional solution of the IT system

The method will also capture severe ergonomic and cognitive problems but is not dedicated to capturing details of this type.

Description
The method is only as objective as its user. It can include both qualitative and quantitative elements but is best suited to qualitative studies.

An overview of the methodological procedure is illustrated in the figure to the left (reprinted from Brender 1989 and 1997a). The 'normative system' and the 'real system' are described first (easiest when using the same modeling and descriptive techniques). The latter comprises the entire IT-based solution (i.e., the IT system plus the surrounding work procedures) as it works in practice, while the normative system is the planned IT-based solution. If the actual IT system of the normative system is not described, it can be replaced by the planned structure, work processes, and activities of the organization, and new guidelines, for example.


In step 3, all divergences between the two systems are described, indicating which of them are considered key divergences. Step 4 consists of an analysis of the causal connections. Step 5 is normally used (for development projects and constructive assessment activities) to design the new solution.

The principle behind the analysis of the causal connections is illustrated in the example given in the figure to the right: each symptom is unwound from the end – from the description of a symptom, through possible causes, right back to the anticipated primary source of the problem – based on the philosophy of Leavitt's model of organizational change (see under Perspectives below).
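To make the bookkeeping of steps 3 and 4 concrete: when the normative and real systems are described with the same technique, listing divergences amounts to comparing the two descriptions element by element, after which each key divergence is traced backward through candidate causes. The following minimal sketch illustrates this, using hypothetical process descriptions and causal chains; it is an illustration of the principle only, not the authors' notation.

```python
# Minimal sketch (hypothetical data) of the Functionality Assessment
# bookkeeping: the normative and real systems are described with the
# same technique, divergences are listed (step 3), and each symptom is
# unwound toward an anticipated primary source (step 4).

normative = {  # planned IT-based solution: process -> prescribed procedure
    "order entry": "physician enters orders directly in the system",
    "result review": "results are signed off on screen",
}
real = {       # observed practice, e.g., elicited through field observation
    "order entry": "secretary transcribes paper orders into the system",
    "result review": "results are signed off on screen",
}

# Step 3: describe all divergences between the two systems.
divergences = {
    process: (normative[process], real[process])
    for process in normative
    if normative[process] != real[process]
}

# Step 4: unwind each symptom from the end toward its primary source.
# These chains are illustrative placeholders, not a prescribed format.
causal_chains = {
    "order entry": [
        "symptom: a transcription step persists",
        "possible cause: terminals unavailable at the point of care",
        "anticipated primary source: incomplete hardware rollout",
    ],
}

for process, (planned, observed) in divergences.items():
    print(f"{process}: planned={planned!r}, observed={observed!r}")
    for step in causal_chains.get(process, []):
        print("  ", step)
```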

Assumptions for Application
The method is very work intensive when the aim is a complete picture of how the work processes of an organization function with the IT system as an overall solution.

The degree of difficulty depends largely on the diagramming technique used (see examples under Work Process Analysis). However, it is most common to use the same methods of description as those used for the analysis of the organization in earlier phases of the project, thereby also making the method accessible to a broader group of users.

The ability to be objective, persistent, and meticulous is a requirement for the causality analysis, as is in-depth knowledge of the organization, in order to limit the solution space when propagating from cause to symptom in the way that best reflects the organization.

Perspectives
During the period after taking the IT system into operation – no matter how thorough you have been during planning – a number of adaptations of the work processes, and probably also of the IT system, will take place. At the same time, both the organization and the users will change; they will learn something new, giving rise to new possibilities and/or challenges. For this reason, the connection between the IT system and the work processes around it will always be sensitive to time.


Thus, the method has been designed (within certain limits) to make allowances for the development of the system and the changes of work procedures as a function of time when interpreting the result.

The method is based on Leavitt's model of organizational change (Leavitt 1970) and thus on the philosophy that any change in one element of an organization will propagate to the other elements in a chain reaction until a new steady state is reached. If problems arise within an organization, compensating reactions will spread to other parts of the organization, implying that the problems will be visible in terms of these changes.

At the same time, the method is based on a generalization of the philosophy of Mumford's participatory approach to system design from the late 1970s (Mumford et al. 1978; Mumford and Henshall 1979). The principle is to start by identifying all variances between the IT system's or the organization's desired processes (i.e., the normative system) and the way it actually works. Mumford's system development method then focuses on finding a solution to one or more of the variances identified.

Frame of Reference for Interpretation
The method has a built-in frame of reference – namely, the description of the normative system (see above). The original method – analysis of the realization of objectives of a specific IT system – used a detailed Requirements Specification and a System Specification, together with the user manual, minutes of meetings, and so on, as its frame of reference (the normative system) (Brender 1989), while the analysis carried out in (Beuscart et al. 1994) is based on the same method but with the needs and expectations as the frame of reference.

The difference between these two applications (see Brender 1997a) illustrates that, should the normative system not be explicitly prepared prior to the introduction of the system, it is possible to 'make do' with the expectations of the system as a frame of reference – as, for instance, in descriptions of new work processes developed in connection with the introduction of the system.

Perils and Pitfalls
The causality analysis is the weakest point of the method, because the interpretation, and thereby the final conclusion, is completely dependent on this analysis.


It can be difficult to uncover the skeletons in an organization and to discuss the problems without hurting somebody's feelings. Alternatively, someone may try to cover up and hide the problems.

On the other hand, the method is less dependent on whether all divergences are included, or on whether the right divergences are defined as key divergences, because the actual causality will often manifest itself in more than one symptom.

Advice and Comments
The method is designed to address measures according to Leavitt's model of organizational change. However, a framework addressing 'structure, process, and effect', or any other similar framework structure, can be used in the same way.

The causal analysis of this method may be supported by the Root Causes Analysis method (see separate section).

References

Brender J. Quality assurance and validation of large information systems – as viewed from the user perspective [Master thesis, computer science]. Copenhagen: Copenhagen University; 1989. Report No.: 89-1-22.

A résumé of the method and a case study can be found in (Brender 1997a). If this does not suffice, the reference can be obtained from the author.

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Beuscart R, Bossard B, Brender J, McNair P, Talmon J, Nykänen P, Demeester M. Methodology for assessment of the integration process. Lille: ISAR (A2052) Project; 1994. Report No.: Deliverable 3.

This report is freely available, and contact information can be obtained from the undersigned author.

Leavitt HJ. Applied organizational change in industry: structural, technological and humanistic approaches. In: March JG, editor. Handbook of organizations. Chicago: Rand McNally & Co; 1970.

Mumford E, Land F, Hawgood J. A participative approach to the design of computer systems. Impact Sci Soc 1978;28(3):235-53.

Mumford E, Henshall D. A participative approach to computer systems design, a case study of the introduction of a new computer system. London: Associated Business Press; 1979. Appendix C.


Future Workshop

Areas of Application
Evaluation and analysis of an (existing) situation with a view to identifying focus areas for change – that is, aiming at designing future practice.

Description
The method resembles the Logical Framework Approach (see separate section) and uses the same terminology. However, the Future Workshop has a different focus: it concentrates on situation analysis, supplemented by a follow-up phase and a far more thorough interpretation of how to carry out the individual phases. Another difference from the Logical Framework Approach is that the Future Workshop need not focus on just one problem during the implementation and process of change; instead it uses a picture of the organization and its problems as a whole.

The method is carried out through a preparatory phase, a workshop, and a follow-up phase (Jungk and Müllert 1984; Müller 2002). The preparatory phase consists of a simple stakeholder analysis and of the participants getting to know each other. The purpose of the follow-up phase is an ongoing analysis of whether the right track is being pursued and whether it is being pursued in a satisfactory way. The workshop itself has the following three phases:

1. The critique phase, with the purpose of identifying and discussing existing problems

2. The fantasy phase, with subgroups discussing the problems identified and submitting their visions of the future (thus, the result is a catalogue of visions)

3. The realization phase, where the means, resources, and opportunities to realize the visions are discussed

Assumptions for Application
Depending on the degree of accuracy and precision needed, the person(s) in charge of the process must have sufficient experience of group dynamics – in other words, the method requires an experienced moderator and facilitator familiar with the transcription and analysis of this type of data. An experienced moderator may use data-capture techniques (such as mind-maps, Post-it notes on a whiteboard, or similar brainstorming techniques) to ensure that the written output is generated as part of the whole process.


However, in some situations such techniques might inhibit the process, resulting in loss of information.

Perspectives
The Future Workshop as described is relatively neutral toward political, organizational, and cultural conditions and bonds. It neither dictates nor prevents such bonds but can adapt and operate under the conditions of the organization.

There are, however, aspects of the method that make it more suitable in certain cultures and forms of organization than in others. For instance, the proposal of a broad representation of staff as participants in the project is dependent on culture (at the national and organizational levels). However, there is nothing to stop you from making minor adaptations at some points, as long as you are aware of the magnitude of the potential consequences and their further implications.

Frame of Reference for Interpretation
There is no frame of reference built into the method itself; the reference for whether the result is valid and applicable lies instead in the premises under which the result is to be used. In other words, the frame of reference has to be the formulation of the organization's (or stakeholder groups') vision, mission, and strategic initiatives, as well as other constraints within the organization and possible declared intentions for different aspects of the organization.

Perils and Pitfalls
It is a substantial pitfall in some cultures that the method description promotes broad staff participation in identifying the visions and objectives of the organization, because there is no prior knowledge of whether management (or staff) is ready to take that step. It is the step from realizing the problem to establishing solutions and monitoring objectives that may become the point of issue. It is therefore important that the leader (facilitator) of the workshop process has complete insight into, and works in accordance with, management intentions and the principles of the organization or, alternatively, that management accepts that the process of change in the organization also includes this stipulation.

Advice and Comments
The follow-up phase does not include explicit risk monitoring in the same neat way as the Logical Framework Approach does, but there is nothing to stop you from incorporating it.

Analogous methods that might provide further inspiration are 'Future Search' and 'Cafe-Seminar' (see http://www.futuresearch.net and www.worldcafe.dk, respectively; both last visited 31.05.2005).

References

Jungk R, Müllert N. Håndbog i fremtidsværksteder. Viborg: Politisk Revy; 1984. (in Danish)

One of the early references for this method. It encompasses a thorough investigation of the method and a case study.

Müller J. Lecture note on future workshop, design and implementation. Aalborg: Dept. of Development and Planning, Aalborg University; 2002 (a copy can be obtained from the department).


Grounded Theory

Areas of Application
Grounded Theory is a supportive analytical method for data-acquisition methods that generate textual data – for example, some open Questionnaire methods and Interviews (individual as well as group interviews).

Description
Grounded Theory aims at identifying and explaining phenomena in a social context through an iterative and comparative process (Cronholm 2002).

There are four steps (illustrated schematically in the sketch below):

1. Open coding, where single elements from the data are correlated to and named as concepts with defined characteristics, including identification of tentative categories of theoretical importance

2. Axial coding, where the named concepts are definitively defined, patterns among them are identified, and the concepts are organized into categories according to their characteristics and internal relationships, including potential causal relationships

3. Selective coding, which works toward arranging the concepts according to the relationships between them, including identification of mechanisms, processes, or circumstances that may bring about the concept characteristics identified; finally, the central category to which everything else relates is identified

4. Theoretical coding, which provides a hypothetical model (generalized relationships) based on the previous coding; the model must be able to abstract and explain the phenomena observed and their interrelationships
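As a purely schematic illustration of how the output of the four coding steps builds up, the sketch below walks two hypothetical interview fragments through open, axial, selective, and theoretical coding. The fragments, codes, and categories are invented for illustration; the method itself prescribes no particular data format.

```python
# Schematic sketch (invented data) of the four Grounded Theory coding
# steps applied to short interview fragments.

fragments = [
    "I re-check the printout because the screen is often behind",
    "we phone the lab directly when results are urgent",
]

# 1. Open coding: name concepts found in single data elements.
open_codes = {
    fragments[0]: ["distrust of system", "double-checking"],
    fragments[1]: ["workaround", "urgency handling"],
}

# 2. Axial coding: organize the named concepts into categories
#    according to characteristics and internal relationships.
categories = {
    "coping strategies": ["double-checking", "workaround"],
    "perceived system shortcomings": ["distrust of system", "urgency handling"],
}

# 3. Selective coding: identify the central category to which
#    everything else relates.
central_category = "coping strategies"

# 4. Theoretical coding: a hypothetical model of generalized
#    relationships among the categories.
model = [("perceived system shortcomings", "give rise to", central_category)]

for cause, relation, effect in model:
    print(cause, "->", relation, "->", effect)
```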


Assumptions for Application
The method is considered difficult and requires special qualifications. Normally it should only be undertaken by experienced professionals.

Perspectives
The method is subjective, and it is the implementer's perceptions and evaluations that influence the outcome of the analysis. The subjectivity is somewhat compensated for by the fact that the method is iterative, inasmuch as the outcome – provided you have hold of the right end of the stick – starts out resembling chaos but converges toward a (relative) simplicity.

Frame of Reference for Interpretation
The method does not use any frame of reference.

Perils and Pitfalls
Lack of experience in the use of the method, or very large amounts of data, can easily result in an imprecise or inaccurate outcome.

It is necessary to be aware that the data-acquisition method itself holds a pattern, precisely because an interview, a questionnaire, or a literature search, for example, is organized according to one's prior knowledge and expectations. One must therefore be careful not simply to restore this pattern in the outcome, as this creates circularity.

A bias – one that clearly must be balanced against the advantage of having prior knowledge of the application domain – lies in the risk of categorizing according to known patterns, prior experiences or attitudes, or experience reported in the literature (Goldkuhl and Cronholm 2003). This is because experiences and attitudes (often subconsciously) influence one's evaluation. Thus, preconception (though not necessarily prior knowledge) is a dangerous cocktail when executing this method, also called 'lack of neutrality' (Edwards et al. 2002). The danger is explained as the result of the analysis being driven by the user.

Another bias, as pointed out by Edwards et al. (2002), is 'personal bias' (closely related to the 'hypothesis fixation' described in Part III). This is a risk when the researcher works on verification of a working hypothesis during observation of an actual case.


Advice and Comments
The method is difficult to apply with a reliable and reproducible result and is also rather work intensive. But if you do need textual analysis, there are not many alternatives.

Edwards et al. (2002) illustrate and discuss how triangulation can be used to avoid strong elements of subjectivity during categorization. They make extensive use of feedback from the actors of the case under study to ascertain whether the categorization is meaningful to them. In this way they achieve a good correspondence between conceptions, attributes, and categories.

Similarly, Allan (2003) and Balint (2003) give a number of ideas on how to safely manage the methodological challenges, including the coding.

References

Allan G. The use of Grounded Theory as a research method: warts & all. In: Remenyi D, Brown A, editors. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2003 Mar; Reading, UK. Reading: MCIL, 2003. p. 9-19.

Goes through experience of, and offers advice on, coding with Grounded Theory.

Balint S. Grounded research methodology – a users' view. In: Remenyi D, Brown A, editors. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2003 Mar; Reading, UK. Reading: MCIL; 2003. p. 35-42.

Discusses how, as a user, one can ensure validity, quality, and stringency during its use – in other words, the methodological challenges.

Cronholm S. Grounded Theory in use – a review of experiences. In: Remenyi D, editor. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2002 Apr; Reading, UK. Reading: MCIL; 2002. p. 93-100.

Elaborates on the method and discusses experiences (strengths and weaknesses) of the various steps of its application, based on an analysis of the application of the method by a number of PhD students.

Edwards M, McConnell R, Thorn K. Constructing reality through Grounded Theory: a sport management study. In: Remenyi D, editor. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2002 Apr; Reading, UK. Reading: MCIL; 2002. p. 119-27.

Goes through a case and discusses the weaknesses of the method.

Goldkuhl G, Cronholm S. Multi-Grounded Theory – adding theoretical grounding to Grounded Theory. In: Remenyi D, Brown A, editors. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2003 Mar; Reading, UK. Reading: MCIL; 2003. p. 177-86.

Discusses certain strengths and weaknesses of the method and suggests an alternative procedure for applying the method at its weakest point.

Supplementary Reading
The proceedings of the European Conference on Research Methodology for Business and Management Studies (an annual event) can generally be recommended; they often contain good references with practical experiences of the method.

Ericsson KA, Simon HA. Protocol analysis, verbal reports as data. Cambridge (MA): The MIT Press; 1984.

This book provides a thorough review of the literature on approaches and problems, in general terms, in the elicitation of information about cognitive processes from verbal data.

Esteves J, Ramos I, Carvalho J. Use of Grounded Theory in information systems area: an explorative study. In: Remenyi D, editor. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2002 Apr; Reading, UK. Reading: MCIL; 2002. p. 129-36.

Discusses a number of recommendations and critical success factors for use with this method.

Jorgensen DL. Participant observation, a methodology for human studies. Newbury Park: Sage Publications. Applied Social Research Methods Series 1989. vol. 15.

Describes a number of issues with regard to observation studies and what they can be used for, including how to analyze data from observation studies with the Grounded Theory method. Unfortunately, however, the book avoids discussing the pitfalls.


Heuristic Assessment

Areas of Application
This inspection-based method may be used when no other realizable possibilities exist – for instance, if:

• The organization does not have the necessary time or expertise
• There are no formalized methods that can be applied by users
• There is not yet anything tangible to assess

Description
Basically, this method consists of summoning experts (3-5 are recommended) from the area in question (usually the assessment of usability) and letting them make a statement – one based on experience but delivered in a formalized way. Kushniruk and Patel (2004) and Sutcliffe and Gault (2004) propose approaches and a number of heuristics.

In principle this method can be used for nearly everything, but in practice it is most commonly used for the assessment of user interfaces, where few methods exist to assist the ordinary user in carrying out his own assessment on a formal basis. The website http://jthom.best.vwh.net/usability indicates that this method can be particularly useful for the assessment of user interfaces, precisely because this type of assessment can be difficult for the user organization itself to carry out. It can therefore be a really good idea to call on one or more external experts and ask their (unreserved) opinions.
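In practice, each expert typically walks through the system and rates violations of each heuristic, after which the individual statements are synthesized into one overview. The sketch below shows one plausible way of tallying such ratings; the heuristic names, the panel of three experts, and the 0-4 severity scale are illustrative assumptions, since the method description itself prescribes no particular scoring scheme.

```python
# Minimal sketch of synthesizing expert statements from a heuristic
# assessment. The heuristics, the three-expert panel, and the 0-4
# severity scale are illustrative assumptions, not prescribed by the
# method description.
from statistics import mean

ratings = {  # heuristic -> severity per expert (0 = no problem, 4 = severe)
    "visibility of system status": [3, 4, 3],
    "match with clinical workflow": [2, 1, 2],
    "error prevention": [4, 3, 4],
}

# Rank the problem areas by mean severity across the experts.
for heuristic, scores in sorted(
    ratings.items(), key=lambda item: mean(item[1]), reverse=True
):
    print(f"{heuristic}: mean severity {mean(scores):.1f}, ratings {scores}")
```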


Assumptions for Application
It is normally quite simple to summon experts to solve a problem. The difficulty lies in finding experts who are both professionals in the problem area and who also have an understanding of the area of application.

If you yourself have to synthesize the experts' comments into an overall conclusion afterward – for decision-making purposes, for instance – then knowledge of the terminology, as well as the ability and experience to assess the consequences of the observations, must be present.

Perspectives

Frame of Reference for Interpretation
(Not applicable)

Perils and Pitfalls

Advice and Comments
Although the literature states that users normally do not take part in this method, Beuscart-Zéphir et al. (2002) nevertheless show that users themselves can act as experts for certain types of IT system assessment, provided that there is suitable guidance and supervision.

References

http://jthom.best.vwh.net/usability/. This website contains a number of descriptions and links with relevant references (last visited 31.05.2005).

Beuscart-Zéphir MC, Watbled L, Carpentier AM, Degroisse M, Alao O. A rapid usability assessment methodology to support the choice of clinical information systems: a case study. In: Kohane I, editor. Proc AMIA 2002 Symp on Biomedical Informatics: One Discipline; 2002 Nov; San Antonio, Texas; 2002. p. 46-50.

Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.


A thorough description of a number of usability inspection methods, including the heuristic evaluation method. A number of heuristics are presented.

Sutcliffe A, Gault B. Heuristic evaluation of virtual reality applications. Interacting Comput 2004;16(4):831-49.

Supplementary Reading

Graham MJ, Kubose TK, Jordan D, Zhang J, Johnson TR, Patel VL. Heuristic evaluation of infusion pumps: implications for patient safety in intensive care units. Int J Med Inform 2004;73:771-9.

A case study applying heuristic assessment for the evaluation of usability aspects to uncover design and interface deficiencies. The study was performed by a combination of specialists and a healthcare professional without prior experience of the method.


Impact Assessment (Also called Effect Assessment)

Areas of Application
Measurement of the effect – that is, the consequence or impact in its broadest sense, from the degree of realization of the objective to the assessment of side effects – of an IT-based solution, with or without the original objective as a frame of reference. Hence, it includes not only the beneficial effects but also potential adverse effects.

Description
Within the social sector, the practice has been to assess the effect of a given initiative, legislation, or project with the (political) objective as a frame of reference, in order to obtain a measurement for assessing to what extent the objectives have been realized (also denoted 'objectives fulfillment'). It is worth noting, however, that in this sector goal-free evaluations are also gaining ground (Krogstrup 2003) – that is, the type of explorative studies aimed at reporting descriptively on a situation as seen from one or more perspectives. A goal-free evaluation has a nearly limitless solution space when the concept is interpreted in its literal meaning, because it entails measurement of the impact with all its secondary effects.

Assessment of the effect of health informatics systems can be carried out in the same way as for projects and initiatives within the social sector – that is, either as an assessment of objectives fulfillment or as an exploratory study. The indicators would of course have to be identified and designed for the actual purpose.

As the solution space is very large in a goal-free evaluation, you need to start by finding out what you really need to know about the effect and what the result is going to be used for, in order to limit the study. For instance, is it an assessment of the gains, such as the effect on the level of service or on quality? Consequences for modes of cooperation or social interaction? Consequences for the organizational structure or culture? Legal consequences or cases derived from the system? User satisfaction? And so on. The question then is whether the need is for qualitative and/or quantitative data or information; one does not necessarily exclude the other.

Depending on what you wish to know about the effect of an IT-based solution, it makes a huge difference to how you proceed. Please see the list of references with regard to additional descriptions of procedures.

Assumptions for Application
Obviously, a measurement of objectives fulfillment assumes that a strategic objective has been defined and can be made operational in terms of relevant and realistic measures – that is, that formulas (metrics) can be established for their measurement.
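As a concrete illustration of making an objective operational: a strategic objective can be broken down into indicators, each with a baseline value, a target value, and a measured after-value, so that the degree of fulfillment becomes the achieved change relative to the intended change. The sketch below shows that arithmetic for hypothetical indicators; both the indicators and the formula are illustrative assumptions, not something prescribed by the method.

```python
# Minimal sketch (hypothetical indicators and formula) of an
# objectives-fulfillment metric: fulfillment = achieved change
# relative to intended change, per indicator.

indicators = [
    # (name, baseline, target, measured after-value)
    ("report turnaround time (hours)", 48.0, 24.0, 30.0),
    ("missing records per month", 40.0, 10.0, 16.0),
]

for name, baseline, target, after in indicators:
    intended = target - baseline       # the planned change
    achieved = after - baseline        # the observed change
    fulfillment = achieved / intended  # 1.0 = objective fully realized
    print(f"{name}: {fulfillment:.0%} of the intended change realized")
```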

Perspectives
Few people realize the wide range of consequences an IT-based solution can have, even beyond the organization normally considered the user of the system. On implementation of a (new) IT system, it is nearly always necessary to make compromises (a bit of give and take). This is not necessarily bad: over time, many organizations develop very specific ways of doing things, in a never-ending number of variations, that often cannot be handled by the IT system. Some form of rationalization will therefore typically result from the implementation of an IT system. This rationalization will have an impact on something or somebody, and there is therefore a risk that the changes within the user organization will reach far and wide into the surrounding organizations.

Once the system is introduced into daily operation, the organization keeps on changing – initially as it gets accustomed and adjusts to the system. One danger (and a pitfall) is to start measuring the effect too soon after daily operation has started, before a degree of stability has been achieved. On the other hand, after a certain period of operation, staff will start using their creativity, and new ways of doing things will develop spontaneously – appropriate as well as inappropriate ones, intentional as well as unintentional ones (as shown in van Gennip and Bakker 1995 and in Part III, Section 11.1.6.9). This phenomenon simply reflects the development pattern from novice to expert: you start by following the prescribed way of doing things, but while gaining experience at a proficient level, you get to know the shortcuts (Dreyfus 1997). Compare, for instance, the note-taking in patient records by a newly graduated physician with that of a consultant. This phenomenon may make comparisons in controlled studies and in before-and-after studies rather difficult.

Frame of Reference for Interpretation
The strategic objective may influence all aspects of an organization: from financial matters (in part) and matters of responsibility and competence, through quality objectives for patient care or for the overall service, to the number of missing records or patient and staff satisfaction. Whether there is a need for before-values (baseline data) depends on the individual study.

IT systems that do not have a written, defined strategic objective may instead have an objective defined at a lower, tactical level (for instance, in terms of the interim objectives that are expected to lead to the overall strategic objective); this may perfectly well replace the strategic objective as a frame of reference, or the two can form the frame of reference together.

The frame of reference must therefore be aligned with the real objective and with the conditions of the actual study. In order to describe the conditions adequately, the objective will usually need to be covered by a whole range of indicators (measures).

Perils and Pitfalls
A consequence of the extremely dynamic situations occurring in connection with the introduction of an IT-based system can be the loss of control over what some of the effect indicators actually signify. Before-and-after assessments, such as those recommended in (Nytteværdi 2002), have built-in pitfalls with regard to the validity of the frame of reference: one must be sure that the concepts used are the same before and after if they are to be directly measured and compared. It is often necessary to use different questionnaires for the before and after situations, as the work processes and many other aspects differ, thus making a direct comparison difficult.

It is also difficult to separate the effects of different variables. For instance, what is the effect of having done anything at all? Simply focusing on an organization over a period of time, asking lots of questions about what is done and how, has a tremendous effect in itself. How does one separate the effect arising purely from the fact that something is happening from the real (intended) effect of the IT system or the IT solution? If the effect is caused by the former, there is a strong risk that, given time, it will to a certain degree revert toward the starting point.


Advice and Comments
See also the method Functionality Assessment, which has been specifically designed with the evaluation of the fulfillment of the objective of an IT-based solution in mind.

The approach in (Clements et al. 2002) of a Utility Tree, with a hierarchical breakdown from overall interest areas to specific measures, might be valuable for narrowing the focus of an Impact Assessment.

References

Clements P, Kazman R, Klein M. Evaluating software architectures, methods and case studies. Boston: Addison-Wesley; 2002.

Dreyfus HL. Intuitive, deliberative and calculative models of expert performance. In: Zsambok CE, Klein G, editors. Naturalistic Decision Making. Mahwah: Lawrence Erlbaum Associates, Publishers; 1997. p. 17-28.

Krogstrup HK. Evalueringsmodeller. Århus: Systime; 2003. (in Danish)

This little instructive book gives a good overview of the applications and limitations of a couple of well-known assessment models. Although it originates in the social sector, it is inspiring with regard to Impact Assessment and the assessment of performance.

Nytteværdi af EPJ, MTV-baseret metode til måling af nytteværdien af elektronisk journal. Roskilde: Amtssygehuset Roskilde, DSI og KMD Dialog; 2002. (in Danish)

This little book gives an excellent and quick overview of how to carry out an actual Impact Assessment (as in the usefulness) of an EPJ, including quantitative indicators, questionnaires, and the procedure.

van Gennip EM, Bakker AR: Assessment of effects and costs of information systems. Int J Biomed Comput 1995;39:67-72.

A valuable case study to learn from.

Supplementary Reading

Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, Boyle D, Leape L. The impact of computerized physician order entry on medication error prevention. JAMIA 1999;6(4):313-21.

A case study using prospective time studies to assess the effect of the introduction of an IT-based solution on a given activity.

Brinkerhoff RO, Dressler DE. Productivity measurement: a guide for managers and evaluators. Newbury Park: Sage Publications. Applied Social Research Methods Series 1990. vol. 19.

Measurement of productivity as a function of input and output variables (including the identification of measures) is extremely important in many studies of objectives fulfillment and assessments of effect. This book is useful as inspiration for that purpose.

Collinson M. Northern province HIS evaluation, defining an evaluation framework – workshop 24 July 1998. (Available from: http://www.sghms.ac.uk/depts./phsdaceu/safrica2.htm. Last visited 31.05.2005.)

The report illustrates a fairly thorough and effective way in which to get relevant measures.

Friedman CP, Abbas UL. Is medical informatics a mature science? a review of measurement practice in outcome studies of clinical systems. Int J Med Inform 2003;69:261-72.

A thorough and critical review of studies that measure different types of effect. It includes a very useful list of earlier studies measuring effects within the healthcare sector.

Garrido T, Jamieson L, Zhou Y, Wiesenthal A, Liang L. Effect of electronic health records in ambulatory care: retrospective, serial, cross sectional study. BMJ 2005;330:581-5.

A fairly exhaustive, retrospective case study applying administrative data to assess usage and quality of care before and after implementation of an EHR.

Hagen TP. Demokrati eller effektivitet: hva skal vi evaluere? In: Foss O, Mønnesland J, editors. Evaluering av offentlig virksomhet, metoder og vurderinger. Oslo: NIBR; 2000. Report No.: NIBRs PLUSS-SERIE 4-2000. p. 79-110. (in Norwegian)

Discusses a number of effectiveness measures, including that of 'through whose eyes is the organization being observed'.

Hailey D, Jacobs P, Simpson J, Doze S. An assessment framework for telemedicine applications. J Telemed Telecare 1999;5:162-70.

This article outlines a list of effect measures that can be used universally, even though the design was originally made for telemedicine purposes.

Kimberly JR, Minvielle E. The quality imperative, measurement and management of quality in healthcare. London: Imperial College Press; 2000.

This anthology on quality in the healthcare sector covers many methods and problems concerned with quality measurements, and many of its chapters are inspirational with regard to effect measurement.

Milholland DK. Information systems in critical care: a measure of their effectiveness. In: Greenes RA, Peterson HE, Protti DJ, editors. Medinfo'95. Proceedings of the Eighth World Congress on Medical Informatics; 1995 Jul; Vancouver, Canada. Edmonton: Healthcare Computing & Communications Canada Inc; 1995. p. 1068-70.

Development and verification of a tool to measure the effect in terms of fulfillment of the objective of efficiency. Although it is only a proceedings paper, it includes valuable pointers to useful statistical tools for verifying the metrics and the results of the analysis.

Mitchell E, Sullivan F. A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97. BMJ 2001;322:279-82.

One of the very few systematic reviews of the effect of IT within the primary healthcare sector. It supplements (Sullivan and Mitchell 1995) and is particularly valuable to include, as it is critical and encompasses many assessments of effect and references to good case studies of this particular type of effect measurement. The authors do this without automatically setting the non-RCT studies apart; instead they create a quality score applying a Delphi method.

Sullivan F, Mitchell E. Has general practitioner computing made a difference to patient care? a systematic review of published reports. BMJ 1995;311:848-52.

A review containing references to numerous case stories about the effect of IT on general practice. The article addresses the best studies of patient outcome effect, effect on the consultation process, and clinical performance, respectively.

Vabo SI. Kritiske faktorer for evalueringer av kommunale reformer. In: Foss O, Monnesland J, editors. Evaluering av offentlig virksomhet, metoder og vurderinger. Oslo: NIBR; 2000. Report No.: NIBRs PLUSS-SERIE 4-2000. p. 139-82. (in Norwegian)

Discusses specific indicators for the measurement of attitudes and behavior.

van der Loo RP, van Gennip EMSJ. Evaluation of personnel savings through PACS: a modelling approach. Int J Biomed Comput 1992;30:235-41.

Outlines a diagramming and modeling method where all divergences between before-and-after descriptions of work processes are used as a baseline for a quantitative assessment of the effect of a PACS system on staff resources.

Stavri PZ, Ash JS. Does failure breed success: narrative analysis of stories about computerized provider order entry. Int J Med Inform 2003;70:9-15.

Uses a narrative analysis approach to retrospectively elicit aspects of success and failure of an IT-based solution. This approach might be useful for narrowing the scope of an impact assessment through the identification of areas of interest.

Interview (Nonstandardized interviews)

Areas of Application
This method is frequently used for qualitative studies of subjective as well as objective circumstances. Interviews are particularly suited for the elucidation of individuals' opinions, attitudes, and perceptions regarding phenomena and observations. This is especially the case when non- or semistructured techniques or group techniques are being used to promote dynamic interaction.

Description
Interviews are methods used to carry out a type of formalized conversation that can be:

• Structured at different levels (structured, semistructured, and unstructured)

• Controlled to a greater or lesser extent by the interviewer

• Conducted with individuals or in groups

The book by Fowler and Mangione (1990) (and a number of similar books) is indispensable for those who want to make an effective interview study. It describes all aspects of the method and discusses what can diminish the value (precision, accuracy) of the results and thereby the reliability of the study. It is necessary to examine thoroughly the different methodological considerations for each step, from preparation (choice of theme, design) to completion of the interview, transcription and analysis of data, and verification.

A frequently used type of group interview is the Focus Group Interview, where brainstorming techniques can be useful to stimulate the dynamics and the creativity of the participants. See the separate description of this variation of interviews.

Assumptions for Application
The flexibility of un- or semistructured interview methods makes it necessary to be experienced in order to achieve a reliable result from the study. The level of reliability (precision and accuracy) required is, however, the deciding factor. This is the case for both individual and group interviews.

Perspectives
The methods are based on conversation between people, but it is not a dynamic dialogue of equals between the interviewee and the interviewer. Therefore, there are a number of social and psychological factors and pitfalls that have to be taken into consideration during the interaction.

Silverman (2001) describes the different perspectives behind approaches to interviews: positivism, emotionalism, and constructionalism. In short, positivists acknowledge that interviewers interact with interviewees giving facts or beliefs about behavior and attitudes, and demand that this interaction be defined in the protocol; emotionalists recognize that interviews are inescapably encounters between subjects, while eliciting authentic accounts of subjective experience; constructionalists see the interview as a focused interaction in its own right, while dealing with how interview participants actively and mutually create meaning.

Frame of Reference for Interpretation
In interview studies, there is normally no frame of reference in the usual sense, but it is quite possible to interview a group of people about the same subject – for instance, before and after the introduction of an IT-based system – and thereby get a sense of the development or change with regard to a specific question.

Perils and Pitfalls
See (Robson 2002) and Part III in this handbook.

The influence of the interviewer(s) is a common source of errors (Rosenthal 1976; Fowler and Mangione 1990). It is significant whether or not the interviewer is known to, and respected in, the organization, whether the interviewer holds respect for the organization and its needs for privacy and anonymity, and whether the interviewer has prior knowledge of the domain. It is significant whether there are one or more interviewers, as they may not necessarily get the same result. And it is significant whether you interview staff, middle management, or executive management, as the latter two groups are used to disregarding their own personal opinions in favor of the official politics and principles of the organization.

The way the questions are posed is important for the precision of the answers, as are the procedures under which the interviews are carried out (Fowler and Mangione 1990). Questions may be seen as threatening (Vinten 1998) or may touch on (culturally based) taboos.

A potential pitfall when digging into historical information is postrationalization; see Part III.

There is also a risk of a bias in quantitative studies if the user's ability to assess assumptions and correlations is challenged (see Part III).

Last but not least, it is important to realize that users in an organization find it extremely difficult to account for how they actually carry out their activities (see the discussions in Brender 1997a and 1999).

Advice and Comments
The level of difficulty of interview methods is often underestimated, and the reliability as well as the internal and external validity of the outcome is correlated with the experience of the interviewer. However, this should not deter one from using interview methods, but one has to get to know the methods in depth before starting and weigh their quality aspects against the objective of the study. This is certainly important if the result of the research is to be published in the scientific literature. However, there is a lot of easily accessible literature about the subject.

If there is a particular need to measure the validity of one's conclusions from an interview study, triangulation of the method or of specific observations can be made, as described in Part III.

References

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Brender J. Methodology for constructive assessment of IT-based systems in an organisational context. Int J Med Inform 1999;56:67-86.

This is a shortened version of the previous reference, so it does not go into as much depth and is more accessible.

Fowler Jr FJ, Mangione TW. Standardized survey interviewing: minimizing interviewer-related error. Newbury Park: Sage Publications. Applied Social Research Methods Series 1990. vol. 18.

This book is indispensable for those who want to make an effective interview study. It describes all aspects of the method and discusses what can diminish the value (precision, accuracy) of the results and thereby the reliability of the study.

Robson C. Real world research, a resource for social scientists and practitioner-researchers. 2nd ed. Oxford: Blackwell Publishers Inc; 2002. p. 269-91.

This book gives a good description of different types of interviews, details, and ideas regarding their use, including the advantages and disadvantages of each of them.

Rosenthal R. Experimenter effects in behavioral research, enlarged edition. New York: Irvington Publishers, Inc.; 1976.

This rather unsettling book reviews the impact of the experimenters on their test objects, including biosocial attributes, psychosocial factors, and situational factors (the context of the studies).

Silverman D. Interpreting qualitative data, methods for analysing talk, text and interaction. 2nd ed. London: Sage Publications; 2001.

Vinten G. Taking the threat out of threatening questions. J Roy Soc Health 1998;118(1):10-4.

Supplementary Reading
There are a number of similar descriptions of research methodologies for research within the social sciences, including interview techniques. The advice they provide is as good as the advice provided by the above references.

Ericsson KA, Simon HA. Protocol analysis, verbal reports as data. Cambridge (MA): The MIT Press; 1984.

This book provides a thorough review of the literature on approaches to and problems with the elicitation of information about cognitive processes from verbal data.

Harrison MI. Diagnosing organizations: methods, models, and processes. 2nd ed. Thousand Oaks: Sage Publications. Applied Social Research Methods Series 1994. vol. 8.

The book is concerned with a number of aspects of an organization, from individuals to their internal relationships (as in power structures, etc.), and discusses advantages and disadvantages of different methods in this respect, including those of interviews. It describes a number of factors with regard to individuals as well as groups in an organization.

Leavitt F. Research methods for behavioral scientists. Dubuque: Wm. C. Brown Publishers; 1991.

Contains quite concrete guidelines on how to formulate questions and put them together (also in respect of questionnaires) and how to avoid the worst pitfalls (see pages 162-172).

http://jthom.best.vwh.net/usability. This contains lots of summaries of methods and references, including some inspiration regarding interview studies (last visited 31.05.2005).

KUBI (From Danish: "KvalitetsUdvikling gennem BrugerInddragelse"; translated: Quality Development Through User Involvement)

Areas of Application

The method is used as a tool for the incremental optimization of the outcome of a long-term development project, based on a set of user- or customer/client-defined value norms and objectives.

Description
The method originates in the social sector, where it was developed as a constructive tool to improve conditions in the healthcare and social sectors (Krogstrup 2003). It has many points in common with the Balanced Scorecard method and is applied to assess the degree of fulfillment and subsequently to balance and refocus on areas for improvement. However, the KUBI method has some quite different success indicators as its driving force for development.

The procedure follows these steps:

1. Establishment of values and criteria for the primary stakeholder group(s) (such as a user, customer, or client)

2. Interview phase, during which selected members of the primary stakeholder group(s) are trained and carry out individual and group interviews under supervision

3. Summation of the interview data into a report that includes an assessment of the degree of fulfillment of user criteria

4. Preparation of plans for the areas for improvement and development initiatives, following a discussion of the report with the stakeholders

5. Follow-up of the new situation with the stakeholders after about a year, followed by a discussion of whether the development plans and initiatives need revision or expansion
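As an illustration of step 3, the sketch below shows one possible way of summarizing interview data into degrees of fulfillment per user-defined criterion. It is a minimal sketch only: the criteria, the 0-100 rating scale, the mean as summary measure, and the improvement threshold are all illustrative assumptions and are not prescribed by KUBI, which relies on qualitative interviews.

```python
from statistics import mean

# Hypothetical interview data: each stakeholder rates the perceived
# fulfillment of a user-defined criterion on a 0-100 scale (an assumption;
# KUBI itself does not prescribe a numeric scale).
ratings = {
    "Staff are involved in configuration decisions": [80, 65, 70, 90],
    "Response time supports the ward's work pace": [40, 55, 35, 50],
    "Training matches actual work procedures": [60, 70, 75, 65],
}

# Summarize each criterion (step 3) and flag candidates for the
# improvement plans prepared in step 4 (threshold is illustrative).
for criterion, scores in ratings.items():
    score = mean(scores)
    flag = "  <- candidate improvement area" if score < 60 else ""
    print(f"{score:5.1f}  {criterion}{flag}")
```

In a real KUBI application the report would of course retain the qualitative interview material; a quantification of this kind could at most supplement it.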

Assumptions for Application

Perspectives
The intense focus on the user-stakeholder group is somewhat unusual in IT development and implementation processes because it belongs to another world with other traditions and focal points. Nevertheless, the method has been included as it may provide inspiration in general terms. For example, one could imagine that it could be used in a slightly modified form as a hearing tool for very large IT projects, such as assessment of a regional or national strategy or implementation of an EHR, where the distance from project management to the user in the field can be quite great.

Frame of Reference for Interpretation
The frame of reference is the objectives established during the first phase of the procedure.

Perils and Pitfalls

Advice and Comments

References

Krogstrup HK. Evalueringsmodeller. Århus: Systime; 2003. (in Danish)

Even if this reference is in Danish, the above description should enable experienced evaluators to apply the method, or their own version of it, with benefit.

Logical Framework Approach

Areas of Application

• Situation analysis, either in general or at any time within a project

• Support for the choice of action prior to planning of the development effort

• Incorporation of risk handling within project planning

Description
Logical Framework Approach (LFA) is an objectives-oriented planning methodology. The authors call it a framework, but it serves as a methodology because it describes the overall sequence of activities for the whole process, from the start of the project to the end, and the relationships between activities and guidelines of a methodical nature.

The methodology is designed to be used for reform projects in developing countries. This in itself gives it the advantage of being intended as a very simple but effective planning and implementation tool.

The methodology consists of two parts (Handbook for objectives-oriented planning 1992): (1) a situation analysis to identify stakeholder groups and problems and weaknesses in the existing system and (2) project design. The first part can stand alone, while the latter presupposes the first.

The philosophy of the methodology is to focus on one central problem during a change management process. The core of the process lies in producing a 'Problem Tree' where the leaves are directly converted into a 'Tree of Objectives' through a description of objectives. This is then transformed and becomes a change management tool, such as traditional activity descriptions for a project.

A. Situation Analysis

1. Stakeholder analysis: Identification of groups and subgroups in order to identify participants for the future development project.

2. Problem analysis: Using a brainstorming technique, elements and symptoms in the organization are captured and subsequently synthesized into a 'Problem Tree', with the trunk being the central problem and the leaves the smallest symptoms. An analysis of the causality is used as the roots of the tree, in such a way that all the branches and leaves are covered and thereby accounted for.

3. Objectives Analysis: The Problem Tree is converted into a tree of corresponding solutions, the 'Objectives Tree'.

4. Analysis of the alternatives with regard to choosing the best solution for the future and the establishment of a strategy.

B. Solution Design

5. Project Elements: Define the objectives for the development (justification of the project) and the resulting immediate subobjectives (which together define and limit the intended effect), and then break them down into results, activities, and resources.

6. External Factors: Identification (for each and every activity) of important risk factors.

7. Indicators: Definition of the measures for monitoring the progress of each activity.

The above approach is primarily suited for simple projects. Crawford and Bryce (2003) review other, more complex versions of the LFA and summarize the key limitations of this method, which are concerned with handling the preconditions and assumptions for the implementation work (Phase B). However, in an evaluation context it is the approach of the Situation Analysis that is of primary interest.
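To make the Problem Tree / Objectives Tree relationship concrete, the sketch below models steps 2 and 3 as a simple recursive data structure. It is a minimal sketch under stated assumptions: the node texts and the negative-to-positive rewording rules are invented for illustration, since LFA elicits both trees in stakeholder workshops rather than by computation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node of a tree: trunk = central problem, leaves = symptoms."""
    statement: str
    children: List["Node"] = field(default_factory=list)

# Hypothetical phrase pairs for turning a problem into an objective;
# in LFA this rewording is done by the workshop participants.
REWORDING = {
    "are delayed": "are on time",
    "is missing": "is in place",
    "is incomplete": "is complete",
}

def to_objective(problem: str) -> str:
    """Reword one problem statement as the corresponding objective (step 3)."""
    for negative, positive in REWORDING.items():
        if negative in problem:
            return problem.replace(negative, positive)
    return "Improved: " + problem

def objectives_tree(problem: Node) -> Node:
    """Mirror the Problem Tree into the Objectives Tree, node by node."""
    return Node(to_objective(problem.statement),
                [objectives_tree(child) for child in problem.children])

# Invented example: a central problem (trunk) with two causal branches.
trunk = Node("Discharge letters are delayed", [
    Node("Transcription capacity is missing"),
    Node("Lab results transfer is incomplete"),
])
print(objectives_tree(trunk))
```

The one-to-one mirroring also shows why the choice of trunk matters: whatever is excluded from the Problem Tree never appears among the objectives, which is the point of the modification discussed under Perspectives below.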

Assumptions for Application
There is no real precondition with regard to the educational level of the participants, precisely because its intended use is in developing countries. A very simple version of the methodology can be used. However, depending on the accuracy and precision required, it may be necessary for the leader(s) of the process to have adequate experience of group dynamics and of relevant methods to supplement specific information needs. It cannot replace traditional system development methods, but it may be applied to identify areas that need focused efforts, and as such it may serve as a means for constructive assessment.

The methodology assumes quite a stable project plan, as it does not have any inbuilt mechanisms, such as feedback loops, for handling modifications. However, this does not preclude changes to a project plan, but it requires that all elements of the plan and its methods are explicitly dealt with or taken into consideration during a possible modification.

It is clearly a prerequisite that the supporting methods and techniques are properly identified and adapted to detailed subactivities in such a way that they harmonize with the entirety and with each other. Depending on the size of the project, a need for supporting methods or techniques may arise even when only the situation analysis is concerned, as may a need for support of the causal analysis. Stakeholder Analysis and Focus Group Interview are examples of useful methods in this respect.

Perspectives
In itself the methodology is neutral to political, organizational, and cultural conditions or constraints: it neither dictates nor prohibits such constraints, and it can work under their conditions. In the description there are aspects that make it more suitable in certain cultures and types of organizations than in others (as, for instance, in the suggestion for the choice of project participants, where the chosen procedures, principles, and criteria are clearly culture dependent). However, there is nothing to stop the use of one's own principles.

The methodology is reasonably neutral toward the methods employed and contains a number of steps for the process. As a rule it is necessary to add concrete and formalized methods or instructions (at least for large projects). As such it does not replace the stakeholder analysis method, cost-benefit analysis, impact assessments, and so on, but it has openings and rudimentary instructions where activities are prescribed and where other methods may therefore supplement it.

The perspective of the Problem Tree is that of focusing on just one single problem as a starting point for development. In the case of large development projects, this can often be too simplified a point of view. However, this can be resolved through a simple modification of the problem analysis. For example: 'All of the problem areas of the organization' may be defined as the trunk of the Problem Tree, instead of picking just one of the largest branches as the trunk and then disregarding the rest (Brender 1997). This will of course make the investigation into the causality more extensive or difficult, but it will also make it far more rewarding.

Frame of Reference for Interpretation
Preparation of the Problem Tree (in the Situation Analysis) is relevant as an evaluation tool in connection with an assessment of IT-based systems. But the Problem Tree does not have its own frame of reference against which to compare the outcome. The validity of the synthesized Problem Tree might be discussed with the user organization, whose opinions and attitudes become a frame of reference of sorts.

Perils and Pitfalls
When eliciting the Problem Tree, one must be aware of the following sources of error (see Part III):

1. Postrationalization (may partly be redressed by brainstorming techniques through the interaction and inspiration between the stakeholders)

2. Other error sources in connection with group dynamics, as described, for example, under Focus Group Interview

Advice and Comments
The methodology's principle of risk monitoring is recommended in its own right, in terms of monitoring the external factors that are incorporated in the implementation plan as explicit variables. It may be advantageous to incorporate them into other methods and strategies for ongoing monitoring (evaluation) of development trends and risk factors.

The Affinity method, or alternatively one of the other similar methods outlined on http://jthom.best.vwh.net/usability/, may be used as an aid to brainstorming and modeling of the Problem Tree.

References

Handbook for objectives-oriented planning. 2nd ed. Oslo: Norwegian Agency for Development Cooperation; 1992.

If this is not available, there is a résumé of it in the next reference. Alternatively, the reader is referred to the review in (Crawford and Bryce 2003).

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Crawford P, Bryce P. Project monitoring and evaluation: a method for enhancing the efficiency and effectiveness of aid project implementation. Int J Proj Manag 2003;21(5):363-73.

http://jthom.best.vwh.net/usability/ Contains lots of method overviews with links and references (last visited on 31.05.2005).

Organizational Readiness

Areas of Application
Assessment of a healthcare organization's readiness to assimilate a clinical information system.

Description
A number of aspects determine the organizational readiness for change, such as organizational adaptability and flexibility, its willingness to absorb external solutions, and its ability to develop viable solutions. A potential cause of failure to innovate is the organizational inability to undergo transformation during the implementation of an information system.

The study by Snyder-Halpern (2001) briefly reviews previous attempts at determining readiness and validates a model of innovation readiness together with a set of heuristics to assess organizational readiness. The method is not yet complete and has yet to include the metrics of the heuristics suggested. However, the description may still serve as valuable inspiration for preventive actions through assessment at the early stages of IT system purchase or implementation.

Assumptions for Application

Perspectives

Frame of Reference for Interpretation (Not applicable)

Perils and Pitfalls

Advice and Comments

References

Snyder-Halpern R. Indicators of organizational readiness for clinical information technology/systems innovation: a Delphi study. Int J Med Inform 2001;63(3):179-204. Erratum in: Int J Med Inform 2002;65(3):243.

Pardizipp

Areas of Application
Preparation of future scenarios

Description
Scenarios are common-language descriptions of specific activities and of how users would normally go about executing them. They can, however, also be described diagrammatically. Pardizipp is based on the Delphi method. Development of scenarios, which in Pardizipp are textual, follows the six steps listed below (steps 2-4 are to be repeated jointly) (Mettler and Baumgartner 1998):

1. Definition of a general frame that will serve as the basis for group work around the creation of scenarios

2. Creation of scenarios and a thorough analysis of their consequences and assumptions

3. Quantifying and model building

4. Preparation of policies and specific actions – where a given scenario should be implemented

5. Development of a consensus scenario, which takes into account earlier scenarios developed for the same problem area

6. Preparation of recommendations for policies and actual actions
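The Delphi element underlying steps 2-5 is an iterated feedback loop: panelists rate an assumption, see the group's aggregate response, and revise. The sketch below illustrates one common way of quantifying such convergence; the ratings, the 1-9 scale, the interquartile-range measure, and the stopping threshold are all illustrative assumptions, not part of Pardizipp's definition.

```python
from statistics import median, quantiles

# Hypothetical ratings of one scenario assumption (1-9 plausibility scale),
# collected over three Delphi rounds with feedback between rounds.
rounds = [
    [2, 9, 5, 7, 3, 8],   # round 1: initial, widely spread ratings
    [4, 8, 6, 7, 5, 7],   # round 2: revised after seeing round-1 feedback
    [6, 7, 6, 7, 6, 7],   # round 3: revised again
]

def spread(ratings):
    """Interquartile range, a common convergence measure in Delphi studies."""
    q1, _, q3 = quantiles(ratings, n=4)
    return q3 - q1

for i, ratings in enumerate(rounds, start=1):
    print(f"round {i}: median={median(ratings)} IQR={spread(ratings):.2f}")
    if spread(ratings) <= 1.0:   # hypothetical consensus threshold
        print("convergence reached; draft the consensus scenario (step 5)")
        break
```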

Assumptions for Application
A prerequisite for a successful result is that a good mix of stakeholder groups is represented.

It is worth noting that a centrally placed team – not the participants involved – prepares the resulting scenario(s) on the basis of the information gathered. Thus, it requires a certain degree of experience with this type of undertaking. See also under Delphi.

Perspectives
The philosophy behind the method is twofold: It is partly based on the philosophy built into the Delphi method and partly on the fact that modern technological development has a number of unfortunate effects on surrounding social and organizational conditions. This makes the authors focus on improving the basis of decision making and related processes, which is done by establishing a decision-making foundation built on the projections and aspirations of a broad segment of participants. In other words, they believe that the basis for decision making will improve by expanding the group preparing it. This is obviously a culturally conditioned assumption (see the discussion in the Introduction), and it has its roots in Western cultural perceptions of how things should be handled and how best to structure an organization.

Frame of Reference for Interpretation (Not applicable)

Perils and Pitfalls

1. The principle of using scenarios gives a fragmented picture of a normally holistic entirety. Therefore, it takes some talent to organize the preparation of a combination of scenarios in such a way that together they give an adequately holistic impression.

2. People normally find it difficult to formulate explicitly their thoughts about their own work situation and work procedures (see the discussions in Brender 1997a and 1999 and others). This is where the Delphi method shows its strength, but it cannot completely compensate in all situations.

3. One pitfall is ethnocentricity (see Part III) – that is, lack of acknowledgement and consideration of cultural backgrounds. This may introduce a bias in the selection of participants. The authors mention principles for participation (for instance, lack of female involvement) as a caveat, but in general the method is considered most appropriate in cultures where there is already a tradition of participatory methods and in organizations where there is a possibility of (informal) equality among participants of different levels of competence.

Advice and Comments
Future scenarios are useful tools for the understanding of development trends and options. Methods for preparing future scenarios often have the advantage of helping the participants to fantasize – irrespective of their technological understanding. In this way the participants are often able to let go of existing constraints in terms of the technological potentials and limitations induced by many systems analysis tools.

References

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Brender J. Methodology for constructive assessment of IT-based systems in an organisational context. Int J Med Inform 1999;56:67-86.

This is a shortened version of the first reference and more easily accessible with regard to this subject.

Mettler PH, Baumgartner T. Large-scale participatory co-shaping of technological developments, first experiments with Pardizipp. Futures 1998;30(6):535-54.

Prospective Time Series

Areas of Application
Measurement of a development trend, including, for example, the effect of an intervention:

• Time series

• Before-and-after studies, where the intervention involved could be the introduction of an IT-based system, for instance

Description
Measurement of a number of measures over time shows how an activity or an outcome changes as a function of either time alone or of different initiatives. Such studies may be either simple or controlled, depending on the level of control of experimental factors. Simple before-and-after studies address a single case before and after an intervention, while the controlled approach matches a number of cases to be studied in parallel.
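For the simple before-and-after design, a segmented (interrupted time series) regression is one common way of separating a level change at the intervention from the underlying trend. The sketch below is illustrative only: the monthly values are fabricated, and the variable names are assumptions, not taken from this handbook.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated monthly values of some outcome measure: 12 months before
# and 12 months after an IT system goes live at month 12.
rng = np.random.default_rng(0)
months = np.arange(24)
golive = 12
y = 50 + 0.2 * months                      # secular trend
y = y + np.where(months >= golive, -8, 0)  # hypothetical level drop
y = y + rng.normal(0, 2, size=24)          # measurement noise

df = pd.DataFrame({
    "y": y,
    "time": months,
    "after": (months >= golive).astype(int),       # level-change term
    "time_after": np.maximum(0, months - golive),  # slope-change term
})

# 'after' estimates the immediate level change at go-live;
# 'time_after' estimates the change in trend thereafter.
model = smf.ols("y ~ time + after + time_after", data=df).fit()
print(model.params)
```

Note that, as discussed under Perspectives below, such a single-case design quantifies the change but cannot by itself attribute it to the IT system rather than to the simultaneous organizational changes.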

Assumptions for Application
The use of time series as one's assessment design requires control over what changes take place – intentional and unintentional – within the organization during the study.

Perspectives
Time series are in principle carried out as a matched-pair design, just like traditional, controlled studies including RCTs. Therefore, they contain the inbuilt assumption that there is no interaction between case-specific matching characteristics and a potential intervention (Suissa 1998). The problem occurs in particular when there is just one case in the study – the simple before-and-after study – which often happens in assessment of IT-based solutions. Implementation of IT-based solutions often involves radical changes in the organization, its structure, and work processes. Consequently, when there is only one case and the intervention is the introduction of an IT-based solution, an interaction between the case and the intervention will be present. Thus, the control group (the first measurement(s) in the time series) and the intervention group are no longer identical in all aspects other than that of the intervention applied. See also Sparrow and Thompson (1999).

7. The same comment is valid regarding the economy as described for Clinical/Diagnostic Performance.

Frame of Reference for Interpretation
The frame of reference is generally included in the overall design – for instance, as the first point within the time series.

Perils and Pitfalls
One question to keep in mind is: Do the measures keep representing exactly the same characteristics of the system or the organization? This needs to be verified for the data collected, and it is particularly important when a time series spans the implementation of an IT system – often over several years. Evans et al. (1998) apparently handle this error source elegantly. However, they do fall into the trap of giving the decision-support system the credit for the improvements without acknowledging that, simultaneously, drastic changes to work processes take place (over and above what is needed by the system functionality).

As always, one has to ascertain that the conditions for the use of one's metrics (calculation techniques) are fulfilled, and if one performs statistical calculations on successive steps in a series of measures, it is a requirement for the use of concrete techniques that the measures are independent ('carryover effects' and 'conditional independence on the subject'). See details in (Suissa 1998).
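One simple, widely used check on this independence requirement is to test model residuals for serial (lag-1) correlation, for instance with the Durbin-Watson statistic. The sketch below is a minimal illustration on a fabricated series; in practice one would test the residuals of the actual model fitted, not deviations from the overall mean.

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

# Fabricated series of successive measures spanning an intervention.
series = np.array([50.1, 50.9, 50.4, 51.6, 51.2, 52.0,
                   43.5, 44.1, 43.8, 44.9, 44.2, 45.0])

# Simplification for illustration: residuals taken from the overall mean.
residuals = series - series.mean()

dw = durbin_watson(residuals)  # values near 2 suggest no lag-1 correlation
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print(f"Durbin-Watson = {dw:.2f}, lag-1 autocorrelation = {lag1:.2f}")
```

Here the unmodeled level shift shows up as strong positive autocorrelation, signaling that these successive measures cannot be treated as independent.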

In cohort studies with the follow-up of cases over time, biases may occur as a consequence of cases that drop out along the way (see Pennefather et al. 1999). In principle this corresponds to cases with missing data for the later points of the time series. This bias may be caused by lack of representativeness of the drop-out group compared to the study group as a whole. An important example is successive questionnaire studies. Another example is the representativeness of the users involved around the IT system. Therefore, in the final conclusion it is important to explain the causality in connection with these missing data.

Advice and Comments

References

Evans RS, Pestotnik SL, Classen DC, Clemmer TP, Weaver LK, Orme JF, Lloyd JF, Burke JP. A computer-assisted management program for antibiotics and other antiinfective agents. New Engl J Med 1998;338(4):232-8.

Pennefather PM, Tin W, Clarke MP, Fritz S, Hey EN. Bias due to incomplete follow-up in a cohort study. Br J Ophthalmol 1999;83:643-5.

Sparrow JM, Thompson JR. Bias: adding to the uncertainty, editorial. Br J Ophthalmol 1999;83:637-8.

Suissa S. The case-time-control design: further assumptions and conditions. Epidemiology 1998;9(4):441-5.

Supplementary Reading

Ammenwerth E, Kutscha A, Eichstädter R, Haux R. Systematic evaluation of computer-based nursing documentation. In: Patel V, Rogers R, Haux R, editors. Proceedings of the 10th World Congress on Medical Informatics; 2001 Sep; London, UK. Amsterdam: IOS Press; 2001. p. 1102-6.

A good before-and-after case study of the quality of nursing documentation records and user satisfaction, based on a combination of many methods (a multimethod design).

Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, Boyle D, Leape L. The impact of computerized physician order entry on medication error prevention. JAMIA 1999;6(4):313-21.

A case study using prospective time studies to assess the impact of the introduction of an IT-based solution on a concrete activity.

Brown SH, Coney RD. Changes in physicians' computer anxiety and attitudes related to clinical information system use. JAMIA 1994;1(5):381-94.

In a case study, the authors investigate physicians' fear of new IT technology in a before-and-after study.

Kelly JR, McGrath JE. On time and method. Newbury Park: Sage Publications. Applied Social Research Methods Series 1988. vol. 13.

The book thoroughly and stringently (and with a philosophical background) discusses a number of aspects concerning time and the influence of time on experimental studies. Further, beyond the problems and pitfalls of time studies, it discusses, for instance, changes in the observer and problems in the causal analysis of phenomena of a cyclic nature.

Murphy CA, Maynard M, Morgan G. Pretest and post-test attitudes of nursing personnel toward a patient care information system. Comput Nurs 1994;12:239-44.

A very good case study with questionnaires in a before-and-after study. Recommended for studying because they make a real effort to verify the internal validity of the elements in the questionnaire.

Ruland CM, Ravn IH. An information system to improve financial management resource allocation and activity planning: evaluation results. In: Patel V, Rogers R, Haux R, editors. Proceedings of the 10th World Congress on Medical Informatics; 2001 Sep; London, UK. Amsterdam: IOS Press; 2001. p. 1203-6.

A before-and-after study of a decision-support system for nurses with the primary focus on investigating financial aspects (which in this case corresponds to evaluation of the objectives fulfillment) and user satisfaction. But, unfortunately, the study (possibly) suffers from a number of biases typical of controlled studies (see Part III of this handbook).

Wyatt JC, Wyatt SM. When and how to evaluate health information systems? Int J Med Inform 2003;69:251-9.

Outlines the differences between the simple before-and-after and the controlled before-and-after studies as opposed to the RCTs.

Yamaguchi K. Event history analysis. Newbury Park: Sage Publications. Applied Social Research Methods Series 1991. vol. 28.

A book for those who want to conduct studies over time in an organization.

Questionnaire (Nonstandardized questionnaires)

Areas of Application
Imagination is the only real limit to what questionnaires can be, and have been, used for, but for investigations requiring a high level of accuracy their main area of application is (qualitative) studies of subjective aspects.

Note: There is a sharp distinction between custom-made questionnaires and standardized, validated questionnaires available from the literature or as commercial tools. The section below deals with the former. However, even if a standard questionnaire is chosen, one must still investigate the degree to which its assumptions, pitfalls, and quality meet one's needs.

Description
The advantage of questionnaires – and probably the reason why they are so widely used – is that most people can manage to put a questionnaire together to investigate virtually any subject of their choice.

There are a number of ways in which to ask questions, and they do not necessarily exclude each other. However, by using a combination of them, there is a risk of making the analysis (the mathematical and statistical analysis) more difficult:

• Open questions, where the respondent answers (the individual questions of) the questionnaire in ordinary text

• Checklist questions, which normally consist of three boxes: "yes", "no", and "don't know"

• The Likert scale, consisting of a bar with fields to tick on a scale from "agree completely" and "agree" through a neutral point to "disagree" and "completely disagree"

• Multipoint scale, where the respondent indicates his or her assessment on a continuous scale with the two extremes indicating opposing rankings, "agree completely" to "completely disagree"

• Semantic differential scale, which in tabular form uses columns with a value scale (for instance, "extremely", "very", "somewhat", "neutral", "a little", "somewhat", "very", "extremely") and where the rows of the table indicate the properties to be evaluated by means of the two extremes on the scale (for example, "easy" . . . "difficult", "fun" . . . "boring"), and so on

• Categorical scale, where the tick options (which may mutually preclude each other) are completely separated – for example, questions about sex, age, or profession

Note: No specific questionnaires are indicated here, as they are normally formulated for each specific case, but in the literature there are a number of more or less 'standardized' questionnaires for measuring user satisfaction, which can be used as they are or with some adaptations. These are indicated below under References.

Assumptions for Application
Questionnaires are tools, and tools need verification with respect to construct and content validity before application. It is important that questionnaires – and all elements in them – have been tested (validated) to increase the likelihood that the questionnaire will serve its purpose as it is supposed to. It is very important that questionnaire studies are of a suitably high standard qualitatively. This concerns the preparatory establishment of objectives for the studies, accurate wording of hypotheses and theories, qualitatively satisfactory questionnaires (which is probably the most difficult), clearly formulated rules of analysis, and requirements for reporting (opportunities and limitations) of the results.

Further, it is obvious that there must be a reasonable relationship between the resources required to prepare the questionnaire and the purpose of the study. Far from all studies need thorough scientific validation of the questionnaire in order to provide results leading to optimal action.

The use of concrete statistical tools normally implies assumptions, and one of the assumptions for the use of standard deviations and Student's t-test is that the data come from a continuous scale (a scale with real numbers). Consequently, these statistical methods cannot be used for the analysis of data obtained on a categorical scale such as the Likert scale(!) but may be used for a multipoint scale, for instance.
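The sketch below illustrates this point with fabricated response data: a rank-based test (here the Mann-Whitney U test) respects the ordinal nature of Likert responses, whereas the t-test presupposes a continuous (interval) scale such as a multipoint scale. Group names and values are invented for the example.

```python
import numpy as np
from scipy import stats

# Ordinal Likert responses (1 = completely disagree ... 5 = agree completely)
ward_a = np.array([4, 5, 3, 4, 4, 2, 5, 4])
ward_b = np.array([2, 3, 3, 1, 2, 4, 2, 3])
u, p = stats.mannwhitneyu(ward_a, ward_b)  # rank-based, suitable for ordinal data
print(f"Mann-Whitney U = {u}, p = {p:.3f}")

# Continuous multipoint-scale ratings (e.g., a 0-100 mm mark on a line)
ward_a_mm = np.array([71.0, 84.5, 55.0, 68.0, 77.5, 49.0, 90.0, 66.5])
ward_b_mm = np.array([42.0, 51.5, 58.0, 30.5, 44.0, 61.0, 39.5, 48.0])
t, p = stats.ttest_ind(ward_a_mm, ward_b_mm)  # assumes an interval scale
print(f"t = {t:.2f}, p = {p:.3f}")
```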

Perspectives
The perspective of the person formulating the questionnaire depends very much on his or her level of experience. The inexperienced seem to think that it is easy to write a questionnaire and that this will produce answers and provide the truth about the questions asked. This rather naive understanding is based on the fact that we are all used to formulating lots of questions and getting sensible answers every day. The difference is, however, that when we formulate everyday questions between us, these questions form part of a joint context in relation to the present and possibly also to a mutual past, which explains a lot about the background and what it refers to, implicitly and explicitly. If you don't get the right answer the first time around during a conversation, you repeat the question once or twice without thinking about it. This type of reiteration cannot take place in questionnaires, and the creation of a mutual understanding is exactly what makes the formulation of a valid questionnaire so difficult.

Frame of Reference for Interpretation
The frame of reference depends on the purpose of the study. One may formulate before-and-after questionnaires, where the response to the first questionnaire becomes the frame of reference for the second one. A strategic objective could also be a frame of reference in a particular study. Furthermore, earlier studies from the organization, for instance, or studies described in the literature may also be used as a basis for comparison.

However, usually there is no frame of reference for the analysis or the conclusion of a questionnaire study.

Perils and Pitfalls
There is a whole range of pitfalls in questionnaires, such as the following:

• The internal validity of the questions: Does the respondent read/understand the same as the author? Do all the respondents read/understand exactly the same thing?

• The problem of postrationalization (see Part III) is a risk in studies using questionnaires presented some time after the events addressed (Kushniruk and Patel 2004).

• Each respondent will always answer the questionnaire in his or her own context, including emotional factors, and the actual response will therefore vary from one day to the next.

• For psychological reasons people have problems assessing probabilities correctly. Therefore, questions such as "how often . . . ?" should be avoided.

• Certain questions may be taboos, or they may be perceived as threatening – particularly in the case of nonanonymous studies (Vinten 1998).

• When using a questionnaire from foreign literature, be aware that it needs adapting in case of national, organizational, linguistic, or cultural differences. It is not valid per se to apply a questionnaire in a foreign language, and it is not easy to translate a questionnaire while preserving the meaning, in case one wants to make a multinational survey.

• It is rare that the target group of a questionnaire study is so well grounded in a foreign language that one can expect a reliable answer should the questionnaire be used in its original language.

• Do not underestimate the pitfall that may be introduced when translating a questionnaire from one language (and culture) to another. This is by no means insignificant.

• It makes a difference who in the organization is being interviewed, because top and middle management have more training in promoting the official line at the expense of their own personal opinion, without even thinking about it. The same is the case in certain cultures, such as in Asia and in the former Soviet republics. The fact that these cultures belong to countries a great distance away from your own does not automatically preclude that these same attitudes and manners exist in your country or in a subculture of your society – because they do.

Although there are many pitfalls and difficulties, one should not give up, because it is possible to allay the problems. Depending on the intended use of the study result and how accurate it needs to be, the list above illustrates that the task of formulating a questionnaire is often one to be carried out by people with this expertise.

Advice and Comments
Do look for guidelines to qualitative evaluation studies, as they give lots of tangible advice on the wording of questionnaires and examples of these.

References like (Ives et al. 1983; Murphy et al. 1994; Jacoby et al. 1999; and Paré and Sicotte 2001) are examples of how to verify and adapt a questionnaire for, for instance, reliability, predictive validity, accuracy, content validity, and internal validity.

References

Ives B, Olson MH, Baroudi JJ. The measurement of user information satisfaction. Communications of the ACM 1983;26(10):785-93.

Brilliant little article with advice and guidelines regarding the measurement and adaptation of different quality measures for a questionnaire.

Jacoby A, Lecouturier J, Bradshaw C, Lovel T, Eccles M. Feasibility of using postal questionnaires to examine carer satisfaction with palliative care: a methodological assessment. Palliat Med 1999;13:285-98.

Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.

Murphy CA, Maynard M, Morgan G. Pretest and post-test attitudes of nursing personnel toward a patient care information system. Comput Nurs 1994;12:239-44.

Recommended for studying, precisely because it makes a real effort to verify the internal validity of the elements of the questionnaire and explains how one can use questionnaires in a before-and-after study.

Paré G, Sicotte C. Information technology sophistication in health care: an instrument validation study among Canadian hospitals. Int J Med Inform 2001;63:205-23.

Vinten G. Taking the threat out of threatening questions. J Roy Soc Health 1998;118(1):10-4.

References to Standard Questionnaires

Aydin CE. Survey methods for assessing social impacts of computers in health care organizations. In: Anderson JG, Aydin CE, Jay SJ, editors. Evaluating health care information systems, methods and applications. Thousand Oaks: Sage Publications, 1994, pp. 69-115.

The chapter refers to a number of questionnaires used in the literature and contains a couple of actual examples in the Appendix.

Harrison MI. Diagnosing organizations: methods, models, and processes. 2nd ed. Thousand Oaks: Sage Publications; 1994. Applied Social Research Methods Series, vol. 8.

Appendix B of the book contains a number of references to standard questionnaires for (nearly) every purpose regarding conditions in an organization.

Supplementary Reading, Including Case Studies

Ammenwerth E, Kaiser F, Buerkly T, Gräber S, Herrmann G, Wilhelmy I. Evaluation of user acceptance of data management systems in hospitals – feasibility and usability. In: Brown A, Remenyi D, editors. Ninth European Conference on Information Technology Evaluation; 2002 Jul; Paris, France. Reading: MCIL; 2002:31-38. ISBN 0-9540488-5-7.

This reference and the next deal with the same case study, but at different phases of the assessment.

Ammenwerth E, Kaiser F, Wilhelmy I, Höfer S. Evaluation of user acceptance of information systems in health care – the value of questionnaires. In: Baud R, Fieschi M, Le Beux P, Ruch P, editors. The new navigators: from professionals to patients. Proceedings of MIE2003; 2003 May; St. Malo, France. Amsterdam: IOS Press. Stud Health Technol Inform 2003;95:643-8.

A case study of user satisfaction, which also assesses the quality aspects (reliability and validity) of the questionnaire used.

Andersen I, Enderud H. Udformning og brug af spørgeskemaer og interviewguides. In: Andersen I, editor. Valg af organisationssociologiske metoder – et kombinationsperspektiv. Copenhagen: Samfundslitteratur; 1990. p. 261-81. (in Danish)

This reference contains some advice on how (not) to do things.

Bowman GS, Thompson DR, Sutton TW. Nurses' attitudes towards the nursing process. J Adv Nurs 1983;8(2):125-9.

Concerns (measurement of) user attitudes to the nursing process.

Chin JP. Development of a tool measuring user satisfaction of the human-computer interface. In: Proceedings of the CHI'88 Conference on Human Factors in Computing. New York: Association for Computing Machinery; 1988. p. 213-8.

Addresses user attitudes toward specific characteristics of nursing documentation.

Hicks LL, Hudson ST, Koening S, Madsen R, Kling B, Tracy J, Mitchell J, Webb W. An evaluation of satisfaction with telemedicine among health-care professionals. J Telemed Telecare 2000;6:209-15.


A user satisfaction case study.

Leavitt F. Research methods for behavioral scientists. Dubuque: Wm. C. Brown Publishers; 1991.

Contains quite tangible instructions on how to formulate and put questions together (also for interviews) and how to avoid the worst pitfalls; see pages 162-72.

Lowry CH. Nurses' attitudes toward computerised care plans in intensive care. Part 2. Nurs Crit Care 1994; 10:2-11.

Deals with attitudes toward the use of computers in nursing, particularly in connection with documentation.

Lærum H. Evaluation of electronic medical records, a clinical perspective [Doctoral dissertation]. Faculty of Medicine, Norwegian University of Science and Technology, Trondheim: NTNU; 2004. Report No.: 237. ISBN 82-471-6280-6.

This doctoral dissertation is a multimethod evaluation study, including an extensively validated questionnaire.

Murff HJ, Kannry J. Physician satisfaction with two order entry systems. J Am Med Inform Assoc 2001;8:499-509.

A very well-planned case study when it comes to the comparison of two systems, as the users are the same for both systems. They also seem to have the statistics under control.

Nickell GS, Pinto JN. The computer attitude scale. Comput Human Behav 1986;2:301-6.

Deals with user attitudes toward computers in general (for everyday use).

Ruland CM. A survey about the usefulness of computerized systems to support illness management in clinical practice. Int J Med Inform 2004;73:797-805.

A case study applying a questionnaire to survey clinical usefulness.

Rådgivende Sociologer. Manual om spørgeskemaer for Hovedstadens Sygehusfællesskab. Rådgivende Sociologer; 2001. Report No.: Manual om spørgeskemaer for H:S. (Available from: www.mtve.dk under 'publications'. Last visited 31.05.2005.) (in Danish)

This is a very useful reference for formulating a questionnaire study. It contains both instructions and actual finished questionnaires for different purposes within the healthcare sector.


Schubart JR, Einbinder JS. Evaluation of a data warehouse in an academic health sciences center. Int J Med Inform 2000;60:319-33.

A structured, hypothesis-driven case study with verification and validation of the questionnaire and explicit validation of the results.

Sleutel M, Guinn M. As good as it gets? Going online with a clinical information system. Comput Nurs 1999;17(4):181-5.

A good case study to learn from, because the authors take care to analyze the internal validity of the questionnaire and because they dig down and investigate unexpected observations. However, the drawback is that they do not mention the formal requirements for the use of their statistical tools.

Terazzi A, Giordano A, Minuco G. How can usability measurement affect the re-engineering process of clinical software procedures? Int J Med Inform 1998;52:229-34.

Addresses "perceived usability'-that is, the user's subjective understanding o f usability, using standard questionnaires f o r this purpose.

Weir R, Stewart L, Browne G, Roberts J, Gafni A, Easton S, Seymour L. The efficacy and effectiveness of process consultation in improving staff morale and absenteeism. Med Care 1997;35(4):334-53.

A well-executed case study applying the RCT method based on existing, validated questionnaires covering subjects such as job satisfaction, attitudes, and personalities. Unfortunately, they have problems with differences in the intervention between the two groups compared.

See also:

http://jthom.best.vwh.net/usability

Contains lots of method overviews with links and references (last visited 31.05.2005).

Goldfield GS, Epstein LH, Davidson M, Saad F. Validation of a questionnaire measure of the relative reinforcing value of food. Eat Behav 2005;6:283-92.

The advantage of this reference is its suggestions for statistical analysis methods that take into account the nature of the investigation data.

McLinden DJ, Jinkerson DL. Picture this! Multivariate analysis in organisational development. Evaluation and Program Planning 1994;17(1):19-24.

This little article can be inspirational in how to treat and present data in a different way.

http://qpool.umit.at

This website is still under construction, but it will eventually include a list of more or less validated questionnaires from the literature (last visited 31.05.2005).


RCT, Randomized Controlled Trial

Areas of Application

The purpose of this type of study is verification of efficacy⁸ (Wall 1991; Goodman 1992) – that is, that the IT system – under ideal conditions – makes a difference to patient care.

RCT is used to identify marginal differences between two or more types of treatment. The method has been particularly useful in assessment studies of IT-based solutions for decision-support systems and expert systems, and only to a limited degree for other types of systems.

Description

Randomization is used to avoid bias in the allocation or choice of cases and actors. In clinical studies randomization is used in relation to the selection of clinical personnel or patient treatment groups, respectively, while in the assessment of IT-based solutions it may, for instance, be used to select the study organization(s) or users.

The concept 'controlled' is used in relation to studies with a minimum of two groups, one of which is a reference group (control group, treated in the traditional way). The control group is used for comparison to determine whether there is an effect on the intervention group – that is, it serves as the frame of reference. The principle is that the control group and the intervention group are treated in exactly the same way (except for the intervention), thereby making it possible to measure differences in the effect of the intervention. The problem with RCT for IT-based solutions is to secure and achieve identical treatment of the two or more groups.
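As a minimal illustration of the randomization step just described, the sketch below allocates study units (here hypothetical departments) to an intervention and a control group. Note, in line with the text, that randomization alone does not make the two groups comparable.

```python
import random

def randomize(units, seed=None):
    """Randomly split study units (departments, users, patients) into an
    intervention group and a control group of (near-)equal size."""
    rng = random.Random(seed)   # a fixed seed makes the draw reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (intervention, control)

departments = ["Dept A", "Dept B", "Dept C", "Dept D", "Dept E", "Dept F"]
intervention, control = randomize(departments, seed=42)
print("Intervention:", intervention)
print("Control:     ", control)
```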

Assumptions for Application

Use of RCT in medicine is fairly standard, and the method is therefore known to many. However, it is not possible to transpose the method directly to the assessment of IT-based solutions, as it is by no means trivial to handle the assumptions on randomization and to obtain comparable groups. One of the challenges is to identify criteria for selecting the study objects and thereby the population (cases, users, and patients), with consideration to the size of the population and the generalizability of a later conclusion. See below regarding the many pitfalls.

⁸ "'Efficacy' addresses the performance of the application under ideal circumstances, while 'effectiveness' is related to application under real circumstances (the capability of bringing about the result intended – i.e., doing the right things); and 'efficiency' is related to a measure of the capability of doing the things right." (from Brender 1997a)

Perspectives

RCT is widely used for testing clinical procedures and pharmaceutical products. One argument in favor of RCT is that this procedure is so well established that a number of biases can be avoided. It is characteristic of medical and pharmaceutical tests that the effect is often marginal in comparison to existing pharmaceuticals and procedures. Thus, it is necessary to be extremely careful about as many types of bias as possible in order to document that the new product makes a difference at all. This is not necessarily the case with similar studies of IT systems. Neither is it a foregone conclusion that such a study would conclude to the IT system's advantage if impact parameters were viewed unilaterally. The reward might be found in quite a different place that is not addressed by an RCT – for example, in the soft human aspects of the system, in the system's feasibility to support change management in the organization, or, in the longer term, in other profession-oriented or management aspects of the system.

RCT is carried out under ideal conditions, and it is focus driven. But who is to say that day-to-day reality is ideal?

Some people promote the viewpoint that RCT can be carried out very cheaply because the existing infrastructure, service function, and resources of a hospital or a department can be used free of charge, and the study therefore only needs compensation for extraordinary expenses such as statistical analysis. No doubt this is feasible in many cases today. In principle this is also the case for RCTs of IT-based systems, but it is naive to believe that real life is that simple.

Frame of Reference for Interpretation

It is possible to carry out controlled studies in several different ways – for instance, (1) by some patients being treated under the old system and others under the new system, or equally with regard to staff; (2) by some departments keeping the old system and other departments getting the new system; or (3) by comparing similar departments in different hospitals.


Perils and Pitfalls

By far the biggest problem is to make the groups being compared truly comparable. This includes having the groups go through the same conditions and circumstances during the process, so that only relevant factors influence the outcome of the study.

Randomization depends on the feasibility of a real choice between participants to ensure that the resulting groups are comparable. It is of course possible to make a draw between two departments to find the one that gets an EHR implemented and the one that will not – in other words, which is the intervention group and which is the control group – although this in itself does not render the control group and the intervention group comparable (see the review in Part III).

• Matching control and intervention groups. It is not always enough just to have comparable cases, as in different but comparable medical departments – comparing two departments with six physicians each, for instance. For certain systems, such as EHRs and decision-support systems, the categories of physicians a group is made up of make a difference, because the physicians have different backgrounds and levels of competence, medically and in terms of IT experience.

• One must be cautious when comparing groups from different medical specialist areas. This is the case not only during planning, but also in the interpretation (or, worse, the extrapolation) of the result with the purpose of later putting it into wide practical use.

• Inclusion and exclusion criteria for involving cases and users as well as patients will normally cause the conclusion to be valid only for a (well-defined) fraction of the daily practice in the department or clinic.

• A demand in an RCT study will normally be to undertake comparable treatment of both groups throughout the study. For IT systems this may be achieved by carrying out the phases of requirements specification, design, and implementation – that is, the Explorative Phase and the Technical Development Phase – on both the intervention group and the control group, with a delayed start in one of the departments. This way you may obtain a measure of the effect on the intervention group. However, one must still be careful to avoid other biases, such as the Hawthorne effect.

Advice and Comments

It is sometimes feasible to use the balanced block design when the IT-based system includes a number of parallel applications (such as knowledge-based support for a number of medical problems). The application is then divided into smaller clusters, each containing a proportion of the medical problems covered by the system. Each participant is assigned to one of the clusters, while serving as a control for the other clusters, and vice versa. See this approach in (Bindels et al. 2004), for instance.
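The following is a minimal sketch of the balanced block idea just described (the clusters and participants are invented, and this is not the exact design of Bindels et al. 2004): each participant is assigned one cluster of medical problems for which the support is active, and acts as a control for the remaining clusters.

```python
import random

def assign_balanced_blocks(participants, clusters, seed=None):
    """Give each participant one cluster for which decision support is
    active; for all other clusters the participant serves as a control.
    Round-robin over a shuffled list keeps the cluster groups balanced."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {person: clusters[i % len(clusters)]
            for i, person in enumerate(shuffled)}

clusters = ["cluster 1 (anaemia)", "cluster 2 (diabetes)", "cluster 3 (thyroid)"]
gps = [f"GP {n}" for n in range(1, 7)]
for gp, cluster in assign_balanced_blocks(gps, clusters, seed=1).items():
    print(f"{gp}: intervention for {cluster}; control for the rest")
```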

There are a number of initiatives and advocates for RCT as the sole usable method for documenting the justification of a given IT system. But there are also well-argued counterarguments, as, for instance, in (Heathfield et al. 1998). The authors discuss the problems of using RCT for IT-based solutions and do not at all agree with those who promote the use of RCT to assess IT-based systems and solutions in the healthcare sector.

The personal view of the author is that every method should be used in the situation to which it is suited. However, in the absence of more suitable methods, or if none can be applied to the objective, one may have to compromise. This is rarely in RCT's favor in the case of IT systems.

Campbell et al. (2000) give advice and guidelines on the handling of RCT studies of complex interventions. It might be of some assistance to those who may be considering an RCT for IT-based systems.

References

Bindels R, Hasman A, van Wersch LWJ, Talmon J, Winkens RAG. Evaluation of an automated test ordering and feed-back system for general practitioners in daily practice. Int J Med Inform 2004;73:705-12.

Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, Tyrer P. Framework for design and evaluation of complex interventions to improve health. BMJ 2000;321:694-6.

Goodman C. It's time to rethink health care technology assessment. Int J Technol Assess Health Care 1992;8:335-58.

Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care: barriers and challenges. BMJ 1998;316:1959-61.


Wall R. Computer Rx: more harm than good? J Med Syst 1991;15:321-34.

Supplementary Inspiration and Critical Opinions in the Literature, Including a Couple of Case Studies

Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gotzsche PC, Lang T. The revised CONSORT statement for reporting randomized trials. Ann Intern Med 2001;134(8):663-94.

This reference is a 'must' before getting started with putting anything about an RCT study in writing, and therefore it can also serve as inspiration during the planning of such a study.

Ammenwerth E, Eichstädter R, Haux R, Pohl U, Rebel S, Ziegler S. A randomized evaluation of a computer-based nursing documentation system. Methods Inf Med 2001;40:61-8.

The authors have chosen a design using a carryover effect between the study group and the control group to get a good frame of reference for the comparison. See also the discussion in Part III, Section 11.1.6.5.

Assman SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet 2000;355:1064-9.

Biermann E, Dietrich W, Rihl J, Standl E. Are there time and cost savings by using telemanagement for patients on intensified insulin therapy? A randomized, controlled trial. Comput Methods Programs Biomed 2002;69:137-46.

This case illustrates how difficult it is to design a study to obtain truly comparable groups – one with, the other without, IT support. Without comparable procedures for the control group and the study group it is not possible to express what causes the potential (lack of) effect. At the same time, the article shows how difficult it is to unambiguously describe similarities and differences in the procedures of the two groups.

Brookes ST, Whitney E, Peters TJ, Mulheran PA, Egger M, Davey Smith G. Subgroup analyses in randomized controlled trials: quantifying the risks of false-positives and false-negatives. Health Technol Assess 2001;5(33).

Analyzes and discusses pitfalls in subgroup analysis when using RCT.

Chuang J-H, Hripcsak G, Jenders RA. Considering clustering: a methodological review of clinical decision support system studies. Proc Annu Symp Comput Appl Med Care 2000:146-50.

Review of RCT studies for decision-support systems and expert systems.

Friedman CP, Wyatt JC. Evaluation methods in medical informatics. New York: Springer-Verlag; 1996.

A good book that also discusses the role of different approaches to evaluation.

Gluud LL. Bias in intervention research, methodological studies of systematic errors in randomized trials and observational studies [Doctoral dissertation]. Faculty of Health Sciences, University of Copenhagen; 2005. ISBN 87-990924-0-9.

A valuable discussion of different types of biases and their implications in RCTs and other intervention approaches.

Hetlevik I, Holmen J, Krüger O, Kristensen P, Iversen H, Furuseth K. Implementing clinical guidelines in the treatment of diabetes mellitus in general practice; evaluation of effort, process, and patient outcome related to implementation of a computer-based decision support system. Int J Technol Assess Health Care 2000;16(1):210-27.

A thorough case study, randomizing the practices involved; but note that with their procedure the control group remains completely untreated, while along the way the intervention group gets an IT system, training, and follow-up in several different ways. In other words, it is not just the impact of the decision-support system that is measured.

Kuperman GJ, Teich JM, Tanasijevic MJ, Ma'luf N, Rittenberg E, Jha A, Fiskio J, Winkelman J, Bates DW. Improving response to critical laboratory results with automation: results of a randomised controlled trial. JAMIA 1999;6(6):512-22.

The same team and just as good a study as in (Shojania et al. 1998), but in this study the authors do not have the same co-intervention problem.

Marcelo A, Fontelo P, Farolan M, Cualing H. Effect of image compression on telepathology, a randomized clinical trial. In: Haux R, Kulikowski C, editors. Yearbook of Medical Informatics 2002:410-3.

A double-blind RCT case study.

Rotman BL, Sullivan AN, McDonald T, Brown BW, DeSmedt P, Goodnature D, Higgins M, Suermondt HJ, Young YC, Owens DK. A randomized evaluation of a computer-based physician's workstation: design considerations and baseline results. In: Gardner RM, editor. Proc Annu Symp Comput Appl Med Care 1995:693-7. (See under the next reference.)

Rotman BL, Sullivan AN, McDonald TW, Brown BW, DeSmedt P, Goodnature D, Higgins MC, Suermondt HJ, Young C, Owens DK. A randomized controlled trial of a computer-based physician workstation in an outpatient setting: implementation barriers to outcome evaluation. JAMIA 1996;3:340-8.

An RCT combined with a before-and-after design to measure user satisfaction and costs of medication as well as compliance with the recommendations concerning drug substitution. The two studies referenced from this group show the design considerations and the execution of an RCT, respectively.

See Tai S, Nazareth I, Donegan C, Haines A. Evaluation of general practice computer templates, lessons from a pilot randomized controlled trial. Methods Inf Med 1999;38:177-81.

The study uses an elegant way to handle control group problems. See also the discussion in Part III, Section 11.1.6.1.

Shea S, DuMouchel W, Bahamonde L. A meta-analysis of 16 randomized controlled trials to evaluate computer-based clinical reminder systems for preventive care in the ambulatory setting. JAMIA 1996;3(6):399-409.

The article contains a meta-analysis of RCTs, and thus it can also be used to identify a number of case studies for inspiration.

Shojania KG, Yokoe D, Platt R, Fiskio J, Ma'luf N, Bates DW. Reducing Vancomycin use utilizing a computer guideline: results of a randomized controlled trial. JAMIA 1998;5(6):554-62.

Shows a well-executed RCT case study despite having a problem with co-intervention (see Part III, Section 11.1.6.3).

Tierney WM, Miller ME, Overhage JM, McDonald CJ. Physician inpatient order writing on microcomputer workstations. JAMA 1993;269:379-83 (reprinted in: Yearbook of Medical Informatics 1994; 1994:208-12).

A thorough case study, but with the usual problem of making the control group and the intervention group comparable.

Weir R, Stewart L, Browne G, Roberts J, Gafni A, Easton S, Seymour L. The efficacy and effectiveness of process consultation in improving staff morale and absenteeism. Med Care 1997;35(4):334-53.

Another well-executed RCT case study with existing, validated questionnaires. But this one, too, has problems with differences between the two groups.


van Wijk MAM, van der Lei J, Mosseveld M, Bohnen AM, van Bemmel JH. Assessment of decision support for blood test ordering in primary care, a randomized trial. Ann Intern Med 2001;134:274-81.

A case study.

Wyatt JC, Wyatt SM. When and how to evaluate health information systems? Int J Med Inform 2003;69:251-9.

Outlines the differences between the simple before-and-after and the controlled before-and-after studies as opposed to the RCTs.


Requirements Assessment

Areas of Application

Within the European culture the User Requirements Specification forms the basis for the choice and purchase of an IT-based solution or for entering into a development project. Consequently, the User Requirements Specification is a highly significant legal document, which needs thorough assessment.

Description

When assessing a requirements specification, it is important that it include a description of (see Brender 1997a and 1999):

1. The user organization's needs
2. The conditions under which the organization functions (including its mandate, limitations, and organizational culture)
3. The strategic objective of the organization
4. The value norms of future organizational development
5. Whether the functionality can be made to adapt to the work procedures in the organization, or the reverse

Furthermore, there are some overriding general issues of importance:

• Relevance: Assessment of whether the solution in question or a combination of solutions is at all able to solve the current problems and meet the demands and requirements of the organization.

• Problem Areas: Where are the weaknesses and the elements of risk in the model solution? For instance, an off-the-shelf product may have to be chosen because the old IT system is so unreliable that it is not possible to wait for a development project to be carried out. Or plans may be based on a given operational situation although a lack of know-how would occur should certain employees give notice.

• Feasibility: Does the organization have the resources needed to implement the chosen solution (structurally in the organization, in terms of competence, and financially, for example), as well as the support of management, staff, and politicians?

• Completeness and Consistency: Is the solution a coherent entity that is neither over- nor undersized?

• Verifiability (or Testability): One must consider how to check that every small requirement and function in the model solution has been fulfilled once the complete system is implemented and ready to be put to use (see the sketch after this list).

• Elements of Risk: Are there any external conditions outside organizational control that would involve a substantial risk to the project should they occur? This could, for instance, be dependence on a technology that is not yet fully developed, dependence on (the establishment or functional level of) the parent organization's technical or organizational infrastructure that has to fall into place first, or dependence on coordination and integration with other projects. See also Risk Assessment.
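To make the Verifiability point concrete, the sketch below shows a minimal requirements-to-test traceability check; the requirement identifiers and test outcomes are invented for illustration and are not prescribed by any standard mentioned here.

```python
# Map each requirement in the User Requirements Specification to the
# recorded outcomes of the tests meant to verify it.
requirements = ["REQ-01 order entry", "REQ-02 result review", "REQ-03 audit log"]
test_results = {
    "REQ-01 order entry":   ["passed"],
    "REQ-02 result review": ["failed", "passed"],
    # REQ-03 has no test yet, so it is not verifiable as things stand.
}

for req in requirements:
    outcomes = test_results.get(req, [])
    if not outcomes:
        print(f"{req}: NOT TESTED - not verifiable as specified")
    elif "failed" in outcomes:
        print(f"{req}: tested, but at least one test failing")
    else:
        print(f"{req}: verified")
```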

There are at least as many ways to assess a requirements specification as there are methods to produce it, and the flexibility is nearly as broad as the formalism is limited. Formal methods of formulating requirements specifications are primarily the ones that have formal methods of verification. Apart from those, assessment of a requirements specification is usually carried out informally, but covering the aspects mentioned above.

A couple of formal methods are:

1. Formal specification methods have their own tools to formulate the requirements specification, often IT-based, and they have their own verification techniques, such as consistency control. These methods are mainly used to specify technical systems and are only occasionally suited for application by the users themselves.

2. Prototyping methods are based on repeated assessment workshops with user scenarios from real life; see, for example, the study in (Nowlan 1994). The scenarios are tested on a prototype, which gradually evolves from one workshop to the next (spiral development).

Instead, a number of standards contain recommendations on how to prepare a requirements specification, what it should include as a minimum, and what should be taken into consideration. These standards are therefore valuable sources of inspiration or checklists for assessing a requirements specification. See under Standards.

Assumptions for Application

Assessment methods for requirements specifications depend entirely on the method used to prepare the specification.


Perspectives

It is necessary to be quite clear about where in the organization the most serious problems occur, (1) because you cannot solve a problem simply by introducing a new IT system, and (2) because such weaknesses can be quite disastrous for the implementation work or for achieving the intended benefit.

It is the author's opinion that (1) ordinary users in the healthcare sector do not normally have a background enabling them to use formal methods to produce a requirements specification, because they cannot separate the details from the entirety and the entirety from the details, nor do they have experience in formulating the complexity; (2) it is not always enough to have IT consultants assist in this particular task; and, therefore, (3) the users should apply their own premises and express themselves in their own language (Brender 1989). The case analyzed dealt with the development of the first large IT-based solution for the organization. The longer an organization has had IT, the better able it will be to participate on technological premises – that is, to use slightly more formal methods to formulate requirements specifications, because its existing IT solution can be used as a kind of checklist.

Frame of Reference for Interpretation

The frame of reference for assessing a requirements specification is the whole of the organization (structure, technology, actors, work procedures, ...) and its needs and premises, including the conditions and objectives for acquiring an IT-based solution.

Perils and Pitfalls

The pitfalls for assessing requirements specifications that have been developed incrementally as a prototype (see number 2 above) are used here to illustrate where things may go wrong, such as:

• Representativeness of real operational details. Are all the exceptions included, or can they be dispensed with? If you work with prototyping, you risk that a number of details are not included until the end.

• Representativeness of real operational variations. The rules and procedures the organization has prescribed differ from how they are carried out in daily practice – during the process of formulating the requirements specification the users run the risk, subconsciously, of switching between the prescribed procedures and the actual ones.


• Representativeness of the context of operations. There are differences between how an activity is carried out when you have peace and quiet and nothing else to do and when, for instance, in clinical practice your activities are constantly being interrupted and you are forced to leave a problem and return to it at a later stage. This and similar domain circumstances need to be taken into consideration in a requirements specification.

• Representativeness of all the stakeholder groups that are actually involved in the assessment activities. This is particularly difficult (or rather, costly) in an organization with a high degree of specialization.

Advice and Comments

See also under Standards.

References

Brender J. Quality assurance and validation of large information systems – as viewed from the user perspective [Master thesis, computer science]. Copenhagen: Copenhagen University; 1989. Report No.: 89-1-22.

A résumé of the method can be found in (Brender 1997a), or it can be obtained from the author.

Brender J. Methodology for assessment of medical IT-based systems – in an organisational context. Amsterdam: IOS Press, Stud Health Technol Inform 1997;42.

Brender J. Methodology for constructive assessment of IT-based systems in an organisational context. Int J Med Inform 1999;56:67-86.

This is a shortened version of the previous reference and more accessible with regard to this subject.

Nowlan WA. Clinical workstations: identifying clinical requirements and understanding clinical information. Int J Biomed Comput 1994;34:85-94.

Supplementary Reading

Bevan N. Cost effective user centred design. London: Serco Ltd.; 2000. (Available from: http://www.usability.serco.com/trump/. The website was last visited on 31.05.2005.)

This report, originating from a large EU telematics project, contains advice and guidelines for the formulation of requirements, the measurement of usability, and so forth, in relation to Usability.


This little technical report also contains a lot of literature references and links to websites on the subject of Usability.


Risk Assessment

Areas of Application

Identification and subsequent monitoring of risk factors in a development or assessment project, to make it possible to take preemptive action.

Description

Risk is defined as "the possibility of loss" (Hall 1998), and a risk factor is a variable controlling the likelihood of such a loss. In other words, a risk is an aspect of a development, assessment, or operational project which, should it be brought to bear, will lead to an undesirable condition. There must, however, be a reasonable likelihood for such a condition to occur before it is called a risk. Therefore, risk assessment, together with the matching risk control, is a constructive project management tool.

Risk assessment, risk management, and risk control can be handled either retrospectively or prospectively. Retrospective risk control simply means that one keeps an eye on things, and the moment an element of danger occurs in an activity/project, an assessment and identification of the factors involved are carried out, and a plan for resolving the situation is instigated. In prospective risk control, all dependencies of a project and all aspects that are not entirely under control are identified from the start. Thereafter, measures for all of them are defined, and ongoing monitoring and assessment are carried out.

Risk assessment can be carried out either ad hoc or formally – the latter certainly giving the better result. But even formal techniques may not require a large methodical foundation and may be carried out by means of consequence analysis supplemented by weighting principles – for instance, based on probability, implications of actual occurrences (such as 'medical' side effects), chance of timely realization, resources needed to compensate for or rectify the unwanted situation should it actually occur, and so on. If this semiformal level is insufficient, help should be sought in the original literature.
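As a purely illustrative example of such a semiformal consequence analysis, the sketch below scores a few invented risk factors as probability times impact; both the factors and the weights are hypothetical.

```python
# Hypothetical risk register: each factor gets a probability (0-1) and an
# impact weight (1 = minor ... 5 = severe); their product gives a crude
# priority score for ongoing monitoring.
risk_factors = [
    ("Vendor delivers the integration interface late", 0.4, 4),
    ("Key clinical champion leaves the project",       0.2, 5),
    ("Parent organization's network upgrade slips",    0.5, 3),
]

scored = [(name, probability * impact) for name, probability, impact in risk_factors]
for name, score in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"{score:.1f}  {name}")
```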

Assumptions for Application

Special experience is required only for strictly formal risk assessments of large or complex projects.

Perspectives

Risks will typically occur – but not solely – at the interfaces between persons, organizations, and activities, and will usually arise in their input/output relationships. This also includes the time aspects, particularly within the political and administrative decision process. This is because external parties do not have the same commitment and motivation, and because they do not depend on a given decision or product and thus do not have the same impetus for a (timely) solution as does the project itself.

Frame of Reference for Interpretation

The frame of reference for a development project is its objective, including its future plans for progress and its resource requirements.

Perils and Pitfalls

A bias occurs where there is unease about revealing specific risk factors long before these have been realized. There might, for instance, be political, psychological, or other tactical reasons to omit monitoring specific circumstances.

Advice and Comments

Integration of prospective risk control, if required, can be done by establishing monitoring points in terms of indicators (measures) of the risk factors and, if possible, by defining milestones and deadlines in a project's contractual basis. One way of handling this is described under the Logical Framework Approach (see the relevant section), where risks are explicitly handled by means of the concept of external factors and corresponding measures, which are identified and evaluated for all planned activities prior to the commencement of the project.

186

Brender, McNair, Jytte, and Jytte Brender. Handbook of Evaluation Methods for Health Informatics, Elsevier Science & Technology, 2006. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/waldenu/detail.action?docID=306691. Created from waldenu on 2022-03-06 01:57:27.

C o p yr

ig h t ©

2 0 0 6 . E

ls e vi

e r

S ci

e n ce

& T

e ch

n o lo

g y.

A ll

ri g h ts

r e

se rv

e d .

IdANDBOOt< OF {]VALUATION METHODS

There is a lot of literature on this area, also concerning IT projects, so go ahead and search for precisely that which suits you best.

References

Hall EM. Managing Risk: Methods for software systems development. Reading: Addison-Wesley; 1998.


Root Causes Analysis

Areas of Application

Exploration of what, how, and why a given incident occurred, in order to identify the root causes of undesirable events.

Description

Root Causes Analysis is a family of methods applying a sequence of component methods: (1) schematic representation of the incident sequence and its contributing conditions, (2) identification of critical events or active failures or conditions in the incident sequence, and (3) systematic investigation of the management and organizational factors that allowed the active failures to occur (Livingston et al. 2001). The reference mentioned provides an exhaustive review of the literature and of case studies applying Root Causes Analysis.
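A minimal sketch of the kind of structure behind steps (1) and (2): an invented incident with its contributing conditions, traversed backward to conditions without recorded antecedents (the candidate root causes). The events are hypothetical, and step (3) – investigating the management and organizational factors – remains a human task.

```python
# Each event maps to the conditions that allowed it to occur; conditions
# without further antecedents are candidate root causes.
contributing = {
    "wrong drug dose administered": ["alert overridden", "dose field defaulted"],
    "alert overridden": ["alert fatigue from frequent false alarms"],
    "dose field defaulted": ["screen layout hides the unit of measure"],
}

def root_causes(event, graph):
    """Walk the incident graph from an active failure back to conditions
    with no recorded antecedents (candidate root causes)."""
    antecedents = graph.get(event, [])
    if not antecedents:
        return [event]
    roots = []
    for condition in antecedents:
        roots.extend(root_causes(condition, graph))
    return roots

print(root_causes("wrong drug dose administered", contributing))
# ['alert fatigue from frequent false alarms', 'screen layout hides the unit of measure']
```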

Assumptions for Application

Perspectives

The highly structured and prescriptive approach will aid domain people in performing the investigation themselves, which is necessary given the need for domain insight to get hold of the propagation of root causes in an organization.

Frame of Reference for Interpretation

(Not applicable)

Perils and Pitfalls

The issue is to get to the root cause of a problem rather than to obtain a plausible explanation. The point is that, in order to be able to prevent similar incidents in the future, one has to get hold of the root cause before proper action can be taken; otherwise the incident will recur.

A d v i c e and C o m m e n t s This method may be valuable in combination with Continuous Quality Improvement strategy for quality management.

References

Livingston AD, Jackson G, Priestly K. Root causes analysis: literature review. Health & Safety Executive Contract Research Report 325/2001. (Available from: www.hse.gov.uk/research/crr_pdf/2001/crr01325.pdf. Last visited 15.05.2005.)


Social Networks Analysis

Areas of Application

Assessment of relationships between elements within an organization (such as individuals, professions, departments, or other organizations) that influence the acceptance and use of an IT-based solution. This can be used to identify key persons for success and opinion makers in an organization.

Description

Social network analysis is carried out by means of diagramming techniques, which vary a little depending on the actual study (Rice and Anderson 1994; Anderson 2002). They all describe networks, with the nodes being the actors (for instance, an individual, a profession, a department, or a project ...) and the relationships being drawn as named arrows between the nodes. The type of relationship could, for instance, be communication, state of competence, or economy. The relationships are described by a number of characteristics, such as frequency, type of relationship, level or strength of the interaction, and so on. Finally, the data collected are analyzed by means of several techniques that illustrate the relationships.
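A minimal sketch of such a network representation, assuming the third-party networkx library is available; the actors, the edge attributes, and the use of degree centrality to flag possible key persons are illustrative choices, not prescriptions from the method description.

```python
import networkx as nx

# Actors are nodes; named relationships with attributes (type, frequency)
# are edges, mirroring the diagramming techniques described above.
G = nx.Graph()
G.add_edge("Head nurse", "Ward physician", type="communication", frequency="daily")
G.add_edge("Head nurse", "IT coordinator", type="communication", frequency="weekly")
G.add_edge("Head nurse", "Secretary", type="communication", frequency="daily")
G.add_edge("Ward physician", "Laboratory", type="communication", frequency="daily")
G.add_edge("IT coordinator", "Laboratory", type="competence", frequency="monthly")

# Degree centrality is one simple indicator of likely key persons/opinion makers.
for actor, score in sorted(nx.degree_centrality(G).items(),
                           key=lambda item: item[1], reverse=True):
    print(f"{actor}: {score:.2f}")
```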

Assumptions for Application

Some experience with the techniques used is needed.

Perspectives

Attitudes toward information technology and its use are strongly influenced by the relationships between the individuals that form part of an organization (Rice and Anderson 1994; Anderson 2002).


Frame of Reference for Interpretation

(Not applicable)

Perils and Pitfalls

The pitfalls are the same as for diagramming techniques in general (see under Work Procedure Analysis): one must judge the suitability of the specific modeling technique for answering one's information needs, and to what degree variations (and exceptions) can or should be included.

Advice and Comments

This method may be used to carry out stakeholder analysis, but it may also be used as a basis for uncovering what happens in an organization and why, and for identifying individual or professional interfaces within or between different activities in a work procedure (Rice and Anderson 1994). This is particularly important, as both internal and external interfaces in an organization constitute points of risk where, for instance, the execution or transfer of a task may fail.

The reference (Rice and Anderson 1994) contains a vast number of references to descriptions of methods and their use.

References

Anderson JG. Evaluation in health informatics: social network analysis. Comput Biol Med 2002;32:179-93.

Rice RE, Anderson JG. Social networks and health care information systems, a structural approach to evaluation. In: Anderson JG, Aydin CE, Jay SJ, editors. Evaluating health care information systems, methods and applications. Thousand Oaks: Sage Publications; 1994. p. 135-63.


Stakeholder Analysis

Areas of Application

Assessment of stakeholder features and their inner dynamics, aiming to identify participants for the completion of a given task, a problem-solving activity, or a project.

Description

An in-depth stakeholder analysis covers a whole range of considerations, including:

1. Analysis of the rationale behind undertaking a stakeholder analysis: What are the value norms or the motivation behind a stakeholder analysis (legislative, moral, ethical, labor-related, etc.)? What difference would it make if one stakeholder group were not involved? And what potential risks would it entail if they did not get involved?

2. Analysis of which definitive characteristics of a stakeholder group determine its influence on the decision makers, including the elucidation of the stakeholders' own expectations.

3. Observation of the stakeholder groups' internal and mutual dynamics (including organizational culture, power structure, control mechanisms, etc.) and their influence on the decision-making processes as a function of internal and external factors.

4. Selection of participants and participating stakeholder groups by optimizing factors ('performance management indicators') that fulfill the policy of the organization – that is, beyond those of legal consideration – such as the principles of user involvement, democracy or other principles of justice, motivational principles, know-how principles, minimizing risks by involving leading figures and spokespersons, and so on (see the sketch below).
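As a purely illustrative aid to point 4, the sketch below scores hypothetical stakeholder groups on influence over the decision makers and interest in the outcome – a common prioritization heuristic, not a step prescribed by the method description.

```python
# Invented scores (1-5) for influence over the decision makers and for
# interest in the outcome; the product gives a crude ranking.
stakeholders = {
    "Clinicians":          {"influence": 4, "interest": 5},
    "Hospital management": {"influence": 5, "interest": 3},
    "IT department":       {"influence": 3, "interest": 4},
    "Patients":            {"influence": 2, "interest": 5},
}

ranked = sorted(stakeholders.items(),
                key=lambda item: item[1]["influence"] * item[1]["interest"],
                reverse=True)
for name, s in ranked:
    strategy = "involve closely" if s["influence"] >= 4 else "keep informed and consulted"
    print(f"{name}: influence={s['influence']}, interest={s['interest']} -> {strategy}")
```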


Assumptions for Application

The task of formal stakeholder analysis is not something that just anybody can undertake, as it requires some prior knowledge of organizational theories and administrative law. In some situations an intuitive analysis, or one carried out with the aforementioned aspects in mind, may suffice. In other words, unless it is quite a new type of task, or a very large investigation involving several organizations (particularly if they are not known beforehand), or there is a risk of resistance severely compromising the course of the project or its outcome, there is no reason for the investigation to be as explicit and thorough as indicated in the description. If that is the case, most people can carry out a stakeholder analysis.

Perspectives

A frequently quoted definition of a stakeholder within the discipline of stakeholder analysis is Friedman's (quoted in Simmonds and Lovegrove 2002): "any group or individual who can affect or is affected by the achievement of the organization's objectives". From this the perspective is derived that anybody who is influenced by a concrete solution should also be involved in the preparation of the solution model.

More often than not, a number of conditions making allowances for the different stakeholder interests are implicitly or explicitly incorporated into the organizational culture. This means that precedents from earlier projects, circulars, and so on will implicitly make allowances for who is (officially) accepted as a stakeholder.

Principles of involvement of stakeholders in a problem-solving context are highly dependent on the culture (see the discussion of the concept of perspective in the introductory part, Section 2.4). One of the differences is to be found in the interpretation of the two English notions of 'decision maker' and 'decision taker', as seen in their extremes: 'decision making' reflects that there is a process between the stakeholders involved in various ways, whereby a basis for a decision is created and a decision made. Conversely, 'decision taking' represents the attitude that somebody has the competence to make decisions single-handedly. In some cultures (like the Asian culture, for instance) there is no discussion about it being the leader who decides – and he or she is presumed to be knowledgeable about issues of relevance to the decision. Consequently, a stakeholder analysis will be superfluous in these cultures, and it should be noted that before using the Stakeholder Analysis method it must be adapted to the organization's own perspective of stakeholder participation in a given problem-solving context.

Frame of Reference for Interpretation
(Not applicable)

Perils and Pitfalls
See the discussion under Perspectives: if the wrong perspective is used in an actual stakeholder analysis, you run the risk of taking account of the wrong factors for solving the problem or for organizing the project; in the worst case, the result is internal strife in the organization or an incomplete decision-making foundation.

Advice and Comments
The Logical Framework Approach or Social Network Analysis can be inspirational when a formal analysis is not required.

Also see the Stakeholder Assessment method in Krogstrup (2003), where all stakeholder groups get involved in a given assessment activity, and the result is subsequently summarized and negotiated. The same reference summarizes the Deliberative Democratic Assessment method, which also involves all the stakeholder groups. The latter focuses on identification of value norms, and the questions asked are extremely relevant in deciding who should be involved in an assessment activity (or any sort of task) when not everybody can take part.

References

Krogstrup HK. Evalueringsmodeller. Århus: Systime; 2003. (in Danish)
This little informative book from the social sector gives quite a good overview of the possibilities and limitations of a couple of known assessment models.

Simmonds J, Lovegrove I. Negotiating a research method's 'conceptual terrain': lessons from a stakeholder analysis perspective on performance appraisal in universities and colleges. In: Remenyi D, editor. Proceedings of the European Conference on Research Methodology for Business and Management Studies; 2002 Apr; Reading, UK. Reading: MCIL; 2002. p. 363-73.

The article is a theoretical debate and review, making it difficult to grasp.


Supplementary Reading

Eason K, Olphert W. Early evaluation of the organisational implications of CSCW systems. In: Thomas P, editor. CSCW requirements and evaluation. London: Springer; 1996. p. 75-89.

Provides an arbitrary rating approach to assess cost and benefit for the different user groups and stakeholders.


SWOT

Areas of Application

• Situation analysis: The SWOT method is intended for establishing a holistic view of a situation or a model solution as objectively as is feasible.

The method was originally developed to evaluate business proposals and process re-engineering effects, helping to identify strategic issues and to formulate a strategy.

Description
SWOT is an acronym for 'Strengths, Weaknesses, Opportunities, and Threats'. For most purposes these four concepts can be used in their common sense: 'Weaknesses' are characteristics of the object of the study, usually ongoing and internal, that will hinder development in the desired direction. 'Threats' are risks that lurk but do not necessarily happen and over which you do not have any control; there should be a certain probability that they may happen before you choose to include them. They are usually found at the internal or external interfaces of the organization (see also under Risk Assessment). 'Strengths' are interpreted correspondingly as internal assets, while 'Opportunities' are assumptions about possibilities or options that might alleviate some of the problems arising during implementation of the actual model solution.

The SWOT method is intended to identify and analyze the four aspects mentioned for an object of study, based on a combination of facts, assumptions, and opinions (the description should indicate which information is of which type). The object of the study could, for instance, be a decision-making situation where the choice between alternative solutions has to be made, or an introduction to a discussion on problem solving in a deadlocked situation. For a review see Dyson (2004).


A SWOT analysis can be carried out in a simple way or with increasing formality, depending on the size of the topic and its complexity. A detailed SWOT analysis should, for instance, include the points below (a minimal sketch of recording such an analysis follows the list):

• The probability that the opportunities can be realized and utilized
• The possibility to eliminate weaknesses and exploit strengths
• The probability that a given risk (threat) will occur and, if so, what the consequences will be
• The possibility of ongoing monitoring in order to detect and identify risks in time
• The possibility to compensate for a given risk should it become threatening
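By way of illustration, the sketch below shows one hypothetical way of recording such a detailed analysis so that threats can be ranked by expected impact; the record fields and the probability-times-consequence ranking are assumptions for the example, not part of the SWOT method as described above.

    from dataclasses import dataclass

    @dataclass
    class SwotItem:
        category: str       # "strength" | "weakness" | "opportunity" | "threat"
        description: str
        probability: float  # chance the item materializes (0..1)
        consequence: int    # severity or benefit on a 1..5 scale

        def expected_impact(self) -> float:
            # Simple expected-value ranking: probability times consequence.
            return self.probability * self.consequence

    register = [
        SwotItem("threat", "key clinicians resist the new work procedures", 0.4, 5),
        SwotItem("threat", "vendor delivery slips past the go-live date", 0.2, 4),
        SwotItem("opportunity", "reuse of the regional terminology server", 0.7, 3),
    ]

    # Rank threats so that monitoring and compensation effort goes to the
    # worst ones first.
    threats = sorted((i for i in register if i.category == "threat"),
                     key=SwotItem.expected_impact, reverse=True)
    for t in threats:
        print(f"{t.expected_impact():.1f}  {t.description}")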

Assumptions for Application

Perspectives
The SWOT method is an incredibly simple and yet useful tool whose primary function is that of a framework, ensuring the preparation of an elaborate picture of any given situation.

Frame of Reference for Interpretation
(Not applicable)

Perils and Pitfalls
By nature the method is subjective. Therefore, should an organization have strongly conservative forces or groups of employees who are afraid of change, a given aspect could move from one category to another. For example, some could perceive the introduction of IT in an organization as a threat because it could entail rationalization and redundancies, while to management it can be an opportunity to solve a bottleneck problem in the organization.

The method can be misused if it is not applied in an unbiased way; hence, group work is recommended. What might happen is that the balance between the aspects included is (sub)consciously skewed or that important aspects are left out.


Advice and Comments
The method is useful as a brainstorming method to elucidate and evaluate each of the aspects brought forward.

References

Dyson RG. Strategic development and SWOT analysis at the University of Warwick. Eur J Operational Res 2004; 152(3):631-40.

Supplementary Reading

Balamuralikrishna R, Dugger JC. SWOT analysis: a management tool for initiating new programs in vocational schools. J Vocational Technical Education 1995;12(1). (Available from: http://scholar.lib.vt.edu/. Last visited 10.05.2004.)

An example of its use as an illustration.

Jackson SE, Joshi A, Erhardt L. Recent research on team and organizational diversity: SWOT analysis and implications. J Manag 2003;29(6):801-30.

An extensive literature appraisal applying the SWOT method for the analysis of the literature.


Technical Verification

Areas of Application
Verification that the agreed functions are present, work correctly, and are in compliance with the agreement.

The purpose of technical verification is to ensure that the management of the organization can and will take administrative responsibility for the operation of an IT-based system and for its impact on the quality of work in the organization. This should not be understood as an IT department's responsibility for the technical operation of the IT system, but as users applying the system in their daily operations, such as their clinical work. This is the case:

• In connection with acceptance tests at delivery of an IT system or a part-delivery
• Prior to taking the system into daily operation and before all subsequent changes to the IT system (releases, versions, and patches)

Description
With the exception of off-the-shelf IT products (see under Perspectives below), technical verification is carried out by checking the delivery, screen by screen, field by field, interface by interface, function by function, and so on, to see whether everything is complete, correct, consistent, and coherent. For each of the agreed modular functionalities, the following questions are assessed:

1. Re.: Completeness of the functionality
• Is everything that was promised included?
• Has it been possible to verify all aspects of the functionality supplied, and thereby all or most possibilities of data input, including exceptions? Or has it been impossible due to errors or shortfalls in the product delivered?
• To what degree has it been possible for the users to simulate normal daily activities?
• To what degree does the functionality supplied comply with the expected functionality?
• How well does the system's functionality fulfill the functionality described in the contract – that is, the actual requirements in the contract?


2. Re.: Correctness of the functionality
• Does the system work as it should? That is, can the users work with data that have a logical similarity to normal production and function?
• And does it all happen correctly?
3. Re.: Coherence of the functionality
• When data depend on each other (some data are only valid when other given data have actual values): does the system control these internal constraints and ensure that data are correct?
• Or does the user have to ensure this manually?
4. Re.: Consistency of the functionality
• Are data being duplicated, running the risk of getting different instances of the same information?
• And has this actually been observed happening? This normally only happens when several independent IT systems are put together to make up one IT-based solution.
5. Re.: Interconnectivity of the functionality: the technical (syntactic) aspects of communication between several IT-based systems, such as communication of patient data from a patient-administration system or a hospital information system to a laboratory system, and communication with systems dedicated to controlling user-access criteria.
• Is the right (complete and correct) information received?
6. Re.: Interoperability of the functionality: the semantic aspects of the interaction between several IT-based systems. One example is that the change of a patient's hospital address or status in a patient administration system has to be communicated to other systems that depend on this information (see further details under Measures and Metrics). Such changes are not necessarily updated immediately in the interconnected systems, which has a big impact on the perceived functionality in a ward, irrespective of how correctly an IT system seems to work. Coordination of updated data between various systems can easily go wrong; one reason could be that, for capacity reasons, the system developers store this type of information in a queue that is only emptied every five minutes, for instance. This could be significant under some acute conditions (a small sketch following this checklist illustrates the delay).
7. Re.: Reliability:
• How often does the system go down, and what are the consequences?

"200

Brender, McNair, Jytte, and Jytte Brender. Handbook of Evaluation Methods for Health Informatics, Elsevier Science & Technology, 2006. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/waldenu/detail.action?docID=306691. Created from waldenu on 2022-03-06 01:57:27.

C o p yr

ig h t ©

2 0 0 6 . E

ls e vi

e r

S ci

e n ce

& T

e ch

n o lo

g y.

A ll

ri g h ts

r e

se rv

e d .

HANDE~OOI< OF EVALUATION J'vlETNODS

• How does the system handle error situations, and what interventions are necessary to re-create data or reestablish operations when this happens?
• What happens when an error situation is created on purpose – during simulated breakdowns of parts of the network, for instance?
• What happens at monkey tests or at a database locking?
8. Re.: Performance: response time and throughput time for transactions on the system, as well as capacity. See also under Measures and Metrics.

What types of problems are users likely to run into? And how do they relate to the contract?
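The queuing concern raised under interoperability (point 6 above) can be made concrete with a small simulation; the five-minute flush interval mirrors the example in the text, while the event times are invented.

    # Sketch: worst-case staleness when updates are queued and only flushed
    # at a fixed interval (here 300 s, the five-minute example above).
    FLUSH_INTERVAL = 300  # seconds

    def visible_at(event_time: int) -> int:
        """Time at which an update made at event_time reaches the other system."""
        # The queue is emptied at t = 300, 600, ...; an update waits for the
        # first flush after it was made.
        return ((event_time // FLUSH_INTERVAL) + 1) * FLUSH_INTERVAL

    # A patient transfer registered just after a flush waits almost the
    # full interval before the receiving system sees it.
    for t in (10, 150, 299):
        print(f"update at {t:3d}s becomes visible at {visible_at(t)}s "
              f"(staleness {visible_at(t) - t}s)")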

See also many of the concepts under Measures and Metrics.
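For bookkeeping during an acceptance test, it may help to track the outcome question by question; the sketch below is one hypothetical way to do so and to summarize outcomes per aspect. The item texts and statuses are invented, and the structure is not a prescribed part of the method.

    from collections import Counter

    # Each entry: (aspect, question, status); status is "pass", "fail", or
    # "blocked" (could not be tested due to errors in the delivery).
    results = [
        ("completeness", "all promised functions present",  "pass"),
        ("completeness", "exception inputs verifiable",     "blocked"),
        ("correctness",  "data entry behaves as agreed",    "pass"),
        ("consistency",  "no duplicated patient data",      "fail"),
        ("reliability",  "recovery after forced breakdown", "fail"),
    ]

    # Outcome counts per aspect give a quick overview for management.
    by_aspect = Counter((aspect, status) for aspect, _q, status in results)
    print("per-aspect outcomes:", dict(by_aspect))

    # Everything that is not a clean pass goes back to the vendor.
    print("items to raise with the vendor:",
          [q for _a, q, s in results if s != "pass"])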

Assumptions for Application
It is a prerequisite that each of the requirements of the contract is operational or can be rendered operational without problems.

The method requires previous experience in planning and implementing such investigations, as well as punctiliousness with regard to details; it is an advantage to have tried it before. Should one want an in-depth technical verification, it could be an advantage to seek help with the more complex issues – for instance, to ensure a systematic verification of the interoperability.

Perspectives
In a contractual relationship with regard to the supply of a system, you cannot expect more than what is stipulated in the agreement between the customer and the vendor. Technical verification serves the purpose of assessing whether the contract has been fulfilled; thus, the contract is an indispensable frame of reference for technical verification. This does not necessarily mean that the system is appropriate, even if it has been verified with a successful outcome, or that it works properly, should the contract not make sure of this. Remember, it is not easy to formulate a good requirements specification. Management must then decide whether the system is good enough for it to undertake administrative responsibility for the system in operation. If not, it is necessary to take the consequences and say 'Stop', regardless of whether the fault lies with the vendor or the client, and thereafter to find out what the next step should be.


It makes no difference whether the vendor of the IT-based system is internal or external – that is, whether it is an independent company or the IT department of the hospital or the region.

Technical verification of off-the-shelf products, such as Microsoft Word for word processing, has a different purpose if the organization chooses to carry it out: for instance, to get acquainted with the details of the system's functions in order to establish new and changed work processes. Whether the product is used for its intended purpose and whether management undertakes responsibility for its introduction are questions that must be addressed before its purchase.

Frame of Reference for Interpretation
The frame of reference is the contract – or a similar agreement with attachments.

Perils and Pitfalls
The most common pitfall is the lack of testing of specific conditions, which then show up during day-to-day use. The possible combinations, the aspects of capacity in its widest sense, and the interoperability are particularly difficult to test – also for the vendor. Furthermore, not all vendors have sufficiently detailed insight into the application domain to be able to test from a real-life perspective, implying that the focus and effort put into technical verification should depend on the number of similar reference sites of the IT system in question. In other words, even if the vendor has carried out a thorough test, undetected errors and omissions may surface when new people examine the same system with different data.

Advice and Comments
In practice it is impossible to physically test all fields, types of data, and functions of nontrivial IT systems in all their relevant and irrelevant combinations. It is important to make this limitation explicit in an accreditation or certification situation (see the separate description in Chapter 8).

"202

Brender, McNair, Jytte, and Jytte Brender. Handbook of Evaluation Methods for Health Informatics, Elsevier Science & Technology, 2006. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/waldenu/detail.action?docID=306691. Created from waldenu on 2022-03-06 01:57:27.

C o p yr

ig h t ©

2 0 0 6 . E

ls e vi

e r

S ci

e n ce

& T

e ch

n o lo

g y.

A ll

ri g h ts

r e

se rv

e d .

I-IANE~OOI< OlZ EVALUATION I'qI~THODS

References
(See under Measures and Metrics.)


Think Aloud

Areas of Application
An instrument for gaining insight into cognitive processes as feedback for the implementation and adaptation of IT-based systems.

Description
Think Aloud is a method that requires users to speak as they interact with an IT-based system to solve a problem or perform a task, thereby generating data on the ongoing thought processes during task performance (Kushniruk and Patel 2004). The user interaction data collected typically include a video recording of all displays along with the corresponding audio recording of the users' verbalizations. The resulting verbal protocols together with the videos are transcribed and systematically analyzed to develop a model of the user's behavior while performing a task.
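As a minimal illustration of the analysis step, the sketch below counts coded protocol segments per category; the coding scheme and the transcript fragments are hypothetical, and real protocol analysis is considerably more elaborate than a frequency count.

    from collections import Counter

    # Hypothetical coded segments from a transcribed think-aloud session:
    # (timestamp in seconds, code assigned by the analyst, verbatim fragment).
    protocol = [
        (12,  "navigation",  "where do I find the lab results?"),
        (45,  "terminology", "what does 'episode' mean here?"),
        (80,  "navigation",  "back again ... wrong screen"),
        (132, "confidence",  "ok, this matches what I expected"),
    ]

    # The frequency of each code gives a first, coarse picture of where the
    # user's attention and problems concentrate during the task.
    print(Counter(code for _t, code, _frag in protocol))

    # Problem segments in task order, for the qualitative write-up.
    for t, code, frag in protocol:
        if code != "confidence":
            print(f"{t:4d}s  [{code}]  {frag}")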

Assumptions for Application
Kushniruk and Patel (2004) carefully describe the recording and analysis tasks and implicitly illustrate the need for prior experience with this method: the data constitute raw material that requires substantial analysis and interpretation to gain in-depth insight into the way subjects perform tasks.

Perspectives
Jaspers et al. (2004) outline the various types of memory systems with different storage capacities and retrieval characteristics, putting across the message that humans are only capable of verbalizing the contents of the working memory (the currently active information), not the long-term memory.


A perspective behind this method is that the user's mental model is independent of his or her level of expertise. Thus, while using the output as input for a development process, the key issue is to elicit the joint mental model across different levels of expertise, as these present themselves differently in the interaction with the IT-based system (see also the discussion under Pitfalls in Cognitive Assessment, as well as in Part III). This perspective is clearly opposed as incorrect by Norman (1987), who states that mental models are incomplete, parsimonious, and unstable; they change, and they do not have firm boundaries. Nevertheless, the Think Aloud method may have its value for small and down-to-earth practical assessments, provided that the assessor knows these constraints and consequently does not overinterpret the outcome.

Frame of Reference for Interpretation
(Not applicable)

Perils and Pitfalls

Advice and Comments

References

Jaspers MWM, Steen T, van den Bos C, Geenen M. The Think Aloud method: a guide to user interface design. Int J Med Inform 2004;73:781-95.

Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.

Norman DA. Some observations on mental models. In: Baecker RM, Buxton WAS, editors. Readings in human-computer interaction: a multidisciplinary approach. Los Altos: Morgan Kaufmann Publishers, Inc.; 1987. p. 241-4.


Supplementary Reading

Cho I, Park H-A. Development and evaluation of a terminology-based electronic nursing record system. J Biomed Inform 2003;36:304-12.

Uses the Think Aloud method to identify success and failure in matching a precoordinated phrase with what the user wanted to document.

Ericsson KA, Simon HA. Protocol analysis, verbal reports as data. Cambridge (MA): The MIT Press; 1984.

This book provides a thorough review of the literature on approaches to, and problems of, eliciting information about cognitive processes from verbal data.

Preece J. Part VI, Interaction design: evaluation. In: Human-computer interaction. Wokingham: Addison-Wesley Publishing Company; 1994.

Part VI of this book describes a number of conventional methods for measuring usability, from traditional usability engineering via analysis of video data, verbal protocols, Think Aloud protocols, interviews, and questionnaire studies to ethnographic studies. Even though the perspective on page 693 indicates that the methods are intended for professionals, it may still be useful for others to read these chapters.


Usability

Areas of Application

"Usability is the quality o f interaction in a context" (Bevan and Macleod 1993)

• Assessment of user friendliness in terms of ergonomic and cognitive aspects of the interaction between an IT system and its users.

It can be used during all phases of an IT system's life cycle – for instance:

• During the analysis and design phase (the Explorative Phase)
• For assessment of bids (the Explorative Phase) – see (Beuscart-Zéphir et al. 2002 and 2005) and Assessment of Bids
• In connection with a delivery test (after installation at the end of the Technical Development Phase)
• As constructive assessment during the implementation process or during adjustment of the functionality, also called usability engineering. See, for instance, the reviews in (Kushniruk 2002; Kushniruk and Patel 2004).

Description
Usability is an aspect that must be taken into account in the functionality from the very first description of the idea and the objective of introducing the system until the system functions (well) in day-to-day use and during maintenance.

This is a very large subject area, stretching from vendors' and universities' advanced 'usability labs', with video monitoring of eye and hand movements attached to a log that tracks everything that goes on in the system, to assessment during hands-on use in a workplace. Somewhat depending on the authors, the concept includes both ergonomic and cognitive aspects, as they are closely related. As already defined in a previous footnote (see Field Studies), this handbook distinguishes between the work procedure-related aspects (usually denoted the functionality aspects), the dialogue-related aspects (the ergonomic aspects), and the perception-oriented aspects (the cognitive aspects).

"207

Brender, McNair, Jytte, and Jytte Brender. Handbook of Evaluation Methods for Health Informatics, Elsevier Science & Technology, 2006. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/waldenu/detail.action?docID=306691. Created from waldenu on 2022-03-06 01:57:27.

C o p yr

ig h t ©

2 0 0 6 . E

ls e vi

e r

S ci

e n ce

& T

e ch

n o lo

g y.

A ll

ri g h ts

r e

se rv

e d .

}-IANDE}OOI< OlZ EVALUATION METHODS

There is definitely an overlap between them, and they are all concerned with the functionality that users are confronted with during daily use of an IT-based system.

For a review of inspection methods, as well as of measurement methods ranging from formal through automatic and empirical to informal usability assessment methods, see Huart et al. (2004).

Measurement of cognitive aspects is still at an experimental stage with regard to the assessment of IT-based systems and has not yet been sufficiently addressed in the development methodologies. This is one of the reasons why only simple measurements and assessments of ergonomic aspects are included in this section, as they are easy enough for a user organization with some prior experience to undertake. Assessment of the cognitive aspects is described elsewhere (see the separate sections Cognitive Assessment and Cognitive Walkthrough).

Usability, with regard to ergonomic assessments, is concerned with characteristics of how difficult it is to carry out a user dialog with the IT system or – phrased in a positive sense – how effective an IT system is to use and how easy it is to learn to use.

The work of ergonomic assessment is task-oriented; that is, you simulate small, real operational tasks, using everyday scenarios as a base to identify suitable tasks. Correction of data is usually a good task to evaluate in this respect, although data entry and other limited activities should also be assessed. For example, how effectively does the user interface work when the user uses the prototype to simulate an actual task, such as correcting a patient ID? The effectiveness can be measured by (1) the number of shifts between screens; (2) the distance between fields on a given screen where data have to be entered or changed – in other words, how many times and how far the cursor has to be moved actively; or (3) whether it is necessary to manually correct data that are logical functions of each other (for instance, the affiliation of staff and patients to a ward and a department), with the attendant risk of inconsistencies. A minimal sketch of such counts follows.
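These counts can be read directly off a simple interaction log. The sketch below assumes a hypothetical log format and is only meant to show how mechanical metrics (1) and (2) are; metric (3) would require knowledge of the data model.

    # Sketch: compute metric (1), screen shifts, and a count of field visits
    # related to metric (2), from a hypothetical log of one simulated task.
    log = [
        ("screen", "patient_search"),
        ("field",  "patient_id"),
        ("field",  "surname"),
        ("screen", "demographics"),
        ("field",  "ward"),
        ("screen", "patient_search"),  # shifting back counts as well
        ("field",  "patient_id"),
    ]

    screens = [name for kind, name in log if kind == "screen"]
    screen_shifts = max(len(screens) - 1, 0)                         # metric (1)
    field_visits = sum(1 for kind, _name in log if kind == "field")  # metric (2)

    print(f"screen shifts: {screen_shifts}, field visits: {field_visits}")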

In practice this has been made operational by a number of metrics and suggestions of things to keep an eye out for. The report by Bastien and Scapin (1993), a pioneering work in this area, goes through a number of relevant aspects, including recommendations and criteria gathered under a series of headings, such as:

• Prompting: Which tools are used to prompt the user to the next specific step of an action (a data entry field or a new action)?


• Grouping of information and fields: Is there a logical, task-related, coherent, visual organization of the fields? This could be a topological (area) or a graphical (color, format, or similar) grouping
• Immediate feedback: What is the immediate reaction of the system when the user performs something and completes it?
• Readability: Concerns characteristics of readability, equivalent to the lix value (readability index) of a text. Is it readable (also for those who have early stages of age-related eyesight problems or are colorblind)? (A small computational sketch of the lix index follows this list.)
• Load (on the user): Concerns all aspects that play a part in reducing the sensory and cognitive strain on the user and the effectiveness of the dialog between the user and the system
• Consistency: Deals with the uniformity of the design of the screens (does it show that different people have programmed it?)
• Handling of errors: Deals with tools to handle errors and ways to prevent them
• Flexibility: Deals with the overall flexibility of the system
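The lix value mentioned under Readability has a simple, well-known definition: the average number of words per sentence plus the percentage of long words (more than six letters). A minimal computation, assuming plain text and naive sentence splitting:

    import re

    def lix(text: str) -> float:
        """Readability index: words per sentence + percentage of long words."""
        words = re.findall(r"[A-Za-z]+", text)
        sentences = [s for s in re.split(r"[.!?:]+", text) if s.strip()]
        long_words = [w for w in words if len(w) > 6]
        return len(words) / len(sentences) + 100 * len(long_words) / len(words)

    print(round(lix("The system saves the record. "
                    "Confirmation appears immediately."), 1))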

Assumptions for Application
Experience is a prerequisite for really being able to employ the usability method and to profit from it as a constructive feedback tool during a development process. However, a usability assessment requires a special professional background to have any deep impact.

Perspectives
Bastien and Scapin's report builds on an anthropocentric perspective – that is, the users are not robots that have to learn and adapt to the system, and the best overall function is achieved when the user understands what goes on and is in charge of what happens and what needs to be done (see Bastien and Scapin 1993).

Usability is not just a fashion fad: it has a decisive influence on user satisfaction, learning and training needs, and the strain of screen work, and so it influences the frequency of operational errors and omissions. The ergonomic aspects are decisive not only for the efficiency of the operation of the system but also for whether users try to avoid certain activities. For example, if the system is too cumbersome to handle, there is a risk that the user will carry out the activities in an incorrect way, or maybe jot down the information on a piece of paper, put it in a pocket, and delay the inputting or even forget all about it.


Frame of Reference for Interpretation
The report by Bastien and Scapin (1993) may work as a de facto standard for what is 'good practice' – at least from the view of what a user organization is normally capable of dealing with itself.

Perils and Pitfalls
A significant source of error is the bias inherent in people's subjective evaluations (see Kieras 1997 and Part III): one has to find objective metrics and measures if the ergonomics are a point of discussion and negotiation with a development team or, even worse, with an external vendor.

Advice and Comments
You can find some good rules of thumb for evaluations and assessments on http://jthom.best.vwh.net/usability/ under 'heuristic evaluation', under the reference to 'Nielsen'.

When assessing usability, one should be aware that the context in which measures are taken is very important and needs to simulate real life as closely as possible (i.e., ideally actual use) (Bevan and Macleod 1993; Coolican 1999).

If usability cannot be measured directly, it is sometimes possible to register and analyze the causes behind typical errors of operation and other unintentional events.

References

Bastien JMC, Scapin DL. Ergonomic criteria for the evaluation of human-computer interfaces. Rocquencourt (France): Institut National de Recherche en Informatique et en Automatique; 1993. Report No.: 156.

The report is available from the institute's website: http://www.inria.fr/publications/index.en.html. Click on 'Research Reports & Thesis', type 'ergonomic criteria' in the text field of the search engine, and press 'Enter'. Find the report RT0156, mentioned under the description of 'Usability' assessment. By clicking on 'pour obtenir la version papier' you get the e-mail address of the person concerned, and you may request the report (last visited 10.12.2003).

Beuscart-Zéphir MC, Watbled L, Carpentier AM, Degroisse M, Alao O. A rapid usability assessment methodology to support the choice of clinical information systems: a case study. In: Kohane I, editor. Proc AMIA 2002 Symp on Bio*medical Informatics: One Discipline; 2002 Nov; San Antonio, Texas; 2002. p. 46-50.


Beuscart-Zéphir M-C, Anceaux F, Menu H, Guerlinger S, Watbled L, Evrard F. User-centred, multidimensional assessment method of clinical information systems: a case study in anaesthesiology. Int J Med Inform 2005;74(2-4):179-89.

Bevan N, Macleod M. Usability Assessment and Measurement. In: Kelly M. Management and measurement of software quality. Uxbridge: Unicom Seminars Ltd; 1993. p. 167-92.

An easily accessible opening for usability measurement.

Coolican H. Introduction to research methods and statistics in psychology. 2nd ed. London: Hodder & Stoughton; 1999.

Describes (mainly in the early chapters) problems of experimental conditions of psychological phenomena.

Huart J, Kolski C, Sagar M. Evaluation of multimedia applications using inspection methods: the Cognitive Walkthrough case. Interacting Comput 2004;16:183-215.

Kieras D. A guide to GOMS model usability evaluation using NGOMSL. In: Helander MG, Landauer TK, Prabhu PV, editors. Handbook of human computer interaction. 2nd ed. Amsterdam: Elsevier Science B.V.; 1997. p. 733-66.

Kushniruk A. Evaluation in the design of health information systems: application of approaches emerging from usability engineering. Comput Biol Med 2002;32(3):141-9.

Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform 2004;37:56-76.

http://jthom.best.vwh.net/usability
The page contains numerous links to method descriptions and references, including, for instance, Jacob Nielsen's ten recommended aspects (last visited 15.05.2005).

Supplementary Reading

Baecker RM, Buxton WAS, editors. Readings in human-computer interaction: a multidisciplinary approach. Los Altos: Morgan Kaufmann Publishers, Inc.; 1987.
