
PART 2 THE PROCESS OF ORGANIZATION DEVELOPMENT

© Pixmann/Imagezoo/Getty Images

4 Entering and Contracting

5 Diagnosing

6 Collecting, Analyzing, and Feeding Back Diagnostic Information

7 Designing Interventions

8 Managing Change

9 Evaluating and Institutionalizing Organization Development Interventions

SELECTED CASES

Sunflower Incorporated

Kenworth Motors

Peppercorn Dining

Diagnosis and Feedback at Adhikar

Managing Change: Action Planning for the Vélo V Project in Lyon, France


4 Entering and Contracting

learning objectives

Describe the issues associated with entering into an OD process.

Describe the issues associated with contracting for an OD process.

The planned change process described in Chapter 2 generally starts when one or more managers or administrators sense an opportunity for their organization, department, or group, believe that new capabilities need to be developed, or decide that performance could be improved through organization development (OD). The organization might be successful yet have room for improvement. It might be facing impending environmental conditions that necessitate a change in how it operates. The organization could be experiencing particular problems, such as poor product quality, high rates of absenteeism, or dysfunctional conflicts among departments. Conversely, the problems might appear more diffuse and consist simply of feelings that the organization should be “more innovative,” “more competitive,” or “more effective.”

Entering and contracting are the initial steps in the OD process. They involve defining in a preliminary manner the organization’s problems or opportunities for development and establishing a collaborative relationship between the OD practitioner and members of the client system about how to work on those issues. Entering and contracting set the initial parameters for carrying out the subsequent phases of OD: diagnosing, planning and implementing changes, and evaluating and institutionalizing them. They help to define what issues will be addressed by those activities, who will carry them out, and how they will be accomplished.

Entering and contracting can vary in complexity and formality depending on the situation. In those cases where the manager of a work group or department serves as his or her own OD practitioner, entering and contracting typically involve the manager and group members meeting to discuss what issues to work on and how they will jointly meet the goals they set. Here, entering and contracting are relatively simple and informal. They involve all relevant members directly in the process—with a minimum of formal procedures. In situations where managers and administrators are considering the use of professional OD practitioners, either from inside or from outside the organization, entering and contracting tend to be more complex and formal.1 OD practitioners may need to collect preliminary information to help define the problematic or development issues. They may need to meet with representatives of the client organization rather than with the total membership; they may need to formalize their respective roles and how the change process will unfold. In cases where the anticipated changes are strategic and large in scale, formal proposals from multiple consulting firms may be requested and legal contracts drawn up.


This chapter first discusses the activities and content-oriented issues involved in entering into and contracting for an OD initiative. We will focus our attention on complex processes involving OD professionals and client organizations. Similar entering and contracting issues, however, need to be addressed in even the simplest OD efforts, where managers serve as OD practitioners for their own work units. Unless there is clarity and agreement about what issues to work on, who will address them, how that will be accomplished, and what timetable will be followed, subsequent stages of the OD process are likely to be confusing and ineffective. The chapter concludes with a discussion of the interpersonal process issues involved in entering and contracting for OD work.

4-1 Entering into an OD Relationship

An OD process generally starts when a member of an organization or unit contacts an OD practitioner about potential help in addressing an organizational issue.2 The organization member may be a manager, staff specialist, or some other key participant; the practitioner may be an OD professional from inside or outside of the organization. Determining whether the two parties should enter into an OD relationship typically involves clarifying the nature of the organization’s current functioning and the issue(s) to be addressed, the relevant client system for that issue, and the appropriateness of the particular OD practitioner.3 In helping assess these issues, the OD practitioner may need to collect preliminary data about the organization. Similarly, the organization may need to gather information about the practitioner’s competence and experience.4 This knowledge will help both parties determine whether they should proceed to develop a contract for working together.

This section describes the activities involved in entering an OD relationship: clarifying the organizational issue, determining the relevant client, and selecting the appropriate OD practitioner.

4-1a Clarifying the Organizational Issue

When seeking help from OD practitioners, organizations typically start with a presenting problem—the issue that has caused them to consider an OD process. It may be specific (decreased market share, increased absenteeism) or general (“we’re growing too fast,” “we need to prepare for rapid changes”). The presenting problem often has an implied or stated solution. For example, managers may believe that because costs are high, laying off members of their department is the obvious answer. They may even state the presenting problem in the form of a solution: “We need to downsize our organization.”

In many cases, however, the presenting problem is only a symptom of an underlying problem. For example, high costs may result from several deeper causes, including ineffective new-product development or manufacturing processes, inappropriate customer-service policies and procedures, or conflict between two interdependent groups. The issue facing the organization or department must be clarified early in the OD process so that subsequent diagnostic and intervention activities are focused correctly.5

Gaining a clearer perspective on the organizational issue may require collecting preliminary data.6 OD practitioners often examine company records and interview a few key members to gain an introductory understanding of the organization, its context, and the nature of the presenting problem. Those data are gathered in a relatively short period of time—typically over a few hours to one or two days. They are intended to provide enough rudimentary knowledge of the organizational issue to enable the two parties to make informed choices about proceeding with the contracting process.

76 PART 2 THE PROCESS OF ORGANIZATION DEVELOPMENT

The diagnostic phase of OD involves a far more extensive assessment of the problematic or development issue than occurs during the entering and contracting stage. The diagnosis also might discover other issues that need to be addressed, or it might lead to redefining the initial issue that was identified during the entering and contracting stage. This is a prime example of the emergent nature of the OD process: Things may change as new information is gathered and new events occur.

4-1b Determining the Relevant Client

A second activity in entering an OD relationship is defining the relevant client for addressing the organizational issue.7 Generally, the relevant client includes those organization members who can directly impact the change issue, whether it is solving a particular problem or improving an already successful organization or department. Unless these members are identified and included in the entering and contracting process, they may withhold their support for and commitment to the OD process. In trying to improve the productivity of a unionized manufacturing plant, for example, the relevant client may need to include union officials as well as managers and staff personnel. It is not unusual for an OD project to fail because the relevant client was inappropriately defined.

Determining the relevant client can vary in complexity depending on the situation. In those cases where the organizational issue can be addressed in a specific organization unit, client definition is relatively straightforward. Members of that unit constitute the relevant client. They or their representatives must be included in the entering and contracting process. For example, if a manager asked for help in improving the decision-making process of his or her team, the manager and team members would be the relevant client. Unless they are actively involved in choosing an OD practitioner and defining the subsequent change process, there is little likelihood that OD will improve team decision making.

Determining the relevant client is more complex when the organizational issue cannot readily be addressed in a single unit. Here, it may be necessary to expand the definition of the client to include members from multiple units, from different hierarchical levels, and even from outside of the organization. For example, the manager of a production department may seek help in resolving conflicts between his or her unit and other departments in the organization. The relevant client would extend beyond the boundaries of the production department because that department alone cannot resolve the issue. The client might include members from all departments involved in the conflict as well as the executive to whom all of the departments report. If that interdepartmental conflict also involved key suppliers and customers from outside of the firm, the relevant client might include members of those groups.

In such complex situations, OD practitioners need to gather additional information about the organization to determine the relevant client, generally as part of the preliminary data collection that typically occurs when clarifying the issue to be addressed. When examining company records or interviewing personnel, practitioners can seek to identify the key members and organizational units that need to be involved. For example, they can ask organization members questions such as these: Who can directly influence the organizational issue? Who has a vested interest in it? Who has the power to approve or reject the OD effort? Answers to those questions can help determine who is the relevant client for the entering and contracting stage. However, the client may change during the later stages of the OD process as new data are gathered and changes occur. If so, participants may have to return to and modify this initial stage of the OD effort.

CHAPTER 4 ENTERING AND CONTRACTING 77

4-1c Selecting an OD Practitioner

The last activity involved in entering an OD relationship is selecting an OD practitioner who has the expertise and experience to work with members on the organizational issue. Unfortunately, little systematic advice is available on how to choose a competent OD professional, whether from inside or outside of the organization.8 To help lower the uncertainty of choosing from among external OD practitioners, organizations may request that formal proposals be submitted. In these cases, the OD practitioner must take all of the information gathered in the prior steps and create an outline of how the process might unfold. Table 4.1 provides one view of the key elements of such a proposal. It suggests that a written proposal include project objectives, outlines of proposed processes, a list of roles and responsibilities, recommended interventions, and proposed fees and expenses.

For less formal and structured selection processes, the late Gordon Lippitt, a pioneering practitioner in the field, suggested several criteria for selecting, evaluating, and developing OD practitioners.9 Lippitt listed areas that managers should consider before selecting a practitioner—including the practitioner’s ability to form sound interpersonal relationships, the degree of focus on the problem, the skills of the practitioner relative to the problem, the extent to which the consultant clearly informs the client as to his or her role and contribution, and whether the practitioner belongs to a professional association. References from other clients are highly important. A client may not like the consultant’s work, but it is critical to know the reasons for both pleasure and displeasure. One important consideration is whether the consultant approaches the organization with openness and an insistence on diagnosis or whether the practitioner appears to have a fixed program that is applicable to almost any problem or organization.

TABLE 4.1

Essentials of an Effective OD Proposal

Objectives of proposed project: A statement of the goals in clear and concise terms, including measurable results, if any.

Proposed process or action plan: An overview of the process to be used. Usually includes a diagnosis (including how the data will be collected), a feedback process, and an action-planning or implementation process.

Roles and responsibilities: A list of key stakeholders in the process, including the OD practitioner, and the specific responsibilities for which they will be held accountable.

Recommended interventions: A description of the proposed change strategies, including training, off-site meetings, systems or processes to be redesigned, and other activities.

Fees, terms, and conditions: An outline of the fees and expenses associated with the project.

SOURCE: Adapted from A. Freedman and R. Zackrison, Finding Your Way in the Consulting Jungle (San Francisco: Jossey-Bass/Pfeiffer, 2001), 141–47.


Certainly, OD consulting is as much a person specialization as it is a task specialization. The OD professional needs not only a repertoire of technical skills but also the personality and interpersonal competence to use himself or herself as an instrument of change. Regardless of technical training, the consultant must be able to maintain a boundary position, coordinating among various units and departments and mixing disciplines, theories, technology, and research findings in an organic rather than in a mechanical way. The practitioner is potentially the most important OD technology available.

Thus, in selecting an OD practitioner perhaps the most important issue is the fundamental question, “How effective has the person been in the past, with what kinds of organizations, using what kinds of techniques?” In other words, references must be checked. Interpersonal relationships are tremendously important, but even con artists have excellent interpersonal relationships and skills.

The burden of choosing an effective OD practitioner should not rest entirely with the client organization.10 As described in the Ethical Dilemmas section of Chapter 3, consultants also bear a heavy responsibility in determining whether there is a match between their skills and knowledge and what the organization or department needs. Few managers are sophisticated enough to detect or to understand subtle differences in expertise among OD professionals, and they often do not understand the difference between intervention specialties. Thus, practitioners should help educate potential clients, being explicit about their strengths and weaknesses and their range of competence. If OD professionals realize that a good match does not exist, they should inform the client and help them find more suitable help.

Application 4.1 describes the entering process at Alegent Health, a large health care system in Nebraska and western Iowa. The entry process was largely “virtual” in that the researchers worked through two consultants who were conducting OD interventions on a regular basis. The case highlights how OD work can come in different forms and through different channels. It also reflects how quickly the “entry” process can occur. This is the first in a series of applications based on the Alegent project that will be used throughout the text.

4-2 Developing a Contract

The activities of entering an OD relationship are a necessary prelude to developing an OD contract. They define the major focus for contracting, including the relevant parties. Contracting is a natural extension of the entering process and clarifies how the OD process will proceed. It typically establishes the expectations of the parties, the time and resources that will be expended, and the ground rules under which the parties will operate.

The goal of contracting is to make a good decision about how to carry out the OD process.11 It can be relatively informal and involve only a verbal agreement between the client and the OD practitioner. A team leader with OD skills, for example, may voice his or her concerns to members about how the team is functioning. After some discussion, they might agree to devote one hour of future meeting time to diagnosing the team with the help of the leader. Here, entering and contracting are done together, informally. In other cases, contracting can be more protracted and result in a formal document. That typically occurs when organizations employ outside OD practitioners. Government agencies, for example, generally have procurement regulations that apply to contracting with outside consultants.12


application 4.1
ENTERING ALEGENT HEALTH

Alegent Health (AH) is a five-hospital system that serves the greater Omaha, Nebraska, and western Iowa region. Alegent was formed when two religious-sponsored health care systems merged to leverage health care industry changes and to bargain more powerfully with physicians and insurance providers. The system had its own managed care insurance program, was implementing a consumer-directed health care program for its employees, and had about 100 employed physicians in addition to the physicians with privileges at its hospitals.

Two well-known OD consultants had been working with AH for about two years, doing a variety of OD work. By far, the largest project was the design and delivery of large group interventions known as decision accelerators (DAs) to create strategies for the major clinical service areas, such as orthopedics, cardiology, and women’s and children’s services. [Note: Large group interventions are multistakeholder meetings of over 50 people—see Chapter 11 for more information.]

At an organization design conference in April, one of the consultants was talking with researchers from the Center for Effective Organizations at USC. The conversation turned to a discussion of the work at AH and the possibility of evaluating the change effort. The researchers were excited about the organization development and large group intervention work in the health care context. The consultant agreed to pitch the idea to AH’s Chief Innovation Officer (CIO).

Following some additional background conversations with the researchers and the CIO, the consultant sent the following email in June:

Dear CIO, I would like to introduce you to the Center for Effective Organization researchers. As we discussed, the researchers are very interested in the work being done at AH and will be calling you early next week to discuss the possibility of doing a research project on the Decision Accelerator effort.

The form of research is typically action research, meaning the data will be valuable for Alegent in not only assessing the impact and effectiveness of the DA intervention but learning how to position this capability for improved Alegent organizational effectiveness. This can be quite valuable as Alegent moves into the next round of change and transformation.

Thanks all.

The researchers spent the next few days talking to the two consultants about AH, its history, strategy, structure, and culture, as well as the motivation for the large-group, decision accelerator process. They also collected data on AH through the Internet. Alegent was indeed a unique organization. It was highly successful from a financial point of view, had a new CEO who had been brought in from Florida, and had a strong faith-based mission.

In the first phone call with the CIO, the researchers introduced themselves, described the mission of the research center, and their interest in doing a case study of change at Alegent. The CIO talked about the history of change at AH and asked questions about the value the project would have for them. He saw several benefits, including the opportunity to generate a history of the change, to learn about the impacts of the change process on the organization’s culture and members, and to build a database that could be used to advance AH’s objective of “changing the face of health care.” The call ended with the agreement that the CIO would talk with others in the organization, including the CEO, and that the researchers should begin to put together a project purpose, cost estimate, and schedule.

In the second call, the researchers presented their understanding of the project as a case study assessment of how innovation was created and implemented at Alegent. They described a way of working with organizations—the establishment of a “study team” composed of several key stakeholders in the organization. The study team would meet, before the project officially began, to review the objectives of the study and ensure that the work was relevant to the organization. There was some conversation about who might be on that team, including the CEO, CFO, the hospital presidents, and the VPs of the clinical service areas.

Subsequent email exchanges among the consultants, the CIO, and the researchers led to a verbal agreement that the project should begin in October. The CIO believed there was much to gain from the project, and asked the Director of the Right Track office (this was the internal name AH had given to the decision accelerator) to lead the contracting process and to help the researchers schedule meetings and interviews.

Regardless of the level of formality, all OD processes require some form of explicit contracting that results in either a verbal or a written agreement. Such contracting clarifies the client’s and the practitioner’s expectations about how the OD process will take place. Unless there is mutual understanding and agreement about the process, there is considerable risk that someone’s expectations will be unfulfilled.13 That can lead to reduced commitment and support, to misplaced action, or to premature termination of the process.

The contracting step in OD generally addresses three key areas:14 setting mutual expectations, or what each party expects to gain from the OD process; the time and resources that will be devoted to it; and the ground rules for working together.

4-2a Mutual Expectations

This part of the contracting process focuses on the expectations of the client and the OD practitioner. The client states the services and outcomes to be provided by the OD practitioner and describes what the organization expects from the process and the consultant. Clients usually can describe the desired outcomes, such as lower costs or higher job satisfaction. Encouraging them to state their wants in the form of outcomes, working relationships, and personal accomplishments can facilitate the development of a good contract.15

The OD practitioner also should state what he or she expects to gain from the OD process. This can include opportunities to try new interventions, report the results to other potential clients, and receive appropriate compensation or recognition.

4-2b Time and Resources

To accomplish change, the organization and the OD practitioner must commit time and resources to the effort. Each must be clear about how much energy and how many resources will be dedicated to the change process. Failure to make explicit the necessary requirements of a change process can quickly ruin an OD effort. For example, a client may clearly state that the assignment involves diagnosing the causes of poor productivity in a work group. However, the client may expect the practitioner to complete the assignment without talking to the workers. Typically, clients want to know how much time will be necessary to complete the assignment, who needs to be involved, how much it will cost, and so on.

Peter Block has suggested that resources can be divided into two parts.16 Essential requirements are things that are absolutely necessary if the change process is to be successful. From the practitioner’s perspective, they can include access to key people or information, enough time to do the job, and commitment from certain stakeholder groups. The organization’s essential requirements might include a speedy diagnosis or assurances that the project will be conducted at the lowest price. Being clear about the constraints on carrying out the assignment will facilitate the contracting process and improve the chances for success. Desirable requirements are those things that would be nice to have but are not absolutely necessary, such as access to special resources or written rather than verbal reports.

4-2c Ground Rules

The final part of the contracting process involves specifying how the client and the OD practitioner will work together. The parameters established may include such issues as confidentiality, if and how the OD practitioner will become involved in personal or interpersonal issues, how to terminate the relationship, and whether the practitioner is supposed to make expert recommendations or help the manager make decisions. For internal consultants, organizational politics make it especially important to clarify issues of how to handle sensitive information and how to deliver “bad news.”17 Such process issues are as important as the needed substantive changes. Failure to address the concerns may mean that the client or the practitioner has inappropriate assumptions about how the process will unfold.

Application 4.2 describes the contracting process for the evaluation project at Alegent Health. In this case, the contracting process was much more complicated than the entry process. What would you list as the strengths and weaknesses of this example?

4-3 Interpersonal Process Issues in Entering and Contracting

The previous sections on entering and contracting addressed the activities and content-oriented issues associated with beginning an OD project. In this final section, we discuss the interpersonal issues an OD practitioner must be aware of to produce a successful agreement. In most cases, the client’s expectations, resources, and working relationship requirements will not fit perfectly with the OD practitioner’s essential and desirable requirements. Negotiating the differences to improve the likelihood of success can be personally and interpersonally challenging.18

Entering and contracting are the first exchanges between a client and an OD practitioner. Establishing a healthy relationship at the outset makes it more likely that the client’s desired outcomes will be achieved and that the OD practitioner will be able to improve the organization’s capacity to manage change in the future. As shown in Figure 4.1, this initial stage is full of uncertainty and ambiguity. On the one hand, the client is likely to feel exposed, inadequate, or vulnerable. The organization’s current effectiveness and the request for help may seem to the client like an admission that the organization is incapable of solving the problem or providing the leadership necessary to achieve a set of results. Moreover, clients are entering into a relationship where they may feel unable to control the activities of the OD practitioner. As a result, they feel vulnerable because of their dependency on the practitioner to provide assistance. Consciously or unconsciously, feelings of exposure, inadequacy, or vulnerability may lead clients to resist coming to closure on the contract. The OD practitioner must be alert to the signs of resistance, such as asking for extraordinary amounts of detail, and be able to address them skillfully.


application 4.2
CONTRACTING WITH ALEGENT HEALTH

Following the verbal approval of the CIO to begin the work, the researchers began working with the Right Track director and the consultants to formulate an agreement on how to proceed with the case study and assessment. The contracting process proceeded on two parallel paths. One path was the specification of the formal contract—who, what, how much, and why—and the second path was the project scheduling—who, when, and where.

FORMAL CONTRACTING PROCESS

The formal contracting process required the researchers to propose a purpose, cost estimate, and schedule for the case study. The researchers’ initial proposal looked like this:

Proposed work streams (September–January):

• DA archives: collect DA materials; create coding scheme; coding; write up archival data

• Interviews: finalize interview questions; arrange interview schedule; first round of interviews; develop coding scheme; second round of interviews; coding; begin analysis of interviews

• Governance: meet with “study team”; feedback meeting; transfer learning to organization; article writing

The first work stream was the DA archives. The researchers had learned, through the consultants and the Right Track director, that the Right Track staff kept nearly verbatim transcripts and descriptions of each of the decision accelerator meetings that took place. Thus, the researchers proposed an analysis of those documents as an important work stream in the process. The second work stream, representing the bulk of the data collection, would be two rounds of interviews with executives, managers, and staff involved in the change process. Finally, the project would be governed by a study team that would work to frame project objectives, receive the feedback and assist in data interpretation, and help to transfer the learning back to the organization.

In addition to the timeline, the research proposal outlined the purpose of the project; the likely benefits to Alegent; the estimated costs for interviews, data analysis, and direct expenses; the support resources expected from AH, including the establishment of the study team; a statement about data confidentiality; and some suggested publication outlets. The Right Track director reviewed the document and asked for some additional detail. As described in the “Project Scheduling Process” section below, the start date had slipped to early November.

Dear Right Track Director, We got a message from the consultants that you need a little extra “drill down detail” on the case study assessment project. We’ve taken a stab at such a document and it is attached.

The document includes a one-page description of proposed dates, activities, and information to be gathered. Please let me know if this meets your needs.


The document also lists a set of potential questions for the initial round of interviews. There are two issues we could use your guidance on. First, what is the appropriate time frame for questions about strategy? Second, we’ve listed a couple of options for using a survey during the interview to collect information that would take too long to collect through just interview questions. Your counsel would be appreciated. Thanks.

DATA COLLECTION PLAN—RIGHT TRACK ASSESSMENT PROJECT

Day 1 during the week of November 6th

Activities:
• Meet with study team members to verify objectives and methods and refine them in order to incorporate sponsor concerns
• Initial interviews with senior executives* to understand broad strategic context of organization and Right Track process

Data to be collected:
• Executive sense of business strategy, organization design, and Right Track impact on organization
• Broad scoping of the post-RT implementation/refinement activities germane to planning remainder of interviews/data gathering
• (Initial draft of questions attached)

Day 2 during the week of November 6th

Activities:
• Initial interviews with senior executives* to understand broad strategic context of organization and Right Track process

Data to be collected:
• Executive sense of business strategy, organization design, and Right Track impact on organization
• Broad scoping of the post-RT implementation/refinement activities germane to planning remainder of interviews/data gathering
• (Initial draft of questions attached)

Prior to next visit

Activities:
• Finalize detailed interview questions for different stakeholders
• Validate questions and sampling approach with study team
• Work with Right Track office to schedule interviews

Potential dates: November 27, 28; December 4, 5; December 7, 8; December 13, 14

Activities:
• Detailed interviews with RT participants, nonparticipants, service-line managers, and other related managers**

Data to be collected:
• Details about perceptions of RT process, service-line strategies, implementation processes, and implementation success

Ongoing

Activities:
• Telephone interviews with key personnel unavailable during visits to Omaha

January 2007 (date to be mutually determined)

Activities:
• Meeting with study team and/or extended stakeholder group to review and discuss implications of findings

February

Activities:
• Work with Alegent sponsors to determine a publication strategy

*Initial interview sample includes as many of the following as possible: [List of executives and physicians.]
**Interview sample for detailed background information includes: [List of executives, managers, and other roles expected to be important.]


Shortly thereafter, the Right Track director sent the following email:

Center for Effective Organization Researchers,
Thanks for this added info. I, along with one of my staff members, have taken this along with all the documentation you have sent me to date and have attempted to create one cohesive document that can serve as the contract, statement of work, action plan, cost estimate, etc. … This document is attached for your review.

I have also tried to answer some of the outstanding questions we have had in this document and have tried to further narrow the onsite dates and activities to include the interview list and the two questions you mentioned below. On your questions I think the two-year window is appropriate and I preferred option 2 which is incorporated in the attached.

Please review this latest document and provide any feedback and/or changes you might have to us all. I will be out of town for a few days but my staff can keep the process moving through Legal and the CIO's office in my absence. I can also be reached via cell phone through the rest of the week as needed.

Thanks.

The attachment referred to in the Right Track director's email was a standard, corporate consulting contract, with the researchers' proposal and revised schedule attached as the scope of work. Within the standard contract was a paragraph noting that all surveys, data, and documents created during the project would become the exclusive property of the Alegent Health corporation. The paragraph directly contradicted the confidentiality statement in the researchers' proposal. A number of conversations among the consultants, the researchers, and the different Alegent departments ensued. Eventually, a paragraph was written that was satisfactory to all parties and allowed for the researchers to use the data in their publications, but also gave Alegent the right to review, edit, and approve any articles, chapters, or descriptions of the organization change effort.

PROJECT SCHEDULING PROCESS

The project scheduling process—which was done in parallel with the formal contracting process described above—involved working with the Right Track office to pick dates, schedule interviews, communicate with interviewees, and set up other logistical requirements to begin the study. Following a few introductory emails, and based on the CIO's interest in beginning in October, the researchers sent the following message in early September:

Hi Right Track Director,
With the CIO's approval, we're ready to begin the Right Track assessment project. The consultants and the researchers are very excited about the effort. We need your help to set up the first couple of days in October, ideally on the 17th and 18th.

On the 17th, we'd like to have a meeting of the "study team." This can be in the morning or afternoon, whichever best fits into the CIO's schedule.

The balance of the 17th and all day on the 18th should be 60-minute interviews with the senior leadership of Alegent. Based on our discussions with the consultants and the CIO, the list for the initial round of interviews would be 10 to 12 of the following people:

[List of top 15 executives and 7 key physicians]

Thanks for your help.

In response, the Right Track director sent back the following email:

Center for Effective Organizations Researchers,
Welcome aboard and looking forward to working with you on this effort. Is there a specific reason you are targeting 10/17 & 18? I ask because there is a DA scheduled those two days that some of these folks are supposed to be in and that I will be helping to support. It is actually an external group, namely the Boy Scouts. Are you planning to come that week because of that or is this just a coincidence? My contact info is enclosed.

Thanks.

Thus, there was some initial confusion on the start date of the project, and subsequent phone calls and emails clarified that starting the project in November would be a better fit for the Alegent organization. Some initial dates that fit in the researchers' schedule were not good for the AH executives and physicians, while dates that were good for AH didn't fit with the researchers' schedule.

Eventually, the beginning of the project was pushed back to early December, and the researchers flew to Omaha to begin the interviewing process. In the rush to schedule interviews, make travel arrangements, and finalize the interview questions and survey items, the meeting of the "study team" was overlooked.

On the other hand, the OD practitioner may have feelings of empathy, unworthiness, and dependency. The practitioner may over-identify with the client's issues and want to be so helpful that he or she agrees to unreasonable deadlines or inadequate resources. The practitioner's desire to be seen as competent and worthy may lead to an agreement on a project for which the practitioner has few skills or experience. Finally, in response to reasonable client requests, the practitioner may challenge the client's motivation and become defensive. Schein noted that OD practitioners too often underestimate or ignore the power and impact of entry and contracting as an intervention in their own right.19 With even the simplest request for help, there are a myriad of things the OD practitioner, entering a system for the first time, does not know. Establishing a relationship with a client must be approached carefully; the initial contacts and conversations must represent a model of how the OD process will be conducted. As a result, actually coming to agreement during the contracting phase can be difficult and intense. A number of complex emotional and psychological issues are in play, and OD practitioners must be mindful of their own as well as the client's perspectives. Attending to those issues as well as to the content of the contract will help increase the likelihood of success.

FIGURE 4.1

Factors Affecting Client-Practitioner Dynamics

SOURCE: B. Jones and M. Brazzel (editors), The NTL Handbook of Organization Development & Change, Pfeiffer, 2006, Figure 10.2, pp. 177–91. Reproduced with permission of John Wiley & Sons Inc.

SUMMARY

Entering and contracting constitute the initial activities of the OD process. They set the parameters for the phases of planned change that follow: diagnosing, planning and implementing change, and evaluating and institutionalizing it. Organizational entry involves clarifying the organizational issue or presenting problem, determining the relevant client, and selecting an OD practitioner. Developing an OD contract focuses on making a good decision about whether to proceed and allows both the client and the OD practitioner to clarify expectations about how the change process will unfold. Contracting involves setting mutual expectations, negotiating time and resources, and developing ground rules for working together.

NOTES

1. M. Lacey, "Internal Consulting: Perspectives on the Process of Planned Change," Journal of Organization Change Management 8, no. 3 (1995): 75–84; J. Geirland and M. Maniker-Leiter, "Five Lessons for Internal Organization Development Consultants," OD Practitioner 27 (1995): 44–48; A. Freedman and R. Zackrison, Finding Your Way in the Consulting Jungle (San Francisco: Jossey-Bass/Pfeiffer, 2001).

2. P. Block, Flawless Consulting: A Guide to Getting Your Expertise Used, 3rd ed. (San Francisco: Jossey-Bass, 2011); C. Margerison, "Consulting Activities in Organizational Change," Journal of Organizational Change Management 1 (1988): 60–67; R. Harrison, "Choosing the Depth of Organizational Intervention," Journal of Applied Behavioral Science 6 (1970): 182–202.

3. S. Gallant and D. Rios, "Entry and Contracting Phase," in The NTL Handbook of Organization Development and Change, ed. B. Jones and M. Brazzel (San Francisco: Pfeiffer, 2006); M. Beer, Organization Change and Development: A Systems View (Santa Monica, CA: Goodyear, 1980); G. Lippitt and R. Lippitt, The Consulting Process in Action, 2nd ed. (San Diego: University Associates, 1986).

4. L. Greiner and F. Poulfelt, Management Consulting Today and Tomorrow (New York: Routledge, 2010); L. Greiner and R. Metzger, Consulting to Management (Englewood Cliffs, NJ: Prentice Hall, 1983), 251–58; Beer, Organization Change and Development, 81–83.

5. Block, Flawless Consulting.

6. D. Jamieson, "Pre-Launch," in Practicing Organization Development, 2nd ed., ed. W. Rothwell and R. Sullivan (San Francisco: Pfeiffer, 2005); J. Fordyce and R. Weil, Managing WITH People, 2nd ed. (Reading, MA: Addison-Wesley, 1979).

7. Beer, Organization Change and Development; Fordyce and Weil, Managing WITH People.

8. L. Forcella, “Marketing Competency and Consulting Competency for External OD Practitioners” (unpublished master’s thesis, Pepperdine University, Malibu, CA, 2003).

9. G. Lippitt, "Criteria for Selecting, Evaluating, and Developing Consultants," Training and Development Journal 28 (August 1972): 10–15.

10. Greiner and Metzger, Consulting to Management.

11. Block, Flawless Consulting; Gallant and Rios, "Entry and Contracting Phase," in The NTL Handbook of Organization Development and Change; Beer, Organization Change and Development.

12. T. Cody, Management Consulting: A Game Without Chips (Fitzwilliam, NH: Kennedy and Kennedy, 1986), 108–16; H. Holtz, How to Succeed as an Independent Consultant, 2nd ed. (New York: John Wiley & Sons, 1988), 145–61.

13. G. Bellman, The Consultant’s Calling (San Francisco: Jossey-Bass, 1990).


14. M. Weisbord, "The Organization Development Contract," Organization Development Practitioner 5 (1973): 1–4; M. Weisbord, "The Organization Contract Revisited," Consultation 4 (Winter 1985): 305–15; D. Nadler, Feedback and Organization Development: Using Data-Based Methods (Reading, MA: Addison-Wesley, 1977), 110–14.

15. Block, Flawless Consulting.

16. Ibid.

17. Lacey, "Internal Consulting."

18. S. Pellegrinelli, "Managing the Interplay and Tensions of Consulting Interventions: The Consultant-Client Relationship as Mediation and Reconciliation," Journal of Management Development 21 (2002): 343–65.

19. E. Schein, "Taking Culture Seriously in Organization Development: A New Role for OD" (working paper no. 4287–03, MIT Sloan School of Management, Cambridge, MA, 2003).


© Pixmann/Imagezoo/Getty Images

5

Diagnosing

learning objectives

Discuss the philosophy and purpose of diagnosis in organization development (OD).

Explain the role of diagnostic models in OD, especially the open-systems model.

Describe and apply organization-level diagnostic processes.

Describe and apply group-level diagnostic processes.

Describe and apply individual-level diagnostic processes.

Diagnosing is the second major phase in the general model of planned change described in Chapter 2 (Figure 2.2). It follows the entering and contracting stage (Chapter 4) and precedes the planning and implementation phase. When done well, diagnosis clearly points the organization and the organization development (OD) practitioner toward a set of appropriate intervention activities that will improve organization effectiveness.

Diagnosis is the process of understanding a system's current functioning. It involves collecting pertinent information about existing operations as well as analyzing those data and drawing conclusions about the reasons for current performance and the potential for change and improvement. Effective diagnosis provides the systematic knowledge of the organization needed to design appropriate interventions. Thus, OD interventions derive from diagnosis and include specific actions intended to improve organizational functioning. (Chapters 10–20 present the major interventions used in OD today.)

This and the next chapter describe different aspects of the diagnostic process. This chapter presents a general definition of diagnosis and discusses the need for diagnostic models in guiding the process. Diagnostic models derive from conceptions about how organizations function, and they tell OD practitioners what to look for in diagnosing organizations, groups, or jobs. They serve as a road map for discovering current functioning. A general, comprehensive diagnostic model is presented based on open-systems theory. We then describe and apply the model to diagnostic situations at the organization, group, and job levels. Chapter 6 completes the diagnostic phase by discussing processes of data collection, analysis, and feedback.



5-1 What Is Diagnosis?

Diagnosis is the process of understanding how the organization is currently functioning, and it provides the information necessary to design change interventions.1 It generally follows from successful entry and contracting, which set the stage for successful diagnosis. Those processes help OD practitioners and client members jointly determine which organizational issues to focus on, how to collect and analyze data to understand them, and how to work together to develop action steps from the diagnosis. In another sense, diagnosis is happening all the time. Managers, organization members, and OD practitioners are always trying to understand the drivers of organization effectiveness as well as how and why changes are proceeding in a particular way.

Unfortunately, the term diagnosis can be misleading when applied to organizations. It suggests a model of organization change analogous to the medical model of diagnosis: An organization (patient) experiencing problems seeks help from an OD practitioner (doctor); the practitioner examines the organization, finds the causes of the problems, and prescribes a solution. Diagnosis in organization development, however, is much more collaborative than such a medical perspective implies and does not accept the implicit assumption that something is wrong with the organization.

First, the values and ethical beliefs that underlie OD suggest that both organization members and OD practitioners should be involved in discovering the determinants of current organization effectiveness. Similarly, both should be involved actively in developing appropriate interventions and implementing them. For example, a manager might seek an OD practitioner's help to reduce absenteeism in his or her department. The manager and an OD consultant jointly might decide to diagnose the cause of the problem by examining company absenteeism records and by interviewing selected employees about possible reasons for absenteeism. Alternatively, they might examine employee loyalty and discover the organizational elements that encourage people to stay. Analysis of those data could uncover determinants of absenteeism or loyalty in the department, thus helping the manager and the OD practitioner jointly to develop an appropriate intervention to address the issue.

Second, the medical model of diagnosis also implies that something is wrong with the patient and that one needs to uncover the cause of the illness. In those cases where organizations do have specific problems, diagnosis can be problem oriented, seeking reasons for the problems. On the other hand, as suggested by the absenteeism example above, the OD practitioner and the client may choose one of the newer views of organization change and frame the issue positively. Additionally, the client and the OD practitioner may be looking for ways to enhance the organization's existing functioning. Many managers involved with OD are not experiencing specific organizational problems. Here, diagnosis is development oriented. It assesses the current functioning of the organization to discover areas for future development. For example, a manager might be interested in using OD to improve a department that already seems to be functioning well. Diagnosis might include an overall assessment of both the task performance capabilities of the department and the impact of the department on its individual members. This process seeks to uncover specific areas for future development of the department's effectiveness.

In organization development, diagnosis is used more broadly than a medical definition would suggest. It is a collaborative process between organization members and the OD practitioner to collect pertinent information, analyze it, and draw conclusions for action planning and intervention. Diagnosis may be aimed at uncovering the causes of specific problems, focused on understanding effective processes, or directed at assessing the overall functioning of the organization or department to discover areas for future development. Diagnosis provides a systematic understanding of organizations so that appropriate interventions may be developed for solving problems and enhancing effectiveness.

5-2 The Need for Diagnostic Models

Entry and contracting processes can result in a need to understand either a whole system or some part, process, or feature of the organization. To diagnose an organization, OD practitioners and organization members need to have an idea about what information to collect and analyze. Choices about what to look for invariably depend on how organizations are conceived. Such conceptions can vary from intuitive hunches to scientific explanations of how organizations function. Conceptual frameworks that OD practitioners use to understand organizations are referred to as "diagnostic models." They describe the relationships among different features of the organization, as well as its environment and its effectiveness. As a result, diagnostic models point out what areas to examine and what questions to ask in assessing how an organization is functioning.

However, all models represent simplifications of reality and therefore emphasize certain organizational features as critical while ignoring other features. Focusing attention on particular features, often to the exclusion of others, can result in a biased diagnosis. For example, a diagnostic model that relates team effectiveness to the handling of interpersonal conflict would lead an OD practitioner to ask questions about relationships among members, decision-making processes, and conflict resolution methods. Although relevant, those questions ignore other group issues such as member skills and knowledge, the complexity of the tasks performed by the group, and task interdependencies. Thus, OD practitioners must choose diagnostic models and processes carefully to address the organization's presenting problems as well as to ensure comprehensiveness.

Potential diagnostic models are everywhere. Any collection of concepts and relationships that attempts to represent a system or explain its effectiveness can potentially qualify as a diagnostic model. Major sources of diagnostic models in OD are the thousands of articles and books that discuss, describe, and analyze how organizations function. They provide information about how and why certain organizational systems, processes, or functions are effective. The studies often concern a specific facet of organizational behavior, such as employee stress, leadership, motivation, problem solving, group dynamics, job design, and career development. They also can involve the larger organization and its context, including the environment, strategy, structure, and culture. Diagnostic models can be derived from that information by noting the dimensions or variables that are associated with an organization's effectiveness.

Another source of diagnostic models is OD practitioners' experience in organizations. So-called "field knowledge" offers a wealth of practical information about how organizations operate. Unfortunately, only a small part of that vast experience has been translated into diagnostic models that represent the professional judgments of people with years of experience in organizational diagnosis. The models generally link diagnosis with specific organizational processes, such as group problem solving, employee motivation, or communication between managers and employees. The models list specific questions for diagnosing such processes.

This chapter presents a general framework for diagnosing organizations rather than trying to cover the range of OD diagnostic models. The framework describes the systems perspective prevalent in OD today and integrates several of the more popular diagnostic models. The systems model provides a useful starting point for diagnosing organizations, groups, and individual jobs. (Chapters 10–20 present additional diagnostic models that are linked to specific OD interventions.)

5-3 Open-Systems Model

This section introduces systems theory, a set of concepts and relationships describing the properties and behaviors of things called systems—organizations, groups, and jobs, for example. Systems are viewed as unitary wholes composed of parts or subsystems; the system serves to integrate the parts into a functioning unit. For example, organization systems are composed of groups or departments, such as sales, operations, and finance. The organization serves to coordinate behaviors of its departments so that they function together in service of an organization goal or strategy. The general framework that underlies most of the diagnosing in OD is called the "open-systems model."

5-3a Organizations as Open Systems

As shown in Figure 5.1, the open-systems model recognizes that organizations exist in the context of a larger environment that affects how the organization performs and, in turn, is affected by how the organization interacts with it. The model suggests that organizations acquire specific inputs from the environment and transform them using social and technical processes. The outputs of the transformation process are returned to the environment, and information about the consequences of those outputs serves as feedback to the organization's functioning.

The open-systems model also suggests that organizations and their subsystems— groups and individual jobs—share a number of common features that explain how they are organized and how they function. For example, open systems display a hierarchical ordering. Each higher level of system is composed of lower-level systems: Systems at the level of society are comprised of organizations; organizations are comprised of groups; and groups are comprised of individual jobs. Although systems at different levels vary in many ways—in size and complexity, for example—they have a number of common characteristics by virtue of being open systems. The following open-systems properties are described below: environments; inputs, transformations, and outputs; boundaries; feedback; and alignment.

FIGURE 5.1

The Open-Systems Model

© Cengage Learning 2015


Environments Environments are everything outside of the system that can directly or indirectly affect its outputs. Open systems, such as organizations and groups, exchange information and resources with their environments. Because these external forces influence the system, organizations cannot completely control their own behavior. Organizations, for example, are affected by such environmental conditions as the availability of labor and human capital, raw material, customer demands, competition, and government regulations. Understanding how these external forces affect the organization can help to explain some of its internal behavior.

Inputs, Transformations, and Outputs Organizational systems are composed of three related properties: inputs, transformations, and outputs. Inputs consist of human capital or other resources, such as information, energy, and materials, coming into the system from the environment. For example, a manufacturing organization acquires raw materials from an outside supplier. Similarly, a hospital nursing unit acquires information concerning a patient's condition from the attending physician. In each case, the system (organization or nursing unit) obtains resources (raw materials or information) from its environment.

Transformations are the processes of converting inputs into outputs. In organizations, a production or operations function composed of both social and technological components generally carries out transformations. The social component consists of people and their work relationships, whereas the technological component involves tools, techniques, and methods of production or service delivery. Organizations have developed elaborate mechanisms for transforming incoming resources into goods and services. Banks, for example, transform deposits into mortgage loans and interest income. Schools attempt to transform students into more educated people. Transformation processes also can take place at the group and individual levels. For example, research and development departments can transform the latest scientific advances into new product ideas, and bank tellers can transform customer requests into valued services.

Outputs are the results of what is transformed by the system and sent to the environment. Thus, inputs that have been transformed represent outputs that leave the system. Group health insurance companies receive premiums and medical bills, transform them through record keeping, and export payments to hospitals and physicians.

Boundaries The idea of boundaries helps to distinguish between organizational systems and their environments. Boundaries—the borders or limits of the system—help to protect or buffer the organization's transformation process from external disruptions; they also assure that the right inputs enter the organization and the relevant outputs leave it. An organizational system's boundaries can vary in permeability, with some systems, such as a highly cohesive work team on the shop floor, being relatively closed to the environment and other systems, such as a field sales force, being open to external forces. Organizational boundaries are determined not only by physical location, but also can be defined for managerial, technical, or social purposes. For example, to facilitate managerial control, a department's boundaries could encompass all members reporting to a common administrator; to promote a smooth workflow, the department's boundaries might include suppliers, employees, and customers located along a common supply chain; or to foster cohesion among members, the department's boundaries could embrace those members sharing particular social connections and attitudes. Because organizational boundaries can serve different purposes, OD practitioners may need to determine early in the OD process if the client system's boundaries are appropriate for the intended purpose of the change effort. This may result in redefining or changing the client's boundaries before diagnosing begins. For example, the boundaries that identify a particular client system on an organization chart might be well suited for addressing leadership issues in that unit. However, the client system's boundaries might have to be enlarged to include other related departments if the intent of OD is to improve coordination among interdependent work groups.

Feedback As shown in Figure 5.1, feedback is information regarding the actual performance or the outputs of the system. Not all such information is feedback, however. Only information used to control the future functioning of the system is considered feedback. Feedback can be used to maintain the system in a steady state (for example, keeping an assembly line running at a certain speed) or to help the organization adapt to changing circumstances. McDonald’s, for example, has strict feedback processes to ensure that a meal in one outlet is as similar as possible to a meal in any other outlet. On the other hand, a salesperson in the field may report that sales are not going well and may insist on some organizational change to improve sales. A market-research study may lead the marketing department to recommend a change to the organization’s advertising campaign.

Alignment How well a system’s different parts and elements align with each other partly determines its overall effectiveness. This alignment or fit concerns the relationships between the organization and its environment as well as among the components that comprise the design of the organization. Alignment represents the extent to which the features and operations of one component support the effectiveness of another component. Just as the teeth in the wheels of a watch must mesh perfectly for the watch to keep time, so do the parts of an organizational system need to mesh for it to be effective. Diagnosing environmental relationships and the interactions among the various components of an organizational system requires taking “a systemic perspective.” This view suggests that diagnosing often involves the search for misalignments among the various parts of an organizational system.

5-3b Diagnosing Organizational Systems

When viewed as open systems, organizations can be diagnosed at three levels. The highest level is the overall organization and includes the company’s strategy, structure, and processes. Large organization units, such as divisions, subsidiaries, or strategic business units, also can be diagnosed at that level. The next lowest level is the group or department, which includes group design and methods for structuring interactions among members, such as norms and work schedules. The lowest level is the individual position or job. This includes ways in which jobs are designed to elicit required task behaviors.

Diagnosis can occur at all three organizational levels, or it may be limited to issues occurring at a particular level. The key to effective diagnosis is knowing what to look for at each level as well as how the levels affect each other.2 For example, diagnosing a work group requires knowledge of the variables important for group functioning and how the larger organization design affects the group. In fact, a basic understanding of organization-level issues is important in almost any diagnosis because they serve as critical inputs to understanding groups and jobs.

Figure 5.2 presents a comprehensive model for diagnosing these different organizational systems. For each level, it shows (1) the inputs that the system has to work with, (2) the key components for designing the system, and (3) the system’s outputs. The relationships shown in Figure 5.2 illustrate how each organization level affects the lower levels.

94 PART 2 THE PROCESS OF ORGANIZATION DEVELOPMENT

The environment is the key input to organization design decisions. Organization design is an input to group design, which in turn serves as an input to job design. These cross-level relationships emphasize that organizational levels must fit with each other if the organization is to operate effectively. For example, organization structure must fit with and support group task design, which in turn must fit with individual-job design.

FIGURE 5.2

Comprehensive Model for Diagnosing Organizational Systems

© Cengage Learning 2015


The following sections of this chapter address diagnosing at each of the three levels—organization, group, and individual job. General overviews of the dimensions (and their relationships) that need to be understood at each level are presented. It is beyond the scope of this chapter to describe in detail the many variables and relationships reported in the extensive literature on organizations. However, specific diagnostic questions are identified and concrete examples are included as an introduction to this phase of the planned change process.

5-4 Organization-Level Diagnosis

The organization level of analysis is the broadest systems perspective typically taken in diagnostic activities. (In some cases, OD is applied to a multiorganization system; those change processes are discussed in Chapter 20 on Transorganizational Change.) The model shown in Figure 5.2 is similar to other popular organization-level diagnostic models. These include Weisbord’s six-box model,3 Nadler and Tushman’s congruency model,4 Galbraith’s star model,5 and Kotter’s organization dynamics model.6 Figure 5.2 shows that an organization’s design components represent the way the organization organizes itself within an environment (inputs) to achieve specific results (outputs).7 To understand how a total organization functions, it is necessary to examine particular inputs, design components, and the alignment of the two sets of dimensions.

5-4a Inputs

Figure 5.2 shows that three key inputs or environmental types affect the way an organization is designed. We first describe these environments and then identify environmental dimensions that influence how organizations respond to external forces.

Environmental Types Three classes of environments influence how organizations function and achieve results: the general environment, the task environment, and the enacted environment.8

The general environment consists of all external forces that can directly or indirectly affect an organization.9 The general environment can include a variety of social, technological, economic, ecological, and political/regulatory forces. These forces may interact in unique and unpredictable ways, presenting the organization with challenging threats and opportunities. Each of the forces also can affect the organization in both direct and indirect ways. For example, an organization may have trouble obtaining raw materials from a supplier because a national union is grieving the supplier’s employment practices, a government regulator is bringing a lawsuit against the supplier, or a consumer group is boycotting the supplier’s products. Thus, parts of the general environment can affect the organization without having any direct connection to it.

The task environment is another important organization input. Michael Porter defined an organization’s task environment in terms of industry structure represented by five forces: supplier power, buyer power, threats of substitutes, threats of entry, and rivalry among competitors.10 First, an organization must be sensitive to powerful suppliers who can increase prices (and therefore lower profits) or force the organization to pay more attention to the supplier’s needs than to its own needs. For example, unions represent powerful suppliers of labor that can affect the costs of any organization within an industry. Second, a firm must respond to powerful buyers. Powerful retailers, such as Walmart and Costco, can force Procter & Gamble, Johnson & Johnson, or other suppliers to lower prices or deliver their products in particular ways. Third, an organization must be sensitive to the threat of new firms entering into competition. Profits in the restaurant business tend to be low because of the ease of starting a new restaurant. Fourth, a company must respond to the threat of new products or services that can replace existing offerings. Ice cream producers must carefully monitor their costs and prices because it is easy for a consumer to purchase frozen yogurt or other types of desserts instead. Finally, an organization must be sensitive to rivalry among existing competitors. If many organizations are competing for the same customers, then the organization must be responsive to product offerings, costs, and structures if it is to survive and prosper. Together, these five forces play an important role in determining an organization’s success, whether it is a manufacturing or service firm, a nonprofit organization, or a government agency.

While the general environment and the task environment describe the objective pressures an organization faces, the organization must first recognize those forces. The enacted environment consists of organization members’ perception and representation of the general and task environments. Environments must be perceived before they can influence decisions about how to respond to them.11 Organization members must actively observe, register, and make sense of the environment before it can affect their decisions about what actions to take. Thus, only the enacted environment can affect which organizational responses are chosen. The general and task environments, however, influence whether those responses are successful or ineffective. For example, members may perceive customers as relatively satisfied with their products and may decide to make only token efforts at developing new products. If those perceptions are wrong and customers are dissatisfied with existing products, the meager product development efforts can have disastrous organizational consequences. As a result, an organization’s enacted environment should accurately reflect its general and task environments if members’ decisions and actions are to be effective.12

Environmental Dimensions In addition to understanding what inputs are at work, the environment can be understood in terms of its rate of change and complexity.13

The rate of change in an organization’s general environment or task environment can be characterized along a dynamic–static continuum. Dynamic environments change rapidly and unpredictably while static environments change more slowly and predictably. The complexity of the environment refers to the number of different elements in the general and task environments that can significantly affect the organization. Some organizations, such as software development firms, face dynamic and complex environments. Not only do technologies, regulations, customers, and suppliers change rapidly, but also all of them are important to the firm’s survival. On the other hand, other organizations, such as manufacturers of glass containers, face more stable and less complex environments.

A useful way to understand how the rate of change and complexity of environments influence organizations is to view environments as information flows that organizations need to process to discover how to relate to their environments.14 The key dimension of the environment affecting information processing is information uncertainty, or the degree to which environmental information is ambiguous. Organizations seek to remove uncertainty from the environment so that they know how to transact with it. For example, organizations may try to discern customer needs through focus groups and surveys and attempt to understand competitor strategies through press releases, sales force behaviors, and knowledge of key personnel. The greater an organization environment’s rate of change and complexity, the more information uncertainty the organization faces, and consequently, the more information the organization must process to learn about the environment.15 Thus, dynamic and complex environments pose difficult information-processing problems for organizations. For example, global competition, technological change, and financial markets have created highly uncertain environments for many multinational firms and have severely strained their information-processing capacity.

5-4b Design Components

Figure 5.2 shows that an organization’s design is composed of four components—technology, structure, management processes, and human resources systems. It is surrounded by an intermediate input—strategy—and an intermediate output—culture—that need to be considered along with the organization’s design. Effective organizations align their strategy to environmental inputs and then fit the design components to each other to support the strategy and to jointly promote strategic behaviors. (Chapter 18 describes strategy and organization design interventions.)

A strategy represents the way an organization uses its resources (human, economic, or technical) to achieve its goals and to gain a competitive advantage in a particular environment.16 Because strategy defines how an organization positions itself to compete in an environment, it is shown in Figure 5.2 as an intermediate input between the environment and the four design components. A complete statement of strategy includes the organization’s mission, goals and objectives, strategic intent, and functional policies.17 An organization’s mission defines the long-term purpose of the organization, the range of products or services offered, the markets served, and the societal needs addressed. Goals and objectives include specific targets for achieving strategic success. They provide explicit direction, set organization priorities, provide guidelines for management decisions, and serve as the cornerstone for organizing activities and setting standards of achievement. “Strategic intent” is a succinct label or metaphor that describes how the organization intends to leverage three resource dimensions—breadth, aggressiveness, and differentiation—to achieve its goals and objectives. For example, in 2013, a turnaround strategic intent drove Nokia’s goals of restoring financial confidence. That turnaround can be characterized by a narrower (as opposed to broader) focus on specific markets and products, increased aggressiveness demonstrated by its marketing expenditures and internal cost reductions, and improved differentiation through its alliance with Microsoft and its Windows 8 operating system. Finally, functional policies are the methods, procedures, rules, or administrative practices that guide decision making and convert strategic plans into actions. In the semiconductor business, for example, Intel had a policy of allocating about 30% of revenues to research and development to maintain its lead in microprocessor production.18 (Chapters 18 and 19 describe strategy interventions.)

Technology is concerned with the way an organization converts inputs into products and services. It represents the core transformation process and includes production methods, workflow, and equipment. Two features of the technological core have been shown to influence other design components: technical interdependence and technical uncertainty.19 Technical interdependence involves the extent to which the different parts of a technological system are related. High interdependence requires considerable coordination among tasks, such as might occur when departments must work together to bring out a new product. Technical uncertainty refers to the amount of information processing and decision making required during task performance. Generally, when tasks require high amounts of information processing and decision making, they are difficult to plan and routinize. The technology of car manufacturing is relatively certain and moderately interdependent. As a result, automobile manufacturers can specify in advance the behaviors workers should exhibit and how their work should be coordinated.


Structure is the basic organizing mode for (1) dividing the overall work of an organization into subunits that can assign tasks to groups or individuals and (2) coordinating these subunits for completion of the overall work.20 Structure, therefore, needs to be closely aligned with the organization’s technology. Organization structure can divide work by function (e.g., accounting, sales, or production), by product or service (e.g., Chevrolet, GMC, or Cadillac), by customer (e.g., large, medium, or small enterprise), or by some combination of these (e.g., a matrix composed of functional departments and product groupings). Structures can coordinate work across subunits through the managerial hierarchy or a variety of lateral mechanisms, such as plans and schedules, budgets, project managers, liaison positions, integrators, cross-departmental task forces, and matrix relationships. The amount of coordination required in a structure is a function of (1) the amount of uncertainty in the environment, (2) the degree to which subunits differ from each other, and (3) the amount of interdependence among subunits.21

As uncertainty, subunit difference, and interdependence increase, more sophisticated coordinating devices are required.22 (Chapter 12 discusses structural interventions.)

Management processes are methods for processing information, making decisions, and controlling the operation of the organization. They help the organization to understand how well it is performing, to detect and control deviations from goals, to make relevant decisions, and to communicate the results. Closely related to structural coordination, management processes monitor organizational operations and feed data about work activities to managers and members so that they can better understand current performance, make relevant decisions, and coordinate work. Effective information, decision making, and control systems are linked closely to strategic objectives; provide accurate, understandable, and timely information; are accepted as legitimate by organization members; and produce benefits in excess of their cost.

Human resources systems include mechanisms for selecting, developing, appraising, and rewarding organization members. These influence the mix of skills, personal characteristics, and behaviors of organization members. An organization’s strategy and technology provide important information about the skills and knowledge required if the organization is to be successful. Appraisal processes identify whether those skills and knowledge are being applied to the work, and reward systems complete the cycle by recognizing performance that contributes to goal achievement. Reward systems may be tied to measurement systems so that rewards are allocated based on measured results. (Chapters 15, 16, and 17 discuss specific human resources systems, such as rewards and career development.)

Organization culture represents the basic assumptions, values, and norms shared by organization members.23 Those cultural elements are generally taken for granted and serve to guide members’ perceptions, thoughts, and actions. For example, McDonald’s culture emphasizes efficiency, speed, and consistency. It orients employees to company goals and suggests the kinds of behaviors necessary for success. In Figure 5.2, culture is shown as an intermediate output from the four design components because it represents both an outcome and a constraint. Culture initially derives from an organization founder’s values and is reinforced and sustained through organization selection and socialization processes. It is also an outcome of the organization’s history and environment as well as of prior choices made about the strategy, technology, structure, management processes, and human resources systems. Because organization culture is personally internalized, it can be difficult to change and can restrict an organization’s ability to change its strategy and organization design components.24 In that sense, culture can either hinder or facilitate organization change. In diagnosing organizations, the culture needs to be understood well enough to determine its alignment with the organization’s strategy and the four design components. (Chapter 18 discusses culture change in more detail.)


5-4c Outputs

The outputs of organization design are measures of how well the design contributes to organization effectiveness. This can include three kinds of outcomes. First, organization performance refers to financial outcomes, such as sales, profits, return on investment (ROI), or earnings per share (EPS). For nonprofit and government agencies, performance often refers to the extent to which costs were lowered or budgets met. Second, productivity concerns internal measurements of efficiency, such as sales per employee, waste, error rates, quality, or units produced per hour. Third, stakeholder satisfaction reflects how well the organization has met the expectations of different groups having an interest in the organization. For example, customer loyalty can be measured in terms of market share or focus-group data; employee engagement can be measured in terms of an opinion survey; investor satisfaction can be measured in terms of stock price or analyst opinions; and environmental sustainability can be measured by the organization’s carbon footprint.

5-4d Alignment

Diagnosing the effectiveness of an organization requires knowledge of the above elements to determine the alignment or fit among them.

1. Does the organization’s strategy fit with the inputs? To be effective, an organization’s strategy needs to be responsive to the general and task environments. They include external forces, threats, and opportunities that need to be considered in making strategic choices about mission, goals and objectives, strategic intent, and functional policies. The organization makes those choices based on members’ perceptions of the environment (the enacted environment). Thus, the organization’s information-processing and strategy-making capabilities must match the information uncertainty of the general and task environments if the organization’s perceptions and strategic choices are to accurately reflect external realities. Environments that change rapidly and are complex are highly uncertain. In these situations, organizations need to constantly process information and monitor wide segments of their environments; their strategy-making process needs to be flexible, resulting in strategic choices that can quickly be adapted to changing external conditions. (Chapter 19 describes dynamic strategy-making interventions.) Conversely, organizations can periodically assess selected parts of the environment and make strategic choices that are stable over moderate to long periods of time when the information uncertainty of their general and task environments is relatively low.

2. Do the organization design components fit with each other to jointly support the strategy? For example, if the organization’s strategy is highly flexible and responsive to environmental change, then the design components must mutually support and reinforce agile and adaptable organizational behaviors. Successful firms in Silicon Valley, such as Apple and Oracle, tend to have flexible strategies that promote innovation and change. Their organization design components include leading-edge technologies that are complex and uncertain; flexible structures that emphasize agility and fast responses; management processes that provide rapid information and feedback and promote employee decision making; and human resource policies that select, develop, and reward talented employees. These flexible and agile firms have organization cultures that value technical sophistication, member commitment, invention, and customer loyalty.


5-4e Analysis

Application 5.1 describes the Steinway organization and provides an opportunity to perform an organization-level analysis.25 A useful starting point is to examine outputs and to ask about the organization’s current effectiveness. Steinway has excellent market share in the high-quality segment of the grand piano market, a string of improving financial measures, and strong customer loyalty. However, the data on employee satisfaction are mixed; there are both long-tenured people and an indication that some workers are leaving for other jobs. Financial improvements appear modest when contrasted with industry averages. Understanding the underlying causes of these effectiveness issues begins with an assessment of the inputs and organization design and then proceeds to an evaluation of the alignments among the different parts.

In diagnosing the inputs, two questions are important.

1. What is the company’s general environment? Steinway’s external environment is only moderately uncertain and not very complex. Socially, Steinway is an important part of a country’s artistic culture and the fine arts. It must be aware of fickle trends in music and display an appropriate sensitivity to them. Politically, the organization operates on a global basis and its distribution and sales networks must be attuned to different governmental and country requirements. The manufacturing plant in Hamburg, Germany, suggests an important political dependency that must be monitored. Technologically, Steinway appears reasonably concerned about the latest breakthroughs in piano design, materials, and construction. They are aware of alternative technologies, such as the assembly-line process at Yamaha, but prefer the classic methods they have always used. Ecologically, Steinway must be mindful. Their product requires lumber and they are very selective (some would say wasteful) about the choices, rejecting many pieces. It is likely that environmentalists would express concern over how Steinway uses this natural resource. Together, these environmental forces suggest a relatively moderate level of uncertainty. Most of these issues are knowable and can be forecast with some confidence. In addition, while there are several environmental elements that need to be addressed, not all of them are vitally important. The environment is not very complex.

2. What is the company’s task environment? Steinway’s industry is moderately competitive and profit pressures can be mapped by looking at five key forces. First, the threat of entry is fairly low. There are some important barriers to cross if an organization wanted to get into the piano business. For example, Steinway, Yamaha, and Baldwin have very strong brands and dealer networks. Any new entrant would need to overcome these strong images to get people to buy their product. Second, the threat of substitute products is moderate. On the one hand, electronic keyboards have made important advances and represent an inexpensive alternative to grand and upright pianos. On the other hand, the sophisticated nature of many of the artists and audiences suggests that there are not many substitutes for a concert grand piano. Third, the bargaining power of suppliers, such as providers of labor and raw materials, is high. The labor union has effective control over the much-sought-after craft workers who manufacture and assemble grand pianos. Given the relatively difficult time that most high-end piano manufacturers have in holding onto these highly trained employees, the organization must expend considerable resources to retain them. Similarly, given the critical nature of wood to the final product, lumber suppliers can probably exert significant influence. Fourth, the bargaining power of buyers varies by segment. In the high-end segment, the number of buyers is relatively small and sophisticated, and the small number of high-quality


application 5.1

STEINWAY & SONS

Steinway & Sons, which turned 160 years old in April 2013, is generally regarded as the finest piano maker in the world. Founded in 1853 by the Steinway family, the firm was sold to CBS in 1972, taken private in 1985 by John and Robert Birmingham, and sold again in 1995 to Dana Messina and Kyle Kirkland, who took it public in 1996. Steinway & Sons is the piano division of the Steinway Musical Instruments Company that also owns Selmer Instruments and other manufacturers of band instruments (www.steinwaymusical.com). Piano sales in 2002 were $169 million, down 7.6% from the prior year and mirroring the general economic downturn. Since going public, Steinway’s corporate revenues have grown a compounded 6–7% a year, while earnings per share have advanced, on average, a compounded 11%. The financial performance for the overall company in 2002 was slightly below industry averages.

The Steinway brand remains one of the company’s most valuable assets. The company’s president notes that despite having only 2% of all keyboard unit sales in the United States, they have 25% of the sales dollars and 35% of the profits. Their market share in the high-end grand piano segment is consistently over 80%. For example, 98% of the piano soloists at 30 of the world’s major symphony orchestras chose a Steinway grand during the 2000/2001 concert season. Over 1,300 of the world’s top pianists, all of whom own Steinways and perform solely on Steinways, endorse the brand without financial compensation.

Workers at Steinway & Sons manufacturing plants in New York and Germany have been with the company for an average of 15 years, often over 20 or 30 years. Many of Steinway’s employees are descendants of parents and grandparents who worked for the company.

THE ENVIRONMENT

The piano market is typically segmented into grand pianos and upright pianos, with the former being a smaller but higher-priced segment. In 1995, about 550,000 upright pianos and 50,000 grand pianos were sold. Piano customers can also be segmented into professional artists, amateur pianists, and institutions such as concert halls, universities, and music schools. The private (home) market accounts for about 90% of the upright piano sales and 80% of the grand piano sales, with the balance being sold to institutional customers. New markets in Asia represent important new growth opportunities.

The piano industry has experienced several important and dramatic changes for such a traditional product. Industry sales, for example, dropped 40% between 1980 and 1995. Whether the decline was the result of increased electronic keyboard sales, a real decline in the total market, or some temporary decline was a matter of debate in the industry. Since then, sales growth has tended to reflect the ups and downs of the global economy.

Competition in the piano industry has also changed. In the United States, several hundred piano makers at the turn of the century had consolidated to eight by 1992. The Baldwin Piano and Organ Company is Steinway’s primary U.S. competitor. It offers a full line of pianos under the Baldwin and Wurlitzer brand names through a network of over 700 dealers. In addition to relatively inexpensive upright pianos produced in high-volume plants, Baldwin also makes handcrafted grand pianos that are well-respected and endorsed by such artists as Dave Brubeck and Stephen Sondheim, and by the Boston, Chicago, and Philadelphia orchestras. Annual sales are in the $100 million range; Baldwin was recently sold to the Gibson Guitar Company. The European story is similar. Only Bösendorfer of Austria and Fazioli of Italy remain as legitimate Steinway competitors.

Several Asian companies have emerged as important competitors. Yamaha, Kawai, Young Chang, and Samick collectively held about 35% of the vertical piano market and 80% of the grand piano market in terms of units and 75% of global sales in 1995. Yamaha is the world’s largest piano manufacturer with sales of over $1 billion and a global market share of about 35%. Yamaha’s strategy has been to produce consistent piano quality through continuous improvement. A separate handcrafted concert grand piano operation has also tried to use continuous improvement methods to create consistently high-quality instruments. More than any other high-quality piano manufacturer, Yamaha has been able to emulate and compete with Steinway.

THE STEINWAY ORGANIZATION

Steinway & Sons offers several different pianos, including two brands (Steinway and the less expensive Boston brand) and both upright and grand piano models. The company handcrafts its grand pianos in New York and Germany, and sells them through more than 200 independent dealers. About half of the dealers are in North and South America and approximately 85% of all Steinway pianos are sold through this network. The company also owns retail outlets in New York, New Jersey, London, Munich, Hamburg, and Berlin.

The dealer network is an important part of Steinway’s strategy because of its role in the “concert bank” program. Once artists achieve a certain status, they are invited to become part of this elite group. The performer can go to any local dealer, try out different pianos, and pick the one they want to use at a performance for only the cost of bringing the piano to the concert hall. The concert bank contains over 300 pianos in more than 160 cities. In return for the service, Steinway is given exclusive use of the performer’s name for publicity purposes.

Creating a Steinway concert grand piano is an art, an intricate and timeless operation (although alternate methods have been created and improved, the basic process hasn’t changed much). It requires more than 12,000 mostly handcrafted parts and more than a little magic. The tone, touch, and sound of each instrument are unique, and 120 technical patents and innovations contribute to the Steinway sound. Two years are required to make a Steinway grand as opposed to a mass-produced piano that takes only about 20 days. There are three major steps in the production process: wood-drying (which takes about a year), parts-making, and piano-making.

Wood-drying operations convert moisture-rich lumber into usable raw material through air-drying and computer-controlled kilns. Time is a critical element in this process because slow and natural drying is necessary to ensure the best sound-producing qualities of the wood. Even after all the care of the drying process, the workers reject approximately 50% of the lumber.

After drying, the parts-making operations begin. The first of these operations involves bending of the piano rim (the curved side giving a grand piano its familiar shape). These rims are formed of multiple layers of specially selected maple that are manually forced into a unified shape, held in presses for several hours, and then seasoned for 10 weeks before being joined to other wooden parts. During this time, the sounding board (a specially tapered Alaska Sitka spruce panel placed inside the rim to amplify the sound) and many other case parts are made. The final critical operation in parts-making involves the fabrication of the 88 individual piano action sets that exist inside a piano. Piano “actions” are the intricate mechanical assemblies—made almost completely of wood and some felt, metal, and leather—that transmit finger pressure on the piano keys into the force that propels the hammers that strike the strings. The action is a particularly important part of a piano because this mechanical linkage gives Steinways their distinctive feel. In the action department, each operator is responsible for inspecting his or her own work, with all assembled actions further subject to 100% inspection.

Piano-making operations include “bellying,” finishing, and tone regulating. The bellying process involves the precise and careful fitting of the soundboard, iron piano plate, and rim to each other. It requires workers to lean their stomachs against the rim of the piano to complete this task. Because of individual variations in material and the high degree of precision required, bellying takes considerable skill and requires several hours per piano. After the bellying operations, pianos are strung and moved to the finishing department. During finishing, actions and keyboards are individually fit to each instrument to accommodate differences in materials and tolerances to produce a working instrument. The final piano-making step involves tone regulating. Here, the pianos are “voiced” for Steinway sound. Unlike tuning, which involves the loosening and tightening of strings, voicing requires careful adjustments to the felt surrounding the hammers that strike the strings. This operation is extremely delicate and is performed by only a small handful of tone regulators. The tone regulators at Steinway are widely considered to be among the most skilled artisans in the factory. Their voicing of a concert grand can take as much as 20 to 30 hours. All tone regulators at Steinway have worked for the company in various other positions before reaching their present posts, and several have more than 20 years with the firm. Finally, after tone regulation, all pianos are polished, cleaned, and inspected one last time before packing and shipment.

Steinway produced more than 3,500 pianos in 2002 at its New York and Hamburg, Germany, plants. Almost 430 people work in the New York plant and all but about 100 of them work in production. They are represented by the United Furniture Worker’s union. Seventy-five percent of the workers are paid on a straight-time basis; the others, primarily artisans, are paid on piece rates. Retaining workers has proved increasingly difficult as well-trained Steinway craftspeople are coveted by other manufacturers, and many of the workers could easily set up their own shop to repair or rebuild older Steinway pianos. Excess inventories due to weak sales both before and after September 11 forced Steinway to adjust its production schedule; rather than lay off highly skilled workers needed to build its pianos, workers in the New York plant reported to work every other week.

pianos means that customers can put pressure on prices, although they are clearly willing and able to pay more for quality. In the middle and lower segments, the number of buyers is much larger and fragmented. It is unlikely that they could collectively exert influence over price. Finally, the rivalry among firms is severe. A number of well-known and well-funded domestic and international competitors exist. Almost all of them have adopted marketing and manufacturing tactics similar to Steinway’s in the high-end segment, and they are competing for the same customers. The extensive resources available to Yamaha as a member of their keiretsu, for example, suggest that it is a strong and long-term competitor that will work hard to unseat Steinway from its position. Thus, powerful buyers and suppliers as well as keen competition make the piano industry only moderately attractive and represent the key sources of uncertainty that Steinway faces. Overall, Steinway executives’ perceptions of the general and task environments seem to be accurate.

The following questions are important in assessing Steinway’s strategy and organization design:

1. What is the company’s strategy? Steinway’s primary strategy is a sophisticated niche and differentiation strategy. It attempts to meet its financial and other objectives by offering a unique and high-quality product to sophisticated artists. However, its product line does blur the strategy’s focus. With both Boston and Steinway brands and both upright and grand models, a question about Steinway’s commitment to the niche strategy could be raised. No formal mission or goals are mentioned in the case, and this makes it somewhat difficult to judge the effectiveness of the strategy. Nevertheless, it seems reasonable to assume a clear intent to maintain its dominance in the high-end segment. However, with new owners in 1995, it is also reasonable to question whether goals of profitability or revenue growth, implying very different tactics, have been sorted out.

2. What are the company’s technology, structure, management processes, human resources systems, and culture? First, Steinway’s core technology is highly uncertain and moderately interdependent. The manufacturing process is craft-based and dependent on the nature of the materials. Each piano is built and adjusted with the specific characteristics of the wood in mind, so much so that each piano has a different sound produced by the manufacturing process. The technology is moderately interdependent because the major steps in the process are not closely linked in time. Making the “action sets” is independent of the “bellying” process, for example. Similarly, the key marketing program, the concert bank, is independent of manufacturing. Second, the corporate structure is divisional (pianos and band instruments), while the piano subsidiary appears to have a functional structure. The key functions are manufacturing, distribution, and sales. A procurement, finance, and human resources group is also reasonable to assume. Third, management processes are focused on the production system. There are specific mentions of inspections by both the worker and the organization. For example, 100% inspection (as opposed to statistical sampling) costs time and manpower and no doubt is seen as critical to quality. In addition, there must be some system of keeping track of work-in-progress, finished goods, and concert bank inventories. Fourth, the human resources system is highly developed. It includes a reward system that is both hourly and piece rate; a unionized employee relationship; worker retention programs; and global hiring, compensation, benefits, and training programs. Fifth, while there is little specific information, Steinway’s culture can be inferred. The dominant focus on the high-end segment, the craft nature of the production process, the importance of the concert bank program, and the long history of family influence all point to a culture of quality, craftsmanship, and responsiveness.
These values are manifest in the way the organization chooses its raw materials, the way it caters to its prized customers, the care in the production process, and the image it works to retain.

Now that the organization inputs, design components, and outputs have been assessed, it is time to ask the crucial question about how well they fit together. The first concern is the fit between the environmental inputs and the strategy. The moderate complexity and uncertainty in the general and task environments argue for a strategy that is flexible enough to address the few critical dependencies but formal enough to control a sophisticated production process. Steinway’s focus on the high-end segment of the industry and the moderate breadth in its product line support this flexibility. It clearly intends to differentiate its product by serving the high-end segment with unique high-quality pianos. However, the market for higher-priced and more specialized concert grands is much smaller than the market for lower-priced uprights and limits the growth potential of sales unless Steinway wants to compete vigorously in the emerging Asian markets where the Asian companies have a proximity advantage. Steinway’s lack of clear strategic goals in general and policies that support neither growth nor profitability also would make entry into new markets difficult. Steinway’s flexible and responsive manufacturing process supports and defends its preeminence as the top grand piano maker in the world. It also mitigates the powerful buyer forces in this segment. Steinway’s moderate product line breadth gives it some flexibility and efficiency as well. It can achieve some production efficiencies in the upright and medium-market grand piano segments, and its brand image helps in marketing these products. Steinway must be careful not to broaden its product line too much, however, as this could dilute its strategic focus on the high-end market. Overall, the alignment between Steinway’s environment and its strategy appears sound.

The second concern is the alignment of the design components to support the strategy. There appears to be a good fit between Steinway’s strategy and the organization design components. The differentiated strategic intent requires technologies, structures, and systems that focus on creating sophisticated and unique products, specialized marketing and distribution, and the concert bank program. The flexible structure, formal inspection systems, and responsive culture seem well suited for that purpose. Steinway’s technology appears aligned with its structure. The production process is craft-based and deliberately not standardized. The functional structure promotes specialization and professionalization of skills and knowledge. Specific tasks that require flexibility and adaptability from the organization are given a wide berth. Although a divisional structure overlays Steinway’s corporate activities, the piano division’s structure is functional but not rigid and appears to be responsive to the craft and the artists it serves. In addition, the concert bank program is important for two reasons. First, it builds customer loyalty and ensures future demand. Second, it is a natural source of feedback on the instruments themselves, keeping the organization close to the artist’s demands and emerging trends in sound preferences. The well-developed human resources system supports the responsive production and marketing functions as well as the global nature of the enterprise. Finally, Steinway’s culture of quality and responsiveness promotes coordination among the production tasks, serves to socialize and develop people, and establishes methods for moving information throughout the organization. Clearly, any change effort at Steinway will have to acknowledge its long-established culture and design an intervention accordingly. The strong culture will either sabotage or facilitate change depending on how the change process aligns with the culture’s values and norms.

Based on this diagnosis of the Steinway organization, at least two OD interventions seem relevant. First, in collaboration with the client, the OD practitioner could suggest increasing the clarity of Steinway’s strategy. In this intervention, the practitioner would want to talk about formalizing—rather than changing—Steinway’s strategy because the culture would likely resist strategy change. However, there are obvious advantages to be gained from a clearer sense of Steinway’s future goals, its businesses, and the relationships among them. Second, Steinway could focus on better coordinating its structure, measurement systems, and human resources systems. The difficulty of retaining key production personnel warrants continuously improved retention systems as well as efforts to codify and retain key production knowledge in case workers do leave. This would apply to the marketing and distribution functions as well, since they control an important interface with the customer.

5-5 Group-Level Diagnosis

Work groups are prevalent in all types and sizes of organizations. They generally consist of a relatively small number of people working together on a shared task either face-to-face or virtually via electronic communication. Work groups can be relatively permanent and perform an ongoing function, or they can be temporary and exist only to perform a certain task or to make a specific decision. Figure 5.2 shows the inputs, design components, outputs, and relational fits for group-level diagnosis. The model is similar to other popular group-level diagnostic models such as Hackman and Morris’s task group design model,26 McCaskey’s framework for analyzing groups,27 and Ledford, Lawler, and Mohrman’s participation group design model.28

5-5a Inputs

Organization design and culture are the major inputs to group design. They consist of the design components characterizing the larger organization within which the group is embedded—technology, structure, management processes, and human resources systems—and organization culture. Technology can determine the characteristics of the group’s task; structural systems can specify the level of coordination required among groups. Management processes can determine how much information the group receives and how much decision making and self-control it can exercise. The human resources and measurement systems, such as performance appraisal and reward systems, play an important role in determining team functioning.29 For example, individual-based, forced-ranking performance appraisal and reward systems tend to interfere with team functioning because members may be concerned with maximizing their individual performance to the detriment of team performance. Organization culture can influence the norms that groups develop to control member behavior. Collecting information about the group’s organization design context can greatly improve the accuracy of diagnosis.

5-5b Design Components

Figure 5.2 shows that group designs have five major components: goal clarity, task structure, group composition, team functioning, and performance norms.

Goal clarity involves how well the group understands its objectives. In general, goals should be moderately challenging; there should be a method for measuring, monitoring, and feeding back information about goal achievement; and the goals should be clearly understood by all members.

Task structure is concerned with how the group’s work is designed. Task structures can vary along two key dimensions: coordination of members’ efforts and regulation of their task behaviors.30 The coordination dimension involves the degree to which group tasks are structured to promote effective interaction among group members. Coordination is important in groups performing interdependent tasks, such as surgical teams and problem-solving groups. It is relatively unimportant, however, in groups composed of members who perform independent tasks, such as a group of call center specialists or salespeople. The regulation dimension involves the degree to which members can control their own task behaviors and be relatively free from external controls such as supervision, plans, and programs. Self-regulation generally occurs when members can decide on such issues as task assignments, work methods, production goals, and membership. (Chapter 14 discusses OD interventions for designing group task structure.)

Group composition concerns the membership of groups. Members can differ on a number of dimensions having relevance to group behavior. Demographic variables, such as age, education, experience, and skills and abilities can affect how people behave and relate to each other in groups. Demographics can determine whether the group is composed of people having task-relevant skills and knowledge, including interpersonal skills. People’s internal needs and personal traits also can influence group behaviors. Individual differences in social needs can determine whether group membership is likely to be satisfying or stressful.31

Team functioning is the underlying basis of group life. It involves group processes having to do with how members relate to each other, which is important in work groups because the quality of relationships can affect task performance. In some groups, for example, interpersonal competition and conflict among members result in their providing little support and help for each other. Conversely, groups may become too concerned about sharing good feelings and spend too little time on task performance. In OD, considerable effort has been invested in helping work-group members develop healthy interpersonal relations, including an ability and a willingness to share feelings and perceptions about members’ behaviors so that interpersonal problems and task difficulties can be worked through and resolved.32 Group functioning, therefore, involves task-related activities, such as advocacy and inquiry; coordinating and evaluating activities; and the group-maintenance function, which is directed toward holding the group together as a cohesive team and includes encouraging, harmonizing, compromising, setting standards, and observing.33 (Chapter 10 presents interpersonal and group process interventions.)

Performance norms are member beliefs about how the group should perform its task and what levels of performance are acceptable.34 Norms derive from interactions among members and serve as guides to group behavior. Once members agree on performance norms, either implicitly or explicitly, then members routinely perform tasks according to those norms. For example, members of problem-solving groups often decide early in the life of the group that decisions will be made through voting; voting then becomes a routine part of group task behavior. (Chapter 10 discusses interventions aimed at helping groups to develop appropriate performance norms.)

5-5c Outputs

Team effectiveness has two dimensions: performance and quality of work life. Performance is measured in terms of the group’s ability to control or reduce costs, increase productivity, or improve quality. It is a “hard” measure of effectiveness. In addition, effectiveness is indicated by group members’ quality of work life. It concerns work satisfaction, team cohesion, and organizational commitment.

5-5d Alignment

Diagnosing team effectiveness requires assessment of how well the group elements described above fit with each other.

1. Does the group design fit with the inputs? As shown in Figure 5.2, the key inputs into group design are the larger organization’s design and culture. Organization designs and cultures that are highly flexible and promote agile and adaptive organizational behaviors generally fit with work groups composed of highly skilled and experienced members performing highly interdependent tasks. Conversely, organization designs and cultures that are bureaucratic and support standardized behaviors generally align with work groups that have clear, quantitative goals and norms and structures that support routine task behaviors and interactions. Although there is little direct research on these fits, the underlying rationale is that congruence between organization design and culture and group design supports overall integration of task behaviors within the organization. When group designs are not compatible with organization designs and cultures, groups often conflict with the organization.35 They may develop norms that run counter to organizational effectiveness, such as occurs in groups supportive of horseplay, goldbricking, and other counterproductive behaviors.

2. Do the group design components fit with each other? The nature of a group’s task determines how the design components should align with each other. When the task is highly interdependent and requires coordination among group members, goal clarity, task structure, group composition, performance norms, and team functioning all need to promote effective task interaction among members. For example, task structure might physically locate related tasks together; group composition might include members with similar interpersonal skills and social needs; performance norms would support task-relevant interactions; and healthy interpersonal relationships would be developed. Conversely, when a group’s task is independent, the design components should promote individual task performance.36 The other relevant task dimension concerns task uncertainty, which has to do with the amount of information processing and decision making that need to occur during task performance. When a work group has an uncertain task, then task structure, group composition, performance norms, and team functioning should promote self-regulation. Members should have the necessary freedom, information, and skills to assign members to appropriate tasks, to decide on production methods, and to set performance goals.37 For example, when self-regulation is needed, task structure might be relatively flexible and allow the interchange of members across group tasks; composition might include members with multiple skills, interpersonal competencies, and social needs; performance norms would support complex problem solving; and efforts would be made to develop healthy interpersonal relations. On the other hand, when technology is relatively certain, group designs should promote standardization of behavior and groups should be externally controlled by supervisors, schedules, and plans.38

5-5e Analysis

Application 5.2 presents an example of applying group-level diagnosis to a top-management team engaged in problem solving.

Examination of the group’s outputs shows that it is ineffective at problem solving. Members report a backlog of unresolved issues, poor use of meeting time, lack of follow-through and decision implementation, and a general dissatisfaction with the team meetings. Examining group inputs and design components and assessing the fit among them can uncover the causes of those group problems.

The key inputs into a work group are the design and culture of the larger organization. The Ortiv Glass Corporation’s decentralized philosophy allows each plant to set up its own organization design. Freedom to innovate in the manufacturing plants is probably an outgrowth of the firm’s OD activities and culture, which promote participation and innovation. Although little specific data are given about the new plant’s organization design, tasks are structured into functional departments that must work together to produce plate glass. The team’s problem-solving activities reflect this interdependence among the departments as coordination among team members is needed to resolve plantwide issues. The team meetings also seem to involve many issues that are complex and not easily solved, so there is probably uncertainty in the technology or work process. This ambiguity is typical in a new plant and makes it difficult for a problem-solving team to determine the causes of problems or to find acceptable solutions. Consequently, members of the top-management team must process considerable information during problem solving.

Diagnosis of the team’s design components answers the following questions:

1. How clear are the group’s goals? The team’s goals seem relatively clear: they are to solve problems. There appears to be no clear agreement, however, on the specific problems to be addressed. As a result, members come late because they have “more pressing” problems needing attention.

2. What is the group’s task structure? The team’s task structure includes face-to-face interaction during the weekly meetings. This structure allows members from different functional departments to come together physically to share information and to solve problems mutually affecting them. It facilitates coordination of problem solving among the departments in the plant. The structure also seems to provide team members with the freedom necessary to regulate their task behaviors in the meetings. They can adjust their behaviors and interactions to suit the flow of the discussion and problem-solving process.

application 5.2

TOP-MANAGEMENT TEAM AT ORTIV GLASS CORPORATION

The Ortiv Glass Corporation produces and markets plate glass for use primarily in the construction and automotive industries. The multiplant company has been involved in OD for several years and actively supports participative management practices and employee involvement programs. Ortiv’s organization design is relatively flexible, and the manufacturing plants are given freedom and encouragement to develop their own organization designs and approaches to participative management. It recently put together a problem-solving group made up of the top-management team at its newest plant.

The team consisted of the plant manager and the managers of the five functional departments reporting to him: engineering (maintenance), administration, human resources, production, and quality control. In recruiting managers for the new plant, the company selected people with good technical skills and experience in their respective functions. It also chose people with some managerial experience and a desire to solve problems collaboratively, a hallmark of participative management. The team was relatively new, and members had been working together for only about five months.

The team met formally for two hours each week to share pertinent information and to deal with plantwide issues affecting all of the departments, such as safety procedures, interdepartmental relations, and personnel practices. Members described these meetings as informative but often chaotic in terms of decision making. The meetings typically started late as members straggled in at different times. The latecomers generally offered excuses about more pressing problems occurring elsewhere in the plant. Once started, the meetings were often interrupted by “urgent” phone messages for various members, including the plant manager, and in most cases, the recipient would leave the meeting hurriedly to respond to the call.

The group had problems arriving at clear decisions on particular issues. Discussions often rambled from topic to topic, and members tended to postpone the resolution of problems to future meetings. This led to a backlog of unresolved issues, and meetings often lasted far beyond the two-hour limit. When group decisions were made, members often reported problems in their implementation. Members typically failed to follow through on agreements, and there was often confusion about what had actually been agreed upon. Everyone expressed dissatisfaction with the team meetings and their results.

Relationships among team members were cordial yet somewhat strained, especially when the team was dealing with complex issues in which members had varying opinions and interests. Although the plant manager publicly stated that he wanted to hear all sides of the issues, he often interrupted the discussion or attempted to change the topic when members openly disagreed in their views of the problem. This interruption was typically followed by an awkward silence in the group. In many instances, when a solution to a pressing problem did not appear forthcoming, members either moved on to another issue or they informally voted on proposed options, letting majority rule decide the outcome. Members rarely discussed the need to move on or vote; rather, these behaviors emerged informally over time and became acceptable ways of dealing with difficult issues.


3. What is the composition of the group? The team is composed of the plant manager and the heads of the five functional departments. All members appear to have task-relevant skills and experience, both in their respective functions and in their managerial roles. They also seem to be interested in solving problems collaboratively. That shared interest suggests that members have job-related social needs and should feel relatively comfortable participating in group problem-solving situations.

4. What are the group’s performance norms? Group norms cannot be observed directly but must be inferred from group behaviors. The norms involve member beliefs about how the group should perform its task, including acceptable levels of performance. A useful way to describe norms is to list specific behaviors that complete the sentences “A good group member should …” and “It’s okay to ….” Examination of the team’s problem-solving behaviors suggests the following performance norms:

• It’s okay to come late to team meetings.
• It’s okay to interrupt meetings with phone messages.
• It’s okay to leave meetings to respond to phone messages.
• It’s okay to hold meetings longer than two hours.
• A good group member should not openly disagree with others’ views.
• It’s okay to vote on decisions.
• A good group member should be cordial to other members.
• It’s okay to postpone solutions to immediate problems.
• It’s okay not to follow through on previous agreements.

5. What is the nature of team functioning in the group? The case strongly suggests that interpersonal relations are not healthy on the management team. Members do not seem to confront differences openly. Indeed, the plant manager purposely deflects issues when conflicts emerge. Members feel dissatisfied with the meetings but spend little time talking about those feelings. Relationships are strained, but members fail to examine the underlying causes.

The problems facing the team can now be explained by assessing how well the group design fits the inputs. The plant’s organization design requires coordinated problem solving among functional departments. The newness of the plant and the uncertainty of the technology result in complex, plantwide issues that require considerable information processing to resolve. The weekly team meetings are an attempt to address and resolve these interdependent and complex problems. The plant’s culture promotes participation in problem solving, and the team meetings are a reflection of that involvement. Although it is too early to tell whether the team will succeed, there does not appear to be significant incongruity between the plant’s organization design and culture and what the team is trying to do.

Next, alignment among the group design components is assessed to determine how well they fit together to promote interdependent and complex problem solving. The team’s task structure and composition appear to fit the type of issues that the team is supposed to address. The face-to-face meetings help to coordinate problem solving among the department managers, and except for interpersonal and group problem-solving skills, members seem to have the necessary task-relevant expertise to address the plantwide problems. There appears, however, to be a conflict in priority between the problems to be solved by the team and the problems faced by individual managers. Moreover, there seems to be a mismatch between the demands of the problem-solving task and the team’s performance norms and interpersonal relations. Complex, interdependent problems require performance norms that support sharing of diverse and often conflicting kinds of information. The norms must encourage members to generate novel solutions and to assess the relevance of problem-solving strategies in light of new issues. Members need to address explicitly how they are using their knowledge and skills and how they are weighing and combining members’ individual contributions. The team’s performance norms fail to support complex problem solving; rather, they promote a problem-solving method that is often superficial, haphazard, and subject to external disruptions. Members’ interpersonal relationships reinforce adherence to the ineffective norms. Members do not confront personal differences or dissatisfactions with the group process. They fail to examine the very norms contributing to their problems. In this case, diagnosis suggests the need for group interventions aimed at improving performance norms and developing healthy interpersonal relationships. (Chapter 10 describes interpersonal and group process interventions.)

5-6 Individual-Level Diagnosis

The final level of organizational diagnosis is the individual job or position. An organization consists of numerous groups; a group, in turn, is composed of several individual jobs. This section discusses the inputs, design components, and relational fits needed for diagnosing jobs. The model shown in Figure 5.2 is similar to other popular job diagnostic frameworks, such as Hackman and Oldham’s job diagnostic survey and Herzberg’s job enrichment model.39

5-6a Inputs

Four major inputs affect job design: organization design, culture, group design, and the personal characteristics of jobholders.

Organization design is concerned with the larger organization within which the individual job is the smallest unit. Organization design is a key part of the larger context surrounding jobs. An organization’s technology, structure, management processes, and human resources systems can have a powerful impact on the way jobs are designed and on people’s experiences in them. For example, company reward systems can orient employees to particular job behaviors and influence whether people see job performance as fairly rewarded. In general, technology characterized by relatively uncertain tasks is likely to support job designs allowing employees flexibility and discretion in performing tasks. Conversely, low-uncertainty tasks are likely to promote standardized job designs requiring routinized task behaviors.40

Culture represents the values and norms shared by organization members. Because they are generally taken for granted, they guide members’ perceptions, thoughts, and actions. Culture can influence the kinds of work designs that organizations consider and that members perceive as legitimate. The more an organization culture promotes member participation and innovation, the more likely job designs will be highly flexible and involve member decision making.

Group design concerns the work group containing the individual job. Like organization design, group design is an essential part of the job context. Task structure, goal clarity, group composition, performance norms, and team functioning serve as inputs to job design. They typically have a more immediate impact on jobs than do the larger, organization design components. For example, group task structure can determine how individual jobs are grouped together—as in groups requiring coordination among jobs or in ones comprising collections of independent jobs. Group composition can influence the kinds of people who are available to fill jobs. Group performance norms can affect the kinds of job designs that are considered acceptable, including the level of jobholders’ performances. Goal clarity helps members to prioritize work, and group functioning can affect how powerfully the group influences individual-job behaviors. When members maintain close relationships and the group is cohesive, group norms are more likely to be enforced and followed.41

Personal characteristics of individuals occupying jobs include age, education, experience, skills, and abilities. All of these can affect how people react to job designs and perform. Individual needs and expectations also can affect employee job responses. For example, individual differences in growth needs—the need for self-direction, learning, and personal accomplishment—can determine how much people are satisfied by jobs with high levels of skill variety, autonomy, and feedback about results.42 Similarly, work motivation can be influenced by people’s expectations that they can perform a job well and that good job performance will result in valued outcomes.43

5-6b Design Components

Figure 5.2 shows that individual jobs have five key dimensions: skill variety, task identity, task significance, autonomy, and feedback about results.44

Skill variety is the degree to which a job requires a range of activities and abilities to perform the work. Assembly-line jobs, for example, generally have limited skill variety because employees perform a small number of repetitive activities. On the other hand, most professional jobs include a great deal of skill variety because people engage in diverse activities and employ several different skills in performing their work.

Task identity measures the degree to which a job requires the completion of a relatively whole, identifiable piece of work. Skilled craftspeople, such as tool-and-die makers and carpenters, generally have jobs with high levels of task identity. They are able to see a job through from beginning to end. Assembly-line jobs involve only a limited piece of work and score low on task identity.

Task significance identifies the degree to which a job has a significant impact on other people’s lives. Custodial jobs in a hospital are likely to have more task significance than similar jobs in a toy factory because hospital custodians are likely to see their jobs as affecting someone else’s health and welfare.

Autonomy indicates the degree to which a job provides freedom and discretion in scheduling the work and determining work methods. Assembly-line jobs generally have little autonomy; the work pace is scheduled and people perform preprogrammed tasks. College teaching positions have more autonomy. Professors usually can determine how a course is taught, even though they may have limited say over class scheduling.

Feedback about results involves the degree to which a job provides employees with direct and clear information about the effectiveness of task performance. Assembly-line jobs often provide high levels of feedback about results, whereas college professors must often contend with indirect and ambiguous feedback about how they are performing in the classroom.

Those five job dimensions can be combined into an overall measure of job enrichment. Enriched jobs have high levels of skill variety, task identity, task significance, autonomy, and feedback about results. They provide opportunities for self-direction, learning, and personal accomplishment at work. Many people find enriched jobs internally motivating and satisfying. (Chapter 14 discusses job enrichment more fully.)
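Hackman and Oldham’s job diagnostic survey, cited above, offers one well-known way to combine the five dimensions into a single motivating potential score (MPS): the three dimensions contributing to experienced meaningfulness are averaged, and the result is multiplied by the autonomy and feedback ratings. The sketch below illustrates that computation; the function name and the sample ratings (on the survey’s 1–7 scale) are hypothetical, illustrative values, not data from the text:

```python
# A minimal sketch of Hackman and Oldham's motivating potential score (MPS).
# Ratings are assumed to be on the Job Diagnostic Survey's 1-7 scale; the
# function name and the sample values below are hypothetical illustrations.

def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Combine the five job dimensions into one enrichment index.

    The three "meaningfulness" dimensions are averaged, so a deficit in one
    can be offset by the others. Autonomy and feedback enter multiplicatively,
    so a near-zero rating on either drives the whole score toward zero.
    """
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# A hypothetical enriched professional job, high on all five dimensions:
print(motivating_potential_score(6, 6, 6, 6, 6))  # → 216.0

# A hypothetical assembly-line job, low on everything except feedback,
# scores far lower (roughly 14 on the same scale):
print(motivating_potential_score(2, 2, 3, 1, 6))
```

Because autonomy and feedback multiply rather than add, the score captures the point made here: an enriched job needs high levels across the dimensions, not merely a high total.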

5-6c Outputs

Individual-job effectiveness includes two kinds of outputs: those related to how well the job is performed and those having to do with how people experience their job. Performance is measured in terms of the quantity, quality, time, and cost of producing a particular job outcome such as a product or service. Indicators of an individual’s experience of the job include job satisfaction, absenteeism, and personal development.

5-6d Alignment

The diagnostic model in Figure 5.2 suggests that the job-design elements just described must align with each other to produce effective job outputs, such as high quality and quantity of individual performance, low absenteeism, and high job satisfaction.

1. Does the job design fit with the inputs? Job design should be congruent with the larger organization design, culture, and group design within which the job is embedded.45 Both the organization and the group serve as powerful contexts for individual jobs or positions. They support and reinforce particular job designs. Highly flexible organization designs, participative cultures, and work groups that permit members to self-regulate their behavior align with enriched jobs. These organization and group inputs promote autonomy, flexibility, and innovation at the individual-job level. Conversely, bureaucratic organizations and cultures and groups relying on external controls are congruent with job designs scoring low on the five design components. These organization and group inputs reinforce standardized, routine jobs. As suggested earlier, congruence across different levels of organizational design promotes integration of the organization, group, and job levels. Whenever the levels do not fit each other, conflict is likely to emerge.

Job design also should fit jobholders’ personal characteristics if they are to perform effectively and derive satisfaction from work. Generally, enriched jobs fit people with strong growth needs.46 These people derive satisfaction and accomplishment from performing jobs involving skill variety, autonomy, and feedback about results. Enriched jobs also fit people possessing moderate-to-high levels of task-relevant skills, abilities, and knowledge. Enriched jobs generally require complex information processing and decision making; people must have comparable skills and abilities to perform effectively. Jobs scoring low on the five job-design components generally fit people with rudimentary skills and abilities and with low growth needs. Simpler, more routinized jobs requiring limited skills and experience fit better with people who place a low value on opportunities for self-direction and learning. However, because people can develop growth needs and expertise through education, training, and experience, job design must be monitored and adjusted from time to time to fit jobholders’ changing needs and enhanced knowledge and skills.

2. Do the job-design components fit with each other? The five job-design components must align with each other to provide a clear and consistent direction for how the job should be performed. Enriched job designs that score high on skill variety, task identity, task significance, autonomy, and feedback about results signal the need for flexibility, active engagement, and decision making to perform the job. Conversely, traditional job designs that score low on the design components indicate the need for routine and standardized job performances.

5-6e Analysis

Application 5.3 presents an example of individual-level diagnosing. As part of a larger cost-cutting initiative, the university is considering a change in the job design of a program administrator. The application provides information about the administrator’s current job. Diagnosing the individual-level elements and the alignment among them can help to address whether or not the proposed change makes sense.


APPLICATION 5.3 JOB DESIGN AT PEPPERDINE UNIVERSITY

The Graziadio School of Business and Management (GSBM) at Pepperdine University is one of the largest business schools in the country and has the third largest part-time MBA program. The school also provides graduate education aimed at different markets including an executive MBA (EMBA), a presidential/key executive MBA (PKE), and a specialized master’s degree in organization development (MSOD). The MSOD program’s curriculum consists of 10 four-unit classes over 22 months. Eight of the classes are conducted off-site during eight-day sessions at both domestic and international locations. The MSOD program office consists of a faculty director, a program administrator, and an administrative assistant. In response to cost-cutting initiatives at the university level, a proposal was being considered to alter the job designs of the MSOD program staff.

The MSOD Program Administrator, the focus of this application, was responsible for marketing and recruiting new students, managing the delivery logistics of the off-site program, managing the students’ registration and financial relationships with the university, and maintaining relationships with the MSOD alumni. The marketing and recruiting duties involved working with the Program Director and the Director of Marketing for GSBM to develop marketing tactics including advertisements, brochures, conference marketing and support, and other market development activities. The recruiting process involved explaining the curriculum to prospective applicants, overseeing the application process for each applicant, working with the faculty to have qualified applicants interviewed, and managing the admissions process. This too had to be coordinated with the director and the administrative assistant. Once a class was admitted, the Program Administrator worked with various off-site facilities to establish room and board rates and catering services; managed the faculty’s travel and teaching requirements; managed various intersession activities including the final exam; managed the students’ enrollment and graduation processes, including their interface with the university’s registrar and finance office and the school’s financial aid office; and coached students through the program. After graduation, the Program Administrator served as an unofficial placement service, connecting eligible graduates with prospective employers who called looking for MSOD talent, provided career guidance, and worked with the program’s alumni organization to sponsor conferences and other alumni activities.

Each of the above activities was somewhat programmable; they occurred at specific times of the year and could be scheduled. However, because each applicant, student, class, or graduate was somewhat unique, the specific tasks or actions could not always be specified in advance, and a number of exceptions and unique situations arose during each day, month, or year.

The MSOD Program Administrator had worked with the MSOD program for over 15 years and was a fixture in both the MSOD and the general OD communities. Year over year, the Program Administrator delivered qualified applicants in excess of available space, although that task had become increasingly difficult in the face of tuition increases, increasingly restrictive corporate policies on tuition reimbursement, and the ups and downs of the economy. He handled both routine and nonroutine administrative details professionally, displayed a high level of job satisfaction and commitment to the program, and was complimented formally and informally by the students in the program. In fact, each cohort developed its own relationship with the administrator, and he became a de facto member of almost every class. The alumni considered the Program Administrator a key and integral part of the MSOD program. The set of duties described above had evolved considerably over the Program Administrator’s tenure. In particular, he had become more involved in and responsible for marketing and recruiting activities, and the alumni relations duties had been added in response to alumni requests that could not be filled by traditional university departments.

In an effort to improve efficiencies, and in recognition of the MSOD Program Administrator’s outstanding productivity, a proposal was being considered by GSBM administration to change the design of his job. The proposal suggested that the MSOD Program Administrator continue to perform all of the current duties of the position and, in addition, provide administrative support to two PKE classes from their initial class to graduation. The duties of administering the PKE program would be similar in nature to the delivery aspects of the MSOD program, including working with faculty to support their teaching efforts, managing textbook ordering processes, and providing different facilities logistics activities. It would not include marketing, recruiting, and alumni development activities. The Program Administrator would receive additional compensation for the increased responsibilities and a title change. The new position would include joint supervision, with the EMBA program administrator, of an assistant program administrator, who would in turn manage a pool of administrative assistants. In addition, the new program administrator job would report to both the MSOD program director and the director of EMBA/PKE programs.

Diagnosis of individual-level inputs answers the following questions:

1. What are the design and culture of the organization within which the individual job is embedded? Although the example says little about the organization’s design and culture, a number of inferences are possible. The business school’s administration was attempting to reward the Program Administrator with a more enriched job. This suggests that the culture of the organization was supportive of employee involvement. However, the proposed change also was being considered as part of an efficiency drive. The school is large, hosting the third largest part-time MBA program in the United States. This helps to explain why a specialized master’s degree in OD has been paired with two EMBA programs and differentiated from the large, part-time MBA program. To the extent that the MSOD program has different students or different marketing, delivery, and alumni relations processes than the EMBA programs, there may be difficult points of integration between the two types of programs.

2. What is the design of the group containing the individual job? Three individual jobs were grouped together according to the type of program. In this case, a faculty director, program administrator, and administrative assistant comprise the program office. The office is clearly dependent on other university and school functions, such as the registrar’s office, financial aid, and the teaching faculty. Each of the three jobs has specific duties, but there is a clear sense that all three jobs are highly interdependent. The Program Administrator must coordinate with the faculty director on marketing, admissions, and curriculum decisions and with the administrative assistant on recruiting, program delivery, and routine administrative processes. Interaction during task performance is thus intense, and although partly scheduled, the work involves a high number of exceptions.

3. What are the personal characteristics of the jobholder? The application provides some clues about the Program Administrator’s personal characteristics. First, he has stayed in the position for more than 15 years; this speaks to a loyalty or commitment to the OD program. Second, his job has evolved considerably, which suggests at least a moderate amount of growth need strength.


Diagnosing individual jobs involves the following design components:

1. How much skill variety is included in the job? The program administrator job involves a wide variety of tasks, including recruiting students; advising prospective and current students on career opportunities; making input into marketing strategies and tactics; handling routine and nonroutine administrative matters such as registration, grade changes, and graduation processes; supervising an administrative assistant; coordinating with other functions and departments within the school and university; traveling to several class sessions and handling logistics details; negotiating with a variety of resort properties on rooming costs, menus, meal costs, and room setup; working with alumni; and performing a variety of ancillary tasks.

2. How much task identity does the job contain? The program administrator job is “all of a piece.” It involves following individuals through an entire process, as applicants, students, and alumni. It engages them as individuals, as professionals, and as members of a family or other community.

3. How much task significance is involved in the job? The program administrator job scores high on task significance. It includes bringing potential students into a well-respected program, working with them during their matriculation, advising them on their experiences in the program, and taking an important hand in their personal and professional development. The job is an integral part of a transformational educational process, which also contributes to its task significance.

4. How much autonomy is included in the job? There is a moderate-to-high amount of autonomy included in the program administrator job. It involves considerable discretionary decision making without much supervision or external controls.

5. How much feedback about results does the job contain? The program administrator job receives a lot of feedback. It comes from the faculty director on job performance; from program evaluations about service quality; and from students on the amount of support and guidance received.

Assessing individual-level outputs involves measures of job satisfaction, performance, absenteeism, and personal development. The Program Administrator performs his job well and seems to be very satisfied with it and the personal development opportunities it affords. Although there is no information on his level of absenteeism, it seems safe to assume that it is negligible.

These positive outcomes suggest that currently there is a good fit between the job design and the inputs and among the job-design components. When the job components are examined together, the program administrator job contains high levels of enrichment. Skill variety, task identity, task significance, autonomy, and feedback about results are all high and mutually contribute to an enriched job experience. Over time, the level of enrichment appears to have increased because skill variety and autonomy have increased. The fit between the job design and the organization design is mixed, however. The business school’s technology of recruiting and educating students and managing alumni is at least moderately, if not highly, uncertain. Tasks that are uncertain require considerable information processing and decision making. Enriched jobs fit such tasks, and the program administrator job has gradually evolved to fit the high levels of task uncertainty. Structurally, as a specialized master’s degree that is different from an EMBA program, the MSOD program office, and the administrator job in particular, have evolved to be somewhat independent of the business school’s other programs. There does not appear to be much sharing or coordination of tasks across these different MBA programs, despite obvious opportunities such as student registration, graduation, book ordering, and others. Either the MSOD program is sufficiently different from the EMBA programs that it warrants such independence, or there are some important opportunities for improved efficiencies from the proposed change. There also seems to be only a partial fit between the Graziadio School of Business and Management’s culture and the administrator’s job design. The culture includes values that promote both employee involvement and efficiency, with the former supporting an enriched job and the latter potentially impeding enrichment. The program administrator job and the other jobs in the program office closely interact with each other to form a team that is cohesive and mostly self-managed. This suggests a good fit between the enriched program administrator job and the design of the office team. Finally, the design of the program administrator job aligns well with the personal characteristics of the Program Administrator.

In the current context, the proposed change to the program administrator job needs to be considered very carefully. Will the change likely improve productivity, enhance quality, or increase job satisfaction? In general, the answer appears to be “no.” For example, the proposed change argues that adding new responsibilities will increase skill variety, task identity, and task significance. However, the additional administrative tasks of the EMBA classes do not increase the skill variety of the existing program administrator job. There are, in fact, no new skills required to administer those classes, and adding these responsibilities may actually unbalance the existing skill mix. That is, under the proposed new job, the program delivery component of the job will increase dramatically with respect to the other job components and more or less dominate the mix. This could actually result in decreased skill variety.

The proposed change also contends that task significance will increase because the program administrator job will be able to affect the lives of both the MSOD program participants and the EMBA students. There is some merit to this idea, but it must be tempered with the chance that task identity might decrease. The task identity of the program administrator job, as described in the application, is high, while the task identity for the EMBA program is relatively low. In the EMBA program, the program administrator job would interact with the students only during the program; it would have little involvement with them in the recruiting process and later as alumni. Thus, any increase in the number of people the proposed new job affects (task significance) is likely to be offset by the reduced involvement it would have with about half of these people (task identity).

Finally, the proposed change claims that the Program Administrator is being given more responsibility, which is true, but he will have less autonomy. The new program administrator job will report to two bosses: the MSOD program director and the EMBA/PKE director. Thus, the Program Administrator will probably have more, not less, supervision as the MSOD program director ensures that the MSOD program objectives are addressed, and the EMBA/PKE program director ensures that his or her program objectives are being addressed.

Examining the proposed changes in relation to the design components of the program administrator job suggests an intervention dilemma in this case. Should the business school’s administration continue with the proposed change? The hoped-for efficiencies may or may not materialize. The Program Administrator’s extensive skills and knowledge may in fact be applied to improve productivity. However, will it do so at a cost to his work satisfaction? Over time, such a solution may not be sustainable. If the change is implemented, OD interventions probably should be aimed at mitigating the negative effects on task identity, task significance, and autonomy. The MSOD director and the EMBA/PKE director need to work with the Program Administrator to set out clear expectations for his new job. They need to figure out methods to allow the Program Administrator to perform certain tasks that he finds most rewarding. (Chapter 14 describes interventions for matching people, technology, and job design.) If the proposed changes are not implemented, alternative structural arrangements within the EMBA programs organization may need to be examined.

SUMMARY

This chapter presented information for diagnosing organizations, groups, and individual jobs. Diagnosis is a collaborative process, involving both organization members and OD practitioners in collecting pertinent data, analyzing them, and drawing conclusions for action planning and intervention. Diagnosis may be aimed at discovering the causes of specific problems, or it may be directed at assessing the organization or department to find areas for future development. Diagnosis provides the necessary practical understanding to devise interventions for solving problems and improving organization effectiveness.

Diagnosis is based on conceptual frameworks about how organizations function. Such diagnostic models serve as road maps by identifying areas to examine and questions to ask in determining how an organization or department is operating.

The model presented here views organizations as open systems. The organization serves to coordinate the behaviors of its departments. It is open to exchanges with the larger environment and is influenced by external forces. As open systems, organizations are hierarchically ordered; that is, they are composed of groups, which in turn are composed of individual jobs. Organizations also display five key open-systems properties: environments; inputs, transformations, and outputs; boundaries; feedback; and alignment.

A comprehensive model for diagnosing organizational systems was described and applied to three organizational levels—organization, group, and individual job. It consists of inputs; a set of design components; and a variety of outputs, such as performance, productivity, and stakeholder satisfaction. For each organizational level, diagnosing involves understanding each of the parts in the model and then assessing how the design components align with each other and with the inputs. Effective outputs are likely to result from good alignment.

NOTES

1. C. Lundberg, “Organization Development Diagnosis,” in Handbook of Organization Development, ed. T. Cummings (Los Angeles: Sage Publications, 2008), 137–50; D. Nadler, “Role of Models in Organizational Assessment,” in Organizational Assessment, ed. E. Lawler III, D. Nadler, and C. Cammann (New York: John Wiley & Sons, 1980), 119–31; R. Burton, B. Obel, H. Starling, M. Sondergaard, and D. Dojbak, Strategic Organizational Diagnosis and Design: Developing Theory for Application, 2nd ed. (Dordrecht, The Netherlands: Kluwer Academic Publishers, 2001).

2. M. Poole and A. Van de Ven, eds., Handbook of Organizational Change and Innovation (New York: Cambridge University Press, 2004); D. Coghlan, “Organization Development through Interlevel Dynamics,” International Journal of Organizational Analysis 2 (1994): 264–79.

3. M. Weisbord, “Organizational Diagnosis: Six Places to Look for Trouble with or without a Theory,” Group and Organizational Studies 1 (1976): 430–37.

4. D. Nadler and M. Tushman, Competing by Design: The Power of Organizational Architecture (New York: Oxford University Press, 1997).

5. J. Galbraith, Designing Organizations (San Francisco: Jossey-Bass, 2002).

6. J. Kotter, Organizational Dynamics: Diagnosis and Intervention (Reading, MA: Addison-Wesley, 1978).

7. M. Tushman and E. Romanelli, “Organization Evolution: A Metamorphosis Model of Convergence and Reorientation,” in Research in Organizational Behavior, vol. 7, ed. L. Cummings and B. Staw (Greenwich, CT: JAI Press, 1985); C. Worley, D. Hitchin, and W. Ross, Integrated Strategic Change: How OD Builds Competitive Advantage (Reading, MA: Addison-Wesley, 1996).

8. R. Daft, Organization Theory and Design, 11th ed. (Cincinnati, OH: South-Western College Publishing, 2013); R. Miles, Macro Organization Behavior (Santa Monica, CA: Goodyear, 1980).

9. M. Porter, Competitive Strategy (New York: Free Press, 1980).

10. Ibid.

11. K. Weick, The Social Psychology of Organizing, 2nd ed. (Reading, MA: Addison-Wesley, 1979).

12. J. Pfeffer and G. Salancik, The External Control of Organizations: A Resource Dependence Perspective (New York: Harper & Row, 1978); H. Aldrich, Organizations and Environments (New York: Prentice Hall, 1979); L. Hrebiniak and W. Joyce, “Organizational Adaptation: Strategic Choice and Environmental Determinism,” Administrative Science Quarterly 30 (1985): 336–49.

13. F. Emery and E. Trist, “The Causal Texture of Organizational Environments,” Human Relations 18 (1965): 21–32; H. Aldrich, Organizations and Environments (Englewood Cliffs, NJ: Prentice Hall, 1979).

14. J. Galbraith, Competing with Flexible Lateral Organizations, 2nd ed. (Reading, MA: Addison-Wesley, 1994); P. Evans and T. Wurster, “Strategy and the New Economics of Information,” Harvard Business Review 75 (1997): 70–83.

15. M. Tushman and D. Nadler, “Information Processing as an Integrating Concept in Organizational Design,” Academy of Management Review 3 (1978): 613–24.

16. M. Porter, Competitive Advantage (New York: Free Press, 1985); M. Hitt, R. D. Ireland, and R. Hoskisson, Strategic Management (Mason, OH: South-Western College Publishing, 2006).

17. C. Hofer and D. Schendel, Strategy Formulation: Analytical Concepts (St. Paul, MN: West Publishing, 1978).

18. E. Lawler and C. Worley, Built to Change (San Francisco: Jossey-Bass, 2006).

19. J. Thompson, Organizations in Action (New York: McGraw-Hill, 1967); D. Gerwin, “Relationships between Structure and Technology,” in Handbook of Organizational Design, vol. 2, ed. P. Nystrom and W. Starbuck (Oxford: Oxford University Press, 1981), 3–38.

20. Galbraith, Designing Organizations; Daft, Organization Theory and Design.

21. P. Lawrence and J. Lorsch, Organization and Environment (Cambridge, MA: Harvard University Press, 1967).

22. Galbraith, Competing with Flexible Lateral Organizations.

23. J. Martin, Organizational Culture: Mapping the Terrain (Newbury Park, CA: Sage Publishing, 2002); E. Schein, Organizational Culture and Leadership, 2nd ed. (San Francisco: Jossey-Bass, 1990).


24. E. Abrahamson and C. Fombrun, “Macrocultures: Determinants and Consequences,” Academy of Management Review 19 (1994): 728–56.

25. Adapted from material in R. Brammer, “Sizing Up Small Caps: Stay Tuned,” Barrons (April 19, 2002); A. Serwer, “Happy Birthday, Steinway,” Fortune, March 17, 2003, 96–98; D. Garvin, “Steinway & Sons,” Harvard Business School Case 628-025 (Boston: Harvard Business School, 1981); J. Gourville and J. Lassiter, Steinway & Sons: Buying a Legend (Boston: Harvard Business School, 1999).

26. J. Hackman and C. Morris, “Group Tasks, Group Interaction Process, and Group Performance Effectiveness: A Review and Proposed Integration,” in Advances in Experimental Social Psychology, vol. 9, ed. L. Berkowitz (New York: Academic Press, 1975), 45–99; J. Hackman, ed., Groups That Work (and Those That Don’t): Creating Conditions for Effective Teamwork (San Francisco: Jossey-Bass, 1989).

27. M. McCaskey, “Framework for Analyzing Work Groups,” Harvard Business School Case 9-480-009 (Boston: Harvard Business School, 1997).

28. G. Ledford, E. Lawler, and S. Mohrman, “The Quality Circle and Its Variations,” in Productivity in Organizations: New Perspectives from Industrial and Organizational Psychology, ed. J. Campbell, R. Campbell, and Associates (San Francisco: Jossey-Bass, 1988), 255–94.

29. D. Ancona and H. Bresman, X-teams: How to Build Teams that Lead, Innovate, and Succeed (Boston: Harvard Business School Press, 2007); S. Mohrman, S. Cohen, and A. Mohrman, Designing Team-Based Organizations (San Francisco: Jossey-Bass, 1995).

30. G. Susman, Autonomy at Work (New York: Praeger, 1976); T. Cummings, “Self-Regulating Work Groups: A Socio-Technical Synthesis,” Academy of Management Review 3 (1978): 625–34; J. Slocum and H. Sims, “A Typology for Integrating Technology, Organization, and Job Design,” Human Relations 33 (1980): 193–212.

31. J. R. Hackman and G. Oldham, Work Redesign (Reading, MA: Addison-Wesley, 1980).

32. E. Schein, Process Consultation, vols. 1–2 (Reading, MA: Addison-Wesley, 1987).

33. W. Dyer, Team Building, 3rd ed. (Reading, MA: Addison-Wesley, 1994).

34. Hackman and Morris, “Group Tasks”; T. Cummings, “Designing Effective Work Groups,” in Handbook of Organizational Design, vol. 2, ed. P. Nystrom and W. Starbuck (Oxford: Oxford University Press, 1981), 250–71.

35. Cummings, “Designing Effective Work Groups.”

36. Susman, Autonomy at Work; Cummings, “Self-Regulating Work Groups”; Slocum and Sims, “Typology.”

37. Cummings, “Self-Regulating Work Groups”; Slocum and Sims, “Typology.”


38. Ibid.

39. Hackman and Oldham, Work Redesign; F. Herzberg, “One More Time: How Do You Motivate Employees?” Harvard Business Review 46 (1968): 53–62.

40. J. Pierce, R. Dunham, and R. Blackburn, “Social Systems Structure, Job Design, and Growth Need Strength: A Test of a Congruence Model,” Academy of Management Journal 22 (1979): 223–40.

41. Susman, Autonomy at Work; Cummings, “Self-Regulating Work Groups”; Slocum and Sims, “Typology.”

42. Hackman and Oldham, Work Redesign; Pierce, Dunham, and Blackburn, “Social Systems Structure.”

43. E. Lawler III, Motivation in Work Organizations (Monterey, CA: Brooks/Cole, 1973).

44. Hackman and Oldham, Work Redesign.

45. Pierce, Dunham, and Blackburn, “Social Systems Structure”; Susman, Autonomy at Work; Cummings, “Self-Regulating Work Groups”; Slocum and Sims, “Typology.”

46. Hackman and Oldham, Work Redesign; Pierce, Dunham, and Blackburn, “Social Systems Structure.”



6

Collecting, Analyzing, and Feeding Back Diagnostic Information

learning objectives

Understand the importance of the diagnostic relationship in the organization development (OD) process.

Describe the methods for collecting diagnostic data.

Understand the primary techniques used to analyze diagnostic data.

Outline the process issues associated with data feedback.

Describe and evaluate the survey feedback intervention.

Organization development is vitally dependent on collecting diagnostic information that will be shared with the client in jointly assessing how the organization is functioning and determining the best change intervention. The quality of the information gathered and the effectiveness of the feedback process, therefore, are critical parts of the OD process. In this chapter, we discuss several key issues associated with collecting, analyzing, and feeding back diagnostic data on how an organization or department functions.

Data collection involves gathering information on specific organizational features, such as the inputs, design components, and outputs presented in Chapter 5. The process begins by establishing an effective relationship between the organization development (OD) practitioner and those from whom data will be collected and then choosing data collection techniques. Four methods can be used to collect data: questionnaires, interviews, observations, and unobtrusive measures. Data analysis organizes and examines the information to make clear the underlying causes of an organizational problem or to identify areas for future development. Data feedback presents diagnostic information to organizational members so they can understand it and draw action implications from it. Effective feedback involves attention to both the content and the process of data feedback. A popular technique for feeding back questionnaire data is called survey feedback. Its central role in many large-scale OD efforts warrants a special look. The overall process of data collection, analysis, and feedback is shown in Figure 6.1.

6-1 The Diagnostic Relationship

In most cases of planned change, OD practitioners play an active role in gathering data from organization members for diagnostic purposes. For example, they might interview members of a work team about causes of conflict among members; they might survey employees at a large industrial plant about factors contributing to poor product quality. Before collecting diagnostic information, practitioners need to establish a relationship with those who will provide and subsequently use it. Because the nature of that relationship affects the quality and usefulness of the data collected, it is vital that OD practitioners clarify for organization members who they are, why the data are being collected, what the data gathering will involve, and how the data will be used.1 That information can help allay people’s natural fears that the data might be used against them and gain members’ participation and support, which are essential to developing successful interventions.

Establishing the diagnostic relationship between the OD practitioner and relevant organization members is similar to forming a contract. It is meant to clarify expectations and to specify the conditions of the relationship. In those cases where members have been directly involved in the entering and contracting process described in Chapter 4, the diagnostic contract will typically be part of the initial contracting step. In situations where data will be collected from members who have not been directly involved in entering and contracting, however, OD practitioners will need to establish a diagnostic contract as a prelude to diagnosis. The answers to the following questions provide the substance of the diagnostic contract:2

1. Who am I? The answer to this question introduces the OD practitioner to the organization, particularly to those members who do not know the consultant and yet will be asked to provide diagnostic data.

2. Why am I here, and what am I doing? These answers are aimed at defining the goals of the diagnosis and data-gathering activities. The consultant needs to present the objectives of the action research process and to describe how the diagnostic activities fit into the overall developmental strategy.

3. Who do I work for? This answer clarifies who has hired the OD practitioner, whether it be a manager, a group of managers, or a group of employees and managers. One way to build trust and support for the diagnosis is to have those people directly involved in establishing the diagnostic contract. Thus, for example, if the consultant works for a joint labor–management committee, representatives from both sides of that group could help the consultant build the proper relationship with those from whom data will be gathered.

FIGURE 6.1

The Cycle of Data Collection and Feedback

SOURCE: Figure adapted from D. Nadler, Feedback and Organization Development, 1977, Pearson Education, Upper Saddle River, NJ.

4. What do I want from you, and why? Here, the OD practitioner needs to specify how much time and effort people will need to give to provide valid data and subsequently to work with these data in solving problems. Because some people may not want to participate in the diagnosis, it is important to specify that such involvement is voluntary.

5. How will I protect your confidentiality? This answer addresses member concerns about who will see their responses and in what form. This is especially critical when employees are asked to provide information about their attitudes or perceptions. Either OD practitioners can ensure confidentiality or state that full participation in the change process requires open information sharing. In the first case, employees are frequently concerned about privacy and the possibility of being punished for their responses. To alleviate concern and to increase the likelihood of obtaining honest responses, the consultant may need to assure employees of the confidentiality of their information, perhaps through explicit guarantees of response anonymity. In the second case, full involvement of the participants in their own diagnosis may be a vital ingredient of the change process. If sensitive issues arise, assurances of confidentiality can coopt the OD practitioner and thwart meaningful diagnosis. The consultant is bound to keep confidential the issues that are most critical for the group or organization to understand.3 OD practitioners must think carefully about how they want to handle confidentiality issues.

6. Who will have access to the data? Respondents typically want to know whether they will have access to their data and who else in the organization will have similar access. The OD practitioner needs to clarify access issues and, in most cases, should agree to provide respondents with their own results. Indeed, the collaborative nature of diagnosis means that organization members will work with their own data to discover causes of problems and to devise relevant interventions.

7. What is in it for you? This answer is aimed at providing organization members with a clear delineation of the benefits they can expect from the diagnosis. This usually entails describing the feedback process and how they can use the data to improve the organization.

8. Can I be trusted? The diagnostic relationship ultimately rests on the trust established between the OD practitioner and those providing the data. An open and honest exchange of information depends on such trust, and the practitioner should provide ample time and face-to-face contact during the contracting process to build this trust. This requires the consultant to listen actively and discuss openly all questions raised by participants.

Careful attention to establishing the diagnostic relationship helps to promote the three goals of data collection.4 The first and most immediate objective is to obtain valid information about organizational functioning. Building a data collection contract can ensure that organization members provide honest, reliable, and complete information.

Data collection also can rally energy for constructive organizational change. A good diagnostic relationship helps organization members start thinking about issues that concern them, and it creates expectations that change is possible. When members trust the OD practitioner, they are likely to participate in the diagnostic process and to generate energy and commitment for organizational change.

Finally, data collection helps to develop the collaborative relationship necessary for effecting organizational change. The diagnostic stage of action research is probably the first time that most organization members meet the OD practitioner, and it can be the basis for building a longer-term relationship. The data collection contract and subsequent data-gathering and feedback activities provide members with opportunities for seeing the consultant in action and for knowing him or her personally. If the consultant can show employees that he or she is trustworthy, is willing to work with them, and is able to help improve the organization, then the data collection process will contribute to the longer-term collaborative relationship so necessary for carrying out organizational changes.

6-2 Collecting Data

The four major techniques for gathering diagnostic data are questionnaires, interviews, observations, and unobtrusive measures. Table 6.1 briefly compares the methods and lists their major advantages and problems. No single method can fully measure the kinds of diagnostic variables important to OD because each has certain strengths and weaknesses.5 For example, perceptual measures, such as questionnaires and surveys, are open to self-report biases, such as respondents’ tendency to give socially desirable answers rather than honest opinions. Observations, on the other hand, are susceptible to observer biases, such as seeing what one wants to see rather than what is really there. Because of the biases inherent in any data collection method, more than one method should be used when collecting diagnostic data. If data from the different methods are compared and found to be consistent, it is likely that the variables are being measured validly. For example, questionnaire measures of job discretion could be supplemented with observations of the number and kinds of decisions employees are making. If the two kinds of data support each other, job discretion is probably being assessed accurately. If the two kinds of data conflict, the validity of the measures should be examined further, perhaps by using a third method, such as interviews.

TABLE 6.1

Strengths and Weaknesses of Different Data Collection Methods

Surveys and questionnaires
Primary strengths: Member beliefs and attitudes can be quantified easily; can gather a large amount of data from many people; inexpensive on a per-person basis.
Primary weaknesses: Relatively impersonal; mechanistic and rigid (assumes all the right questions are asked); easy to “over interpret” the data; response bias.

Interviews
Primary strengths: Very flexible (can adapt to interviewee and data collection subject); data is “rich”; interview process builds rapport and empathy.
Primary weaknesses: Relatively expensive; interviewer responses can be biased; difficult to code and interpret; self-report bias.

Observations
Primary strengths: Collects data on actual behavior, rather than reports of behavior; real time, not retrospective; adaptive and objective.
Primary weaknesses: Difficult to code and interpret; sampling may be inconsistent; observer bias and reliability can be questioned; can be expensive.

Unobtrusive measures
Primary strengths: No response bias; high face validity; easily quantified.
Primary weaknesses: Privacy, access, and retrieval difficulties; validity concerns; difficult to code and interpret.

SOURCE: © Cengage Learning 2015.

6-2a Questionnaires

One of the most efficient ways to collect data is through questionnaires. Because they typically contain fixed-response queries about various features of an organization, these measures can be administered to large numbers of people simultaneously. Also, they can be analyzed quickly, especially with the use of computers, thus permitting quantitative comparison and evaluation. As a result, data can easily be fed back to employees. Numerous basic resource books on survey methodology and questionnaire development are available.6

Questionnaires can vary in scope, some measuring selected aspects of organizations and others assessing more comprehensive organizational characteristics. They also can vary in the extent to which they are either standardized or tailored to a specific organization. Standardized instruments generally are based on an explicit model of organization, group, or individual effectiveness and contain a predetermined set of questions that have been developed and refined over time. For example, Table 6.2 presents a standardized questionnaire for measuring the job-design dimensions identified in Chapter 5: skill variety, task identity, task significance, autonomy, and feedback about results. The questionnaire includes three items or questions for each dimension, and a total score for each job dimension is computed simply by adding the responses for the three relevant items and arriving at a total score from 3 (low) to 21 (high). The questionnaire has wide applicability. It has been used in a variety of organizations with employees in both blue-collar and white-collar jobs.

Several research organizations have been highly instrumental in developing and refining surveys. The Institute for Social Research at the University of Michigan (http://home.isr.umich.edu) and the Center for Effective Organizations at the University of Southern California (http://ceo.usc.edu) are two prominent examples. Two of the Institute’s most popular measures of organizational dimensions are the Survey of Organizations and the Michigan Organizational Assessment Questionnaire. Few other instruments are supported by such substantial reliability and validity data.7 Other examples of packaged instruments include Weisbord’s Organizational Diagnostic Questionnaire, Dyer’s Team Development Survey, Cameron and Quinn’s Organizational Culture Assessment Instrument, and Hackman and Oldham’s Job Diagnostic Survey.8 In fact, so many questionnaires are available that rarely would an organization have to create a totally new one. However, because every organization has unique problems and special jargon for referring to them, almost any standardized instrument will need to have organization-specific additions, modifications, or omissions.

On the other hand, customized questionnaires are tailored to the needs of a particular organization. Typically, they include questions composed by OD practitioners or organization members, receive limited use, and do not undergo longer-term development. They can be combined with standardized instruments to provide valid and reliable data focused toward the particular issues facing an organization.

TABLE 6.2

Job-Design Questionnaire

Here are some statements about your job. How much do you agree or disagree with each? Respond on a seven-point scale: 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = undecided, 5 = slightly agree, 6 = agree, 7 = strongly agree.

My job:
1. provides much variety …
2. permits me to be left on my own to do my own work …
3. is arranged so that I often have the opportunity to see jobs or projects through to completion …
4. provides feedback on how well I am doing as I am working …
5. is relatively significant in our organization …
6. gives me considerable opportunity for independence and freedom in how I do my work …
7. gives me the opportunity to do a number of different things …
8. provides me an opportunity to find out how well I am doing …
9. is very significant or important in the broader scheme of things …
10. provides an opportunity for independent thought and action …
11. provides me with a great deal of variety at work …
12. is arranged so that I have the opportunity to complete the work I start …
13. provides me with the feeling that I know whether I am performing well or poorly …
14. is arranged so that I have the chance to do a job from the beginning to the end (i.e., a chance to do the whole job) …
15. is one where a lot of other people can be affected by how well the work gets done …

Scoring:
Skill variety: questions 1, 7, 11
Task identity: questions 3, 12, 14
Task significance: questions 5, 9, 15
Autonomy: questions 2, 6, 10
Feedback about results: questions 4, 8, 13

SOURCE: Reproduced by permission of E. Lawler, S. Mohrman, and T. Cummings, Center for Effective Organizations, University of Southern California.

Questionnaires, however, have a number of drawbacks that need to be taken into account in choosing whether to employ them for data collection. First, responses are limited to the questions asked in the instrument. They provide little opportunity to probe for additional data or to ask for points of clarification. Second, questionnaires tend to be impersonal, and employees may not be willing to provide honest answers. Third, questionnaires often elicit response biases, such as the tendency to answer questions in a socially acceptable manner. This makes it difficult to draw valid conclusions from employees’ self-reports.
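The scoring rule for Table 6.2, where each job dimension is the sum of its three relevant items on the 1–7 scale, can be sketched in a few lines. This is a minimal, hypothetical illustration; the function and variable names are the author's of this sketch, not part of the questionnaire itself.

```python
# Scoring key from Table 6.2: question numbers that make up each dimension.
SCORING_KEY = {
    "skill_variety": (1, 7, 11),
    "task_identity": (3, 12, 14),
    "task_significance": (5, 9, 15),
    "autonomy": (2, 6, 10),
    "feedback_about_results": (4, 8, 13),
}

def score_job_design(responses):
    """Sum each dimension's three items.

    `responses` maps question number (1-15) to a rating from 1 to 7,
    so every dimension score falls between 3 (low) and 21 (high).
    """
    if any(not 1 <= r <= 7 for r in responses.values()):
        raise ValueError("ratings must be on the 1-7 scale")
    return {dim: sum(responses[q] for q in items)
            for dim, items in SCORING_KEY.items()}

# Example: a respondent who answers "agree" (6) to every item
# scores 18 of a possible 21 on each of the five dimensions.
scores = score_job_design({q: 6 for q in range(1, 16)})
print(scores["autonomy"])  # 18
```

Because each dimension uses exactly three items, the minimum and maximum scores (3 and 21) follow directly from the endpoints of the rating scale.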

6-2b Interviews

A second important measurement technique is the individual or group interview. Interviews are probably the most widely used technique for collecting data in OD. They permit the interviewer to ask the respondent direct questions. Further probing and clarification is, therefore, possible as the interview proceeds. This flexibility is invaluable for gaining private views and feelings about the organization and for exploring new issues that emerge during the interview.

Interviews may be highly structured, resembling questionnaires, or highly unstructured, starting with general questions that allow the respondent to lead the way. Structured interviews typically derive from a conceptual model of organization functioning; the model guides the types of questions that are asked. For example, a structured interview based on the organization-level design components identified in Chapter 5 would ask managers specific questions about strategy, technology, organization structure, management processes, human resources systems, and organization culture.

Unstructured interviews are more general and include the following broad questions about organizational functioning:

• What are the major goals or objectives of the organization or department?
• How does the organization currently perform with respect to these purposes?
• What are the strengths and weaknesses of the organization or department?
• What barriers stand in the way of good performance?

Although interviewing typically involves one-to-one interaction between an OD practitioner and an employee, it can be carried out in a group context. Group interviews save time and allow people to build on others’ responses. A major drawback, however, is that group settings may inhibit some people from responding freely.

A popular type of group interview is the focus group or sensing meeting.9 These are unstructured meetings conducted by a manager or a consultant. A small group of 10 to 15 employees is selected to represent a cross section of functional areas and hierarchical levels or a homogeneous grouping, such as minorities or engineers. Group discussion is frequently started by asking general questions about organizational features and functioning, an OD intervention’s progress, or current performance. Group members are then encouraged to discuss their answers more fully. Consequently, focus groups and sensing meetings are an economical way to obtain interview data and are especially effective in understanding particular issues in greater depth. The richness and validity of the information gathered will depend on the extent to which the manager or the OD practitioner develops a trusting relationship with the group and listens to member opinions.

Another popular unstructured group interview involves assessing the current state of an intact work group. The manager or the consultant generally directs a question to the group, calling its attention to some part of group functioning. For example, group members may be asked how they feel the group is progressing on its stated task. The group might respond and then come up with its own series of questions about barriers to task performance. This unstructured interview is a fast, simple way to collect data about group behavior. It enables members to discuss issues of immediate concern and to engage actively in the questioning and answering process. This technique is limited, however, to relatively small groups and to settings where there is trust among employees and managers and a commitment to assessing group processes.

Interviews are an effective method for collecting data in OD. They are adaptive, allowing the interviewer to modify questions and to probe emergent issues during the interview process. They also permit the interviewer to develop an empathetic relationship with employees, frequently resulting in frank disclosure of pertinent information.

A major drawback of interviews is the amount of time required to conduct and analyze them. Interviews can consume a great deal of time, especially if interviewers take full advantage of the opportunity to hear respondents out and change their questions accordingly. Personal biases also can distort the data. Like questionnaires, interviews are subject to the self-report biases of respondents and, perhaps more important, to the biases of the interviewer. For example, the nature of the questions and the interactions between the interviewer and the respondent may discourage or encourage certain kinds of responses. These problems suggest that interviewing takes considerable skill to gather valid data. Interviewers must be able to understand their own biases, to listen and establish empathy with respondents, and to change questions to pursue issues that develop during the course of the interview.

6-2c Observations

One of the more direct ways of collecting data is simply to observe organizational behaviors in their functional settings. The OD practitioner may do this by walking casually through a work area and looking around or by simply counting the occurrences of specific kinds of behaviors (e.g., the number of times a phone call is answered after three rings in a service department). Observation can range from complete participant observation, in which the OD practitioner becomes a member of the group under study, to more detached observation, in which the observer is clearly not part of the group or situation itself and may use film, videotape, and other methods to record behaviors.

Observations have a number of advantages. They are free of the biases inherent in self-report data. They put the OD practitioner directly in touch with the behaviors in question, without having to rely on others' perceptions. Observations also involve real-time data, describing behavior occurring in the present rather than the past. This avoids the distortions that invariably arise when people are asked to recollect their behaviors. Finally, observations are adaptive in that the consultant can modify what he or she chooses to observe, depending on the circumstances.

130 PART 2 THE PROCESS OF ORGANIZATION DEVELOPMENT

Among the problems with observations are difficulties interpreting the meaning underlying the observations. OD practitioners may need to devise a coding scheme to make sense out of observations, and this can be expensive, take time, and introduce biases into the data. When the observer is the data collection instrument, the data can be biased and subjective unless the observer is trained and skilled in knowing what to look for; how, where, and when to observe; and how to record data systematically. Another problem concerns sampling: Observers not only must decide which people to observe, but they also must choose the time periods, territory, and events in which to make those observations. Failure to attend to these sampling issues can result in highly biased samples of observational data.

When used correctly, observations provide insightful data about organization and group functioning, intervention success, and performance. For example, observations are particularly helpful in diagnosing the interpersonal relations of members of work groups. As discussed in Chapter 5, interpersonal relationships are a key component of work groups; observing member interactions in a group setting can provide direct information about the nature of those relationships.

6-2d Unobtrusive Measures

Unobtrusive data are not collected directly from respondents but from secondary sources, such as company records and archives. These data are generally available in organizations and include records of absenteeism or tardiness; grievances; quantity and quality of production or service; financial performance; meeting minutes; and correspondence with key customers, suppliers, or governmental agencies.

Unobtrusive measures are especially helpful in diagnosing the organization, group, and individual outputs presented in Chapter 5. At the organization level, for example, market share and return on investment usually can be obtained from company reports. Similarly, organizations typically measure the quantity and quality of the outputs of work groups and individual employees. Unobtrusive measures also can help to diagnose organization-level design components—structure, management processes, and human resources systems. A company's organization chart, for example, can provide useful information about organization structure. Information about management processes usually can be obtained by examining the firm's management information system, operating procedures, and accounting practices. Data about human resources systems often are included in a company's employee manual.

Unobtrusive measures provide a relatively objective view of organizational functioning. They are free from respondent and consultant biases and are perceived as being "real" by many organization members. Moreover, unobtrusive measures tend to be quantified and reported at periodic intervals, permitting statistical analysis of behaviors occurring over time. Examining monthly absenteeism rates, for example, might reveal trends in employee withdrawal behavior.

The major problems with unobtrusive measures occur in collecting such information and drawing valid conclusions from it. Company records may not include data in a form that is usable by the OD practitioner. If, for example, individual performance data are needed, the consultant may find that many firms only record production information at the group or department level. Unobtrusive data also may have their own built-in biases. Changes in accounting procedures and in methods of recording data are common in organizations, and such changes can affect company records independently of what is actually happening in the organization. For example, observed changes in productivity over time might be caused by modifications in methods of recording production rather than by actual changes in organizational functioning.

Despite these drawbacks, unobtrusive data serve as a valuable adjunct to other diagnostic measures, such as interviews and questionnaires. Archival data can be used in preliminary diagnosis, identifying those organizational units with absenteeism, grievance, or production problems. Then, interviews might be conducted or observations made in those units to discover the underlying causes of the problems. Conversely, unobtrusive data can be used to cross-check other forms of information. For example, if questionnaires reveal that employees in a department are dissatisfied with their jobs, company records might show whether that discontent is manifested in heightened withdrawal behaviors, in lowered quality work, or in similar counterproductive behaviors.

6-3 Sampling

Before discussing how to analyze data, the issue of sampling needs to be emphasized. Application of the different data collection techniques invariably raises the following questions: "How many people should be interviewed and who should they be?" "What events should be observed and how many?" "How many records should be inspected and which ones?"10

Sampling is not an issue in many OD cases. Because OD practitioners collect interview or questionnaire data from all members of the organization or department in question, they do not have to worry about whether the information is representative of the organization or unit.

Sampling becomes an issue in OD, however, when data are collected from selected members, behaviors, or records. This is often the case when diagnosing organization-level issues or large systems. In these cases, it may be important to ensure that the sample of people, behaviors, or records adequately represents the characteristics of the total population. For example, a sample of 50 employees might be used to assess the perceptions of all 300 members of a department. A sample of production data might be used to evaluate the total production of a work group. OD practitioners often find that it is more economical and quicker to gather a sampling of diagnostic data than to collect all possible information. If done correctly, the sample can provide useful and valid information about the entire organization or unit.

Sampling design involves considerable technical detail, and consultants may need to become familiar with basic references in this area or to obtain professional help.11 The first issue to address is sample size, or how many people, events, or records are needed to carry out the diagnosis or evaluation. This question has no simple answer: The necessary sample size is a function of population size, the confidence desired in the quality of the data, and the resources (money and time) available for data collection.

First, the larger the population (for example, the number of organization members or total number of work outcomes) or the more complex the client system (e.g., the number of salary levels that must be sampled or the number of different functions), the more difficult it is to establish a "right" sample size. As the population increases in size and complexity, simple measures, such as an overall average score on a questionnaire item, are less meaningful. Because the population comprises such different types of people or events, more data are needed to ensure an accurate representation of the potentially different subgroups. Second, the larger the proportion of the population that is selected, the more confidence one can have about the quality of the sample. If the diagnosis concerns an issue of great importance to the organization, then extreme confidence may be needed, indicative of a very large sample size. Third, limited resources constrain sample size. If resources are limited but the required confidence is high, then questionnaires will be preferred over interviews because more information can be collected per member per dollar.

The second issue to address is sample selection. Probably the most common approach to sampling diagnostic data in OD is a simple random sample, in which each member, behavior, or record has an equal chance of being selected. For example, assume that an OD practitioner would like to select 50 people randomly out of the 300 employees at a manufacturing plant. Using a complete list of all 300 employees, the consultant can generate a random sample in one of two ways. The first method is to use a random number table printed in the back of almost any statistics text; the consultant would pick out the employees corresponding to the first 50 numbers under 300 beginning anywhere in the table. The second method is to pick every sixth name (300/50 = 6) starting anywhere in the list.
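Both selection methods can be sketched in a few lines of Python. The 300-name roster is invented for illustration, and the standard random module stands in for the printed random number table:

```python
import random

# Hypothetical roster of the plant's 300 employees.
employees = [f"Employee {i}" for i in range(1, 301)]

random.seed(42)  # fixed seed so the sketch is reproducible

# Method 1: simple random sample -- the software equivalent of a
# random number table; random.sample draws 50 names without repeats.
simple_sample = random.sample(employees, 50)

# Method 2: systematic sample -- every sixth name (300/50 = 6),
# starting from a randomly chosen position among the first six.
start = random.randrange(6)
systematic_sample = employees[start::6]

print(len(simple_sample), len(systematic_sample))  # 50 50
```

Either way, each of the 300 employees has the same chance of ending up among the 50 selected.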

If the population is complex, or many subgroups need to be represented in the sample, a stratified sample may be more appropriate than a random one. In a stratified sample, the population of members, events, or records is segregated into a number of mutually exclusive subpopulations and a random sample is taken from each subpopulation. For example, members of an organization might be divided into three groups (managers, white-collar workers, and blue-collar workers), and a random sample of members, behaviors, or records could be selected from each grouping to reach diagnostic conclusions about each of the groups.
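A proportional stratified draw might look like the following sketch; the three strata and their sizes are invented, and the same sampling fraction (one-sixth) is applied to each subgroup:

```python
import random

# Hypothetical workforce segregated into three mutually exclusive strata.
strata = {
    "managers": [f"Mgr {i}" for i in range(1, 31)],       # 30 people
    "white-collar": [f"WC {i}" for i in range(1, 121)],   # 120 people
    "blue-collar": [f"BC {i}" for i in range(1, 151)],    # 150 people
}

random.seed(7)

# Draw a random sample from each subpopulation, proportional to its size,
# so every subgroup is represented in the overall 50-person sample.
sample = {name: random.sample(members, len(members) // 6)
          for name, members in strata.items()}

for name, drawn in sample.items():
    print(name, len(drawn))  # managers 5, white-collar 20, blue-collar 25
```

Sampling each stratum separately guarantees that even the small managerial group appears in the data, which a single random draw cannot promise.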

Adequate sampling is critical to gathering valid diagnostic data, yet the OD literature has paid little attention to this issue. OD practitioners should gain rudimentary knowledge in this area and use professional help if necessary.

6-4 Analyzing Data

Data analysis techniques fall into two broad classes: qualitative and quantitative. Qualitative techniques generally are easier to use because they do not rely on numerical data; that also makes them easier to understand and interpret, though more open to subjective biases. Quantitative techniques, on the other hand, can provide more accurate readings of the organizational problem.

6-4a Qualitative Tools

Of the several methods for summarizing diagnostic data in qualitative terms, two of the most important are content analysis and force-field analysis.

Content Analysis

A popular technique for assessing qualitative data, especially interview data, is content analysis, which attempts to summarize comments into meaningful categories. When done well, a content analysis can reduce hundreds of interview comments into a few themes that effectively summarize the issues or attitudes of a group of respondents. The process of content analysis can be quite formal, and specialized references describe this technique in detail.12 In general, however, the process can be broken down into three major steps. First, responses to a particular question are read to gain familiarity with the range of comments made and to determine whether some answers are occurring over and over again. Second, based on this sampling of comments, themes are generated that capture recurring comments. Themes consolidate different responses that say essentially the same thing. For example, in answering the question "What do you like most about your job?," different respondents might list their coworkers, their supervisors, the new machinery, and a good supply of tools. The first two answers concern the social aspects of work, and the second two address the resources available for doing the work. Third, the respondents' answers to a question are then placed into one of the categories. The categories with the most responses represent those themes that are most often mentioned.
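These three steps can be illustrated with a minimal sketch. The comments, theme names, and keyword lists below are invented for the example in the text; in practice, the themes would emerge from a first reading of the responses rather than being fixed in advance:

```python
# Step 1 input: raw answers to "What do you like most about your job?"
comments = [
    "my coworkers", "a supportive supervisor", "the new machinery",
    "a good supply of tools", "friendly colleagues",
]

# Step 2: themes generated from a first reading, each with keywords
# that capture responses saying essentially the same thing.
themes = {
    "social aspects of work": ["coworker", "supervisor", "colleague"],
    "resources for doing the work": ["machin", "tool", "equipment"],
}

# Step 3: place each comment into one category and count the results.
counts = {theme: 0 for theme in themes}
for comment in comments:
    for theme, keywords in themes.items():
        if any(k in comment.lower() for k in keywords):
            counts[theme] += 1
            break  # each comment goes into only one category

print(counts)  # social aspects: 3 mentions, resources: 2 mentions
```

The categories with the largest counts are the themes most often mentioned by respondents.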

Force-Field Analysis

A second method for analyzing qualitative data in OD derives from Kurt Lewin's three-step model of change described in Chapter 2. Called force-field analysis, this method organizes information pertaining to organizational change into two major categories: forces for change and forces for maintaining the status quo or resisting change.13 Using data collected through interviews, observations, or unobtrusive measures, the first step in conducting a force-field analysis is to develop a list of all the forces promoting change and all those resisting it. Then, based either on the OD practitioner's personal belief or perhaps on input from several organization members, the most powerful positive and negative forces are determined. One can either rank the order or rate the strength of the different forces.

Figure 6.2 illustrates a force-field analysis of the performance of a work group. The arrows represent the forces, and the length of the arrows corresponds to the strength of the forces. The information could have been collected in a group interview in which members were asked to list those factors maintaining the current level of group performance and those factors pushing for a higher level. Members also could have been asked to judge the strength of each force, with the average judgment shown by the length of the arrows.

This analysis reveals two strong forces pushing for higher performance: pressures from the supervisor of the group and competition from other work groups performing similar work. These forces for change are offset by two strong forces for maintaining the status quo: group norms supporting present levels of performance and well-learned skills that are resistant to change. According to Lewin, efforts to change to a higher level of group performance, shown by the darker band in Figure 6.2, should focus on reducing the forces maintaining the status quo. This might entail changing the group's performance norms and helping members to learn new skills. The reduction of forces maintaining the status quo is likely to result in organizational change with little of the tension or conflict typically accompanying change caused by increasing the forces for change.
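The force-field tally behind Figure 6.2 can be sketched as follows; the force names echo the figure, and the strength ratings (1 = weak, 5 = strong) are illustrative:

```python
# Forces rated for strength, as members might judge them in a group interview.
forces_for_change = {
    "pressure from the supervisor": 5,
    "competition from other work groups": 4,
}
forces_for_status_quo = {
    "group performance norms": 5,
    "well-learned existing skills": 4,
}

push = sum(forces_for_change.values())
resist = sum(forces_for_status_quo.values())
print(f"for change: {push}, for status quo: {resist}")  # evenly balanced: 9 vs. 9

# Following Lewin, change efforts should focus on reducing the
# strongest forces maintaining the status quo.
strongest = max(forces_for_status_quo, key=forces_for_status_quo.get)
print("reduce first:", strongest)
```

Even this simple tally makes the Lewinian prescription concrete: when the two sides balance, shrink the largest status-quo force rather than adding pressure for change.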

Application 6.1 describes another installment in the change evaluation process at Alegent Health. (The introduction of this longitudinal case began in Chapter 4.) In this application, the research team collected data from interviews and questionnaires, but also used observation and unobtrusive measures. The analysis used a combination of qualitative and quantitative techniques. What do you see as the strengths and weaknesses of the data collection and analysis process at Alegent?

6-4b Quantitative Tools

Methods for analyzing quantitative data range from simple descriptive statistics of items or scales from standard instruments to more sophisticated, multivariate analysis of the underlying instrument properties and relationships among measured variables.14 The most common quantitative tools are means, standard deviations, and frequency distributions; scattergrams and correlation coefficients; and difference tests. These measures are routinely produced by most statistical computer software packages. Therefore, mathematical calculations are not discussed here.

FIGURE 6.2
Force-Field Analysis of Work-Group Performance
© Cengage Learning

Means, Standard Deviations, and Frequency Distributions

One of the most economical and straightforward ways to summarize quantitative data is to compute a mean and standard deviation for each item or variable measured. These represent the respondents' average score and the spread or variability of the responses, respectively. These two numbers easily can be compared across different measures or subgroups. For example, Table 6.3 shows the means and standard deviations for six questions asked of 100 employees concerning the value of different kinds of organizational rewards. Based on the 5-point scale ranging from 1 (very low value) to 5 (very high value), the data suggest that challenging work and respect from peers are the two most highly valued rewards. Monetary rewards, such as pay and fringe benefits, are not as highly valued.

However, the mean can be a misleading statistic. It only describes the average value and thus provides no information on the distribution of the responses. Different patterns of responses can produce the same mean score. Therefore, it is important to use the standard deviation along with the frequency distribution to gain a clearer understanding of the data. The frequency distribution is a graphical method for displaying data that shows the number of times a particular response was given. For example, the data in Table 6.3 suggest that both pay and praise from the supervisor are equally valued with a mean of 4.0. However, the standard deviations for these two measures are very different at 0.71 and 1.55, respectively. Table 6.4 shows the frequency distributions of the responses to the questions about pay and praise from the supervisor. Employees' responses to the value of pay are distributed toward the higher end of the scale, with no one rating it of low or very low value. In contrast, responses about the value of praise from the supervisor fall into two distinct groupings: Twenty-five employees felt that supervisor praise has a low or very low value, whereas 75 people rated it high or very high. Although both rewards have the same mean value, their standard deviations and frequency distributions suggest different interpretations of the data.

application 6.1
COLLECTING AND ANALYZING DIAGNOSTIC DATA AT ALEGENT HEALTH

The two applications in Chapter 4 described the entering and contracting processes at the Alegent Health (AH) organization. As a result of a recent merger and the hiring of a new CEO and chief innovation officer (CIO), the organization had implemented a series of large group interventions, known as decision accelerators (DAs), to generate innovative strategies in the six clinical service areas of women's and children's services, oncology, behavioral health, neuroscience, orthopedics, and cardiology. Alegent Health then hired two OD researchers to evaluate its change progress. The evaluation was intended to help AH understand what had changed, what had been learned, the impact of those changes, and how they might extend those changes and learnings into the future. The diagnostic phase involved the collection and analysis of unobtrusive, interview, and survey data.

UNOBTRUSIVE MEASURES

Immediately following each DA, the Right Track office (a group set up to manage the DA experience) compiled a report listing participant names and affiliations, an agenda, instructions and elapsed times for each activity and process, photographs of different activities and all small-group outputs, and nearly verbatim transcripts of the large-group report-outs, activity debriefings, and discussions.

These reports were analyzed to understand the process and outcomes associated with each DA. The researchers created a coding scheme and process to capture the characteristics of the participants, the nature of the process, and a description of the DA outputs. Two coders analyzed the data to ensure the reliability of the analysis.

First, the results suggested that the DAs varied in their composition. For example, some DAs were composed of higher percentages of physicians or community members than other DAs. Second, some DAs were more "intense" than others as indicated by the amount of debate over decisions or issues, the number of different stakeholders who participated in the debates and discussions, and the extent to which the DA's activities deviated from the preset agenda. Finally, some DAs produced comprehensive visions and strategies for their clinical area, while others produced visions that were more narrowly focused.

INTERVIEW MEASURES

A second data set consisted of interviews with various stakeholder groups. Initial interviews were conducted with executives and physicians about (1) the context of change at Alegent, including organization history, strategy, and recent changes; (2) their reflections on the DA process; and (3) clinical area implementation progress. The researchers conducted a second round of interviews with people who were closely connected with the implementation of each clinical service-area strategy. They were asked questions about the clarity of action plans, the level of involvement of different people, and implementation progress. Finally, a third set of interviews was conducted with a sample of staff nurses who had not participated in the original DAs or been directly involved in implementation activities, such as steering committees or design teams.

Each set of interview data was content analyzed for key themes and perspectives. A few of the summary results from the initial interviews are presented here.

When asked, "How clear were the action plans coming out of the DA?," the executives were evenly split in their beliefs that the action plans were clear as opposed to the plans being essentially absent. Executives were also asked, "What is going well/not so well in implementation of the different service line strategies?" About 20% of executives believed that the strategies were aligned with the mission/vision of the health system and that the DAs had provided a clear vision to guide change. However, more than half of executives expressed concern that the organization lacked a real change capability. Executives were also concerned about being overwhelmed by change, insufficient communication, and the need to involve stakeholders more.

When asked, "What would you list as the 'high points' or 'best success stories' of the DA process?" and "What have been some of the least successful activities/concerns?," the answers were more positive than negative. Nearly all of the interviewees noted the improved relationships with physicians, and more than a third of executives said there had been some good learning on how to increase the speed of decision making. Both of these results reflected cultural changes in the organization that were among the purposes for conducting the DAs. On the negative side, a small percentage of executives noted the continued difficulties associated with coordinating the operations of a multihospital system.

Another area of interview data concerned executive perceptions of how the DA might evolve in the future. There was a strong belief that the DA needed to evolve to fit the changed organizational conditions and a widespread perception that this should include a more explicit focus on execution, better change governance, and better follow-up and communication.

In addition to these initial interview results, data from the second round of implementation interviews were used to develop six case studies, one for each clinical service area. They described the initial DA event and the subsequent decisions, activities, and events for the 18 months following the formation of the clinical strategies. Importantly, the case studies listed the organizational changes that most people agreed had been implemented in the first 18 months. Each case study was given to the VP in charge of the clinical area for validation.

SURVEY MEASURES

The researchers also collected two sets of survey data. The first survey, administered during the initial round of executive and physician interviews, asked them to rate several dimensions of clinical area strategy and progress. The second survey was administered to people who attended a "review DA" for three of the six clinical areas. It too measured perceptions of clinical strategy and progress.

The survey data were organized into three categories and analyzed by a statistical program. The first category measured five dimensions of strategy for each clinical area: comprehensiveness, innovativeness, aggressiveness, congruence with Alegent's strategy, and business focus. Both executives and managers rated the clinical strategies highest on comprehensiveness and lowest on congruence with Alegent's mission. Executives also rated the strategies lower on innovativeness. In all dimensions and for each clinical area, managers rated the five dimensions higher than executives did.

The second category measured how well the implementation process was being managed. Executives "somewhat agreed" that the clinical area strategies were associated with a clear action plan; however, there was considerable variance, suggesting that some clinical areas had better action plans than others. Similarly, managers "somewhat agreed" that change governance systems exist and that change was coordinated.

The third category assessed implementation success. As with the strategy dimensions, managers rated overall implementation progress higher than executives did, but both groups were somewhat guarded (between neutral and agree) in their responses. Managers were asked a more detailed set of questions about implementation. There was more agreement that the clinical strategies were the "right thing to do" and had helped to "build social capital" in the organization, but they were neutral with respect to whether "people feel involved" in the change.

In general, when the standard deviation for a set of data is high, there is considerable disagreement over the issue posed by the question. If the standard deviation is small, the data are similar on a particular measure. In the example described above, there is disagreement over the value of supervisory praise (some people think it is important, but others do not), but there is fairly good agreement that pay is a reward with high value.
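The statistics in this example can be reproduced from the frequency counts in Table 6.4 using Python's standard statistics module (pstdev gives the population standard deviation, which matches the values reported in Table 6.3):

```python
from statistics import mean, pstdev
from collections import Counter

# Reconstruct the 100 individual responses behind Table 6.4.
pay = [3] * 25 + [4] * 50 + [5] * 25
praise = [1] * 15 + [2] * 10 + [4] * 10 + [5] * 65

for name, scores in [("pay", pay), ("praise", praise)]:
    print(name, round(mean(scores), 2), round(pstdev(scores), 2))
    # frequency distribution: how many times each response was given
    print("  distribution:", sorted(Counter(scores).items()))
# pay:    mean 4.0, standard deviation 0.71
# praise: mean 4.0, standard deviation 1.55
```

The identical means and very different standard deviations fall straight out of the raw distributions, mirroring the interpretation in the text.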

Scattergrams and Correlation Coefficients

In addition to describing data, quantitative techniques also permit OD practitioners to make inferences about the relationships between variables. Scattergrams and correlation coefficients are measures of the strength of a relationship between two variables. For example, suppose the problem being faced by an organization is increased conflict between the manufacturing department and the engineering design department. During the data collection phase, information about the number of conflicts and change orders per month over the past year is collected. The data are shown in Table 6.5 and plotted in a scattergram in Figure 6.3.

TABLE 6.3
Descriptive Statistics of Value of Organizational Rewards

Organizational Rewards    Mean    Standard Deviation
Challenging work          4.6     0.76
Respect from peers        4.4     0.81
Pay                       4.0     0.71
Praise from supervisor    4.0     1.55
Promotion                 3.3     0.95
Fringe benefits           2.7     1.14

Number of respondents = 100; 1 = very low value, 5 = very high value
© Cengage Learning


A scattergram is a diagram that visually displays the relationship between two variables. It is constructed by locating each case (person or event) at the intersection of its value for each of the two variables being compared. For example, in the month of August, there were eight change orders and three conflicts; that point is plotted in Figure 6.3.

Three basic patterns can emerge from a scattergram, as shown in Figure 6.4. The first pattern is called a positive relationship because as the values of x increase, so do the values of y. The second pattern is called a negative relationship because as the values of x increase, the values of y decrease. Finally, there is the “shotgun” pattern wherein no relationship between the two variables is apparent. In the example shown in Figure 6.3, an apparently strong positive relationship exists between the number of change orders and the number of conflicts between the engineering design department and the manufacturing department. This suggests that change orders may contribute to the observed conflict between the two departments.

The correlation coefficient is simply a number that summarizes data in a scattergram. Its value ranges between −1.0 and +1.0. A correlation coefficient of +1.0 means that there is a perfectly positive relationship between two variables, whereas a correlation of −1.0 signifies a perfectly negative relationship. A correlation of 0 implies a "shotgun" scattergram where there is no relationship between two variables.

TABLE 6.4
Frequency Distributions of Responses to "Pay" and "Praise from Supervisor" Items

Pay (Mean = 4.0)
Response               Number Checking Each Response    Graph*
(1) Very low value      0
(2) Low value           0
(3) Moderate value     25                               XXXXX
(4) High value         50                               XXXXXXXXXX
(5) Very high value    25                               XXXXX

Praise from Supervisor (Mean = 4.0)
Response               Number Checking Each Response    Graph*
(1) Very low value     15                               XXX
(2) Low value          10                               XX
(3) Moderate value      0
(4) High value         10                               XX
(5) Very high value    65                               XXXXXXXXXXXXX

*Each X = five people checking the response
© Cengage Learning
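The correlation coefficient for the change-order and conflict data in Table 6.5 can be computed directly from its definition:

```python
from statistics import mean
from math import sqrt

# Monthly counts from Table 6.5.
change_orders = [5, 12, 14, 6, 8, 20, 10, 2, 15, 8, 18, 10]
conflicts = [2, 4, 3, 2, 3, 5, 2, 1, 4, 3, 4, 5]

mx, my = mean(change_orders), mean(conflicts)

# Pearson's r: covariance of the two series divided by the
# product of their standard deviations.
covariance = sum((x - mx) * (y - my)
                 for x, y in zip(change_orders, conflicts))
r = covariance / sqrt(sum((x - mx) ** 2 for x in change_orders)
                      * sum((y - my) ** 2 for y in conflicts))
print(round(r, 2))  # strongly positive, consistent with Figure 6.3
```

The result, roughly 0.78, quantifies the strong positive relationship visible in the scattergram.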

Difference Tests

The final technique for analyzing quantitative data is the difference test. It can be used to compare a sample group against some standard or norm to determine whether the group is above or below that standard. It also can be used to determine whether two samples are significantly different from each other. In the first case, such comparisons provide a broader context for understanding the meaning of diagnostic data. They serve as a "basis for determining 'how good is good or how bad is bad.' "15 Many standardized questionnaires have standardized scores based on the responses of large groups of people. It is critical, however, to choose a comparison group that is similar to the organization being diagnosed. For example, if 100 engineers take a standardized attitude survey, it makes little sense to compare their scores against standard scores representing married males from across the country. On the other hand, if industry-specific data are available, a comparison of sales per employee (as a measure of productivity) against the industry average would be valid and useful.

The second use of difference tests involves assessing whether two or more groups differ from one another on a particular variable, such as job satisfaction or absenteeism. For example, job satisfaction differences between an accounting department and a sales department can be determined with this tool. Given that each group took the same questionnaire, their means and standard deviations can be used to compute a difference score (t-score or z-score) indicating whether the two groups are statistically different.

TABLE 6.5
Relationship between Change Orders and Conflicts

Month        Number of Change Orders   Number of Conflicts
April                  5                        2
May                   12                        4
June                  14                        3
July                   6                        2
August                 8                        3
September             20                        5
October               10                        2
November               2                        1
December              15                        4
January                8                        3
February              18                        4
March                 10                        5
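Using the change-order and conflict counts in Table 6.5, the correlation coefficient can be computed directly. The Python sketch below is illustrative (the helper function is ours, not from the text); it implements the standard Pearson formula:

```python
import math

# Monthly counts from Table 6.5: (change orders, conflicts)
data = [(5, 2), (12, 4), (14, 3), (6, 2), (8, 3), (20, 5),
        (10, 2), (2, 1), (15, 4), (8, 3), (18, 4), (10, 5)]

def pearson_r(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    syy = sum(y * y for _, y in pairs)
    sxy = sum(x * y for x, y in pairs)
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

print(f"r = {pearson_r(data):.2f}")
```

For these data r comes out at roughly 0.78, consistent with the strong positive pattern visible in the scattergram: months with more change orders tend to have more conflicts.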


140 PART 2 THE PROCESS OF ORGANIZATION DEVELOPMENT

The larger the difference score relative to the sample size and standard deviation for each group, the more likely that one group is more satisfied than the other.

Difference tests also can be used to determine whether a group has changed its score on job satisfaction or some other variable over time. The same questionnaire can be given to the same group at two points in time. Based on the group’s means and standard

FIGURE 6.3

Scattergram of Change Order versus Conflict

FIGURE 6.4

Basic Scattergram Patterns



deviations at each point in time, a difference score can be calculated. The larger the score, the more likely the group actually changed its job satisfaction level.

The calculation of difference scores can be very helpful for diagnosis but requires the OD practitioner to make certain assumptions about how the data were collected. These assumptions are discussed in most standard statistical texts, and OD practitioners should consult them before calculating difference scores for purposes of diagnosis or evaluation.16
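For the two-sample case described above, a difference score can be computed from each group's mean, standard deviation, and sample size. The Python sketch below uses hypothetical satisfaction figures (not from the text) and the usual large-sample z formula; it is a sketch, not a substitute for the statistical checks the text recommends:

```python
import math

def difference_score(mean1, sd1, n1, mean2, sd2, n2):
    """z-type difference score for two independent group means."""
    standard_error = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / standard_error

# Hypothetical job-satisfaction results (1-5 scale) for two departments
z = difference_score(3.8, 0.9, 40,   # accounting: mean, sd, n
                     3.2, 1.1, 35)   # sales: mean, sd, n
print(f"z = {z:.2f}")
```

Here z works out to about 2.56, which exceeds the conventional 1.96 cutoff, so the two departments' satisfaction levels likely differ. The same function can compare one group's scores at two points in time by treating the two administrations as the two samples.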

6-5 Feeding Back Data

Perhaps the most important step in the diagnostic process is feeding back diagnostic information to the client organization. Although the data may have been collected with the client's help, the OD practitioner often organizes and presents them to the client. Properly analyzed and meaningful data can have an impact on organizational change only if organization members can use the information to devise appropriate action plans. A key objective of the feedback process is to be sure that the client has ownership of the data.

As shown in Figure 6.5, the success of data feedback depends largely on its ability to arouse organizational action and to direct energy toward problem solving. Whether feedback helps to energize the organization depends on the content of the feedback data and on the process by which they are fed back to organization members.

6-5a Content of Feedback

In the course of diagnosing the organization, a large amount of data is collected—often, more information than the client needs or can interpret in a realistic period of time. If too many data are fed back, the client may decide that changing is impossible. Therefore, OD practitioners need to summarize the data in ways that enable clients to understand the information and draw action implications from it. The techniques for data analysis described earlier in this chapter can inform this task. Additional criteria for determining the content of diagnostic feedback are described below.

Several characteristics of effective feedback data have been described in the literature.17 They include the following nine properties:

1. Relevant. Organization members are likely to use feedback data for problem solving when they find the information meaningful. Including managers and employees in the initial data collection activities can increase the relevance of the data.

2. Understandable. Data must be presented to organization members in a form that is readily interpreted. Statistical data, for example, can be made understandable through the use of graphs and charts.

3. Descriptive. Feedback data need to be linked to real organizational behaviors if they are to arouse and direct energy. The use of examples and detailed illustrations can help employees gain a better feel for the data.

4. Verifiable. Feedback data should be valid and accurate if they are to guide action. Thus, the information should allow organization members to verify whether the findings really describe the organization. For example, questionnaire data might include information about the sample of respondents as well as frequency distributions for each item or measure. Such information can help members verify whether the feedback data accurately represent organizational events or attitudes.

5. Timely. Data should be fed back to members as quickly as possible after being collected and analyzed. This will help ensure that the information is still valid and is linked to members' motivations to examine it.


6. Limited. Because people can easily become overloaded with too much information, feedback data should be limited to what employees can realistically process at one time.

7. Significant. Feedback should be limited to those problems that organization members can do something about because it will energize them and help direct their efforts toward realistic changes.

8. Comparative. Feedback data can be ambiguous without some benchmark as a reference. Whenever possible, data from comparative groups should be provided to give organization members a better idea of how their group fits into a broader context.

9. Unfinalized. Feedback is primarily a stimulus for action and thus should spur further diagnosis and problem solving. Members should be encouraged, for example, to use the data as a starting point for more in-depth discussion of organizational issues.

FIGURE 6.5

Possible Effects of Feedback



6-5b Process of Feedback

In addition to providing effective feedback data, it is equally important to attend to the process by which that information is fed back to people. Typically, data are provided to organization members in a meeting or series of meetings. Feedback meetings provide a forum for discussing the data, drawing relevant conclusions, and devising preliminary action plans. Because the data might include sensitive material and evaluations about organization members' behaviors, people may come to the meeting with considerable anxiety and fear about receiving the feedback. This anxiety can result in defensive behaviors aimed at denying the information or providing rationales. More positively, people can be stimulated by the feedback and the hope that desired changes will result from the feedback meeting. Because people are likely to come to feedback meetings with anxiety, fear, and hope, OD practitioners need to manage the feedback process so that constructive discussion and problem solving occur. The most important objective of the feedback process is to ensure that organization members own the data. Ownership is the opposite of resistance to change and refers to people's willingness to take responsibility for the data, their meaning, and the consequences of using them to devise a change strategy.18

If the feedback session results in organization members rejecting the data as invalid or useless, then the motivation to change is lost and members will have difficulty engaging in a meaningful process of change.

Ownership of the feedback data is facilitated by the following five features of successful feedback processes:19

1. Motivation to work with the data. Organization members need to feel that working with the feedback data will have beneficial outcomes. This may require explicit sanction and support from powerful groups so that people feel free to raise issues and to identify concerns during the feedback sessions. If members have little motivation to work with the data or feel that there is little chance to use the data for change, then the information will not be owned by the client system.

2. Structure for the meeting. Feedback meetings need some structure or they may degenerate into chaos or aimless discussion. An agenda or outline for the meeting and the presence of a discussion leader can usually provide the necessary direction. If the meeting is not kept on track, especially when the data are negative, ownership can be lost in conversations that become too general. When this happens, the energy gained from dealing directly with the problem is lost.

3. Appropriate attendance. Generally, organization members who have common problems and can benefit from working together should be included in the feedback meeting. This may involve a fully intact work team or groups comprising members from different functional areas or hierarchical levels. Without proper representation in the meeting, ownership of the data is lost because participants cannot address the problem(s) suggested by the feedback.

4. Appropriate power. It is important to clarify the power possessed by the group receiving the feedback data. Members need to know on which issues they can make necessary changes, on which they can only recommend changes, and over which they have no control. Unless there are clear boundaries, members are likely to have some hesitation about using the feedback data for generating action plans. Moreover, if the group has no power to make changes, the feedback meeting will become an empty exercise rather than a real problem-solving session. Without the power to address change, there will be little ownership of the data.

5. Process help. People in feedback meetings require assistance in working together as a group. When the data are negative, there is a natural tendency to resist the implications, deflect the conversation onto safer subjects, and the like. An OD practitioner with group process skills can help members stay focused on the subject and improve feedback discussion, problem solving, and ownership.

When combined with effective feedback data, these features of successful feedback meetings enhance member ownership of the data. They help to ensure that organization members fully discuss the implications of the diagnostic information and that their conclusions are directed toward relevant and feasible organizational changes.

Application 6.2 presents excerpts from some training materials that were delivered to a group of internal OD facilitators at a Fortune 100 telecommunications company.20 It describes how the facilitators were trained to deliver the results of a survey concerning problem solving, team functioning, and perceived effectiveness.

6-6 Survey Feedback

Survey feedback is a process of collecting and feeding back data from an organization or department through the use of a questionnaire or survey. The data are analyzed, fed back to organization members, and used by them to diagnose the organization and to develop interventions to improve it. Because questionnaires often are used in organization diagnosis, particularly in OD efforts involving large numbers of participants, and because it is a powerful intervention in its own right, survey feedback is discussed here as a special case of data feedback.

As discussed in Chapter 1, survey feedback is a major technique in the history and development of OD. Originally, this intervention included only data from questionnaires about members' attitudes. However, attitudinal data can be supplemented with interview data and more objective measures, such as productivity, turnover, and absenteeism.21 Another trend has been to combine survey feedback with other OD interventions, including work design, structural change, large group interventions, and intergroup relations. These change methods are the outcome of the planning and implementation phase following from survey feedback and are described fully in Chapters 10–20.

6-6a What Are the Steps?

Survey feedback generally involves the following five steps:22

1. Members of the organization, including those at the top, are involved in preliminary planning of the survey. In this step, all parties must be clear about the level of analysis (organization, group, or job) and the objectives of the survey. Because most surveys derive from a model about organization or group functioning, organization members must, in effect, approve that diagnostic framework. This is an important initial step in gaining ownership of the data and in ensuring that the right problems and issues are addressed by the survey.

Once the objectives are determined, the organization can use one of the standardized questionnaires described earlier in this chapter, or it can develop its own survey instrument. If the survey is developed internally, pretesting the questionnaire is essential to ensure that it has been constructed properly. In either case, the survey items need to reflect the objectives established for the survey and the diagnostic issues being addressed.

2. The survey instrument is administered to all members of the organization or work group. This breadth of data collection is ideal, but it may be appropriate to administer


APPLICATION 6.2

TRAINING OD PRACTITIONERS IN DATA FEEDBACK

As part of a large-scale, employee involvement (EI) program, a large telecommunications company and the Communications Workers of America union were working to build an internal organization development consulting capability. This involved the hiring and training of several union and management employees to work with managers, facilitate EI problem-solving team meetings, and assist in the implementation of recommended changes. The implementation process included an evaluation component and the EI facilitators were expected to collect and feed back data to the organization.

The data collected included observation of various work processes and problem-solving meetings; unobtrusive measures such as minutes from all meetings, quarterly income statements, operational reports, and communications; and questionnaire and interview data. A three-page questionnaire was administered every three months and it asked participants on EI problem-solving teams for their perceptions of team functioning and performance. Internal EI facilitators were appointed from both management and union employees, and part of their work required them to feed back the results of the quarterly surveys.

To provide timely feedback to the problem-solving teams, the EI facilitators were trained to deliver survey feedback. Some of the material developed for that training is summarized below.

I. PLANNING FOR A SURVEY-FEEDBACK SESSION

The success of a survey-feedback meeting often has more to do with the level of preparation for the meeting than with anything else. There are several things to do in preparing for a survey-feedback meeting.

A. Distribute copies of the feedback report in advance. This enables people to devote more time at the meeting to problem solving and less to just digesting the data. This is especially important when a large quantity of data is being presented.

B. Think about substantive issues in advance. Formulate your own view of what the data suggest about the strengths and weaknesses of the group. Does the general picture appear to be positive or problematic? Do the data fit the experience of the group as you know it? What issues do the data suggest need group attention? Is the group likely to avoid any of these issues? If so, how will you help the group confront the difficult issues?

C. Make sure you can answer likely technical questions about the data. Survey data have particular strengths and weaknesses. Be able to acknowledge that the data are not perfect, but that a lot of effort has gone into ensuring that they are reliable and valid.

D. Plan your introduction to the survey-feedback portion of the meeting. Make the introduction brief and to the point. Remind the group of why it is considering the data, set the stage for problem solving by pointing out that many groups find such data helpful in tracking their progress, and be prepared to run through an example that shows how to understand the feedback data.

II. PROBLEM SOLVING WITH SURVEY-FEEDBACK DATA

A. Chunk the feedback. If a lot of data are being fed back, use your knowledge of the group and the data to present small portions of data. Stop periodically to see if there are questions or comments about each section or "chunk" of data.

B. Stimulate discussion on the data. What follows are various ways to help get the discussion going.

1. Help clarify the meaning of the data by asking

• What questions do you have about what the data mean?


• What does [a specific number] mean?

• Does anything in the data surprise you?

• What do the data tell you about how we’re doing as a group?

2. Help develop a shared diagnosis about the meaning of the data by commenting

• What I hear people saying is… Does everyone agree with that?

• Several people are saying that… is a problem. Do we agree that this is something the group needs to address?

• Some people seem to be saying… while other comments suggest… Can you help me understand how the group sees this?

• The group has really been struggling with [specific issue that the facilitator is familiar with], but the data say that we are strong on this. Can someone explain this?

3. Help generate action alternatives by asking

• What are some of the things we can do to resolve… ?

• Do we want to brainstorm some action steps to deal with… ?

C. Focus the group on its own data. The major benefit of survey feedback for EI teams will be in learning about the group's own behavior and outcomes. Often, however, groups will avoid dealing with issues concerning their own group in favor of broader and less helpful discussions about what other groups are doing right and wrong. Comments you might use to help get the group on track include:

1. What do the data say about how we are doing as a group?

2. There isn't a lot we can do about what other groups are doing. What can we do about the things that are under our control?

3. The problem you are mentioning sounds like one this group also is facing [explain]. Is that so?

D. Be prepared for problem-solving discussions that are only loosely connected to the data. It is more important for the group to use the data to understand itself better and to solve problems than it is to follow any particular steps in analyzing the data. Groups often are not very systematic in how they analyze survey-feedback data. They may ignore issues that seem obvious to them and instead focus on one or two issues that have meaning for them.

E. Hot issues and how to deal with them. Survey data can be particularly helpful in addressing some hot issues within the group that might otherwise be overlooked. For example, a group often will prefer to portray itself as very effective even though group members privately acknowledge that such is not the case. If the data show problems that are not being addressed, you can raise this issue as a point for discussion. If someone denies that group members feel there is a problem, you can point out that the data come from the group and that group members reported such-and-such on the survey. Be careful not to use a parental tone; if you sound like you're wagging your finger at or lecturing the group, you're likely to get a negative reaction. Use the data to raise issues for discussion in a less emotional way.

Ultimately, the group must take responsibility for its own use of the data. There will be times when the OD practitioner sees the issues differently from the way group members see them or times when it appears certain to the practitioner that the group has a serious problem that it refuses to acknowledge. A facilitator cannot push a group to do something it's not ready to do, but he or she can poke the group at times to find out if it is ready to deal with tough issues. "A little irritation is what makes a pearl in the oyster."


the instrument to only a sample of members because of cost or time constraints. If so, the size of the sample should be as large as possible to improve the motivational basis for participation in the feedback sessions.

3. The OD practitioner usually analyzes the survey data, tabulates the results, suggests approaches to diagnosis, and trains client members to lead the feedback process.

4. Data feedback usually begins at the top of the organization and cascades downward to groups reporting to managers at successively lower levels. This waterfall approach ensures that all groups at all organizational levels involved in the survey receive appropriate feedback. Most often, members of each organization group at each level discuss and deal with only that portion of the data involving their particular group. They, in turn, prepare to introduce data to groups at the next lower organizational level if appropriate.

Data feedback also can occur in a "bottom-up" approach. Initially, the data for specific work groups or departments are fed back and action items proposed. At this point, the group addresses problems and issues within its control. The group notes any issues that are beyond its authority and suggests actions. That information is combined with information from groups reporting to the same manager, and the combined data are fed back to the managers who review the data and the recommended actions. Problems that can be solved at this level are addressed. In turn, their analyses and suggestions regarding problems of a broader nature are combined, and feedback and action sessions proceed up the hierarchy. In such a way, the people who most likely will carry out recommended action get the first chance to propose suggestions.

5. Feedback meetings provide an opportunity to work with the data. At each meeting, members discuss and interpret their data, diagnose problem areas, and develop action plans. OD practitioners can play an important role during these meetings,23 facilitating group discussion to produce accurate understanding, focusing the group on its strengths and weaknesses, and helping to develop effective action plans.

Although the preceding steps can have a number of variations, they generally reflect the most common survey-feedback design. Application 6.3 presents a contemporary example of how the survey-feedback methodology can be adapted to serve strategic purposes. The application describes how Cambia Health Solutions used a survey and survey feedback process to initiate a strategic change effort.

6-6b Survey Feedback and Organizational Dependencies

Traditionally, the steps of survey feedback have been applied to work groups and organizational units with little attention to dependencies among them. Research suggests, however, that the design of survey feedback should vary depending on how closely the participating units are linked with one another.24 When the units are relatively independent and have little need to interact, survey feedback can focus on the dynamics occurring within each group and can be applied to the groups separately. When there is greater dependency among units and they need to coordinate their efforts, survey feedback must take into account relationships among the units, paying particular attention to the possibility of intergroup conflict. In these situations, the survey-feedback process needs to be coordinated across the interdependent groups. The process will typically be managed by special committees and task forces representing the groups. They will facilitate the intergroup confrontation and conflict resolution generally needed when relations across groups are diagnosed.


APPLICATION 6.3
SURVEY FEEDBACK AND PLANNED CHANGE AT CAMBIA HEALTH SOLUTIONS

Cambia Health Solutions (www.cambiahealth.com) is a nonprofit total health solutions company dedicated to transforming the way people experience the health care system. Located in the Pacific Northwest and intermountain region of the United States, Cambia's portfolio of companies spans health care information technology and software development; retail health care; health insurance plans; pharmacy benefit management; life, disability, dental, vision, and other lines of protection; alternative solutions to health care access; and freestanding health and wellness solutions. The largest business in the portfolio is Regence Health, a health insurance plan associated with the Blue Cross and Blue Shield brands. Regence Health is over 90 years old and operates in Washington, Oregon, Idaho, and Utah.

To support this increasingly broad portfolio, Cambia had restructured itself into two divisions: Regence Insurance Holding Company and Direct Health Solutions. All of the start-up, alternative health care products and services were housed in the direct health solutions division. In 2009, the organization was concerned about the health care reform initiatives taking place in Washington, D.C. and more specifically the implications of the recently passed "Obamacare" legislation. What were the implications of establishing regional health exchanges and accountable care organizations? How would the organization have to change? In particular, was the organization's culture "fit for the future?"

As corporate sponsors of USC's Center for Effective Organizations, the vice president of human resources and the director of organization development called the Center to talk about the latest thinking in organization culture and how they might go about managing cultural change. After several conversations about different approaches and the research being done at the Center regarding organization design, change, and agility, the researchers proposed an assessment process of Cambia's current organization in terms of how people saw the strategies, structures, systems, and culture. A design team composed of the executive vice president of corporate services, the VP of HR, the director of OD, and an internal HR business partner worked with the researcher to make the assessment relevant.

In early 2011, a three-page diagnostic survey was administered to all managers with titles of assistant director or above, a population of about 150 people. In addition, 16 senior leaders were interviewed from the headquarters and regional organizations. The leaders represented a good mix of functions and tenure with the organization.

The survey consisted of about 50 items to be rated on a scale of 1 to 5, where 1 = "Not at all" and 5 = "To a great extent." These pretested items fell into 14 dimensions, including the extent to which the organization formulated "robust strategies," engaged in future-focused environmental scanning, had flat and responsive structures, rewarded performance and change, leveraged information systems, developed its talent well, and managed resources flexibly. In addition, the survey asked several questions about the organization's cultural values and how members perceived leaders spending their time. The hour-long interviews asked questions addressing similar issues in terms of strategies, processes, and culture but were focused more on gathering rich stories and examples that might help the survey data "come alive."

The results of the survey were placed into a spreadsheet and analyzed with statistical programs that generated summary tables and charts of the data. The interview data were summarized using content analysis procedures, and preliminary themes were discussed with design team members to ensure that the interview responses and categories had meaning for the organization.

The summary results were then placed into three categories: "Positive issues," "Areas of Concern," and "Problems." Compared to the overall scores from other firms, Cambia's scores were generally below the overall average of other firms but were similar to other financial services firms. The economic recession and financial crises of the time had affected the culture of many of these firms and it was not surprising that the financial services sector scores were lower.

The key "positive issue" was that people reported a strong sense of shared purpose in the organization. Captured by "The Cause," a statement announced in early 2004 stating that the Cambia organization wanted to be a "catalyst for change" in health care, there was broad support for this clear direction. The Cause and the organization's history also supported a clear "member-focused" culture. People liked working for a not-for-profit insurer and believed that such a corporate form was an important differentiator in the way the organization did business. This belief was reflected in the survey data as a very healthy balance between driving for results and taking care of people.

In the areas of concern category, and despite the strong shared sense of purpose scores, people struggled with what The Cause meant to their day-to-day behaviors. It was one thing to be clear about "being a catalyst for change" in health care, but how did that translate into how organization members were supposed to treat customers? In this sense, people were concerned about "who we are" as an organization and did not see how the Cause helped them have a real "voice" in making day-to-day decisions.

The recent reorganization into a health care business and a set of entrepreneurial "start-up" businesses that were intended to explore the future of the health care industry clearly reflected The Cause. However, people were concerned about what it meant for the culture. A lot of senior management's attention was focused on the innovative nature of these new businesses, and some people in the insurance division felt left out. The culture of Cambia was clearly changing, but was the Regence culture expected to change as well and if so in what direction? The Cause helped people understand where the organization was headed, but it didn't really help people answer the question "who are we?" and how to make decisions. People were frustrated by this.

In general, people were also concerned about how well the new direction was being supported by different organizational systems. They believed that recent structural and reward systems changes were heading in the right direction, but other comments raised questions over other features, such as the way organizational and individual goals were set, how the organization responded to opportunities, and the way information and communication moved throughout the organization. These systems were not changing and did not necessarily align with the new direction. The IT systems, in particular, had a very bad reputation. A complex systems changeover was generally regarded as an example of poor execution, and was producing a number of headaches around the organization.

Finally, two big problems loomed. First, there was widespread agreement that the organization did not have the change and learning capabilities to execute a change of this magnitude. As a 90-year-old organization in a slow-moving and regulated industry, there was little expertise in the organization regarding how to manage change. Second, in a related way, the organization was relying on innovation in both the new start-up businesses and the traditional health care business as part of The Cause. However, the organization lacked the resources, processes, and experience to generate new product/service ideas or identify and implement process improvements. The processes that had helped them to adapt in the past were unlikely to be effective in the future.

The summary data were fed back in multiple forums. The first forum was an all-day meeting of the design team. A PowerPoint deck provided both detailed summaries and analyses of the data as well as charts that made interpretation more intuitive. For example, the 14 scales regarding strategy, structure, and processes were presented as bar charts that allowed the design team to “see” how their data compared to other organizations and overall averages.
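The kind of scale-versus-benchmark comparison the design team reviewed can be sketched in a few lines of Python. Everything below is invented for illustration: the scale names, the response values, and the benchmark averages are hypothetical, not the actual Cambia survey data. The point is only the mechanics of comparing scale means against outside averages.

```python
# Hypothetical sketch of a survey scale comparison; all numbers are invented.
from statistics import mean

# Invented responses (1-5 Likert scale) for three illustrative survey scales.
responses = {
    "strategic direction": [4, 5, 4, 3, 5, 4],
    "decision making":     [2, 3, 2, 2, 3, 2],
    "communication":       [3, 3, 4, 2, 3, 3],
}

# Invented benchmark averages standing in for "other organizations."
benchmarks = {
    "strategic direction": 3.6,
    "decision making":     3.4,
    "communication":       3.5,
}

def compare_to_benchmark(responses, benchmarks):
    """Return each scale's mean and its gap against the benchmark average."""
    report = {}
    for scale, scores in responses.items():
        scale_mean = mean(scores)
        report[scale] = (round(scale_mean, 2),
                         round(scale_mean - benchmarks[scale], 2))
    return report

for scale, (avg, gap) in compare_to_benchmark(responses, benchmarks).items():
    print(f"{scale:20s} mean={avg:4.2f}  vs. benchmark {gap:+.2f}")
```

A table (or bar chart) built from output like this lets a design team see at a glance which scales sit above or below comparison organizations.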

The data presentation was broken up by categories. First, the “good news” was presented and discussed. The organization had important strengths that any change process would naturally want to leverage or reinforce. The strong sense of shared purpose in the organization would provide an important base. The discussion among design team members centered on the acknowledgement that a strong history in the different regions had created a “members first” culture. While it was acknowledged as a strength, the design team also wondered whether such a legacy orientation would be a strength or a weakness if change was necessary.

150 PART 2 THE PROCESS OF ORGANIZATION DEVELOPMENT

The areas of concern and problems were presented next. The group spent quite a bit of time discussing their implications. There was ready agreement on the problems. Design team members believed that the organization needed (and lacked) change, learning, and innovation capabilities. But they also believed that just building these capabilities was not enough and might be a waste of time. They needed to be focused on changing and innovating the right things.

Much of the conversation then centered around the implication that there was a distinction to be made between the clear direction provided by The Cause and the concern that there was no guidance for decision making. The interviews clearly pointed to a frustration about how hard it was to “get things done” in the organization. There was a perception that too many decisions were pushed up to the top for resolution and that silos in the organization prevented the required cross-group collaboration.

From here, the diagnostic conversation turned to a broader subject. The design team members were concerned that not being able to “get stuff done” and pushing decisions up the hierarchy was indicative of a more basic problem. People generally did not have clear goals (“it’s hard to get stuff done when you don’t know where you are going”) and were not held accountable (“it’s not my decision”). The culture of “Northwest Nice” was working against such culture change objectives. The design team believed that if change and innovation capability building could be focused on helping the organization more effectively execute specific strategies and goals, then that would represent an important impetus for culture change.

Before the meeting ended, the design team believed it was important to share the data and their conclusions with the CEO to gauge his level of interest in moving a change process forward. The team spent a considerable amount of time sorting through the data to find the most central and most influential data points to tell a story. The CEO’s summary was only two pages long and consisted of the high-level summary of positives, concerns, and negatives as well as a summary of the survey scale scores compared to other firms.

The CEO and the VP of HR met with the researcher. After a few brief comments about the data, the CEO began by inquiring about the diagnostic process. He wanted to know if the data he was looking at was “good” data or not. Once satisfied that a sound process had been followed in terms of sampling and analysis, he turned his attention to the actual data. Like the design team, he asked some clarifying questions about the distinction between strategic direction and cultural influences. He also asked some insightful questions about specific words that had been chosen to capture the design team’s “sense” of the data.

His attention was mostly on the areas of concern and the negative themes. Many of the issues (both positive and negative) raised were familiar to him, and he doubted that the organization could fulfill the promise of The Cause with this set of weaknesses. On the spot, he commissioned the HR vice president to lead the design team in formulating a change strategy to address the issues raised in the assessment.

The HR vice president and the researcher reconvened the design team and added members from other departments, such as IT and the regional organizations, to better represent the overall enterprise. They began to develop an action plan for the change. It began with feedback of the assessment data to other parts of the organization. This happened in two primary ways. First, the results were fed back to the existing senior leadership team. They were tasked with committing to the change and formulating statements that would represent an organizational future state. Second, the data were fed back to the top 150 leaders at the organization’s annual leadership summit. This group had been the primary group sampled in the survey and they were given a chance to review the data, ask questions, and provide guidance on a proposed action plan.

The design team also formally commissioned four initiative task forces to address specific issues in the assessment. One team took on the challenge of revising the human capital management process (see Application 15.1 for a summary of this effort). A second task force was chartered to diagnose and explore in more detail the issues surrounding people’s beliefs that it was hard to “get stuff done” at Cambia. A third team addressed the related issue of strategic planning and corporate communication. Was there a clear, well-understood, and shared process for setting organization objectives that were

CHAPTER 6 COLLECTING, ANALYZING, AND FEEDING BACK DIAGNOSTIC INFORMATION 151

6-6c Limitations of Survey Feedback

Although the use of survey feedback is widespread in contemporary organizations, the following limits and risks have been identified:25

1. Ambiguity of purpose. Managers and staff groups responsible for the survey-feedback process may have difficulty reaching sufficient consensus about the purposes of the survey, its content, and how it will be fed back to participants. Such confusion can lead to considerable disagreement over the data collected and paralysis about doing anything with them.

2. Distrust. High levels of distrust in the organization can render the survey feedback ineffective. Employees need to trust that their responses will remain anonymous and that management is serious about sharing the data and solving problems jointly.

3. Unacceptable topics. Most organizations have certain topics that they do not want examined. This can severely constrain the scope of the survey process, particularly if the neglected topics are important to employees.

4. Organizational disturbance. The survey-feedback process can unduly disturb organizational functioning. Data collection and feedback typically infringe on employee work time. Moreover, administration of a survey can call attention to issues with which management is unwilling to deal, and can create unrealistic expectations about organizational improvement.

6-6d Results of Survey Feedback

Survey feedback has been used widely in business organizations, schools, hospitals, federal and state governments, and the military. The navy has used survey feedback in more than 500 navy commands. More than 150,000 individual surveys were completed, and a large bank of computerized research data was generated. Promising results were noted among survey indices on nonjudicial punishment rates, incidence of drug abuse reports, and performance of ships undergoing refresher training (a post-overhaul training and evaluation period).26 Positive results have been reported in such diverse areas as an industrial organization in Sweden and the Israeli Army.27

relevant to the managers and departments in the organization and how were those objectives communicated? Finally, a fourth team was given the task of creating and implementing an organization-wide change-management process.

The design team and VP of HR worked on a variety of organization changes. These included a realignment of the senior leadership group, support for key leadership changes, changes in the leadership development programs, the reorganization of several functional groups (including HR), and a complete redesign of the performance management process.

After one year of implementation, the design team commissioned a midpoint review to gauge progress on the action plan. Interviews were conducted with the design team members, a sample of managers who had participated on the task forces, and a sample of managers and executives who had not been directly involved in the change effort. In general, the interview data supported that the change was heading in the right direction. Many people believed that, in fact, the culture was changing and that the work of the design team was an important contributor to that change. The interviewees also made a variety of suggestions for continuing different initiatives as well as suggestions for “next steps.”


One of the most important studies of survey feedback was done by Bowers, who conducted a five-year longitudinal study (the Intercompany Longitudinal Study) of 23 organizations in 15 companies involving more than 14,000 people in both white-collar and blue-collar positions.28 In each of the 23 organizations studied, repeat measurements were taken. The study compared survey feedback with three other OD interventions: interpersonal process consultation, task process consultation, and laboratory training. The study reported that survey feedback was the most effective of the four interventions and the only one “associated with large across-the-board positive changes in organization climate.”29 Although these findings have been questioned on a number of methodological grounds,30 the original conclusion that survey feedback is effective in achieving organizational change was supported. The study suggested that any conclusions to be drawn from action research and survey-feedback studies should be based, at least in part, on objective operating data.

Comprehensive reviews of the literature reveal differing perspectives on the effects of survey feedback. In one review, survey feedback’s biggest impact was on attitudes and perceptions of the work situation. The study suggested that survey feedback might best be viewed as a bridge between the diagnosis of organizational problems and the implementation of problem-solving methods because little evidence suggests that survey feedback alone will result in changes in individual behavior or organizational output.31 This view is supported by research suggesting that the more the data were used to solve problems between initial surveys and later surveys, the more the data improved.32 Similarly, Church and his colleagues, based on a longitudinal evaluation of a survey feedback process in a large multinational corporation, found that groups that shared feedback data and acted on that data were more likely to report positive attitudes about the company, their manager, job training, and support for work-life balance.33 The authors stated, “Put another way, the impact of sharing and acting on survey data on overall employee attitudes is (a) significant and pronounced, (b) replicable over time, (c) applies across different employee groups/levels, and (d) applies across content areas and overall rating tendencies. If there was ever a reason to decide to take action from an organizational survey effort, this is a clear mandate.”

Another study suggested that survey feedback has positive effects on both outcome variables (for example, productivity, costs, and absenteeism) and process variables (for example, employee openness, decision making, and motivation) in 53% and 48%, respectively, of the studies measuring those variables. When compared with other OD approaches, survey feedback was only bettered by interventions using several approaches together—for example, change programs involving a combination of survey feedback, process consultation, and team building.34 On the other hand, another review found that, in contrast to laboratory training and team building, survey feedback was least effective, with only 33% of the studies that measured hard outcomes reporting success. The success rate increased to 45%, however, when survey feedback was combined with team building.35 Finally, a meta-analysis of OD process interventions and individual attitudes suggested that survey feedback was not significantly associated with overall satisfaction or attitudes about coworkers, the job, or the organization. Survey feedback was able to account for only about 11% of the variance in satisfaction and other attitudes.36

Studies of specific survey-feedback interventions identify conditions that improve the success of this technique. One study in an urban school district reported difficulties with survey feedback and suggested that its effectiveness depends partly on the quality of those leading the change effort, members’ understanding of the process, the extent to which the survey focuses on issues important to participants, and the degree to which the values expressed by the survey are congruent with those of the respondents.37 Another study in the military concluded that survey feedback works best when supervisors play an active role in feeding back data to employees and helping them to work with the data.38 Similarly, a field study of funeral cooperative societies concluded that the use and dissemination of survey results increased when organization members were closely involved in developing and carrying out the project and when the consultant provided technical assistance in the form of data analysis and interpretation.39 Finally, a long-term study of survey feedback in an underground mining operation suggested that continued, periodic use of survey feedback can produce significant changes in organizations.40

SUMMARY

This chapter described methods for collecting, analyzing, and feeding back diagnostic data. Because diagnosing is an important step that occurs frequently in the planned change process, a working familiarity with these techniques is essential. Methods of data collection include questionnaires, interviews, observation, and unobtrusive measures. Methods of analysis include qualitative techniques, such as content analysis and force-field analysis, and quantitative techniques, such as the determination of mean, standard deviation, and frequency distributions; scattergrams and correlation coefficients; as well as difference tests. Feeding back data to a client system is concerned with identifying the content of the data to be fed back and designing a feedback process that ensures ownership of the data. If members own the data, they will be motivated to solve organizational problems. A special application of the data collection and feedback process is called survey feedback, which enables OD practitioners to collect diagnostic data from a large number of organization members and to feed back that information for purposes of problem solving.
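The quantitative techniques named in the summary (means, standard deviations, frequency distributions, and correlation coefficients) can be illustrated with a short, self-contained Python sketch. The ratings below are invented purely for illustration; none of the numbers come from the chapter or the studies it cites.

```python
# Minimal sketch of common quantitative diagnostic analyses on invented data.
from statistics import mean, stdev
from collections import Counter

job_satisfaction = [4, 5, 3, 4, 2, 5, 4, 3]   # hypothetical 1-5 ratings
absenteeism_days = [2, 1, 4, 2, 6, 1, 2, 5]   # hypothetical days absent

# Mean and standard deviation summarize a single survey item.
print(round(mean(job_satisfaction), 2), round(stdev(job_satisfaction), 2))

# A frequency distribution shows how responses spread across the scale.
print(sorted(Counter(job_satisfaction).items()))

# A correlation coefficient (Pearson's r) relates two measures, here
# satisfaction and absenteeism; a scattergram would plot the same pairs.
def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(round(pearson_r(job_satisfaction, absenteeism_days), 2))
```

In this invented data set the correlation comes out strongly negative, which is how an OD practitioner might quantify the intuition that less satisfied employees are absent more often.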

NOTES

1. S. Mohrman, T. Cummings, and E. Lawler III, “Creating Useful Knowledge with Organizations: Relationship and Process Issues,” in Producing Useful Knowledge for Organizations, ed. R. Kilmann and K. Thomas (New York: Praeger, 1983): 613–24; C. Argyris, R. Putnam, and D. Smith, eds., Action Science (San Francisco: Jossey-Bass, 1985); E. Lawler III, A. Mohrman, S. Mohrman, G. Ledford Jr., and T. Cummings, Doing Research That Is Useful for Theory and Practice (San Francisco: Jossey-Bass, 1985).

2. D. Nadler, Feedback and Organization Development: Using Data-Based Methods (Reading, MA: Addison-Wesley, 1977): 110–14.

3. W. Nielsen, N. Nykodym, and D. Brown, “Ethics and Organizational Change,” Asia Pacific Journal of Human Resources 29 (1991).

4. Nadler, Feedback, 105–7.

5. W. Wymer and J. Carsten, “Alternative Ways to Gather Opinion,” HR Magazine, April 1992, 71–78.

6. Examples of basic resource books on survey methodology include L. Rea and R. Parker, Designing and Conducting Survey Research: A Comprehensive Guide (San Francisco: Jossey-Bass, 2012); W. Saris and I. Gallhofer, Design, Evaluation, and Analysis for Survey Research (New York: Wiley-Interscience, 2007); S. Seashore, E. Lawler III, P. Mirvis, and C. Cammann, Assessing Organizational Change (New York: Wiley-Interscience, 1983); E. Lawler III, D. Nadler, and C. Cammann, Organizational Assessment: Perspectives on the Measurement of Organizational Behavior and the Quality of Work Life (New York: Wiley-Interscience, 1980).

7. J. Taylor and D. Bowers, Survey of Organizations: A Machine-Scored Standardized Questionnaire Instrument (Ann Arbor: Institute for Social Research, University of Michigan, 1972); C. Cammann, M. Fichman, G. Jenkins, and J. Klesh, “Assessing the Attitudes and Perceptions of Organizational Members,” in Assessing Organizational Change: A Guide to Methods, Measures, and Practices, ed. S. Seashore, E. Lawler III, P. Mirvis, and C. Cammann (New York: Wiley-Interscience, 1983), 71–138.

8. M. Weisbord, “Organizational Diagnosis: Six Places to Look for Trouble with or without a Theory,” Group and Organization Studies 1 (1976): 430–37; R. Preziosi, “Organizational Diagnosis Questionnaire,” in The 1980 Handbook for Group Facilitators, ed. J. Pfeiffer (San Diego: University Associates, 1980); W. Dyer, Team Building: Issues and Alternatives (Reading, MA: Addison-Wesley, 1977); J. Hackman and G. Oldham, Work Redesign (Reading, MA: Addison-Wesley, 1980); K. Cameron and R. Quinn, Diagnosing and Changing Organizational Culture (Reading, MA: Addison-Wesley, 1999).

9. J. Fordyce and R. Weil, Managing WITH People, 2nd ed. (Reading, MA: Addison-Wesley, 1979); R. Krueger and M. Casey, Focus Groups: A Practical Guide for Applied Research, 4th ed. (Thousand Oaks, CA: Sage Publications, 2009).

10. J. Daniel, Sampling Essentials: Practical Guidelines for Making Sampling Choices (Thousand Oaks, CA: Sage Publications, 2012).

11. Daniel, Sampling Essentials; W. Deming, Sampling Design (New York: John Wiley & Sons, 1960).

12. K. Krippendorf, Content Analysis: An Introduction to Its Methodology, 3rd ed. (Thousand Oaks, CA: Sage Publications, 2013).

13. K. Lewin, Field Theory in Social Science (New York: Harper & Row, 1951).

14. A simple explanation on quantitative issues in OD can be found in: S. Wagner, N. Martin, and C. Hammond, “A Brief Primer on Quantitative Measurement for the OD Professional,” OD Practitioner 34 (2002): 53–57. More sophisticated methods of quantitative analysis are found in the following sources: W. Hays, Statistics (New York: Holt, Rinehart, & Winston, 1963); J. Nunnally and I. Bernstein, Psychometric Theory, 3rd ed. (New York: McGraw-Hill, 1994); F. Kerlinger, Foundations of Behavioral Research, 2nd ed. (New York: Holt, Rinehart, & Winston, 1973); J. Cohen, P. Cohen, S. West, and L. Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed. (Hillsdale, NJ: Routledge Academic, 2002); E. Pedhazur, Multiple Regression in Behavioral Research (New York: Harcourt Brace, 1997).

15. A. Armenakis and H. Field, “The Development of Organizational Diagnostic Norms: An Application of Client Involvement,” Consultation 6 (Spring 1987): 20–31.

16. Cohen, Cohen, West, and Aiken, Applied Multiple Regression.

17. J. Folkman, The Power of Feedback: 35 Principles for Turning Feedback from Others into Personal and Professional Change (New York: John Wiley & Sons, 2006); S. Mohrman, T. Cummings, and E. Lawler III, “Creating Useful Knowledge with Organizations: Relationship and Process Issues,” in Producing Useful Knowledge for Organizations, ed. R. Kilmann and K. Thomas (New York: Praeger, 1983), 613–24.

18. C. Argyris, Intervention Theory and Method: A Behavioral Science View (Reading, MA: Addison-Wesley, 1970); P. Block, Flawless Consulting: A Guide to Getting Your Expertise Used, 3rd ed. (San Francisco: Jossey-Bass, 2011).

19. D. Nadler, Feedback and Organization Development: Using Data-Based Methods (Reading, MA: Addison-Wesley, 1977), 156–58.

20. G. Ledford and C. Worley, “Some Guidelines for Effective Survey Feedback” (working paper, Center for Effective Organizations, University of Southern California, Los Angeles, 1987).

21. D. Nadler, P. Mirvis, and C. Cammann, “The Ongoing Feedback System: Experimenting with a New Managerial Tool,” Organizational Dynamics 4 (Spring 1976): 63–80.

22. F. Mann, “Studying and Creating Change,” in The Planning of Change, ed. W. Bennis, K. Benne, and R. Chin (New York: Holt, Rinehart, & Winston, 1964), 605–15; Nadler, Feedback; A. Church, A. Margiloff, and C. Coruzzi, “Using Surveys for Change: An Applied Example in a Pharmaceuticals Organization,” Leadership and Organization Development Journal 16 (1995): 3–12; J. Folkman and J. Zenger, Employee Surveys That Make a Difference: Using Customized Feedback Tools to Transform Your Organization (New York: Executive Excellence, 1999).

23. Ledford and Worley, “Effective Survey Feedback.”

24. M. Sashkin and R. Cooke, “Organizational Structure as a Moderator of the Effects of Data-Based Change Programs” (paper delivered at the thirty-sixth annual meeting of the Academy of Management, Kansas City, 1976); D. Nadler, “Alternative Data-Feedback Designs for Organizational Intervention,” The 1979 Annual Handbook for Group Facilitators, ed. J. Jones and J. Pfeiffer (La Jolla, CA: University Associates, 1979), 78–92.

25. S. Seashore, “Surveys in Organizations,” in Handbook of Organizational Behavior, ed. J. Lorsch (Englewood Cliffs, NJ: Prentice Hall, 1987), 142.

26. R. Forbes, “Quo Vadis: The Navy and Organization Development” (paper delivered at the Fifth Psychology in the Air Force Symposium, United States Air Force Academy, Colorado Springs, CO, April 8, 1976).

27. S. Rubenowitz, Gottenburg, Sweden: Göteborg Universitet, personal communication, 1988; D. Eden and S. Shlomo, “Survey-Based OD in the Israel Defense Forces: A Field Experiment” (undated manuscript, Tel Aviv University).

28. D. Bowers, “OD Techniques and Their Result in 23 Organizations: The Michigan ICL Study,” Journal of Applied Behavioral Science 9 (January–March 1973): 21–43.

29. Ibid., 42.

30. W. Pasmore, “Backfeed, The Michigan ICL Study Revisited: An Alternative Explanation of the Results,” Journal of Applied Behavioral Science 12 (April–June 1976): 245–51; W. Pasmore and D. King, “The Michigan ICL Study Revisited: A Critical Review” (working paper no. 548, Krannert Graduate School of Industrial Administration, West Lafayette, IN, 1976).


31. F. Friedlander and L. Brown, “Organization Development,” in Annual Review of Psychology, ed. M. Rosenzweig and L. Porter (Palo Alto, CA: Annual Reviews, 1974).

32. D. Born and J. Mathieu, “Differential Effects of Survey- Guided Feedback: The Rich Get Richer and the Poor Get Poorer,” Group and Organization Management 21 (1996): 388–404.

33. A. Church, L. Golay, C. Rotolo, M. Tuller, A. Shull, and E. Desrosiers, “Without Effort there can be no Change: Reexamining the Impact of Survey Feedback and Action Planning on Employee Attitudes,” in Research in Organizational Change and Development, vol. 20 (Emerald Group Publishing Limited, 2012), 223–64.

34. J. Porras and P. O. Berg, “The Impact of Organization Development,” Academy of Management Review 3 (April 1978): 249–66.

35. J. Nicholas, “The Comparative Impact of Organization Development Interventions on Hard Criteria Measures,” Academy of Management Review 7 (October 1982): 531–42.

36. G. Neuman, J. Edwards, and N. Raju, “Organizational Development Interventions: A Meta-Analysis of Their Effects on Satisfaction and Other Attitudes,” Personnel Psychology 42 (1989): 461–83.

37. S. Mohrman, A. Mohrman, R. Cooke, and R. Duncan, “Survey Feedback and Problem-Solving Intervention in a School District: ‘We’ll Take the Survey But You Can Keep the Feedback,’” in Failures in Organization Development and Change, ed. P. Mirvis and D. Berg (New York: John Wiley & Sons, 1977), 149–90.

38. F. Conlon and L. Short, “An Empirical Examination of Survey Feedback as an Organizational Change Device,” Academy of Management Proceedings (1983): 225–29.

39. R. Sommer, “An Experimental Investigation of the Action Research Approach,” Journal of Applied Behavioral Science 23 (1987): 185–99.

40. J. Gavin, “Observation from a Long-Term Survey-Guided Consultation with a Mining Company,” Journal of Applied Behavioral Science 21 (1985): 201–20.



7

Designing Interventions

learning objectives

Describe the interventions presented in the text.

Discuss how contingencies related to the change situation affect the design of effective organization development (OD) interventions.

Discuss how contingencies related to the target of change affect the design of effective OD interventions.

An organization development intervention is a sequence of activities, actions, and events intended to help an organization improve its performance and effectiveness. Designing interventions, or action planning, derives from careful diagnosis and is meant to resolve specific problems and to improve particular areas of organizational functioning identified in the diagnosis. Organization development (OD) interventions vary from standardized programs that have been developed and used in many organizations to relatively unique programs tailored to a specific organization or department.

This chapter serves as an overview of the intervention design process. It briefly describes the various types of OD interventions presented in this book. Parts 3–6 of this text describe fully the major interventions used in OD today. Criteria that define effective OD interventions are discussed and contingencies that guide successful intervention design are identified.

7-1 Overview of Interventions

The OD interventions described here represent the major organization change methods used in OD today. They include four major types of planned change: human process interventions, technostructural interventions, human resource management interventions, and strategic change interventions.

7-1a Human Process Interventions

Part 3 of the book presents interventions focusing on people within organizations and the processes through which they accomplish organizational goals. These processes include communication, problem solving, group decision making, and leadership. This type of intervention is deeply rooted in OD’s history and represents the earliest change


One and a half page answer for HW part 1

Hi, when you write this (the learning objectives), please follow the questions and answer each one separately, in numbered order (do not combine the answers for each chapter). You need to answer each question one by one, under each question. Again, please don't put the answers together. (Also, include the page number in the answer if you found it in the text.)

Below each Learning Objective you are to write your explanation, in your words. Your answers must be written in full college level sentences using proper structure, grammar, and no abbreviations. Your answers may not be quotes from the text or any other source. If so, you must cite them.

CH.4 Entering and Contracting

1. Describe the issues associated with entering into an OD process.

2. Describe the issues associated with contracting for an OD process

Entering and contracting vary in complexity and formality. Organizational development practitioners typically organize meetings with members of workgroups or departments to decide on the challenges they will concentrate on and how they will cooperate to achieve the goals they have set. Entry and contracting are simple and informal here. It is feasible to involve all essential personnel quickly while minimizing formality.

Managers and administrators often look for professional organizational development (OD) practitioners both within and outside the firm. Organizational development professionals may need to gather preliminary data to better describe the difficulties (Cummings, 2015). In other cases, they may need to meet with just a few clients rather than the full membership. They may need to codify their duties and the transformation process. Finally, they may need to grasp the client organization's power dynamics and how those dynamics affect the organizational development process.

CH.5 Diagnosing

1. Discuss the philosophy and purpose of diagnosis in organization development (OD).

2. Explain the role of diagnostic models in OD, especially the open-systems model.

3. Describe and apply organization-level diagnostic processes.

4. Describe and apply group-level diagnostic processes.

5. Describe and apply individual-level diagnostic processes.

An organization or unit member contacts an organizational development practitioner, who might be either internal or external. Before beginning an OD project, there are several factors to consider. It may be necessary to gather information on the organization, and it may also be necessary to evaluate the practitioner's abilities. This understanding will be useful to both parties when drafting a contract, identifying the client, and selecting an OD practitioner.

Both parties recognize the significance of the various parts of the equation. Contracting is used to characterize the OD process: expectations, timeframes, and the resources needed are all specified.

Contracting expands the parties' operating options. It might be a simple verbal agreement between the two parties. For example, an organizational development specialist may agree to help unhappy team members; the next time they meet, the specialist can spend an hour with the leader diagnosing the team. This act is the informal point of entry. In other circumstances, a formal contract is required. This is common when businesses use outsourced human resources; government entities seldom use outside expertise without one. Whether verbal or written, contracts are required for OD procedures (Cummings, 2015). They outline the expectations of both the client and the practitioner. There is a significant risk of failure unless everyone understands and agrees on the strategy; reduced commitment, misdirected effort, or premature termination of the process may occur as a consequence. Contracts also set time and budget constraints and establish ground rules for cooperation.

CH.6 Collecting, Analyzing, and Feeding Back Diagnostic Information

1. Understand the importance of the diagnostic relationship in the organization development (OD) process.

2. Describe the methods for collecting diagnostic data.

3. Understand the primary techniques used to analyze diagnostic data.

4. Outline the process issues associated with data feedback.

5. Describe and evaluate the survey feedback intervention.

The complete diagnostic paradigm is data-driven. Building healthy connections between the practitioner and the people whose data is gathered is the first step. Data is collected using surveys, interviews, observations, and unobtrusive measures. Data analysis is used to identify the fundamental causes of problems or future development possibilities. Participants may analyze and act on diagnostic findings via data feedback, which concerns both content and manner. Notifying respondents through survey feedback is popular. Diagnostic data are collected in many ways, and no one approach can accurately address all OD diagnostic criteria. Self-reporting biases might produce untruthful replies, and perceptions might be skewed, such as seeing what one desires; every data-gathering approach has inherent biases (Cummings, 2015). Using multiple methods helps ensure the variables are being measured accurately. Making job discretion surveys more countable and categorical might help. If the data from two sources agree, confidence in the findings increases; interviews should be employed if the two data sources conflict.

Data analysis may be qualitative or quantitative. Qualitative procedures may be used in the absence of numerical data; as a consequence, they are clear and easy to interpret. Quantitative methods may yield more exact findings. Providing diagnostic data to the client firm is crucial.

In most cases, the OD practitioner organizes and shares the data collected from the client. Change requires data-driven action plans. The feedback mechanism ensures that the client manages and owns its data. Practitioners must not only provide but also collect correct diagnostic information. A single session can contain an overwhelming amount of data, so feedback meetings review the lessons learned and the activities planned. Attendees may worry about hearing insensitive facts and judgments regarding members' actions (Cummings, 2015). Anxious people may ignore or rationalize the data, while encouragement and hope for progress may motivate them. Achieving meaningful dialogue and solving issues requires careful feedback management. The feedback mechanism is intended to empower members. Ownership is the opposite of resistance: it means taking responsibility for the data, its worth, and the consequences of utilizing it to construct a change strategy. It is hard to make adjustments when the data is wrong or useless.
