Why It Matters
Standardization is a fundamental starting point for any improvement effort. Without standardization you do not have a single process but many. What improves one variant of a multi-layered process may not improve another variant, and may even make things worse.
When we discuss the concept of standardization during IHI programs, however, we frequently get some participants rolling their eyes and making comments such as, “It’s very hard to standardize in health care settings because every patient is different” or “We need to have freedom to make the clinical decisions we think are best.” My favorite reaction goes like this: “You’re proposing cookbook medicine which is totally contrary to what I was taught.”
While such comments do, in certain situations, have merit, they miss an essential point about standardization. Standardization does not mean the restriction of individual decision making or the end of autonomy. The definition of standardization from Webster's Dictionary reads, "to standardize means to cause to be in agreement with a commonly used authority or acceptable quality; or conforming to an established norm."
Adherence to this definition is common in medical practice. For example, we have standard practices and protocols we follow for hand hygiene, infection control, blood transfusions, and surgical sterilization. We have standardized tools to assess patients for falls or pressure ulcers. We would consider deviation from these standards unacceptable.
So, why do some health care professionals get concerned when we start talking about standardizing the work we do?
The concern over the word "standardization" in medical practice can be traced back to the origins of the medical profession itself. Several key references help us understand the history and challenges related to the application of standardization in the medical profession. Howard Hiatt addressed the historical roots of this issue in 1975 when he wrote a thoughtful NEJM article entitled "Protecting the Medical Commons: Who Is Responsible?" Paul Starr's classic book The Social Transformation of American Medicine outlines the causes and social forces that have led to what he refers to as the "professional authority and sovereignty" of the medical profession. The observation that the independence of the physician has been a central assumption of medical practice has also been discussed quite thoroughly by Michael Millenson in Demanding Medical Excellence: Doctors and Accountability in the Information Age.
IHI's own Don Berwick has provided a historical view of these sovereignty and authority challenges in his 2016 JAMA article entitled "Era 3 for Medicine and Health Care." In this reflective piece, Dr. Berwick provides a useful timeline and orientation of different periods in the progression of attitudes and behaviors in medical practice. He points out that Era 1 was characterized by protectionism and professional dominance, while Era 2 has been more focused on accountability, scrutiny, measurement, incentives, and markets. Era 3, he argues, is only beginning to emerge and take shape. It will require a series of changes focused on systems thinking, quality and the science of improvement, transparency, civility, patient involvement, and rejecting greed.
A central theme that flows throughout these viewpoints is that many health care professionals bristle at the notion of standardization because they believe it will limit their sovereignty and authority to make decisions. The key to understanding how standardization will affect medical practice is not to overreact to the belief that it will limit decision making and choice. Instead, it is to appreciate how standardization can be used to cope with excessive and unintended variation in customizing care for different groups of patients. Controlling variation in health care, therefore, is a central tenet and objective of standardization.
Bob Lloyd, PhD, is a Vice President at the Institute for Healthcare Improvement.