Throughout this series the introduction of technology into the health care system has been framed as something positive, with benefits for patients and health care providers, and perhaps even the potential to save money. But technology can also contribute to adverse medical events and has even caused accidental deaths.
The issue of patient safety exploded into public consciousness in the United States with the 1999 publication of the Institute of Medicine report To Err Is Human. Not only were thousands of people dying unnecessarily each year because of mistakes within hospitals and clinics, but no one really knew what was going on or what the actual numbers were. Dr. Ellen Balka, the principal investigator in the Action for Health Project, has had a long-standing interest in the new healthcare discipline of patient safety, which looks at the reporting and prevention of medical errors. Balka and Doyle-Waters point out that Canada, like other countries, does not have good processes for gathering information about these problems. We don't even really know what's going on.
Balka, along with information specialist Madeleine Doyle-Waters, examined issues in technology and patient safety in an article published in the International Journal of Medical Informatics in 2007. Their central argument is that technology developed to reduce error can actually have the opposite effect. There are many reasons for this, some obvious, some not.
So what causes health care errors? Human error, obviously. People are inadequately trained, inexperienced, tired, or in an unfamiliar setting, or perhaps they just don't admit they did something wrong. Technological problems, however, do not occur in isolation. They are part of the larger landscape and culture of systems, which can go awry for many and varied reasons. Balka and her colleagues argue that it's unfair to automatically blame users: if the equipment itself is faulty or outdated, the real problem may lie elsewhere.
Technology is a wide-reaching area of concern. It does not just refer to medical devices, although these are obviously of concern to patients. High-risk devices such as anesthesia units and infusion pumps are carefully controlled by regulatory agencies because failures are so frightening. But technology also includes electronic medical records, drug dispensing systems, and even Internet usage: seemingly benign systems that can nonetheless introduce errors and set something terrible in motion.
A number of potential sources of error have been identified, along with corresponding responsibilities. The manufacturer may be at fault through unclear labeling or packaging, inadequate instructions, or a poor manual accompanying the device, like those cryptic Ikea instructions. The person selling the product may not offer proper support or an evaluation of how the technology will be implemented. The user may operate the product improperly, or may re-use it incorrectly: some medical devices can be used over and over again, but must be properly sterilized between patients.
Another way of looking at the problem of medical device failure is to divide it into five categories: device errors, user errors, external factors, support system failures, and errors related to tampering and sabotage—a truly disturbing thought. There are many ways in which things can go wrong, and it’s really a systems issue, argue Balka and her colleagues. Mistakes do not occur in isolation and there are usually multiple steps involved in a scenario where a patient is harmed or killed by errors. So for example, what appeared to be a device failure may really relate to poor communication among staff, unclear channels of authority, or the impression that someone else has taken responsibility for something.
One of the first things that could be done is to standardize reporting and make sure it's done properly and by everyone. We also need to take a systems approach and look at all aspects of device errors to see what can be improved. Who makes the decisions, who has the power to make them, and what processes are part of management and leadership are all areas of concern. These issues all relate to what Balka and Doyle-Waters refer to as governance, and it is these processes that need to be examined. It's not just one technology and one user making a mistake.
The bottom line is that it's not the people who are at fault; everyone makes mistakes. It's really a case of bad systems. Once again, researchers will be needed to figure out how those systems can be improved.
Questions or comments? Contact Guenther directly at firstname.lastname@example.org