Wednesday, September 3, 2008
The computer that decides if you live or die...
Dr. Jack Kevorkian, a.k.a. Dr. Death, first drew national attention in 1990 when he hooked up a 54-year-old Alzheimer’s patient to his homemade suicide machine and watched as she pushed a button to release lethal drugs. According to the June 5, 2008 New York Times, by the time he was jailed nine years later, he claimed to have helped more than 130 terminally ill patients take their own lives. In 1997, however, the U.S. Supreme Court ruled that Americans who want to kill themselves, but are physically unable to do so, have no constitutional right to assisted suicide. At the heart of euthanasia, and of every life-or-death controversy, has always been the question of whether the person can, or would, choose to end their own life. However, BBC News reported on January 12, 2008 that before the end of this year, critical end-of-life decisions may be turned over to a sophisticated computer program, enabling a form of digital euthanasia. The article goes on to say that bioethicist Dr. David Wendler and his colleagues at the U.S. National Institutes of Health have written a complex computer algorithm they refer to as a “population-based treatment indicator,” which can guess, entirely on its own, what life-or-death decision comatose patients would make if they were able to make the choice themselves. So I think an important question arises when considering the implications of this technology: when it comes down to it…who has to pull the plug?
When serious injuries leave loved ones of all ages in a coma, reliant upon ventilators to keep them alive, relatives often face the most difficult of decisions: wait it out, or turn off the machines. According to the American Bar Association on March 11, 2008, the Patient Self-Determination Act of 1990 mandated that patients be informed that they can document their treatment choices in an advance directive, including “Do Not Resuscitate” orders, should they lose the ability to make those decisions themselves. The December 13, 2007 CNN program, cleverly entitled “The computer that decides if we live or die,” explains that although patients have gained this control, many still fail to sign a living will or a forward-looking directive, and therefore relatives, or “surrogates,” are often asked to step in on a loved one’s behalf. Dr. Wendler notes that society has always gone with the idea that the people who know the patient can best decide about their treatment. However, he was concerned this process put too much of the burden on families, and wanted to develop an alternative. His new approach of digital euthanasia essentially bases all of its decisions on a profiling system, much like those used by psychological or criminal profilers. According to a press release from the Public Library of Science on March 12, 2008, to use the population-based treatment indicator, the doctor first enters the incapacitated patient’s circumstances and personal characteristics into a computer. For example, the patient is a 60-year-old, well-educated Native American male who has pneumonia and severe Alzheimer’s disease. The computer analyzes the treatment preferences of similar individuals who are able to voice end-of-life concerns and estimates the likelihood that this patient would want antibiotics to treat his pneumonia. A finding that 90 percent of highly educated Native American men over the age of 50 do not want to receive treatment in the setting of advanced Alzheimer’s would provide strong evidence that this patient would not want antibiotics in these circumstances either.
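To make the profiling mechanics concrete, here is a rough sketch, in Python, of how a population-based treatment indicator could work in principle. The attribute set, the survey bank, and the predict_preference function are my own illustrative assumptions for this post, not Dr. Wendler's actual algorithm or data.

```python
# Illustrative sketch only: a toy "population-based treatment indicator."
# The attributes, survey data, and matching rule are hypothetical,
# not Dr. Wendler's actual algorithm.

from dataclasses import dataclass


@dataclass(frozen=True)
class Profile:
    age_group: str   # e.g. "50+"
    education: str   # e.g. "college"
    ethnicity: str   # e.g. "Native American"
    condition: str   # e.g. "advanced Alzheimer's"


# Hypothetical survey bank: (profile, wants_treatment) pairs gathered from
# people who are still able to voice their end-of-life preferences.
SURVEY_BANK = [
    (Profile("50+", "college", "Native American", "advanced Alzheimer's"), False),
    (Profile("50+", "college", "Native American", "advanced Alzheimer's"), False),
    (Profile("50+", "college", "Native American", "advanced Alzheimer's"), True),
    # ... many more respondents in a real data set
]


def predict_preference(patient: Profile) -> float:
    """Estimate the probability that a patient with this profile would
    want the treatment, based on matching survey respondents."""
    matches = [wants for profile, wants in SURVEY_BANK if profile == patient]
    if not matches:
        raise ValueError("No matching respondents in the survey bank")
    return sum(matches) / len(matches)


if __name__ == "__main__":
    patient = Profile("50+", "college", "Native American", "advanced Alzheimer's")
    print(f"Estimated probability of wanting antibiotics: {predict_preference(patient):.0%}")
```

The key design point in the article is exactly what this toy version makes visible: the prediction comes entirely from how similar people answered, not from anything the individual patient ever said.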
Since it’s impossible to simply ask patients what treatment they would want at the time they’re incapacitated, we need to explore the efficacy of this technology, and how it affects both patients and their families. The March 6, 2008 Chicago Tribune states that studies looking at whether surrogates accurately predict patients' treatment choices must use hypothetical situations. For example, one study used the following scenario: "You were recently in an accident leaving you in a coma and unable to breathe without a machine. After a few months, the doctor determines that it is unlikely that you will come out of the coma, but there is a chance. If your heart stopped beating in this situation, what would you have told the doctor to do?" Wendler explains in a Medical Health Policy Forum on August 6, 2006 that analysis of 16 such studies reveals that surrogates accurately predicted the patient’s choice about 68% of the time, and amazingly, preliminary tests with digital euthanasia resulted in exactly the same accuracy. In fact, Dr. Wendler explains in an interview with Time Magazine in January 2008 that he hopes to build up a broader bank of personal profiles, including age, gender, religion, ethnic background, and socio-economic standing, that would enable predictions 15 to 30 percent more accurate than those of human surrogates. This possibly near-perfect system also raises an interesting point as to how exactly we give power of attorney to computer software. Medical advances aside, some relatives may always want to make end-of-life treatment decisions for incapacitated loved ones, while others may prefer this system’s assistance. For their part, patients could choose this option in advance, Wendler notes in a November 15, 2007 Columbia University Seminar, seeing it as a way of unburdening their family or simply believing it’s more accurate. In the previously cited Chicago Tribune, Kathryn Muller states, “I was married to my husband for 30 years, but when he had his stroke and couldn’t breathe on his own, I had no idea how he felt and couldn’t decide what to do. I just wish he could have told me what he would have wanted.”
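For readers curious how a figure like that 68 percent is arrived at, here is a small illustrative sketch of computing a concordance rate between surrogate predictions and patients' own stated choices across hypothetical scenarios. The scenario data below are invented for this post; only the roughly 68 percent figure comes from the studies Wendler cites.

```python
# Illustrative sketch: computing surrogate-patient concordance across
# hypothetical scenarios. The example data are made up; only the ~68%
# figure reported in the cited studies comes from the article.

def concordance_rate(surrogate_predictions, patient_choices) -> float:
    """Fraction of scenarios where the surrogate's prediction matched
    the patient's own stated treatment choice."""
    assert len(surrogate_predictions) == len(patient_choices)
    agreements = sum(s == p for s, p in zip(surrogate_predictions, patient_choices))
    return agreements / len(patient_choices)


if __name__ == "__main__":
    # Hypothetical answers: True = "attempt treatment", False = "do not".
    surrogates = [True, False, True, True, False, True, False, True, True, False]
    patients   = [True, False, False, True, False, True, True, True, False, False]
    print(f"Surrogate concordance: {concordance_rate(surrogates, patients):.0%}")
```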
Putting your life in the hands of a computer would’ve seemed preposterous only a decade ago, but with the development of digital euthanasia, we must consider the social and personal implications. The International Journal on Emerging Medical Technology notes on February 17, 2008 that there will no doubt be concerns about either the government or the hospital controlling the means to make decisions for those who cannot speak for themselves. Similarly, Dr. Timothy Quill explains in the August 15, 2007 American Family Physician that poor, non-empowered groups worry this process will be used to cut off treatments, especially when patients have no medical coverage. Wendler does contend that he is trying to identify the best way to make decisions for these patients. But the problem lies in the very language he uses: “make decisions FOR these patients.” In addition, Wendler’s research, presented in the August 2006 Medical Health Policy Forum, notes that while these predictions do match the preferences of the majority of Americans, the system ultimately bases its decisions only upon what the general population prefers and doesn’t take into account those who vary from the norm. So, an Orthodox Jew and a member of the Greek Orthodox faith, who may have similar beliefs about God but not about end-of-life care, are treated as identical in the profile. Beyond that, the personal implications of a technology that essentially has the ability to make life-or-death decisions for patients, especially those with no next of kin, are worth noting. In the previously cited CNN program, Wendler explains that not all family members are equipped or even appropriate to make this decision. Tumultuous personal relationships are not usually taken into account. Also, many are so emotionally overwhelmed that they are not in an appropriate mental state to make a decision about a loved one’s life. However, advice such as “people like your father prefer this type of treatment” might diminish doubts or offer support to overwhelmed family members. The decision is a difficult one: end their life, end their suffering, or simply wait for a miracle.
The burden of deciding whether or not to pull the plug on a loved one, as harsh as it sounds, is not uncommon, and it can cause serious mental anguish for the children and parents forced to make that call. This software may be a new chapter in Kevorkian-like medical treatment, but it may also be a ray of hope for families too grief-stricken to cope with the death of a loved one, who simply wish they could know what that person would have chosen.
-Roger
2 comments:
As technology advances rapidly, some people may find that technology is taking over our society of intelligent humans. This invention supports that idea. Not to say that human beings can't think for themselves, but because of technology, so many people find it easier for a machine to solve the problem rather than for a human being to actively engage in problem solving. How can a machine such as this one ever be perfected when it comes to deciding what is the "right" decision for a patient? A computer cannot think back over a person's life, as a loved one can, to determine whether the patient would want to continue on life support or not. Though there are people who may find it easier for a machine to make the decision for them, technology such as this only suggests that human beings are not able to make their own decisions, even about life or death.
Great post and comment. I wonder how much of an impact advances in technology will have on assisted suicide decisions in the next couple of years.
Also, for posters and commentators alike: don't forget to sign your posts and comments!
-Raul