Can Artificial Intelligence Predict and Prevent Suicide Attempts?

"Machine learning, a subset of artificial intelligence, is the scientific study of algorithms and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead. Machine learning algorithms build a mathematical model based on sample data in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. In its application for solving problems, machine learning is also referred to as predictive analytics." – Wikipedia
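
In concrete terms, the pattern that definition describes looks something like the short sketch below: a model is shown example inputs and outcomes, finds the statistical pattern on its own, and then makes predictions about cases it has never seen. The data and model choice here are purely illustrative assumptions, not drawn from any real screening tool.

    from sklearn.linear_model import LogisticRegression

    # Sample data: each row is [hours of sleep, number of recent stressful events].
    # The numbers are invented purely to illustrate the idea.
    X_train = [[8, 0], [7, 1], [6, 2], [4, 5], [5, 4], [3, 6]]
    y_train = [0, 0, 0, 1, 1, 1]  # 0 = low concern, 1 = elevated concern

    model = LogisticRegression()
    model.fit(X_train, y_train)             # the model infers the pattern on its own

    print(model.predict([[7, 1], [4, 6]]))  # predictions for new, unseen cases

Nowhere in that sketch does a programmer write a rule like "flag anyone with more than four stressful events"; the model derives its own decision boundary from the sample data.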

How can we tell if someone is a suicide risk? Many clinicians, because of a lack of training, don’t know how to properly assess for suicide. Many individuals with suicidal thoughts attempt to mask them. With suicide now a leading cause of death in the United States, researchers and medical professionals must think outside the box and devise new strategies for assessment and prevention.

Staggering statistics call for new methods

The Centers for Disease Control and Prevention (CDC) reports a 30 percent increase in deaths by suicide from 1999 to 2016. In 2016 alone, 45,000 people lost their lives to suicide across the US. Roughly 54 percent of those who died by suicide had never been diagnosed with a psychiatric condition.

Some situational factors have been cited as causes or precipitants of suicide. These include:

  • Relationship issues – 42%
  • Past or future crisis – 29%
  • Physical health complications – 22%
  • Substance use – 28%
  • Job or financial issues – 16%
  • Criminal or legal issues – 9%
  • Loss of housing – 4%

Assessing suicide risk can be challenging, as a person's situation or behavior may not raise any outward cause for concern for those not skilled at suicide assessment. When no warning signs are detected, intervention to prevent a suicide attempt is never initiated.

Some people who die by suicide appear outwardly healthy, keep up an active social life, and hold a good career, so their risk factors go undetected.

Artificial intelligence may be the answer

Identifying and assessing suicide risks requires several innovative approaches, as traditional methods may not always be effective. In medical settings, some doctors and medical staff are trained to identify suicide risks in patients who are treated for physical injuries and other health conditions. They may take demographics, medical history, and access to guns into account.

Now, one breakthrough method, using machine learning algorithms, may be effective at predicting suicide risks in medical patients who don’t show any outward signs of suicidal thoughts.

This technology was devised by data scientists at Vanderbilt University Medical Center in Nashville, Tennessee. The algorithm factors in patients’ hospital admission data, age, gender, zip code, medication, and diagnostic history to determine the likelihood of suicidal thoughts.

The algorithm was found to be 84 percent accurate at predicting near-future suicide attempts when tested on more than 5,000 patients who were admitted for self-harm or attempted suicide. In addition, the algorithm was 80 percent accurate when predicting potential suicide attempts within the next two years.
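
For readers curious about what such a model looks like in practice, the sketch below shows a risk classifier trained on the kinds of admission features described above. The synthetic data, feature encoding, and choice of a gradient-boosting model are assumptions made for illustration; this is not the actual Vanderbilt algorithm.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic stand-ins for de-identified hospital admission records.
    X = np.column_stack([
        rng.integers(18, 90, n),   # age at admission
        rng.integers(0, 2, n),     # gender (encoded)
        rng.integers(0, 100, n),   # zip code region (encoded)
        rng.integers(0, 15, n),    # number of active medications
        rng.integers(0, 10, n),    # count of prior diagnoses
    ])
    y = rng.integers(0, 2, n)      # 1 = later suicide attempt, 0 = none (synthetic labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    risk_scores = model.predict_proba(X_test)[:, 1]  # probability of a near-future attempt

    print("Held-out AUC:", roc_auc_score(y_test, risk_scores))

In a real deployment, the labels would come from documented outcomes rather than random numbers, and the resulting score would be used to prioritize patients for clinical follow-up, not as a diagnosis.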

Algorithms like this are not yet in routine clinical use in the United States, but Colin Walsh, the Vanderbilt data scientist who led the project, urges widespread adoption of the technology in all medical settings.

Facebook currently uses similar algorithms on a much smaller scale: they can quickly flag posts that suggest a user may be suicidal and then connect that user with mental health services.

Social media algorithms are less effective at identifying suicide risk in people who mask it, and they have no access to a user's medical history. That limitation only underscores the importance of bringing this technology into hospitals and other medical facilities.

Rolling out the technology

Walsh seeks to someday make this a reality and is currently working with doctors to devise an intervention program based on artificial intelligence. If the program is fully implemented, patients found to be at high risk may be admitted to a hospital for several days under supervision. If a lower risk is identified, patients may be referred to mental health specialists for other treatment options.

Walsh expects the program to be in wide use within the next two years. Until then, the algorithm will require further testing and development, as its predictions may not always be reliable.

“All models have times when they’re wrong,” said Walsh. “We need to learn from those examples, as well.”

Incorrect predictions must be corrected as this technology is developed. For example, if the algorithm flagged a patient as suicidal, but further assessment by a doctor determined otherwise, that information must be fed back to the algorithm. On the other hand, if a patient was not flagged as suicidal (or was flagged as low risk) but dies by suicide weeks later, that data must also be fed back.
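
A minimal sketch of what that correction loop could look like appears below. The record structure, field names, and the apply_clinical_feedback helper are hypothetical; the point is only that both kinds of errors, false positives and false negatives, get written back as corrected labels for the next round of training.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ScreeningRecord:
        patient_id: str
        features: List[int]                      # inputs used at screening time
        predicted_high_risk: bool                # what the algorithm said
        observed_outcome: Optional[bool] = None  # filled in later by clinicians

    def apply_clinical_feedback(record: ScreeningRecord, observed_outcome: bool) -> ScreeningRecord:
        # Attach the clinician-confirmed outcome so the record becomes a corrected training label.
        record.observed_outcome = observed_outcome
        return record

    # False positive: the algorithm flagged the patient, but a doctor's assessment said otherwise.
    fp = apply_clinical_feedback(
        ScreeningRecord("A-001", [64, 1, 12, 3, 0], predicted_high_risk=True),
        observed_outcome=False,
    )

    # False negative: the algorithm scored the patient as low risk, but an attempt followed weeks later.
    fn = apply_clinical_feedback(
        ScreeningRecord("A-002", [29, 0, 47, 1, 2], predicted_high_risk=False),
        observed_outcome=True,
    )

    corrected_training_data = [fp, fn]  # would be merged into the next retraining run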

In addition, ethical issues must be addressed, such as ensuring patients are informed when they are assessed for suicide risk and that their privacy is protected. Moreover, this technology should only be used to identify suicide risk so that medical professionals can assess patients further. The prospect of holding patients against their will because of an inaccurate screening raises still more ethical issues.

Legal issues raised by medical errors

The Law Offices of Skip Simpson will keep an eye on these developments. We are hopeful that using machine learning to analyze data in medical settings will identify suicide risks in outwardly healthy individuals and prompt doctors to provide preventative treatment.

Machine learning has the potential to greatly improve our prediction of suicidal behavior. The technology may also raise legal issues if a patient is harmed because of a faulty screening. For example, if a patient was suicidal at the time of screening, but the screening wrongly indicated otherwise and medical staff therefore failed to assess and intervene, those responsible should be held accountable.

Suicide prevention in medical settings should be multifaceted. There must be a balance between technological methods and the judgment of trained doctors and psychiatric specialists.

If you lost a loved one to suicide following medical treatment, nationally known suicide attorney Skip Simpson would like to discuss your matter with you and explore your legal options.

Simpson has a successful track record representing medical patients and their families. To learn how he can help you, contact us to schedule a free consultation.

