Researchers found that the traditional way children receiving emergency care are monitored and tracked may miss a substantial share of those at risk of self-harm, but AI can help health providers make better assessments.
In the wake of the shocking case of a Belgian man who reportedly ended his life after an AI chatbot encouraged him to do so, a new study has found that machine learning models may be put to effective use for the exact opposite: preventing suicide among young people.
A peer-reviewed study by UCLA Health researchers, published last week in the journal JMIR Mental Health, found that machine learning can detect self-injurious thoughts or behaviours in children far better than the data systems currently used by health care providers.
According to a 2021 report from UNICEF, suicide is a leading cause of death among young people in Europe. An estimated nine million adolescents aged between 10 and 19 live with mental disorders, with anxiety and depression accounting for more than half of all cases.
In the US, an estimated 20 million young people have a diagnosable mental health disorder, according to the US Department of Health and Human Services.
UCLA Health researchers reviewed the clinical notes from 600 emergency department visits by children aged between 10 and 17 to see how well current mental health evaluation systems identified signs of self-harm and assessed suicide risk.
They found that these clinical notes missed 29% of children who came to the emergency department with self-injurious thoughts or behaviours, while the statements health specialists use to flag at-risk patients, known in the US as the “chief complaint”, overlooked 54% of patients.
In the latter case, health specialists failed to spot the signs of self-injurious thoughts or behaviours because children often do not report suicidal thoughts and behaviours during their first visit to the emergency department.