Can Artificial Intelligence Really Identify Suicidal Thoughts? Experts Aren’t Convinced

Australian experts have spoken out about a recent US study that claimed to show artificial intelligence can identify people with suicidal thoughts – by analysing their brain scans.

It sounds promising – but only 79 people were studied, so are the results enough to show this is a path worth pursuing?

The research, published in Nature, studied brain activity in subjects presented with a number of different words – such as death, cruelty, trouble, carefree, good and praise. A machine-learning algorithm was then trained to distinguish the neural responses of the two groups involved – those with suicidal thoughts, and those without.

And it showed promise – the algorithm correctly identified 15 of 17 patients as belonging to the suicide group, and 16 of 17 healthy individuals as belonging to the control group. But does this mean it could be used as a diagnostic tool?
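As a back-of-the-envelope check on those figures (assuming 17 subjects per group, as reported above), the classification rates work out as follows:

```python
# Figures quoted above: 15 of 17 at-risk subjects and 16 of 17 controls
# were classified correctly. All counts come from the article itself.
true_pos, n_suicide = 15, 17
true_neg, n_control = 16, 17

sensitivity = true_pos / n_suicide                          # 15/17 ≈ 0.88
specificity = true_neg / n_control                          # 16/17 ≈ 0.94
accuracy = (true_pos + true_neg) / (n_suicide + n_control)  # 31/34 ≈ 0.91

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

High rates on 34 subjects – but, as the experts below point out, accuracy on the training sample alone says little about how a tool would perform in practice.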

Professor Max Coltheart is Emeritus Professor of Cognitive Science at the ARC Centre of Excellence in Cognition and its Disorders and the Department of Cognitive Science at Macquarie University.

The title of the paper says brain imaging data ‘identifies suicidal youth’. Read the fine print, though, and you will find this is not true.

This study had 79 people, 38 who reported that they thought about suicide and 41 who said they did not. Can brain imaging reliably tell us which subjects were which? The simple answer is no.

Of these 79 people, more than half (57 per cent) gave brain imaging data that were unusable for any attempt at classifying the subject as at-risk of suicide or not. That included 21 (55 per cent) of the people at risk of suicide. So, even if the results of this study generalized to all people, 55 per cent of people genuinely at risk could not be identified by the methods reported here.
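The exclusion percentages above can be reproduced from the raw counts (taking the 17 usable subjects per group from the classification results reported earlier):

```python
# Exclusion arithmetic from the counts quoted above. The figure of 17
# usable subjects per group comes from the reported classification results.
total, at_risk = 79, 38
usable = 17 + 17                  # subjects whose scans could be analysed

print((total - usable) / total)   # 45/79 ≈ 0.57 -> "57 per cent" unusable
print((at_risk - 17) / at_risk)   # 21/38 ≈ 0.55 -> 55 per cent of at-risk excluded
```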

Importantly, a check that studies like this routinely include was omitted. Even when you have found a way of classifying people into two groups by analysing brain imaging data, you cannot claim to have a genuine classification method unless you show that the algorithm can successfully classify a new set of people on whom it has not been trained. This is called cross-validation. Because this wasn't done, the authors can't even claim that the method will reliably detect risk of suicide in the 43 per cent of people who yield usable brain imaging data.
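The idea of scoring a classifier only on people it has never seen can be sketched in a few lines. The code below is purely illustrative – a leave-one-out cross-validation loop with a simple nearest-centroid classifier on synthetic data, not the study's algorithm or its brain-imaging features:

```python
import random

# Synthetic stand-in data: 17 subjects per group, 5 features each.
# Group means are deliberately separated so the toy classifier has signal.
random.seed(0)
data = [([random.gauss(mu, 1.0) for _ in range(5)], label)
        for label, mu in [(0, 0.0), (1, 1.5)]
        for _ in range(17)]

def centroid(rows):
    return [sum(col) / len(col) for col in zip(*rows)]

def predict(x, c0, c1):
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

correct = 0
for i, (x, label) in enumerate(data):
    train = data[:i] + data[i + 1:]  # hold subject i out of training
    c0 = centroid([f for f, l in train if l == 0])
    c1 = centroid([f for f, l in train if l == 1])
    correct += predict(x, c0, c1) == label

print(f"leave-one-out accuracy: {correct}/{len(data)}")
```

Each subject is held out in turn, the classifier is built from the remaining 33, and the held-out subject is then scored – so the resulting accuracy reflects performance on unseen cases, which is the check the comment says was missing.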
