Photo: Tânia Rêgo / Agência Brasil

An “AI detective” does a very simple thing: we give it a block of text and it tells us whether the injury described there is likely to have a violent origin or not.
Violence sometimes hides in plain sight, especially in a busy hospital emergency room.
Between the chaos of tired, overloaded teams and victims’ reluctance to speak, gender-based violence and aggression in general often go unnoticed.
A new Artificial Intelligence system, developed in Italy, now aims to close this gap and, in its first tests, has already identified thousands of injuries that the country’s health professionals had incorrectly labeled.
The project is an interdisciplinary effort involving the University of Turin, the local health unit ASL TO3 and the Mauriziano Hospital. It is coordinated by Daniele Radicioni, associate professor of Computer Science at the University of Turin.
“Our system does something very simple: we give it a block of text and it tells us whether the injury described there is likely to have a violent origin or not,” Radicioni explained.
The team had access to a very large data set: 150 million emergency department records from the Istituto Superiore di Sanità (ISS) and more than 350,000 from the Mauriziano Hospital.
The objective was to teach the computer to read the “screening notes” — face-to-face clinical assessments written by nurses and doctors. The system does not use medical images: it works only with these notes.
The problem is that these notes are messy. They vary from hospital to hospital and are full of abbreviations, typos and clinical jargon. To interpret them, the researchers trained several AI architectures, including a custom model called BERTino.
BERTino has been pre-trained specifically for the Italian language. It is lighter and faster than large-scale models such as GPT, which makes it suitable for hospital computers with limited resources.
Unlike older systems, which typically limit themselves to searching for keywords such as “punch” or “hit”, this model uses what scientists call an “attention mechanism”: it analyzes the structure of the sentence as a whole to understand the context, distinguishing, for example, between “hit by a car” (an accident) and “hit by a partner” (violence).
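To give a concrete, if simplified, picture of what this looks like in practice, the sketch below calls a fine-tuned Italian classifier through the Hugging Face transformers library. The checkpoint name "hospital-ai/bertino-violence" and the example notes are hypothetical; the study’s actual model and training pipeline are described here only in general terms.

```python
# Illustrative sketch only: a fine-tuned Italian BERT-style model used to label
# triage notes as violent vs. non-violent. The checkpoint name is hypothetical.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hospital-ai/bertino-violence",  # hypothetical fine-tuned checkpoint
)

notes = [
    "Paziente investita da un'auto, trauma alla gamba destra",  # hit by a car
    "Paziente riferisce di essere stata colpita dal partner",   # hit by a partner
]

# The model reads each sentence as a whole, so the same verb can end up in
# different classes depending on its context.
for note, result in zip(notes, classifier(notes)):
    print(f"{result['label']:>12}  ({result['score']:.2f})  {note}")
```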
A gap in the data
At the beginning of the study, the researchers noticed a strange discrepancy. In the national database (ISS), around 3.6% of injuries were marked as violent. But at the Mauriziano Hospital in Turin, this figure plummeted to just 0.2%.
Was Turin simply a much safer city, or was something being missed?
It was a good testing ground. The team applied the AI to almost 360,000 hospital records classified as “non-violent” to see whether the algorithm would catch what humans had missed.
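Conceptually, that audit step amounts to re-scoring every record the staff had labeled as non-violent and keeping the ones the model disagrees with, roughly as in the sketch below. The file name, column names and label values are illustrative assumptions, not details taken from the study.

```python
# Rough sketch of the audit: re-score records labeled "non-violent" by staff and
# flag the ones the model considers violent. All names below are hypothetical.
import pandas as pd
from transformers import pipeline

classifier = pipeline("text-classification", model="hospital-ai/bertino-violence")

records = pd.read_csv("mauriziano_triage.csv")                # hypothetical export
non_violent = records[records["staff_label"] == "non-violent"]

preds = classifier(non_violent["triage_note"].tolist(), truncation=True)
non_violent = non_violent.assign(model_label=[p["label"] for p in preds])

flagged = non_violent[non_violent["model_label"] == "VIOLENT"]  # assumed label name
print(f"Flagged for manual review: {len(flagged)} of {len(non_violent)} records")
```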
The results were unsettling: the system flagged 2,085 records as potentially violent. When the researchers analyzed them manually, they confirmed that 2,025 did, in fact, correspond to injuries resulting from violence, a precision of roughly 97%.
“The Mauriziano Hospital works very effectively on prevention,” Radicioni said. “Therefore, the low numbers could be due to the fact that some violence was avoided.”
Still, under-detection and under-reporting of violence persist, say the study authors, and this under-detection is particularly common in cases of domestic violence.
Notoriously difficult to identify
According to the most recent data from the Italian National Statistics Institute (ISTAT), only 13.3% of women who suffer violence report it, and this percentage drops to 3.8% when the aggressor is the current partner.
Women rarely reveal the violence suffered at the hands of their partners, as they may depend on them financially, fear reprisals or feel ashamed. They may also fear being blamed, a problem that remains significant in many countries.
In addition to detecting violence, Artificial Intelligence has shown potential to identify who caused it. In a separate task, the model tried to classify the aggressor, distinguishing between a partner, a family member or a thief.
The AI distinguishes who caused the injury by treating the “aggressor prediction” as an independent classification task.
After a record is identified as violent, the model reanalyzes the text to assign the perpetrator to one of eight specific categories. If the note says “assaulted by husband”, the system associates it with the category Spouse/Partner. If the text describes a robbery, it classifies the agent as Burglar.
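A minimal sketch of that second stage might look like the following, assuming a hypothetical fine-tuned checkpoint; only Spouse/Partner, family member and Burglar are named in the article, so the full eight-category label set is not reproduced here.

```python
# Minimal sketch of the second-stage "perpetrator" classifier. The checkpoint
# name is hypothetical and the label set is assumed, not taken from the study.
from transformers import pipeline

aggressor_classifier = pipeline(
    "text-classification",
    model="hospital-ai/bertino-aggressor",  # hypothetical checkpoint
)

def label_perpetrator(note: str) -> str:
    """Run only on records the first model already flagged as violent."""
    return aggressor_classifier(note)[0]["label"]

print(label_perpetrator("Paziente aggredita dal marito"))       # e.g. Spouse/Partner
print(label_perpetrator("Paziente ferito durante una rapina"))  # e.g. Burglar
```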
It may seem like this doesn’t add anything new, but the AI found cases that were marked as “non-violent” even though the text written during screening contained clear signs of violence.
If the note says “Patient fell down the stairs”, but the person was actually pushed and didn’t tell anyone, the AI has no way of detecting it. But if the note says “Patient reports aggression by her husband”, then even if the case has been labeled as an “accident”, the AI detects it.
This type of error happens surprisingly frequently.
Identifying the source of the injury is crucial, because physical violence is a strong indicator of escalation. “The vast majority of women who end up being killed had previously gone to the emergency department due to episodes of violence,” Radicioni points out. Identifying these cases early can literally save lives.
