There is no evidence that using machine learning to predict outcomes for families involved with children’s social care services is effective, research has found.
Models built by What Works for Children’s Social Care and trialled over 18 months in four local authority areas failed to identify, on average, four out of every five children at risk.
Where the models flagged a child as being at risk, meanwhile, they were wrong six out of 10 times.
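These two figures map onto the standard evaluation metrics recall (the share of genuinely at-risk children the model flags, about 20% here) and precision (the share of flagged children who were genuinely at risk, about 40% here). A minimal sketch, using illustrative counts rather than the study's raw data:

```python
# Recall and precision as implied by the reported figures.
# The counts below are illustrative only, chosen to reproduce
# the article's "four out of five missed" and "wrong six out
# of 10 times" results.

def recall(true_positives, false_negatives):
    """Share of genuinely at-risk children the model flagged."""
    return true_positives / (true_positives + false_negatives)

def precision(true_positives, false_positives):
    """Share of flagged children who were genuinely at risk."""
    return true_positives / (true_positives + false_positives)

# Missing four out of every five at-risk children -> recall of 0.2.
print(recall(true_positives=20, false_negatives=80))    # 0.2

# Wrong six out of 10 times when flagging -> precision of 0.4.
print(precision(true_positives=40, false_positives=60))  # 0.4
```

Note that the two metrics are independent: a model can flag very few children (low recall) and still be wrong about most of those it does flag (low precision), which is the combination the study reports.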
The research found introducing text information extracted from social work reports did not reliably improve models’ performance, despite this offering a more nuanced picture of families than can be gleaned from demographic information and data tracking interactions with practitioners.
What is machine learning?
Machine learning (ML) seeks to find patterns in data. What Works examined a type of ML called predictive analytics, under which models use patterns from historic data to learn to what extent certain inputs or decisions are associated with particular outcomes. It then uses these patterns to predict the likelihood of the specified outcome in future, given the relevant input data.
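In its simplest form, this kind of model learns how often an outcome followed a given set of inputs in past cases and treats that rate as the predicted probability for new cases. A minimal sketch, with entirely made-up case records and a hypothetical `prior_referrals` feature:

```python
from collections import defaultdict

# Made-up historic case records: (input features, did the case
# later escalate?). Real models use far richer demographic and
# interaction data, but the principle is the same.
historic_cases = [
    ({"prior_referrals": "yes"}, True),
    ({"prior_referrals": "yes"}, False),
    ({"prior_referrals": "no"}, False),
    ({"prior_referrals": "no"}, False),
]

# Tally, for each combination of inputs, how many cases escalated.
counts = defaultdict(lambda: [0, 0])  # key -> [escalations, total]
for features, escalated in historic_cases:
    key = tuple(sorted(features.items()))
    counts[key][0] += int(escalated)
    counts[key][1] += 1

def predict(features):
    """Predicted probability of escalation, from historic rates."""
    escalations, total = counts[tuple(sorted(features.items()))]
    return escalations / total if total else None

print(predict({"prior_referrals": "yes"}))  # 0.5
```

Production systems replace the simple frequency table with statistical models that generalise across feature combinations, but the underlying logic, past patterns projected onto future cases, is what the What Works study was testing.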
The study report called on councils already trialling predictive technology in children’s social work to be transparent about its limitations. One such council, Hackney, axed its Early Help Profiling System (EHPS), commissioned from the private provider Xantura, late in 2019 after it did not “realise the expected benefits”.
“Given the extent of the real-world impact a recommendation from a predictive model could have on a family’s life, it is of utmost importance we work together as a sector to ensure these techniques are used responsibly if at all,” the report concluded.
‘Time to stop and reflect’
The new research follows on from a separate What Works review, published in January 2020, which questioned how ethically compatible machine learning was with children’s social work.
Michael Sanders, the What Works executive director and co-author of the study report, said the findings indicated that it was time for the children’s social care sector “to stop and reflect”.
“The onus is now on anyone who is trying to say [that predictive analytics] does work, to come out and transparently publish how effective their models are,” Sanders told Community Care.
“What we have shown in our research is that with a lot of the best techniques available to us, the data across these four local authorities says it’s not working,” he added.
Sanders, who has also researched machine learning in children’s social care as part of the Behavioural Insights Team (BIT), formerly part of government and known as the ‘nudge unit’, said his views had changed, in line with available evidence, as to the technology’s potential benefits.
“We don’t think we are infallible – if someone can find a mistake we’ve made, or can take our code [which will be publicly available] and do something good with it, then I am happy for that to happen,” he said. “But it needs to be in an open and transparent way, not behind closed doors.”
Sanders added that central government, or bodies such as the Local Government Association (LGA) or Association of Directors of Children’s Services (ADCS), could now take a lead in policing the use of machine learning until such a time as its worth could be demonstrated.
‘Surprisingly bad performance’
The What Works study’s models were developed to predict eight separate outcomes (see box), using three to seven years of data provided by the four councils, from the North West, South West, West Midlands and South East regions.
The eight predictions
The What Works study looked at eight different scenarios, each based on a decision-making point for a social worker in a case and looking at whether the case would escalate at a later point in time. They were:
- Is a child re-referred within 12 months of a ‘no further action’ decision, and does the case then escalate to statutory intervention?
- Does a child’s case progress to a child protection plan or beyond within six months of a contact?
- Is a child’s case open to children’s social care, but below statutory intervention, within 12 months of a ‘no further action’?
- Is a child’s case escalated to a child protection plan or beyond between three months and two years of a referral?
- Is a child’s case escalated to a child protection plan or beyond within six to 12 months of a contact?
- After successfully engaging with early help, is a child referred to statutory services within 12 months?
- Does a child’s case escalate to a child protection plan between one and 12 months of an assessment authorisation date?
- Does a child’s case escalate to them becoming looked-after between one and 12 months of an assessment authorisation date?
Each was tested in four different builds, to gauge whether including pseudonymised text data from social work records improved performance, and what effect training only on historical data (thereby simulating real-world usage) had.
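The historical-data-only builds amount to a time-based split: the model is trained only on records that predate the period it is evaluated on, mimicking deployment. A minimal sketch, assuming each case record carries a year field:

```python
# A time-based train/test split, simulating real-world usage:
# the model only ever sees cases from before the test period.
# The records and cutoff year here are illustrative.

cases = [
    {"year": 2015, "escalated": False},
    {"year": 2016, "escalated": True},
    {"year": 2017, "escalated": False},
    {"year": 2018, "escalated": True},
]

cutoff = 2017
train = [c for c in cases if c["year"] < cutoff]
test = [c for c in cases if c["year"] >= cutoff]

print(len(train), len(test))  # 2 2
```

This matters because a model evaluated on data from the same period it was trained on can look better than it would in practice, where it must predict outcomes it has not yet seen.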
In each instance, the models failed to reach a pre-specified ‘success’ threshold of 65% precision. “This is lower than the threshold we would recommend for putting a model into practice but provides a useful low benchmark,” the report said.
In particular, the study found, the models tended to miss the majority of children at risk of a given outcome, potentially producing results that discourage social workers from intervening.
In models where text had been introduced, performance improved in some scenarios. But it worsened in others, giving an overall picture of no consistent benefit – a result Sanders said was unexpected.
“I was surprised by just how bad the models performed overall,” he said. “From my previous research [with BIT, in a single borough], we found quite a big benefit to using text data as well, but that picture is much cloudier coming out of this piece of research.”
Sanders said that it was likely the evolution of systems and practice models, and turnover of staff, meant text data “is particularly vulnerable to changing over time”, making it less reliable as a basis for predictions.
A poll of 129 social workers conducted as part of the study uncovered no clear support for the use of predictive analytics across a range of scenarios. The most popular option, a tool to support practitioners in identifying early help for families, was backed by only 26% of respondents, while just over a third (34%) said the technology should not be used at all.