Training or Tech Overhaul? Fixing the Ethical Crisis in Social Work's Digital Future
AI is already rewriting social work assessments. Not in some distant pilot, but right now, in the tools practitioners open every morning. The question is no longer whether AI will change social care. It is whether practitioners will have the training and the systems to use it safely.
This is a debate with two sides, and both of them are right.
The Invisible Rewrite
AI-assisted writing tools are being marketed to social care as efficiency gains: faster documentation, clearer language, streamlined assessments. But clarity according to whom?
A practitioner writes: "The person and their family have concerns about managing daily routines."
The AI rewrites it as: "There is a significant risk around daily living."
That single change transforms a family's lived experience into a formalised risk judgement. No practitioner intended that. No one consented to it. But the record now reads as though a professional made that assessment, and the decisions that follow (eligibility, resource allocation, safeguarding thresholds) are shaped accordingly.
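This kind of drift is also detectable, not just debatable. As a minimal sketch (the list of risk-framing terms below is an illustrative assumption, not an endorsed taxonomy), the following Python snippet flags language that appears in an AI rewrite but not in the practitioner's original note, so the change is surfaced for review rather than silently absorbed into the record:

```python
# Minimal sketch: flag risk-framing language that an AI rewrite has
# introduced but the practitioner's original text did not contain.
# The term list is illustrative only, not an endorsed taxonomy.

RISK_TERMS = {"risk", "significant", "safeguarding", "unable", "vulnerable"}

def flag_introduced_risk_terms(original: str, rewrite: str) -> set[str]:
    """Return risk-framing terms present in the rewrite but absent
    from the original, so a practitioner can review each one."""
    original_words = set(original.lower().split())
    rewrite_words = set(rewrite.lower().split())
    return {t for t in RISK_TERMS if t in rewrite_words and t not in original_words}

original = "The person and their family have concerns about managing daily routines."
rewrite = "There is a significant risk around daily living."
print(flag_introduced_risk_terms(original, rewrite))  # e.g. {'significant', 'risk'}
```

A check like this does not decide whether the rewrite is wrong; it simply ensures a human sees the shift before it becomes the official judgement.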
When AI starts defining what counts as significant in someone's life, social workers need the skills to push back.
The Tools We Already Cannot Regulate
Before we even get to AI, consider where we are with basic digital tools. Practitioners have used WhatsApp for years. We understand end-to-end encryption, we know these platforms were never designed for social care, and we have adapted: creating workarounds, avoiding sharing sensitive documents, and developing informal data awareness.
But there is no unified national standard for personal messaging in professional social work. The Data Protection Act does not address it directly. Practice relies on local custom, discretion, and organisational patchwork rather than shared professional guidance.
If we cannot regulate established communication tools, regulating algorithms that rewrite professional reasoning will be harder still.
The Case for Training
Structured ethical AI and digital literacy training is not a technical upgrade. It is the foundation of professional accountability in a digital era.
Training is the mechanism that safeguards the Care Act's values in a changing practice environment. It enables practitioners to question AI outputs, interpret how data is processed, and recognise when technology has altered the meaning of someone's lived experience.
Practitioners do not need to become data scientists. They need to be confident, critical users who can interpret, challenge, and ethically apply technology. That means understanding not just how to write a prompt, but why a carefully written prompt might still produce output that contradicts person-centred practice.
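To make that last point concrete, here is a minimal sketch of the kind of guardrail prompt a critically trained practitioner might construct. The wording and structure are illustrative assumptions, not a recommended template; the point is that even instructions this explicit cannot guarantee a model will preserve the person's own framing, which is why the output still needs a critical human read.

```python
# Minimal sketch (illustrative only): a person-centred guardrail prompt.
# Even with explicit rules like these, a language model may still reframe
# a family's own words as a formal risk judgement, so every output needs
# practitioner review before it enters the record.

def build_rewrite_prompt(practitioner_text: str) -> str:
    """Assemble a prompt that asks the model to clarify wording
    without changing the professional meaning of the note."""
    return (
        "Rewrite the following case note for clarity.\n"
        "Rules:\n"
        "- Do not introduce risk judgements the author did not make.\n"
        "- Keep the person's and family's own framing of their situation.\n"
        "- Do not change 'concerns' into 'risks' or add severity language.\n\n"
        f"Case note: {practitioner_text}"
    )

prompt = build_rewrite_prompt(
    "The person and their family have concerns about managing daily routines."
)
print(prompt)
```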
The Case for System Reform
The counterargument is equally valid. If digital systems are poorly designed, untested for person-centred care, or built without practitioner input, no amount of training will fix the flaw.
Ethical practice cannot exist within unethical systems.
Organisations deploying AI in social care should be asking three questions of every vendor and digital partner:
- How does the tool embed practitioner and service user feedback in its design?
- How does it support the individual needs and preferences of the people being assessed?
- How does it prioritise data privacy and user empowerment?
True reform means embedding social work expertise directly into the development and governance of technology, before deployment, not after failure. Co-production is not optional. It is the only way to build systems that reflect the values of the profession using them.
Both Sides Need Each Other
Social care's digital ethics gap has two layers: user readiness and system readiness. Each depends on the other.
Practitioners need competence now, but the profession also requires a seat at the design table to prevent repeated ethical failures. Without trained practitioners, even well-designed systems will be misused. Without well-designed systems, even highly trained practitioners will be fighting tools that work against person-centred values.
Five Steps Organisations Can Take Now
- Audit current digital tools and governance policies. Know what your practitioners are actually using and where the gaps are.
- Deliver tailored AI and digital ethics training for all practitioners, not just IT leads.
- Establish multi-disciplinary teams that include practitioners, service users, and technology specialists.
- Co-produce new systems through advisory groups and pilot testing before full rollout.
- Evaluate ethical and operational impact continuously using practitioner and service user feedback, not just efficiency metrics.
Quality, Not Speed
The future of ethical digital practice depends on how we define and measure quality, not speed. AI can enhance care only when it is transparent, explainable, and guided by professional reasoning rather than commercial convenience.
That principle underpins everything we do at TESSA Tools: our research, our training, and our responsible AI framework, which puts Voice, Ethics, Reasoning, and Assurance at the heart of every assessment and algorithm.