Why Practitioners Do Not Fully Trust AI, Even When It Is Accurate

Professional trust and authorship dissonance in AI-generated text

Here is something that does not get talked about enough in the AI adoption conversation: a practitioner can read an AI-generated assessment, confirm it is factually correct, and still not trust it.

Not because of hallucination. Not because the tool failed. But because the words do not sound like theirs.

This is not irrational. It is something linguists and human-computer interaction researchers have been studying for years. It has a name: authorship dissonance.

What is authorship dissonance?

Authorship dissonance describes the discomfort people experience when presented with text that is attributed to them, or used in their professional context, but does not match their voice. The phrasing is off. The emphasis lands in the wrong place. The tone feels borrowed.

It is not about accuracy. It is about alignment.

Research from the Oxford Internet Institute and King's College London shows that professionals instinctively sort language as they read. They are not just reading for content. They are scanning for signals of competence, empathy, and in-group membership. When AI-generated text uses generic phrasing or a clinical tone where a practitioner would use warmth or professional shorthand, it triggers a quiet but powerful sense of unease.

The text is technically fine. But it does not feel like something a social worker would write. And that gap matters more than we think.

Why this matters in social work, healthcare, and education

Social workers, nurses, teachers, and other frontline professionals develop a professional voice over years of practice. That voice is not decorative. It carries meaning. The way a social worker frames a risk assessment, the way a nurse documents a patient interaction, the way a teacher writes a referral: these reflect professional judgement, not just information.

When AI generates text in a professional context, it often produces something that is correct but tonally flat. The factual content is there, but the professional signals are missing. Stanford's Human-Computer Interaction Group has found that people evaluate AI-generated text differently from human text, even when the content is identical. The absence of a recognisable voice shifts how much weight the reader gives the output.

  • Voice: Professional identity is carried through language
  • Trust: Readers assess credibility through tone, not just facts
  • Risk: Dissonance leads to disengagement or uncritical acceptance

This creates a practical problem. Practitioners who notice the dissonance may reject AI-generated text, even when it is accurate, because it does not feel trustworthy. Practitioners who do not notice it may accept the text without scrutiny, because the fluent surface disguises a lack of professional depth. Both outcomes are risks.

The in-group question

There is a deeper layer here. Professional language functions as an in-group marker. When a social worker reads a Care Act assessment, they are not just processing information. They are evaluating whether the author understands the context, the legal framework, the human complexity behind the case.

AI-generated text does not carry those signals. It can use the right terminology, but it cannot convey the lived experience of professional practice. The result is text that passes a factual check but fails what researchers call the "authenticity test": does this sound like it was written by someone who understands this work?

This is not about nostalgia for handwritten notes. It is about recognising that professional trust is built partly through language, and AI-generated text does not yet carry the same weight.

What this means for AI training

Most AI training in professional settings focuses on accuracy: how to spot hallucinations, how to verify outputs, how to use prompts effectively. These are essential skills. But they miss the voice problem entirely.

Voice alignment turns accuracy into trust

Effective AI literacy training needs to go beyond factual accuracy. Practitioners need to understand how AI-generated text differs from professional writing, why that difference matters, and how to adapt outputs so they reflect genuine professional judgement. Without this, organisations risk adopting AI tools that are technically sound but professionally hollow.

Practitioners need training that addresses:

  • Recognition: Understanding that AI-generated text has a detectable "voice" and learning to identify it
  • Adaptation: Skills to reshape AI outputs so they carry professional authority and person-centred language
  • Critical evaluation: Moving beyond "is this accurate?" to "does this reflect the professional judgement I would apply?"
  • Ownership: Maintaining authorship and accountability when using AI as a drafting tool

The bigger picture

Authorship dissonance is not a minor UX issue. It sits at the intersection of professional identity, AI adoption, and trust in public services. If practitioners cannot trust AI-generated text, they will either reject it (wasting the investment) or accept it uncritically (creating risk).

The fix is not better models. It is better training. Practitioners who understand voice alignment can bridge the gap between AI capability and professional standards. They can use AI as a tool without surrendering their professional voice.

That is what effective AI literacy looks like. Not just understanding how the technology works, but understanding how it interacts with the deeply human aspects of professional practice.

