The AI Transparency Trap: Why Telling the Public You Use AI Could Be Hurting Your Credibility

With a Detour Through Van Buren County’s AI Policy

Artificial Intelligence is no longer the domain of futuristic novels or that one IT consultant who always insists it will “revolutionize municipal workflows.” In Van Buren County, it’s already quietly reshaping how we draft reports, analyze data, and manage public communications. And, naturally, the initial impulse is to be transparent about it. After all, what could go wrong with a little openness?

Well, potentially, trust.

Welcome to the Trust Penalty

A recent study, The Transparency Dilemma: How AI Disclosure Erodes Trust, drops a small pebble into the serene pond of government transparency and watches the ripples. Across 13 experiments, researchers found that simply disclosing the use of AI for a task caused people to trust the outcome, and the person responsible, less. This so-called “trust penalty” is especially inconvenient in public service, where credibility is currency and suspicion spreads faster than budget overruns.

Imagine a Van Buren County department head uses AI to draft a clean, informative bulletin about a new recycling program. They decide to note at the bottom, with earnest pride, “This message was created with the assistance of AI.”

According to the research, that one line might quietly unravel the very public trust the rest of the message was trying to build.

Legitimacy, or Why People Still Prefer Humans Making Decisions (Even Imperfect Ones)

The problem, researchers argue, lies in legitimacy. The public expects that government work, particularly the kind that involves decisions, communications, or services, is conducted by trained professionals motivated by civic duty, not algorithms optimizing for character count.

In Van Buren County, this concern is not abstract. Our AI Usage Policy, adopted in March 2025, makes it clear: humans remain accountable. AI may assist, but oversight is not optional. Section 3.1 of the policy notes:

“Users are ultimately responsible for the content and outcomes produced by AI systems. Human oversight is essential to ensure AI decisions are justifiable, align with the county’s ethical standards, and meet public expectations.”

The takeaway? Even our policy reflects what the research confirms: people are more comfortable when a person is ultimately at the wheel, even if the steering is power-assisted.

Other Study Findings That Complicate Things Further

Several other uncomfortable truths emerge from the study that are especially relevant in Van Buren County, where transparency isn’t just encouraged, it’s a cultural expectation.

  • Being Outed is Worse Than Self-Reporting
    If the public discovers you used AI without telling them, it’s worse than just admitting it. So while disclosure can hurt trust, nondisclosure followed by exposure? Catastrophic. This is why Van Buren’s policy doesn’t just allow for AI use, it requires oversight, logging, and incident reporting (see Section 3.4). Quiet AI isn’t rogue AI. It’s just AI on probation.
  • “Human Oversight” Doesn’t Help Much
    Ironically, saying that “a human reviewed the AI output” doesn’t improve trust much. In fact, according to the study, it may worsen it, creating ambiguity about who did the work and who is responsible. It’s the bureaucratic equivalent of “The intern handled it.”
  • AI + Human = Less Trusted Than AI Alone
    The most confounding insight: people trust an autonomous AI more than a human who used AI. Apparently, if you’re going to let the machines help, just don’t admit it unless you want to trigger a minor existential crisis.

Van Buren County’s Approach: A Slightly Smarter Bureaucracy

Thankfully, Van Buren County’s AI Usage Policy shows a level of foresight rare in documents that begin with “WHEREAS.” It acknowledges both the promise and peril of AI.

Key features of the policy that address the transparency trap include:

  • Clear Ethical Standards (Section 3.1): including the requirement that AI-generated outputs must be “clearly labeled and explainable” in public-facing contexts.
  • Oversight Structures: The AI Steering Committee and AI Task Force ensure decisions aren’t made in a vacuum (or, worse, a vendor’s demo video).
  • Mandatory Training (Section 3.4): All staff must complete “AI Basics 101” before using AI, ensuring users know both the tools and the ethical minefield they may be walking into.
  • Prohibited Uses: Including the processing of sensitive data or using AI in a way that could violate anti-discrimination policies, because we’ve all seen what happens when algorithms learn the wrong lessons from our history.

This is not policy for policy’s sake. It’s a practical framework to help employees navigate a world where AI is part tool, part mystery, and part public relations hazard.

So, What Should We Do?

The study may pour cold water on the warm bath of AI enthusiasm, but it also points the way forward, particularly for counties like Van Buren, where both innovation and accountability are expected.

Here’s a refined roadmap:

  1. Acknowledge the Dilemma
    Don’t pretend the choice between transparency and trust is easy. It’s not. But being aware of the tradeoffs is better than stumbling into them.
  2. Normalize, Don’t Just Disclose
    Help the public understand why AI is being used, and when it isn’t. Make it part of a broader conversation about modernization, not a footnote buried beneath the contact number.
  3. Let Outcomes Speak
    Van Buren’s policy emphasizes measuring efficiency, savings, and satisfaction. Let those results guide the narrative. If the potholes are fixed faster or public forms are easier to understand, that’s the headline.
  4. Build Trust in the System, Not the Tool
    Ultimately, it’s not about the AI, it’s about the process around it. When people trust the system (and the people maintaining it), they’re less likely to panic when the system includes some automation.

The age of AI is here, and in Van Buren County, it’s being ushered in with policies, training, oversight, and, yes, a touch of wary optimism. The transparency trap is real, but it’s not inescapable. With clear policy, thoughtful communication, and a bit of old-fashioned human judgment, we can use these tools without losing the public’s trust, or our own footing in the process.

After all, in local government, the goal isn’t to replace people with AI. It’s to make sure the people still look good when the AI gets involved.

So, was AI used to help write this article? … uhm, maybe.
