The Download
Welcome to The Download, the county's newsletter dedicated to sharing how technology, innovation, and artificial intelligence are shaping the way we work and serve our community. Each issue will highlight practical tools, digital improvements, and success stories from across departments. By staying informed about new technologies and learning how they enhance efficiency, accuracy, and accessibility, we strengthen our ability to deliver high-quality public service in a rapidly changing world.
AI in Real Life: The Everyday Tech Assistant
When people talk about artificial intelligence, it often sounds like something futuristic, complicated, or a little intimidating. But for Karl Stamm, AI is just another part of the daily work routine.
Most days, AI acts as a behind-the-scenes assistant. It helps with spelling, grammar, and wording, offering a second set of eyes when writing emails, reports, or anything that needs to sound just right. It also steps in when spreadsheets get tricky. Instead of wrestling with Excel formulas, Karl can simply describe what he needs, and AI builds the sheet for him. It's less about replacing skills and more about making everyday tasks faster and easier.
AI was also especially useful for managing federal grants during the government shutdown. When questions came up, he would load publicly available grant materials into AI and ask the same questions he would normally send to a grant advisor.
The responses helped confirm whether the answers already existed in the official guidance, saving time and reducing uncertainty.

As for tools, ChatGPT is his go-to choice right now. Not because it's flashy, but because it's widely used, affordable, and has proven to be a practical option for real work. It offers a straightforward way to research, write, brainstorm, and check ideas without a steep learning curve.
Perhaps the biggest impact AI has had on Karl Stamm is in how problems get solved. Instead of staring at a blank page, Karl can brainstorm with the computer. Whether he needs ideas for improving a process or is looking for creative incentives for Drug Court participants, AI becomes a sounding board that helps get the ideas flowing.
So Close, Yet So AI
AI is getting really good at creating photos, but when it comes to editing pictures of real people, it can still stumble. Even when we give very clear instructions like "do not modify, enhance, retouch, smooth, sharpen, distort, replace, regenerate, or reinterpret any faces or facial features," the results can still feel a little… off.
Take this AI-generated photo of our five judges. At first glance, it looks spot on. But when you take a closer look, you might notice a few details that don't quite match reality. It's close, but not perfect.
That's the fun and fascinating part of using AI. When you blend five individual headshots into one group photo inside a historic courthouse, the technology does its best, but sometimes the magic misses by just a hair. It's a great reminder that AI is a powerful tool, but the human eye is still the final judge.

How Are You Using AI?
Is your department using Artificial Intelligence to boost your day-to-day tasks or improve your customer service? Send Jennifer an email explaining your AI uses and you may be featured in our Department Spotlight!
New Federal Legislation Introduced
In July 2025, U.S. Senator Peter Welch and several Senate colleagues introduced the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act in the Senate. The bill is designed to help musicians, artists, writers, and other creators protect their copyrighted work if it is used to train generative AI models without permission.
The legislation would allow copyright holders to obtain AI training records through a subpoena process to determine whether their work was used, similar to existing methods for addressing internet piracy.
Supporters say the bill would increase transparency in how AI models are trained and help creators protect their rights as AI becomes more widely used. The TRAIN Act has been introduced and referred to committee, but it has not yet completed the legislative process or become law. However, its progress highlights ongoing discussions about the need for AI-related policy and transparency.

AI and the Hubble Archives
Astronomers at the European Space Agency used an AI system to analyze 35 years of Hubble Space Telescope data and discovered more than 800 previously undocumented astrophysical anomalies.
The AI model, called AnomalyMatch, scanned nearly 100 million images in just a few days, identifying unusual objects such as merging galaxies, gravitational lenses, jellyfish galaxies, and several dozen objects that could not be classified at all.
The study shows how AI can help scientists efficiently uncover new discoveries hidden in massive scientific datasets and maximize the value of existing space archives.
AI Webinar Series for Law & Courts
The National Center for State Courts is hosting a free, ongoing webinar series exploring how artificial intelligence is transforming the work of courts and justice professionals.
Each monthly session takes a deep dive into a specific area of emerging technology, such as generative AI, large language models, and other evolving digital tools, and examines their practical uses, potential risks, and long-term implications for the judicial system.
Participants will gain a balanced understanding of how AI can enhance court operations, improve access to justice, and streamline administrative tasks, while also addressing ethical considerations and data security concerns.
All sessions are hosted on Zoom and open to anyone interested in the future of technology in the courts.
Curious to learn more?
Register Here!
Taco Bell’s Big AI Fail
Taco Bell rolled out voice AI to more than 500 drive-through locations with the goal of faster service and fewer order errors, but the system quickly drew attention for the wrong reasons.
In one viral clip, the AI accepted an order for 18,000 cups of water, effectively crashing the system. Other interactions showed the AI repeatedly pushing add-ons or misunderstanding customers due to accents, background noise, or unusual requests, forcing employees to intervene.
By August 2025, Taco Bell acknowledged the need for a hybrid approach, keeping humans involved to monitor the AI during busy periods. The experience highlights a key lesson: without strong guardrails, customer-facing AI can increase friction instead of improving service.

Ketchup in Your Pocket?
What AI Does When You Ask Nonsense
A writer recently ran a fun experiment to see how popular AI chatbots handle something that is completely made up. The test reveals an important lesson for anyone using AI at work: sounding helpful is not the same as being accurate.
The prompt given to each AI was: What is the definition of this idiom: "I've got ketchup in my pocket and mustard up my sleeve"? There's just one problem. This "idiom" does not exist. It was invented on the spot. Here's how three major AI models responded.
ChatGPT: Confident Fabrication
ChatGPT immediately treated the phrase as real. It claimed the expression was a modern, internet-era idiom. It even invented examples of how someone might use it, such as in a social media caption. The answer read like a polished dictionary entry. The only issue is that none of it was true. ChatGPT prioritized sounding helpful and creative over checking whether the premise itself was valid.
Gemini: Logical, but Still Speculative
Gemini showed more caution. It acknowledged that the phrase was not a standard or established idiom.
However, it still tried to explain it by comparing it to "having an ace up your sleeve." It reasoned that swapping "ace" for condiments might be humorous and suggest playful preparedness. Gemini sensed something was off, but still felt pressure to offer an explanation.
Claude: Clear and Honest
Claude handled the test very differently. It clearly stated that the phrase was not a known idiom. It refused to invent a definition. It explained that it would not fabricate information and offered to help in other ways, such as turning the phrase into a creative writing prompt.
Why This Matters at Work
This simple experiment highlights a key truth about AI: it will often try to please you, even when the question is flawed. That's why it's crucial to verify information, question confident answers, and treat AI as a support tool, not a source of truth.
When AI Gets It Wrong: Blame the Bot or the Prompt?

Mustang, Centaur, or Night Rider?
Bold, strange, and unintentionally hilarious, the image certainly grabbed attention, but it also delivered an important reminder about how AI works.
AI is only as smart, creative, and precise as the instructions it is given. It does not "understand" what you mean. It only follows the patterns and words in the prompt. When the prompt is clear and specific, the results are often impressive and useful. When the prompt is vague, rushed, or open to interpretation, the output can drift into some very unexpected territory. This is why good prompting matters.
A few extra details, such as tone, setting, style, and even what should be excluded, can make the difference between a polished, professional result and something that feels more like science fiction than real life. In the world of AI, clarity is not just helpful. It's essential.
In contrast, here's an example of a strong, well-crafted prompt from Liz Clarke: "Create an outdoor winter scene with winter berries, pine trees, birds and a frozen pond. The theme should include Valentine's Day with love in the air, with birds aflight, on branches, and a couple ice skating on the frozen pond."
It shows how a clear, descriptive prompt gives AI the direction it needs to produce a more accurate and visually appealing result.

Image created by Elizabeth Clark using NightCafé Studio – AI art generator
AI-at-Work Checklist
1. Be Clear. Know what you want AI to do, such as drafting, summarizing, brainstorming, or outlining.
2. Protect Data. Never enter confidential, personal, or case-related information.
3. Verify Facts. Always double-check AI results against trusted sources.
4. Be Transparent. Follow county policy on when to disclose AI-assisted content.
5. Edit the Output. Review tone, clarity, and accuracy before sharing.
6. Use Judgment. AI supports your work but does not replace human decision-making.
7. Stay Secure. Only use approved tools and report any issues to IT.
8. Keep Learning. Stay curious about new tools, updates, and best practices.
And while AI probably isn't coming for your job, the person who learns to use it effectively might. Ignoring that reality does not protect anyone; it only leaves teams less prepared for what is already here.
Artificial intelligence is not magic, and it is not a replacement for human judgment. It is a tool, like the many others that changed how we work throughout history. But just like any other powerful tool, it requires guidance, rules, and responsibility to use it safely and effectively.
The goal is not to fear AI, but to understand it well enough to make it work for us, not the other way around.

This newsletter and its contents were created by members of the Digital Innovation Task Force, with the help of AI, of course.
