The ether is filled with stories of people foolish enough to trust various pieces of software, and the data stored on immense servers, in their daily activities.
When the weaknesses of AI combine with the wonders of information technology and Murphy's Law, life, limb, and liberty are at stake. We all remember (and shudder at) the Blue Screen of Death (yes, it has an acronym: BSOD).
Many are funny (at least if you are not the victim of the misplaced trust).
Consider this one:
"Artificial Intelligence Fools Hunters Into Shooting Ducks On The Wrong Day: Using a Google-AI search for duck hunting season dates landed some Idaho hunters in trouble, as the search gave them the wrong dates. An experienced Wyoming hunter said relying on AI for reliable information is a 'crapshoot' at best."
Rolled snake-eyes that time, for certain!
But that is far from the worst that can happen. Just as people have died following the directions of Google or Bing maps (driving full speed into a lake, for example), trusting AI in medical and legal matters can be terrifyingly painful and even deadly.
Some call these "AI hallucinations," and like LSD hallucinations, they have an impact on the real world. Consider the legal case of Mata v. Avianca. An attorney for the plaintiff relied on ChatGPT to conduct his legal research. A judge found that the brief contained internal citations and quotes that were nonexistent: the chatbot made them up, claiming they were available in major legal databases. Guess who lost the case?
It is not just New York attorneys, or attorneys in general, who wrongly put their faith in these very error-prone pieces of software.
Try them for yourself, on a subject you know well – or even one you are at least familiar with.
You will find them just as reliable as government agencies and bureaucrats. <grin>
And like government, they are getting worse. At least, so claims an article Forbes published a couple of months back: "On average, they found, the chatbots spread false claims when prompted with questions about controversial news topics 35% of the time — almost double the 18% rate of a year ago. The worst performer was Inflection, which, said the team, provided false claims to news prompts 57% of the time. The rate for Perplexity was 47% and for Meta and ChatGPT it was 40%. Claude was the most accurate, offering up false reports just 10% of the time."
(Of course, for all we know, the Forbes article was written by or with the “help” of an AI, and all those numbers might be made up out of whole cloth.)
We recently saw this when one of the TPOL staff was viewing some history videos online. The voiceover narration was actually pretty good: more accurate on certain points than most of what is written by amateur or even professional historians. But the illustrations were clearly generated by AI and did not portray the history accurately, even while appearing to be old paintings and photographs. (Nonsense and misspelled words, weird insignia and uniforms, and anachronistic details all abounded. For example, 14th-Century Hungarians did not have Gatling guns or wear WWI-style uniforms. Texian settlers fighting off Comanche raids did not have M-16s or AR-15s to defend themselves!)
A wise man once wrote, "Put not your trust in princes, in mortal man..." How much less can we trust what is produced by software and apps designed by humans and endorsed by governments?
And perhaps we should consider another possibility: what if the errors produced by AI are not a glitch but a feature of these systems? Lies are lies, and false data is false. But why do these things exist? Who benefits? And who suffers?
Again, like all tools, AI and information technology can be used for good or for evil purposes. They are neutral; it is the users who either wield them well or use them to deceive, control, and aggress against others.