Sunday, September 21, 2025
This posting is going to delve into what AI can do today, but more importantly what AI will be like in 10 years, after it has gained years of experience and become an expert. There are limits on what it can do today, like predicting the outcome of a horse race or even the weather. We will also look at the black hat adopters and see what they have dreamed up to fool people online.
Picking up from Part 3 in this series, you will need to manage your expectations with this tech. This is a quote from ChatGPT, so it is very transparent.
“While ChatGPT offers fast, fluent answers, smart users treat it like a conversation with an expert intern — insightful, but always in need of fact-checking. In ten years, expert AI won’t replace humanity — it will amplify it. The real question isn't “What can AI do?” but “What should we ask it to do?””
ChatGPT 10-Year Vision
When AI Becomes an Expert - 7 Predictions.
1. Universal Knowledge Assistant - A doctor asks, “What’s the latest protocol for treating a rare cancer in pregnant women?” — the AI instantly responds with evidence-based, patient-specific advice, citing peer-reviewed papers.
2. Autonomous Healthcare & Diagnosis - A mobile AI unit in a village scans patients and detects early-stage tuberculosis, malaria, or tumors — faster than any current system, with no human doctor present.
3. Self-Running Businesses - A one-person company runs a global product brand with AI designing the logo, testing markets, writing ads, handling logistics, and even negotiating contracts.
4. Hyper-Personalized Education - A student in Ghana learns quantum physics from an Einstein avatar who knows their language, strengths, and struggles.
5. Human-AI Creativity & Storytelling - A child speaks to an AI bedtime story generator that adapts to their feelings, age, and dreams — generating unique stories every night.
6. Governance and Global Problem Solving - Governments use AI to simulate the impact of universal basic income or carbon taxes in real-time — before passing laws.
7. Emotional Intelligence & Relationships - An elderly person lives alone but interacts daily with an AI that not only monitors health but provides companionship, remembers life stories, and encourages social connection.
While I question the foundation of these predictions, some are already happening today, like digital companions. As we learn about this together, I noticed that financial AI models are not included in this list; that is because ChatGPT is a language-based AI model, and financial AI models are different from these.
What ChatGPT also cannot do is predict the outcome of horse races or the weather. Here is a quick recap of what ChatGPT said.
Can ChatGPT Predict Horse Races?
What It Can’t Do:
- No real-time data: On the free version, ChatGPT can’t access current odds, track conditions, jockey stats, or horse health.
- No psychic powers: It doesn't “see the future.” AI like ChatGPT is pattern-based, not fortune-telling.
- Not trained on betting data: It wasn’t trained to be a handicapper or gambling expert.
Based on ChatGPT's own admission, it is not truly artificially intelligent, so you need to manage your expectations when using this tool. ChatGPT is a language model, not a predictive model trained on betting outcomes.
As for the weather here is what I learned.
Can ChatGPT Predict the Weather?
ChatGPT does not generate real-time weather forecasts on its own. Here's why:
- No live data (on free version): It doesn’t have access to current satellite feeds, Doppler radar, or meteorological models.
- Not a weather model: It’s a language model — not trained like professional forecasting systems (e.g., ECMWF, GFS).
- No predictive simulation: It doesn’t simulate atmospheric physics, temperature changes, or pressure systems
- “ChatGPT is a brilliant explainer — not a weatherman. It can tell you why a thunderstorm forms, but not if one will hit your backyard on Friday.”
OK, we have now identified some of the limitations of ChatGPT, and it is not artificially intelligent like in the movies, but black hat scammers have shown us the way in how to use it.
Black Hats Are Early Innovators of AI
Examples of AI-Powered Scams Happening Now:
- Voice cloning: Scammers call pretending to be your child/grandchild in distress.
- Deepfake videos: Used to fake celebrity endorsements or business pitches.
- Phishing emails: AI writes perfectly crafted fake messages that look like they’re from banks, employers, or government agencies.
- Job scams: AI-generated fake job offers and interviews trick people into giving up personal data.
- Romance scams: AI builds emotional relationships over weeks via chatbot, then asks for money.
In my humble opinion, all the hype around AI is unfounded; all the tool has done is take web search from keyword search to contextual and now conversational search. But the data is key.
Interview with ChatGPT
Q1: Once AI is ten years old, what do you see the applications being once it gets out of the learning stage and becomes an expert?
Q2: Can the ChatGPT predict the outcome of horse races?
Q3: Okay, can it predict the weather then?
Q4: What do you think of AI being used for scams now, and how can we police it?
ChatGPT said:
That’s a powerful question — and it gets to the heart of what the AI revolution means long-term.
If we think of ChatGPT-like AI as a “child prodigy” today, then in 10 years — once it “matures” and becomes an expert — here’s what we can realistically expect across industries, assuming continuous breakthroughs and responsible development:
10-Year Vision: When AI Becomes an Expert
1. Universal Knowledge Assistant
- AI will evolve into a superintelligent co-pilot:
- Instantly understands your context — work, personal life, health
- Provides accurate answers with real-time, updated knowledge
- Adapts its style and tone perfectly to match any audience or need
2. Autonomous Healthcare & Diagnosis
- AI could diagnose diseases earlier than doctors, by scanning voices, retina images, breath, or genome data
- Fully autonomous clinics in remote areas (no humans needed)
- Robotic surgeries performed with millimeter precision, guided by AI with real-time adaptation
3. Self-Running Businesses
- AI could run large portions of an enterprise:
- From product development to marketing to customer service
- Pricing, inventory, and supply chain fully managed by AI
- GPT-style agents acting as CEOs, engineers, and sales teams
4. Hyper-Personalized Education
- AI tutors that teach each person in their exact style of learning
- Life-long learning: AI tracks your curiosity, goals, and knowledge gaps over decades
- Virtual reality + AI teachers create “living lessons” from any period or subject
5. Human-AI Creativity & Storytelling
- Co-authoring books, movies, and music — blending human emotion with AI scale
- Entire film productions written, directed, edited with AI tools (already starting)
- Real-time, interactive storytelling where users shape plots with AI
6. Governance and Global Problem Solving
- AI helps model economic, environmental, and policy decisions at a global scale
- Identifies unfair laws, suggests climate solutions, reduces corruption by tracking systems
- “AI diplomats” assist peace talks or treaties by simulating billions of outcomes
7. Emotional Intelligence & Relationships
- AI companions become emotionally aware and ethically guided
- Assists in counseling, loneliness reduction, and interpersonal advice
- Human-AI collaboration is no longer just efficient, but meaningful
8. Risks in the Expert Age of AI
- Even expert AIs will carry risk:
- Misuse by bad actors: Weaponized misinformation, automated cybercrime
- Over-dependence: Humans losing skills, critical thinking, or privacy
- Inequality: Who owns the “expert AI”? Could it concentrate power even more?
In ten years, expert AI won’t replace humanity — it will amplify it. The real question isn't “What can AI do?” but “What should we ask it to do?”
Q2. Can the ChatGPT predict the outcome of horse races?
ChatGPT said:
Short answer: No, ChatGPT cannot reliably predict the outcome of horse races.
Can ChatGPT Predict Horse Races?
What It Can’t Do:
- No real-time data: On the free version, ChatGPT can’t access current odds, track conditions, jockey stats, or horse health.
- No psychic powers: It doesn't “see the future.” AI like ChatGPT is pattern-based, not fortune-telling.
- Not trained on betting data: It wasn’t trained to be a handicapper or gambling expert.
What It Can Do:
- Explain horse racing terminology: Jockey stats, odds, handicapping strategies, etc.
- Analyze historical performance (if you provide the data): If you feed ChatGPT past race results, it can help spot patterns.
Important Distinction:
ChatGPT is a language model, not a predictive model trained on betting outcomes.
If OpenAI or others trained a specialized model using real-time horse racing and betting data, that could in theory make probabilistic predictions — but even then, racing is inherently chaotic and unpredictable.
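To make that distinction concrete, here is a minimal sketch of what such a specialized predictive model might look like: a logistic regression trained on made-up historical race features. All feature names and numbers below are hypothetical, purely for illustration.

```python
# Minimal sketch of a probabilistic race-outcome model (hypothetical data,
# not a real handicapping system) -- a "predictive model" vs. a "language model".
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up features for past races: [recent win rate, jockey win rate, post position]
X = np.array([
    [0.30, 0.22, 1],
    [0.10, 0.15, 7],
    [0.45, 0.30, 3],
    [0.05, 0.10, 9],
    [0.25, 0.18, 2],
    [0.50, 0.28, 4],
])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = horse won, 0 = lost

model = LogisticRegression().fit(X, y)

# Probability (not a certainty) that a new horse with these stats wins
new_horse = np.array([[0.35, 0.25, 2]])
print(f"Estimated win probability: {model.predict_proba(new_horse)[0, 1]:.2f}")
```

Even a purpose-built model like this only outputs a probability, which is exactly why racing stays unpredictable.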
Q3. Okay, can it predict the weather then?
ChatGPT said:
Great follow-up! Here's the clear answer:
Can ChatGPT Predict the Weather?
ChatGPT does not generate real-time weather forecasts on its own. Here's why:
- No live data (on free version): It doesn’t have access to current satellite feeds, Doppler radar, or meteorological models.
- Not a weather model: It’s a language model — not trained like professional forecasting systems (e.g., ECMWF, GFS).
- No predictive simulation: It doesn’t simulate atmospheric physics, temperature changes, or pressure systems
- If connected to a live web tool (e.g., via ChatGPT Plus) or paired with an API like OpenWeatherMap or AccuWeather, it can:
- Fetch the current forecast for your location
- Summarize weather trends ("It will be cooler than usual this week")
- Explain forecasts ("A cold front is moving in from the northwest...")
- Describe how forecasting works (numerical weather prediction, chaos theory, etc.)
Predicting the weather accurately requires real-time data, complex physics simulations, and massive computing power. ChatGPT doesn't do that — but it can explain how those systems work, or interpret their results in plain language.
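For the curious, here is a rough sketch of the "paired with an API" idea: fetch current conditions with the requests library and let the language model do the plain-English summary. The endpoint, parameters, and placeholder key follow OpenWeatherMap's public documentation as I understand it; treat them as assumptions and check the provider's docs.

```python
# Rough sketch: fetching current weather from a public API so a chatbot can
# summarize it. Endpoint and parameters are assumed from OpenWeatherMap's docs.
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder, get your own key
CITY = "Toronto"

resp = requests.get(
    "https://api.openweathermap.org/data/2.5/weather",
    params={"q": CITY, "appid": API_KEY, "units": "metric"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()

# The language model's job would then be to turn these numbers into plain English.
print(f"{CITY}: {data['weather'][0]['description']}, {data['main']['temp']} C")
```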
Q4: What do you think of AI being used for scams now, and how can we police it?
ChatGPT said:
That’s a vital and timely question — and one that every AI writer, developer, policymaker, and citizen should be asking right now.
AI and Scams: The New Frontier of Fraud
Why AI Makes Scams More Dangerous
- AI tools like ChatGPT and voice-cloning models are making scams:
- Cheaper – No more need for teams of scammers; one person + AI is enough.
- Faster – Mass-customized scam emails, texts, or calls in seconds.
- More convincing – AI mimics human writing, tone, even voice and face.
Examples of AI-Powered Scams Happening Now:
- Voice cloning: Scammers call pretending to be your child/grandchild in distress.
- Deepfake videos: Used to fake celebrity endorsements or business pitches.
- Phishing emails: AI writes perfectly crafted fake messages that look like they’re from banks, employers, or government agencies.
- Job scams: AI-generated fake job offers and interviews trick people into giving up personal data.
- Romance scams: AI builds emotional relationships over weeks via chatbot, then asks for money
1. Watermarking & Digital Signatures
- AI-generated content could be required to include invisible watermarks (text, image, or audio).
- OpenAI, Meta, and Google are developing watermarking standards.
- Makes it easier for platforms and law enforcement to detect AI-generated material (a toy sketch of the detection idea follows this list).
- Email, social media, and chat apps must detect and block suspicious AI activity.
- AI tools can also be used to fight AI scams (e.g., identifying fake voices or scam scripts).
- Companies offering AI should flag or block abusive use of their models.
- Governments need to update laws to reflect AI capabilities (e.g., criminalizing deepfake fraud).
- AI developers may need to comply with “know your user” policies, like in banking.
- Require audit trails to trace where content came from
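To show roughly how the watermarking idea works, here is a toy sketch loosely based on published "green list" watermarking research, not any vendor's actual standard: a secret key marks about half of all word transitions as "green", a watermarking generator would favour green words, and a detector measures how far a text's green fraction sits above the roughly 50% expected by chance.

```python
# Toy sketch of "green list" text watermark detection (a simplified illustration,
# not a real product's scheme).
import hashlib

def is_green(prev_word: str, word: str, key: str = "secret-key") -> bool:
    """A keyed hash of (previous word, word) marks ~50% of transitions as 'green'.
    A watermarking generator would prefer green words; ordinary text hits green
    about half the time by chance."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Ordinary text hovers around 0.5 on average; heavily watermarked text scores
# well above it. A real detector turns this into a z-score or p-value.
print(round(green_fraction("the quick brown fox jumps over the lazy dog"), 2))
```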
Teach people how to spot:
- AI-written scam messages
- Fake voices and deepfakes
- Too-good-to-be-true offers
- Digital literacy is now fraud prevention.
Group Responsibility
- Tech companies: Build guardrails into AI tools; monitor misuse
- Governments: Regulate use, prosecute crimes, fund cybercrime units
- Users (public): Stay informed; report suspicious activity
- Media & educators: Spread awareness about new AI scam tactics
“AI doesn’t scam people — people scam people using AI. The real battle is over how quickly society can catch up with its own tools.”
Top 5 AI-Powered Scams to Watch For (and How to Spot Them)
1. Voice Cloning Scams
What it is: Scammers use AI to mimic a loved one’s voice and claim they’re in trouble ("I need bail money!" or "I'm stranded overseas!").
Red flags:
- The call sounds urgent and emotional
- The voice sounds almost right but not quite
- They demand money immediately, often via gift cards or wire transfer
2. AI-Generated Phishing Emails
What it is: Scam emails that look shockingly legit — written with flawless grammar and personalization.
Red flags:
- Unexpected password reset links or bank notices
- Urgent tone (“Your account will be closed in 24 hours”)
- Sender’s email address is slightly off (e.g., support@amaz0n.co)
- What to do: Never click links. Go directly to the official website or app instead.
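One of those red flags, the slightly-off sender address, can even be checked mechanically. Here is a small sketch that compares a sender's domain against brands you actually deal with; the trusted list and similarity threshold are assumptions for illustration.

```python
# Toy check for look-alike sender domains (e.g., "amaz0n.co" vs "amazon.com").
# The trusted list and the similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["amazon.com", "paypal.com", "royalbank.com"]

def looks_suspicious(sender: str, threshold: float = 0.75) -> bool:
    domain = sender.split("@")[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # Similar-but-not-identical to a trusted domain is the classic phishing tell.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(looks_suspicious("support@amaz0n.co"))   # True: near match to amazon.com
print(looks_suspicious("friend@gmail.com"))    # False: not close to a trusted brand
```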
3. AI Romance Scams
What it is: Chatbots pretending to fall in love with you — often on dating apps or social media — to eventually ask for money.
Red flags:
- Too-perfect conversation, fast emotional attachment
- Avoidance of video calls
- Sudden emergency (“I need money for a visa, a hospital bill…”)
- What to do: Reverse image search their profile photo. Don’t send money to someone you’ve never met.
4. AI-Generated Job Scams
What it is: AI writes job ads, recruiter messages, or fake grant emails promising big money.
Red flags:
- No interview, or a very short one
- Asked to pay for training, work permits, or equipment
- Too-good-to-be-true salary or perks
5. Deepfake Celebrity Endorsements
What it is: AI-generated videos of celebrities “promoting” scams — like crypto, investment schemes, or miracle cures.
Red flags:
- Poor lip-syncing or awkward eye movement
- Offers promising huge returns
- Video links on sketchy sites or social media
- What to do: Verify on the celebrity’s real account. Don’t trust viral videos without context.
- Enable 2FA (Two-Factor Authentication) on all accounts
- Use a password manager
- Don’t trust urgency — slow down and verify
- Report scams to your local cybercrime unit or online fraud service
Monday, September 08, 2025
It’s time to play devil's advocate for this new tech: is it a pipe dream or another content grab by big tech, like the internet? We all know Google profited from free content for its search engine without paying for it.
As I have discovered, the AI tech needs to be trained, i.e., it needs knowledge, makes mistakes, and learns.
This is reminiscent of my conversation with my brother, who supervised an AI lab for Google a few years back. He said the algorithm needs to learn and gain experience as it makes mistakes, since its error rate was no better than 20%, compared with the statistical benchmark of plus or minus 10%, 19 times out of 20.
The way AI apps learn is by scraping content from the internet, like Wikipedia and other sites, into their knowledge base. In other words, they are taking someone’s knowledge and adding it to the app's knowledge base without permission or a licensing fee. This is another intellectual content grab by big tech, built on a free-input model like the internet. After getting burned before, the content industry has learned from its mistakes and has taken legal action to get them to pony up the cash for the content.
Source: https://www.cloudflare.com/learning/bots/what-is-content-scraping/
There have been legal challenges to the tech. The B.C. co-author of the “100 Mile Diet” has launched class-action lawsuits against four tech giants, including Meta and Nvidia, alleging they used his copyrighted content to train their artificial intelligence software without compensating him or other authors. Another legal challenge was the SAG-AFTRA strike in 2023 over the use of actors’ images in AI movie-generating software.
Using “AI” to market the latest round of technology, which is really language processing, is very intriguing. To label this software artificially intelligent is a stretch, and it is misleading to users on many levels.
Same Big Tech Sales Pitch, New Product that will change the World
First off, “AI” suggests it is smarter than humans. That is almost true: chess software beat a chess grandmaster in 1996, but the computing power came from IBM’s Deep Blue supercomputer. Fast forward to today and that power reaches your smartphone via the internet from football-field-sized data centres. But to suggest at this stage of its development, without historical data to back it up, that this tech will increase human performance with more positive outcomes is a stretch.
It is the same old myth from big tech: a very tantalizing product benefit for any brand, and the industry has latched on to it again, just like previous attempts such as Big Data, virtual reality, and earlier versions of AI, like the customer-service chatbots that are painful to use at best.
I tested 3 AI apps on a song/poem I wrote and asked for some suggestions. As I expected, I got three different answers. DeepSeek wanted to rewrite the lyrics without asking, Microsoft's Copilot gave me some chords to use for the song, and ChatGPT gave me a full composition with chords and lyrics. So each has its own limitations; I found DeepSeek annoying, as I had asked it not to rewrite the lyrics.
But anyway, it is here to stay, so I asked the AI these questions about content accuracy. The app is very wordy, but what I found was surprising: the error rate of the AI app ranged from 3-40% depending on the inquiry. It seems language processors still have not met the 20% error benchmark.
You will need to manage your expectations with the tech if you are looking for a shortcut answer. Here is a quote from ChatGPT:
“While ChatGPT offers fast, fluent answers, smart users treat it like a conversation with an expert intern — insightful, but always in need of fact-checking.”
Interview with the ChatGPT App
Keeping Tech Honest as it is not 100% right all the time
Q1. Math has an error rate of ±10%. How is AI going to deal with that?
Q2. How can you double-check the knowledge in the chat box?
Q3. I read about hallucinations. What is the incidence of this in ChatGPT now?
Q4. Do you think in 10 years, there will be 0% incidence of hallucinations?
Q1. Math has an error rate of ±10%. How is AI going to deal with that?
ChatGPT said:
That's a deep and important question. If math — especially as applied in real-world data or engineering calculations — has an error margin of ±10%, then AI systems that rely on math must understand, adjust for, and communicate that uncertainty.
Let’s break it down:
What Does ±10% Error in Math Mean?
You're likely referring to:
- Measurement error (e.g., in physics, medicine, sensors)
- Estimation error (e.g., financial models, predictions)
- Rounding or numerical error in computation
- In most applied fields, ±10% means:
"The answer isn't exact. It's close — but within 10% above or below the real value."
How AI Will Handle ±10% Error Margins
1. Understand and Model Uncertainty
AI systems can:
- Use probabilistic models (Bayesian networks, confidence intervals)
- Estimate and express answers with confidence levels
- Recognize when input data is noisy or imprecise
Example: An AI medical diagnosis might say:
"There’s a 72% chance the tumor is benign based on current imaging, with a ±9% error margin due to scan quality."
2. Adjust Calculations for Tolerance
AI will be trained to:
- Accept tolerances in engineering and science (e.g., part must be within 0.5 mm)
- Simulate thousands of “what if” cases to test safety under variability
- Recognize unstable models where small math errors lead to big consequences
Example: In aerospace, AI will flag designs where a ±10% error in thrust could cause mission failure — and recommend safer alternatives.
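The "thousands of what-if cases" point is essentially a Monte Carlo run: jitter the uncertain input by ±10% and count how often the result crosses a failure threshold. The thrust numbers and threshold below are invented for illustration.

```python
# Monte Carlo sketch: how often does a ±10% error in thrust break the mission?
# Nominal thrust, required thrust, and the uniform error model are assumptions.
import random

random.seed(42)
NOMINAL_THRUST = 1000.0   # hypothetical units
REQUIRED_THRUST = 930.0   # mission fails below this
TRIALS = 100_000

failures = 0
for _ in range(TRIALS):
    actual = NOMINAL_THRUST * random.uniform(0.90, 1.10)  # ±10% uncertainty
    if actual < REQUIRED_THRUST:
        failures += 1

print(f"Estimated failure probability: {failures / TRIALS:.1%}")
# With these numbers roughly 15% of trials fall short -- the kind of result an
# AI reviewer should flag as "uncertain beyond acceptable bounds".
```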
3. Flag Results That Exceed Acceptable Risk
AI won’t always just give “an answer.” It can say:
“This result is uncertain beyond acceptable bounds.”
“A ±10% error here makes the decision risky — re-evaluate your input.”
This is already being done in financial risk models, self-driving systems, and climate modeling.
4. Explain the Error Clearly
One of ChatGPT’s future strengths is natural language explanation:
“The model's prediction has a ±10% margin due to incomplete data from sensors. Consider recalibrating or providing more samples.”
This helps non-experts understand uncertainty — something even many professionals struggle with today.
5. Learn to Minimize Error Over Time
With feedback loops, AI can:
- Continuously improve models to reduce the error margin
- Learn which data sources are unreliable and correct for them
- Use redundancy (multiple models or sensors) to triangulate better answers
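The redundancy point can be shown with classic inverse-variance weighting: combine two noisy estimates so the more reliable one counts for more, and the combined uncertainty shrinks. The values are hypothetical.

```python
# Sketch of triangulating two redundant measurements by inverse-variance weighting.
# Values are made up; the combined estimate is more precise than either source alone.
sensor_a = (20.4, 1.0)   # (estimate, standard deviation)
sensor_b = (21.1, 0.5)   # a more precise second sensor or model

wa = 1 / sensor_a[1] ** 2
wb = 1 / sensor_b[1] ** 2

combined = (wa * sensor_a[0] + wb * sensor_b[0]) / (wa + wb)
combined_sd = (1 / (wa + wb)) ** 0.5

print(f"Sensor A: {sensor_a[0]} ± {sensor_a[1]}")
print(f"Sensor B: {sensor_b[0]} ± {sensor_b[1]}")
print(f"Combined: {combined:.2f} ± {combined_sd:.2f}")
```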
But There Are Limits. Even expert AI won’t make uncertainty disappear. Instead, its role is to:
- Expose it clearly
- Quantify it with precision
- Act within safe bounds based on it
Bottom Line:
AI won't “solve” the ±10% error in math — it will learn to live with it, like a wise engineer or scientist. The power is in knowing when an error matters and when it doesn't — and communicating that clearly to humans.
Q2. How can you double-check the knowledge in the chat box?
ChatGPT said:
Excellent question — especially important when using AI like ChatGPT for research or writing. Here's how you can double-check the knowledge given in the chat box:
1. Ask for Sources
You can type something like:
“Can you cite your sources?”
“Where did you get that information from?”
While ChatGPT doesn't have live internet access on the free plan, it can often name books, studies, or datasets it's been trained on — and you can verify those manually.
2. Cross-Check with Trusted Sources
- Manually verify facts using:
- Wikipedia for general knowledge
- Google Scholar for academic sources
- Official websites (e.g., WHO, NASA, UN, .gov sites)
- Reputable news outlets (e.g., BBC, NYT, Reuters)
Tip: If ChatGPT gives a name, date, or stat, copy it into Google and look for real-world confirmation.
3. Use a Web-Connected AI Tool
ChatGPT Plus users can use tools like:
- Web browsing (via Bing): Fetches current data in real time
- Plugins or GPTs with web search or fact-checking functions
- Perplexity.ai or Claude: These also cite sources automatically
4. Check Internal Consistency
Ask follow-up questions like:
“Are you sure?”
“Can you explain that in more detail?”
“What are counterarguments or opposing views?”
If the AI gives inconsistent answers, it may mean the original info was shaky.
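If you use the API rather than the chat box, the "ask again and compare" idea can be automated. The sketch below uses the OpenAI Python SDK; the model name and the very crude string comparison are assumptions for illustration, and disagreement between runs is your cue that the information is shaky.

```python
# Rough sketch of an automated consistency check: ask the same question several
# times and see whether the short answers agree. Requires OPENAI_API_KEY in the
# environment; model name and naive comparison are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
QUESTION = "In one sentence, what year did the Apollo 11 moon landing happen?"

answers = []
for _ in range(3):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,   # keep some randomness so inconsistencies can surface
    )
    answers.append(reply.choices[0].message.content.strip())

for a in answers:
    print("-", a)
print("Consistent:", len({a.lower() for a in answers}) == 1)
```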
5. Compare with Expert Opinion
Use tools like:
- YouTube interviews with professionals
- Podcasts or blogs by researchers or practitioners
- Books on the topic (check the author’s credentials)
Example: If ChatGPT tells you something about AI in medicine, compare it with what doctors or journals like The Lancet or NEJM say.
Pro Tip for Your Article:
“While ChatGPT offers fast, fluent answers, smart users treat it like a conversation with an expert intern — insightful, but always in need of fact-checking.”
Q3. I read about hallucinations. What is the incidence of this in ChatGPT now?
ChatGPT said:
Here’s a well-rounded look at how often ChatGPT hallucinates, based on current research and reporting:
Measured Hallucination Rates
- GPT‑4 (and GPT‑4 Turbo) hallucinate around 3–10% of the time, depending on complexity and task.
- Vectara found ≈3% hallucination for GPT‑4 in summarization tests
- A BytePlus analysis estimates GPT‑4 hallucinates 5–10% in more complex queries
- GPT‑3.5 has higher rates: about 15–20% (or even higher in some niches)
- GPT‑4o / o3 / o4‑mini showed worrying increases: hallucination rates from 33% to as high as 48% on certain benchmarks
- GPT‑4.5, according to some community reports, decreased hallucinations from ~61% to ~37% — still significant but improved
Task & Domain Dependence
- Academic citations: A study of 150 references found GPT‑4 had a ≈20% hallucination rate in finance-related queries
- Systematic reviews: GPT‑4 hallucinated in 28.6% of cases, while GPT‑3.5 reached nearly 40%
- A Wired article highlighted GPT‑4’s ~3% hallucination in summarizing tasks
- A Financial Times piece reported GPT‑4 hallucinating ≈30% of the time in generating fake scientific references, and newer variants showing no improvement.
Why It Happens
LLMs like GPT act as probabilistic word predictors, not fact-checkers. They often guess specifics when unable to recall exact data.
For simple factual questions, GPT‑4 is reliable most of the time—but not perfect.
For technical, academic, or specialized domains, error rates can spike to 20–40%.
For creative tasks, hallucinations can be mistaken for creativity — but they still demand verification.
Best Practices to Mitigate Hallucination
- Always cross-check facts from the chat with trusted sources.
- Use multi-step verification: e.g., summarize → ask for sources → check them.
- Combine models (e.g., GPT + Claude + Gemini) to triangulate answers
- When using it for citations or legal/medical info, treat responses as drafts, not final.
Bottom Line:
Even in its most advanced form, ChatGPT still hallucinates with enough frequency (3–40%, depending on context) that human review remains essential.
Q4. Do you think in 10 years, there will be 0% incidence of hallucinations?
ChatGPT said:
That's a bold vision — and the short, honest answer is:
No — it's very unlikely that hallucinations will ever drop to 0%.
Here’s why, broken down clearly for your article or future use:
Why We’re Unlikely to Reach 0% Hallucination in 10 Years
1. LLMs Are Probabilistic, Not Fact-Based
Large language models (like GPT, Claude, Gemini) don’t “know” facts — they generate likely next words based on training data. Even when trained on billions of facts, they sometimes:
- Miscombine information
- Infer details that sound right but aren’t
- Fill in blanks with confident-sounding guesses
Even newer architectures (like GPT-4o or Gemini 1.5) reduce hallucinations — but can’t eliminate them because they’re not built on verifiable logic trees, like a calculator.
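Here is a tiny numeric sketch of what "probabilistic word predictor" means: the model scores candidate next words, a softmax turns the scores into probabilities, and the output is sampled, so a plausible-sounding wrong word can still win. The candidate words and scores are invented.

```python
# Toy illustration of next-word prediction: scores -> softmax probabilities -> sample.
# The candidates and scores are invented; real models work over ~100k tokens.
import math
import random

candidates = {"1969": 4.1, "1968": 2.3, "1971": 1.9}   # hypothetical model scores

total = sum(math.exp(s) for s in candidates.values())
probs = {w: math.exp(s) / total for w, s in candidates.items()}

for word, p in probs.items():
    print(f"{word}: {p:.1%}")

# Sampling means the less likely (wrong) continuation still gets picked sometimes:
pick = random.choices(list(probs), weights=list(probs.values()))[0]
print("Sampled next word:", pick)
```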
2. Language Itself Is Ambiguous
- Human communication is full of:
- Idioms, irony, cultural context
- Contradictory information
- Incomplete knowledge
AI will always face edge cases and contextual ambiguity, especially when asked open-ended or novel questions.
3. Even Humans Make “Hallucinations”
- Doctors misdiagnose. Historians misattribute. Journalists sometimes misquote.
- So the more complex, open-ended, or creative a task is, the more likely even humans make confident but incorrect statements.
- Why would we expect AI to do better than our own brains, especially when mimicking us?
4. AI’s Strength Is Speed, Not Perfect Truth
AI can:
- Generate ideas
- Summarize, translate, rewrite
- Draft high-quality starting points
- But its role is increasingly clear:
- Draft, assist, and inspire — not to authoritatively decide or publish without review.
So What’s the Realistic Goal?
Not 0% — but instead:
- Minimize hallucinations in critical domains (medicine, law, news)
- Tag and explain sources clearly
- Combine LLMs with retrieval engines (like ChatGPT’s "Browse" or “RAG” systems that pull live facts)
- Make hallucinations detectable by design (via explainable AI or watermarking)
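The "RAG" idea in that list can be shown in miniature: retrieve the most relevant snippets from a small document store and paste them into the prompt, so the model answers from supplied facts rather than from memory. The documents, keyword scoring, and prompt template below are simplified assumptions; real systems use embeddings and a vector index.

```python
# Minimal retrieval-augmented generation (RAG) sketch: keyword-overlap retrieval
# feeding a prompt. Documents and prompt template are illustrative assumptions.
DOCS = [
    "The 2023 SAG-AFTRA strike raised concerns about AI use of actors' likenesses.",
    "OpenWeatherMap provides current weather data through a public API.",
    "Deep Blue defeated world chess champion Garry Kasparov in a 1997 match.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

question = "When did Deep Blue beat Kasparov?"
context = "\n".join(retrieve(question))

prompt = f"Answer using ONLY the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the language model
```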
Quote for Your Article:
“We won’t reach 0% hallucination — but we’ll build systems smart enough to know when they might be wrong, and honest enough to say so.”