Picking up from Part 3 in this series: you will need to manage your expectations with this tech. What follows is a quote from ChatGPT itself, so it is very transparent.
When AI Becomes an Expert - 7 Predictions.
What ChatGPT also cannot do is predict the outcome of horse races or the weather. Here is a quick recap of what ChatGPT said.
- No real-time data: On the free version, ChatGPT can’t access current odds, track conditions, jockey stats, or horse health.
- No psychic powers: It doesn't “see the future.” AI like ChatGPT is pattern-based, not fortune-telling.
- Not trained on betting data: It wasn’t trained to be a handicapper or gambling expert.
As for the weather, here is what I learned.
- No live data (on free version): It doesn’t have access to current satellite feeds, Doppler radar, or meteorological models.
- Not a weather model: It’s a language model — not trained like professional forecasting systems (e.g., ECMWF, GFS).
- No predictive simulation: It doesn’t simulate atmospheric physics, temperature changes, or pressure systems
- “ChatGPT is a brilliant explainer — not a weatherman. It can tell you why a thunderstorm forms, but not if one will hit your backyard on Friday.”
Black Hats Are Early Innovators of AI
- Voice cloning: Scammers call pretending to be your child/grandchild in distress.
- Deepfake videos: Used to fake celebrity endorsements or business pitches.
- Phishing emails: AI writes perfectly crafted fake messages that look like they’re from banks, employers, or government agencies.
- Job scams: AI-generated fake job offers and interviews trick people into giving up personal data.
- Romance scams: AI builds emotional relationships over weeks via chatbot, then asks for money.
In my humble opinion, all the hype around AI is unfounded; all the tool has done is take web search from keyword search, to contextual, and now to conversational. But the data is key.
Interview with ChatGPT
Q1: Once AI is ten years old, what do you see the applications being once it gets out of the learning stage and becomes an expert?
Q2: Can the ChatGPT predict the outcome of horse races?
Q3: Okay, can it predict the weather then?
Q4: What do you think of AI being used for scams now, and how can we police it?
ChatGPT said:
That’s a powerful question — and it gets to the heart of what the AI revolution means long-term.
If we think of ChatGPT-like AI as a “child prodigy” today, then in 10 years — once it “matures” and becomes an expert — here’s what we can realistically expect across industries, assuming continuous breakthroughs and responsible development:
10-Year Vision: When AI Becomes an Expert
1. Universal Knowledge Assistant
- AI will evolve into a superintelligent co-pilot:
- Instantly understands your context — work, personal life, health
- Provides accurate answers with real-time, updated knowledge
- Adapts its style and tone perfectly to match any audience or need
2. Autonomous Healthcare & Diagnosis
- AI could diagnose diseases earlier than doctors, by scanning voices, retina images, breath, or genome data
- Fully autonomous clinics in remote areas (no humans needed)
- Robotic surgeries performed with millimeter precision, guided by AI with real-time adaptation
3. Self-Running Businesses
- AI could run large portions of an enterprise:
- From product development to marketing to customer service
- Pricing, inventory, and supply chain fully managed by AI
- GPT-style agents acting as CEOs, engineers, and sales teams
4. Hyper-Personalized Education
- AI tutors that teach each person in their exact style of learning
- Life-long learning: AI tracks your curiosity, goals, and knowledge gaps over decades
- Virtual reality + AI teachers create “living lessons” from any period or subject
5. Human-AI Creativity & Storytelling
- Co-authoring books, movies, and music — blending human emotion with AI scale
- Entire film productions written, directed, edited with AI tools (already starting)
- Real-time, interactive storytelling where users shape plots with AI
6. Governance and Global Problem Solving
- AI helps model economic, environmental, and policy decisions at a global scale
- Identifies unfair laws, suggests climate solutions, reduces corruption by tracking systems
- “AI diplomats” assist peace talks or treaties by simulating billions of outcomes
7. Emotional Intelligence & Relationships
- AI companions become emotionally aware and ethically guided
- Assists in counseling, loneliness reduction, and interpersonal advice
- Human-AI collaboration is no longer just efficient, but meaningful
8. Risks in the Expert Age of AI
- Even expert AIs will carry risk:
- Misuse by bad actors: Weaponized misinformation, automated cybercrime
- Over-dependence: Humans losing skills, critical thinking, or privacy
- Inequality: Who owns the “expert AI”? Could it concentrate power even more?
In ten years, expert AI won’t replace humanity — it will amplify it. The real question isn't “What can AI do?” but “What should we ask it to do?”
Q2. Can the ChatGPT predict the outcome of horse races?
ChatGPT said:
Short answer: No, ChatGPT cannot reliably predict the outcome of horse races.
Can ChatGPT Predict Horse Races?
What It Can’t Do:
- No real-time data: On the free version, ChatGPT can’t access current odds, track conditions, jockey stats, or horse health.
- No psychic powers: It doesn't “see the future.” AI like ChatGPT is pattern-based, not fortune-telling.
- Not trained on betting data: It wasn’t trained to be a handicapper or gambling expert.
What It Can Do:
- Explain horse racing terminology: Jockey stats, odds, handicapping strategies, etc.
- Analyze historical performance (if you provide the data): If you feed ChatGPT past race results, it can help spot patterns.
Important Distinction:
ChatGPT is a language model, not a predictive model trained on betting outcomes.
If OpenAI or others trained a specialized model using real-time horse racing and betting data, that could in theory make probabilistic predictions — but even then, racing is inherently chaotic and unpredictable.
Q3. Okay, can it predict the weather then?
ChatGPT said:
Great follow-up! Here's the clear answer:
Can ChatGPT Predict the Weather?
ChatGPT does not generate real-time weather forecasts on its own. Here's why:
- No live data (on free version): It doesn’t have access to current satellite feeds, Doppler radar, or meteorological models.
- Not a weather model: It’s a language model — not trained like professional forecasting systems (e.g., ECMWF, GFS).
- No predictive simulation: It doesn’t simulate atmospheric physics, temperature changes, or pressure systems
- If connected to a live web tool (e.g., via ChatGPT Plus) or paired with an API like OpenWeatherMap or AccuWeather, it can:
- Fetch the current forecast for your location
- Summarize weather trends ("It will be cooler than usual this week")
- Explain forecasts ("A cold front is moving in from the northwest...")
- Describe how forecasting works (numerical weather prediction, chaos theory, etc.)
Predicting the weather accurately requires real-time data, complex physics simulations, and massive computing power. ChatGPT doesn't do that — but it can explain how those systems work, or interpret their results in plain language.
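To make that concrete, here is a minimal sketch (my own illustration, not something ChatGPT produced) of how a chatbot workflow could be paired with a live weather source. It calls OpenWeatherMap's public current-weather endpoint; the API key and city are placeholders you would supply yourself.

```python
# Minimal sketch: pairing a language-model workflow with a live weather API.
# Assumes an OpenWeatherMap API key (placeholder below); the endpoint shown is
# OpenWeatherMap's public "current weather" API.
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder, not a real key
CITY = "Toronto"

def current_weather(city: str) -> str:
    """Fetch current conditions for a city and return a plain-English summary."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": API_KEY, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    description = data["weather"][0]["description"]
    temperature = data["main"]["temp"]
    return f"{city}: {description}, {temperature:.0f} °C"

if __name__ == "__main__":
    # The summary string is the kind of real-time input ChatGPT could then
    # explain or expand on, but cannot fetch by itself on the free version.
    print(current_weather(CITY))
```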
Q4: What do you think of AI being used for scams now, and how can we police it?
ChatGPT said:
That’s a vital and timely question — and one that every AI writer, developer, policymaker, and citizen should be asking right now.
AI and Scams: The New Frontier of Fraud
Why AI Makes Scams More Dangerous
- AI tools like ChatGPT and voice-cloning models are making scams:
- Cheaper – No more need for teams of scammers; one person + AI is enough.
- Faster – Mass-customized scam emails, texts, or calls in seconds.
- More convincing – AI mimics human writing, tone, even voice and face.
- Voice cloning: Scammers call pretending to be your child/grandchild in distress.
- Deepfake videos: Used to fake celebrity endorsements or business pitches.
- Phishing emails: AI writes perfectly crafted fake messages that look like they’re from banks, employers, or government agencies.
- Job scams: AI-generated fake job offers and interviews trick people into giving up personal data.
- Romance scams: AI builds emotional relationships over weeks via chatbot, then asks for money
1. Watermarking & Digital Signatures
- AI-generated content could be required to include invisible watermarks (text, image, or audio).
- OpenAI, Meta, and Google are developing watermarking standards.
- Makes it easier for platforms and law enforcement to detect AI-generated material.
- Email, social media, and chat apps must detect and block suspicious AI activity.
- AI tools can also be used to fight AI scams (e.g., identifying fake voices or scam scripts).
- Companies offering AI should flag or block abusive use of their models.
- Governments need to update laws to reflect AI capabilities (e.g., criminalizing deepfake fraud).
- AI developers may need to comply with “know your user” policies, like in banking.
- Require audit trails to trace where content came from
Teach people how to spot:
- AI-written scam messages
- Fake voices and deepfakes
- Too-good-to-be-true offers
- Digital literacy is now fraud prevention.
Group Responsibility
- Tech companies: Build guardrails into AI tools; monitor misuse
- Governments: Regulate use, prosecute crimes, fund cybercrime units
- Users (public): Stay informed; report suspicious activity
- Media & educators: Spread awareness about new AI scam tactics
“AI doesn’t scam people — people scam people using AI. The real battle is over how quickly society can catch up with its own tools.”
Top 5 AI-Powered Scams to Watch For (and How to Spot Them)
1. Voice Cloning Scams
What it is: Scammers use AI to mimic a loved one’s voice and claim they’re in trouble ("I need bail money!" or "I'm stranded overseas!").
Red flags:
- The call sounds urgent and emotional
- The voice sounds almost right but not quite
- They demand money immediately, often via gift cards or wire transfer
2. AI-Generated Phishing Emails
What it is: Scam emails that look shockingly legit — written with flawless grammar and personalization.
Red flags:
- Unexpected password reset links or bank notices
- Urgent tone (“Your account will be closed in 24 hours”)
- Sender’s email address is slightly off (e.g., support@amaz0n.co)
- What to do: Never click links. Go directly to the official website or app instead.
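For illustration, here is a minimal sketch (a hypothetical helper, not a real mail-filter API) of the kind of look-alike-domain check that catches addresses such as support@amaz0n.co; the trusted-domain list and character swaps are assumptions made for the example.

```python
# Minimal sketch: flag sender addresses whose domain mimics a trusted brand after
# common character swaps such as 0-for-o or 1-for-l (e.g. support@amaz0n.co).
TRUSTED_DOMAINS = {"amazon.com", "amazon.ca", "paypal.com"}  # example allowlist

SWAPS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def looks_suspicious(sender: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False                      # exact match with a trusted domain
    normalized = domain.translate(SWAPS)  # undo common look-alike substitutions
    brand = normalized.split(".")[0]      # compare the brand part only
    trusted_brands = {d.split(".")[0] for d in TRUSTED_DOMAINS}
    return brand in trusted_brands        # brand matches but the domain does not

print(looks_suspicious("support@amaz0n.co"))  # True  -> likely a spoof
print(looks_suspicious("help@amazon.ca"))     # False -> matches a trusted domain
```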
3. AI Romance Scams
What it is: Chatbots pretending to fall in love with you — often on dating apps or social media — to eventually ask for money.
Red flags:
- Too-perfect conversation, fast emotional attachment
- Avoidance of video calls
- Sudden emergency (“I need money for a visa, a hospital bill…”)
- What to do: Reverse image search their profile photo. Don’t send money to someone you’ve never met.
4. AI Job and Grant Scams
What it is: AI writes job ads, recruiter messages, or fake grant emails promising big money.
Red flags:
- No interview, or a very short one
- Asked to pay for training, work permits, or equipment
- Too-good-to-be-true salary or perks
5. Deepfake Celebrity Endorsements
What it is: AI-generated videos of celebrities “promoting” scams — like crypto, investment schemes, or miracle cures.
Red flags:
- Poor lip-syncing or awkward eye movement
- Offers promising huge returns
- Video links on sketchy sites or social media
- What to do: Verify on the celebrity’s real account. Don’t trust viral videos without context.
How to protect yourself:
- Enable 2FA (Two-Factor Authentication) on all accounts
- Use a password manager
- Don’t trust urgency — slow down and verify
- Report scams to your local cybercrime unit or online fraud service
This is reminiscent of my conversation with my brother, who supervised an AI lab for Google a few years back. He said the algorithm needs to learn and gain experience as it makes mistakes, because its error rate was no better than 20%, compared with the statistical-model benchmark of plus or minus 10%, 19 times out of 20.
Source: https://www.cloudflare.com/learning/bots/what-is-content-scraping/
There have been legal challenges to the tech. The B.C. co-author of the “100 Mile Diet” has launched class-action lawsuits against four tech giants, including Meta and Nvidia, alleging they have used his copyrighted content to train their artificial intelligence software without compensating him or other authors. Another legal challenge was the SAG-AFTRA strike in 2023 over the use of actors’ images in AI movie-generating software.
Same Big Tech Sales Pitch, New Product That Will Change the World
It is the same old myth from big tech, as it is a very tantalizing product benefit for any brand, and the industry has latched onto it again, just like previous attempts such as Big Data, virtual reality, and earlier versions of AI, like the customer-service chatbots that are aggravating to use at best.
But anyway, it is here to stay, so I asked the AI these questions to check content accuracy. The app is very wordy, but what I found was surprising: the error rate of the AI app ranged from 3% to 40% depending on the inquiry. It seems language processors still have not met the 20% error benchmark.
You will need to manage your expectations with the tech if you are looking for a shortcut answer. Here is a quote from ChatGPT:
“While ChatGPT offers fast, fluent answers, smart users treat it like a conversation with an expert intern — insightful, but always in need of fact-checking.”
- Measurement error (e.g., in physics, medicine, sensors)
- Estimation error (e.g., financial models, predictions)
- Rounding or numerical error in computation
- In most applied fields, ±10% means:
- Use probabilistic models (Bayesian networks, confidence intervals)
- Estimate and express answers with confidence levels
- Recognize when input data is noisy or imprecise
- Accept tolerances in engineering and science (e.g., part must be within 0.5 mm)
- Simulate thousands of “what if” cases to test safety under variability
- Recognize unstable models where small math errors lead to big consequences
“The model's prediction has a ±10% margin due to incomplete data from sensors. Consider recalibrating or providing more samples.”
- Continuously improve models to reduce the error margin
- Learn which data sources are unreliable and correct for them
- Use redundancy (multiple models or sensors) to triangulate better answers
- Expose it clearly
- Quantify it with precision
- Act within safe bounds based on it
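To put numbers on the “plus or minus 10%, 19 times out of 20” benchmark mentioned earlier, here is a minimal sketch of the standard 95% margin-of-error calculation for an observed proportion; the sample figures are illustrative only.

```python
# Minimal sketch of the "plus or minus X%, 19 times out of 20" idea: the standard
# 95% margin of error for an observed proportion. Sample numbers are illustrative.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p observed in a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: a model answers 80% of a 400-question test set correctly.
p_hat, n = 0.80, 400
moe = margin_of_error(p_hat, n)
print(f"Accuracy {p_hat:.0%} +/- {moe:.1%}, 19 times out of 20")
# -> roughly 80% +/- 3.9%; hitting +/-10% at p = 0.5 needs a sample of about 97.
```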
“Can you cite your sources?”
- Manually verify facts using:
- Wikipedia for general knowledge
- Google Scholar for academic sources
- Official websites (e.g., WHO, NASA, UN, .gov sites)
- Reputable news outlets (e.g., BBC, NYT, Reuters)
ChatGPT Plus users can use tools like:
- Web browsing (via Bing): Fetches current data in real time
- Plugins or GPTs with web search or fact-checking functions
- Perplexity.ai or Claude: These also cite sources automatically
“Are you sure?”
- YouTube interviews with professionals
- Podcasts or blogs by researchers or practitioners
- Books on the topic (check the author’s credentials)
- GPT‑4 (and GPT‑4 Turbo) hallucinate around 3–10% of the time, depending on complexity and task.
- Vectara found ≈3% hallucination for GPT‑4 in summarization tests
- A BytePlus analysis estimates GPT‑4 hallucinates 5–10% in more complex queries
- GPT‑3.5 has higher rates: about 15–20% (or even higher in some niches)
- GPT‑4o / o3 / o4‑mini showed worrying increases: hallucination rates from 33% to as high as 48% on certain benchmarks
- Academic citations: A study of 150 references found GPT‑4 had a ≈20% hallucination rate in finance-related queries
- Systematic reviews: GPT‑4 hallucinated in 28.6% of cases, while GPT‑3.5 reached nearly 40%
- A Wired article highlighted GPT‑4’s ~3% hallucination in summarizing tasks
- A Financial Times piece reported GPT‑4 hallucinating ≈30% of the time in generating fake scientific references, and newer variants showing no improvement.
- Always cross-check facts from the chat with trusted sources.
- Use multi-step verification: e.g., summarize → ask for sources → check them.
- Combine models (e.g., GPT + Claude + Gemini) to triangulate answers
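As a rough illustration of that triangulation idea, here is a minimal sketch that poses the same factual question to two models through OpenAI's chat-completions REST endpoint and prints the answers side by side; the API key and model names are placeholders, and Claude or Gemini would need their own endpoints wired in the same way.

```python
# Minimal sketch: "triangulate" an answer by asking more than one model the same
# factual question and comparing the replies before trusting either of them.
import os
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ.get("OPENAI_API_KEY", "YOUR_KEY_HERE")  # placeholder

def ask(model: str, question: str) -> str:
    """Send one user question to the chat-completions endpoint and return the text."""
    resp = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": question}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

question = "In what year was the Online News Act passed in Canada? Answer in one sentence."
for model in ("gpt-4o-mini", "gpt-3.5-turbo"):  # placeholder model names
    print(f"{model}: {ask(model, question)}")
# If the answers disagree, that is the cue to check a primary source before publishing.
```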
ChatGPT said:
- Human communication is full of:
- Idioms, irony, cultural context
- Contradictory information
- Incomplete knowledge
- Doctors misdiagnose. Historians misattribute. Journalists sometimes misquote.
- So the more complex, open-ended, or creative a task is, the more likely even humans make confident but incorrect statements.
- Why would we expect AI to do better than our own brains, especially when mimicking us?
- Generate ideas
- Summarize, translate, rewrite
- Draft high-quality starting points
- But its role is increasingly clear:
- Draft, assist, and inspire — not to authoritatively decide or publish without review.
But this generation is based on language models, not numbers like earlier models, so it was a big learning curve for me to wrap my head around this. My first impression is that it is the next generation of web search hyped as AI, but now in a conversational tone, sometimes using old data. I see the AI moniker now on my Google searches.
This image was created by ChatGPT and is spot on for correctness
Here is a quote generated by ChatGPT and its rationale.
- AI Is Doing Real Work, Right Now
- Doctors use AI for cancer detection (e.g., Google’s DeepMind)
- Writers use GPT to draft, edit, translate content
- Coders use AI copilots to build real apps, saving hours daily
AI apps are everywhere now
We are entering the early adoption phase of this tech and I have been told it is powerful, intuitive and scary at the same time, but also has hallucinations (error rate) that we will dig deeper into later. My instincts tell me that this latest hype is driven by stock market plays just like fake meat and we know how that fizzled out. But we will soon know.
I had a strange thought pop into my head based on the movie Galaxy Quest, where a TV series was thought of as reality by an alien race: can AI make the same mistake?
Q1. I’m doing an article on the ChatGPT engine and how it would impact the world. Give me some ideas.
- Brief history: Start with GPT-1 to GPT-4 (or GPT-4.5/o4) and beyond.
- Highlight leaps in capability, especially in reasoning, coding, image understanding, and natural conversation.
- How it works (simply): Touch on transformer models, training on massive datasets, and reinforcement learning from human feedback (RLHF).
- Education: Personalized tutoring at scale (e.g., homework help, essay feedback, language learning).
- Access to expert-level instruction for remote or underserved communities.
- Workplace Transformation: Automating routine tasks (e.g., writing emails, reports, customer support).
- Enhancing productivity for creatives, marketers, programmers, etc.
- Job Displacement vs Creation: Explore which jobs might be replaced or redefined (copywriting, data entry).
- New roles in prompt engineering, AI oversight, ethics, and AI-enhanced creativity.
- AI as an economic multiplier: Helping small businesses scale and compete.
- Bridging language barriers: Multilingual translation and communication.
- Leveling the playing field: Low-cost access to expert knowledge and tools.
- Bias and misinformation: Risks from hallucinations or biased training data.
- Privacy concerns: Handling user data responsibly.
- Dependency and critical thinking: Will people outsource thinking to AI?
- Anyone can write a novel, code an app, or start a business with AI support.
- Enables people without formal training to create high-quality content and ideas.
- AGI (Artificial General Intelligence): How close is GPT-style tech to AGI?
- AI governance: Calls for regulation, transparency, and open-source alternatives.
- Human-AI collaboration: Rather than replacement, AIs as co-pilots.
- “The ChatGPT Engine: Humanity’s Smartest Tool Yet?”
- “ChatGPT and the AI Revolution: Promise, Peril, and Potential”
- “Talking to the Future: How ChatGPT is Changing the World”
- Email drafting & summarizing – e.g., Microsoft Copilot in Outlook and Word
- Meeting transcription & summarization – e.g., Zoom AI Companion, Otter.ai
- Data analysis with natural language – ChatGPT with code interpreter (aka Advanced Data Analysis)
- Slide and report generation – Generating PowerPoints, proposals, or research summaries
- Internal knowledge base search – AI assistants answering employee queries from company documents
- Automated chat agents – ChatGPT used in websites, banks, and e-commerce for 24/7 support
- Help desk ticket triage – Classifying and routing support requests
- FAQ generation – AI writes helpful articles based on common user questions
- Blog post & article generation – Used by journalists, bloggers, marketers
- Marketing copywriting – Ads, landing pages, social media captions
- Scriptwriting – For YouTube, TikTok, and podcasts
- SEO optimization – Keyword-rich content suggestions
- Homework help and explanation – Math steps, essay feedback, concept clarifications
- Language learning – Practicing conversations, translations, vocabulary drills
- Lesson planning – Teachers using ChatGPT to generate quizzes, worksheets, slides
- Code generation – Writing Python, JavaScript, HTML, etc.
- Debugging assistant – Explaining errors and fixing bugs
- Code documentation – Writing comments and README files
- Learning new frameworks/languages – Asking ChatGPT to explain code concepts
- AI-generated images – Using prompts with tools like DALL·E and Midjourney
- Story and character development – Writers using GPT to build plots or lore
- Game design – Writing quests, rules, or dialogue scripts
- Music lyric and melody assistance – Collaborating on song lyrics or ideas
- Medical Q&A and triage tools – Helping patients understand symptoms
- Clinical documentation – Drafting medical notes and summaries
- Mental health support – Companion chatbots for basic emotional support (not a replacement for professionals)
- Product description generation – At scale, across thousands of items
- Customer recommendation engines – Personalized suggestions via chat
- Review summarization – Summarizing product reviews for easier browsing
- Policy drafting support – Helping draft public policy or analyze legislation
- Legal research – Summarizing case law, finding precedents
- Plain-language rewriting – Making legal or bureaucratic documents easier to read
- Trip planning – Creating itineraries, booking suggestions
- Meal planning and recipes – Based on ingredients or diet
- Budgeting help – Analyzing spending patterns or suggesting savings tips
- Relationship or communication advice – Drafting tough messages or apologies
- AI is already being integrated into robotic surgical systems like:
- Da Vinci Surgical System – Assists with minimally invasive surgeries (controlled by humans).
- CMR Surgical’s Versius – Uses AI to guide tool movements and improve precision.
- Smart Tissue Autonomous Robot (STAR) – Demonstrated successful autonomous soft-tissue surgery (e.g., stitching intestines) in research settings.
- GPT-like models help summarize patient records or diagnostic data
- AI assists in predicting complications and proposing personalized plans
- Real-time guidance and decision support
- NLP systems interpret surgeon commands or medical records during operations
- Computer vision AIs track tissue types, blood vessels, and instrument positions
- Autonomous micro-tasks
- AI performs suturing, cutting, or laser procedures with minimal human input
- Real-time adjustments based on camera feedback (like self-driving cars but for tissue)
- Post-op reporting and documentation
- AI drafts surgical notes, summaries, and recommends follow-up care steps
- GPT-like tools summarize long surgery logs or sensor data for reporting
- Precision and consistency beyond human capability in fine motor tasks
- Reduced human fatigue in long or complex procedures
- Access to surgery in remote or understaffed regions (via tele-surgery and automation)
- Accountability: Who is responsible for errors — AI designer, hospital, or surgeon?
- Bias: AI systems trained on non-diverse data might perform worse on some patient groups
- Trust: Patients may be uneasy about a robot making life-or-death decisions
- Regulation lag: Technology is advancing faster than medical laws and standards
- Imagine a surgical AI assistant powered by ChatGPT that:
- Understands natural language ("increase pressure on the clamp" or "highlight vessel paths")
- Talks with the surgical team, explaining AI decisions in real-time
- Interfaces with diagnostics and medical records live during surgery
Here’s a thoughtful, layered response you can use in your article (or frame as a debate section):
Is AI Just Another Tech Fad? (A Skeptic's Take)
- The Case for “Yes”: It's the New Big Data Buzzword
- Everyone’s Jumping on It
- Just like “cloud”, “mobile-first”, or “big data”, AI is slapped onto everything — from toothbrushes to marketing emails.
- The Metaverse
- Blockchain for everything
- IoT in toasters
- Big data dashboards that no one used
- Doctors use AI for cancer detection (e.g., Google’s DeepMind)
- Writers use GPT to draft, edit, translate content
- Coders use AI copilots to build real apps, saving hours daily
- Write a novel
- Build an app
- Learn quantum physics in plain English
- Microsoft Office (Copilot)
- Adobe Creative Cloud
- Google Workspace
- AI is not a fad — but its hype is faddish.
- Like the internet in the '90s, most of what’s said about AI today is inflated — but that doesn’t mean it’s fake.
- AI may outlast the hype because it changes how humans interact with knowledge, creativity, and work.
ChatGPT is spawning millions of users
Jarvis is the science-fiction version of ChatGPT used in the Iron Man movies
The conversations with ChatGPT felt real
Voice commands on ChatGPT worked flawlessly
An Interview with ChatGPT about AI - Fact or Fiction
- Readers are already familiar with how ChatGPT works — it’s interactive.
- A Q&A captures the natural rhythm of curiosity → answer → follow-up.
- It lets you showcase ChatGPT’s capabilities, style, and tone directly.
- It’s not just an article about AI — it becomes an article with AI.
- The format makes it easy to show when ChatGPT:
- Offers detailed, accurate insights
- Cautions about hallucinations or uncertainty
- Gives balanced perspectives (e.g. “here’s the hype, and here’s the reality”)
- That kind of transparency builds reader trust.
- You ask: “Is AI a fad?”
- ChatGPT responds with nuance.
- Then you follow up: “What about radio in 10 years?”
- This format keeps readers engaged like a conversation, not a lecture.
- You can group it into sections like:
- The Big Picture
- “Where do you see AI going in 10 years?”
- Media Futures - “What happens to radio, magazines, newspapers?”
- Ethics & Risks - “What about scams?”, “Will hallucinations ever go away?”
- Behind the Curtain - “Where do you get your data?”, “Can you be wrong?”, “What’s your hallucination rate?”
- You get depth without needing transitions or lengthy intros.
- Framing your article as a real Q&A:
- Makes it personal (“I asked ChatGPT... here’s what it told me”)
- Gives you space to add your voice (“I challenged it with a skeptical question”)
- Makes the reader feel like they're part of the chat too
So the final test was to see the images it can create. I had the AI Hallucination graphic created, and here is what it looked like. Gotta admit it is good, but it is a very tired, boring image to use, and this look has been used many times, so creative originality is limited, if it exists at all. Stay tuned for the image I use for the actual piece.
AI hallucinations are when ChatGPT gets it wrong, which can happen as much as 30% of the time, so you cannot trust the results 100% yet
OMG, what is happening in the world today is utter chaos. Leaders of the world are astonished at the actions of one man in his ambition to make America great again. That’s the key phrase: “Great Again”. Like the fall of any empire, there is a stage of denial, and history is repeating itself again. Just ask the Romans during their decline.
According to the words of Aristotle, an Ancient Greek philosopher and polymath (384-322 BC)
“Man is a political animal who seeks happiness by different ways and different means and so make for themselves different modes of life and forms of government. There is a struggle for power to determine who is master”
Sound familiar? Let’s call it the “Trump Effect”.
What will this chaos mean for publishers in 2025?
A USA invasion of Canada has happened before economically (the digital publishing industry is a textbook example). Canada’s history as a branch-plant economy of big USA firms is still true today, so what’s the big deal now? We are also the hewers of wood in a resource-rich country that has now come into vogue. But...
If anybody has read “Confessions of an Economic Hit Man”, you know what you are seeing now is part of that playbook for world trade, with Trump as the frontman, just like a movie script with Trump as the reality-star president of the USA. This arrogance will be their downfall, as the rest of the world will circle the wagons and serve somebody humble pie.
Whatever the case, there is a global reset happening, and what does that mean for publishers when you don’t know the expected results yet? Typically, Canadian firms let USA companies shoot it out and partner with the ones left standing after the gunfight, but the USA is now shooting at everybody. We need to duck, bob, and weave now, and maybe add a little bit of treachery too, to survive.
Battening down the hatches is the first step, as the first thing that will be cut is advertising during a period of uncertainty lasting 3-6 months. The “Trump Effect” is a new/old phenomenon and needs to be monitored carefully. The issue for Canadian publishers is that the market is already owned by USA digital giants, so we are already screaming “Who’s Your Daddy”, and the Online News Act is meant to remedy that. So in many respects we know what can happen to a Canadian industry dominated by USA firms. We can only guess and fear what is going to happen next.
A side effect of the “Trump Effect” is a growing trend to buy Canadian, which is a good thing for Canadian media. The CMDC has been advocating a Buy Canadian initiative, the “Canadian Media Manifesto”, amongst their member agencies that control 90% of the media spend in Canada. This should get additional momentum in 2025. A Google AdSense and Facebook ad strike by all governments would be a great symbolic show of strength. We need governments to stop advertising with USA media too! Here is a Facebook ad from the Town of Whitby. Let’s hope we can shame them too. https://www.cmdc.ca/cmm
Here is a local firm that is benefiting from the Buy Canadian wave. Woah Dough provides a gluten-free line of baked goods and cookies and is now the official gluten free cookie of the Enercare Centre in Toronto. https://woahdough.ca/
All levels of government are jumping on the buy Canadian bandwagon, which may be a rash decision in the heat of the moment; just ask Doug Ford, the premier of Ontario, who tried to jack up the electricity rates for our American friends/family. The Canadian way is what Alberta is doing, and that is what we should do: you don’t cut off the energy supply to your friends and family, and that is what Americans are. Be kind and compassionate. Anger only makes things worse and can cause pain for generations. What Trump is doing has happened in Canada (the Oka crisis of 1990 in Quebec) on a smaller scale (but it resulted in military action), and we can learn from that.
Another side effect is that our Canadian identity is being challenged and our values are being tested, and we are seeing “How much Canadians love Canada” showing up everywhere, which is inspiring. The “Elbows Up” rallying cry that’s being adopted is very Canadian through our hockey roots. “Mr. Elbows”, Gordie Howe, was a Canadian hockey icon who played for the Detroit Red Wings and always went into the corners with his elbows up for puck battles. That’s right, Canadians are hard-nosed and ya don’t want to pick a fight with them.
Source: Elbow Room: “Gordie ‘Pow!'” by Detroit artist Zelley. Image: Howe Foundation
COPA 2025 to introduce "Best Patriotic Story"
The COPAs want to hear more stories like this this year. The COPAs will be introducing a new free category for the “Best Patriotic Story” that unites Canadians for a common cause. Details will be announced. Let’s show them that Canadians are a proud nation and don’t want to be American, plus face the possibility of medical bankruptcy. I had to throw this shot in; let’s call it a hockey face wash. LOL. Also, you never divorce yourself from your family just because you have an unpredictable old uncle that can’t make up his mind. And friends are for life.