Artificial General Intelligence is a term that most of us have heard, a good number of us know how it’s defined, and some claim to know what it will mean for the average marketer. Here’s what OpenAI’s Sam Altman said: “It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI.”
What nobody knows for sure is when it will be here. Some said that GPT-5 would herald the dawn of artificial general intelligence.
This episode is airing in mid-2025, and GPT-5 has come out…and it is not widely believed to have achieved AGI.
Our guest says AGI is a long way off and, more importantly, that it might not be the milestone we need for AI to become a revolutionary force in our lifetimes. Today’s guest takes us through what it will take for AGI to truly arrive. We also talk about public vs. private models, Mixture of Experts (MoE) models, the branches of AI (such as foundational vs. generative), and agents and agentic workflows.
Today’s guest graduated from DePaul with an MBA and has headed the AI/analytics groups at Ernst & Young (EY), Gartner, CSL Behring, and now the Hackett Group.
He has written several books and is here to talk about his fifth, which came out in 2025.
So let’s go to Chicago now to speak about “The Path to AGI” with its author. Let’s welcome back, for the 4th time on this show – more than anyone else – John Thompson.
Most of the leading AI companies tell us how wonderful their technology will make our lives. In a recent post called The Gentle Singularity, OpenAI’s head Sam Altman says: “We will figure out new things to do and new things to want…Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other.”
Of course, these new things need to be marketed and sold. Sam has good news there too, saying: “Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change.”
This all sounds wonderful; this line of thinking is embraced so heavily by Silicon Valley that it’s been given a title: Effective Accelerationism. Its essential thesis is that AI will drive progress all by itself. So we should just let it take over? Are we willing to bet our livelihoods on that?
Here in 2025, it’s a challenge to do sales and marketing work using AI. Very few know how to run entire functions with generative AI, which is why Sam qualified his 2030 prediction by saying that “many people will figure out how to benefit from [AI]” by then. How do we unlock AI’s activation in customer acquisition? How do we get out of the starting blocks?
I had the chance to moderate a panel discussion on “Gen AI Activation in Marketing & Sales” at an amazing event hosted by UC Labs and TCC Canada – links to both are in the show notes.
The panel featured myself, Lubabah Bakht, Gary Amaral, Jim Cain, Peter MacKinnon, and Brett Serjeantson, zigzagging through everything from day-to-day challenges to legal and privacy concerns to the lack of skills barring our progress.
I count myself fortunate not only to have shared a panel with these experts, but to be able to call them friends.
And now, please listen to these experts on activating generative AI in marketing and sales.
There’s no denying that ChatGPT and other generative AIs do amazing things.
Extrapolating from how far they’ve come in three years, many get carried away thinking generative AI will lead to machines reaching general and even super intelligence. We’re impressed by how clever they sound, and we’re tempted to believe they’ll chew through problems just like the most expert humans do.
But according to many AI experts, this isn’t what’s going to happen.
The difference between what generative AI can do and what humans can do is actually quite stark. Everything it gives you has to be proofed and fact-checked.
The reason why is embedded in how they work. A generative AI uses an LLM trained on the vast repository of human writing and multimedia on the web. It gobbles them up and chops them all up until they’re word salad. When you give it a prompt, it measures what words it has usually seen accompanying your words, then spits back what usually comes next in those sequences. The output is very impressive – so impressive that in 2022, Blake Lemoine, a Google engineer with a master’s in computer science who was testing one of these models, became convinced that he was talking with an intelligence he characterized as sentient. He spoke to Newsweek about it, saying:
“During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient due to the emotions that it expressed reliably and in the right context. It wasn’t just spouting words.”
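That “what usually comes next” mechanism can be sketched in a few lines of toy Python. This is a deliberately tiny bigram model – a hypothetical illustration at a microscopic fraction of a real LLM’s scale – but it captures the basic idea of counting which words follow which:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "the vast repository of human writing"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model)
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The model has no idea what a cat is; it only knows which word tends to follow which – which is why everything it produces still needs a human fact-check.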
All the same, generative AI shouldn’t be confused with what humans do. Take a published scientific article written by a human. The author wouldn’t have started by hammering the keyboard until all the words came out; they likely started by asking “what if,” building a hypothesis that makes inferences about something, chaining it together with reasoning by others, and conducting experiments that proved or disproved the original thought. The output of all that is what’s written in the article. Although generative AI seems smart, you would too if you skipped all the cognitive steps that happened prior to the finished work.
This doesn’t mean Artificial General Intelligence is doomed. It means there’s more than one branch of AI, and each is good at solving different kinds of problems. One branch, called Causal AI, doesn’t just look for patterns; it figures out what causes things to happen by building a model of something in the real world. That distinguishes it from generative AI, and it’s what enables this type of AI to recommend decisions that rival the smartest humans – decisions that extend into business areas like marketing, making things run more efficiently, and delivering more value and ROI.
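To make the pattern-vs.-cause distinction concrete, here is a toy simulation with entirely hypothetical numbers (not from the book): seasonality drives both ad spend and sales, so a pure pattern-matcher overstates the ad’s effect, while a model that includes the confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: seasonality drives BOTH spend and sales
season = rng.normal(size=n)                              # confounder
ad_spend = season + rng.normal(size=n)                   # spend rises in high season
sales = 2 * ad_spend + 3 * season + rng.normal(size=n)   # true ad effect = 2

# Pattern-matching view: regress sales on spend alone (absorbs the confounder)
naive = np.polyfit(ad_spend, sales, 1)[0]

# Causal view: model the confounder explicitly and adjust for it
X = np.column_stack([ad_spend, season, np.ones(n)])
adjusted = np.linalg.lstsq(X, sales, rcond=None)[0][0]

print(naive)     # well above the true effect of 2
print(adjusted)  # close to the true effect of 2
```

The pattern alone says ads are far more effective than they really are; only a model of what causes what gets the marketing decision right.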
My guest is the Global Head of AI at Ernst & Young (EY), having also been an analytics executive at Gartner and CSL Behring, and a graduate of DePaul with an MBA.
He has written five books. His 2024 book is about the branch of AI technology we don’t hear very much about, Causal AI. So let’s go to Chicago now to speak with John Thompson.
Rich Brooks is founder and president of flyte new media, a digital agency in Portland, Maine. He founded The Agents of Change, a weekly podcast with over 550 episodes. He is a nationally recognized speaker on using digital channels like search, social media, and mobile for marketing to your audience. Rich also hosts the Agents of Change conference, which takes place October 9th and 10th, both virtually and in his hometown of Portland, Maine.
Timestamps/Chapters
00:00:00 Intro
00:02:49 Welcome Rich
00:08:56 Using GPT to make text SEO-friendly
00:17:32 Blending generative text with your own content
00:22:47 Expanding to image & video
00:27:11 PSA
00:27:45 Managing projects and events with AI
00:38:36 When to use a human vs. a GPT
00:47:52 Info on Rich, his podcast & his conference
One of the most famous Western philosophers of all time is G.W.F. Hegel. He influenced other thinkers like Karl Marx, Søren Kierkegaard, and Jean-Paul Sartre. He lectured at the universities of Jena, Heidelberg, and, from 1818 until 1831, Berlin. In fact, his lectures there drew students from all over campus, to the point that the university’s bell tower would ring to announce the start of Hegel’s lectures.
People may have flocked to hear him, but that doesn’t mean they understood Hegel. One student who went on to write a biography of him was Karl Rosenkranz, who said, “His lectures were not clear and systematic presentations, but profound expositions of the inner movement of concepts, which often raised more questions than they answered.” Elsewhere, he said, “The students often complained that Hegel was difficult to understand.”
Many moons ago, I was a political science major and had to take a philosophy course that covered Hegel. I had the toughest time understanding him, and Hegel still confuses me to this day. I read and re-read his words, but I don’t get what he’s saying.
It’s the same with generative AI like ChatGPT: when we ask it questions, there always seems to be a randomness factor. Sometimes it gives you amazing results, while other times it leaves you scratching your head at its hallucinations…its stupidity.
If you have this problem, it might not be the AI – it might be your prompts! There are techniques for how you craft them, and this has given rise to a whole field: prompt engineering.
Our guest co-founded a 50-person marketing agency called Ladder. He has designed courses on LinkedIn Learning and Udemy that 350,000 people have taken. And he was a very early user of Large Language Models – the brains behind generative AI.
In 2023 he came on Ep 168 of this show to discuss his book “Marketing Memetics.” In 2024 he came out with an O’Reilly book titled Prompt Engineering for Generative AI. Let’s go to Liverpool, England to talk with Mike Taylor.
Chapter Timestamps
00:00:00 Intro
00:03:28 Welcome Mike
00:11:27 Expressing all that’s needed for a GPT to produce a good response