Causal Artificial Intelligence, with John Thompson

Episode 206

There’s no denying that ChatGPT and other generative AIs do amazing things.

Extrapolating from how far they’ve come in three years, many get carried away thinking generative AI will lead to machines reaching general and even super intelligence. We’re impressed by how clever these systems sound, and we’re tempted to believe they’ll chew through problems just like the most expert humans do.

But according to many AI experts, this isn’t what’s going to happen.  

The difference between what generative AI can do and what humans can do is actually quite stark. Everything it gives you has to be proofread and fact-checked.

The reason why is embedded in how they work. Generative AI is built on a large language model (LLM) trained on the vast repository of human writing and multimedia on the web. It gobbles that material up and chops it all up until it’s word salad. When you give it a prompt, it measures which words it has usually seen accompanying your words, then spits back what usually comes next in those sequences. The output IS very impressive. So impressive that in 2022, when Blake Lemoine, a Google engineer with a master’s in computer science, was testing one of these systems, he became convinced that he was talking with an intelligence he characterized as sentient. He spoke to Newsweek about it, saying:

“During my conversations with the chatbot, some of which I published on my blog, I came to the conclusion that the AI could be sentient due to the emotions that it expressed reliably and in the right context. It wasn’t just spouting words.” 
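To make that next-word idea concrete, here’s a toy sketch (my illustration, not something from the episode): it counts which word follows which in a tiny corpus, then extends a prompt with the most common continuation. Real LLMs use neural networks trained over tokens at web scale, so treat this purely as an analogy.

```python
from collections import Counter, defaultdict

# Toy "training corpus" standing in for the web-scale text a real LLM sees.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words tend to follow each word (a bigram table).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_prompt(prompt: str, length: int = 4) -> str:
    """Extend the prompt by repeatedly emitting the most common next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # this word never appeared during "training"
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_prompt("the cat"))  # -> "the cat sat on the cat"
```

Even this silly model produces grammatical-looking output; scale the corpus and the statistics up by billions of parameters and the continuations start to sound like the conversations Lemoine describes.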

All the same, generative AI shouldn’t be confused with what humans do. Take a published scientific article written by a human. The author didn’t start by hammering the keyboard until all the words came out. They likely started by asking a “what if,” building a hypothesis that makes inferences about something, chaining it together with reasoning from others, and running experiments that proved or disproved the original thought. What’s written in the article is the output of all of that. Generative AI seems smart, but you would too if you skipped all the cognitive steps that happened before the finished work.

This doesn’t mean artificial general intelligence is doomed. It means there’s more than one branch of AI, and each is good at solving different kinds of problems. One branch, called Causal AI, doesn’t just look for patterns; it figures out what causes things to happen by building a model of something in the real world. That distinguishes it from generative AI, and it’s what enables this type of AI to recommend decisions that rival those of the smartest humans. Those decisions extend into business areas like marketing, making things run more efficiently and delivering more value and ROI.
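As a loose illustration of that distinction (my sketch, not an example from John’s book): in the toy simulation below, a hidden factor, labelled “season,” drives both ad spend and sales. A pattern-finder regressing sales on spend alone overstates the effect; a causal model whose diagram says season influences both recovers the true effect by adjusting for it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed toy structural model: season -> ad_spend, season -> sales,
# and ad_spend -> sales with a true causal effect of 2.0.
season = rng.normal(size=n)                   # hidden common cause
ad_spend = 1.5 * season + rng.normal(size=n)
sales = 2.0 * ad_spend + 3.0 * season + rng.normal(size=n)

# Pattern-finding view: regress sales on ad_spend alone.
naive = np.polyfit(ad_spend, sales, 1)[0]

# Causal view: the diagram says season confounds the relationship,
# so adjust for it (regress on both ad_spend and season).
X = np.column_stack([ad_spend, season, np.ones(n)])
adjusted = np.linalg.lstsq(X, sales, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")    # ~3.38, inflated by the confounder
print(f"adjusted slope: {adjusted:.2f}") # ~2.00, the true causal effect
```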

My guest is the Global Head of AI at Ernst & Young (EY). He has also been an analytics executive at Gartner and CSL Behring, and he holds an MBA from DePaul University.

He has written five books. His 2024 book is about a branch of AI we don’t hear very much about: Causal AI. So let’s go to Chicago now to speak with John Thompson.

 

Chapter Timestamps

00:00:00 Intro

00:04:36 Welcome John

00:09:05 Drawbacks with current generative AI

00:16:09 Problems Causal AI is a good fit for

00:22:47 Ways generative AI can help with causal AI

00:26:50 PSA

00:28:08 How DAGs help in modeling

00:38:36 What is causal discovery

00:47:52 Contacting John; checking out his books

People/Products/Concepts Mentioned in Show

John is on LinkedIn

John Thompson has been on Funnel Reboot twice previously:

Episode 136

Episode 181

Causal Diagramming tools:

https://www.dagitty.net/

https://cbdrh.shinyapps.io/daggle/

Listen to episode

Image: Components of the Causal AI modeling process.
