Businesses are pouring millions into generative AI—chatbots, copilots, “agents”—while quietly ignoring the other half of the AI stack that’s been delivering measurable value for decades. Predictive AI doesn’t write poetry. It predicts who’s going to churn, which transaction is fraud, and which customer is worth contacting. It calculates probabilities and helps you act on them at scale. Not glamorous. Just effective.
In this conversation, Eric Siegel—author of The AI Playbook and founder of Machine Learning Week—makes a subversive claim: most organizations should be investing at least as much in predictive AI as generative AI. The problem isn’t the math. It’s the gap between tech and business. Companies celebrate models as value. But the model isn’t the value. Acting on predictions is.
What You’ll Learn
- Why predictive AI is the “forgotten 50%” of the AI stack
- The critical difference between technical metrics and business metrics
- Why 95% accuracy can still make your AI system worthless
- How predictive AI can serve as a reliability layer for generative AI
- The six-step BizML framework—and why most companies skip the business half
- What business leaders need to understand about probabilities (and why it’s not rocket science)
Key Takeaways
- Generative AI is sexier. Predictive AI is often more autonomous.
GenAI feels human. That's why it gets attention. But it usually requires a human in the loop. Predictive AI, by contrast, quietly automates decisions at scale—blocking fraud, targeting marketing, prioritizing inspections—without fanfare.
- The hype is distorting investment decisions.
Predictive and generative AI solve different problems. They shouldn't compete any more than a waterpark and a ski resort. But they do compete—for budget, talent, and oxygen. And that imbalance is costing organizations real value.
- A 95% success rate can equal 0% viability.
If a generative AI system makes critical errors 5% of the time, it may never get deployed. That's where predictive AI comes in—flagging high-risk cases and routing them to humans. Catch most of the risky 5%, and suddenly 85% automation becomes viable.
- The model isn't the value. Operational integration is.
Data scientists build models. Organizations celebrate. Then… nothing changes. Because deployment—actually integrating predictions into workflows—is where value lives. No operational change, no business improvement.
- Technical metrics don't pay the bills.
Area under the curve. Precision. Recall. Impressive, yes. But they don't answer the only question that matters: How much money will we make or save if we use this?
Business metrics—profit, savings, KPI impact—must be part of the project from the beginning, not an afterthought.
- Predictive AI is about probabilities, not magic.
No crystal balls. Just better odds. A number between 0 and 100 for each case—who's likely to churn, buy, commit fraud. That probability lets you triage limited resources. At scale, those micro-decisions compound.
- Most companies skip the business half of the project.
Eric's BizML framework outlines six steps. Data teams are trained in the last three:
- Prepare data
- Train model
- Deploy model
But the first three—defining what’s predicted, how well, and what’s done about it—require deep business collaboration. Skip those, and deployment rates remain dismal.
- This is a business endeavor that happens to use machine learning.
Predictive AI isn’t a tech install. It’s an operational shift. That requires stakeholders who understand enough to participate—not to change spark plugs, but to drive the car.
Chapters
- 00:00 – The Forgotten 50%
- 02:12 – Why GenAI Wins Attention
- 05:47 – Agents & Reliability
- 10:15 – Why Deployment Fails
- 17:56 – The Organizational Shift
- 21:06 – Hybrid AI in Practice
- 25:04 – The Power of Probabilities
- 29:14 – The BizML Framework
- 32:43 – Don’t Chase the Fuzz
Meet Our Guest

Eric Siegel, Ph.D., is a bestselling author, consultant, and machine learning expert known for making predictive analytics and AI accessible to business and technical audiences alike. He wrote the acclaimed Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die and The AI Playbook: Mastering the Rare Art of Machine Learning Deployment, and is the founder of the long-running Predictive Analytics World conference series. A former Columbia University and UVA Darden professor, Eric also serves as CEO of Gooder AI, executive editor of The Machine Learning Times, and a sought-after keynote speaker who helps organizations bridge the gap between data science and real-world impact.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Check out this episode’s sponsor: Deel
- Connect with Eric:
- Check out Eric’s book: The AI Playbook
Related articles and podcasts:
David Rice: Businesses are investing millions in generative AI. They're building agents, chasing the hype. And often they're ignoring the other 50% of the AI stack, which is the part that actually delivers measurable business value. Predictive AI isn't sexy. It doesn't write poems or generate images. It just predicts who's gonna click, who's gonna buy. It targets your marketing. It catches fraud before it happens, and it automates decisions at scale without a human in the loop. If you're leaving it on the table, it's often because it doesn't seem as human-like.
Today's guest is Eric Siegel, author of the AI Playbook and Founder of Machine Learning Week. According to him, most organizations should be investing at least as much in predictive AI as they are in generative AI, but too much of the oxygen is being sucked out of the room by gen AI hype, and it's a loss for everybody. Here's the problem that he keeps seeing. Companies treat predictive AI as a data science project.
They send the data scientist off to build a model, celebrate when it's done, and then call it value. But the model isn't the value. Acting on the predictions is the value. Operational integration is the value. And that requires deep collaboration with business stakeholders across six steps that Eric will outline, not just three technical ones.
Most companies skip the business side steps entirely. They skip defining what's predicted and what's done about it. They skip operational integration. And then they wonder why their AI investments aren't paying off. So today we're gonna cover why predictive AI is less sexy, but more autonomous than generative AI.
How predictive AI will become the reliability layer that makes gen AI actually viable. The six steps of the BizML framework and which three your business stakeholders are most likely skipping. Why a 95% accuracy rate sounds impressive but makes your system worthless. And what business leaders need to understand about probabilities. Don't worry, it's not rocket science.
I'm David Rice. This is People Managing People. And if you've been chasing generative AI while leaving predictive AI on the table, this conversation shows you exactly what you're missing. So let's go.
Eric, welcome! It's good to have you on the show today.
Eric Siegel: Thanks, David. Great to be here.
David Rice: Generative AI gets a lot of the spotlight, but you've called predictive AI sort of the forgotten 50% of the AI stack, right? It's essential for real business value though. I'm curious, why do you think organizations are so much more excited about generative models, and what sort of gets overlooked when these predictive systems are kind of sidelined in people's thought processes?
Eric Siegel: Yeah, I like the way you put it. I mean, predictive AI is older, but not old school. Most of its value is largely untapped. It's sort of the other, the original AI. In a sense, it's what most people meant by AI before the advent of ChatGPT and other generative AI solutions a few years ago. It's learning from data to predict, in order to target and improve any existing large-scale operations. So predict who's gonna click, buy, lie, or die, commit an act of fraud, any outcome or behavior that would be valuable to an organization that runs large-scale operations. Your question is, why is gen AI getting so much of the spotlight?
Well, the answer is really simple. It's much sexier and it's much easier to use. Neither of those things means it's necessarily more valuable, and I'll address each of those two things. Let me sort of make a sweeping statement, which is really subversive in today's culture of focusing so strenuously on gen AI, with a certain dose of hype about its potential value.
Although it is extremely valuable, and I don't criticize technology, I'm amazed by it. I criticize some of the storytelling and hype about it. But here's the subversive statement, which is that most organizations should be investing at least as much in predictive AI as they are in generative AI, and the two solve different problems and shouldn't compete any more than a waterpark and a ski resort.
They very much do compete for attention, data scientist time, resources. So it is unfortunately somewhat of a zero-sum game, much to our detriment; too much of the oxygen is being sucked outta the room by gen AI. So it's disproportionate, right? It's a loss for everybody. So predictive AI will target marketing by predicting who's gonna buy, target fraud detection by predicting which transaction is gonna turn out to be fraudulent.
And more specifically target which transaction to block or to hold or to audit. Right. So anytime you've got large scale operations, always limited resources. Business is a numbers game. We have no magic crystal ball, but we can tip the odds in our favor by predicting better than guessing. And that's the name of the numbers game that you play with predictive AI.
That is to say, it calculates probabilities from history. Probability is not the most sexy conversation topic in the world, but if you're interested in value, if you're interested in improving your existing large-scale operations, this is where you generally turn, right? So the sexiness of gen AI comes from how seemingly human-like it is and how it fits in so well with the long-term narrative about AI.
Ever since the advent of that term in the fifties, the 1950s, and of course all of the extremely entertaining movies and such. So it's so seemingly human-like, and it is remarkable. And that is not entirely unrelated to its potential value in all the use cases, for it to serve as a thought partner, and first drafts, and first drafts of code, and all the stuff that it does so remarkably well. For the vast majority of its use cases, though, you do need a human very much in the loop.
And ironically, by being more seemingly human-like, it therefore gets applied to things that humans would normally do, and that are less lenient to errors. Whereas it turns out predictive AI is less seemingly human-like, but actually more potentially autonomous, 'cause it'll automatically decide which credit card transaction to block as potentially fraudulent, for example.
That was my little monologue there on the difference between the two fields.
David Rice: Oh, no worries. You know, it's funny, 'cause it's like the person in the friend group who throws all the killer dinner parties, but nobody thinks of them as a foodie, 'cause their Instagram doesn't look like Food Network.
But it's interesting 'cause, like you said, it's been around a long time. I think a lot of companies, particularly at the enterprise level, right, they've been working on these systems for a while. And I had a guest on recently, and we were talking about the fact that if you've put some of these systems into place and you've done the work, the whole agent revolution that's coming is not really gonna necessarily be your thing, most likely, because a lot of this stuff already is automated.
Eric Siegel: I have a concern about the terminology agentic AI and AI agents. I've written a couple articles in Forbes saying that I think the term is mostly misleading and unhelpful.
It's a hype term. It doesn't allude to any particular new type of technology. It only alludes to the intention, which is to further automate, and obviously that's a self-evident intention, right? That's what we wanna do. That's why we build any machine: to do things that otherwise would have to be done by a person.
So just because we slapped the word agentic on it does not mean there's anything new. I mean, it's just sort of relatively obvious use cases that you would try to do with gen AI, but that very quickly reach a scope of ambition and intentions for the project that are outside its ability to be autonomous without a human in the loop.
That would segue me to a main bullet that I've been starting to make a lot of noise about, which is where the two universes collide, where predictive AI and gen AI really need one another, which is that I believe predictive AI's next killer app will be to serve as the reliability layer for generative AI.
So to the degree that we all very quickly have these wonderful ambitions with gen AI, whether we call it an agent or whatever words you use, you have this idea it's gonna serve as a customer service agent, right? It's gonna be directly interacting with consumers over a relatively large range of topics, or even be able to conduct transactions and such.
The idea of having that fully automatic very quickly, depending on exactly the scope of what the system's supposed to be able to do, gets unwieldy. It gets to the point where maybe 5% of the time it'll make a critical error. It'll divulge information it's not supposed to, conduct a transaction it really shouldn't, or any of a million different things that can kind of go wrong. And if it gets it right 95% of the time, that's incredible. It's impressive, unprecedented, something we couldn't have imagined several years ago. But the whole system has no value. It's not viable, it's not gonna get deployed.
The 5% error rate's too high. How do we address that? Well, we can sit around and wait for all the corners to get rounded off and the technology to improve, but that's a much taller order than we realize. The difference between what gen AI and humans can do falls within that 5%, or whatever it ends up being, depending on the task at hand. The difference between what gen AI can do and what humans can do is only gonna become increasingly apparent. There's diminishing returns in its improvement, but we could realize a nice big portion of that audacious promise of autonomy if we put a predictive layer on top. So let's say we use predictive AI, predictive analytics, enterprise machine learning, whatever you wanna call it.
We learn from a bunch of examples how to automatically flag the cases most likely to exhibit a problem, the cases, the situations, the interactions with the end consumer, whatever the system's supposed to do, and then automatically divert those most risky cases off to that more expensive human in the loop.
So for example, the top 15% most risky moments of interaction with end consumers for a chatbot, let's say. It gets paused, the human comes in, and they either alter it or intervene if necessary, for, let's say, 15% of interactions where there's a 5% error rate. We don't know. We don't have a magic crystal ball for any predictive use case, including this one.
But if we sort of do it more often than it happens, let's say 15%, to try to find those 5%, that might be enough that we do catch the majority of them. Maybe the net result is that there's like a 1% error rate, which, depending on the use case and the business context, maybe does make it viable. Then what's happened? We've realized 85% of that audacious promise of autonomy, which is a lot better than the 0% that you realize when the system's not viable for deployment.
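The arithmetic Eric walks through can be sketched in a few lines. This is a toy simulation with invented numbers, not a real system: a 5% base error rate, a 15% human-review budget, and hypothetical score distributions for a risk model that tends to score problem cases higher.

```python
import random

def triage(cases, review_fraction=0.15):
    """Route the riskiest fraction of cases to a human; automate the rest."""
    ranked = sorted(cases, key=lambda c: c["risk_score"], reverse=True)
    cutoff = int(len(ranked) * review_fraction)
    return ranked[:cutoff], ranked[cutoff:]  # (human-reviewed, automated)

# Illustrative simulation: a 5% base error rate, and a predictive model whose
# risk scores are informative (problem cases tend to score high, others low).
random.seed(0)
cases = []
for _ in range(10_000):
    is_error = random.random() < 0.05
    # Hypothetical score distributions, chosen only for illustration.
    score = random.betavariate(5, 2) if is_error else random.betavariate(2, 5)
    cases.append({"risk_score": score, "is_error": is_error})

reviewed, automated = triage(cases, review_fraction=0.15)
residual_errors = sum(c["is_error"] for c in automated)
print(f"Automated: {len(automated) / len(cases):.0%} of cases")
print(f"Residual error rate: {residual_errors / len(cases):.1%}")
```

With scores this informative, reviewing the riskiest 15% catches most of the 5% of problem cases, so the residual error rate among automated traffic lands well below the base rate while 85% of cases stay fully automated, which is the point of the reliability layer.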
David Rice: You mentioned there the machine layer on top, and I'm curious, 'cause you've just come out with a new book recently, in the last couple years, and it's, you know, The AI Playbook. Take me through it, 'cause I found the title interesting, right? Like, mastering the rare art of machine learning deployment. Why is it so rare, I guess, is what I want to ask. That's where I wanna start.
Eric Siegel: Yeah, it's an uncomfortable truth. So machine learning deployment that's being alluded to there in the subtitle of the book refers to predictive AI or predictive analytics projects, right? Enterprise machine learning. You create a predictive model from data.
That's what machine learning does. It learns from the data, and it's encapsulated as rules or formulas or something in some kind of a model. And just to be clear, by the way, both gen AI and predictive AI are built on or with machine learning. They're both categories of use cases of machine learning, but the end results are very different systems that serve very different purposes.
So in the case of predictive AI, the predictions are per organizational unit, like the customer, the healthcare patient, the transaction that might be fraud, the satellite that might run outta battery, the place you'd drill for oil that might be prolific, whatever it is, you know, the train wheel that might break down, whatever it is on that level of granularity.
And the use cases are infinite, virtually infinite. But it turns out that although those types of use cases of machine learning, the field of predictive analytics, of enterprise machine learning, has been around for decades, right? Since the sixties, when we were using regression to target marketing and fraud detection or what have you, credit scoring. It's potentially a much more mature field.
It's been around that much longer, but still, just like gen AI, it hasn't grown up. It hasn't reached maturation, professionalization, in one really important part, which is why I wrote the book, and also why I co-founded my startup Gooder AI, where we're trying to, like, bridge that final gap, that last mile, between tech and biz.
Right? Tech and biz is sort of a notorious gap between, let's say, the tech personnel, the data scientists, the quants, and their customer, the business stakeholder, the person in charge of the operation that's meant to be improved with the predictions of a model. The gap is so wide, and it's what kills projects over and over again.
What happens in particular is, the data scientist comes to you and says, hello, I made the machine learning model that you asked me to make, to predict which customer's gonna cancel, churn, defect, right? These are all synonyms, right? That's called churn modeling. So now you can target a retention offer, a discount that you can't afford to give to a hundred percent of the customer base, but the numbers come out great if you only give it to those high-risk customers, right?
So you can frame predictive use cases often, many of them, in terms of: predict something bad in order to intervene. Predict which gen AI interaction is gonna go bad, so you put that human in the loop. Predict which transaction's gonna turn out to be fraudulent so you can block it, right? And predict which customer's gonna cancel, or which employee's gonna quit.
Hewlett Packard did that for all 300,000 of their employees before they split off HPE, which I covered in my first book, and they presented it at my conference and such. In any case, it turns out that the data scientist comes and says, I made the model. It predicts really well. It's a sound model. It learned from data.
It's panned out. I've done all the technical checks, so go ahead and deploy it. Then the business stakeholder says, well, how good is the model? I know that we don't have magic crystal balls. We can only put the odds on the possibility that an individual's gonna click, buy, lie, or die, right? You know, in general, it's just playing the odds.
So tell me how good is this model? How well do we do with the data available? And the data scientists will almost invariably say something like, it's awesome. You know, the area under the receiver operating characteristic curve is 0.837 and then long awkward pause. The data scientists are only trained and equipped in general to calculate what we call technical metrics of performance.
Precision, recall, F-score, lift, area under the curve. Even accuracy is only a technical metric, and they only tell you the sort of raw, abstract predictive performance in comparison to a baseline, like random guessing. That is, they give you a good sense that technically the model's a lot better than guessing, which is good to know.
It's important and generally the most we can hope for since we don't have a magic crystal ball. It tells you nothing about the potential value, like how much money would you make by using this model to target marketing, retention, offers fraud detection, predictive maintenance, whatever the use case is, right?
Who to hire, who to try to retain as an employee. It's always about targeting limited resources and doing it predictably. But when you want to calculate that bottom line in terms of monetary performance, profit, or savings, or any other pertinent KPI business metrics, that's still, to this day, not a prevalent exercise in the field of data science.
And in a sense, I'm almost as bewildered as anyone else, even though I've been in the field, I've been in machine learning for more than 30 years. Right. I got into the field as a nerd, right? For the same reason most data scientists do, which is that it's the coolest technology: learning from data to make discoveries that hold in general, to discover patterns, formulas that will apply equally well over unseen, unique, novel cases and situations.
Right. That's cool. It's like technically in that very sort of well-defined sense of the word, it has actually learned something and it works, and that's panned out. So you get into it 'cause you're nerdy, but we're sort of fetishizing the core technologies, like being more excited about the rocket science than the launch of the rocket.
When you want to go launch the rocket, after you've built something, you have to test it according to KPIs that matter. You have to stress test this rocket. We can build the rocket, but we also just need to do something that's much less technical, which is have the ability to steer the rocket and get a sense of its performance.
So that much needed move to actually calculate business metrics, monetary performance like profit and savings, make a forecast or estimation of just how much the business upside would be if we were to actually use the predictions of this model. That's why we built Gooder AI, and that's an essential piece of my book, The AI Playbook.
The bad news is that this field still fails most of the time. Most predictive models developed over data with the intention of being deployed for enterprise operations fail to reach deployment. But the good news is that it's just the last mile. We've already gone 10,000 miles. We're standing on the shoulders of giants, the people who developed machine learning algorithms that learn from data, and now it's just an organizational tweak: getting deep collaboration between the tech and business side and getting them speaking the same language, including business metrics.
So that we can repair this dismal deployment rate and realize value much more regularly.
David Rice: Growing a global team shouldn't mean juggling five different systems for HR, payroll, and IT. Deel brings it all together so you can hire, onboard, pay, and equip people anywhere without the usual chaos. Whether you're bringing on employees in 10 countries or managing contractors across time zones, Deel handles compliance benefits, and payroll in one place, which means less tool sprawl, fewer headaches, and more time focusing on your people.
Ready to see what hiring without borders actually looks like? Visit deel.com/pmp. That's deel.com/pmp to book a demo. That's deel.com/pmp. Deel, hire, manage and pay anyone anywhere.
Just outta curiosity, like, what is the hardest shift that companies have to make if they're gonna get beyond those business metrics, or how they're measuring the tech as well, and focus on sort of the organizational impact?
Eric Siegel: Yeah, I mean, that's a great question. So broadening the metrics is one thing that needs adjustment. It's only one piece of it. More generally, when you define a predictive AI project, you're defining it in terms of a pair of things: what's predicted and what's done about it. What are you putting probabilities on?
What exactly are you predicting? Like who's gonna buy if contacted, which transactions turn out to be fraud and what have you? And then what's done about it is the intervention, right? Or the thing that leaps on the opportunity. If you're predicting a positive, like who's gonna buy if contacted, then obviously you contact those.
So that pair of what's predicted and what's done about it, for which there are so many possible use cases, right, that defines the project. But that's literally only the first of what I formalize as six steps in The AI Playbook, as the titular playbook paradigm framework that I call BizML, a business practice for running machine learning projects successfully through to deployment.
And the key ingredient across these steps is that there's deep collaboration between tech and biz. Data scientists cannot do this in a vacuum. Predictive AI projects are always a consulting gig. It's not a technology install. It's not like, Hey, we have a better database that operates more quickly, and we just sort of slip it in underneath and everything operates the same.
No, anytime you're changing operations with predictions or with probabilities, with odds, that's a very fundamental shift to a large scale operation. So the whole thing is very much a business endeavor first that happens to use machine learning. So we need to reframe it in that way. So to engender that deep collaboration across all the steps, probably the biggest challenge is that the business side needs to ramp up, but only on like a really accessible, interesting, pertinent, fascinating area, completely accessible.
It's not like you have to learn how to change the spark plugs and exactly how your engine works. You just need to learn to drive the car. That's a certain level of expertise. So it's the difference between riding in an Uber and operating the vehicle yourself. But if you don't operate the vehicle, in this analogy, you don't reach your destination.
So the ramp up is one of the main purposes served by my book, The AI Playbook. This is a ramp up that you're not gonna get in one hour, but it's barely a college course. It's much less than that. It's really basically one book's worth of ramp up, to get that semi-technical, accessible understanding of how predictive AI projects work, what it takes to run them, what you predict, what's done about it, what the metrics need to be.
And metrics is just arithmetic, right? It's just very particular arithmetic, so that everyone can get on the same page and run these projects successfully through to deployment, so they actually get deployed, operationalized, integrated into operations. If operations don't change, they don't improve, and that change part is the deployment.
David Rice: Earlier when you were talking, you mentioned, you know, the two generative AI and predictive AI they meet, they intersect, they interact. And you've talked to kind of like hybridizing the two because each addresses sort of some of the limitations that the other has. And I'm curious what's a good example of this, like them working together in practice and what obstacles do teams run into when they're trying to integrate the two?
Eric Siegel: No, that's great. Thanks for bringing that topic back up. I've been calling it hybrid in some of my writing and recent keynotes. And hybrid can mean other things too, but in this case, I'm specifically talking about hybridizing, combining, mixing predictive and generative AI. So one way is what I've already mentioned: you use predictive AI as a reliability layer on top of a generative AI system.
Another is that you have generative AI help you with a predictive AI project. So generative AI, I started out earlier by saying, you know, generative AI gets all the attention because it's sexier and it's easier. I addressed the sexier thing, because it's so seemingly human-like and sci-fi-ish. The fact that it's easier also kind of goes without saying, by definition.
It's easier. It's called a language model because it's been developed over and meant to perform human languages like English. That means you can just talk to it in English. There's very little training required to go ahead and start using generative AI. And if anyone hasn't tried it, what are you waiting for?
Right? It's amazing. And because of that, it can help bridge this tech-biz gap. So for example, our software solution, Gooder AI, serves sort of as a business console for predictive AI projects, where one of the main charts it'll show has two dimensions. The X coordinate is, you're using a model to decide how many individual cases to contact for marketing, or block as potentially fraudulent if it's a transaction, or inspect if it's a machine that might break down, or a train wheel that might break, whatever it is. So from left to right, you're targeting according to the predictive model, right? And talking about this applies to any kind of predictive project: the predictions, the probabilities, serve to triage and prioritize which customers are the most valuable to contact, which transactions are most risky and should be blocked.
Whatever it is, you're triaging, and then you have to decide exactly where you draw the line. You're not gonna block too few transactions and you're not gonna block too many. Same thing with marketing, contacting too few or too many, and inspecting a piece of equipment that might break down. There always turns out to be this kind of Goldilocks zone, but the order in which you're traversing the spectrum, that you're exploring to decide how to use the model and how to go to deployment, is based on the model.
So that's the X coordinate, the horizontal. Now the Y coordinate: it's money. What are the potential returns, savings, earnings that you're gonna get for this particular project? So that graph tends to go up and then down, because there's gonna be that sweet spot, that Goldilocks zone, where you're gonna have the best bottom-line improvement.
That depiction is sort of one of the main visuals you get in our software solution, Gooder AI. But it's not a common visual depiction. It's very fundamental. It's just two dimensions, both of 'em business relevant. It's not rocket science, and you could teach this in high school, but it's relatively new even to many quants.
Because, as I've mentioned, people generally don't even calculate business metrics like monetary savings and profit at all. That just hasn't become part of the culture yet, as much as it's absolutely needed. And because it's new, wouldn't it be nice to be able to ask a million questions without the fear of boring a data scientist or sounding stupid?
Well, pop, we have integrated a language model chatbot and gave it all the very specialized prompting related to what you're seeing. Now you can ask a question: so why does the curve go up and down? Explain that with a story, to a kid. And then it says, okay, well, you're selling lemonade. You're gonna have a lemonade stand, and you're trying to decide how many of the customers that walk by to target, right? And it does that great. So that's another example where the two areas can very much augment one another, using gen AI to help understand predictive AI projects.
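The up-then-down curve Eric describes is just arithmetic, and easy to sketch. Here's a minimal toy version in Python; the response probabilities, contact cost, and conversion value are all made up purely for illustration:

```python
# Sweep targeting depth: contact the top-k cases ranked by the model's
# predicted probability, compute expected profit at each depth, and find
# the "Goldilocks" sweet spot. All numbers are invented for illustration.
COST_PER_CONTACT = 2.0        # hypothetical cost of one retention offer
VALUE_PER_CONVERSION = 40.0   # hypothetical profit if the contact converts

# Customers sorted by predicted probability of responding, best first.
p_respond = [0.30, 0.22, 0.18, 0.12, 0.09, 0.06, 0.04, 0.03, 0.02, 0.01]

best_depth, best_profit = 0, float("-inf")
for depth in range(1, len(p_respond) + 1):
    expected_profit = (
        sum(p * VALUE_PER_CONVERSION for p in p_respond[:depth])
        - depth * COST_PER_CONTACT
    )
    if expected_profit > best_profit:
        best_depth, best_profit = depth, expected_profit

print(f"Sweet spot: contact top {best_depth} customers, "
      f"expected profit ${best_profit:.2f}")
```

Expected profit climbs while each additional contact is worth more than it costs, then declines once the model's lower-ranked cases no longer pay for themselves; the peak of that curve is the Goldilocks zone, and that's a business metric, not a technical one.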
David Rice: There's this kind of growing argument I've seen that predictive models can serve as critical guardrails, especially when generative systems hallucinate or start to behave unpredictably.
And I'm curious how you think predictive AI can be structured as sort of that safety layer in operational workflows.
Eric Siegel: Those are some of the classic ones that I even included in the first edition of my first book, Predictive Analytics, which came out in 2013, with an updated edition in 2016. Like Shell Oil predicting which teams had the highest risk of an incident on their oil rigs. So workplace safety on a per-team, per-environment, per-case basis. Right. That fits in well, just like anything else. I mean, the thing that differentiates predictive use cases from forecasting, I mean, both of 'em are about trying to put some estimation on the future.
Forecasting usually refers to one singular outcome. Is the economy gonna go up or down? How many ice cream cones are we gonna sell next quarter? Whereas predictive AI, predictive analytics, you're using machine learning to put a prediction on each individual case. So which individual's most likely to be holding an ice cream cone?
Which voter is gonna vote for your candidate, or is most likely to be persuaded to vote for your candidate if you have a volunteer knock on their door? Because it's on that per-case basis. The fact that it applies to large-scale operations makes it that much more valuable, obviously. Right?
It applies over millions of cases. That's what large-scale operations are made up of: millions of micro decisions. And the holy grail for driving or improving each of these individual micro decisions is predicting a pertinent outcome or behavior that applies to that decision. The good news is that because you're applying it so many times, that sort of gives you the law of large numbers and makes up for the fact that, unfortunately, we don't have a magic crystal ball; we only have probabilities. Probability sounds like an arcane topic, and many people shy away from it, and it's true that the math you might need to calculate a probability can be complex, but the output of these systems, each individual probability, isn't technical at all.
In fact, it's the most obvious next best thing that you could possibly hope or wish for outside of a magic crystal ball. If we had a hypothetical magic crystal ball, it would always tell you the definite answer: this person's gonna buy, and this transaction's definitely gonna turn out to be fraudulent, or what have you.
Without that, what's the next best thing? A number between zero and a hundred: what are the chances for this particular case? And that's what enables you to triage and prioritize many cases and decide which ones to treat one way versus another. So that basic idea of putting odds on things and then prioritizing can apply to safety and to so many different things, and it's not rocket science to decide exactly how to apply it.
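That triage idea, a probability on each case plus a line you draw to decide treatment, is simple enough to sketch directly. The thresholds and case scores below are hypothetical, purely for illustration of the pattern, including its use as a safety layer that routes uncertain cases to humans:

```python
# Hypothetical triage by predicted probability: block the clearest fraud,
# route uncertain cases to a human, and approve the rest automatically.
def triage(case_probs, block_at=0.90, review_at=0.30):
    """Map each case to a treatment based on its predicted probability."""
    decisions = {}
    for case_id, p in case_probs.items():
        if p >= block_at:
            decisions[case_id] = "block"
        elif p >= review_at:
            decisions[case_id] = "human review"
        else:
            decisions[case_id] = "approve"
    return decisions

# Scores from some predictive model (made up here).
scores = {"txn-1": 0.95, "txn-2": 0.45, "txn-3": 0.02}
decisions = triage(scores)
print(decisions)  # txn-1 blocked, txn-2 reviewed, txn-3 approved
```

Where you set `block_at` and `review_at` is exactly the Goldilocks-zone decision: lower thresholds catch more risk but send more cases to humans or block more good transactions.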
So that's central to the idea of what business stakeholders need to understand if they're hoping to leverage this technology. Then to take the next step, they have to participate in the project. They can't just sort of send data scientists off. I mean, that's the Achilles heel. That's the problem. And it goes along with us fetishizing the technology, glamorizing it, because it is really cool, this core rocket science ability to learn from data to predict.
But you can't just say that because we're using the best technology, it's necessarily valuable. It's only valuable if we use it. We get the output of that data scientist's hard work of doing the right number crunching: they create a predictive model, and it can generate predictions. Now you have to act on those predictions.
You have to incorporate them into that large scale operation. That's pay dirt. That's where things actually come to fruition and you start to improve operations and gain value. And the only way you're gonna get there is having deep collaboration. So this sort of fetishizing a technology leads to this false narrative that we're gonna use the best technology known as machine learning.
There's all variations of that, but we're gonna use that. We have a certified expert data scientist. They're gonna use that, they're gonna prep the data accordingly and make a model, and that's the value. But no, it's not the value. The value is the overall business project that includes that and includes deep collaboration up until the point where you've actually integrated it.
Collaboration includes business-side people who have ramped up on these ideas of what's predicted and what's done about it, across the six steps, which, again, none of them are rocket science. They include rocket science, but you can leave that part up to the data scientist. They also include lots of accessible, pertinent things, such as the idea of putting probabilities on cases, ranking, triaging, drawing a line, deciding how to treat individual cases.
David Rice: You know, a lot of predictive AI frameworks tend to emphasize data modeling, but your approach highlights some other things: prediction definition, operational integration, ongoing iteration. I'm curious, what are some of the less obvious but critical steps that companies often skip when they start to invest in predictive AI, and how does that affect outcomes?
Eric Siegel: Yeah, I mean, I like the way you asked that. What are the main steps they skip? The answer is that basically they skip all or most of the steps from the business side. The data scientists try to do all the steps, but they're only trained to do the technical number-crunching steps and a technical evaluation, not to bridge that gap.
Right. I mean, I'm encouraging them too, by the way. The conference I run, Machine Learning Week, which I've been running since 2009, used to be called Predictive Analytics World, and more than half that audience are data scientists. And there are times, outside my own conference, I mean, most of my keynotes are for general audiences, but sometimes I'm addressing data scientists, and believe me, I'm trying to get them also to help bridge that gap, to reach across the aisle, to not just calculate technical metrics but also business metrics, and to enlist the detailed participation of their customer or stakeholder.
In any case, on the business side, the core steps relate to, you know, what's predicted and what's done about it. Right? There's another part we've alluded to, which is how well does it predict. Adding that as a third, that's sort of a trio: what's predicted, how well, and what's done about it. Again, not rocket science. It's just the basics that sort of define the project and its requirements: what's predicted, how well, and what's done about it.
So the six steps I describe in the BizML framework in the book, The AI Playbook, consist of sort of two sets of three that concern themselves with exactly those three things, what's predicted, how well, and what's done about it, though not quite in that order. Right. The culminating three steps are the same three steps that data scientists have always been doing since the sixties, when we first started using this to do credit scoring and what have you: prepare the data according to what you need and what you're trying to predict; do the actual machine learning part, train the model, that's the rocket science part, learning from data to make the model; and then deployment, right?
That's the actual integration. So by definition, those three have always been there, and since the other three are pre-production, the original trio are actually steps four, five, and six. You know, it's kind of like Star Wars, right? A New Hope was episode four. So the original trio of steps that's always been definitionally part of any of these predictive or machine learning projects, prep the data, train the model, and then deploy the model, starts with step four, because the prior three are pre-production steps. But all six steps require deep collaboration with the business side, and the prior three concern themselves with defining what's predicted, how well, and what's done about it.
Just not quite in that order. There's also a Harvard Business Review article that overviews it, so you don't have to have the book. But if you want to participate from the business side in one of these projects, that's the purpose of the book: not only will you delve into these six steps across the six main chapters, but along the way you'll ramp up on that semi-technical, accessible understanding so that you're prepared to participate.
David Rice: Alright, well, unfortunately that's all we have time for today, Eric, but I wanna thank you for coming on the show. This has been super fascinating.
Eric Siegel: Excellent, David. Thank you so much for having me.
David Rice: Listeners, if you haven't done so already, head on over to peoplemanagingpeople.com/subscribe and get signed up for the newsletter. You'll get podcasts like this and all the latest articles we publish straight to your inbox. And if you wanna learn more about this topic, definitely check out The AI Playbook. It's worth your time.
And until next time, stay curious about predictive AI. Don't just get lost in the generative buzz.
