Resilient Supply Chain — stories and strategies that keep business moving
The Resilient Supply Chain Podcast is where global leaders explore how to make supply chains stronger, smarter, and more sustainable.
Hosted by Tom Raftery, technology evangelist, sustainability thought-leader, and former SAP Global VP, the show features C-suite executives, founders, and innovators from some of the world’s most influential companies. Together, we examine how organisations are building supply chains that can withstand shocks, adapt to change, and compete in a decarbonising economy.
New episodes drop every Monday at 7 a.m. CET, packed with real insight, not PR fluff.
From resilience and risk mitigation to AI-driven visibility, circular design, and ESG transformation, the podcast unpacks the data, systems, and strategies shaping global operations.
You'll hear from the people doing the work on:
- business continuity and crisis response
- Scope 3 emissions and supply chain sustainability
- digital twins and predictive resilience
- ethical sourcing and due diligence compliance
- nearshoring, automation, and future-ready logistics
Because a supply chain can't be sustainable unless it's resilient, and it can't be resilient unless it's sustainable.
Resilient Supply Chain+ subscribers also get access to bonus episodes, including highlight reels, extra analysis, trend briefings, and other subscriber-only insights.
If you’re a supply chain executive, sustainability strategist, or technology leader, this show gives you an edge.
Subscribe now and join the global conversation redefining how the world moves, makes, and measures everything.
AI in Supply Chain: Automation Is Not Autonomy
Can AI make better supply chain decisions, or just make bad ones faster?
In this episode of Resilient Supply Chain, I’m joined by Simon Bezrukov, Chief AI Officer at Bristlecone, for a grounded conversation about AI in supply chain, resilience, risk, data, visibility, and the uncomfortable bit nobody likes to put on the first slide: accountability.
Simon’s core point is sharp: AI agents are great at doing the paperwork of decisions, but they’re not yet great at owning the consequences. And that matters now because supply chains are under pressure from volatility, geopolitical shocks, cost constraints, sustainability demands, and the growing temptation to automate first and ask governance questions later. A marvellous human habit, really.
You’ll hear how agentic AI can help with micro-decisions, missing data, supplier communications, replanning, and playbook orchestration, but also why autonomy without guardrails risks creating “fast and confident mistakes”. We break down why LLMs are brilliant explainers, but not supply chain decision engines, especially when the real problem is optimisation across service, cost, cash, carbon, and risk.
You might be surprised to learn why more data does not always mean better forecasts, why stress testing may matter more than forecast precision, and why a smaller, well-governed model can beat a perfect digital twin nobody trusts. Simon also explains why human expertise is not being replaced. It is being amplified. For better and worse.
🎙️ Listen now to hear how Bristlecone is cutting through the AI hype and helping build more resilient, practical, and sustainable supply chains.
Executive Wins Podcast
The Executive Wins Podcast features inspiring executives who share their biggest wins.
Listen on: Apple Podcasts Spotify
Podcast supporters
I'd like to sincerely thank this podcast's generous Subscribers:
- Alicia Farag
- Kieran Ognev
- Gary Lynch
And remember, you too can become a Resilient Supply Chain+ subscriber. It is really easy and hugely important, as it will enable me to continue to create more excellent episodes like this one and give you access to bonus episodes of topical, timely supply chain resilience analysis.
Podcast Sponsorship Opportunities:
If you/your organisation is interested in sponsoring this podcast - I have several options available. Let's talk!
Finally
If you have any comments/suggestions or questions for the podcast - feel free to just send me a direct message on LinkedIn, or send me a text message using this link.
If you liked this show, please don't forget to rate and/or review it. It makes a big difference to help new people discover it.
Thanks for listening.
Agents And Consequences
SPEAKER_01: Agents are great at doing the paperwork of decisions, but they're not yet great at owning the consequences.
Welcome And Big Idea
Simon And Digital Twins
SPEAKER_00: You've just heard Simon say it. AI agents are great at doing the paperwork of decisions, but they're not yet great at owning the consequences. And that, for me, is the real AI question in supply chain. Not can it automate, but what should it never decide alone. Good morning, good afternoon, or good evening, wherever you are in the world. Welcome to episode 121 of Resilient Supply Chain, stories and strategies that keep business moving. I'm your host, Tom Raftery. Today I'm talking with Simon Bezrukov, Chief AI Officer at Bristlecone, about agentic AI, LLMs, digital twins, forecasting, simulation, and the governance nobody wants to talk about until something expensive breaks. The big idea: AI can speed up supply chain decisions, but resilience still depends on judgment, guardrails, and knowing what not to model. Let's get into it. Simon, welcome to the podcast. Would you like to introduce yourself? Yeah, of course. Thanks, Tom.
SPEAKER_01: Good to be here. So I am the Chief AI Officer here at Bristlecone, which means that my job is pretty simple in theory and hard in practice, because what I do is take AI out of hype mode and turn it into something that actually moves supply chain metrics, right? So better service, lower inventory, faster decisions. That means building practical AI products and not slides, figuring out where AI works, where it doesn't, and where humans still matter a lot. Prior to Bristlecone, I spent 25 years in consulting, most recently with McKinsey, where I was a member of QuantumBlack, which is where they put all the super nerds, and I helped the firm drive the digital twin service line. I think we'll talk more about digital twins today, because as you might imagine, when clients want to build twins, predominantly they focus on supply chains. And for those in the audience who might not be familiar with digital twins, they're simply a virtual representation of physical assets that mirrors their behavior over time. So think like the movie The Matrix, right? A complex simulation of the real world. I originally started as a techie back in the distant year 2000, building video games, which actually still continues to be a passion to this day. Got into operations research and never looked back.
SPEAKER_00: Okay. And talking about AI and supply chain, let's ground it a bit in reality first. So let's ask: what's the real supply chain problem leaders are struggling with right now?
Agentic AI Reality Check
SPEAKER_01: I think that there are a couple, right? And I know that the set of topics we're going to discuss today is the AI myths in supply chain. I do think that AI is absolutely delivering value in supply chains today, but the only way that works is if we marry up operational expertise, decision science, right, so math under constraints, and very clear objectives. Otherwise, it's optimizing for its own thing. And so I think that AI in supply chain today means three things, right? It's the ability to predict, so forecasts and looking at different kinds of risk signals. It's about optimization, right? So choosing the best plan under the variety of constraints that we have. And it's about automation. And so I think we'll talk today about how all these different new capabilities, so generative AI, LLMs and such, can help us build better systems.
SPEAKER_00: So, I mean, let's talk about some of the myths you were mentioning there before we kicked off. And let's talk about something like agentic AI, which has become the newest buzz term around AI. It was generative AI with all the buzz, but now it's gone more towards agentic AI. And when people say agentic AI in supply chain, what are they really talking about?
Guardrails And Accountability
SPEAKER_01: Yeah, if you look at your LinkedIn feed, I suspect probably on the order of 50% of your posts are going to be how somebody cracked a problem using agentic. But yeah, maybe we can define some terms first, right? So when I say agentic AI, I mean software with agency, right? As in software that can plan its own actions, then call a set of tools that it has access to, and then by itself execute multi-step workflows with minimal prompts from a human. And in the last month in particular, some of your listeners might have heard about what's going on with OpenClaw, right? Which is the evolution of Clawdbot and Moltbot. It's very interesting. Moltbook, by the way, is a Facebook that was created by agents, for agents. It's the biggest and fastest growing social network. Folks are asking how close are we to AGI, artificial general intelligence? So basically, AI that is equivalent to or better than a human across a wide range of cognitive tasks. By the way, I don't think that we're there yet, because what we're seeing with all those agents is emergent behavior. But there's certainly a lot of interest, because my thesis here is that although agentic AI is promising, this notion of the autonomous supply chain is a bit of a category error, right? Which is when you confuse what something is with what something does. Because to me, the core issue at hand is that supply chains aren't a single problem. I think of them as a set of competing objectives, because they have different constraints. You also have different kinds of human decisions. Autonomy without guardrails becomes, I heard somebody say, fast and confident mistakes. The worst kind. Because it's also important to recognize, and I think it's a distinction that people don't make often enough, that there's a huge difference between automation and autonomy. So a lot of what is getting the hype, and I think the credit, isn't autonomous, it's automated. And honestly, we've had automation for years, right? Autonomy, on the other hand, requires something that is fairly difficult for us to articulate. It's clear objectives. If an agent is going to make decisions comparable to what a human might be doing, it also needs reliable state visibility. So typically that means that an action has very immediate feedback, so the agent knows what it's doing. It needs safe action spaces, right? So environments where it can experiment, and also, ultimately, accountability. I think the last point is something that we don't talk about often enough, because there are use cases right now where people are actually deploying agents, and I would trust maybe 10% of the claims that are made publicly. Almost like a closed-loop system where AI is allowed to make micro-decisions, right? Can you automatically open a ticket? Can you go and fetch me some missing data? Can you propose a replan or draft a letter to my supplier? Absolutely. That is the sweet spot. That's the strike zone for these kinds of agentic systems. Or orchestration of playbooks, right? So if you have some sort of logic that you already have in your head, like if my shipment's late and I have a high-priority customer, there is a limited set of alternatives. Please pick amongst them and propose different methods by which we can expedite or reallocate stock. Perfect. Because those are the kinds of actions that can be automated almost entirely by the use of agentic.
But really, where these systems fall apart is when the correct answer depends on trade-offs that we haven't codified or the leadership hasn't agreed to. Let's say you have to trade service level for cost or carbon impact, right? These are very complex trade spaces. And I feel that we haven't even been able to express how humans are making those decisions, right? Because everything is an exception rather than the rule. And that's where agentic systems don't function well. Or, of course, if you have irreversible consequences, right, where an action that's taken by an agent has an impact on a key factor like losing a customer, contractual penalties, or in any way endangers human life. My favorite saying, and I'm going to borrow it from somebody else, is that agents are great at doing the paperwork of decisions, but they're not yet great at owning the consequences.
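To make the guardrails idea concrete, here is a minimal sketch (an editorial illustration, not Bristlecone's implementation) of how an agent's proposed actions might be routed: reversible, low-dollar actions with pre-agreed trade-offs auto-execute, uncodified trade-offs become proposals for a human, and anything safety- or contract-critical stays human-only. All field names and thresholds are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_EXECUTE = "auto_execute"   # agent may act on its own
    PROPOSE_ONLY = "propose_only"   # agent drafts, human approves
    HUMAN_ONLY = "human_only"       # agent stays out entirely

@dataclass
class ProposedAction:
    name: str
    reversible: bool          # can we roll it back cheaply?
    est_cost_impact: float    # dollars at stake
    codified_tradeoff: bool   # is the service/cost/carbon trade-off pre-agreed?
    safety_critical: bool     # could it endanger people or contracts?

def route_action(a: ProposedAction, auto_cost_limit: float = 5_000.0) -> Route:
    """Guardrail: decide whether an agent may execute, only propose, or stand aside."""
    if a.safety_critical:
        return Route.HUMAN_ONLY
    if not a.codified_tradeoff:
        # leadership has not agreed the trade-off, so the agent only drafts options
        return Route.PROPOSE_ONLY
    if a.reversible and a.est_cost_impact <= auto_cost_limit:
        return Route.AUTO_EXECUTE   # the "paperwork of decisions" sweet spot
    return Route.PROPOSE_ONLY

if __name__ == "__main__":
    ticket = ProposedAction("open_data_quality_ticket", True, 0.0, True, False)
    reroute = ProposedAction("reallocate_stock_to_key_account", False, 80_000.0, False, False)
    print(route_action(ticket))    # Route.AUTO_EXECUTE
    print(route_action(reroute))   # Route.PROPOSE_ONLY
```

The point of the sketch is simply that the escalation logic, not the agent, encodes who owns the consequences.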
SPEAKER_00: A sidebar: the software I use to edit these podcasts is called Descript. And they have an agentic AI system built into it, which they've called Underlord. And one of the best features built into Underlord is that when you ask it to do something complex and it messes it up completely, there's a little button at the bottom which says roll back. So you can revert back when it does mess things up beyond all recognition, thankfully, because it messed up several things for me when I tried it a couple of times. So I get your point about having to be able to take responsibility.
SPEAKER_01: Yeah, that's brilliant. I also like the subtle nod, right? Instead of naming it Overlord, they went with Underlord. So the human is still in control.
SPEAKER_00: And, just curious, what decision would you never fully delegate?
SPEAKER_01: I think it would be anything that has a direct impact on human life. Because I think it was maybe IBM, but at some point somebody said computers will never make management decisions. Now, of course, 30 or 40 years have passed since that statement was uttered, and I think management decisions are getting made all the time. And there are a lot of frameworks for defining this, because it's interesting, right? You would need to look not only at the first tier of impacts, but at the second and third tier consequences of any decision the system makes. And honestly, this is one of the spaces that's evolving right now, because we've never had the ability for an AI to take as many actions and be as forward-leaning. Because in theory, you could allow it to make most, if not all, of the decisions, but that's just not prudent. So I actually really like some of the advances that are happening in the field right now around the degrees of autonomy and the controls that need to be put into place. It's kind of like riding a bike. First you ride tandem, then you kind of go hands off. And so I think that more and more, as these decisions get made by autonomous systems, we can prove at a minimum human equivalence, kind of like with self-driving cars, right? They're a reality now. But for the longest time, there was an argument about even allowing them on the road. The question is the statistics, right? And once we accumulate enough confidence to say that these systems behave as safe as or safer than humans, that will allow us to relax some of those barriers. But I think that humans are still going to be in control of the really big decisions that steer the ship.
[Ad] Executive Wins Podcast
SPEAKER_00: There's another myth, that LLMs can solve complex supply chain decisions. So, what problems look like language problems but are really optimization problems, for example?
Prediction Limits And Simulation
SPEAKER_01: Maybe I could start with a few definitions for folks who haven't studied or been as close to LLMs. I want to differentiate between LLMs, which are large language models, and, just broadly speaking, generative AI. A large language model is nothing but an algorithm that's trained on a ton of text data. So usually, let's say, all the internet. And what it's particularly good at is predicting the next word, given a couple of structural assumptions, right? You're looking at the patterns of human language, the structure of the sentence, and then context. There are a couple of different schools of thought, but I am more in the camp of thinking of it as almost a fancy autocomplete. I'm probably going to get a little bit of heat for that perspective from some of my colleagues in the industry. Whereas GenAI, actually, is using fairly much the same set of technology, right? We're using deep neural networks. And generative technologies are systems that create new content. It's usually multimodal, right? So you can create text, images, code, audio, or video. But it's the same premise, right? They're learning from huge data sets, but then they're attempting to create a novel output that resembles human-created work. And we are very close to being able to pass the Turing test, right? Having gone to school for computer science, that was always sort of the holy grail, right? Can there ever be a machine such that, when a human interacts with it, the human can't reliably discern whether they're talking to the computer or to another human being? Both LLMs and GenAI, I think, have effectively passed the Turing test at this point, and it makes it seem that they can do magic, right? But there are certain things that LLMs were specifically not designed for. And so the way that I think about it: LLMs were not built, specific to supply chain, to handle things like constraint satisfaction, right? So your capacity, lead times, MOQs. They're not very good at multi-objective optimization, right? So let's say we need to balance the level of service with cost, cash, carbon, what have you. And then numerical reasoning. These models, unless they're very extensively customized, don't reason across your enterprise without proper grounding and tools. What actually works better is what in the industry we call design patterns. The first is coupling LLMs, so a large language model, basically a chat interface that you can engage with, and retrieval, because LLMs are fantastic at indexing documents. So think of your policies, your SOPs, your contracts. And the power comes from this technology being able to infer the context of your question, which is significantly better than searching for a specific term. Even 10 years ago, if you wanted to find a document that had to do with a certain policy, it was binary, it was a one or a zero, right? You type in a term and you either find it or you don't. But maybe there's another word that's kind of close to what you were trying to communicate. And LLMs, because they have a really powerful way of associating concepts, are able to do this retrieval process particularly well. So that's one design pattern. The second one is using them just as tools, right? So imagine that you're a supply chain planner and you want to be able to run some sort of a scenario. You can ask the LLM to go and call an optimizer, run a simulation, and then bring you back the results.
Now, the advantage there is that you don't have to learn all of these decision support instruments with interfaces that were clearly made for analysts and data scientists. So I think it allows us to empower the folks in our organization who need access to this information to engage with fairly sophisticated decision support tools in a way that was just never available before. And maybe the last one is explainability, basically combining an LLM with a planner as a co-pilot. So you can go in and say, hey, there are two scenarios that I'm considering, plan A and plan B. Can you explain to me why A is better than B, right, after some sort of mathematical calculation happens, usually in a different piece of software? And that is the most promising technology, right? In fact, we, as Bristlecone, are doing a lot of investment in that area, developing some of these capabilities. Because if you think of a typical enterprise with a sophisticated supply chain, very seldom do you have a single planning system. You probably have a combination of systems that are working together to decide on some of your key network parameters, like, let's say, your reorder point for a certain SKU. And even if you just have a single system, there might be opaqueness, where you don't know why it made a certain decision. And even if the logic is laid out very clearly, how often do our folks in the field override these? So they get an output from a system and then they just recalibrate using human intuition. Maybe they have a relationship with that supplier, so they know something that the machine doesn't, and so they override the decision. And there's a whole field of explainable AI. I actually think that is one of the important themes to watch for this year, because as individual vendors build these explainable AI capabilities into their own systems, one thing that we noticed is that there's not really an emphasis on zooming out and creating a capability that looks across all of your planning tools and lets you understand why a certain decision was made, by which system, and then maybe even has an inference where it says, I think the math that was used to calculate your reorder point is this. And we can do that using some techniques that we don't have time to get into on this call, but by looking at purely historic data, just your supply data and your demand data, and then identifying instances when there was either an overstock or an understock situation for a certain SKU or combination of SKUs, we can reconstruct the decisions that were made by some of those tools without knowing the specific algorithms within them. And then the goal of this kind of diagnostic capability is to go back and actually fix the issue in whatever planning system you're using. So I think LLMs as a planner co-pilot, and an explainer of why something happened, are probably the most promising set of technologies when it comes to adopting this. And maybe my macro-level takeaway is that I think of generative intelligence and LLMs as a productivity layer on top of decision intelligence. And those decision intelligence platforms are already something that you have in production, right? Those are the things that are helping you day to day. They just make them more accessible. They're not a substitute for any hardcore optimization or simulation technologies.
They just make it easier for your planners, for the folks on the ground in the warehouse, to engage with these systems, as opposed to relying on a data scientist or a highly technical team to go and help you make some of the decisions.
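As an illustration of the "LLM as a productivity layer on top of decision intelligence" pattern Simon describes, here is a minimal sketch in which the language layer only routes a planner's question to a numerical engine and then narrates the result; it never does the optimization arithmetic itself. The plans, weights, and service floor are invented for the example, and the "LLM" is a stand-in function rather than a real model call.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Plan:
    name: str
    cost: float          # total landed cost
    service: float       # expected fill rate, 0..1
    carbon_kg: float     # transport emissions

def optimise(plans: List[Plan], w_cost=1.0, w_service=50_000.0, w_carbon=0.1,
             min_service=0.95) -> Plan:
    """Pick the plan with the best weighted score, subject to a service floor.
    This is the numerical engine; the language layer never does this math itself."""
    feasible = [p for p in plans if p.service >= min_service]
    if not feasible:
        raise ValueError("No plan meets the service constraint")
    return min(feasible, key=lambda p: w_cost * p.cost
                                      - w_service * p.service
                                      + w_carbon * p.carbon_kg)

def copilot_answer(question: str, plans: List[Plan]) -> str:
    """Stand-in for the LLM layer: route the question to the optimiser,
    then explain the result in plain language."""
    best = optimise(plans)
    others = [p.name for p in plans if p is not best]
    return (f"For '{question}': recommend {best.name} "
            f"(cost {best.cost:,.0f}, fill rate {best.service:.0%}, "
            f"{best.carbon_kg:,.0f} kg CO2). Alternatives considered: {', '.join(others)}.")

if __name__ == "__main__":
    candidates = [
        Plan("air freight expedite", cost=120_000, service=0.99, carbon_kg=18_000),
        Plan("ocean + safety stock", cost=85_000, service=0.96, carbon_kg=4_000),
        Plan("do nothing", cost=60_000, service=0.88, carbon_kg=2_500),
    ]
    print(copilot_answer("Which replenishment plan for SKU-123 next quarter?", candidates))
```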
SPEAKER_00: Okay. So I guess the top line is LLMs are brilliant explainers. They're not decision engines. Yeah, exactly. Okay. Let's talk about prediction. Sure. More data plus AI equals better predictions. Except... Yeah, maybe.
SPEAKER_01: Maybe. So just a funny anecdote, right? This weekend I was at Baskin-Robbins. By the way, I love their business model, right? 31 flavors of ice cream, so every day you can try something different. Which is great. I spent, I don't know, a good five or six minutes thinking through my options and walked out with the one that I would usually pick, butter pecan. But what if there were 300 flavors in front of me? I definitely think that for humans, decision paralysis is a real thing, right? So more isn't necessarily better. To a certain extent, I think AI and machine learning, all these things, help you, because that's the one thing that scales really well, right? The algorithms are able to do a lot of that. But for us to believe that machines can predict the future, because we're talking about whether more data is better for more accurately predicting what happens in the future, you have to buy into the hypothesis that the past looks like the future.
UNKNOWN: Yeah.
Resilience Over Precision
SPEAKER_01: And when it doesn't, then we're going to be in a lot of trouble. And the headline message here is that more data helps right up until the world changes. In supply chain, we live in discontinuities, like pandemics, geopolitical shocks, port closures, all those sorts of things. And under this sort of true uncertainty, machine learning models become confidently wrong. The worst kind of wrong. But I feel like that's really the difference. And that's where I wanted to talk a little bit about other means of thinking about it, in particular simulation. And there are multiple ways of approaching that subject, right? There's a simple approach, where each of us as a supply chain planner engages in scenario planning. What happens if my lead times double? What if the lane shuts down? Simulations offer you a more sophisticated way of examining these strategies. So a few years ago, I had a great aerospace client, and they were manufacturing aircraft. In fact, this happened right before the pandemic. And they were very forward-thinking, because when we sat down with them originally, we said, okay, we are going to help you build a production simulation for your facility, and we have two ways to do it. There's a simpler way where we can just use the statistics. And the interesting thing is that stats will often work just because of high volume; the law of large numbers kicks into effect and things just work. Or, we said, we can do something else for you: we can build you a discrete event simulation. And it was a very positive experience, right? Because they were very forward-thinking. They said, okay, well, let's try this other, more sophisticated thing, because we trust you. And with discrete event simulation, it's the difference between deductive and inductive thinking. So top-down analysis, which is kind of machine learning, and bottom-up analysis, where you're trying to use physics-based principles, which is sort of real-world mechanics, and you're trying to get that to generate data that's reflective of reality. And so in practice, what we built for them: we took their factory and we looked at each of the assembly stations. And the way that it works usually, you have an airframe that's moving through, let's say, 10 of those. Each of them has dedicated equipment, then dedicated parts, subject to the bill of materials, and also skills. So people: the electrician versus somebody who's doing surface work on the aircraft, et cetera, or welders. Basically, workforce is deployed against each of those stations, and then both parts and workforce are consumed at a certain rate. And then when you put all the parts in, you take the aircraft and you move it to the next station, and that repeats. And so we worked with them to gather this information. We basically calculated it through a time and motion study; they actually had some pretty good data when we started. But the interesting unlock was that COVID happened. This capability gave them an uncanny ability to predict throughput: 10% error, even under the uncertainty of COVID, when critical components became scarce, there was absenteeism, so literally fewer people showing up to work, and even the effectiveness of the workforce that they had was reduced, because people had to wear protective equipment and there were other human factors in the mix.
But if you build that kind of a system, one that replicates reality at the appropriate level of abstraction and can then generate the data, it's a very powerful way to model something that withstands the test of uncertainty. It definitely takes longer to build those kinds of models, but I think they offer us a really powerful tool. And so, I guess my practical takeaway is that I don't think accuracy and precision are about the data, right? Because there's a hypothetical maximum that you can achieve given the method that you choose. And sometimes you just need to flip the script and look at bottom-up tools like simulation methods versus top-down; so, the difference between inductive and deductive reasoning.
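For readers who want a feel for the bottom-up approach, here is a toy discrete-event-style line model (an editorial sketch, not the client model Simon describes): airframes flow through sequential stations with noisy cycle times, absenteeism stretches those times, and Monte Carlo replication yields a throughput distribution rather than a point forecast. All numbers are made up.

```python
import random
import statistics

def simulate_line(n_units=20, stations=10, base_hours=40.0,
                  absenteeism=0.0, seed=None) -> float:
    """Tiny bottom-up line model: each unit passes through every station in
    sequence; a station's cycle time is lognormally noisy and stretches when
    fewer people show up. Returns total hours to finish all units."""
    rng = random.Random(seed)
    station_free_at = [0.0] * stations     # when each station next becomes free
    unit_ready_at = 0.0
    finish = 0.0
    for _ in range(n_units):
        t = unit_ready_at
        for s in range(stations):
            start = max(t, station_free_at[s])
            cycle = rng.lognormvariate(0.0, 0.25) * base_hours / (1.0 - absenteeism)
            t = start + cycle
            station_free_at[s] = t
        unit_ready_at += base_hours        # next airframe enters one slot later
        finish = t
    return finish

def throughput_distribution(absenteeism, runs=200):
    makespans = [simulate_line(absenteeism=absenteeism, seed=i) for i in range(runs)]
    return statistics.quantiles(makespans, n=20)  # cut points in 5% steps

if __name__ == "__main__":
    normal = throughput_distribution(absenteeism=0.0)
    covid = throughput_distribution(absenteeism=0.15)
    print("normal P5/P50/P95 hours:", round(normal[0]), round(normal[9]), round(normal[18]))
    print("covid  P5/P50/P95 hours:", round(covid[0]), round(covid[9]), round(covid[18]))
```

The useful output is the spread between the percentiles, not any single number: that is what a top-down forecast fitted on calm history cannot give you when the regime shifts.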
SPEAKER_00: Okay. So would it be fair to say that resilience is about a range of outcomes, not forecast precision?
Human Expertise Still Matters
SPEAKER_01: Yeah, absolutely. Because we want to be prepared for the uncertain, right? It's almost like that cone of uncertainty that they draw when they're plotting the path of a hurricane. But even as good as these systems have gotten, there's always going to be a range of outcomes. And I think as supply chain experts, our job is to help prepare for the best case scenario, the average case and the worst, and then develop plans to address each of these, which lets us build resilient networks. And by the way, that's where everything is headed. I was following Davos and all the conversations that were taking place there earlier this year, and the perspective has changed, because last year was all about just-in-time manufacturing, right? How do we run our network as lean as possible? How do we reduce stock, reduce inventory on hand, for the sake of efficiency? I think the tone of the conversation has changed instead, and it's all about resilience, right? And how do we build resilience into our supply chains? Because the world of today is very, very different than it was just a year ago.
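A hedged sketch of what "prepare for a range of outcomes" can look like in practice: enumerate a handful of named scenarios (lead time doubles, lane closed, demand spike), check whether stock plus pipeline bridges the lead time, and flag the ones that need a playbook. The figures and the coverage formula are illustrative only, not a recommendation from the episode.

```python
def days_of_cover(on_hand, daily_demand, pipeline, lead_time_days):
    """Crude coverage check: can stock on hand plus pipeline bridge the lead time?"""
    return (on_hand + pipeline) / daily_demand - lead_time_days

scenarios = {
    "baseline":              dict(on_hand=900, daily_demand=30, pipeline=600, lead_time_days=20),
    "lead time doubles":     dict(on_hand=900, daily_demand=30, pipeline=600, lead_time_days=40),
    "lane closed, slow reroute": dict(on_hand=900, daily_demand=30, pipeline=150, lead_time_days=45),
    "demand spike +40%":     dict(on_hand=900, daily_demand=42, pipeline=600, lead_time_days=20),
}

for name, s in scenarios.items():
    slack = days_of_cover(**s)
    verdict = "ok" if slack >= 0 else f"gap of {abs(slack):.0f} days -> needs a playbook"
    print(f"{name:26s} {verdict}")
```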
SPEAKER_00: As the host of the Resilient Supply Chain Podcast, I'm delighted to hear that. You've come to the right place, Simon. Well, thank you. Which brings us to the human question. Now, AI apparently eliminates human expertise. Yeah. Yeah. I do hear that sometimes.
Minimum Viable Modeling
SPEAKER_01: But you know, it's funny. All of these myths, by the way, make me seem like a contrarian, so maybe we should do another podcast, not for myths, but best practices. But I guess the headline here: I feel that AI does not replace expertise. Instead, it amplifies it, both for better and for worse. The more automated the system, I think, the more dangerous the unexamined assumptions become. I have a favorite story that maybe some of your listeners have already heard, but this is back in World War II. There was this task force that was charged with increasing the survivability of our aircraft. And they were analyzing this data, and they looked at the planes that had returned, and they found all this damage on the wings and the fuselage of the plane. And so the top idea was, okay, let's add armor to these areas; we're going to increase the survivability of the aircraft. But there was one statistician in that room who raised their hand, stood up and said, wait, do you guys actually think that maybe the holes represent the areas where the aircraft can be hit and still come back? We might be missing the ones that didn't. So, classic survivorship bias. Because of his work, we significantly improved the survivability of aircraft. But we have a lot of this in supply chains. So think of all the undocumented processes, the personal relationships between suppliers and procurement officers, all the things that we haven't codified. It's particularly dangerous, and leaders get burned, when teams are asked to trust outputs that they can't explain. It's a big no-no. Nobody can tell you what the conditions are that will make this model fail. That kind of testing is just not really done in our industry, but it should be. And also incentives are to blame, right? We need to be very careful, because a lot of incentives these days are shifting to just do what the tool says instead of own the decision. So maybe, if I was to synthesize a takeaway on the basis of what we just said: if you can't explain the assumptions, you really can't and shouldn't operationalize the output. And the way to get there is that you need to build AI literacy and also critical thinking into your supply chain leadership team, not just the data science team, to be very effective when using these tools.
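One way to act on "nobody can tell you the conditions that will make this model fail" is to probe those conditions deliberately. The sketch below (an editorial example with synthetic data, not a method from the episode) replays a naive moving-average forecaster against futures where demand shifts by known factors and reports how badly it misses in each regime.

```python
import random
import statistics

def moving_average_forecast(history, window=3):
    return statistics.fmean(history[-window:])

def stress_test(shock_multipliers, horizon=12, seed=7):
    """Probe the conditions under which a naive forecaster breaks: replay it
    against futures where demand suddenly shifts by a known factor and report
    the scaled error each regime produces."""
    rng = random.Random(seed)
    history = [100 + rng.gauss(0, 5) for _ in range(36)]   # stable past
    report = {}
    for shock in shock_multipliers:
        future = [100 * shock + rng.gauss(0, 5) for _ in range(horizon)]
        errors = []
        h = list(history)
        for actual in future:
            errors.append(abs(moving_average_forecast(h) - actual))
            h.append(actual)                                # model keeps updating
        report[shock] = statistics.fmean(errors) / (100 * shock)  # scaled MAE
    return report

if __name__ == "__main__":
    for shock, mae in stress_test([1.0, 1.3, 2.0, 0.5]).items():
        print(f"demand x{shock}: mean error {mae:.0%} of true level")
```

Even a toy test like this makes the failure regimes explicit, which is exactly the assumption-surfacing Simon is arguing for.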
SPEAKER_00: So the more complex the system, the more dangerous blind trust becomes. Let's talk about overbuilding. I mean, obviously the myth is you need a digital twin of everything. So how do you decide what not to model?
Value Based Use Cases
SPEAKER_01: People get enamored with the idea of an uber model. So you're going to build this massive simulation of everything that's going to tell you everything. But practically, I think knowing what to ignore is actually more important than knowing what to optimize. Because my headline for this topic is that our goal isn't to model reality, but the objective, right? The reason why we're doing this is to make better decisions. And sometimes the decision is don't model it, because it's not actionable, right? You won't act on it, even if you have the information. Or the signal is too weak, so you're prone to overreact, and then correcting is going to cost you more than the problem that isn't there. Or the complexity cost is too high, and even if you know what the best decision is, it's impractical to implement it. And so I have a funny, I don't know if it's funny, but an interesting anecdote. I was working with a client who really wanted to predict the complexity of call center service requests. We did what they asked, right? We encoded the messages as text, we ran them through all sorts of parsers and classifiers, and tirelessly tried to figure out, given this service request, how long is it going to take to close the ticket? Nothing worked. Do you know what ended up working in the end? It's very counterintuitive, and it's one of my favorite outcomes. The best predictor was the length of the message. So if you simply counted the number of words that were used to describe the problem, that was the best predictor of how long it would take to close the ticket. And in retrospect, it's, duh, guys, you were solving the wrong problem. Because imagine I call a help desk and I say, hey guys, I need to reset my password, I can't get into Outlook. Simple. Compared to me calling them and saying, okay, guys, my machine powers down when I have Outlook open, it's after lunch, and I'm also browsing YouTube. So the more complex the request, the more complex the solution is going to be, and the more expensive. But sometimes simple is best. The same goes for predicting demand. There are very, very mathematically sophisticated techniques that can be used to predict what the demand for your item is going to be. But if you actually do the test, just a three-sample moving average adjusted for seasonality is your best bet. And then you sadly need a PhD to come in and actually give this advice, because if you don't have the credibility, then people are going to look at you funny and say, hey, shouldn't we be using some fancy machine learning algorithm to figure this out? So, yes, I'm all about minimum viable models, which are, by the way, a great way to operationalize this, because you tie them to decisions. So when you're thinking about building inventory buffers, look at your top SKUs. If you're trying to optimize your transportation network, do it for high-volume lanes. Scenario simulations, listen, only look at your critical suppliers as single points of failure. So that's the same thesis as tiered modeling: build deep models for the 20% that drive 80% of the impact, and just use coarse rules everywhere else. And so, just to follow our formula here, my practical takeaway for the group would be that a smaller but well-governed model that drives actual action you can take will always beat a perfect digital twin that nobody trusts.
George Box was the father of simulation models, and he said all models are wrong, but some models are useful. That's something I truly take to heart.
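Simon's "minimum viable model" for demand is easy to write down. Here is one plausible reading of a three-sample moving average adjusted for seasonality, using invented monthly demand with a December peak; the exact seasonal-index method shown is an assumption of this sketch, not his prescription.

```python
import statistics

def seasonal_moving_average(history, period=12, window=3):
    """Minimum viable forecast: a 3-sample moving average of deseasonalised
    demand, re-seasonalised for the next period. 'history' is per-period demand
    covering at least two full seasonal cycles."""
    n = len(history)
    overall = statistics.fmean(history)
    # average shape of each position in the seasonal cycle
    seasonal_index = [
        statistics.fmean(history[i] for i in range(pos, n, period)) / overall
        for pos in range(period)
    ]
    deseasonalised = [history[i] / seasonal_index[i % period] for i in range(n)]
    level = statistics.fmean(deseasonalised[-window:])
    next_pos = n % period
    return level * seasonal_index[next_pos]

if __name__ == "__main__":
    # two years of monthly demand with a December peak (hypothetical numbers)
    demand = [90, 85, 100, 110, 105, 95, 90, 92, 108, 120, 140, 180,
              95, 88, 104, 112, 108, 98, 93, 95, 111, 124, 146, 188]
    print(f"forecast for next month: {seasonal_moving_average(demand):.0f}")
```

The tiered-modeling idea then amounts to reserving anything heavier than this for the SKUs, lanes, or suppliers that actually move the 80% of impact.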
SPEAKER_00: Another way I would summarize this is to say that the cost of modeling can exceed the value of the decision. Yes. Yes, 100%.
Adoption Gaps And Robots
SPEAKER_01: And we just need to be ruthless when we're deciding what to prioritize, which is why I think a lot of our work is helping clients understand value at stake. Because frequently that's not the way that we engage: either we represent the business, which is trying to solve a problem, or we represent IT. Those two parts of the organization don't talk the same language, but they do share one thing in common, right? And that's the amount of dollars that can be saved if they choose to take on one of these projects. So I always say that unless I can tell you how much money this problem is going to save, and express it as dollars and cents and then link it to recurring revenue, you probably shouldn't take that option. You shouldn't take that on, because it's not going to be embraced by the people in your organization. For the best projects that I've done for the biggest clients, the benchmark has always been recurring revenue savings. Because then you go and tackle that problem, and it actually creates a flywheel, because now the organization is saving $20 million a year, and then they use those funds to fund other initiatives, right? So that kind of thinking, business-based thinking applied to use case identification, figuring out where to apply AI, is something that's developing fast in the industry, but that's one muscle that people can work on in the interim.
SPEAKER_00: What's something that AI was expected to solve but didn't?
Lightning Round
SPEAKER_01: That's a very good question. I feel that practical adoption of AI has taken longer than expected because it was sold as a panacea. And when people think about AI, I also think the one thing that they get wrong is the actual definition of what AI means. Even internally, we've been retraining our practitioners, which includes teaching people different skills, even creating different learning paths within our organization as we build out a data practice. Because AI is a combination of skills. So there is classic machine learning, and there's AI which is more generative in nature, so they're kind of coming at the problem from two different ends. And then there are other hyper-specialized disciplines that you need, like optimization, simulation, graph theory. So AI is a combination of things. And we have been using AI in supply chain for a very, very long time. It just hasn't gotten maybe the recognition that it's getting these days, just because of the volume. And the signal-to-noise ratio isn't particularly good, just because everybody decided to take RPA, so process automation, and call it AI. But true AI, we haven't really felt the impact of it yet on our bottom line, but we will in the next two or three years. Because the number of problems where AI truly lifts performance, not just by 1% or 2%, is growing. And by the way, 1% or 2% can be a lot, right? If you're looking at the bottom line and reducing the amount of inventory that you have on hand, it has practical impacts. But I feel that this next year is going to be very interesting, especially as everybody is betting on embodied AI, aka robots. So that's the next big promise that we're going to watch over the course of the next six months and see how that unfolds.
SPEAKER_00: We're going to do a lightning round now, Simon. So one-word or, at maximum, one-short-sentence answers. First one: is forecasting becoming less important than stress testing?
SPEAKER_01: Yes. Okay. So the problem with forecasting is false precision. We are over-rotating on how important it is to be within one or two percent of the actual, whereas what's important is being prepared for a deviation of demand by a large margin. Also, perfectly optimized systems are not fault tolerant. Oh, I'm sorry, lightning round, more later on. Okay, let's keep going. Let's keep going. I was going to tell a fun client story, but next time.
SPEAKER_00: Okay, most overhyped term in AI right now. Agents. One supply chain metric leaders trust too much.
SPEAKER_01: Oh, one time minimal.
SPEAKER_00: One skill every supply chain leader should double down on.
SPEAKER_01: Critical thinking.
SPEAKER_00: Very good. One thing you would never fully automate. Executive decisions. Strategy decisions. Okay. And we've kind of answered this one already, but digital twin of everything, yes or no?
SPEAKER_01: Reality is the best digital twin, right? Unless you are creating an alternate universe, you're going to have to make simplifying assumptions. And so I think we need to choose our fights there, because you don't have enough compute to rebuild this reality. Okay.
SPEAKER_00: Left field question for you now, Simon. If you could have any person or character alive or dead, real or fictional, as a champion for AI in supply chain, who would it be and why?
SPEAKER_01: Oh my god. My personal hero is Stephen Hawking. So he would be my champion.
SPEAKER_00: Okay. We're coming towards the end of the podcast now, Simon. Is there any question that I did not ask that you wish I did, or any aspect of this we haven't touched on that you think is important for people to be aware of?
SPEAKER_01: Yeah, I think so. The missing piece always is governance, because it's not as exciting, right? It's not directly linked to the mission. And so we always leave it on the cutting room floor, or we introduce it to projects at the very end. But especially as we get serious about involving AI in any kind of decision-making capacity: things like audit trails, thresholds for automation, safe rollout plans, can we have the AI test in simulation before it touches live ops? All of that needs to be figured out, and folks need to be building that in from scratch, right? I mean, AI is software engineering. In software engineering, if you spend less than 60% of your time planning and 40% of your time coding, you're doing something wrong.
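To ground the governance point, here is a minimal sketch of an audit trail plus an automation threshold: every AI recommendation is appended to a log, and it only auto-executes below a dollar threshold, above a confidence threshold, and after a simulation dry run; everything else is queued for a human. The thresholds, field names, and file name are hypothetical, not anything described in the episode.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Recommendation:
    action: str
    confidence: float        # model's self-reported confidence, 0..1
    dollars_at_stake: float
    passed_simulation: bool  # was it replayed against a twin/simulation first?

def governed_execute(rec: Recommendation, executor,
                     min_confidence=0.9, max_auto_dollars=10_000.0,
                     audit_path="ai_audit_log.jsonl"):
    """Governance wrapper: log every recommendation to an append-only audit
    trail, auto-execute only within the agreed thresholds and after a
    simulation dry run, and escalate everything else to a human."""
    decision = ("auto" if rec.passed_simulation
                        and rec.confidence >= min_confidence
                        and rec.dollars_at_stake <= max_auto_dollars
                else "escalate_to_human")
    with open(audit_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "decision": decision, **asdict(rec)}) + "\n")
    if decision == "auto":
        executor(rec.action)
    return decision

if __name__ == "__main__":
    print(governed_execute(
        Recommendation("expedite PO-4711 via air", 0.94, 3_200.0, True),
        executor=lambda a: print("executing:", a)))
    print(governed_execute(
        Recommendation("cancel contract with supplier X", 0.97, 250_000.0, True),
        executor=lambda a: print("executing:", a)))
```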
SPEAKER_00: Good. Simon, if people would like to know more about yourself or any of the things that we discussed on the podcast today, where would you have me direct them?
SPEAKER_01: Please shoot me a message on LinkedIn. You can find me there easily, or please come to our website, www.bristlecone.com.
SPEAKER_00: Okay. Simon, that's been fantastic. Thanks a million for coming on the podcast today.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Peggy Smedley Show
Peggy Smedley
Supply Chain Revolution
Sheri Hinish, SupplyChainQueen
Transform Talks: The Supply Chain Transformation Podcast
Villablanca Consulting Limited
Leaders in Supply Chain and Logistics Podcast
Alcott Global
Supply Chain Next
Supply Chain Next
Supply Chain Now
Supply Chain Now
Buzzcast
Buzzsprout