AI is reshaping business & shaping a new future | Author of "AI Value Playbook" joins us

In our latest episode, Lisa Weaver-Lambert, author of "The AI Value Playbook," dispels the belief that AI is incapable of delivering business impact. She also lays out principles for succeeding in your implementation of AI:

1. Your tech stack determines winners: Orgs that were already built to process and leverage data as part of core decision-making are at a huge advantage, especially those focused on leveraging insights to learn and iterate.

2. Leadership and strategy matter: The vision, guiding principles, and culture will dictate the strategy, or the lack of a cohesive one.

3. AI shouldn't be added on top: AI should be viewed as the pathway to removing layers, friction, and complexity.

4. Getting from proof of concept to value is harder: AI lowers the barrier to creating proofs of concept while also layering in far more uncertainty about how to make them production-ready.

5. Centralize AI strategy & decentralize implementation: Orgs should have a cohesive strategy owned by a centralized team, but workflows and use cases should be defined by the teams seeking to gain specific value.

Listen on Spotify | Listen on Apple | Watch on YouTube

Please rate the podcast

If you’ve listened to the podcast, please help us by giving us a rating. It helps us get in front of more people and know that what we’re publishing is delivering value.

Rate us on Spotify | Rate us on Apple Podcasts

And if you have comments, questions, or suggestions: info@designof.ai

New report shows Anthropic (Claude) usage doubled, while OpenAI lost a third of its share

Menlo Ventures published its 2024 report: The State of Generative AI in the Enterprise. It shows the continued maturation of the AI market and clear use cases where the tech is being leveraged. Not surprisingly, task-level use cases that can be directly evaluated and audited are coming out on top.

Also, the layers of the AI stack are becoming more distinct, with some products starting to create their own moats. As we move into 2025, expect the data layer to split as more orgs realize they need a semantic layer to structure and make sense of first-party data.

Thanks for reading Design of AI: News & resources for product teams! This post is public so feel free to share it.

The LLM market share data makes OpenAI look like the big loser. But I suggest throwing out the 2022 and 2023 data, since adoption was so low and companies were leveraging the tech for experimentation rather than impact. 2024 is the year AI became a workhorse for the first time, powering countless products.

Nonetheless, it is compelling to see Anthropic and Claude shoot up. Their focus on UX seems to be paying dividends; that, or OpenAI's erosion of trust is.

Unsurprisingly, prompt engineering is falling off a cliff. It was a band-aid approach for a tech that had no standards yet. For reference, a business that built its product on prompts often had to rebuild all those prompts whenever a model was updated.


AI use & impact assessment survey

Please share your experiences and point of view in our year-end AI research study.

Your lessons and opinions will shape a critically important assessment of how & if AI is positively impacting individuals and teams.

Less than five minutes of your time will help us a lot.

Perplexity is one-upping Google by introducing AI-powered shopping journeys

Perplexity, the upstart GenAI search firm, is firing shots at Google by taking a refreshing look at shopping. Rather than focusing on someone searching for a product (e.g., patio furniture), it takes a very human-centred approach by focusing on what a user is trying to accomplish (e.g., renovate my outdoor living space). The platform then provides ideas, support, and instructions, plus recommends products to buy.

While this is immensely helpful, it brings up the ever-present concern that AI will pick winners and losers for us. Where Google served up dozens or hundreds of results and encouraged us to make our own decisions, AI only shows a handful of options. This is the beginning of the platform as expert and it could change how we interact with the world in a huge way. It could lead to small merchants being shut out or even grow distrust of options that aren’t recommended by a platform.

Alarming data showing that achieving AGI could destroy market wages

Economists at the International Monetary Fund have modeled data showing that if Sam Altman & crew succeed at bringing AGI to the world faster than expected, it could set in motion a total destruction of market wages (aka devalue everything).

Their model also showed that on the expected timeline of AGI, wages will continue to rise as humans continue to do the thinking for the machines.

Read the report




This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

[00:00:00] We have seen it where people are working on something that matters for the organization. Where there's no AI strategy, it just fails because you get very busy people, well-intentioned, doing a lot of proof of concepts that don't go anywhere because they're not necessarily attached to something that is meaningful for the business and don't have a clear business case.

[00:00:22] After all our recent episodes about how AI will change design, we thought it was time to hear about how AI is changing business.

[00:00:30] Our guest is Lisa Weaver-Lambert, the author of the AI Value Playbook, How to Make AI Work in the Real World.

[00:00:37] She interviewed the leaders of 35 businesses who were early adopters of AI to define where it delivers the most value and how they've had to shift their strategies.

[00:00:45] It was worked through as a process redesign, which when I say process redesign, it includes people and skills.

[00:00:53] So you've planned that out from the beginning, where the impact is and where it's going to land.

[00:01:00] And the users of the technology have to be embedded into the solution development as well.

[00:01:06] This is episode 22 and the author of the AI Value Playbook will be sharing with us today about first-hand lessons from executives about how AI delivers value.

[00:01:14] The characteristics of a successful or a poor AI strategy.

[00:01:19] Why some businesses and structures are better set up for success.

[00:01:23] The challenges that legacy organizations face incorporating AI and getting real value.

[00:01:29] And the future of jobs in a world that will be changed by AI.

[00:01:33] The pathway to effective AI isn't about chasing the technology and finding a home for the technology.

[00:01:41] It's about building systematic approaches to evaluation, implementation and measurement.

[00:01:47] The examples that I have of success are all starting with narrow applications that are then expanding into new use cases.

[00:01:56] Lisa is a published author on AI, specializing in advising private equity firms on due diligence and value creation.

[00:02:03] With experience in capital markets, working at Microsoft and Accenture, she has a proven track record of partnering with businesses to develop operational improvements with data and AI strategies.

[00:02:12] And to drive value across various sectors.

[00:02:15] In addition, she has held executive line management positions working with some of the world's best known brands and served on various boards, earning her recognition as a leading woman in technology.

[00:02:25] Her recent book, The AI Value Playbook, provides an essential handbook for non-technical business leaders to quickly formulate a perspective on how to leverage AI.

[00:02:35] Her book draws on conversations and case studies with leading practitioners who share their first-hand experiences successfully driving AI value and pathways to progress.

[00:02:44] Read this book if you want to understand machine learning and generative AI terminology, concepts and the AI technology stack.

[00:02:51] To learn from diverse real world case studies narrated by business leaders in their own voices.

[00:02:56] And to apply a value driven approach to AI applications across multiple business sectors.

[00:03:02] The AI Value Playbook is available on Amazon and all major book retailers.

[00:03:06] The biggest differences that I saw had nothing to do with geography and nothing to do with industry.

[00:03:12] It was really about the tech stack and skills and agility of the business model and the leadership.

[00:03:22] Now, we want to hear from you. How much value are you getting from using GenAI?

[00:03:26] Please help us by taking a five-minute survey to help us understand how and why you use AI.

[00:03:32] We're starting to build a knowledge base of how professionals are getting the most value from AI so that we can publish it in a report.

[00:03:38] You can find the survey link in the show description and on our website, designof.ai.

[00:03:45] Not only will it help us to share more information out into our industry, it will also help us to inform what topics the podcast will cover in 2025.

[00:03:55] Hi, Lisa. Thank you so much for joining us.

[00:03:58] So you've had a pretty varied background.

[00:04:01] You've gone from finance to consulting to working in private equity.

[00:04:05] So what made you decide to pivot to the role of author and to write the AI value playbook?

[00:04:11] I don't think I pivoted.

[00:04:13] The book is a testament of me reflecting on what I have learned working with businesses and also what I have learned from businesses that I wanted to learn more from, really.

[00:04:33] How this book really emerged was during the time of COVID, reflecting on the impact that I had in the private equity firm that I was working with.

[00:04:45] And it became apparent to me that when I documented my thinking and how we could systemize approaches, that there was a connection, that the people got it, that they took it forward.

[00:05:02] And I was thinking about how do I do that for data?

[00:05:06] And I was also working as well with the CEO of a fund.

[00:05:12] And he was starting to ask questions from other people in his country around using data and AI.

[00:05:24] And I thought, how do I scale that?

[00:05:27] And the third point is really this difference between the expectation and then actually what AI is delivering in terms of business value into businesses.

[00:05:40] And I saw that difference in a very marked way when I was working for Microsoft, because obviously Microsoft has been working with AI for over a decade, like most of the tech companies.

[00:05:54] And the expectation then of businesses outside of tech adopting AI and delivering value, there was a big distance there.

[00:06:05] And I really wanted to get into understanding why that was.

[00:06:12] Yeah, so it sounds like you were really trying to compile this knowledge and information well before the current, let's call it, AI boom, which is a fabulous place to have started.

[00:06:26] So our audience really wants to know, you know, how do they build with AI and leverage AI?

[00:06:32] We've had a lot of people on who have talked about their singular experience or more theoretically, but you've really had the chance to speak to so many people that are building and succeeding and failing and whatnot.

[00:06:45] So how can we help the people listening know how to use AI to build effectively?

[00:06:53] So we want to learn as much as possible from you about how to get value when building with AI.

[00:06:58] Let's start with some examples of success.

[00:07:00] What are some stories or examples of people building with AI or for AI that have been particularly successful?

[00:07:09] I think we can break this into what are the commonalities of successful integration of AI.

[00:07:18] Okay.

[00:07:19] And the first tenet of this, I would call strategic alignment and target application.

[00:07:27] Because if you think about AI purely as data and technology, you're going to fail.

[00:07:34] So the first step is really getting a consensus among leadership that the value is there and that AI is being focused in the right places.

[00:07:45] Initiatives have to be aligned with the company's overall business strategy and target very specific problems that AI can address effectively.

[00:07:57] And success really only happens when you start using solutions to improve customer experience, grow revenue, or become more efficient.

[00:08:08] So really starting on specific use cases and problems is so important.

[00:08:15] And really finding the intersection of user needs in the business and AI strength.

[00:08:23] I think that that is highly critical.

[00:08:26] In terms of an example of this, I was working with a company that invests into infrastructure.

[00:08:32] The infrastructure is then used by companies like Amazon for warehousing.

[00:08:38] And it's an intrinsically numeric company.

[00:08:43] But the CEO there made a very specific leadership decision that he wanted his company to be AI first.

[00:08:51] And he wanted his teams to understand and interpret what does AI first actually mean for them?

[00:08:58] Which then involved breaking down this statement into the top use cases where AI was going to be most aligned for value creation.

[00:09:08] But also ensuring that the tech foundations in the company were fit for purpose.

[00:09:15] So there was a lot of remediation there that we can get into.

[00:09:19] And that I think is what I see consistently within companies.

[00:09:24] There has to be a very clear strategic decision.

[00:09:28] But then a roadmap of very practical interventions needs to follow very, very quickly.

[00:09:36] I think as well it has to be explained as a benefit to the people that need to buy into it and back it and not as technology.

[00:09:46] Because sometimes leadership teams might get overwhelmed with the technical details.

[00:09:52] But you've got to bring in stakeholders from different departments.

[00:09:56] You've got to set realistic expectations and don't oversell.

[00:10:02] I've got a good example of this where the expectations weren't set.

[00:10:07] And, you know, this was for transport in London.

[00:10:12] And as you know, if you have a traffic infraction, then the camera will take a photo of you, your reg, send it to you, etc.

[00:10:23] And there's not a lot of wriggle room around that.

[00:10:26] But in this case, a new law had come out.

[00:10:30] And it was to prevent large vehicles going on minor roads in London.

[00:10:36] But being able to track and monitor that was highly complicated.

[00:10:41] And therefore, the team built up a highly sophisticated, or I'm going to call it creative as well, model that set out probabilities to determine whether you were driving on the wrong road.

[00:10:57] And the model itself was very strong.

[00:11:00] I mean, it was very sophisticated.

[00:11:03] But yet the end product was illegal.

[00:11:06] You cannot go to people with a probability that they might have legally been out of line.

[00:11:15] You've got to go with hardcore evidence.

[00:11:18] And that's a real case of months of time and energy from very different skilled people coming in to solve a problem where the last mile of it hadn't been sort of thought through first.

[00:11:33] So actually setting realistic expectations and not overselling is important.

[00:11:38] Making sure that the ownership of the AI solution actually lies within the business unit where the use case originates.

[00:11:48] And treating this really as a process that you're going to improve.

[00:11:55] And if you think it about as process improvement rather than as technology implementation, I think that reframes.

[00:12:02] And then also having a strong ethical foundation as well and making sure that you've got guardrails.

[00:12:11] It also makes me think as well as a case study and interview that I did with the CEO of a company called Vectara, where he really emphasizes in specializing in verticals, industry verticals.

[00:12:24] And I would take that one step further in saying thin slices of processes within industries.

[00:12:32] So don't go after a broad scope.

[00:12:35] He also really emphasized the importance of data quality and governance.

[00:12:41] So that would be the first big category that I would aim to get right.

[00:12:46] But there are other categories that I would give equal importance to as well.

[00:12:50] This episode is brought to you by PH1 Consulting.

[00:12:53] Since 2012, we've been working with midsize and large size organizations to help them design the future of their businesses.

[00:13:00] Our clients have included Microsoft, Mozilla, Dell, Spotify, the National Football League, and many more.

[00:13:07] We offer product strategy services to accelerate your adoption of new technologies and validate new ventures and products.

[00:13:12] We help you envision your North Star, co-create possibilities, and roadmap your way to success.

[00:13:18] As service design and futures thinking experts, we deliver a vision for how to take your product to the next level, particularly how to incorporate AI into workflows to augment the capabilities of your workforce.

[00:13:32] Please visit our website at ph1.ca to book an intro call to tell us about your project.

[00:13:38] Lisa, thank you so much for writing this book.

[00:13:41] It's essential that we have these conversations.

[00:13:44] Just give me a reminder, how many businesses did you speak to across this study?

[00:13:49] I spoke to over 35 businesses in total.

[00:13:55] And I was very sort of deliberate about speaking to a variety of companies, not just in terms of sector, but size and maturity.

[00:14:07] And also geographic headquarters as well.

[00:14:11] They were quite different because I wanted to really gauge where commonalities and differences were.

[00:14:17] Wonderful.

[00:14:18] And I'm bringing this up because you referenced the concern about AI being implemented in perhaps not the ideal way.

[00:14:25] And we've seen some trends through startups in the past.

[00:14:30] Let's say Uber, which was developed off of bad behaviors, right, is really trying to push the envelope as far as possible.

[00:14:36] What I'd like to know is from your conversations, the people leading these companies, are they AI first led organizations where they're trying to break the mental model of how technology is framed?

[00:14:49] Or are they typically leaders who are adding AI into the mix like yet another digital transformation?

[00:14:55] Both.

[00:14:56] The companies that will resonate with you are the ones that are six, seven years old.

[00:15:02] They're cloud native.

[00:15:04] They already have a clear purpose and sizable revenue.

[00:15:09] They have been implementing machine learning, first and foremost, into their processes.

[00:15:18] And generative AI has given them a whole different way of looking at possibilities.

[00:15:25] So they're very different.

[00:15:27] So I was speaking to a company that works in the legal space that does a lot of sort of contract analysis and the CEO is actually a lawyer.

[00:15:37] And I was really interested in this particular leader because I wanted to try and understand how he had led this company, which is actually in itself very, very strong in technology.

[00:15:49] But the domain expertise in this company is equally as important.

[00:15:55] He sort of has a right-hand person who is the technology leader and he is the domain leader, but he equips himself continuously through learning.

[00:16:06] But I give that example because they had already built in a lot of software practices with the goal of streamlining end-to-end contracts.

[00:16:17] That's their whole reason for being.

[00:16:20] And then generative AI opened up a whole different set of possibilities for them.

[00:16:26] And I compare that to a company that I spoke to called Lightrix.

[00:16:30] And this company works a lot with the entertainment industry.

[00:16:36] They have a sort of mission, if you will, to localize entertainment as much as they can.

[00:16:43] So a lot of our entertainment remains in the English language, as you know.

[00:16:48] A few blockbusters get some translations.

[00:16:51] It really doesn't reach that many organizations.

[00:16:54] And again, they were using machine learning.

[00:16:57] And now generative AI has enabled them to have much greater reach.

[00:17:03] And what they're able to do in the same period of time is quite amazing.

[00:17:09] But I think the other area as well are software companies.

[00:17:13] I mean, the CEOs of these companies, because they're often engineers from the beginning,

[00:17:19] have gone back into the tech.

[00:17:23] And they've been really surprised, if you will, about the speed and what is now possible, etc.

[00:17:33] So I think the technology companies are embracing it much faster than maybe other industries.

[00:17:39] This is definitely fascinating, because in our previous episode with Peter Merholz,

[00:17:44] we were talking about design organizations, how organizations are structured.

[00:17:48] And he ultimately proposed a theory, which is the organizations that were design mature

[00:17:53] may not have an advantage when it comes to AI.

[00:17:56] But those that were algorithmically mature, as in they leveraged data as a core part of their

[00:18:02] business units and efficiency gains and ROI, and data had a seat at the leadership table.

[00:18:07] They're the ones that are going to benefit from this.

[00:18:10] I would just like to build on that.

[00:18:13] So I think the biggest differences that I saw had nothing to do with geography and nothing

[00:18:19] to do with industry.

[00:18:20] It was really about the tech stack and skills and agility of the business model and the leadership.

[00:18:30] Those were the big differences that those companies, because they have less legacy,

[00:18:35] they have that very single focus still, you know, very founder led, and they have taken on and built

[00:18:44] themselves on an analytics tech stack.

[00:18:46] That makes a huge difference.

[00:18:48] Now, when I look at that, I wonder the inverse, which are the legacy organizations, the ones that

[00:18:55] move slowly, banks, insurance, healthcare.

[00:18:58] In our next episode, we're actually speaking to someone in healthcare, because healthcare keeps

[00:19:03] getting highlighted as a huge opportunity for digital transformation.

[00:19:06] But again and again, digital transformations fail in legacy organizations.

[00:19:11] What have you learned about mistakes and failures to implement AI in these legacy organizations?

[00:19:17] A tonne.

[00:19:19] I've learned a tonne.

[00:19:21] Let's switch that around to talk about where it is successful.

[00:19:26] And we have seen it successful when these large organizations have been very deliberate about

[00:19:33] specific processes that they want to transform.

[00:19:21] I've got a case study in the book with Lufthansa, the German airline group, and how they've used AI to

[00:19:46] really transform their customer service.

[00:19:49] They worked with a very competent partner to do this.

[00:19:54] It's taken time.

[00:19:55] It's taken iteration.

[00:19:57] But they had a very clear business case for it, because they were under a lot of pressure

[00:20:03] to get back up and running after COVID.

[00:20:06] They did not have the capacity to absorb all the customer inquiries coming in.

[00:20:13] So we have seen it where people are working on something that matters for the organization.

[00:20:19] Where there's no AI strategy, it just fails, because you get very busy people, well-intentioned,

[00:20:27] doing a lot of proof of concepts that don't go anywhere, because they're not necessarily

[00:20:32] attached to something that is meaningful for the business and don't have a clear business case.

[00:20:38] And I think the other challenge is when you were doing sort of more traditional software

[00:20:41] development, if you were sort of 75% there, you knew that you were almost there, if you will.

[00:20:48] Whereas with generative AI, you can spin up a pretty good proof of concept that you just can't scale,

[00:20:55] but it looks very compelling.

[00:20:57] So I think the operating models of these larger organizations are a challenge, because in

[00:21:05] these large organizations, often they are additive.

[00:21:10] People don't make decisions to take things away.

[00:21:14] And when you think about AI, you have to think about simplification.

[00:21:19] And where you have complication, even in the operating model or in processes or in even

[00:21:25] sort of people structures, that's just going to slow you down.

[00:21:30] Now, looking at organizations, quite often they're very siloed, particularly large organizations,

[00:21:37] healthcare, government, legal.

[00:21:39] It really doesn't matter what it is.

[00:21:40] They're very siloed and rarely is there a holistic group that is there to explore the

[00:21:47] implications of one unit to the next.

[00:21:50] Are you seeing that the teams that are successful have this AI center of excellence?

[00:21:55] Do you see that they have a chief AI officer?

[00:21:59] Do you see that there's some governance group that actually looks at things holistically?

[00:22:04] So hub and spoke is the way that I look.

[00:22:08] Every operating model is going to be slightly different, but the investments have to be

[00:22:13] centralized.

[00:22:15] They have to be centralized because there are a lot of technical decisions that have to

[00:22:19] be made centralized.

[00:22:21] But having AI or problem solving skills near to the business, I have seen a lot of benefit

[00:22:29] in that as long as it's coordinated with the center.

[00:22:34] So that's the best way of sort of organizing it, I've found.

[00:22:40] And I think the other point that is really important is explainability and safety of these

[00:22:46] models.

[00:22:46] And if you're not doing things with sort of guidance from a central point, you're putting

[00:22:52] your business at risk.

[00:22:54] And also there's the argument of the right skills in the right place, as well as finding

[00:23:02] the right skills can be challenging depending on which industries you're in.

[00:23:08] But I think, you know, organizations really need to prioritize building internal technical

[00:23:14] teams, hiring specialist engineers.

[00:23:16] And I gave the example of the legal company and they now have what they call legal engineers,

[00:23:24] you know, people who understand.

[00:23:26] You've got the engineering skill set, but understand the domain as well.

[00:23:30] But those are not the only skills to have in place.

[00:23:34] But it's not easy to shift an operating model.

[00:23:37] That's for sure.

[00:23:38] Let's talk specifically then.

[00:23:41] What are some examples that you've seen about either organizations that have been successful

[00:23:47] because they've had a really strong organizational design or ones where they've had to restructure

[00:23:54] the way that their organization is designed and set up in order to pivot from some of these

[00:24:01] mistakes into a more successful implementation?

[00:24:04] How can we advise people listening about how to structure the organization to be successful?

[00:24:10] So there's a cost to having a very fragmented approach.

[00:24:15] And you've got to understand the cost of that kind of effort and disconnectedness.

[00:24:20] And when I was working with one organization, they were in that situation where they're global.

[00:24:27] They had very gifted teams in very different geographies, but spinning up different solutions.

[00:24:34] There was no foundation sort of approach to enterprise data either.

[00:24:40] And the solution in this company had part of it in China.

[00:24:45] And we had to separate China from the rest of the world for data governance reasons.

[00:24:50] And then we centralized all the efforts in terms of getting the data foundations right.

[00:24:56] And we started to take all the learnings from these use cases and do it in a lot more of a coordinated way.

[00:25:06] People were just reproducing something that somebody else did.

[00:25:10] You know, the use cases were also moving out of the company as people moved on as well,

[00:25:17] which was a big issue.

[00:25:18] So having a much more organized and thoughtful approach to it was fundamental.

[00:25:25] And it didn't take away all the goodness that had been created in the organization.

[00:25:29] It just brought it, allowed it to get a lot more momentum.

[00:25:34] When you're seeing a more thoughtful and organized approach,

[00:25:38] are you speaking about the strategic plan and the directives for those initiatives from the leadership down?

[00:25:45] Or are you speaking about the way in which that knowledge sharing is codified and implemented within the organization?

[00:25:55] Yes.

[00:25:55] I'm talking about the business strategy around it, the technical strategy around it,

[00:26:00] the actual operating model, and how AI adoption is going to take place in the business.

[00:26:07] And how people in the business who are actually in, if you will, leadership roles that are going to integrate the solutions into their work.

[00:26:17] So all of that has to be planned and thought through.

[00:26:21] So you mentioned about needing to consider how it will be adopted internally.

[00:26:25] Yes.

[00:26:27] Are there any specific examples that you've seen for how organizations or leadership have overcome that failure to adopt by teams?

[00:26:37] Because what we've been hearing is generally that is one of the biggest challenges,

[00:26:42] is people are so busy trying to get the work done that needs to get done,

[00:26:47] that they don't have the time or the mental energy to try and adopt something new.

[00:26:55] Why are teams failing to adopt and how are successful leaders planning for overcoming that?

[00:27:02] I think they're failing to adopt because the solution is disconnected from what their role is in the business.

[00:27:09] So if we just go back to the example of customer service, which I've got a number of these examples,

[00:27:16] it was worked through as a process redesign, which when I say process redesign, it includes people and skills.

[00:27:26] So you've planned that out from the beginning, where the impact is and where it's going to land.

[00:27:32] And the users of the technology have to be embedded into the solution development as well.

[00:27:39] I think that starting in sort of smaller phases helps as well.

[00:27:45] It helps build trust.

[00:27:46] It helps build buy-in.

[00:27:48] I think sometimes incentives are not aligned.

[00:27:51] So if you think about people, you know, let's just go back to the call center where people might resist the technology.

[00:27:59] It's often because they're not incentivized, integrated as well.

[00:28:03] So I've seen projects be thwarted because people aren't incentivized to actually make it successful.

[00:28:11] Look, every case study I've worked on, taking an iterative approach came out as a very common denominator.

[00:28:20] And I think engaging as well with finance and having a lot of rigor around your metrics is very important as well.

[00:28:29] Your book is about the value of AI and you've been speaking to leaders.

[00:28:34] So I'd like to focus on the leadership's perspective on when an investment was considered valuable and successful.

[00:28:40] We're not sure if it's simply laying the foundations for the tools.

[00:28:44] We don't know if it's adoption.

[00:28:45] We don't know if it's direct revenue attributed to it.

[00:28:48] How are they looking at it?

[00:28:50] And maybe let's frame it from the perspective of customer support and sales, where those are very distinct groups or units.

[00:28:56] How would they consider it a successful or valuable implementation?

[00:29:00] So how are they measuring it in terms of value?

[00:29:03] Is that your question?

[00:29:04] But I think more so, are they looking to have a revenue positive investment this year?

[00:29:10] Are they simply looking at this as a proof of concept at this point?

[00:29:14] I think if you can go from that perspective, it'd be helpful.

[00:29:15] Okay.

[00:29:16] Well, the people that I have worked with are not looking at it for sort of proof of concept, and that's good enough.

[00:29:24] They're actually looking at a larger win, okay?

[00:29:27] Which is then broken down, and they all have business cases behind them because they're based on specific use cases and problems.

[00:29:38] So they're not trying to measure the impact of AI broadly.

[00:29:43] It's very much aligning metrics to specific business problems or use cases.

[00:29:49] So, for example, how many calls have we remediated, if you will, in our internal ticketing system that we couldn't previously do?

[00:30:01] So there are very specific measures used for different cases.

[00:30:06] Yeah.

[00:30:06] Definitely.

[00:30:07] That's wonderful.

[00:30:07] Now, looking at that metric specifically, some organizations might look at increasing the velocity that they can handle requests at.

[00:30:16] Some might look at the quality, or some might look strictly at the revenue.

[00:30:21] How are you viewing what leadership is saying is their mental model around success?

[00:30:26] I think it's around productivity gains.

[00:30:28] That's where I've seen most focus.

[00:30:31] It's in efficiency or output of tasks where AI is applied.

[00:30:35] That goes into sort of saving time, okay?

[00:30:39] I think, of course, I've seen AI applied to growth and quality improvements as well.

[00:30:46] But I think, you know, in the current economy, productivity improvements is where the focus is.

[00:30:53] Now, the term AI maturity is something that's been popularized.

[00:30:56] It's used a lot.

[00:30:57] The definition is a little broad.

[00:30:58] It's a little vague.

[00:30:59] You know, consultancies love to use this term.

[00:31:01] But do organizations question their own maturity?

[00:31:05] Is it something they're concerned with?

[00:31:07] Is it something they're trying to elevate, or are they simply trying to drive value?

[00:31:10] I think it is very much a consulting term.

[00:31:13] I think leaders of organizations are looking at, are we fit for purpose?

[00:31:18] You know, and what that purpose is, is their strategy and what they need to deliver back to either shareholders or investors.

[00:31:26] So, I think when we break it down, it is meaningful when you think about it as what is our technology capability.

[00:31:36] That has to be very specific in terms of data quality, security, and the overall readiness to implement AI.

[00:31:47] So, you know, current resources, tools, practices, architecture, integration points, performance and scalability, and understanding the gaps and the risks.

[00:31:59] But I think maturity isn't meaningful because it's too abstract relative to what a particular company wants to achieve.

[00:32:08] I prefer a rigorous sort of tech capability assessment that gives me a clear indication of where my roadmap of investments needs to be or where I'm exposed to risk.

[00:32:21] I find that a lot more helpful.

[00:32:23] And I don't find helpful sort of abstract benchmarks that aren't to do with what I'm specifically doing within my industry.

[00:32:34] We really appreciate the candor on that.

[00:32:37] And it makes me want to ask, going back to legacy organizations, ones that perhaps may be more lost or uncertain, or don't have the foundational capabilities, expertise, maybe not even the leadership necessary.

[00:32:49] How would you consult them on how to start this roadmap to success?

[00:32:56] That's a really good question.

[00:32:59] And I think it has to start with their overall strategy and financial commitments, because the use cases have to back into generating value at that level.

[00:32:56] If I was in healthcare, I would look at proven use cases.

[00:33:19] I wouldn't try and be too clever or innovative about it.

[00:33:23] I would take use cases that have demonstrated ROI and think about how can I adapt those into my organization?

[00:33:35] What is it going to take?

[00:33:37] I would think about how do I accelerate my organization as well?

[00:33:45] And that might mean working with an ecosystem of partners.

[00:33:49] It might be working with cloud providers, which bring a lot of expertise as well, as well as upskilling internally, which I think is absolutely fundamental.

[00:34:01] So I would definitely start there.

[00:34:05] And I would narrow it down into three maximum areas where AI can make a big impact in my business.

[00:34:16] So let me bring this to life a little bit, maybe on a different scale.

[00:34:20] But this is an example from healthcare.

[00:34:23] And this company has a mandate to train people who are using ultrasound equipment.

[00:34:31] And all of their knowledge is in the company.

[00:34:35] It's in over 9,000 different pieces of documents, videos, and all sorts.

[00:34:42] And the only way that they used to be able to interrogate this information was actually almost like a Google search, if you will.

[00:34:50] But then you've got to trawl down the old school Google search, I would say, and find the right references.

[00:34:57] But now they've implemented a platform that has ingested all of that documentation and made it available to these practitioners on an app, on their mobile phone.

[00:35:11] And that has significantly changed the value output.

[00:35:17] So that company is able to train those practitioners much more efficiently and effectively.

[00:35:26] Okay.

[00:35:27] So I do think that in every industry, there are use cases to go after.

[00:35:34] And picking three maximum is really where you want to start.

[00:35:38] That is super helpful.

[00:35:40] And I think especially as we consider these more regulated industries, what are some use cases in this case that you've seen that have delivered value consistently?

[00:35:53] Where they're really more ripe or more open and available to AI implementations?

[00:36:01] Let's first of all classify what we're talking about in terms of AI, whether we're talking about generative AI, whether we're talking about sort of more structured machine learning, etc.

[00:36:11] I have seen traditional AI being applied to very structured finance processes and to problems around customer profitability, customer churn, for example, where it's a little bit more deterministic.

[00:36:33] Okay.

[00:36:34] In terms of results you are getting, I think where generative AI is performing really, really well is in those spaces where you've got a lot of unstructured documents or voice or images.

[00:36:51] So you have to go after those large segments like customer service.

[00:36:57] Or I've seen it applied to innovation processes where generative AI has had a role alongside the more traditional innovation processes.

[00:37:08] It helps them think more efficiently into new or different areas, as well as challenge assumptions.

[00:37:13] Customer segmentation.

[00:37:15] Customer segmentation.

[00:37:15] I've seen generative AI applied to customer segmentation.

[00:37:20] Again, getting into the granularity of different segments and needs within those segments.

[00:37:27] I think, you know, AI is very good at propelling engagement.

[00:37:33] I have an example of that with Koo.com, which is an Indian startup, which is, if you will,

[00:37:41] the social media platform for non-anglophone people.

[00:37:46] And therefore, it was very important for them to use voice and images a lot more than we perhaps do in more mature economies.

[00:37:57] So those would be some good examples, I think. And obviously code and code generation is very important.

[00:38:07] And spinning up proof of concepts as well is very important.

[00:38:12] And as I'm thinking through your question, I think there's just one thing that I would say to any organization,

[00:38:21] and that goes back to the earlier question, you know, where to start.

[00:38:26] But where to start is bring AI in the room.

[00:38:30] When you're trying to solve a problem in a business, bring AI in the room.

[00:38:35] You can decide that it's not the right fit.

[00:38:37] That's for sure.

[00:38:39] And I think sometimes, you know, the cost of implementing AI supersedes, if you will,

[00:38:45] the effort of people actually solving that task for now.

[00:38:49] And it's not always applicable.

[00:38:51] But I think you've got to bring it in the room because sometimes it's better at things that we don't expect it to be better at.

[00:38:59] And it's worse at other things that we expect it to be better at.

[00:39:02] And also, the rate of innovation is so fast at the moment that there has to be a constant re-evaluation around,

[00:39:12] well, what are the solutions now available to solve this problem?

[00:39:16] One of the other threads that's been running through our conversation is the unknown unknowns about AI, specifically Gen AI.

[00:39:25] What can it do to help teams and individuals become better at divergent thinking, at creativity,

[00:39:32] at getting through the first draft, at exploring new possibilities,

[00:39:36] whether it's in terms of markets or audiences or data sets?

[00:39:41] Have you seen any organizations that have cracked the code in terms of exploring the non-popularized use cases?

[00:39:52] Yes. So a large consumer goods company has been working with Gen AI and experimenting with it, if you will,

[00:40:02] within its innovation process.

[00:40:05] And they've actually managed to standardize a lot of processes off the back of this

[00:40:10] because innovation was considered sort of, you know, only very local, etc.

[00:40:14] And I have seen it really improve the speed of product innovation,

[00:40:20] as well as create more streamlined and efficient approaches.

[00:40:26] And the way that I've seen this done is through a very structured prompting mechanism

[00:40:34] that has then been adopted by the company.

[00:40:37] So they had already partnered with a cloud provider and their AI service.

[00:40:44] And so that was a given.

[00:40:46] But then the key, though, was actually to make sure that the very experienced people working in research

[00:40:56] were trained on these prompts because they did an experiment where they gave some of these prompts

[00:41:02] to less experienced people and the outcome was completely different.

[00:41:05] And so you really need to know what the garbage is and what is going to be useful.

[00:41:12] But unless you have that market research knowledge, in this case, then you don't.

[00:41:18] Everything looks great.

[00:41:20] There's definitely a lot of shifting.

[00:41:22] And actually, that reminds me of a conversation that I had early on with a colleague when I was working at Microsoft,

[00:41:28] where he said, look, you know, a client came and they said, look, we don't need you because, look,

[00:41:33] we've generated all this code and he said, fine, you generated code, but it's pretty useless for these reasons.

[00:41:39] When you've got experience in the domain, you can see the problems with it a lot faster,

[00:41:44] but also take the goodness out of it.

[00:41:47] What this company did is actually document a consolidated manual about how to do product innovation now

[00:41:53] and sort of detailing case studies for practitioners.

[00:41:58] But I expect that has to be rewritten again soon as new tools come out.

[00:42:04] That has already had a big impact on how they work with innovation.

[00:42:09] So the other example that is worth bringing up is one that was done for an audience strategy.

[00:42:15] It's quite hard to spend time developing the right audience strategy for specific events.

[00:42:23] In this case, it was a DJ who was sort of well-known, et cetera.

[00:42:27] And what the AI helped the marketing team understand is really what the key dimensions of fame were of this individual.

[00:42:38] The AI was able to synthesize sort of decades of rich online coverage and commentary on this particular person,

[00:42:45] helped define sort of different audience segments and how to develop those profiles,

[00:42:52] the lifestyle attitude behaviors and help tailor the positioning and the strategy.

[00:42:58] That type of work would have cost just a lot more than it actually cost this particular company.

[00:43:05] I think that that is really valuable.

[00:43:08] I do want to bring it back, though, to something that you said a while ago,

[00:43:13] which was one of the common mistakes that you're hearing is that these organizations are making proof of concepts

[00:43:20] that can't really scale or they don't have the right business case.

[00:43:26] And so, you know, once an AI solution shows promise in that proof of concept,

[00:43:33] what strategies do you recommend for scaling it across that larger organization

[00:43:38] or turning it from a proof of concept into a product?

[00:43:43] So I think, first of all, the pathway to effective AI isn't about chasing the technology

[00:43:50] and finding a home for the technology.

[00:43:52] It's about building systematic approaches to evaluation, implementation and measurement.

[00:43:59] The examples that I have of success are all starting with narrow applications

[00:44:05] that are then expanding into new use cases.

[00:44:10] This really allows for controlled development.

[00:44:13] So the integration of AI within existing systems is very important to scale.

[00:44:20] Providing training for end users as well is very important.

[00:44:26] Monitoring the system's performance and user adoption and continuously refining and optimizing based on feedback.

[00:44:36] And the example that I have in the book that is really interesting is how Otter AI,

[00:44:42] so Otter AI is an AI product solution, if you will,

[00:44:47] for reducing the number or the burden of sort of so many meetings

[00:44:53] that one has to attend in a large company and rationalizing, if you will,

[00:44:58] the meetings and key insights.

[00:45:01] And the CEO, Sam Liang, is innovating constantly based on user feedback.

[00:45:09] And when I say user feedback, it's not just the usage data that he's getting.

[00:45:15] There are regularly sort of questionnaires in the apps as well.

[00:45:19] Now that's a consumer example.

[00:45:22] But if we take that into B2B, so actually testing out products with your employees

[00:45:28] is really, really important before you scale.

[00:45:32] And testing them out as well with a priority set of customers as well.

[00:45:39] Also, you can keep working out what the potential can be. And that leads into an area that often gets overlooked.

[00:45:48] Communication and training is really, really important.

[00:45:52] And also learning and upskilling is very important.

[00:45:57] And just to loop back to that human element that you underlined,

[00:46:03] sort of human expertise in evaluating the effectiveness of AI systems is absolutely critical.

[00:46:12] And I think that a lot of companies now have to really think carefully about balancing speed with safety.

[00:46:19] You know, the sort of safety and alignment needs to be thought through.

[00:46:24] And accountability is very important, especially for established organizations.

[00:46:30] Thank you so much, Lisa.

[00:46:31] It's been wonderful speaking to you on this.

[00:46:33] Now, you've spoken to many leaders and you've been privy to their perspective on the short term and long term.

[00:46:40] And it opens up the uncomfortable question about automation and the fears people have about being replaced by a technology.

[00:46:49] What is your perspective based on what you heard as to what leaders are planning to do?

[00:46:55] So I'll give you some different perspectives on your question.

[00:46:59] You know, I spoke to one leader.

[00:47:01] He was adamant there will be nobody working in customer service within the next five years.

[00:47:05] I generally think that he said that to make a point.

[00:47:09] But I think that if you extrapolate from it, any job that involves a routine task that can be automated will be impacted by AI.

[00:47:23] I would encourage leaders to think about the future of their organizations as thinking about their org structures in a very different way.

[00:47:31] I think that we won't just have people on those org structures.

[00:47:35] We will also have AI agents on the org structures.

[00:47:40] I don't think task and job are the same thing.

[00:47:43] But I think the composition of roles within teams will change.

[00:47:49] And the biggest impact of AI is going to be on our knowledge economy.

[00:47:54] And so you'll have skills like those of an electrician or a plumber that just will not be impacted by AI.

[00:48:02] But as our world has moved towards a large knowledge economy, that will change.

[00:48:10] And I think the erosion of middle management is not something new.

[00:48:18] But I think that it will accelerate.

[00:48:22] You mentioned how groups like plumbers and electricians won't be impacted.

[00:48:28] But it's funny.

[00:48:29] My brain sort of went to this idea that maybe we need things like plumbers for data and new roles that are all about restructuring and thinking how these AI agents that exist in our org structures need to work and exist.

[00:48:41] But that's a tangent.

[00:48:43] It's a great point.

[00:48:44] There will be new roles that emerge.

[00:48:46] But I think that history shows us that we will move quicker than we can generate new roles.

[00:48:53] The most recent book from Yuval Noah Harari makes a very strong case for this.

[00:49:01] His book is Nexus.

[00:49:02] And we need time to adapt and understand this technology and its impact on businesses and society.

[00:49:12] And we risk losing a lot of expertise and knowledge.

[00:49:15] You know, I think that that is probably my biggest fear.

[00:49:20] And I'll bring it to life with a specific example.

[00:49:22] So I was working with a company that's in the north of England.

[00:49:26] And it has a very niche operation that is special.

[00:49:29] I'm not going to spell it out because it would give it away.

[00:49:33] But it's in manufacturing and it has a very specific product it's producing.

[00:49:37] And I was on a call with an investor.

[00:49:39] And the people with the knowledge are the older people in the workforce.

[00:49:45] And he was trying to find a way of automating people out of the process, if you will.

[00:49:52] I think that is very risky.

[00:49:54] And I think we have plenty of data points that highlight that risk.

[00:49:58] So I think we need to be really careful about the loss of expertise and knowledge.

[00:50:03] I want to close this out on a fun question, which is your book, The AI Value Playbook, has received a lot of buzz, rightfully so.

[00:50:11] Super valuable and helpful.

[00:50:13] What would you like your next book, Exploring AI, to be all about?

[00:50:18] Well, I think the crux of the book is really looking at the impact of AI on businesses and how people are making decisions about it and how they're integrating it into businesses.

[00:50:31] So I could really see an evolution of the book as businesses mature in terms of their adoption.

[00:50:40] And I'm sure the learnings will be different.

[00:50:42] They will be new.

[00:50:43] I'd be really excited to look at some startups as well that are coming through and which areas they are being successful in.

[00:50:55] Fabulous.

[00:50:56] And finally, where can people learn more about your work, aside from obviously reading your book?

[00:51:02] So LinkedIn is a good place to find me.

[00:51:05] Amazing.

[00:51:06] So we'll share the link to that in our show notes.

[00:51:09] Thank you so much, Lisa.

[00:51:10] This has been so incredibly insightful and helpful.

[00:51:14] And I know I've jotted down a ton of notes, even just for myself.

[00:51:17] And hopefully the people listening will feel much more confident moving forward in their own AI leadership,

[00:51:24] as well as finding ways to better build business cases so that they can have more impact on how their organization is moving forward in this AI-first space.

[00:51:34] Thank you so much for your time.

[00:51:37] Thank you for listening to the Design of AI podcast.

[00:51:40] We interview AI leaders and discuss the latest innovations.

[00:51:44] We help teams learn how to leverage AI to reshape their industries.

[00:51:47] If you like learning about AI and how to grow your career, make sure you follow us on your favorite podcasting app.

[00:51:53] And it really helps us if you leave reviews.

[00:51:56] And to get more AI and career resources, be sure to subscribe to our sub stack at designofai.substack.com.

[00:52:07] This episode is hosted by Arpi Draghi-Guerrero, the founder and head of product strategy at PH1 Research,

[00:52:14] and Brittany Hobbs, VP of Insights at HUGE.

[00:52:17] Thank you.