GenAI’s promise is that digital experiences will become more intelligent. Big Medium Founder Josh Clark and his daughter, Veronika Kindred, are the authors of the upcoming book “Sentient Design” and the latest guests on the podcast. They see products that are radically adaptive to our situational needs and collaborate with users in ways that seemed insane a few years ago.
Listen on Spotify | Listen on Apple Podcasts
But what struck me the most were three things:
* Veronika, a Gen Zer who figuratively grew up inside of tech because of her father’s work, sees the role of AI much differently than we older folks would expect. There’s an awkward comfort with the centralization of power within these systems, and an expectation that we, the users, will decide whether it is used for good or bad.
* They’re not building toward personalization. Josh knows that true personalization requires far too much data for a system to understand us and what we truly need. Systems are better suited to inferring where we are in our journey, making assumptions about what might have changed about us, and adapting to meet us where we are.
* Josh is a champion for embracing the weirdness of AI. Rather than be intimidated and worried about hallucinations, use the not-so-perfect technology in ways that provide unexpected results.
The counterpoint to intelligent products continues to be how much intelligence a user wants and how much personal information they are willing to give up for it. There’s nothing more uncomfortable than a salesperson who doesn’t get your signals.
Adobe’s Project Concept is the start of something huge
Embracing the weirdness is exactly what Adobe’s new product, Project Concept, does. You’re better off watching the video than having me try to explain it. It will be interesting to see how agencies respond to the further commoditization of their expertise.
Always remember, GenAI is great at the boring stuff
Amazon, in its quest for greater efficiency, has developed new systems to shave seconds off each package delivery and to help customers make faster buying choices, even for new product types that they may know little about. The company announced Wednesday it has created spotlights within its trucks to guide delivery people to packages for each stop along a route. "When we speed up deliveries, customers shop more," said Doug Herrington, CEO of Amazon worldwide stores, in remarks at the event. "Once a customer experiences fast delivery, they will come back sooner and shop more."
Interestingly, this also highlights the tech’s ability to imagine solutions to problems that humans may not be able to see otherwise. You could call that embracing the weirdness again.
We’ll go into this conversation in detail when we interview Lisa Weaver-Lambert, the author of The AI Value Playbook. In the book she interviewed business leaders to document exactly where and how AI has been delivering value.
Multi-modal AI: 8 ways computer vision will change our lives
While GenAI has been monopolizing the headlines, Apple, Meta, and Snap continue to invest in augmented reality headsets. Apple's Vision Pro landed with a thud, largely due to its price and home-bound use cases, but the others stirred buzz because they focused on lightweight, fashionable eyewear (in Meta's case, courtesy of its partnership with Ray-Ban).
We've been here before, though. Google Glass famously failed. And no one remembers Snap's previous eyewear.
But now is different.
AI researchers have made huge advancements related to computer vision. If AI enables computers to think, computer vision enables them to see, observe and understand.
Continue reading the article on LinkedIn…
Want to join as a contributor?
Contact us at info@designof.ai to help us collect the best resources about how AI is shaping the world around us.
[00:00:00] What we are saying is that there is an opportunity for intelligent interfaces that are far more aware of context and intent than anything that we've been building to date.
[00:00:11] And that has the ability to be radically adaptive to the user, to that context and intent, and have agency to sort of do things on their own.
[00:00:22] Sentient Design is the already happening future of intelligent interfaces.
[00:00:27] Josh Clark and Veronika Kindred are here to discuss their upcoming book and vision of AI-powered experiences.
[00:00:33] People had a hard time imagining what the iPhone could become because it was so underpowered, as novel as its interaction was.
[00:00:41] In a way that we have sort of the reverse thing with AI, partly because we have such emotional baggage around it.
[00:00:49] It's a term that's so hard to put meaning to that it's become almost useless.
[00:00:54] It's a field. It's a sci-fi concept.
[00:00:57] In episode 19, the authors of the upcoming book Sentient Design tackle the biggest topics impacting product teams.
[00:01:04] This includes the types of intelligent experiences that generative AI enables.
[00:01:09] Moving beyond the chatbot, where AI products go next.
[00:01:13] The role of designers in shaping the next generation of digital products.
[00:01:18] The type of relationship that Gen Z and Gen Alpha will have with AI.
[00:01:23] And whether this is a period of optimism or skepticism.
[00:01:26] Let's provide better care. Let's provide better employment.
[00:01:29] Let's make healthcare less expensive.
[00:01:31] That's the thing that we can do. And that's why design is such a critical role here.
[00:01:36] Because that's a design job.
[00:01:38] What is the goal and outcome of what we're going to do?
[00:01:41] And what are we going to design toward?
[00:01:43] Josh Clark is the principal of Big Medium, a digital agency that helps complex organizations design for what's next.
[00:01:49] Josh coined the term sentient design to describe the already-here future of intelligent interfaces.
[00:01:54] AI-mediated experiences that seem almost self-aware in their response to user needs.
[00:02:02] Sentient design describes the form of this new experience as well as a framework and philosophy for applying it.
[00:02:08] In addition to Sentient Design, co-authored by Josh and Veronika Kindred,
[00:02:14] Josh has also authored several other books, including Designing for Touch and Tapworthy: Designing Great iPhone Apps.
[00:02:21] I was trying to find a metaphor to put for this.
[00:02:23] I kind of got the sense that AI is not so much a huge force for good or bad in the world.
[00:02:29] It's more like oil.
[00:02:30] It helps people heat their home and drive their cars, but it also is a great tool for both equality and inequality.
[00:02:38] The book's co-author Veronika Kindred is a designer and researcher at the digital agency Big Medium,
[00:02:43] where she defines and solves design problems alongside some of the world's biggest companies.
[00:02:49] Remember to subscribe and follow the Design of AI podcast for interviews with AI product and industry leaders
[00:02:54] that deliver lessons for teams building tomorrow's AI products.
[00:02:59] Learn how Gen AI products are designed and the choices that make them successful.
[00:03:03] We hear from designers, researchers, futurists, founders, and more.
[00:03:07] And remember to subscribe to our newsletter for additional resources at designofai.substack.com.
[00:03:14] This episode was hosted by Arpy Dragffy Figueredo, the founder and head of product strategy for PH1 Research,
[00:03:20] and Brittany Hobbs, VP Insights at HUGE.
[00:03:24] Feel free to contact us on LinkedIn with any questions.
[00:03:29] Welcome, Josh and Veronika.
[00:03:31] Thank you for joining us.
[00:03:33] So Josh, let's start with you've lived quite the life.
[00:03:36] You were a producer of nationally televised PBS programs.
[00:03:40] You've met Mikhail Gorbachev and strolled the ranch with Nancy Reagan.
[00:03:45] You created the iconic Couch to 5K program to help millions of people take up running.
[00:03:52] On the side, you just wrote two books about designing for mobile apps before it was the norm.
[00:03:56] And you co-founded one of the agencies that led the design system revolution.
[00:04:01] I mean, that's a pretty jam-packed resume already.
[00:04:04] Wow, but I want to bring you everywhere.
[00:04:05] Will you be my hype man?
[00:04:09] I feel great now.
[00:04:10] This is going to be a great album.
[00:04:12] Yeah.
[00:04:12] Yeah, let's just start on that.
[00:04:14] So now you've decided to write another book with your daughter Veronika about the future of intelligent interfaces.
[00:04:23] How did that come about?
[00:04:24] Yeah, wow.
[00:04:26] Well, Veronika and I both work for an agency called Big Medium that I founded over 20 years ago now.
[00:04:32] And all of the sort of focus of that has always been around sort of design for what's next.
[00:04:38] You know, there's kind of two reasons that people hire an agency.
[00:04:41] It's either because we don't know how to do a thing or we don't have enough people to do a thing and we need sort of a body shop.
[00:04:47] And we've always focused on the first one.
[00:04:49] Like, how do we help organizations figure out how to do the thing that has become urgent but that they don't yet know how to do?
[00:04:57] And so over time, that's been around mobile.
[00:05:01] That's been around design systems.
[00:05:03] How do you scale large organizations as you bring design in-house?
[00:05:07] And of course, right now, sort of a big part of that is about machine intelligence.
[00:05:12] How do we use AI and machine learning?
[00:05:15] What the hell is this stuff for?
[00:05:17] What is it good at?
[00:05:18] What is it bad at?
[00:05:19] It seems so smart, but it seems so dumb.
[00:05:21] It's so big, but it's so small.
[00:05:23] What is it?
[00:05:24] And so a lot of what we're focused on as an agency is helping our clients figure out how to use machine intelligence in a way that is meaningful for the customer and for the business.
[00:05:38] And so we're helping to build a lot of things and learning a lot of lessons doing it.
[00:05:42] But a big part of what we do is to help our clients and the general industry learn how to do it with us.
[00:05:49] So I think as an agency value and as a value for both Veronika and me, a lot of what we always try to do is to make our skills our clients' skills, or the skills of those we work with or meet.
[00:06:01] And that's like an important part of us.
[00:06:03] And so that's where the books come in.
[00:06:05] You know, a big part of that is we want to share what we know and what we're discovering about working with machine intelligence as a design material.
[00:06:13] And the very cool thing that you mentioned is that Veronika is my daughter.
[00:06:17] She's also my colleague at Big Medium and a really talented, please don't blush, a very talented designer and researcher.
[00:06:24] And it felt really important to work on this book together because this is like a moment, just like a huge seam in the industry where a new technology has arrived that is baffling and confusing and has so many risks and so many opportunities that we need to kind of bring new perspective to it.
[00:06:47] That kind of the old tricks don't work.
[00:06:50] And it's sort of a responsibility for old heads like mine to bring new heads in to get new perspective.
[00:06:57] And so there's nobody in this generation that I respect more than Veronika.
[00:07:03] And so it's been a real treat to work with her and to write this book with her.
[00:07:09] Yeah. And that's such a different approach to both agency thinking, but also to sharing that knowledge.
[00:07:18] And was there something about the way that you've structured your agency and your approach to work that really made it just make sense or seem obvious that this book needed to be written and shared externally beyond just the work you're doing?
[00:07:32] Yeah. One is writing books is something that really forces a specific kind of clarity of thinking that serves, frankly, us as well as the folks who we hope buy it and read it and our clients as well.
[00:07:52] So I think part of this is sort of, and this has kind of been part of my personal practice as new problems or questions or opportunities arise that seem challenging and hard to figure out.
[00:08:05] I seek out other people who I realize are also wrestling with this.
[00:08:10] And I also try to get it down on paper to share with other people, because I think some of the things that we're learning are really valuable in terms of helping this roll out in a way that will actually be better for the world and not worse.
[00:08:27] And we'd like to be part of that story rather than the kind of going down the toilet story that some people are reasonably afraid of.
[00:08:36] It's interesting for me to look at your agency perspective, because you're really showing what's under the rocks of how things work, what you're thinking, what's inside your head.
[00:08:44] And I find a lot of agencies really obfuscate to the max what they do.
[00:08:48] And it's really puzzling and confusing.
[00:08:51] But focusing on Veronika, the thing that's shocking for me is that the world you're going to experience is so different than mine.
[00:08:58] I was born in the 70s.
[00:08:59] I compare everything to the past.
[00:09:01] I remember the first time I picked up a Nintendo, the first time I played Final Fantasy, how amazing it was in going to arcades and the first cell phone I had and just the feeling that all those things gave me.
[00:09:10] But you're going to grow up quite literally with artificial general intelligence as a reality.
[00:09:15] You're going to grow up where systems are there to support, guide you, where in-person interactions are going to be this weird arcane thing that's going to feel uncomfortable probably in some situations.
[00:09:26] Does that come with a lot of pressure or does it come with a lot of excitement, this idea that you're going to be part of this future and crafting it in some form?
[00:09:34] It's so funny to hear you say you compare everything to the past.
[00:09:37] I recently have been beefing up on my history lessons, looking at the dot-com crash and looking at the early days of Apple, because I know those things from popular culture, but didn't experience them in the way I feel like a lot of my colleagues and coworkers or professional mentors that I look up to did.
[00:09:54] So I've been trying to get that context for myself.
[00:09:57] I grew up and Josh and my mom were both speaking at all of these tech conferences where there was a lot of talk about AI and ML.
[00:10:06] So I, as a kid, just thought that whatever my parents were talking about was probably not very cool or not very relevant.
[00:10:16] So I grew up thinking that those ideas were totally nerdy and just paid no mind at all.
[00:10:21] And then I got to college and started taking classes and I got very much into data science.
[00:10:27] And I was like, wait, I've heard of these things before.
[00:10:29] And, oh no, I think I like what my parents like.
[00:10:32] Every kid's nightmare.
[00:10:33] And then, of course, fall semester, my senior year of college, OpenAI released ChatGPT on the world.
[00:10:40] And as you can imagine, on a campus, it just spread like wildfire.
[00:10:45] My data science teacher at the time was really, really on top of understanding what it would mean.
[00:10:52] He told us like this is going to be a very big deal.
[00:10:55] This is going to change everything forever.
[00:10:57] I mean, it may be a bit dramatic, but still, I feel that very early on I was given the perspective that this is the future.
[00:11:05] And sorry, to go back to your question, I think it's very exciting.
[00:11:09] I don't feel a lot of pressure because I don't really feel that this is my doing in a sense.
[00:11:17] You know, it's kind of just happening and we're all on board this train.
[00:11:21] The most exciting thing that's taken place in my lifetime is that it's just become so, so accessible to everyone.
[00:11:27] Yeah, it's really exciting.
[00:11:30] You refer back to some historical stuff.
[00:11:32] The iPhone, when it came out, people doubted it so much.
[00:11:35] It didn't spread like wildfire.
[00:11:36] It took to the third generation before people really started to believe that this was a real thing.
[00:11:41] So it's so funny how the world has shifted to these monopolies right now.
[00:11:45] One of the things that's really interesting about what you just said, there was sort of this thing of like,
[00:11:49] people had a hard time imagining what the iPhone could become because it was so underpowered, as novel as its interaction was.
[00:11:56] And in a way that we have sort of the reverse thing with AI, partly because we have such emotional baggage around AI.
[00:12:05] It's a term that's so hard to put meaning to that it's become almost useless.
[00:12:10] It's a field.
[00:12:11] It's a sci-fi concept.
[00:12:13] It is an existential crisis.
[00:12:16] It is kind of some hallowed land.
[00:12:19] And the expectations for what AI means that it's, well, sure, it can write this song, but it's never going to be on the charts.
[00:12:25] It's like, it's writing a song.
[00:12:27] You know, there's sort of these things that are, we're almost under dialing kind of the remarkable power of this,
[00:12:34] but our expectations are set to the wrong place.
[00:12:37] I think part of the thing that Veronika and I are doing, and what Veronika is keeping me honest on a little bit,
[00:12:43] is how do we actually sort of align the experiences we create with the capabilities of where we are right now?
[00:12:52] Veronika, you were talking about how it's this boring, ho-hum thing that your parents talked about,
[00:12:58] which I know that was your experience of it.
[00:13:01] It's up for you.
[00:13:02] You know, I get the impression this is just technology.
[00:13:04] It's not magic.
[00:13:06] It's not new.
[00:13:07] This is just the baseline of your experience, which is a really different perspective from a lot of us who are feeling kind of turned upside down by what's happened in the last couple of years.
[00:13:19] This Design of AI episode is brought to you by PH1, a research and strategy consultancy that helps clients build AI products that customers want.
[00:13:28] Contact them about product discovery research to answer critical questions about what to build,
[00:13:33] competitive analysis to find out how to gain an advantage,
[00:13:36] service and customer analysis to identify the best use cases and value drivers,
[00:13:41] and workshops and concept testing to validate what will work and to fine-tune products.
[00:13:46] PH1 has worked on products for Spotify, Microsoft, the National Football League, Dell, Mozilla, Bell,
[00:13:52] and various health and higher education groups.
[00:13:54] Bring in an expert to make sure your products and teams are focused on what customers want.
[00:13:59] Visit their website to book an intro call.
[00:14:02] PH1.ca
[00:14:04] It's a really interesting time because for the first time ever,
[00:14:09] you have a new technology or something that could be a really big differentiator,
[00:14:15] but it isn't cost-prohibitive.
[00:14:19] And it already has existed in the public discourse for so long that people already have a mental model
[00:14:27] or a thought or an expectation around it that technologies throughout history have not come with.
[00:14:34] Yeah.
[00:14:35] Yeah.
[00:14:35] For better and for worse.
[00:14:37] Because it also comes with some baggage of expectations.
[00:14:40] And it's always the thing.
[00:14:42] Because Siri sucks.
[00:14:43] Yeah.
[00:14:43] Right.
[00:14:45] Every time I use voice, I get reminded, my God, we still have so much further to go.
[00:14:49] Yeah.
[00:14:49] But then you see glimmers of some of the things with OpenAI's new stuff around speech.
[00:14:54] And it's like, oh, wait a second.
[00:14:55] That feels genuinely different.
[00:14:58] There's a different vibe to it.
[00:15:00] And I think that, as much as anything right now, it feels like there's this opportunity for designers
[00:15:06] to kind of plug into a new vibe around technology.
[00:15:11] And I mean that in the broadest, most sort of floofy sense of, oh, there's a new feeling about how this can be.
[00:15:19] And what is the opportunity there?
[00:15:21] How do we lean into that instead of maybe fears or the worst parts of how this could go?
[00:15:26] But also even just in the interactions that we create.
[00:15:29] We can have sliders now that aren't just for volume or brightness.
[00:15:33] It's for like weirdness and stylization and emotion.
[00:15:37] And that's a different kind of vibe to software that I think that designers have not yet really appreciated or embraced.
[00:15:46] That there's a dimension to what we can create that is different.
[00:15:53] And we're still learning how it will be better or worse and what our responsibilities are to work with things like emotion,
[00:16:01] which is fraught and not what machines actually have, but a role they can play.
[00:16:09] And that's a lot of responsibility.
[00:16:12] So focusing on the book, you picked an interesting title in terms of sentient design.
[00:16:18] It's a word that might be loaded depending on your worldview.
[00:16:21] You know, someone might have significant optimism about sentient-ism or skepticism about it.
[00:16:28] Knowing that a lot of the people reading this book are going to be designers,
[00:16:31] what are you proposing that their role in this future should be?
[00:16:35] Yeah, design plays such a critical role in this.
[00:16:39] And I'll talk about the title in a second.
[00:16:42] But I think that for the last really decade around machine learning and around AI,
[00:16:49] it has been an engineering-driven world of engineers showing us what is possible.
[00:16:55] And I think that a lot of times designers have felt behind the curve on that or that there wasn't a role for them.
[00:17:01] And I think, in fact, it's such a critical role: here's the raw material that the engineers have built to show us what's possible.
[00:17:11] And I think what designers need to do is now to figure out with their kind of connection to the user and the specific skills there of what is meaningful.
[00:17:20] What can we do to use this in a way that is respectful of the people who use it,
[00:17:27] that is adaptable to both its strengths and its weaknesses and risks?
[00:17:31] So I think there's like a huge opportunity of how do we use this stuff?
[00:17:36] How do we learn it as a design material to make truly great new kinds of experiences?
[00:17:41] For sentient design, you're right.
[00:17:42] Sentient is a little bit of a tricky thing.
[00:17:44] It's a little bit of a bold title.
[00:17:46] And I want to be clear that this is not sentient like Terminator sentient.
[00:17:50] I think it is above our pay grade for Veronika and me to know when or if artificial general intelligence is coming.
[00:18:00] And so we're not sort of suggesting or saying how to design for that right now.
[00:18:04] But what we are saying is that there is an opportunity for intelligent interfaces that are far more aware of context and intent than anything that we've been building to date.
[00:18:16] And that has the ability to be radically adaptive to the user, to that context and intent,
[00:18:24] and have agency to sort of do things on their own.
[00:18:27] And those bundle of things, it's like, oh, there's a lot that we can do here.
[00:18:32] If the technology never moves forward another inch, we've got decades of work to figure out what we can do with that.
[00:18:38] But also, we call it sentient design because we're also talking about the designer,
[00:18:43] that we want the designer to be mindful and to be really thinking about how to, again, create meaningful experiences.
[00:18:51] Because this technology is not without risk or without cost.
[00:18:55] And so as we look to create the benefits with machine intelligence, the designers have to ask at what cost?
[00:19:02] Or how do we avoid or minimize those costs?
[00:19:04] Yeah.
[00:19:05] In the literature that you've created already, and thank you, you both share so much of your thinking and what you're working on with the world, which is a unique breed.
[00:19:17] But when you've been talking about intelligent interfaces, there's six pillars that you've identified.
[00:19:22] So one being aware of context and intent, being radically adaptive, being collaborative, multimodal, continuous and ambient, and deferential.
[00:19:32] I'd love to go a little deeper on three of those in particular, especially because they're sort of words that already exist,
[00:19:41] and people have different meanings or different interpretations of them.
[00:19:45] So I'd love to talk about radically adaptive, multimodal, and continuous and ambient.
[00:19:51] And how do you explain or define those concepts, especially when it comes to how a designer can consider them or work with them for what AI is actually capable of today?
[00:20:03] Yeah.
[00:20:04] I mean, I think radically adaptive is something that, let's just talk about chat.
[00:20:10] When ChatGPT came out, I think that was a thing that was like, for a lot of people, oh, something has happened right here.
[00:20:17] This is sort of something that is now entirely open-ended experience that, like any conversation, I can open any topic, and it will follow me there.
[00:20:29] And in some cases, leave me there.
[00:20:31] You know, and it was sort of this thing of, oh, here is now something where we have something that is completely adapting to my context and intent in the moment.
[00:20:42] I think that kind of got us stuck on chat, that that's sort of the model.
[00:20:45] And if we pull back from that and we say, well, what other opportunities are there that have a similar open-endedness and adaptability?
[00:20:55] What happens if we bring that spirit that we've seen in chat to other types of UI?
[00:21:02] So that could be graphical UI, but also, you know, things that emerge into the physical world or interact with the physical world in some way.
[00:21:12] And that's what I mean by multimodal is, you know, there are just sort of different modes to work with.
[00:21:18] For a long time, computing systems only understood ASCII text.
[00:21:22] And under the hood, that's still kind of what it all boils down to.
[00:21:26] But they've come to be able to communicate with us in all the messy ways that humans communicate in speech, in, you know, scribbles and in doodles and images and video.
[00:21:41] And that means not only that there are new kinds of data that are unlocked, that, oh, it can understand this video and help me make sense of it.
[00:21:49] It's like that can become the point of interaction so that now I can talk to it.
[00:21:55] Now it can see me.
[00:21:57] Now it can see the world around me.
[00:21:59] And now that means that I can ask questions or have information come to me from my surroundings.
[00:22:07] You know, Veronica, we talk about this a lot about like Shazam, which is old school for Veronica.
[00:22:15] This is old fashioned technology, right?
[00:22:17] But that was like early machine learning.
[00:22:19] And definitely part of what we think about with machine intelligence, that's something that sort of broke that fourth wall and came into the world with us in an important way.
[00:22:30] So Veronika, again, as someone who's growing up with this technology as the de facto way that you view the world: Google, Anthropic, OpenAI, they're all really pushing this idea of multimodal and engaging the technology when and how you want.
[00:22:44] You then have tools like Rabbit coming out with these AI assistance, physical things.
[00:22:50] What's your expectation on how you're going to be able to interact with like a bank or with government agencies or such in the future?
[00:22:57] What is your assumption of that?
[00:22:58] First of all, I'm pretty excited about all of the AI integrated operating systems.
[00:23:04] Google's Pixel phone, Apple iOS about to come out.
[00:23:07] I'm pretty unenthused by the additional hardware that we're expected to be carrying on our bodies.
[00:23:13] Not to be too pointed, but the pin that you mentioned.
[00:23:16] As far as interacting with these institutions, I think there's an expectation that it should be easier and easier.
[00:23:22] You can now bank on your phone.
[00:23:24] You should maybe be able to vote on your phone.
[00:23:27] That would be crazy.
[00:23:28] You should be able to file your taxes on your phone.
[00:23:31] Can you do that with AI?
[00:23:32] Are those the things that we can be expecting?
[00:23:35] I think what's really amazing about these recent innovations is that they've brought so much of the world into our really casual interactions,
[00:23:43] interactions that were all there already.
[00:23:45] People were emailing and texting.
[00:23:47] And so OpenAI was really able to meet users where they were in that chat format.
[00:23:52] And really, as Josh said, break that fourth wall.
[00:23:55] An AI-powered IRS is a nervous-making thing, right?
[00:23:59] There are definitely things like, where do we want this?
[00:24:02] How does it serve us as a people?
[00:24:05] The power of AI is really when you break down the silos between groups and organizations and you collate the data in big pools.
[00:24:14] And that's really where the power is, is when you can have as much context awareness as possible.
[00:24:20] So specifically, Veronika, is that worrying to you?
[00:24:22] That AI is going to open more access to who you are and make it vulnerable to InfoSec threats and such?
[00:24:30] Absolutely.
[00:24:31] There's already been a lot of problems with AI misidentifying people because they were underserved in the data.
[00:24:39] It's a huge concern.
[00:24:40] Proceed with caution, of course.
[00:24:42] But I have a sense that my data is already out there, that it's gone, that probably anybody who wants to buy it has already bought it.
[00:24:49] I saw a statistic that was something like 88% of the U.S. population is identifiable by their age, their city, and some other very generic piece of demographic information.
[00:25:01] We have the sense that our data is bundled and sold, but it's okay because it's anonymized.
[00:25:05] It doesn't have your name or your birthday.
[00:25:07] They don't really need that.
[00:25:09] You're already very identifiable, very out there.
[00:25:11] I start to wonder, is it worse if one central power has it, or is it worse if just everybody has it?
[00:25:17] I don't know.
[00:25:18] It's just definitely a huge concern.
[00:25:21] I would say that Veronika, early in her career, young in her life, has a kind of matter-of-fact cynicism that I think serves you well about what technology is and what the reality of it is.
[00:25:37] And I still have a little bit of my 90s start of the web idealism about what I still hope this could do.
[00:25:45] And I think that, Veronika, you've grown up in a world where things haven't quite turned out that way, or it's a mix. It's just technology.
[00:25:53] It's not good or bad.
[00:25:54] It's just sort of here's the complicated reality of this.
[00:25:59] It's the infrastructure of your life.
[00:26:01] I was trying to find a metaphor to put for this.
[00:26:03] I kind of got the sense that AI is not so much a huge force for good or bad in the world.
[00:26:09] It's more like oil.
[00:26:10] It helps people heat their homes and drive their cars, but it's also a great tool for both equality and inequality.
[00:26:17] It's more of a tool than anything, almost a weapon in the sense that if you don't use it, then it's not bad.
[00:26:24] Or if you do use it for good, then it is good.
[00:26:27] But that gets into messy thinking as well.
[00:26:29] This is interesting for me because I've been doing quite a bit of research into generational divides and particularly around how they work with large institutions.
[00:26:37] And older folk, people who are 60-plus, have such deep-seated trust in institutions.
[00:26:44] They have confidence in advisors.
[00:26:46] They trust that someone has their best intent in mind.
[00:26:49] But then we go to younger generations, and they trust corporations and large entities because that's just what they're used to using.
[00:26:57] But there's a distrust of government and institutions, and this idea of an expert who's trying to profit off you.
[00:27:03] And it's just this fascinating inversion.
[00:27:05] And you kind of spoke to that in a sense, where you view it as, well, it's a net benefit.
[00:27:09] So if there are harms, well, that's just part of owning that possibility of good.
[00:27:14] It's kind of an interesting generational thing.
[00:27:16] Josh, what do you see happening in terms of how people perceive the use of AI as a good or bad tool?
[00:27:22] I mentioned earlier that AI is such a freighted term.
[00:27:27] And I think that a lot of assumptions go into it.
[00:27:30] And I think that when you look at sort of like a mainstream world, there are a couple of takes on it.
[00:27:35] One is, wait, is this really AI?
[00:27:38] Is this like the sentient robots that are finally here to, if not destroy the world, then take my job, replace my doctor, dehumanize and de-skill all of us?
[00:27:51] I think that there's sort of this built-in sci-fi understanding of what this could do, which is not helped by this current phase of somewhat rapacious emphasis on efficiency.
[00:28:05] So I think that that's doubling down on some of those fears.
[00:28:08] When you pull back and you look at it, not necessarily as AI, but as machine intelligence, there's a more nuanced appreciation and skepticism around the algorithm in our lives.
[00:28:22] I think if we think less about AI and more about, all right, what are algorithmically influenced and mediated experiences like?
[00:28:31] I think we all have experiences where it's like, wow, that is great.
[00:28:34] That is much better than before.
[00:28:36] And just even simple ways of like, my Netflix situation is sweet.
[00:28:41] It's given me the stuff that I need.
[00:28:43] But then there are other things where we feel manipulated.
[00:28:46] We can see and feel the algorithm trying to sell us something, move us into different places.
[00:28:53] If we think about AI not as a move towards some sentient being, I don't know, maybe it will turn into that.
[00:29:00] But as sort of like the next more powerful set of algorithms, I think that we have the opportunity to crank up both the good and the bad of that experience.
[00:29:11] I think one of the things that Veronika and I are trying to do with sentient design is not only sort of show how you can build new kinds of experiences and products and interactions with this, but also how do you lean into the better part of this and not go into what I think are some of the fears around this.
[00:29:30] How do we create experiences that, you know, amplify judgment and agency instead of replace it?
[00:29:37] Yeah, I like the way that you've summarized your book, where, I'm just going to read it:
[00:29:44] Sentient design refers to intelligent interfaces that are aware of context and intent, so that they can be radically adaptive to user needs in the moment.
[00:29:53] Which I think is a really optimistic view of what we can be and should be doing with AI.
[00:30:02] So moving away from productivity gains, time gains, whatever, over towards that adaptability, being more intelligent, the context awareness.
[00:30:11] Forrester actually recently coined this term, or this idea, of zero-party data, which is where you as the user proactively provide your data into a system.
[00:30:21] So especially when it comes to intelligent interfaces, the more that you give your data and engage with it, the more that you'll get back from it.
[00:30:31] Because it can learn from you, it can become personalized, it can have more of that context and intent understanding.
[00:30:36] So thinking about a lot of these unintelligent experiences that exist right now, how can people who are looking to build intelligent experiences or looking to build intelligent interfaces, working in sentient design,
[00:30:52] how can they be considering the gathering of context and intent and creating those experiences for that new next version of how AI can actually be a positive net benefit for users?
[00:31:07] Yeah, it's a great question.
[00:31:08] I mean, there's two elements to it, right?
[00:31:09] There's sort of the mechanical piece of this, of how does it work?
[00:31:13] Like, how do we get the data?
[00:31:14] And then there's the ethical and kind of comfort level that we have as consumers, as designers, and as a culture around those things.
[00:31:23] On the first one, just sort of the mechanics of it, personalization is super hard.
[00:31:28] Like, personalization to you is super hard and typically takes a ton of data and a really strong relationship over time that I think, frankly, most brands and products will not approach just based on the interaction that we have with them.
[00:31:47] But I do think that more broadly, there are incredible insights that we can get from broader patterns and context that the aggregate informs things.
[00:31:57] Because it doesn't matter who I am if the thing that I'm worried about is, it's the middle of the night and my infant is sick.
[00:32:03] You know, it's like, that is who I am right there.
[00:32:06] I am a desperately concerned parent.
[00:32:08] I think that one of the things we as designers need to do to begin to get fluency in what it means to create an adaptable experience is recognize that it doesn't necessarily mean personalized to this person and everything that they've shown over time.
[00:32:26] It is, I recognize this signal in the sea of human patterns that can influence how I do this.
[00:32:37] And so that can be wide, like the example I gave of kind of human parent experience, or it can be very narrow in terms of this is the stuff that is relevant to my work and domain and the signals that it picks up there.
[00:32:51] We've demonstrated that we have these systems that understand language and thus somehow behavior and facts in ways that we don't yet understand with LLMs.
[00:33:03] That alone is like something that can act and respond, even if it doesn't know very much about me.
[00:33:11] It recognizes the context and can act on it.
[00:33:15] So I think that's also a way to think about how can we respect specific privacy and just the reality that we don't always have strong relationships with an individual product or brand, but act on the broader pattern.
[00:33:30] To give the people listening more grounding in what you're talking about:
[00:33:36] Is there a specific example that you could share, you know, where you've seen or worked on that more unintelligent experience or interface, and where it used these concepts to become more of an intelligent interface and provide more context awareness?
[00:33:56] Yeah.
[00:33:56] Just as background, you know, we are used to designing the happy path as designers.
[00:34:01] There is a set of interactions and data over which we are under complete control and that we've sort of like set up the levers and the knobs and the dials for people to turn.
[00:34:12] And we know the path that they're going to follow.
[00:34:14] Once you sort of start having machine intelligent experiences where the system is mediating this, you aren't in that control anymore.
[00:34:23] Which means that the design experience shifts from creating this static experience to something that is much more responsive and is interpreting the context.
[00:34:34] One of the things that I think is an exciting element of this is something that Veronika and I are calling the Pinocchio pattern, which is sort of turning the puppet into a real boy.
[00:34:46] It's something that we're seeing in a lot of AI experiences right now, which is, here is a low-fidelity artifact.
[00:34:54] It could be a sketch.
[00:34:56] It could be an outline of ideas.
[00:35:00] Or it could be a few bars of music.
[00:35:04] And we can kind of zap it now and sort of say, I recognize the intent in this.
[00:35:10] And now here is the high fidelity version or at least something that takes it closer to that so that now I'm working with a real thing.
[00:35:19] And that's a pattern that we're working with:
[00:35:21] a Figma plugin that we just built for a client, which takes a hand-drawn sketch and turns it into an actual design using the design system.
[00:35:31] And it's not done.
[00:35:32] It's not sort of finished, but it's this thing that sets the table now.
[00:35:36] It's like you've expressed this intention of this wireframe.
[00:35:38] Here's like all of the components that you need to build it roughed out.
[00:35:43] Now it's a real boy.
[00:35:44] Go ahead and sort of start working with this.
[00:35:46] Here's sort of like a rough idea that we have.
[00:35:49] And we're used to a whole set of long paths that we have to take to manually move it from here to there.
[00:35:57] And a big sort of swath of the work that we're doing these days is around that Pinocchio pattern where we're helping to sort of short circuit that in ways that sometimes it's a little mind-blowing and hard to even conceive.
[00:36:10] That we can go from here to there and collapse that time and effort.
[00:36:16] In the case of these design things, that's not designerly effort.
[00:36:19] That's kind of the clerical hard labor that just sort of takes time.
[00:36:24] So part of this is like, oh, that kind of work can actually elevate the creative process instead of replace it.
[00:36:32] Some people would say that a lot of the AI value right now is being driven on the B2B side.
[00:36:38] So you mentioned on the design end, it enables new possibilities, improves workflows, adds efficiencies and such.
[00:36:44] Are you seeing any opportunities in B2C where the technology can be literally monetized from a consumer standpoint?
[00:36:53] And for example, I'll bring up Shazam, where I don't think they were ever able to monetize.
[00:36:58] They ended up just getting acquired by Apple, right?
[00:37:00] So are there some precedents now that you're starting to see or some glimmers of hope of B2C delivering value?
[00:37:06] Yeah.
[00:37:06] I think it's early to tell.
[00:37:07] I think that that's like the big question.
[00:37:09] And you're seeing some folks in the market being like, man, we put a lot of money into this.
[00:37:13] Where's the money coming back?
[00:37:15] And I think especially around creating new kind of UX paradigms, it's going to take some time before the money becomes clear of what the business in that is.
[00:37:26] But that said, I do see a real opportunity to transform and elevate the kinds of experiences that we give to customers by making whatever task they're trying to do easier and better.
[00:37:42] And we've seen that in previous generations of machine learning powered technologies around recommendation and prediction that we have seen in commerce that have made both the shopping experience better as well as the kind of business of providing that experience better.
[00:38:00] So I feel like once we can really sort of take advantage of the powers of these things to be aware of intent and context and to radically adapt the experience that's bound to be powerful, both for the people who use those services as well as the businesses that provide them.
[00:38:23] We are used to this idea of radically adaptive content, the kind of thing of whatever Netflix, Amazon, you know, kind of these prediction recommendation things.
[00:38:33] That's sort of so familiar that it seems boring.
[00:38:37] The opportunity now is to be like, oh, wait, we can do that with UX now.
[00:38:41] We can do that with the way that the product, and I'll put this in quotes because I don't mean specifically chat, but the way that the product speaks to you is something that we can be adaptive in now.
[00:38:50] And I think that that's going to be really powerful.
[00:38:52] I like to reference video games because I find that they're actually on the bleeding edge of what's possible.
[00:38:58] And I know the metaverse concept kind of fell flat on its face, but the metaphor already exists in video games and Roblox and all these sorts of places, where it is a radically adaptive experience where you can have endless conversations, endless quests.
[00:39:11] So I'm wondering, as an AI native, what kind of AI and intelligent experiences do you want to see, Veronika?
[00:39:18] I think there's a lot of innovation around commerce already.
[00:39:22] So I think there's a lot of opportunity for things surrounding education.
[00:39:27] For example, Duolingo has done a really good job of gamifying that learning experience for learning languages.
[00:39:33] And I think there's more opportunities in education to meet students where they're at.
[00:39:39] I mean, I can call myself AI native, but Gen Alpha or whatever this next one is, they're going to be so fluent and so literate when it comes to AI experiences.
[00:39:51] I think one of the things that you mentioned, Arpi, around games, one of the things that is exciting and evolving is kind of AI as an NPC and the experiences that we have.
[00:40:04] You know, and that's somewhat familiar in terms of some of the chat bots and stuff that we've seen, Slack bots and things like that for years.
[00:40:10] But there is sort of this thing that we're starting to see where there's kind of this virtual teammate.
[00:40:17] And we need to be careful about this, about particularly the design of them, of having fairly constrained and limited roles for the capabilities that are available now.
[00:40:25] But we're seeing things like in Miro, these sort of sidekick things that show up as user accounts but have really specific kinds of feedback, where they go around and leave comments and suggestions in these specific roles.
[00:40:41] I think that's exciting to think about of what's the next level of that.
[00:40:46] Like, okay, great.
[00:40:46] So now we've got another cursor flying around in our Figma canvas or in our Miro whiteboard.
[00:40:52] What does it mean to have a participant and a teammate in this? That's something I think we're only starting to tap.
[00:41:03] And I think that it's useful to think of these things as different than human.
[00:41:03] Matt Webb does a lot of work on this; Matt has a great blog at interconnected.org, I think.
[00:41:16] And he's been an old hand around thinking about kind of emerging technology for a long time.
[00:41:22] When he does sort of things around AI teammates, he thinks of them as dolphins.
[00:41:26] It's like, here's like a smart, but not human technology.
[00:41:30] And he even has them say, and things like that.
[00:41:33] But what are these things?
[00:41:34] It's different.
[00:41:35] It's a companion species, not a person.
[00:41:37] And I think a lot of times we try to pretend that it's human.
[00:41:41] We want to make a new human for some reason.
[00:41:44] Brittany and I, we've talked about this a lot.
[00:41:45] Sometimes there's value in dealing with the most frustrating parts of regular life.
[00:41:50] And a couple of days ago, she had to wait on hold forever.
[00:41:54] Why can't we have something wait on hold for us?
[00:41:55] And there's already basically a proof of concept.
[00:41:58] People would pay people to wait in line for them to buy things.
[00:42:01] So we need these automated selves to fill the voids that we don't want to fill.
[00:42:05] There's this gap of time that we have to sit in.
[00:42:09] I did some work with a hotel booking company a number of years ago.
[00:42:13] It has clients all around the world, sometimes mom-and-pop inns and things like that.
[00:42:18] And they had this whole bank of fax machines.
[00:42:22] Because what would happen is, this was maybe a decade ago, these little inns or hotels didn't have any internet access, no website, no API for their reservations.
[00:42:32] And so the way that it would work is you would book your reservation.
[00:42:36] You thought you were booking it online.
[00:42:38] And that would fire up the fax machine to go and send a letter to these people:
[00:42:46] Do you have a booking?
[00:42:47] And then they would respond.
[00:42:48] And that's when it would be confirmed.
[00:42:50] I think that that's sort of like a lo-fi way to think about this.
[00:42:53] But I think that there is also sort of a thing of, you know, just like you said, instead of being on hold for 10 minutes, what's the fax machine fix for that?
[00:43:04] How can AI help with that?
[00:43:06] And I think there's like a ton of just gaps and frictions that we can come into with that.
[00:43:12] And that's where it all starts, right?
[00:43:13] Is what is the human problem that we're trying to solve that we can really solve meaningfully?
[00:43:19] Not just because we have this technology, but because it is a real problem.
[00:43:24] What are the frictions that we can solve, and how might any of our technology tools, or maybe that's a non-technology tool, help to fix that?
[00:43:33] I know amongst our community, there's a lot of questions about what are the most appropriate and effective strategies to leverage AI.
[00:43:40] And one of the ones that's coming up in our conversation here is that most AI tools right now, bots, are used as a human-mitigating strategy.
[00:43:48] How can we minimize the amount of interactions we have with a human?
[00:43:51] But another strategy altogether is, how can you augment that human so that, if they're unable to answer every call that's coming in, they can actually provide you, like, 50% of the service, but in a scalable manner?
[00:44:02] I'm wondering if you've seen any bleeding-edge strategies at play that are worth discussing or bringing to attention for people.
[00:44:09] My favorite example on this part is about healthcare.
[00:44:13] I don't even know that it's bleeding edge from a technology point of view, so much as it seems so incredibly novel from a goal point of view.
[00:44:23] But if we think about how machine intelligence could be used in a medical setting, we can reduce the amount of time that a doctor spends with a patient, which to me, as a patient and not part of that medical health system, sounds terrible.
[00:44:36] It's going to make our terrible health system in the US even worse, right?
[00:44:41] How could that possibly be better?
[00:44:43] But there are some opportunities to do intake and to do some initial analysis, time that is currently wasted with my doctor while they're tap, tap, tapping and I'm sort of waiting to talk about what's up.
[00:44:56] That is, if we think about the goal not being, let's minimize doctor time, let's get patients in and out as quickly as possible,
[00:45:03] and we're instead sort of saying, how can we minimize the rote parts of intake and evaluation so that the doctor can actually have more time to talk about what the implications are?
[00:45:15] How do I feel?
[00:45:17] And maybe because some of this work is being done by AI, we don't actually need a full kind of doctor.
[00:45:24] We see that already, right?
[00:45:26] With physician's assistants and nurse practitioners.
[00:45:29] Maybe we can have sort of less education required to provide everyday care.
[00:45:35] What that means is, wait a second, I'm getting more time, more human attention to my medical needs.
[00:45:44] We're having more people be qualified to do that so we can actually increase employment possibly around that role.
[00:45:53] It costs less because there's less education.
[00:45:56] We could drive down the cost of medical care.
[00:45:59] And all of a sudden, that's an interesting goal that I can get behind that is ostensibly efficiency-based,
[00:46:05] but with a goal of being, let's provide better care.
[00:46:08] Let's provide better employment.
[00:46:10] Let's make healthcare less expensive.
[00:46:12] That's the thing that we can do.
[00:46:14] And that's why design is such a critical role here.
[00:46:17] Because that's a design job.
[00:46:19] What is the goal and outcome of what we're going to do?
[00:46:22] And what are we going to design toward?
[00:46:25] Because the same technology could have really different outcomes if we're just like, let's cut the patient visit as short as possible versus all the goals that I mentioned.
[00:46:34] And so, Arpi, when you ask, you know, what's a cutting-edge application?
[00:46:37] I would say it's not even a technology application.
[00:46:40] It's a mindset application.
[00:46:42] What kind of world do we want to design for with this?
[00:46:46] And that's what we talk about when we talk about sentient design.
[00:46:49] Let's let the designers be sentient and mindful about what we can do with this technology.
[00:46:54] This reminds me of an episode that we did just before the summer with Amy Bucher.
[00:46:59] And she is a behavioral scientist as well.
[00:47:04] She actually talked a lot about working in healthcare and how AI in particular can really help remove the barriers to even accessing healthcare in the first place.
[00:47:14] And allowing people to have things like personalized coaches, having nudges that reach them to say, hey, this is what you should be doing now.
[00:47:23] Or this is when it actually is important to go and see your physician.
[00:47:27] And you can really be tracking the backward steps in that journey.
[00:47:31] Like where do we want people to be health-wise?
[00:47:33] And then how do we bring them from where they are today to that?
[00:47:36] And then how do we also then continue along that journey with them?
[00:47:39] Because that's something that, right now, when it is being done, is being done very manually.
[00:47:44] And oftentimes people are completely left out of the system just because there isn't capacity to be tracking people or to be changing behaviors or be changing people's perceptions about access to healthcare.
[00:47:56] Yeah, I love that.
[00:47:57] I love Amy too.
[00:47:58] She's the best.
[00:48:00] Thank you for sharing that.
[00:48:01] I mean, that's right.
[00:48:01] It's what is our intention, right?
[00:48:04] I mean, I think that that's part of it.
[00:48:05] And I think right now the flurry around AI has been on this kind of low-hanging fruit of this feels like productivity.
[00:48:12] And I think that there's something that is bigger, more meaningful experience transforming that is afoot here.
[00:48:22] We're going to look back in history and see this time period right now.
[00:48:27] Let's call it 2020 to 2025, OpenAI sort of to now, as an inflection point for the world of design in particular.
[00:48:35] So it's when design systems became the DNA of interfaces and when APIs became like the nervous system of products.
[00:48:43] You know, this is when the groundwork is really being laid for that idea of sentient adaptive products of the future.
[00:48:51] For how long did you expect that this is where we were headed?
[00:48:56] You're someone who has really been talking about things before it was cool, before anyone wanted to talk about it.
[00:49:02] When did you really start to notice this and pivot your work towards this?
[00:49:07] Yeah, thanks for that.
[00:49:09] I would say it was around 2015, 2016, when we were seeing a lot of the deep learning advances, the things around image recognition and computer vision, starting to come out, and starting to be able to really get what I would say were broadly available machine learning services, where we could have things where, here's some data.
[00:49:32] We can actually start to nudge interfaces in different ways.
[00:49:37] And so I think one thing that's worth saying is that the 2015 future I saw back then still, frankly, remains our future in some ways: what we were able to do back then was sprinkle a little bit of machine intelligence onto our experiences with prediction, with classification.
[00:49:58] And I'd say we haven't even scratched the surface on that experience yet.
[00:50:03] So, sort of putting aside the walking, talking, sentient-interface-robots future, there's still this really meaningful thing that we can do even with forms and buttons, where we have smart, informed, responsive defaults, where the forms morph and change based on the data, the preferences,
[00:50:27] things that I've entered before. We still aren't even using what you, Veronika, would call the old-fashioned machine learning to get there.
[00:50:38] I've been thinking a lot for the last 10 years about machine intelligence as a design material.
[00:50:45] And I think that ranges from really modest, small, kind of casual intelligence, sprinkling a little bit of that onto a web form, to these really radically adaptive experiences.
[00:50:58] But I think that it's been sort of coming for a long time.
[00:51:01] I think that all of us, including the people who invented it, were really surprised two years ago when GPT-4 happened that that was like, oh, that's a real leap.
[00:51:11] I didn't see that coming so quickly.
[00:51:14] I don't think, again, even the people who invented it did.
[00:51:17] But I do think that the trajectory is something that has been coming.
[00:51:20] And if you look at it, the exciting opportunity is what happens when our interfaces, these surfaces that we work with all the time, can become more intelligent and understanding and adaptive.
[00:51:33] And that's the experience to create.
[00:51:36] I think that's been the opportunity for really the last decade.
[00:51:39] It's still unrealized.
[00:51:42] So I'd like to ask a bit of a philosophical question when it comes to design.
[00:51:46] The idea of sentient design raises an important question, which is that for the last 10, 15 years, UI design has gotten so much better.
[00:51:55] We're giving people so much more control in their interfaces.
[00:51:58] They can manipulate data.
[00:51:59] Everything's easier to use.
[00:52:00] It's much more accessible.
[00:52:01] It's much more universal.
[00:52:02] It's handheld.
[00:52:03] It's all these wonderful things.
[00:52:05] But now with AI interfaces, they have to hand over control to a superintelligence.
[00:52:12] And that superintelligence is often wrong.
[00:52:15] We've heard a lot of people come on our show basically saying that, well, the interface basically has to tell people that the results might not be valid or might not be what you actually asked for.
[00:52:22] So what should be the role of a designer?
[00:52:26] Should they be advocating for the user's wants or the LLM's potential?
[00:52:32] Veronika, I know you've got strong feelings on this one.
[00:52:34] Yeah, 100% designers should definitely be advocating for the user's wants.
[00:52:41] I think LLMs, and now we have LMMs as well,
[00:52:44] are going to keep getting better, even just incrementally.
[00:52:48] But like we've been saying, there's so much, I think, for designers to catch up on and to really bring to the users.
[00:52:57] I also think, and this is something Josh and I have talked a bit about, we need to get our products to a standard that we're proud of.
[00:53:05] Like we really shouldn't be bringing things to consumers saying, like, it might be wrong.
[00:53:10] I don't know.
[00:53:10] Might be horrible.
[00:53:11] You might have to disregard everything, like, you decide. That kind of works for ChatGPT.
[00:53:15] But I think it really shouldn't be the standard when we're integrating AI, especially when we're talking about smart interfaces that are radically adaptive.
[00:53:24] The AI can make decisions about maybe what to present to users.
[00:53:29] But designers can still be the ones creating all of those options and presenting them to an algorithm that can pick, based on the context of the user's needs in the moment, which of those options should be presented.
[00:53:43] So I think we should raise our standards and really see that this is the moment for designers to become the ultimate mediator for users' needs.
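The division of labor described here, designers author every option while an algorithm selects among them from context, can be sketched in a few lines. This is an illustrative Python sketch; the variant names, context signals, and simple overlap-scoring rule are invented for the example, not taken from the episode:

```python
from dataclasses import dataclass, field

@dataclass
class Variant:
    """A designer-authored UI option, tagged with the context signals it suits."""
    name: str
    suited_to: set = field(default_factory=set)

def pick_variant(variants, context):
    """Let the algorithm pick among designer-made options: choose the
    variant whose declared contexts best overlap the observed signals."""
    return max(variants, key=lambda v: len(v.suited_to & context))

# Designers author every option; nothing here is generated on the fly.
variants = [
    Variant("compact-summary", {"mobile", "in-a-hurry"}),
    Variant("guided-walkthrough", {"first-visit", "desktop"}),
    Variant("dense-dashboard", {"expert", "desktop"}),
]

# Context signals inferred in the moment (device, session history, etc.)
context = {"mobile", "in-a-hurry", "first-visit"}
print(pick_variant(variants, context).name)  # compact-summary
```

In a real product the context set would come from runtime signals and the scoring might be a learned model, but the designer-owned option list stays the same, which is the point being made here.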
[00:53:54] Yeah, I love that the TLDR is raise our standards.
[00:53:58] And I think to add to that, or to expand on that: a running theme in our most recent episodes has really been about evaluating the impact of deploying GenAI.
[00:54:09] To your point before, Josh, is we're starting to get to a point where businesses are saying, how can we now justify these expenses and whatnot?
[00:54:18] So we really need to understand, like, how can we be evaluating the impact?
[00:54:22] So how would you decide if a sentient interface has delivered a more positive impact than basic buttons and fields interface would have?
[00:54:30] Yeah.
[00:54:31] My friend Josh Seiden wrote a book called Outcomes Over Output.
[00:54:34] And it's just such a smart book, recommended to anybody: a quick read, super impactful.
[00:54:43] Its goal there really is sort of saying we focus all the time on what to make and less on why we're making it.
[00:54:50] And I think part of it is, and this is not necessarily specific at all to working with machine intelligence,
[00:54:56] but just in general, as good UX design: what are we trying to accomplish?
[00:55:03] What are the outcomes that we want to make?
[00:55:06] And the way that Josh puts it is it's sort of like an outcome is a behavior that's going to change as a result of this.
[00:55:12] And behaviors are really measurable.
[00:55:14] And so if we think about what it is that we want to happen, that's the thing to design for, not so much the technology that we use.
[00:55:24] So, you know, plus a hundred for what Veronika was saying; it's kind of like, let's figure out how we're solving real user needs and find the right solution to make that happen.
[00:55:36] And then those are the things that we measure.
[00:55:38] So we focus on outcomes and not the technology and just being really intentional and saying out loud what that thing is.
[00:55:47] I think, too, if we're all in agreement about, OK, we need to be measuring, we can be measuring productivity or efficiency versus quality, which is something that Josh was talking about earlier.
[00:55:57] You know, then it gets to be a question of, OK, well, how do we measure quality?
[00:56:01] How do we measure that people are having a better experience?
[00:56:05] Because traditionally we've been measuring like, OK, how much time is user spending on our website?
[00:56:09] How quickly can we get through this process?
[00:56:11] And if it's no longer about speed or time or volume, then what are we measuring?
[00:56:17] And then it becomes great that we have this, I mean, in quotes, "new" technology to measure things like emotion and tone, so that the algorithms are starting to be able to understand the quality of our experiences.
[00:56:30] When we talk about new problems, we can also remember that there are new solutions.
[00:56:36] I love that you mentioned that it's like we can also use these new tools to measure things in different dimensions than we did before.
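The idea of measuring in new dimensions, like tone, can be sketched very roughly. This is a toy lexicon-based scorer, not anything the guests describe; the word lists and scoring rule are pure illustration of treating sentiment of user feedback, rather than time-on-site, as a quality metric.

```python
# Toy sketch: score the tone of free-text feedback as a quality signal,
# instead of only counting time-on-site or click-through.
# The lexicons and weights here are illustrative, not a real model.

POSITIVE = {"love", "easy", "fast", "helpful", "delight", "clear"}
NEGATIVE = {"confusing", "slow", "broken", "frustrating", "stuck"}

def tone_score(feedback: str) -> float:
    """Return a score in [-1, 1]: negative = frustrated, positive = happy."""
    words = [w.strip(".,!?").lower() for w in feedback.split()]
    hits = [(1 if w in POSITIVE else -1) for w in words
            if w in POSITIVE or w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(tone_score("Checkout was fast and the summary was clear"))   # 1.0
print(tone_score("The form kept getting stuck, so frustrating"))   # -1.0
```

In practice this is exactly the kind of task where a classification model, rather than a hand-built lexicon, would do the understanding.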
[00:56:45] So all four of us here work in consulting and there's a lot of hype right now and there's a lot of executives chasing ambition more than realities.
[00:56:53] I'd love to know what your process is in terms of helping identify what types of clients, what types of circumstances actually would benefit from a sentient interface project.
[00:57:04] Because I'm assuming it's quite intensive to go beyond classic UX.
[00:57:09] And what are some of the kind of questions you guide them through or questions you ask them to ask of themselves?
[00:57:13] That'd be really helpful.
[00:57:14] Yeah, that's great.
[00:57:15] I mean, I think to start, it's tempting to lean into the new technology. It's like, oh, we've got LLMs.
[00:57:23] They can do anything.
[00:57:24] Let's use them for everything.
[00:57:25] And I think that one of the things we really start with, frankly, with our clients is, again, what are those outcomes?
[00:57:33] Where are the frictions that are there?
[00:57:35] And what is it in our entire toolkit that is going to help the most there?
[00:57:39] And so I think when we think about doing sentient design, there's an important part of certainly learning how to use machine intelligence as part of your kit, but also that it's an additional part of your kit.
[00:57:52] We have a whole lot of other kinds of solutions, technology based or not, that we can use there.
[00:57:57] And so in terms of how we decide, all right, here's a problem that machine intelligence is really good to serve, it's looking at the five types of machine intelligence and thinking carefully about them: recommendation, prediction, classification, clustering, and generation.
[00:58:18] And right now, because it's so new and so powerful and is kind of mind boggling in so many ways, generation has got all the air in the room, right?
[00:58:29] It's sort of, it's the assumption that that's the thing that we have to use for everything.
[00:58:33] And so we see these sparkle buttons everywhere.
[00:58:38] And Brittany, to your kind of point and question, how can we sort of rely on these things?
[00:58:44] We're training people that sparkly means broken and weird result because we're putting generative AI against everything and things that it's not reliable for and sometimes dangerous for.
[00:58:57] And so I think part of the thing that we're thinking about here, as we're thinking about particularly generative AI, is it's the cost versus the benefit.
[00:59:05] You know, what are the risks if this goes wrong?
[00:59:07] And I think the places where you can use it best are where you want weirdness and expansive ideas, where it's okay for things to go sideways, or where it's forgiving if the answer you get is not correct.
[00:59:28] We sometimes focus too much on the generation or the representation, the output, than we do on the understanding.
[00:59:36] And a lot of the things that machine intelligence can do is to provide context, ideas, thinking, perspective, process huge amounts of data into patterns that we can suddenly make sense of.
[00:59:51] And I think that's something as much as anything that designers and executives should be thinking about is not how do we rush to the output, how do we compress all of that time, but how do we get better understanding of the problems that we're trying to solve?
[01:00:07] So I know, Veronica, you handle a lot of the research duties over there at Big Medium.
[01:00:11] Have there been any interesting findings when you've tested out some of these products and new ideas out?
[01:00:17] Anything you can share that's been surprising or intriguing?
[01:00:20] I've been surprised by how many of these products that are put to market just don't work or just don't do what they promise.
[01:00:31] Or maybe you can get one idea to be conceptualized out of it, but you can't take it to the next level at all.
[01:00:37] It just completely falls apart when you try to initiate any sort of feedback loop.
[01:00:41] I would say that's been my biggest surprise is just how fast people are just rushing to get these products out of the gate when they are not good enough yet.
[01:00:52] We're definitely teaching people that AI is unreliable and it's unreliable for certain things.
[01:00:58] I think that's part of the really important thing of designers learning to use this as a design material is learning its strengths, its weirdnesses, especially its weirdnesses and its weaknesses and how to work with them and around them.
[01:01:14] I think that's the biggest thing.
[01:01:17] There's a lot of folks who are hanging their hat on, it's going to get better.
[01:01:19] It's going to get better.
[01:01:20] I mean, it might. It might also just get weirder.
[01:01:22] And I think that's part of the essential nature of this kind of probabilistic technology is how do we lean into that as an asset instead of as a liability?
[01:01:34] Because we can't fix it being a liability.
[01:01:38] Doing research on a couple of products, what I found interesting is generally speaking, people don't really care if it's AI.
[01:01:45] They want the outcome to turn out better.
[01:01:48] But conversely, and this is where the irrationality comes in, is you can sell them a product and tell them it's AI.
[01:01:55] And it's almost better if they don't understand what it's supposed to do because they'll think it's superior, you know, in a ranked order to something else.
[01:02:01] So we're in a period of contradiction.
[01:02:04] Yeah, for sure.
[01:02:05] Human beings.
[01:02:06] But I think you're right.
[01:02:07] You know, I'm a big fan of getting rid of the sparkles.
[01:02:10] Like, I think that all that that's doing is saying this is going to be weird and broken.
[01:02:14] And I think if we instead think of it as here's just the underlying technology, we're making a thing that is going to work and we should present it to the user as any other technology.
[01:02:24] It's just software after all.
[01:02:26] It's not magic.
[01:02:27] It's just software.
[01:02:29] And I think if we start to think of it that way, it's like, oh, all right, here's like a new tool.
[01:02:33] Here's a new system that we can use.
[01:02:34] Let's use it correctly.
[01:02:37] Experiment with it, but not push our experiments to the people we're supposed to be taking care of, the users of these systems.
[01:02:45] People are very bad at ideating on the future or understanding something that is outside of their mental models of what the world is and whatnot.
[01:02:54] So innovation research is historically difficult.
[01:02:58] Are there any sort of methods or approaches that you found have been either particularly useful for you or that you haven't done and you've seen in academia that you're excited by the potential for?
[01:03:09] There's a product called Illuminate that is converting academic articles into NPR style interviews, which I love.
[01:03:18] I think it's so cool.
[01:03:19] It makes those papers so much more accessible, not only if you don't like to read or don't necessarily have the time to read, but all of a sudden reading academic articles has become hands-free.
[01:03:30] How cool.
[01:03:31] And so much more palatable.
[01:03:32] You know, you use the available AI tools, Claude, ChatGPT, Gemini, all the time, moving between them.
[01:03:42] I think that gameness is something that not everybody in the industry has.
[01:03:51] And I think it is born out of this idea that is true, which is that this stuff is not good for everything.
[01:03:57] And so I can't rely on it.
[01:03:59] And I think that there's like a difference, Veronica, in the way that you approach it, where I think a lot of people could reasonably say, so I'm going to use AI for very little because I can't trust it.
[01:04:08] And you go into it and say, I'm going to use it for everything.
[01:04:12] And no, I can't trust it.
[01:04:14] But also that you know that you can't necessarily predict what it's going to be good at either.
[01:04:18] There's a sort of glass-is-half-full, glass-is-half-empty difference in how people approach this.
[01:04:24] And I think in this respect, your glass is half full. It's like, I don't know, let's try it.
[01:04:28] It might surprise us, while knowing that you have to treat it with skepticism.
[01:04:32] And that's what discovery is all about.
[01:04:35] And I guess I would say for designers who are curious about this, maybe nervous about this, is use this stuff more because it's by using it that you learn the texture of how it works.
[01:04:47] And it is weird and it is constantly surprising what it's good at and what it's bad at.
[01:04:53] Our expectations of those things are often wrong.
[01:04:56] So I think when you're bringing, Veronica, your research work and sort of using that so fully as one of your tools, remaining skeptical of it as you go, that's like a model for how all of us should be learning how to work with this.
[01:05:15] Definitely. I think the more you engage, the more skepticism you should bring and the more exciting it can be.
[01:05:24] If your expectations are low, you can be so happy when you're wrong.
[01:05:29] I think productive skepticism is something that we talk about a lot in writing this book, but also something we want to bring out as a designer's responsibility to the user. You know, when things are going well, as users we will trust that the system is doing the right thing.
[01:05:45] And I think that there are signals we can pick up as designers, implicit in the model and its responses, with which we can actually engage productive skepticism.
[01:05:58] That is actually one of our responsibilities.
[01:06:01] It's not enough to have a little tiny text footnote that says, this could be wrong.
[01:06:05] Use it at your own risk.
[01:06:07] We need to get better at sort of like building things into our UI.
[01:06:13] This is hard with LLMs because they're always confident.
[01:06:16] So generative AI is tricky around this.
[01:06:18] But part of this is that we sort of want them to behave the way that we would as human beings.
[01:06:23] It's like, "I'm not sure, but maybe." And it's like, okay, all right.
[01:06:27] That's useful information, both because of your tone, and also because you've given me some information to go on, and I understand how to treat it.
[01:06:34] So I think it's like, how do we create that cultural and data literacy in our users?
[01:06:41] Because I think that is a responsibility for us as designers to sort of really know when to prompt and engage, again, productive skepticism and critical thinking about when these systems are confident or not.
[01:06:56] Because often the systems do know when they're confident.
[01:06:59] We don't reflect that enough in our interfaces.
[01:07:03] You know, how do we, how do we treat things as signals more than as facts or truths?
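The point about systems often knowing when they're confident can be sketched in code. Many LLM APIs can return per-token log-probabilities alongside a completion; this toy, with an assumed threshold and hedging phrase of my own invention, shows one way an interface could surface low confidence as a signal rather than presenting every answer as fact.

```python
import math

# Toy sketch: turn token log-probabilities (which some LLM APIs can
# return with a completion) into a UI-facing confidence signal, so the
# interface can hedge instead of sounding uniformly confident.
# The 0.7 threshold and the hedging phrase are illustrative assumptions.

def confidence(logprobs: list[float]) -> float:
    """Geometric-mean token probability: near 1.0 = sure, near 0 = guessing."""
    return math.exp(sum(logprobs) / len(logprobs))

def present(answer: str, logprobs: list[float], threshold: float = 0.7) -> str:
    """Prefix a hedge when the model's own probabilities suggest a guess."""
    if confidence(logprobs) < threshold:
        return f"I'm not sure, but possibly: {answer}"
    return answer

print(present("Paris", [-0.01, -0.02]))       # confident, shown as-is
print(present("1947?", [-1.2, -0.9, -1.5]))   # low confidence, hedged
```

The design choice is that the hedge comes from the system's own internals, not from a blanket "this may be wrong" footnote, which is the distinction the conversation is drawing.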
[01:07:09] Well, that sounds like a roadmap for the future of design and how it's changing because basically we're having to move to this probabilistic worldview where it's not about driving a user to a specific endpoint.
[01:07:19] It's about helping them navigate through the deep dark woods, which is interesting.
[01:07:23] It requires us to go back to some of those school projects that had big open-sky thinking, which, funny enough, design kind of lost along the way.
[01:07:31] So it's amazing to hear that your agency promotes that, that you're building it, advocating for it, and teaching it.
[01:07:37] So it's been an absolute pleasure having you here.
[01:07:40] I'm looking forward to reading the book and seeing your other talks because it sounds like you could talk for hours about this and that's amazing.
[01:07:47] Thanks so much.
[01:08:17] Thank you.

