Spotify’s former data alchemist: Evaluating when & how to use GenAI

Episode 17. Our guest is Glenn McDonald, who was Spotify's Data Alchemist, building it into an algorithmic powerhouse.

We’re critically evaluating algorithms' effectiveness and why GenAI probably isn’t the best technology for many problems.

Some key insights:

#1. As Spotify's former data alchemist, I expected huge advocacy for ML and AI as predictive technology. Instead: we must not play god with algorithms. They should be assistive tools that get people to where they're already headed. Prediction leads to errors.

#2. You must be able to evaluate algorithms. Too often we deploy fancy tech with no way to know whether it performs better than an alternative. GenAI carries a huge risk of this, because the assumption is that it solves everything, but the cost of deploying it is also very high.

"I think the main thing I've learned is actually not to think about it as prediction. The thing that happens to you when you start thinking about things as prediction (and this applies to thinking about LLM outputs as predicting text; it also applies to A&R in music as predicting hit artists) is that you've internalized the ugly idea that the future is determined, and you're just attempting to guess what it's going to be and thus profit by anticipation. I think it's a lot more productive to think of the future not as something you're predicting, but as something you're making."

"I think a lot of the time we evaluate new tech against really poor baselines: against randomness, or against the most popular things, or, like you said, against our intuitive guesses. In those contexts, sometimes the fancy tools seem clearly better. But then you compare them against 'what if we just did some math?' and you realize: oh, the math's even better, and it's a lot simpler."

The episode is hosted by:

Arpy Dragffy Guerrero (Founder & Head of product strategy, PH1 Research) https://www.linkedin.com/in/adragffy/

Brittany Hobbs (VP Insights, Huge) https://www.linkedin.com/in/brittanyhobbs/

Glenn McDonald is a music evangelist, algorithm designer, software engineer and technology strategist. He created the music-exploration website Every Noise at Once, and for 12 years was the Data Alchemist at the Echo Nest and Spotify. He has written about music online since before "blog" was a word, and his first offline book, You Have Not Yet Heard Your Favourite Song: How Streaming Changes Music, is available now from Canbury Press.

00:24 Meet Glenn McDonald: Spotify's Data Alchemist

01:50 The Evolution of Music Discovery

08:39 The Role of AI in Music and Beyond

13:29 Challenges and Future of AI in Music

29:14 Navigating AI in the Workplace

31:25 Designing User-Friendly Algorithms

34:59 Challenges with Algorithmic Recommendations

39:42 Evaluating AI and User Testing

47:41 The Future of Music and AI

Thank you for listening to the Design of AI podcast. We interview leaders and practitioners at the forefront of AI. If you like this episode please remember to leave a rating and to follow us on your favorite podcast app.

Take part in the conversations about AI https://www.linkedin.com/company/designofai/

And subscribe to our newsletter for additional resources https://designofai.substack.com/



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit designofai.substack.com

[00:00:00] Find something that you care about whether it's done well, and you can tell whether it's done well.

[00:00:06] If you have those two things, then you will be motivated to learn whatever you need to learn to make it good

[00:00:13] and you'll be able to iterate because you can tell whether it's good.

[00:00:17] If you don't have either of those things, then you're screwed because you can't tell if you're making progress.

[00:00:23] Episode 17. Our guest is Glenn McDonald, who was Spotify's Data Alchemist.

[00:00:27] We're taking a critical look at the effectiveness of algorithms and why GenAI probably isn't the best technology for many of your problems.

[00:00:36] I think the main thing I've learned is actually not to think about it as prediction.

[00:00:41] The moment you start thinking about it as prediction,

[00:00:43] you've internalized an ugly idea that the future is determined, and you're just attempting to guess what it's going to be.

[00:00:54] What you want is them helping you do your things.

[00:00:58] Instead, what you've got is a thing that is helping the business accomplish the business things.

[00:01:05] Glenn McDonald is a music evangelist, algorithm designer, software engineer, and technology strategist.

[00:01:12] He created the music-exploration website Every Noise at Once,

[00:01:15] and for 12 years was the Data Alchemist at the Echo Nest and Spotify.

[00:01:18] He's written about music online since before "blog" was a word.

[00:01:22] His first offline book, You Have Not Yet Heard Your Favourite Song:

[00:01:26] How Streaming Changes Music, is available now from Canbury Press.

[00:01:29] Like every shining new toy, it's like, ah, this is amazing.

[00:01:33] We can use it on everything and then a while later, you know what?

[00:01:37] Microwave is amazing and we don't have to cook everything with a microwave.

[00:01:41] Like we have microwaves but there's still things that are good to put in skillets or in ovens.

[00:01:47] Or put on a stick and hold over a fire.

[00:01:51] In this episode, we discuss the secrets of Spotify's successful algorithmic playlists.

[00:01:56] Why you must be able to evaluate the effectiveness of GenAI.

[00:02:00] What's at risk when you take on a GenAI project.

[00:02:04] Why GenAI is often not the right solution for your problem, and ways of optimizing GenAI to help users.

[00:02:10] Thank you for listening to the Design of AI podcast.

[00:02:13] We interview leaders and practitioners at the forefront of AI.

[00:02:16] If you like this episode, please remember to leave a rating and follow us on your favorite podcast app.

[00:02:23] Well, Glenn, first of all, thank you so much for joining us today.

[00:02:26] We're very excited about this conversation.

[00:02:29] My pleasure.

[00:02:30] So you've been called Spotify's Data Guru and their Data Alchemist.

[00:02:34] And your project Every Noise at Once mapped the endless complexity of music categorization.

[00:02:41] Can you decrypt what you've been working on for the last decade?

[00:02:45] Nice simple question we've got to start with.

[00:02:47] Yeah, well, there was actually a job description in the Spotify HR system called Data Alchemist.

[00:02:55] There wouldn't have been if I hadn't arrived with the Echo Nest acquisition with that as my funny title.

[00:03:01] And my official responsibility was to explore what was possible to make out of listening data.

[00:03:10] We had 200 or 300 million, or by the end 5 or 600 million, people listening for hours every day, and that's a lot of data.

[00:03:20] And so it seemed pretty safe to assume there's stuff that we could make out of that.

[00:03:27] So I got to for a long time experiment with what that stuff might be.

[00:03:34] And for me, as a music fan and a curious person about how the world works and what's going on,

[00:03:41] the goal of that was always to find out the answers to those questions.

[00:03:48] Like what what other music is there?

[00:03:51] I've long said that I live in the paralyzing fear that there's great music out there that I haven't heard yet,

[00:03:58] desperately trying to find it before I die.

[00:04:01] And it turns out I've only become more sure of that after 10 years of exploring music data and music.

[00:04:09] There's amazing music everywhere I look. Like, oh, that's a cool odd band, and then 10 minutes later:

[00:04:16] oh, there are 200 more of those bands. To somebody, that's all normal music, and it's only strange and novel because I'm from somewhere else and never heard it before.

[00:04:28] But isn't it so transformative, what Spotify and streaming have been capable of doing, bringing new sounds to the forefront? And you can go down these rabbit holes that are endless.

[00:04:37] It's magical, and that's why your Every Noise at Once site was a fantastic companion through which to explore different sounds and subgenres and sub-subgenres.

[00:04:46] Yeah, I definitely believe that and that's basically the premise of my book too,

[00:04:51] which is that bringing all this music online, together, connected... it doesn't make technical sense to say that it's a bigger cultural development than the internet itself, because obviously the internet is a superset of that, but

[00:05:07] it feels that way to me. The internet without all the music online was cool: it was amazing to have all this information accessible, and I was impressed that I could go online and download Gary Numan's

[00:05:21] discography listing and know which albums I was missing. But when we got to the point where all the music was actually online, as opposed to all the information about what the music was, then I really felt like we changed something profound.

[00:05:36] I'm really a fan of looking back in history and seeing, you know, what the precedents for now are, and in many respects what you were doing is discovering languages that have yet to be

[00:05:46] collected or documented, the same way that a naturalist would go and find new species of things. Obviously the locals knew all about it, but it's a matter of collecting them. And languages don't evolve as quickly as music does; music is the most ephemeral type of language in the world. So it must have been quite an adventure, right?

[00:06:02] It totally was, and I think when I started I sort of thought I was discovering things. But there's this inherent sort of colonialist hubris in the idea of discovering, as if those things didn't really matter until we arrived.

[00:06:23] I thought of myself as like a knowledgeable music geek when I began this and at some point very early in the process.

[00:06:31] When we started categorizing genres, I had this software that I called the genre miner, which would wander around in the data structures looking for clusters of artists that were related to each other by

[00:06:45] shared audience but weren't tagged in our system yet. Some of those would be real and some of them wouldn't, because the data was weird at the time, so the miner would surface these clusters for me to look at,

[00:06:59] and sometimes I'd look at them and go, I don't know what this is, and sometimes it would be really obvious from the names that there was a thing. So one day it came up with this set of names, and they were all "Name e Name" or "Name & Name," like,

[00:07:16] all right, that's Portuguese, or I don't know what that is, it's some Latin language. They're all duos.

[00:07:24] It must be some scene I'd never heard of. And 10 minutes of internet research later, I realized it's this thing called sertanejo; it's like the country music of Brazil.

[00:07:35] It is the most beloved music of a hundred million people, and I,

[00:07:41] thinking of myself as a knowledgeable music person, had basically never heard of it. At that point I realized, okay,

[00:07:48] I just have to let go of this idea that my

[00:07:52] catalog of factual knowledge is my claim to

[00:07:56] have the right to participate in this.

[00:07:58] I just don't know. The world is broad. Assume that there are things that are just totally normal to millions of

[00:08:05] people that I've never heard of. And that allowed me to reset and

[00:08:12] realize: I'm not discovering things, I'm helping the music world self-organize.

[00:08:17] I'm helping these communities, which exist in the real world and don't rely on me for that,

[00:08:24] to project themselves through this listening data: to themselves, to more listeners, to

[00:08:32] potential audiences elsewhere. It's not discovery, it's facilitation.
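As a rough illustration of the kind of shared-audience mining Glenn describes, here is a minimal sketch in Python. The listeners, artist names, and overlap threshold are all invented for the example; a real miner would run over billions of plays, but the shape of the idea (pair artists by shared audience, group connected pairs, surface the untagged groups for a human) is the same.

```python
# Minimal sketch of audience-overlap clustering, in the spirit of the
# "genre miner" described above. Assumes a mapping from each listener
# to the set of artists they play; all names here are illustrative.
from itertools import combinations
from collections import defaultdict

listeners = {
    "u1": {"Artist A", "Artist B", "Artist C"},
    "u2": {"Artist A", "Artist B"},
    "u3": {"Artist D", "Artist E"},
    "u4": {"Artist D", "Artist E", "Artist B"},
}

def shared_audience(listeners, min_overlap=2):
    """Count how many listeners each pair of artists shares."""
    pair_counts = defaultdict(int)
    for artists in listeners.values():
        for a, b in combinations(sorted(artists), 2):
            pair_counts[(a, b)] += 1
    # Keep only strongly connected pairs as graph edges.
    return {pair for pair, n in pair_counts.items() if n >= min_overlap}

def clusters(edges):
    """Group artists into connected components over the edges."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    seen, out = set(), []
    for artist in list(neighbors):
        if artist in seen:
            continue
        stack, component = [artist], set()
        while stack:
            node = stack.pop()
            if node not in component:
                component.add(node)
                stack.extend(neighbors[node] - component)
        seen |= component
        out.append(component)
    return out

already_tagged = {"Artist C"}
for c in clusters(shared_audience(listeners)):
    if not c & already_tagged:  # surface untagged clusters for a human to review
        print(sorted(c))
```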

[00:08:39] So, not to oversimplify things, but if we were to look ten years into the future: your role,

[00:08:45] your focus, your drive, your outcomes. Is that what a music AI agent should be trying to replicate?

[00:08:53] AI is in a very funny place, where it seems obvious that it has a massive amount of potential,

[00:08:58] and we don't really yet know what to do with that.

[00:09:05] That's a normal state for all technology. Like every shining new toy, it's like:

[00:09:11] Oh, this is amazing. We can use it on everything and then a while later you're like, you know what?

[00:09:18] Microwave is amazing and we don't have to cook everything with a microwave like we have microwaves

[00:09:23] But there's still things that are good to put in skillets or in ovens or put on a stick and hold over a fire

[00:09:30] My current theory, all provisional, is that

[00:09:35] AI is going to be another great tool if we can keep ourselves from screwing it up by forgetting that it's a tool

[00:09:45] As long as we realize and remember that

[00:09:48] Like all these things, going back to math and SQL queries,

[00:09:53] which is what most of my work at Spotify actually boiled down to, in a literal sense:

[00:09:59] They all exist to extend human powers

[00:10:03] human interest curiosity

[00:10:06] initiative. They don't have motivations of their own. And people talk about what "the algorithm" wants;

[00:10:12] there's rarely the algorithm, but even if there were, it doesn't want anything. It doesn't have that power. It has no soul,

[00:10:18] no goals. And I think there's a lot of tech-company inertia to treat the technology as the thing.

[00:10:29] And the technology is never the thing; the people are the thing.

[00:10:35] Every new technological tool goes through just this painful iteration of pretending it's the thing, and then

[00:10:43] eventually we rein it in and realize: oh, it's a lever, a very fancy lever.

[00:10:47] The lever doesn't have a goal about what it's going to lift or move or turn over; it just exists to be put in our hands and let us move something.

[00:10:58] We're at the idiot stage, where we're just mesmerized by our levers and think that they will lead us,

[00:11:05] They will move all the things for us and they will decide what to move and they're not gonna do that

[00:11:11] And I wonder if that's why we're all in this phase of, like, wanting answers from AI so much.

[00:11:15] But not to get too much into that:

[00:11:17] you

[00:11:19] were so heavily involved in what was happening with Spotify, and then obviously the end of your time at

[00:11:24] Spotify came very suddenly for you and a lot of really amazing people who worked there.

[00:11:29] You know, we've worked on some projects with Spotify, and a lot of the amazing, intelligent people there

[00:11:35] who were so integral to that are no longer there.

[00:11:38] So what drew you to then sort of shift from that to now working for an AI startup?

[00:11:46] Well, I certainly assumed after I got laid off that I would find a new music job

[00:11:54] to sort of keep doing the work that I was doing which I didn't think was done

[00:11:59] I don't know that I would have ever thought it was

[00:12:03] done and

[00:12:04] And so I spent a bunch of months talking to pretty much everybody at every level of the music industry,

[00:12:11] And

[00:12:13] Basically sorted all the things into two big groups. There were a lot of

[00:12:17] very interesting

[00:12:19] Principled idealistic startups trying to rebuild some broken part of the music industry and

[00:12:27] some better way and there's no shortage of broken parts of the music industry that could stand to be

[00:12:33] reinvented that way and

[00:12:36] Almost all of those seemed certain to fail. Individually, almost all of them almost certainly will fail; some of them won't.

[00:12:45] Very hard to guess which ones won't and then on the other side there were a lot of

[00:12:51] very

[00:12:53] Established large companies that almost definitely weren't going to immediately fail

[00:12:58] But that it was hard to be sure that there was really room to do interesting things

[00:13:04] And as I was juggling those waiting for

[00:13:09] either a crazily interesting idea to become viable, or a viable

[00:13:14] landing spot to become clearly interesting,

[00:13:17] I was like, what else could I be doing? And my two main candidates were: help alleviate the global climate crisis, or

[00:13:26] help avert the AI apocalypse. And I decided not to be a physics major back in 1985,

[00:13:33] so I really wish I had something to contribute to the

[00:13:38] climate situation, but I kind of don't. But the

[00:13:41] use of AI,

[00:13:43] there I felt like, all right, that's the kind of problem I'm used to dealing with. What do we do with these tools? How do we understand

[00:13:50] what's possible with them, and how do they serve human goals instead of

[00:13:55] subduing them? And a random AI company happened along while I was sorting through all these other things, and they were like, hey,

[00:14:03] Would you consider working with us? And yes, I would consider it what are you doing and

[00:14:09] So that turned out to be an interesting thing to do next. I think I initially was skeptical

[00:14:16] Like, surely these people know all these things about AI, and I've been writing SQL queries

[00:14:21] and arguing against machine learning for 10 years, so how's that going to work? But as soon as I got into it,

[00:14:27] I realized it's the same problems. I might as well just call myself an alchemist at both

[00:14:33] jobs; the goals are the same. We're putting technology in between human beings, we're intervening in human culture

[00:14:41] in some way, and that could be good or bad. And in order to understand which it is, we have to

[00:14:47] Understand the human processes that we're dealing with and

[00:14:52] What happens to them and how we evaluate

[00:14:55] results, and how our ability to evaluate results influences what we choose to do. Once I realized that, this was good too.

[00:15:03] And you know, I thought, having written a book and published it in this weird old world of paper

[00:15:10] publishing, that I sort of had an obligation to help prevent the

[00:15:15] complete

[00:15:17] dissolution of the information world by running it all through AI and producing AI nonsense.

[00:15:24] And to that point whether it's ML or AI

[00:15:28] It seems like these tools fundamentally are there to

[00:15:31] Improve the predictions of what people want

[00:15:34] It seems ultimately that's really what we're trying to craft in some form

[00:15:40] I think the main thing I've learned is actually not to think about it as prediction.

[00:15:46] I think the thing that happens to you when you start thinking about things as prediction, and I think this applies to

[00:15:54] thinking about LLM outputs as predicting text, and it also applies to A&R in music as, like, predicting hit artists:

[00:16:04] the moment you start thinking about it as prediction, you've sort of internalized the

[00:16:09] Sort of ugly idea that the future is kind of determined and you're just

[00:16:16] Attempting to guess what it's going to be and thus profit by anticipation

[00:16:21] And I think it's a lot more productive to

[00:16:26] not think about the future as something you're predicting, but as something you're making.

[00:16:30] And this is sort of a phrasing, a linguistic point, because you could talk about the same things

[00:16:39] As prediction or

[00:16:41] as production, but it changes how you think about it. If you're talking to an artist,

[00:16:46] or if you talk to, like, a label, and they're trying to guess who the big artists are going to be,

[00:16:52] That tells a story about how they go about things and if instead

[00:16:57] They are trying to figure out how to build careers

[00:17:01] That's very different story

[00:17:03] It's the same all the way through, with all technology. You think about what you're doing with AI as: all right,

[00:17:09] What am I trying to make different in the future?

[00:17:13] This Design of AI episode is brought to you by PH1, a research and strategy

[00:17:18] consultancy that helps clients build AI products that customers want.

[00:17:22] Contact them about product discovery research to answer critical questions about what to build,

[00:17:27] competitive analysis to find out how to gain an advantage,

[00:17:30] service and customer analysis to identify the best use cases and value drivers, and workshops and concept testing

[00:17:37] to validate what will work and to fine-tune products.

[00:17:40] PH1 has worked on products for Spotify, Microsoft, the National Football League, Dell,

[00:17:45] Mozilla, Bell, and various health and higher education groups.

[00:17:49] Bring in an expert to make sure your products and teams are focused on what customers want.

[00:17:53] Visit their website to book an intro call: ph1.ca

[00:17:59] So maybe you could walk us through your point of view from the perspective of an algorithmic playlist

[00:18:04] What do you view is the role of

[00:18:07] delivering to a user an algorithmic playlist?

[00:18:10] You know whether it's a daily mix or the weekly mix what was your point of view on that

[00:18:14] So the goal for me is doing something for the listener that the listener wants but would have trouble doing themselves.

[00:18:24] So Daily Mix, which was originally my thing: we implemented it at Rdio, at the

[00:18:30] Echo Nest, before we got acquired by Spotify, and then we did a better version at Spotify.

[00:18:36] The goal of that was

[00:18:38] Recognizing that people's taste is rarely singular like most people like a bunch of different things and they like them in different ways and they use them in different ways in their life

[00:18:49] And

[00:18:51] Most people would find it tedious to manually construct that taxonomy for themselves

[00:18:58] But we had enough data. We could try to do it for them. So that was the goal of daily mix is

[00:19:04] organize your catalog of listening the way you would if you had the time and initiative to, but you probably don't.

[00:19:14] It wasn't to recommend things it was just to do this useful thing for you

[00:19:20] And having done that then it's

[00:19:22] Possible to say all right well having found these are the blues artists you like

[00:19:28] Separate from the black metal artists you like then it's possible to say oh and

[00:19:32] now that I've collected those 20 blues artists, here are 20 more blues artists that are not only just blues artists, but

[00:19:39] The kinds of blues artists that you like

[00:19:42] Like there's no reason you need to like those too

[00:19:45] the ways in which they're similar may not be the ways that you care about

[00:19:50] But at least it's useful to put them there and say

[00:19:53] You might consider these it might be interesting and useful to you

[00:19:56] To have these given to you as opposed to trying to find them yourself by

[00:20:00] navigating through 20 artists' "fans also like" lists and writing down notes on a piece of paper.
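To make the Daily Mix idea concrete, here is a hedged sketch of the two steps described: a listener's library already separated into taste clusters, and each cluster extended with similar artists they don't yet have. All artists and similarity lists below are invented for illustration, not real Spotify data.

```python
# Sketch: extend each taste cluster with similar-but-new artists.
# In practice the similarity lists would come from shared-audience
# data like the "fans also like" lists mentioned above.
my_clusters = {
    "blues": {"Muddy Waters", "Howlin' Wolf"},
    "black metal": {"Mayhem", "Emperor"},
}

similar = {  # made-up similarity rankings per artist
    "Muddy Waters": ["Howlin' Wolf", "John Lee Hooker", "Lightnin' Hopkins"],
    "Howlin' Wolf": ["Muddy Waters", "John Lee Hooker"],
    "Mayhem": ["Emperor", "Darkthrone"],
    "Emperor": ["Mayhem", "Burzum"],
}

def extend_cluster(cluster, similar, limit=3):
    """Suggest artists similar to a cluster's members but not yet in it."""
    scores = {}
    for artist in cluster:
        for rank, candidate in enumerate(similar.get(artist, [])):
            if candidate not in cluster:
                # Earlier in a similarity list = stronger signal.
                scores[candidate] = scores.get(candidate, 0) + 1 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:limit]

for name, cluster in my_clusters.items():
    print(name, "->", extend_cluster(cluster, similar))
```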

[00:20:07] Well, maybe let's take the analogy and move it over to something that LLMs are being leveraged for, which is ad tech.

[00:20:13] And let's keep the analogy of music.

[00:20:16] Brittany led some of the projects we worked on for Spotify for Artists,

[00:20:19] the platform artists use to look at analytics and inform decisions.

[00:20:24] Now one of the dilemmas we always had in that project was

[00:20:28] If an artist isn't trained on how to understand or manipulate data they tend to make uninformed decisions based off that data

[00:20:36] and often are actualizing their own biases rather than anything factual, because usually they don't have a frame of reference.

[00:20:43] Usually they're unsure how to

[00:20:46] monetize things appropriately. They don't know how campaigns should be developed.

[00:20:50] So from that perspective of a very clear customer journey like I have a system collecting data

[00:20:55] and I need to generate more revenue. How would you leverage an LLM in that kind of context, in ad tech?

[00:21:03] It's a fair question, and I would certainly not start with an LLM, because that's a very complicated machine

[00:21:10] whose outcomes are very tricky to predict, and so I would start with math.

[00:21:16] This happened a lot at Spotify:

[00:21:21] I would, not have to, but I would be the one to try to compare methods,

[00:21:27] because Spotify's organization is very committed to machine learning, and so it hired lots and lots of machine learning

[00:21:34] engineers. And if you hire machine learning engineers, you will get machine learning, because

[00:21:39] that's what they do and that's what you hired them to do. And that's not always the best

[00:21:43] solution to things. Sometimes it is, sometimes it isn't. And it would be my role to

[00:21:50] Take a given problem and say all right

[00:21:52] what's the baseline here? What can we do with the simplest tools possible? And does that help us

[00:22:00] understand whether the machine learning is adding

[00:22:03] value, or inscrutability?

[00:22:08] I haven't done anything in ad tech; I tried to stay away from

[00:22:11] advertising as much as I could. But

[00:22:16] I'm about 99.7% sure that the same things would apply there:

[00:22:22] that in order to have any sense about whether your LLM things are good,

[00:22:28] you need to know what the right answers are. Like, are there simple right answers?

[00:22:34] Are there good answers? Can we get good answers easily? And then

[00:22:37] use those to tell whether our fancy methods are giving us great answers or

[00:22:43] Like average answers and I think a lot of the time

[00:22:47] We evaluate new tech against really poor baselines like against randomness or against the most popular things or like you said against just like

[00:22:58] Our intuitive guesses

[00:23:01] And in those contexts sometimes the fancy tools seem like oh they're clearly better

[00:23:07] But then when you compare them against, oh, what if we just did

[00:23:11] some math, and you realize, oh,

[00:23:15] the math's even better, and it's a lot simpler, and it

[00:23:19] goes weird in a lot fewer ways.
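To make the baseline discipline concrete, here is a minimal sketch: before trusting a fancy model, score a just-do-some-math baseline (here, raw popularity) on the same held-out data, and require the fancy method to clearly beat that number. All data below is toy data invented for the example.

```python
# Sketch: evaluate against a simple baseline before reaching for fancy tech.
from collections import Counter

train = [("u1", "song_a"), ("u1", "song_b"), ("u2", "song_a"),
         ("u2", "song_c"), ("u3", "song_a"), ("u3", "song_b")]
heldout = {"u1": "song_c", "u2": "song_b", "u3": "song_c"}  # each user's next play

def popularity_baseline(train, k=2):
    """Recommend the k most-played songs to everyone. This is the 'just math'."""
    counts = Counter(song for _, song in train)
    top = [song for song, _ in counts.most_common(k)]
    return lambda user: top

def hit_rate(recommend, heldout):
    """Fraction of users whose held-out play appears in their recommendations."""
    hits = sum(heldout[u] in recommend(u) for u in heldout)
    return hits / len(heldout)

baseline = popularity_baseline(train)
print("popularity baseline hit rate:", hit_rate(baseline, heldout))
# A fancy model only earns its complexity if it clearly beats this number,
# and is worth its extra cost and inscrutability.
```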

[00:23:24] So if it were me, if I had a change of emotional composition and suddenly decided to work on ad tech,

[00:23:30] that's what I'd do. I'd start with: right, where are we now?

[00:23:35] What where can we get with simple tools and

[00:23:40] Then we're calibrated, and we say: all right, what could be better? And can we get to the betterness with

[00:23:47] fancy magic, or is the fancy magic just more fancy than it is magic? You made a

[00:23:54] compelling point, which is basically that LLMs, and

[00:23:58] ML and algorithms in general, shouldn't be leveraged for the purpose of prediction; they should be for

[00:24:03] assistive

[00:24:04] functions, ones where there's a clear target and they can improve the fidelity or the accuracy of the result.

[00:24:11] from that perspective

[00:24:13] Where do you think LLMs are well suited to support? Let's take, for example, in your world:

[00:24:19] adding taxonomies to music, retagging them, recompiling them, re-associating them,

[00:24:25] making those tags context-aware. Are those good functions for LLMs, or what would you say are good functions?

[00:24:33] My feeling is that LLMs in particular at this point are amazingly good at reading

[00:24:40] They're language models, so if you just sort of think about them for a moment, it makes sense that they're

[00:24:48] really good at reading human text, because that's how they've been created.

[00:24:53] They're pretty good at remembering things, although their notion of memory is a lot different than a human

[00:25:01] one, and so that's part of why they're good.

[00:25:05] They are not very good at thinking.

[00:25:08] That could be hard to realize because they're very good at recalling when people have thought

[00:25:14] And it can be tricky to tell the difference between a thing that's thinking and the thing that is

[00:25:20] recalling a good thought that someone else had. They're terrible at writing; they're not good at

[00:25:26] creating. But they're amazingly good at reading.

[00:25:31] So for me most interesting applications start with that with where is it that we have

[00:25:39] knowledge represented as human language

[00:25:43] that it's been hard to convert into

[00:25:46] just mundane structured data that we can then do things with.

[00:25:51] And one of the best examples to me is web scraping. If you have a page that has a blog post that some

[00:25:59] band wrote about when their new album is coming out,

[00:26:02] That's good information like that that page represents the factual information that this album has a title

[00:26:09] It's by this band and it's coming out on September 24th and the fans of that band want to know that

[00:26:15] And they can find that human fans can find it out by reading that page

[00:26:19] And it'd be useful to have that data somewhere too and getting that data out of a web page with old tools is really hard

[00:26:27] because it's written in the middle of text. You don't want to do it with Beautiful Soup,

[00:26:31] you're not going to do it with regexes. It's a pain.

[00:26:33] The best way would have been to have a human read the page and then type it into a database.

[00:26:39] An LLM can do that. It's amazingly good at that. It can take a page of

[00:26:44] weirdly written

[00:26:46] idiom, mentioning a bunch of releases, and pull out the structured data. That is impressive.
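A minimal sketch of this read-text-into-structured-data use case, assuming the OpenAI Python client; the model name and the page text are placeholders, and any chat-style LLM API would work the same way. A real pipeline would also validate the model's output before trusting it.

```python
# Sketch: ask an LLM to distill a band's blog post into structured facts.
import json
from openai import OpenAI

page_text = """Big news, friends: our fourth record, 'Saltwater Radio',
finally lands September 24th. Preorders open next week!"""  # placeholder page

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Extract album announcements. Reply with JSON only, "
                    'shaped like {"album": str, "release_date": str}.'},
        {"role": "user", "content": page_text},
    ],
)

# Assumes the model obeyed the JSON-only instruction; a real harness
# would catch parse errors and retry or reject.
facts = json.loads(response.choices[0].message.content)
print(facts["album"], facts["release_date"])  # ready to type into a database
```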

[00:26:53] That's the kind of thing that these tools are really good at: read some text,

[00:26:59] distill facts out of it. And the remembering is not a trivial part, and so

[00:27:07] the use of LLMs to basically do searches that

[00:27:13] map straightforwardly to information that we have, but not to the way it's been organized,

[00:27:19] is another good example

[00:27:22] So many things you do in ChatGPT,

[00:27:25] for example, I think would be better as Google searches, where you'd like to get an actual authoritative

[00:27:31] source that explains something

[00:27:33] If it's something about how do I apply a

[00:27:37] JavaScript library to a problem I have, and the problem I have is kind of backwards from the way the JavaScript library is documented.

[00:27:46] And a Google search would bring me up the documentation for the library

[00:27:50] But it wouldn't be obvious to me how to apply it to solve my problem

[00:27:54] The LLM, because it has broken that up into sentences and understands it,

[00:27:59] can give me the explanation I need, which is the human language rearranged,

[00:28:06] but as kind of

[00:28:08] human input I can read and go: oh, I see,

[00:28:12] I need to put this thing in the middle of my process and call it in this way

[00:28:17] And now I can understand it so those are great uses and things we have not been able to do before

[00:28:24] Having it write me an essay? No, I don't want that. Having it write me a song? Boy,

[00:28:29] do I not want that. That is

[00:28:32] existentially useless to me.

[00:28:35] So if we take a step up higher, right because you know you obviously have been

[00:28:39] Working in and with data for so long and

[00:28:43] There are now a lot of designers and researchers who are really having to start working with data

[00:28:51] and probabilistic interfaces for the first time, or in a meaningful way for the first time.

[00:28:57] What advice do you have or what do you believe that they need to know about working with data and probabilistic

[00:29:03] Interfaces to be successful in this new era?

[00:29:07] I would start with just

[00:29:10] a big hug, and then say: you don't have to.

[00:29:15] Like, if you get into it and you're like, I don't know how to do this,

[00:29:21] Maybe it's telling you that this is not yet the right solution for your problem

[00:29:27] And I think it's really hard, if you're trying to make a living in a tech field,

[00:29:32] to

[00:29:33] say that.

[00:29:34] To, like, turn around to your boss who's asked you to figure it out:

[00:29:38] you know, "our

[00:29:40] future relies on embracing AI before our competitors, so you have three weeks to figure out how AI is going to transform

[00:29:48] chocolate chip cookie baking."

[00:29:51] You'd better come up with an answer. But I think often the answer is:

[00:29:56] Yeah

[00:29:57] there's nothing here, or there's something here, but it's peripheral.

[00:30:03] We could use it to

[00:30:05] read our customer emails and distill them in a better way than we have time to,

[00:30:11] But it's not gonna help us bake the cookies. It's hard to be the one who pushes back on a

[00:30:18] Fevered corporate enthusiasm for something

[00:30:23] But if you can't do that,

[00:30:27] Then

[00:30:28] you're probably going to do a bad job of whatever you're trying to figure out. The thing I always say to people,

[00:30:32] I talk to like college students and they ask how do careers work how is this job thing

[00:30:38] Work and I always tell them

[00:30:40] the thing to do is find something where you care

[00:30:45] whether it's done well, and you can tell whether it's done well.

[00:30:49] So if you have those two things

[00:30:51] Then you will

[00:30:53] Be motivated to learn whatever you need to learn to make it good

[00:30:56] And you'll be able to iterate because you can tell whether it's good

[00:31:00] And if you don't have either of those things

[00:31:02] Then you're screwed because you can't tell if you're making progress and if you don't care

[00:31:07] I'll just stop

[00:31:09] And if you have to get into AI

[00:31:12] because someone told you to, but you can't tell whether it's good or not and you don't care,

[00:31:17] then

[00:31:18] it's going to be bad,

[00:31:20] and you should just not.

[00:31:25] Thinking then about how to evaluate if something's working or not

[00:31:29] I'd like to go to the piece about how a designer can influence the interface of an algorithm

[00:31:35] I

[00:31:36] use my Spotify a lot. I use a lot of these tools, and one of the

[00:31:41] areas of design that has not been figured out yet is the ways through which

[00:31:46] a user can use a graphical user interface to train the data set.

[00:31:51] I know for myself, when I have an algorithmic playlist, a lot of times

[00:31:56] I get frustrated, even angry, and just want to scrap everything in my profile, because I feel like it doesn't know me.

[00:32:02] I feel like I've lost a friend

[00:32:04] But the interface doesn't give me a way to tell it that

[00:32:09] What do you believe you as a data person need as inputs from a designer to enable this kind of

[00:32:17] yin and yang of learn and fail, and learn and fail, in a

[00:32:22] historically effective way?

[00:32:25] I think the problem is not the designer in that case the problem is that the product has been set up to do a thing

[00:32:34] That isn't quite what you actually want

[00:32:37] I think you described a

[00:32:41] sad and common failure mode of

[00:32:45] algorithmic technology,

[00:32:47] which is that

[00:32:50] what you ultimately want is them helping you do your things, and instead what you've got is a thing

[00:32:57] That is helping the business accomplish the business things

[00:33:02] And those are not quite aligned

[00:33:05] Sometimes they're close enough that you can kind of pretend to not notice that for a while

[00:33:11] And it rarely holds up for very long

[00:33:14] So by the time your Spotify algorithm has gone awry, as people put it,

[00:33:20] it was awry to begin with.

[00:33:23] Take Discover Weekly.

[00:33:25] It's a very successful product,

[00:33:27] But it doesn't make sense. It treats you as if you are one blob of taste and

[00:33:34] trying to find more of that blob of taste. And if you are in fact one blob of taste, if you only like one thing,

[00:33:42] And the other people who like that thing only like that thing

[00:33:46] Then it works great

[00:33:48] But if those things are wrong, then it's not so great. It uses vector embeddings, which

[00:33:57] turn the world into this one space, and then it

[00:34:02] tries to navigate in it. But if you like

[00:34:06] Belgian hip hop and California surf rock

[00:34:11] equally, then

[00:34:12] your place in that space is somewhere between those two, which is

[00:34:18] something else entirely. And, you know, going back to Daily Mix,

[00:34:22] that was the opposite approach. It was:

[00:34:24] don't try to average, try to separate your things,

[00:34:30] So that each one of them can be a jumping off point for exploration
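A toy illustration of the averaging failure Glenn describes: a listener who equally likes two distant styles gets represented by a midpoint that is nearest to neither. The 2-D vectors below are invented stand-ins for real embeddings; the point is only the geometry.

```python
# Sketch: averaging two distant tastes lands the listener somewhere else
# entirely, while keeping tastes separate preserves both.
import math

genres = {
    "belgian hip hop":      (0.9, 0.1),
    "california surf rock": (0.1, 0.9),
    "adult contemporary":   (0.5, 0.5),  # "something else entirely"
}

def nearest(point, genres):
    """Which genre's vector is closest to this point?"""
    return min(genres, key=lambda g: math.dist(point, genres[g]))

liked = [genres["belgian hip hop"], genres["california surf rock"]]

# One-blob model: average everything into a single taste vector.
centroid = tuple(sum(c) / len(liked) for c in zip(*liked))
print("averaged taste lands in:", nearest(centroid, genres))
# -> adult contemporary, which the listener never asked for.

# Daily-Mix-style model: keep each taste separate and serve both.
for taste in liked:
    print("separate taste lands in:", nearest(taste, genres))
```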

[00:34:36] You know, Discover Weekly is far from the only thing like this. Your recommendations in any

[00:34:42] algorithmic platform are probably going to

[00:34:47] experience this

[00:34:48] effect, unless you have

[00:34:52] Software where the people have really embraced the idea that the algorithms are trying to help the people instead of the business

[00:34:58] But YouTube notifications are awful for the same reason: they're trying to average me out and assume that what I listened to last week is me.

[00:35:05] Usually I was on it by accident.

[00:35:07] Everything that I collect online has a different purpose, and they don't understand the intent.

[00:35:11] Without understanding intent, they can't serve it or prioritize it. Right, right, because it's not just that you have different areas of taste;

[00:35:18] It's they mean different things to you and you interact with them in different ways

[00:35:23] So you like Belgian hip hop and you like California surf rock,

[00:35:26] But the California surf rock is like comfort music and

[00:35:31] All you ever want to hear is the greatest surf rock of all time

[00:35:39] You're not interested in the new songs by those bands, if they even still

[00:35:45] exist, whereas with Belgian hip hop you want to hear the newest stuff and only the newest stuff.

[00:35:45] So not only is the averaging

[00:35:47] losing you in the space between those two styles, but it's losing you in

[00:35:52] the mode space between what you need.

[00:35:55] In Spotify Wrapped a few years ago,

[00:35:58] we did this thing called the listening personality, and it was a Myers-Briggs-style

[00:36:04] four-character breakdown of the way you listen to music, independent of what music it is.

[00:36:12] And it was Wrapped, so it was presented as a shareable thing, and the product goal of Wrapped is

[00:36:18] virality and brand marketing.

[00:36:21] But the data I'd done that with, the data for it, was real, and I'd been working on it for years.

[00:36:27] And my goal of having that data was to have a framework for the humans and the algorithms to communicate

[00:36:35] To be able to express

[00:36:37] What is it that's different about my

[00:36:39] California surf rock needs versus my Belgian hip hop needs? In one I want familiarity and,

[00:36:48] you know, popular validation, only the classics, and in one I want

[00:36:54] newness and

[00:36:58] genre coherence. In one I'm happy that the same artists play over and over again; in one I

[00:37:05] want variety.

[00:37:07] So these four dimensions of listening were an attempt to phrase those such that you as a human

[00:37:14] could override them,

[00:37:16] That you could have a way of talking to the algorithm to say all right today

[00:37:22] I

[00:37:23] Usually want the newest stuff in Belgian hip hop, but today just like play me my favorites or today

[00:37:29] I'm willing to hear if there's any new California surf rock

[00:37:33] But that is a vastly different model than "hold still and we'll tell you what to listen to next," and it implies a lot more agency

[00:37:43] on the part of the listener.
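A hedged sketch of what listener-overridable dimensions might look like in code: candidate tracks are scored with weights the listener can adjust per session, instead of one fixed objective. The dimension names, numbers, and tracks are illustrative, not Spotify's actual model.

```python
# Sketch: a scoring function with knobs the listener can turn per session.
tracks = [
    {"title": "Classic Surf Hit", "novelty": 0.1, "popularity": 0.9},
    {"title": "Brand New BE Rap", "novelty": 0.9, "popularity": 0.2},
]

# Defaults learned from listening history; the listener can override today.
profile = {"novelty": 0.8, "popularity": 0.2}          # usual: newest stuff
todays_override = {"novelty": 0.1, "popularity": 0.9}  # "just play my favorites"

def score(track, weights):
    """Weighted sum over the listening dimensions a human can adjust."""
    return sum(weights[dim] * track[dim] for dim in weights)

weights = {**profile, **todays_override}  # today's request wins
ranked = sorted(tracks, key=lambda t: score(t, weights), reverse=True)
print([t["title"] for t in ranked])  # the override surfaces the classics first
```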

[00:37:46] And

[00:37:47] so far, at this juncture of technology,

[00:37:49] agency is

[00:37:52] not usually the goal. And I think this applies to the artist side of music too.

[00:37:58] Spotify for Artists has a lot of

[00:38:01] good information

[00:38:03] It does not have a lot of

[00:38:06] powerful decision-making

[00:38:09] that you as an artist can do.

[00:38:12] When Spotify introduced Discovery Mode, this was sort of a good example.

[00:38:17] In Discovery Mode, you as an artist have the opportunity to opt in, so your agency is: I can push the button that says okay.

[00:38:25] And when I do that, it says: all right, Spotify can do a bunch of magic that's under Spotify's control

[00:38:31] to

[00:38:31] try to give me more plays,

[00:38:34] in return for lower royalties.

[00:38:37] But for me, what I wanted

[00:38:41] Spotify to do in something like that was give the artist more power.

[00:38:48] Not just ask them to submit to magic but

[00:38:52] If you're an artist, and your next song, you know, is a ballad, and

[00:38:57] you usually do, you know, jams, you would be able to say: all right, this next song,

[00:39:04] I think it'd be better pushed to this segment of my fans, the fans that I share with

[00:39:11] Celine Dion, not the ones that I share with Foo Fighters, or, you know, whatever it is.

[00:39:18] That seems more interesting to me giving the artist more tools to try to accomplish something they want to do as opposed to just

[00:39:27] giving even more control to algorithms.

[00:39:32] I think that that's so

[00:39:34] important and interesting. And, you know, you said earlier in this conversation that often we'll evaluate based on poor baselines as well.

[00:39:42] And so let's talk about

[00:39:45] Evaluating success

[00:39:47] Let's talk about

[00:39:48] understanding. As specialists go into this new space, you have product teams that are trying to implement generative

[00:39:55] AI. Many are struggling to make the tech work the way that they need it to, or

[00:39:59] struggling to find the proper use case.

[00:40:02] What should these product teams be concerned with to make these implementations meaningfully successful?

[00:40:11] So I think it all comes down,

[00:40:13] to me, to: can you generate results that you're going to evaluate?

[00:40:18] and

[00:40:20] This was one of the things that allowed me to feel like I was doing a good job at Spotify

[00:40:26] If I was working on algorithmic playlist generation, I would generate playlists

[00:40:31] I would generate them from a lot of different seeds, whether artist-based or genre-based or whatever,

[00:40:36] And I would look at them and I would look at

[00:40:39] 50 of them side by side and

[00:40:42] I would take whatever time it took to get to the point where I could evaluate them so I would be like

[00:40:48] I'm supposed to be generating German hip-hop now. What do I know about German hip-hop? Well, start the stopwatch nothing

[00:40:54] Okay, here's a list

[00:40:57] All right, I studied German for two years in high school

[00:40:59] So I can look at these names and have some idea: do any of these

[00:41:03] look German at all? Okay, it looks sort of German. Good.

[00:41:07] So let's take the top three and let's do some Google research like this is this music

[00:41:13] It's a real thing the world's full of it

[00:41:15] Let's go find out who are the top rappers in Germany are these them

[00:41:20] And 15 minutes later, I'm like all right now. I know a little bit about German hip-hop

[00:41:25] It helps that I've done this a lot so it's not like I have to have explained to me like what German is or what hip-hop is

[00:41:34] And the more you do this, the more you learn. It's actually pretty quick:

[00:41:40] It takes a lifetime to

[00:41:43] to learn the deep nuances of anything, but it doesn't take very long to get a sense of

[00:41:50] whether things are sort of right or totally wrong.

[00:41:55] And so if you can do that

[00:41:58] then

[00:41:59] It doesn't matter what technology you're employing. You could be employing the most

[00:42:03] absurd, randomness-producing

[00:42:07] machines,

[00:42:07] But if you can evaluate the results and then you can iterate and you can see if they get better

[00:42:14] Then

[00:42:15] At least you have something you can do

[00:42:18] And the worst case, at

[00:42:21] the other extreme: if you can't evaluate it, that's why I tell people that you have to do something where you care, and can tell whether it's good.

[00:42:28] If you can't evaluate then you have to wait for it to be done for you and

[00:42:33] That is incredibly slow. So if you don't know, if you look at that list and you're like,

[00:42:38] I don't know if this is good German hip-hop, and I'm unwilling to find out,

[00:42:43] Then the thing you do is you schedule a user test

[00:42:46] And that user test can't be run for another three months, because it's lined up behind

[00:42:51] other user tests that conflict with the

[00:42:54] same test-cell assignments. And once it's run, it has to wait for the data science team to evaluate the

[00:43:00] p-values and the results, and decide whether it was

[00:43:04] powered or not. And then something goes wrong, and we have to do it better again.

[00:43:08] And if it takes you six months

[00:43:10] To get an answer about whether your current iteration is good then to do 10 iterations

[00:43:18] we've moved on to brain implants, and it doesn't matter anymore.

[00:43:23] I would also argue that I think as an industry

[00:43:26] We also just need to get better at doing user testing that doesn't require so many parameters around it as well

[00:43:32] Fair point. But, you know, anything in the gap between "I can run it in

[00:43:37] 30 seconds and look at the results myself, and then change something and run it again" and "I have to schedule a test that takes six months and

[00:43:45] Five teams to get me an answer that is maybe not

[00:43:49] what I need." There are a lot of variables in between, but the closer you are to

[00:43:57] "I can tell if it's good or bad, and I can iterate until it's better,"

[00:44:04] the more weirdness you can take on and still have a hope of telling whether it's good or not.

[00:44:12] So

[00:44:15] Just on that: let's say, for example... Brittany and I went to an AI event in Toronto, and

[00:44:20] there was somebody presenting

[00:44:21] an LLM that evaluates your legal documents, and there it's easy to have an accuracy percentage as an evaluation metric.

[00:44:29] But let's say we met somebody who was building a startup that was generating a resume for you

[00:44:36] How can you build evaluation metrics into something where either

[00:44:41] the evaluator is at arm's reach, in this case the employer, or where it's highly subjective, highly emotional, and highly irrational?

[00:44:50] How can you build that evaluativeness into it?

[00:44:55] Well, the answer in some cases is: you can't, or it's really hard to. And then that implies that

[00:45:04] this is going to be a tough application of that technology. So yeah, I think

[00:45:11] sometimes it's just, yeah: good luck.

[00:45:14] I mean, you could try that, but it's going to be hard. So, using an LLM to generate

[00:45:21] implementations of Python functions with given definitions and given inputs and outputs?

[00:45:27] Yeah, we can do that. It's got to be able to do this, and you're going to run a test at the end, and it passes or fails.

[00:45:35] Great. That's not subjective.

[00:45:39] It's gonna work or it's not gonna work
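A minimal sketch of that objective pass/fail evaluation: run a generated function against known input/output pairs and get an unambiguous answer. The generated source below is a stand-in for LLM output, and a real harness would sandbox the exec rather than running untrusted code directly.

```python
# Sketch: test-based evaluation of a generated Python function.
generated_source = """
def slugify(title):
    return title.strip().lower().replace(" ", "-")
"""  # placeholder for what an LLM might return

test_cases = [  # the given inputs and outputs
    ("Hello World", "hello-world"),
    ("  Trim Me ", "trim-me"),
]

def passes_tests(source, cases):
    """Load the generated definition and check it against every case."""
    namespace = {}
    exec(source, namespace)  # CAUTION: sandbox this in a real system
    fn = namespace["slugify"]
    return all(fn(arg) == expected for arg, expected in cases)

print("pass" if passes_tests(generated_source, test_cases) else "fail")
```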

[00:45:41] Generate a resume?

[00:45:43] I mean, the test there is: does this person get hired?

[00:45:49] We can't run that test

[00:45:50] a thousand times like we can run the Python functions.

[00:45:54] So that's going to be hard. I think using an LLM to do dating matches,

[00:46:01] Maybe it would work but yeah, it's gonna be hard to test

[00:46:05] And so if you if you take on something where

[00:46:09] It's hard to test

[00:46:12] You know that that's a different thing you're gonna have to have something other than testing that you can rely on whether it's

[00:46:18] your intuition or

[00:46:21] Really good user testing some really good method for user testing some other way to

[00:46:28] Get confidence or some reason why the confidence doesn't matter

[00:46:33] In some LLM

[00:46:35] applications, like Midjourney, generating images from text,

[00:46:40] I think we're at the point where

[00:46:44] that's in some sense not very good,

[00:46:47] but it's being evaluated at the moment sort of on amusement value.

[00:46:53] It's not for functional purposes. If I were trying to use it to generate

[00:47:00] images for real needs (I'm sort of speculating that such a thing exists),

[00:47:04] if I needed a specific image that didn't exist, but it would be reasonable that it did,

[00:47:11] it would be hard to use current AI

[00:47:16] image-generation tools for that, and hard to evaluate and hard to test.

[00:47:21] But if all I have to do is, like,

[00:47:24] it has to look funny,

[00:47:26] like, all right, I need something comic here,

[00:47:32] then, okay: here's a picture, and it

[00:47:36] conveys that idea.

[00:47:38] Thankfully Twitter does that for you for free. But you're obviously very knowledgeable about music, and

[00:47:45] if we were to build a renegade startup in the music world, and we're going to say it's AI-powered because that's what the investors need,

[00:47:52] what would that music agent do? How would you evaluate whether it's working? I'm asking this because, again, we've all been embedded in

[00:47:59] the streaming world, and the definition of a good listen is hard to know. Is it a good listen if you've listened to the whole thing through?

[00:48:06] Is it that you saved it? Is it that you followed the person?

[00:48:10] Is it that you listened all the way through? Does that mean you were just checked out and not even caring about the music?

[00:48:15] How do you evaluate success in this kind of world? Because there's tons of data in music.

[00:48:19] My favorite success metric for music things was returning to them over time. I used this

[00:48:30] for just about everything,

[00:48:32] whether AI or algorithms or not.

[00:48:36] Like, does someone listen to or like an artist? The most straightforward way is to add up the time or the number of plays, and,

[00:48:46] Like you say neither of those are necessarily

[00:48:50] true. I think you see this in Spotify Wrapped sometimes: you get to the end of the year and, "my favorite song was that song? I don't remember that song." And then you think about it, like,

[00:49:02] oh wait, yes: I left my phone in my pocket on loop by accident when I left it in the car and went to the beach in June, and

[00:49:12] that song played 85 times before I came back to the car and realized it.

[00:49:19] That wasn't my favorite song. But if you come back to something

[00:49:23] repeatedly, every day for a week, or every few weeks all year, that's a much better sign.

[00:49:32] And that's true for artists, or any level of abstraction. If you

[00:49:38] play songs from the same genre, different artists but the same genre, repeatedly over time, it's a pretty good sign that you're actually

[00:49:45] interested in that genre.
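A small sketch of that returning-over-time metric, computed from a toy play log: counting the distinct days a (user, item) pair recurs discounts the phone-in-the-pocket loop that inflates raw play counts. The log rows and date format here are invented for the example.

```python
# Sketch: "returning over time" as distinct return days, not raw plays.
from collections import defaultdict

plays = [  # (user, item, date) rows from a hypothetical listening log
    ("u1", "song_x", "2024-06-01"), ("u1", "song_x", "2024-06-01"),
    ("u1", "song_x", "2024-06-01"),  # the 85-times-on-loop day, compressed
    ("u1", "song_y", "2024-06-01"), ("u1", "song_y", "2024-06-03"),
    ("u1", "song_y", "2024-06-09"), ("u1", "song_y", "2024-06-20"),
]

def return_days(plays):
    """Distinct days on which each (user, item) pair was played."""
    days = defaultdict(set)
    for user, item, date in plays:
        days[(user, item)].add(date)
    return {key: len(dates) for key, dates in days.items()}

for (user, item), n in sorted(return_days(plays).items()):
    print(user, item, "returned on", n, "day(s)")
# song_x: 1 day despite many plays; song_y: 4 days, a much better sign.
```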

[00:49:47] If you return to a playlist,

[00:49:50] an editorial playlist, or other users' playlists, over time,

[00:49:54] it's a good sign. That is usually

[00:49:57] how I evaluated things. Now, the tricky part is that that's really easy to evaluate

[00:50:02] over historical data. It's hard to evaluate in a user test, which is usually limited in time.

[00:50:08] So you're like, all right, I have seven days:

[00:50:12] did they come back to it during the seven days? And then that's only really

[00:50:17] good if they got it early in the seven days. I fundamentally

[00:50:24] disagree with you

[00:50:25] there, because I think that we're in this time in the world where people are over-relying on data, or giving it an over-strong

[00:50:33] credibility, right? And you've said several times that a lot of the data is lacking context, or the

[00:50:41] algorithms are lacking context, to be able to know what to do about it.

[00:50:44] And I believe that you probably have just had

[00:50:49] too siloed an experience of user research, because there are so many ways that data and user research can collaborate better.

[00:50:58] So for example, if you would come and say, hey, we're seeing that these people are returning to this content over and over again,

[00:51:07] I as a user researcher

[00:51:09] I don't need to then have them do a diary study where you know they go back to that exact same thing in that time

[00:51:14] We can look at what's been happening historically and have conversations with them that are deeper and more meaningful about

[00:51:19] When they're listening to different things and what's happening and I think that there's just this fundamental

[00:51:26] divergence right now where we need to come back to a place where I think user research and

[00:51:31] Data are working more collaboratively and I think more transparently

[00:51:35] where we're each influencing each other's work, and we're able to then

[00:51:40] strengthen the user research but strengthen the data as well.

[00:51:44] And so I guess my question to you is if you were in a space where that collaboration was happening more meaningfully

[00:51:51] how would you want to engage with the user researcher, with the user testing, from that perspective, to give you

[00:51:59] The strongest ammo

[00:52:01] To then make a case and say hey, we do need to

[00:52:04] Segregate these playlists or we do need to create different profiles for these users within the algorithm's things like that

[00:52:13] To me, the key difference there is between user research and user testing.

[00:52:19] And I

[00:52:21] I was thrilled every time we ever did actual user research, where actual user researchers, humans, were involved

[00:52:29] in interacting with human listeners.

[00:52:34] That always produced fascinating insights invariably

[00:52:38] Most user tests,

[00:52:42] and this was true at Spotify, and I'm

[00:52:45] certain it's true at most tech companies, are not user research. They don't involve user researchers,

[00:52:50] don't involve personal interaction. They are A/B tests: we serve

[00:52:56] two different algorithms to two different randomly selected populations.

[00:53:01] No one talked to them, no one told them it was happening; all we have is data. So when I said that,

[00:53:07] That's what I'm talking about with the limitations of

[00:53:11] user

[00:53:13] tests

[00:53:13] If a person had gone and talked to them, yes. You can get

[00:53:19] the insight you need by talking to people

[00:53:22] based on no data, let alone a little bit of data.

[00:53:26] But if you subject people to an anonymous, secret A/B test for seven days,

[00:53:33] then that's enough for some things; it's not enough for a lot of things.

[00:53:38] Yeah, I think we have our work cut out for us in terms of how do we redefine

[00:53:44] user research and engagement data collection. A few months ago we spoke with Maarten Walraven from MUSIC x,

[00:53:51] a community that's exploring the

[00:53:54] economics and broader system of the music industry and looking at experimental ways of engaging.

[00:54:00] We also spoke with Virginie Berger from MatchTune.

[00:54:03] The sentiment is really negative, very pessimistic: you know, the value of music is going down,

[00:54:08] rights holders are losing out, things are being trained on illegally.

[00:54:12] Do you have anything optimistic that you feel is happening in music as a result of this technological

[00:54:18] surge? Is there anything that you see that's a good signal, any happy stories?

[00:54:23] Well, I think the same thing I think when people bemoan the economic

[00:54:31] status of music and say, sort of reasonably: with remuneration this low,

[00:54:38] There'll be no future generations no one can afford to make music anymore

[00:54:43] And I hear that, and then I turn around and look at music, and I'm like:

[00:54:48] Except there's more music than ever and it's just as good and more varied than ever

[00:54:55] so

[00:54:57] Maybe that's not quite what's happening and I'm

[00:55:01] Fundamentally optimistic about technology in general

[00:55:06] But at the beginning, I think with the first few LLM things, I felt this chill,

[00:55:13] I just felt that horror go through me, especially with music generation. The idea that

[00:55:22] text prompts to songs is the future, and that people will be

[00:55:28] listening to endless algorithmically generated music that's their perfect thing: that idea of a future

[00:55:35] is brains in vats to me,

[00:55:38] and that's not what humanity is here for. But for that same reason, I don't imagine it happening.

[00:55:44] I don't think anybody is going to be pleased with

[00:55:49] AI-generated music, for the same reason that I don't think

[00:55:55] it solves any problem: because that's not what we do with music as humans. It's not like air conditioning.

[00:56:01] It's not like there's a perfect body temperature,

[00:56:04] a perfect emotional temperature, and we're just trying to reach it. Music is a communal experience. We

[00:56:11] Experience it together. We use it to

[00:56:15] Communicate with each other about what it's like to be human and feel things and the AI is not

[00:56:22] Part of that I think there were the same fears at every stage of technology

[00:56:28] We had synthesizers

[00:56:30] It seems, if you look at it

[00:56:32] historically, that what's happened with tech is it's reduced the barrier to being a musician.

[00:56:36] But it's also reduced the barrier to be a music fan and

[00:56:40] If there are no more barriers if there's no more friction left

[00:56:44] Then does any of it matter if anyone can listen to anything at any time

[00:56:49] With no restrictions and if anybody can create music at anytime with no restrictions

[00:56:54] Is it still a craft that's the philosophical question? What does it mean for it to be a craft and

[00:57:01] and what matters about that is, I think, the core. So I think you take it back to:

[00:57:08] We invented synthesizers and suddenly you could make violin noises without learning to play the violin

[00:57:14] So it's the same kind of thing like you had this

[00:57:17] craft that you needed to learn and

[00:57:22] We'd sort of assumed that that learning was a necessary part of the value, and then that got questioned.

[00:57:29] Then a bunch of people were afraid of it like

[00:57:32] No one's ever gonna learn how to play the violin again because they don't have to

[00:57:36] But it didn't happen people still

[00:57:39] Learn to play the violin

[00:57:41] We learned to do other things with synthesizers than just pretend they're violins, and

[00:57:47] there's a lot of great music with synthesized violins, and there's a lot of great music with violins.

[00:57:51] And I think we're not yet at that point with

[00:57:55] AI as a category,

[00:57:58] of using it as a tool. I think we'll get there, because we won't be satisfied until we do.

[00:58:05] And so I'm not really worried that it's going to be a

[00:58:09] Borgesian library problem, where every possible sound

[00:58:14] exists and there's no way to distinguish between them, and we are just stuck listening to

[00:58:19] a feed of

[00:58:21] AI muttering,

[00:58:23] because no one actually wants that.

[00:58:26] Glenn, it's been an absolute pleasure speaking with you. I have loved how divergent we've been from what we thought we'd be talking about.

[00:58:33] We learned a lot about the philosophy of

[00:58:36] data, and this really critical context about:

[00:58:41] you need to measure it, understand why you're doing it, and be able to care about the outcomes.

[00:58:46] Be skeptical and curious. I think that's the key tension.

[00:58:50] Don't do things because you think you have to but also don't be afraid of them and

[00:58:56] Trying things is how you find out what works, even when what works is not what you thought would work, or what someone told you to build.