#8 - Leading AI Transformations with Tom Goldenberg

About the guest

Tom Goldenberg, Junior Principal at QuantumBlack, a McKinsey Company

Tom has been involved in the technology industry as an engineer, entrepreneur, and leader. Most recently, he has been working as a technology translator and team lead at QuantumBlack. Tom has been named one of the Top Voices in Technology by LinkedIn.

Check out the open positions at QuantumBlack: https://www.quantumblack.com/careers. You can learn more about what they do by reading about their recently open-sourced project, Kedro.

Notes

AI is new, and many people struggle to understand what it means and where the benefits are. When engineers, leaders, and business people hear "AI", they may be excited by the opportunity to innovate and, at the same time, intimidated by their lack of understanding. Engineering (AI) managers play an important role as a bridge between business, product, and technology. That is true of any technology, but because AI is new, it requires a more delicate approach.

  • AI transformation and the common misconceptions around AI.
  • Challenges when it comes to AI transformation.
  • The role of an AI translator and the career path to get there.

Transcript

John 0:13

Welcome to the PragmaticLead podcast. Your hosts are Alex and John Masse. We have conversations with folks throughout the tech industry to get a real-world perspective on how people make things happen for their careers and businesses. Check out pragmaticlead.com for more content just like this.

Hey, guys, welcome to this episode of PragmaticLead. Today we're talking about leading AI transformations. And our guest is Tom Goldenberg.

John 0:49

Very good. So, Tom, we've known each other a little while; we met at a tech meetup.

Tom 0:55

Oh man, years ago. Yeah, some time ago. Yeah, I think at Priceline.

John 1:01

Yeah, yep. And, Tom, we recently caught up, and you've been spending a lot of time in AI, and you've been playing a team lead role at QuantumBlack currently. So could you tell us a bit about yourself and what you've been up to?

Tom 1:17

Sure. So currently I work as a junior principal at QuantumBlack. Just a little bit about QuantumBlack, who we are and what we do: we use AI and advanced analytics across a number of different industries, including health care, Formula One racing (which is where we actually got started more than 10 years ago), all types of operations, marketing, financial services, etc. QuantumBlack is a McKinsey company; it was acquired by McKinsey about five years ago, and we've been integrated into the broader McKinsey ecosystem. McKinsey is management consulting, basically: top-level strategy, operations, a whole host of things, tens of thousands of employees, working with the top levels of organizations to improve their performance. So how do I fit into that? I serve in this role as a translator. I work with teams, as a team lead on the technical side, to help an organization basically unlock value with AI. I've been here for about a year and a half and have been doing consulting for two and a half years. When we met, I think I was just really getting into the tech industry as an engineer; at the time I was going to lots of meetups, starting to present, and learning about different technology frameworks, etc. So it's been a very interesting journey.

John 2:58

So, Tom, AI is a pretty broad term, and it probably means different things to different people. Could you describe, in your context of working with artificial intelligence, what it means to you?

Tom 3:10

I agree, it's a term that's thrown around a lot. The way I think of AI, and how we define it, is essentially machines mimicking human cognitive functions. Think of all the things that we do on a physical or mental level and how we can replicate some of those processes. Take image classification: we're able to recognize things, and we can train computers to recognize objects and detect things, so that's an example of vision. The same goes for any type of cognitive analysis we do, like drawing insights, speech, etc. And when we think about artificial intelligence, like I said, it's a broad term; within it you have different techniques that are used, such as machine learning, deep learning, etc. But artificial intelligence is the blanket term we use to describe those things.

Alex 4:10

I'm sure a lot of companies are trying to get on this AI bandwagon. It's cool, it's new, it's innovative. So when you start working with a company, whether it's a bigger company like Formula One or a smaller one, how do you start this conversation about AI and define what AI means in that specific context? Is there a general rule, or is it really case by case, company by company?

Tom 4:39

Yeah, it's a good question. I think it's a little bit of both, right? There are some general principles, but there also has to be customization for each organization. If I think about general principles, one is understanding what we call the art of the possible: what is it that AI can do, what's the potential, and what are some of its limitations? And how is it being used today? I think that's a key thing as well: how are competitors in the space, as well as potential future competitors, using this as a competitive advantage? And I think it really starts with the top level of an organization seeing that there is value and seeing that, alongside the five-year business strategy, this is an imperative; there's a lot of value to be unlocked, and it's a key competitive driver. So that's one approach: getting alignment within the top level of an organization that this is important for the organization in the next five to ten years. From there, you get into more of the customization you mentioned for each organization, within their particular industry, given their own unique mission, vision, and history. What does that look like? Because you're not starting with a blank slate; organizations have done analytics, they work with data, they're not doing this from scratch. But how do you then take those capabilities, build AI capabilities, and actually unlock value from them? That's where you have to get into the weeds of the organization, and of its industry as well.

Alex 6:28

Okay. And before we get into the topic of our conversation, transformation, are there any prerequisites for these companies before they get into AI, or start thinking about AI, or just try to get value out of it? Any specific data they should have, any structure, anything?

Tom 6:50

Yeah, it's interesting you mention the data. There's often-cited research that about 80% of any data science insight is the engineering and the data that goes into it, and my own experience has been playing that data engineering role. I've seen firsthand that a lot of attention gets paid to attracting talented data scientists; if you just look on Glassdoor or any of these websites, you can see there's strong demand for data scientists. But a lot of the value comes from having a very strong foundation of both data governance and accessibility. The data has to be available to the teams to be able to use, it has to be well governed, and it has to have quality checks in place so that you can rely on its quality. So I do think you have to put the horse in front of the cart and invest upfront in that data capability. That said, I don't think organizations should wait and do a massive uplift to their data capability before even trying to do anything in data science or AI; a lot of times you see that there is a need, and you can address both in parallel. So that would be one thing that is certainly a key driver: a strong data capability and data engineering capability. The second thing, like I said, is buy-in from the business stakeholders. We have this influence model where we adopt a change if we see our leaders representing that change and we feel it makes sense for the organization. So that's also needed: you need a message from the leaders of the organization that we need to invest in these capabilities. Certainly, those are the two things that come to mind.

Alex 8:59

And how can the business people, the product people, and the leadership of a company understand the value of AI? Is there any education they need to go through? How do they realize the power that AI can bring to the business and get the most out of it?

Tom 9:18

Yeah, I think this is actually really important, because it's partly about, like you're saying, knowing the value of it, but it's also about building trust. So I think executive education on the topic of AI is very important. The first step is understanding what's out there, and a lot of this we already know; if you just look at the big tech companies, you can see these use cases. Think of Spotify and Netflix: the recommendations that you get are AI-driven. When you're using your email with Gmail, the spam filter is AI-driven. When we use Alexa or Google Home, all the speech recognition that goes into it is too. But taking a step back, even outside of the big tech companies, go to the retail space: McDonald's using it for supply chain efficiency, I believe Procter and Gamble using it for personalized product recommendations, John Deere in the agricultural space using it for applications as well. So it is really pervasive at this point, and I think that's one of the building blocks: understanding just how widespread AI is in use today and what type of value it's creating already. And then the second step, like I said, of building trust is: okay, let's pull the thread a little bit and understand what exactly AI is and how it works. That requires a little bit of study to understand how the algorithms work. What does it mean when an algorithm gives you a score, say a stock has a score of 0.8 in terms of its likelihood of outperforming the market? Understanding the interpretability of the results, and what they mean, is the second part of it, which is also really important for adoption.

John 11:38

So it has to at least get to the bottom line at some point for some of the business owners, right? Do you find people sometimes trip up over: okay, it sounds nice that I can calculate these things, or I can come to some realization, but how do I take that to my stakeholders?

Tom 11:57

Yeah, well, this is, I think, that part. We call this the value at stake, right? The value to be captured is an important part at the beginning and at the end. What I mean by that is, when you're working with an organization, the first thing you do is map out the domains of the organization. If you think about an investment fund, for example, they'll have research, where they research particular funds; they'll have an investigation phase, where they're finding new investments; they'll have a portfolio management phase. They have different domains. And if you think about investment funds, they'll have different types of investments, from real estate to infrastructure. So it's about mapping the different domains and then looking at all the different use cases we could do in those domains. For each of those use cases, you have to figure out the potential, like you're saying: what's the value at stake? If we were to improve our supply chain efficiency by 1%, what would that mean, like you say, for the bottom line? That would be the impact; we also look at the feasibility, as in, how hard a problem is this to crack? So that's the initial phase: going top-down through an organization and understanding where all the possibilities are. The first thing you do is try to find the places where you can unlock the most value, where you can actually quantify that potential impact. And then, once you have something like a model in place, or a use case where you have something you've delivered, at that point it becomes important to have some way of quantifying the actual impact it's having. That's when you get into A/B testing, different types of simulation approaches, etc.
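
To make the "value at stake" framing concrete, here is a rough back-of-the-envelope sketch in Python. The use cases, costs, improvement rates, and feasibility scores are entirely hypothetical, not from any engagement Tom describes; the point is only to show how impact and feasibility can be combined to shortlist a lighthouse use case.

```python
# Hypothetical illustration of sizing "value at stake" and ranking use cases.
# All names and numbers are made up for the sketch.

use_cases = [
    # (name, annual baseline cost in USD, expected improvement, feasibility 0-1)
    ("Supply chain efficiency", 500_000_000, 0.01, 0.8),
    ("Churn reduction",         120_000_000, 0.05, 0.6),
    ("Predictive maintenance",   80_000_000, 0.10, 0.4),
]

def value_at_stake(baseline_cost: float, improvement: float) -> float:
    """Annual value if the improvement is fully captured."""
    return baseline_cost * improvement

scored = []
for name, cost, improvement, feasibility in use_cases:
    impact = value_at_stake(cost, improvement)
    scored.append((name, impact, feasibility, impact * feasibility))

# Rank by feasibility-weighted impact to shortlist a "lighthouse" use case.
for name, impact, feasibility, score in sorted(scored, key=lambda r: r[3], reverse=True):
    print(f"{name:26s} impact ${impact:>12,.0f}  feasibility {feasibility:.0%}  score ${score:,.0f}")
```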

John 14:07

Okay, that makes sense. And the last one before we move on here: are there any misconceptions or myths that people have around AI that you think are worth busting?

Tom 14:19

Yeah, I think in general there's a fear that AI is suddenly going to replace everyone. But actually, at least personally, what I've seen be most effective is that AI, in most cases, and especially in critical use cases, is not just going to automate a human out of the loop. In a lot of cases what it's doing is providing additional, nuanced expertise for a human to make an even better judgment call. So when you think about investment decisions, for example, or all types of highly critical things, a lot of times the answer is that it's not an isolated thing; it's really in combination with human expertise that you can deliver a lot of value.

Alex 15:21

Yeah, I've heard general AI is very hard or even impossible to achieve, right? So like you said, it's not going to replace humans, it's not going to run the world. But it's a good assistant to people; it's good for a system to go through your data, analyze it, and give you some meaningful results, so people can make those decisions much more easily with the help of AI, which in practice is pretty much machine learning and data science.

Tom 15:51

Yeah. And just think about medicine, where we're seeing this as well. People don't feel comfortable being diagnosed by an AI; they want to be diagnosed by a human for highly critical things like that. But if you take a trained physician and compare the rates they get on some types of diagnoses with traditional rates, you're seeing a huge improvement with the assistance of AI. So human plus AI, in a lot of these scenarios, equals a better outcome.

Alex 16:26

I think airplanes are a good example too, right? So autopilot: it's the safest way to travel because there's no human involved. Is that what makes it the safest way to travel?

Tom 16:40

Yeah, I mean, I think Tesla is on the cutting edge of self-driving cars. But I wouldn't feel comfortable falling asleep at the wheel; you still have to be there, to have a human in the loop. And who knows, maybe we'll get there eventually with self-driving cars, but it really is a case-by-case thing.

Alex 17:02

Yeah, I tried Tesla autopilot. And I agree, I wouldn't trust it. Not yet.

John 17:08

Well, the other thing I hear with AI, and actually from peers of mine outside of technology, is: well, what are you going to do once computers start writing code for you? Is artificial intelligence going to do this? We've seen projects pop up where they try to generate some interface or auto-write code. I always imagined that, at least at first, if there was any machine learning introduced it would be in that assist role. I imagine this nice editor where I'm writing code, and there's this robot that pops up and says, by the way, there's a better way to do this, or hey, your buddy Alex actually wrote something very similar, why don't you take a look? And it's all there, so it's helping me make better decisions on the fly, and then eventually it's creating software. But then again, my next question is, why would a computer write code? That's a human thing, isn't it? Don't we need code to write instructions for a computer? What does that look like?

Alex 18:08

Well, sometimes it makes sense. If you have a piece of UI and there's no logic, maybe there's a good fit there. Why would humans write it? You see the picture translated into the code, and it looks exactly like it should look. But when it comes to big decisions, big business decisions, like the logic that runs your business, maybe it's not time yet for AI to write the code itself. Maybe with some transformation, maybe in the future, it's possible. So why don't we talk about the transformation? If a business decides to invest in AI, and there's a good fit, and they're thinking there's some potential value, and they start thinking about investing in AI, what does that mean? What is AI transformation? What does it look like for an average company, based on your experience?

Tom 19:08

Yeah, so AI transformation. The first thing, like we talked about, is understanding the landscape and understanding that this is an imperative for the organization to invest in. Then you start to think about how you sequence that. I think we touched on investing in data capabilities as a primary thing to do, and so you see a lot of movement in this space of companies moving their data to the cloud and creating data platforms that are more accessible: data lake architecture, basically infrastructure so that teams can access data, and access it at reasonably frequent intervals. So that's the first step. Then we talked about mapping use cases: looking through the organization, looking at each domain, asking what use cases could be applicable, and (a) how feasible are they and (b) how impactful could they be? That's the second part. And once you have that, you really want to find one or two use cases that we call a lighthouse; they serve as a beacon for the rest of the organization to say, hey, maybe there's real value here, we should be paying attention, and we should bring this into our part of the organization. I've seen this firsthand, where you work with different departments in the organization and they're very skeptical at first, and you work on one or two use cases, and in that one use case that you pick they see that there's a lot of value being added, and it's like, wow, now I understand why so much time is required. So that's what I would see as the first step: identifying that first lighthouse and building the capabilities around engineering, data science, and what we call translator skills for the business functions so that they can understand what's going on. Once you have that, there are a lot of different ways organizations can grow and adapt. In some you have a center of excellence that works with the different departments; some organizations feel it's better to have a separate analytics group for each department. It's really a case-by-case thing from there, but to get started it's definitely about finding one use case that brings a lot of value the organization can see, getting buy-in, and then spreading it throughout the rest of the organization.

John 22:05

Sounds like once you establish the lighthouse, general adoption across the company becomes more natural; you don't have to sell it because you have a working model. A lot of times we actually have to see something before we understand the patterns or the capabilities it provides. Actually, that was mostly the motivation for the first few questions we asked: how do we get people to start thinking about what could be possible within the realm of something they haven't had the opportunity to be that closely involved with, or that might be really far away from them technically, like it just feels like a completely different world? You said the word translator, and last time we had a chat you talked about, I guess, the role of an AI translator.

Tom 22:51

Yeah. So the translator role on an AI project is the person who plays the bridge between the business and the technical team. I can explain why this is necessary. We talked about two big things in the beginning: understanding the landscape, and then working to identify use cases. That identification of use cases really requires a lot of domain expertise. It also requires, as we talked about with the art of the possible, understanding from a technical point of view what is possible. You don't want to end up in that scene from Silicon Valley where they're asking for a new product and it's like, okay, you just think and then it writes for you, and it's like, okay, we'll be ready in 60 years or something. So for the feasibility part you really need some technical expertise, but you also need the domain expertise to really, deeply understand the business processes. That's where the translator role comes in, and it's the role that I play more often than not: working with the stakeholders to identify those use cases, and then working on them when you're implementing them. It's an iterative process, right? You may have some early results, and you need the business to really weigh in, understand how to interpret them, find a path forward, and then drive adoption. So that's how I would describe the translator role.

John 24:36

So when you're in those conversations and you're talking to the business team, do you ever go in there and feel like the audience is trying to drive the solution in a way that doesn't necessarily fit what you're trying to accomplish? Take, for instance: maybe I think I can use artificial intelligence to solve this pricing issue over here, but you, with the domain expertise, know that maybe that's not the right way to ask the question; what I really mean to ask is that we need to understand a market trend within a given environment. You know what I mean? Do you ever find yourself coaching the business team on how to think critically about the problems, or the opportunities, around them?

Tom 25:21

It goes both ways, right? Building an AI model is a process of experimentation, of adding and removing and refining variables. And a lot of times you may get results where, like I said, you're trying to interpret the results to get to explainability, and a lot of times that requires business expertise. So it may be that just getting a good result on a model is not enough, because what can happen is that you have some type of data leakage, or you have variables doing things that are non-intuitive. I'll give an example: working with a model where we were trying to predict how the stock market is going to move, it turned out that one of the variables was the year. Because year is an ever-increasing variable, it shouldn't actually carry weight in the model, but we found that it did, and this was a big flag for us. It sounds very simple, but when you have hundreds of variables, teasing out and pressure-testing the approach to make sure it makes sense from a business-domain perspective is super important, and a lot of times that requires the business perspective of those stakeholders.
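
As an illustration of the kind of sanity check that can surface a suspicious variable like "year", here is a small Python sketch using scikit-learn's permutation importance. The synthetic data and feature names are hypothetical and are not Tom's actual model; the idea is simply that a feature which should carry no signal ranking near the top is a red flag to pressure-test with domain experts.

```python
# Sketch of a sanity check that can flag a suspicious feature such as "year".
# Synthetic data and feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "year": np.repeat(np.arange(2010, 2020), n // 10),  # ever-increasing; should carry no signal
    "pe_ratio": rng.normal(15, 5, n),
    "momentum": rng.normal(0, 1, n),
})
# Target that secretly drifts with time, so "year" will look predictive.
y = 0.5 * X["momentum"] + 0.1 * (X["year"] - 2010) + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:10s} importance {score:.3f}")
# If "year" ranks near the top, that is the red flag described above: pressure-test
# the feature set with domain experts before trusting the model.
```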

Alex 26:51

So being in this role of the translator is not unique to AI; it's not the translator role itself that's unique. I think it's a pretty common responsibility of any leader, and I'm talking in a technology context, so engineering manager, TPM, product manager, CTO: you have to work with business and product stakeholders and external clients, translate those requirements into technical project expectations, and drive the project to success. So I'm curious, based on your experience in this role, what are the challenges of being in the middle between product stakeholders and technology?

Tom 27:35

Yeah, and you guys have a lot of experience with this as well, right? For me, one of the toughest things... Have either of you read the book The Mythical Man-Month? (No, I haven't. But we will.) It's a very old book about software engineering teams and estimating work, and it introduces a lot of counterintuitive principles that may not be evident if you don't work in the technology domain, which I think points to what you're saying. For example, a lot of times the seemingly rational solution if a project is running behind is to throw more bodies at it. And the author of this book, which is quite old, I think from the 1970s, is basically saying that by adding more people to the problem you're actually slowing it down further, because you're creating a more complicated network, plus there's the onboarding process of getting them up to speed, etc. It's things like that, the nuances of technical development that may not be obvious outside of building software and writing code. I think that's one of the challenges: really getting buy-in for this iterative process, rather than a fixed schedule where this is the due date and this is what we're going to stick to. There's a danger in being at either extreme of that. If you don't have a little bit of leeway to adapt, you're going to burn the team out, because software projects are by their nature much harder to estimate than, say, manufacturing projects. On the other hand, if you take the other approach, which is: this is software, it might be done tomorrow, it might be done in a month, who knows (and some people will say that's agile, not having a deadline), there's also a danger in that, because then the business stakeholders say, I can't work with this, this doesn't fit my paradigm of the way things work. So for me, the thing I've tried to learn over time is getting to estimates, but working with all the stakeholders to be flexible with them and revisiting them. Honestly, it even comes to the point of being ready to just walk away at any point. These are all experiments, and they're all hypotheses; we might have a hypothesis that this problem is even solvable, and a couple of weeks in we may discover that we need to revisit that assumption and maybe pivot. With analytics and AI especially you have that risk: a lot of times you're doing use cases that either nobody has done, or that you don't know anyone who has done before. So there is always a risk that you need to be ready to pivot or reassess the priorities.

Alex 31:11

So one of the challenges I envision, given my experience in software, is expectations. In software, you write the software, you deliver it, it works, and we call it a day; it's done. AI is more flexible, more vague, right? There are a lot of things you can or cannot do. How do you deal with that ambiguity? How do you manage expectations when working with clients? Like you said, you can walk away, that's one of the options, and there's a lot of experimentation going on, so you don't really know what you're going to get at the end. But the client, let's say, will expect: oh, this AI will do everything for me, it will write the code, it will make coffee for me. And at the end it's maybe just a simple suggestion engine or something like that. So how do you deal with that?

John 32:09

Well, if I could just add a little bit. Alex might roll his eyes at me at this one, because I keep talking about complex systems and complexity science and how we think about things. Tom, you said something about agile and how it has this sentiment of being unpredictable as to when I'll ever realize the things I'm hoping my company eventually realizes. I think where that comes from is our obsession with being able to develop control systems, systems of control, where I can say: if I put this in, I will get this out, every single time, and it's predictable. Because when somebody asks me to produce this thing, I'll know I can insert this and I'll get this outcome. And we want to treat our people that way too, because if you're a tech lead or manager and you're being asked to deliver or solve this big problem, obviously people want to know when, because when matters for competitive sake and for the sake of keeping the business running. There's a lot of motivation for that. But we always seem to treat almost everything, including AI and every other technical project we work on, and even agile (because agile is not technically a framework, it's a set of tools to build a process around), with this common theme: how do I control this? How do I make something that's really complicated predictable? And to the point, what I was hearing Alex say is that there are so many variables, and, Tom, you're saying too that we might even have been working with the wrong data here. So part of the process has to be experimentation, done rapidly, and understanding where the failures emerge and allowing the story to be, I don't know if organic is the word. What do you think, Tom?

Tom 34:02

No, I mean, I think you're both getting at the same point, which is the desire to control processes from both sides, from the technical team and from a business point of view. And like I said, I think it's important to have that adaptive mindset and to work with the stakeholders to have that at day zero. It's also important to be flexible. It's funny, what you're saying is very similar to The Mythical Man-Month; the title comes from the fact that people think a month's worth of effort is equal to one person, so adding more people means you're reducing the time. In reality it's a lot more complex than that: it's the specific individuals you have, the specific capabilities, and how they work together. So I think what you want to avoid is being like, hey, we're going to start this project, and six months from now, or a year from now, we may or may not have something interesting to show you; I think that's hard. And the same thing goes for preparing the data capabilities, like we talked about: people want to see a return on investment somehow, because that fuels the enthusiasm and actually creates a lot of momentum. So you don't want to create the expectation that every two weeks we're going to have amazing results, but you want some type of cadence where every month or two months you're able to reach some milestones and regenerate that excitement and enthusiasm. Because what you don't want, and I think this happens in a lot of organizations, is this: they know that data and analytics are important, they invest in hiring people and creating a team, and that team becomes fully isolated from the rest of the organization. It fails on multiple levels. It fails because the people on that team are shut off from the actual business, so they're not able to learn and make anything useful, and for the rest of the business it's not integrated into the organization. And we talked about AI transformation; that's what we're trying to get to. We're trying to get to where it's not an isolated thing where the CEO decided they need to invest in analytics but it's not actually creating value. You need to generate that excitement at intervals so that the leaders of the organization are really excited and want to continue to invest and spread it through the organization.

John 37:07

So that's the challenge. So there's a people aspect, or a culture aspect, as well that you consider when talking to folks about how they're adopting AI as part of the transformation?

Tom 37:22

Yeah, absolutely. Especially in larger organizations, this is very new. Having an AI center of excellence, or even a chief data officer, or an analytics part of the organization is very new. These organizations are large and they're structured in a certain way, and you're essentially asking people to break out of that. And I think any type of organizational change requires some type of shift in mindset or culture.

John 37:59

And what would a healthy situation look like? If you came into an organization where we had an established data team and some facilities, and you're the doctor coming in to do a health check, what are the attributes you're looking for?

Tom 38:17

Yeah, so I think you mentioned a couple. One is that the data capabilities are primary, right? Making sure that the data of the organization is well governed, and that it is accessible, is usually the first...

John 38:37

Well, what do you mean by accessible? Because I hear a lot of people say that. What is accessible data?

Tom 38:42

So, a lot of times it depends on how you want to create your analytics environment. There is a lot of benefit to having analytics in the cloud, because you can scale up and down automatically; if I have a large job that requires hundreds of machines, I can just ask the cloud provider to give me those machines for 10 minutes and then turn them off. So having access to the cloud can be a key enabler in providing access to the data, and having the data available on some type of cloud platform is typically a means to do that. It's not advisable in all circumstances, of course, but it can be a good way to make sure that the parts of the organization that need the data, the data scientists and data engineers, have access to it in a way that doesn't interrupt critical system processes. Think about a bank: a bank has a core processing system that has to run, and the data scientist is not going to have access to that, because there's just too much at stake. So what you do is replicate that data into an environment where they will have access to it, like a sandbox. Now, availability means having the types of data that the data scientists and engineers are going to need set up, and having it updated on a frequent enough basis. Once you get past the experimentation phase and you want to put something into production, you want, if not minute-by-minute updates, at least day-by-day updates. And that requires having a whole data process set up so that the analytics environment is reliably available to the data scientists.

Alex 40:42

So in this role, it sounds like communication skills are important: having good communication skills to be able to talk to clients or stakeholders, and then translate that into more technical terms. What about technical skills? How important are they? How much technical skill do you need to have, how technical do you need to be, and how do you actually learn or keep up with those technical skills while being in this position?

Tom 41:14

Sure. And you're referring to the translator role, right?

Alex 41:17

Yeah. And it's pretty common, again, to any leadership role in technology. If you're an engineering manager or a CTO, the frequent question is: do you need to code? Do you need to stay up to date with technology trends and architecture, or do you delegate that completely to tech leads?

Tom 41:36

Yeah. So if I think about the team leader or engagement manager role, call it the translator role, you can find some articles on this at mckinsey.com; we've written about the need for translators in AI transformations. There are basically four key components. One is domain knowledge, understanding the business; then technical fluency, which is what you just mentioned; then project management; and then entrepreneurship. Technical fluency is a key part of that: you have to be able to work with the technical team, understand the challenges they're facing, maybe balance the resources that might be required, communicate the results to the stakeholders, etc. I've seen it come about in two different ways. My own personal background is more that of a technology contributor, so having done the data engineering work myself, I understand what goes into the technical team. For me, it's more about building up my skills in project management and domain knowledge, deeply understanding the business dynamics and stakeholders. But it can be the other way as well. You'll see lots of people who have very deep domain knowledge and strong skills in project management and entrepreneurship, and they build that technical fluency over time, whether through training online or just through working on these projects, getting to know data scientists and engineers, and picking up those skills and that fluency. So it can be either way, but for myself, I come from an engineering background, so for me it's been more about developing the business toolkit.

John 43:35

And is there anything about having an engineering background that you think has really benefited you or propelled you to where you are now? Is it just knowing how to code, or is it also algorithms, and what about knowing the technology domain?

Tom 43:49

Yeah, it certainly helps. Like I mentioned, the estimation part is so critical: knowing when you can reasonably expect something. Part of it is the fact that I come from an engineering background, so I might be able to think, well, if I were doing this, how long would it take? But I think it can also come from pattern recognition, just working on different use cases, seeing the different ways things can go, and being able to draw on that. But yeah, I do find it helpful. If there's a problem or an issue or a blocker with the team, being able to say, okay, let's talk about this, let's have a problem-solving session for the next 45 minutes and figure out what to do. I do find it has been helpful. And I think a lot of times, at least with the technical teams, what can happen is that the prioritization is missing; sometimes you can get hung up on one thing that maybe is not necessary in the short term but needs to be addressed in the long term. It's just a matter of helping sequence the different tasks to do.

John 45:09

I wanted to talk about maybe some of the results from projects you might have been on; there are two angles that I was curious about, at least. Sometimes we start working on something, and a new opportunity or result emerges that was unexpected, and sometimes it's great: oh, I wasn't expecting that, this is even better than I thought. Has there been a time when that's happened while working on something like this? Do you have an example you can share?

Tom 45:43

Yeah, I mean, this happens so frequently, and of course it's always a pleasant surprise when it goes that way. I have actually had this happen, where we thought it would be very hard to do a certain use case, and in fact some of the client stakeholders were very skeptical, but we wanted to push forward and try something very ambitious. And it turned out that this one use case (we had done two use cases prior to it) was actually easier, and we got better results. It was just this, wait a second, that was not what we expected at all. That's the best, when it works out like that. But of course, like I mentioned, very often the data scientists will work on a model, you achieve some result that looks really good, and it could be that there's a variable in there that should not be there, and therefore you have to throw it away and start from scratch. That does happen fairly frequently. Or it may be an issue with the data: a big thing in building these models is making sure that there's no forward-looking information that the model is trained on. When you're working with different types of model architectures, it can happen that data that shouldn't be part of the training set somehow ends up in there; there are lots of ways this can happen, so that's always a concern. It really requires a lot of due diligence, looking into the model methodology and verifying everything. That doesn't mean the project itself is a wash; it's just a matter of regathering and figuring out how to address it and go on from there. I have never been part of a project where we just decided not to do it because it was not technically possible, but that does sometimes happen. Sometimes, in what we call the intelligence phase, where we're trying to understand whether something is feasible or not, there's value there but you're not sure how feasible it is, so you invest a couple of weeks to tease that out, and sometimes you walk away and say we don't think it is. That hasn't happened to me personally, but it certainly can.
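
One common guard against the forward-looking leakage Tom mentions is to split data by time rather than at random, so the model never trains on anything dated after the evaluation period. Here is a minimal sketch; the DataFrame and column names are hypothetical.

```python
# Sketch: preventing forward-looking leakage with a time-ordered train/test split.
# `features_df` and its column names below are hypothetical.
import pandas as pd

def time_based_split(df: pd.DataFrame, date_col: str, cutoff: str):
    """Train only on rows strictly before the cutoff date; evaluate on the rest.

    A purely random split would let future information leak into training,
    which is the failure mode described above.
    """
    df = df.sort_values(date_col)
    train = df[df[date_col] < cutoff]
    test = df[df[date_col] >= cutoff]
    return train, test

# Hypothetical usage with a frame of dated market features:
# train, test = time_based_split(features_df, date_col="as_of_date", cutoff="2019-01-01")
# scikit-learn's TimeSeriesSplit applies the same idea to cross-validation.
```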

Alex 48:28

Okay, so with AI being this experimental, flexible kind of project, how do you think about success? How do you measure it? What metrics do you use to understand, looking back, how effective this project or experiment was, whether it should move forward, or whether you should change direction and strategy?

Tom 48:49

Yeah, that's a good question. The short answer is that there's not one metric, or at least not one value, that tells you a model is good. It really is use-case specific. To give an example, and there are so many different examples: for fraud detection, you would want a model that's highly, highly accurate, so you'd be expecting 90% or higher. But in finance, for example, the accuracy level is not that high at all, especially when you look at the stock market, because there is so much noise and variability; if you get 60 to 70%, it's actually considered amazing. So what is considered a very high metric in one use case might be unachievable in another, and that's where you get into the blending of data science and domain understanding: what does good look like for this domain and this use case? The other way I would think about it is that there are technical metrics and there are business metrics. In terms of technical metrics, you'll hear things like the area under the curve, called ROC AUC, which looks at how accurately the model is performing. You'll see things like the error rates of the model, and different ways of capturing those. These are all technical metrics. Another one that's used is something called the confusion matrix, which is a way of quantifying where the model gets confused and mislabels things as either positive or negative.
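
For readers who want to see the technical metrics Tom names, here is a tiny scikit-learn sketch. The labels and scores are made-up toy values, only there to show how ROC AUC and a confusion matrix are computed.

```python
# Sketch of the technical metrics mentioned above, computed with scikit-learn.
# y_true / y_score are tiny hypothetical arrays, not real results.
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # actual labels (e.g. fraud / not fraud)
y_score = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]   # model's predicted probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]     # threshold the scores at 0.5

print("ROC AUC:", roc_auc_score(y_true, y_score))     # area under the ROC curve
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
# Confusion matrix rows are actual classes and columns are predicted classes,
# so it shows where the model mislabels positives and negatives.
```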

Alex 50:47

So there's got to be some feedback loop built in, right, to measure that?

Tom 50:52

Absolutely. In every successive model you build, you're going to be comparing those metrics to previous models. But there's also the business metric that you want to capture as well, and that would be something like: for an investment portfolio, for example, if the model was compared to the current investment approach, how many percentage points better would it be? Or if you run it through a simulation, how much better would it be? That could be one. The thing is, if you go to a business stakeholder and say, oh, our ROC AUC improved by 0.02, with a big smile on your face, they'll say, so what? What does that even mean? So at the end of the day the technical team needs to keep track of those metrics, and they are important, but you also need to translate them, again, into a business metric that matters to the business. As for tools to measure business metrics or monitor them constantly, there are a couple of tools out there; these would be considered performance-tracking tools. There is a tool called MLflow, which is open source and used pretty widely, and internally we have a tool called Performance AI that we use as well. It is a key part of going from that experimentation phase to actually productionizing these use cases: putting them into production and being able to monitor them over time, through each experiment, is really important.
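
As a small illustration of the kind of performance tracking Tom describes, here is a minimal sketch using the open-source MLflow tracking API. The run name, parameters, and metric values are hypothetical; the point is that the technical metric and the translated business metric can be logged side by side and compared across successive models.

```python
# Minimal sketch of experiment tracking with MLflow, the open-source tool
# mentioned above. Run name, parameters, and metric values are hypothetical.
import mlflow

with mlflow.start_run(run_name="portfolio-model-v2"):
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("n_features", 42)

    # Technical metric for the data science team...
    mlflow.log_metric("roc_auc", 0.71)
    # ...and the translated business metric for stakeholders.
    mlflow.log_metric("simulated_return_vs_baseline_pct", 2.3)

# Runs accumulate in the tracking UI (started with `mlflow ui`), so each
# successive model can be compared against previous ones.
```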

John 52:35

So, for somebody who might be looking to hire an agency to come into their company and perform some type of AI transformation, there's going to be some static; some folks can probably do a better job than others, or are healthier to work with. If you look back at web design agencies, some of them might have left a bad taste in the mouths of folks who tried to go on this adventure before, and those folks might write it off entirely. So if you were going to advise Alex and me through an AI transformation, and we were going to start talking to some companies about it, what are some characteristics you would look for when talking to people, to decide whether they're a company you should commit to working with on projects of this nature? Are there terms, or ways they present themselves, or ways they strategize with you, that should prompt those questions up front?

Tom 53:33

Yeah, I think it comes down to a couple of things. In terms of the technical talent, that's the barrier, right? You have to have the technical talent to do it. In terms of the operating model, it is about working together collaboratively. This is not about having someone come in and do things for you so that you end up with this nice shiny thing; it's really about building capabilities, and that's the work that I've been involved in. So yes, it's about, like we said, building a lighthouse use case, proving the value, working with stakeholders, but it's also about getting the organization to where they can do it on their own. That's the real mark of success: that you're able to bring them along on the journey and empower them to continue to do it themselves. That's a critical component. The other critical component is understanding the business. Yes, a lot of it is about analytics, and yes, a lot of it is about technical expertise, but a lot of this is about business understanding and adoption. And I think that's why, as I mentioned, I work at QuantumBlack, which is part of McKinsey, and McKinsey has a reputation for working with top management executives across all industries. That really helps with understanding from a top level: do they actually get the business? What are your strategic imperatives? Where are you going? And how can we leverage that to add value in this way?

John 55:20

So it sounds like the conversation, if we're having one, should feel more like you're helping me, not just bolting on ML or AI as an appliance; it's actually embedded as a part of my company. Most teams that are thinking about this are also thinking about the company they're doing the consultation for. The real product is: by the time we've changed how your company operates and it has adopted this style of using data to produce products, then our job is done. Not just, hey, give us this data, we'll go run our stuff on it, and we'll come back with a calculation for you, which I can see happening very easily. We do this with contractors too: oh, I hired a new team over here, I have this task, it's a ticket, I'm just going to throw it in their bucket, and two weeks later I'm going to check in on it and expect it to be done, and maybe it's partway there, but the code is terrible. Those types of relationships lose so much, because we're forfeiting all of the accountability for the outcome, and then we have someone to blame later when it doesn't come out the way we had hoped and dreamed it would, just by default: because I gave it to you, it should happen the way I hoped. And I sense that there might be some folks out there thinking the same way about data, artificial intelligence, and machine learning: we'll run the models for you, we own them, they're in our domain, you just give us your data, and then we'll take care of it. I think there's a really big difference between that story and what you're describing, Tom. You sound like: hey, I'm a team member for you, I'm going to consult with you along the way, I'm going to help your business team understand how to think about data and ask questions about it in ways that we can use to make better business decisions, or better yet, improve what our customers are experiencing.

Tom 57:21

Yeah, absolutely. What I've found is that a lot of times we come in after an organization has tried working with a company that basically says what you describe: give us your data, and we'll create a model. And it just doesn't work. At least in that scenario, I'm highly skeptical of it, because such a key part of this is the interpretability, the explainability, and it needs to be your model. Because if it's not your model, you don't even have access to that, and if you don't have access, it's just a black box that does some stuff. Do you really think that people in the field, downstream, are going to use it? We saw this: I worked with an oil and gas company that wanted to reduce their carbon emissions, and they had worked with a company that did exactly that, and the reaction was, we're not using this, we don't even know what it is. So that's a big thing: understanding and adoption. Particularly with AI, it is a hurdle, especially when you're adding this thing into a process where people have done things a certain way for a long time. They really need to trust that it's going to improve the way they're doing things and add value. Otherwise it's very easy for them to be skeptical and say, this is useless, I've done it this way for 20 years, why are you making me do this? And those types of projects just don't succeed, at least on a broad scale.

John 59:13

Yeah, I can definitely get the sense of that. Now, I could also be a new business, someone really small. I don't really have a lot of data yet, but I want to learn something. Is it really a fit for transformation there? Or is there a general set of tools someone could use with data that's broadly available, for instance if I wanted to learn something from Facebook's graph or something on Twitter? Is there something for people there that they could explore?

Tom 59:42

Yeah, this is a very fast-growing space: how do you create tools that make it easier for people to use machine learning and AI? You see the cloud providers really doubling down on this. Whether it's Google Cloud or Microsoft Azure or Amazon Web Services, you'll see offerings that are essentially AI or machine learning as a service. This is certainly a field that's growing and can be used. But there's also a big open-source community as well. All of the AI algorithms are generally available in open-source libraries, whether it's TensorFlow or PyTorch, etc.; it's a very active community. For understanding, there are some very good courses by Andrew Ng. I'm a big fan of Andrew Ng on Coursera; he was actually a co-founder of Coursera and a professor at Stanford University, and his course is actually what got me into AI. He goes over the basic concepts and has tutorials in Python, and that's a great start. And I think Kaggle is another website where they have AI competitions and you can see other people's approaches, etc. So there's a lot of stuff online for ramping up and self-learning in this space, as well as, like I mentioned, different tools that are trying to make it easier for people to adopt AI.

Alex 1:01:25

Yeah, I've tried a couple of tools on GCP and Azure; they're not that easy to use. They definitely make it easier, but you still have to know your stuff; you have to do some preparation, some learning, to know exactly what you're doing. They give you those tools, but it's still a bit confusing even for a technical person. So it's good to know there are some courses. Is there any specific course you would recommend for, let's say, a technical person who knows some Java, some JavaScript, C++, just general languages, to get started on that path? Other than the Coursera courses, is there anything specific they should understand first, before getting into all the tools that GCP and Azure provide?

Tom 1:02:13

Yeah, I'll just re-emphasize that he has different courses available; I'm not being paid to mention this, by the way. I think the first one is the introduction to machine learning, and that's an excellent one. Then he has a whole series from deeplearning.ai that goes into the different types of machine learning algorithms one by one and actually has real examples, like using an algorithm to detect handwritten digits, which is a common one, or classifying cats versus not-cats. Stuff like this, where you get to do it hands-on, is really helpful. I don't have much more to offer other than that; that's been my primary resource for a lot of this. And then you get into textbooks and some of the research as well, which is a lot harder.
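
For a flavor of the hands-on exercises Tom describes (for example, recognizing handwritten digits), here is a minimal sketch in Keras. It is an illustration, not material from the courses themselves, and the small network and three training epochs are arbitrary choices.

```python
# Minimal sketch: classifying handwritten digits (MNIST) with Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one probability per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```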

Alex 1:03:10

Depending on how deep you want to go down that path, you can get into algorithms and statistics and all the science that powers what's underneath the facade of AI and all the fancy stuff. There's actually a science there, and it's not a new science; statistics existed before computers did. Okay, so you went on this journey: you got into technology, then into meetups, you went through all of these steps, and now you're here doing transformation work. Based on your experience, is there anything you wish existed, or maybe something that exists and makes your life easier that people don't know about? Or maybe it doesn't exist, and somebody who knows of something can reach out to you and recommend it?

Tom 1:04:00

Yeah, it's interesting. I guess one thing I wish for, and you guys might feel similarly, is this: technology is just like an ocean, right? It's impossible to be deep in everything, so I think the real challenge as a learner is knowing where to go deep. I've had good guidance and mentorship for that, but that's the critical thing each learner has to figure out: you can't learn all the things. Whether it's engineering or data science or whatever, it's a growing field, and there are more things than can possibly be learned. So it's about how you get the right set of mentors or guidance to know where to go deep and how to do it. I feel like I didn't have some of that. Sometimes you may make a bet on a technology and learn it, but maybe it's not useful; maybe it doesn't get used. This happened when I was in boot camp, and I felt like I got lucky. I was learning JavaScript, and I think at the time Angular was much more the JavaScript framework in use; all the jobs were in Angular. I had done some research and experimentation and tried React. It was newer, but I felt like there was something there, so I made my own personal bet, and it actually paid off really well at the time. I think the same is true for any technology: you have to make bets. You're a venture capitalist of your own career. And it's about knowing which libraries, which topics; even within AI and machine learning there are different topics. My brother, for example, studied AI recently, and you have to pick a focus; AI is not just one thing. There's computer vision, there's robotics, there's natural language processing, and you can't be deep in all of these things. So that's the one thing I think is critical for any learner: taking that ownership and making a calculated bet on where your passion is and where you think the industry is going. The other thing I'd say is that going through a coding boot camp was great, and I felt like I got good technical skills that helped me, but honestly the real learning for me has been learning how to lead a team: leadership, project management, and honestly, strategy. I say this because I think when we first met I was a software engineer, and I left that first software engineering job to co-found a startup. I was the CTO of the startup for about a year and a half, and ultimately it did not work out, which is why it's no longer around. One of the key learnings for me from that was that we didn't fail because of technology. That was my domain, that was my responsibility, to make sure that our tech was good and scalable, all of that. That's not the reason the business didn't succeed; it was because of our strategy. And that's the one thing: in addition to building technical skills, it's important to always continue to upskill in personal leadership, understanding of strategy, project management, all kinds of stuff. It's a continual process, and there's always more to learn, right?

John 1:08:26

Yeah, absolutely. And I feel like sometimes our roles on teams push us into these areas and stretch us in ways where we have to start thinking about what's happening in the world around us, how our work fits into that story, and what the future of our work is. Because even the projects or the things that we're doing now, if the wind blows just the right way, can become irrelevant, and then where are we?

Alex 1:08:56

Yeah, it's easier to go with the flow. It's like, oh, this is the new hot thing, let me jump on this, then jump on that, and there's no focus. Five years later you haven't really mastered anything; you don't know anything really well. So that's good advice: focus, pick your industry, pick something you want to focus on, go deep, and learn it. And at the same time, be flexible and broad, like you said: project management, leadership, not just technology, because technology is just a means to an end. It doesn't solve the business problem by itself, right? And most companies fail for that reason: the business plan wasn't right.

John 1:09:37

Hmm, or they didn't talk to their customer, or they hoped to be their customer. So, Tom, thanks so much for hanging out with us. As we're closing here, you mentioned there are some pretty cool projects going on in your domain. Do you want to spend a couple of minutes telling us what you have going on now and what you'd like people to know about?

Tom 1:09:57

Yeah, so as I mentioned, I'm part of QuantumBlack. We do AI and advanced analytics across different industries and, like I said, it's very impactful, transformational work. We are hiring, so I'd encourage people to check out our careers page at quantumblack.com/careers. We also have an aspect of QuantumBlack that is building reusable tools for these transformations, and one in particular, which we open-sourced about a year ago, is called Kedro (K-E-D-R-O). I would encourage people who are in this space and interested to check it out. I've given a couple of talks about it at different data science conferences; I think one of them is available on YouTube. It's essentially a tool to help organize the development process of an AI solution, from the data engineering aspect to the data science aspect. It creates an initial boilerplate to get you started, but it has a whole bunch of very cool tools that simplify the process of bringing in new data, joining it, and running a pipeline. So I encourage people to check it out. It's on GitHub, and there's also a Read the Docs page that you can access from the GitHub site, which you guys can link to. Any questions, feel free to reach out to me.
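
For readers curious what Kedro code looks like, here is a minimal sketch of a pipeline with two nodes. The function and dataset names ("raw_orders", "customers", and so on) are hypothetical; the real project template, data catalog, and CLI are documented on Kedro's Read the Docs site mentioned above.

```python
# Minimal sketch of how a Kedro pipeline is wired together.
# Dataset names below are hypothetical entries in a project's data catalog.
import pandas as pd
from kedro.pipeline import Pipeline, node

def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    """Example data-engineering step: drop rows with missing customer IDs."""
    return raw_orders.dropna(subset=["customer_id"])

def join_customers(orders: pd.DataFrame, customers: pd.DataFrame) -> pd.DataFrame:
    """Example step joining two datasets defined in the data catalog."""
    return orders.merge(customers, on="customer_id", how="left")

pipeline = Pipeline([
    node(clean_orders, inputs="raw_orders", outputs="clean_orders_data"),
    node(join_customers, inputs=["clean_orders_data", "customers"], outputs="model_input"),
])
# In a Kedro project, "raw_orders" and "customers" map to entries in the data
# catalog (conf/base/catalog.yml), and `kedro run` executes the pipeline.
```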

John 1:11:32

And how should folks reach out to you? LinkedIn, or email, or...?

Tom 1:11:36

LinkedIn is good. Yeah.

John 1:11:41

Thanks so much for hanging out with us today. Definitely a pleasure having you on; awesome seeing you again. Everyone looks healthy, which is a thumbs up here. Yeah.

Tom 1:11:50

Absolutely. Thanks for having me.

John 1:11:52

Thanks, Tom. Thanks for tuning in to the PragmaticLead podcast. If you found this conversation interesting or helpful, we would appreciate your feedback. If you want even more content like what you just heard, check out pragmaticlead.com. If you have a story to tell, send an email to pragmaticlead@gmail.com and someone will be in touch. Thanks again.