Creating AI Products for Business Impact

September 5, 2023 Rachel Halversen

In this episode of BTG Insights on Demand, Dr. Mohammad Ghassemi—an internationally renowned data scientist, professor of data science, national service scholar, and founder of boutique data and technology consulting firm Ghamut—joins BTG’s Rachel Halversen to discuss best practices for creating AI products that drive business impact. Together, they explore three key pillars for successful AI systems, how AI can enhance human productivity, and common pitfalls companies fall into with AI product development. They also discuss some emerging trends in AI that business leaders should be aware of today. Listen to the episode or read our lightly edited transcript below:

Interview Highlights:

Rachel Halversen (1:04):

Hi, Mohammad. Thank you for joining us today. It's a pleasure to speak with you.

Mohammad Ghassemi (1:09):

Thanks for having me, Rachel. Great to be here.

Rachel Halversen (1:10):

So, all of us at BTG are familiar with you and your work, but would you mind introducing yourself to our audience and sharing a bit about your background?

Mohammad Ghassemi (1:16):

Absolutely. So I am the founder of a boutique artificial intelligence consulting firm. We focus on the three pillars of artificial intelligence: data, which includes data quality, data aggregation, and data insights; algorithms, including Generative AI, which is obviously of great interest these days; and insight generation. Our firm covers all three of those pillars, both the strategic side and the implementational nuts and bolts of those technologies, through our practice.

The way I got into artificial intelligence was many, many years ago, and it started with a scientific interest in the topic. I actually have a master's degree in artificial intelligence from the University of Cambridge, I have a PhD from MIT in the topic, and I teach classes on artificial intelligence at the university. So I've had a scientific interest for many years and have been actively pushing the forefront of the technology in those places, but I really believe in the translational potential of these technologies, which is why I'm so interested in what we do at my firm.

Rachel Halversen (2:37):

Thank you! That’s great. So, artificial intelligence. This is obviously a big topic right now, and these can be some really complex initiatives for companies to take on. How would you suggest business leaders begin to think about their goals in using AI and how to make the best use of it?

Mohammad Ghassemi (2:55):

So if you're going to do anything in data science (and data science is a slightly larger category than artificial intelligence; artificial intelligence falls within that bucket), there are really three pillars to it. Three legs on the proverbial table. The first of those is the data. That's why it's called data science, right? It starts with the data. The activities within this data bucket are very diverse. It could be that we have a data set today and want to know how to extract value from it. It could be that we don't have data today and want to know what we should collect so that we can make competitive strategic decisions. Or it could be that we have some data today, but it has some quality problems and we want to clean it up, right? These are some rough categories of problems we encounter.

We've done work in all three of those spaces, but one pretty substantial example was with an entity in the Software as a Service space. They're one of the leading Software as a Service providers in the United States, and I think internationally as well. They were suffering from an issue where they had a huge amount of information, a lot of data, but the data had varying levels of quality. They didn't know which parts of the data they could trust and which parts they couldn't. And obviously, if you want to make strategic decisions with that data, like who you should target as your next possible customer or how you should price your services, that can create difficulties. So what we did is we came in and did an assessment with this company to determine where their data quality was excellent and where there were opportunities for improvement.

I then came forward with a very detailed strategic plan for how much time, energy, and effort it would take to get them from where they were to an end state where they could use that data to drive more effective decision-making. We had this wonderful opportunity to partner with this client in seeing the implementation through as well, helping them go through the nuts-and-bolts exercise and training members of their team to get good at this. It had a really substantial impact on their ability to prioritize customers within their sales funnel and also provided some guidance for their broader marketing activities. So we were really happy with that particular one. That's an example of the data side.

So then there are these two other buckets, right? I mentioned data science: data, followed by science. That's the algorithms bucket. Now, algorithms show up in a couple of different ways. One of those is, let's call it predictive analytics. There's something that's going to happen at some point in the future, it's hard to know what that thing is, and I'd like to use some information I have right now to predict it. It doesn't have to be in the future; it could just be something that's hard to capture today. But in general, people have some information today and they want to figure out something that's happening tomorrow. We have had some wonderful opportunities to work with folks in the predictive analytics space. The second subcategory within this is descriptive analytics.

So you have a very complex, very large data set, hundreds of thousands of columns. How do you distill the key descriptive components of this data so that you're not overwhelmed by looking at a spreadsheet with a hundred thousand columns and a million rows? That's descriptive analytics, where we've done some work. The third of these, which as we all know has drawn a lot more interest recently, is the Generative AI space, generative analytics: you want to generate data. That's what ChatGPT is doing, right? It's generating data, and the image tools you've probably seen on the web are doing the same thing. So the third question is, how can you specify a set of constraints and then generate data that meets those constraints? Given that the Generative AI piece is of interest to a lot of people within our client base, especially recently, I'd be happy to provide an example of that.
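To make the descriptive analytics idea concrete, here is a minimal sketch of one common distillation technique, principal component analysis, which compresses a very wide table into a handful of summary dimensions. The file name, column assumptions, and component count are purely illustrative, not details of any engagement described here.

```python
# Minimal sketch of descriptive analytics on a very wide dataset: reduce thousands
# of columns to a few principal components that capture most of the variation.
# Assumes a numeric table; the file name and n_components are illustrative only.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("wide_dataset.csv")            # hypothetical table with many columns
X = StandardScaler().fit_transform(df.values)   # put all columns on a comparable scale

pca = PCA(n_components=10)                      # keep the 10 strongest directions
components = pca.fit_transform(X)               # each row is now described by 10 numbers

# How much of the original variation those 10 summaries retain
print(pca.explained_variance_ratio_.cumsum())
```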

There's a client we've had the wonderful opportunity to work with that's interested in figuring out how you can use Generative AI techniques to help explain what's happening within the financial services domain. I can actually say who this particular client is, because we're also working with them in a research capacity through my appointment at the university: it's JP Morgan Chase. In this particular project, we're interested in seeing if we can model how cash flows are impacted by innovation, using large language models to take these very complex documents—financial documents, legal documents, intellectual property statements, and so on—and distill them into values we can use to make robust statistical predictions about how cash flows will change as a consequence of those innovations. So again, I can disclose the name because of the nature of that relationship, but that's an example in this second area where we're working.

The third area has to do with insight generation, and that's actually where a lot of the greatest impact is for the clients we work with. This comes after that data science piece I mentioned. So you've got some data, you've done some science on it; well, you've got to action against that now. You don't want it to just be a research project that's cool, you want it to drive some business value for you ultimately. The way you do that is you have to turn what the algorithms give you into some kind of actionable insight, something that can be integrated into a business process so that it increases revenue or decreases costs. We've done some work here with some of our clients, including a very large one in the particular case I'm thinking of now.

It's a leading pharmaceutical company within the United States, and we've done some really exciting work with them on figuring out how we can take algorithms developed on very large sets of data, both public and within their facilities, to derive insights that drive marketing actions within those organizations: how they market, who they target, and so on, as well as making projections about what upside revenue will look like as a consequence of those actions and the costs necessary to take them in the marketing space. So those are three examples, one within each of the three areas where our firm does work. Just to say, we do both the strategic piece, which hopefully the examples made clear, as well as the actual implementational components of the work.

Rachel Halversen (10:18):

That's really interesting. I can tell I'm going to learn a lot from you during this conversation. So, with your decade of experience in translating AI systems into products, what are the key factors you believe contribute to successful AI product development and deployment?

Mohammad Ghassemi (10:34):

Great question. So, it's interesting. Artificial intelligence is a word that everybody's talking about these days. Back when I first started working on AI, it was a little bit more of an obscure term; there wasn't the great interest in it that there is today. What I've come to conclude after having worked on it for many years, both in the theoretical sense and, as you mentioned, in the translational sense (building systems that really get implemented in the world, for example when I was at S&P Global as a director of data science), is that there is an essential human component if you want these systems to work well. So I'll speak to that human component first and then I'll return to the technical components. They're both important, but I think the human component is a necessary but insufficient condition; the technical piece is part of those sufficient conditions.

So what do I mean by the human component? If you're going to come in and build a system with artificial intelligence, the goal should not really be to automate procedures, but to augment the human beings who are performing those procedures. How can these tools come in and not replace operations, but supplement them? Why? Because within large organizations, and small organizations for that matter, you have human stakeholders who you want to be allies of the end product of any data science or artificial intelligence effort you do. If the way this is positioned is that the work of these algorithms is here to replace you, that's a very different kind of messaging, and it's not very effective messaging compared to showing how artificial intelligence techniques can augment the workforce and make people more effective. And in truth, that's actually necessary: most of the jobs I see within my clientele, at least, are not things that can be trivially automated, but a lot of those jobs could be very strongly supplemented and augmented through the use of artificial intelligence and its downstream effects.

So I think the first thing is making it clear to people specifically how that augmentation is going to occur, whether that's helping marketers find better targets, or figuring out how you price things more effectively to account for complex market forces, or whatever it might be. But then, more importantly, in addition to surfacing those insights to the humans, you have to make sure it's clear how they action against them. With an artificial intelligence tool, I could tell you, for example: if you want to sell your widget, let's say paper cups, then the best target to go after is individuals between age X and Y, and they should look like this, talk like that, have this level of income, and so on. All the statistics bear out that that's the best group to go after. But if you want to take the next step, you want to make this actionable for the human being, so you go one step beyond that. Generative AI is one place that can help with this. You want to be able to say, "Here's what you tell that person to get them interested in the product"; maybe you draft a marketing message that would go out to them. You also want to tell them what cadence the communication ought to occur on. You still want the human being in the loop, because they've spoken to potential clients and they might know things an algorithm doesn't necessarily know, but you want to empower them with ease of actionability of the artificial intelligence. So I emphasize this because I think the most important thing for AI to have an impact in an organization is having a very clear path to how the outcome of the algorithm can be actioned by the organization in some way.

Otherwise, it's just a scientific exercise, which is cool, but it doesn't have a business impact. So you've got to have that roadway to the action, okay? But the second thing is this: if you have the roadway to the action, but you don't have people who are very technically gifted with their hands on the steering wheel, you can drive the car into the ditch. The difference between artificial intelligence and maybe some of the other domains that non-boutique firms work on, firms that paint with a more general brush (and those folks are all super smart and really great at what they do), is that there's an incredible amount of background knowledge in principles of statistics, probability, mathematics, and so on that you need if you want to engineer one of these systems without making what I call rookie mistakes. I'll give you an example if you'd like.

So let's say we're building a system that's supposed to figure out, and I'm just going to make this up, whether people are at risk for lymphoma, whether they're going to get cancer. The problem is, for this lymphoma system, we don't have a lot of examples of people who are from the southeast of the United States and who are, let's say, between the ages of 50 and 60. Let's just say, for whatever reason, our dataset doesn't have a lot of information there. If you're not careful with the way you design the algorithm, it won't realize that any conclusion it draws has to be taken with a grain of salt, because it didn't have a lot of information to draw a conclusion from: just a few people who were 50 to 60 in that region.

If we only had a couple of people, for example, it could come up with really silly results. Now, if you're a person who has studied AI, you intuitively know this, you immediately know this, and you know what kinds of probes and checks you should run on the system so you don't get things like embarrassing hallucinations or predictions that lead you to make the wrong decision. So the expertise, I guess what I'm saying, is extraordinarily important. Knowing what you want to use it for is of course a must, or you're wasting your cash. But once you know what you want to use it for and how you want to action it, if you don't have the deep technical expertise to help you design that airplane to fly you to where you want to go, you're equally stuck. So you want to have these two things: a very clear idea of how you're going to use it at the end, and people with the deep technical knowledge who can help you build it so you can get there. Does that make sense?
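As a rough illustration of the kind of "probe and check" Ghassemi describes, the sketch below counts how many training examples exist in each subgroup before a model's predictions for that subgroup are trusted. The column names, file name, and support threshold are hypothetical assumptions, not part of any real system.

```python
# Sketch of a "do we have enough data here?" check: flag subgroups that are so
# thinly represented in training data that predictions for them deserve extra caution.
# Column names ("region", "age"), the file, and the threshold are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Count training examples per region and per 10-year age band
df["age_band"] = (df["age"] // 10) * 10
support = df.groupby(["region", "age_band"]).size().rename("n_examples")

MIN_SUPPORT = 100  # judgment call, not a universal rule
thin_groups = support[support < MIN_SUPPORT]
print("Subgroups where conclusions should be taken with a grain of salt:")
print(thin_groups)
```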

Rachel Halversen (17:23):

Yeah! That’s helpful, thank you. So with that, what are some of the challenges you’ve seen companies encounter when using AI?

Mohammad Ghassemi (17:00):

There are several challenges, right? It depends on the size of the company and on what you want to use it for, but I'll tell you one big one that comes up. Let's call it a common mistake that gets made when people deal with artificial intelligence, and I'll speak in an example. If I ask a system to predict the weather tomorrow, like the app on my phone, what does it do? It tells me something like, "Here's the probability that it's going to rain tomorrow." We all have weather apps on our phones, and they tell us things like, "Oh, it's 80%, it's 30%," and so on. But here's the thing: in reality, maybe it's 80%, but depending on how good their algorithm is, there's an error bar around that. It could be 90, it could be 70.

You see what I mean? There's this buffer around the 80. If they had more information, they could say, "Oh, I'm totally sure it's 80." If they have less information, maybe their radar breaks or they get some bad information from someone, it's actually going to be noisier, so it could be a number between 70 and 90. A lot of folks who release algorithms, including even the weather apps on your phone, don't give you what's called a confidence interval: the optimistic upper bound and the pessimistic lower bound on the prediction. That matters. Here's why. Coming back to the weather app example, let's say your phone told you, "I think the odds it's going to rain tomorrow are 50%, but if I'm being optimistic, it's 40%, and if I'm being pessimistic, it's 60%." That's very different than if it says, "If I'm being pessimistic, it's 10, and if I'm being optimistic, it's 90." You see what I mean?
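One simple way to produce the optimistic and pessimistic bounds Ghassemi is describing is to bootstrap: retrain a model on many resampled versions of the training data and report the spread of its predictions. The sketch below is a generic illustration with an assumed logistic regression model, not the approach of any particular weather service or client.

```python
# Minimal sketch of attaching a confidence band to a predicted probability by
# retraining a simple model on bootstrap resamples of the training data.
# The model choice and interval width are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def predict_with_interval(X_train, y_train, x_new, n_boot=200, alpha=0.10):
    preds = []
    for seed in range(n_boot):
        Xb, yb = resample(X_train, y_train, random_state=seed, stratify=y_train)
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)
        preds.append(model.predict_proba(x_new.reshape(1, -1))[0, 1])
    preds = np.array(preds)
    return (np.percentile(preds, 50),                     # central estimate
            np.percentile(preds, 100 * alpha / 2),        # "optimistic" lower bound
            np.percentile(preds, 100 * (1 - alpha / 2)))  # "pessimistic" upper bound
```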

Rachel Halversen (19:32):

Yeah.

Mohammad Ghassemi (19:33):

The most likely value could be the same, but the upper and lower bounds, the optimistic and pessimistic values, could be vastly different. The issue is that in a lot of cases in financial services, if you want to be cautious, if you're conservative, you actually don't care what's most likely; you care about what's going to happen in the most pessimistic case. So not quantifying the confidence around your predictions is a common mistake. It's a big challenge, because the algorithms can then lead you to outcomes you weren't expecting, and people throw the baby out with the bathwater. But it's not because the algorithms are deficient; it's because, again, you don't have those experts in place who can tell you how to interpret them and use the outcomes in the correct context. So how do you interpret AI algorithms and their outcomes in a way that is useful for the downstream business need? I think this is the biggest challenge with them. I could speak about more, but that's certainly the biggest one.

Rachel Halversen (20:43):

That’s interesting. I’d love to hear more about how that could play out.

Mohammad Ghassemi (20:33):

Okay. So we have some folks in financial services, and they had an issue where they needed to predict, let's say, a bad event happening for some of their clients, some bad financial event. I'll leave it to your imagination what that is; it's probably pretty obvious. The issue becomes that depending on what the probability of that bad financial event looks like, they may or may not, let's say, increase a credit limit, or give someone a loan, or give them access to certain financial products and services. So the question becomes: when you're trying to make this prediction, should you take what's most likely to happen? Should you take the pessimistic lower bound? Should you take the optimistic upper bound? How do you even choose where in that range of possible futures you should operate when making your bets?

The way you do that is through a partnership with the business where, for all possible configurations of how the bets are placed (it sounds like a lot, but it's actually not), we can assign prospective dollar values: what is the value back to the business, not just in the immediate term from successful interactions with their customers and the downstream profits that come from that, but reputational gains and so on. These all get quantified in ways where we can very precisely and surgically understand the consequences of the algorithm making a correct decision or an incorrect decision, and we can consolidate that in a way that helps us tune exactly how optimistic or pessimistic we should be within that range when making decisions. So there are very principled statistical techniques we can use to help people get not only a model, but a model that's going to maximize their profit in the future, which is why they're building it, right?
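A bare-bones version of that "assign dollar values to the bets, then tune how pessimistic to be" exercise can be written as a threshold search over historical outcomes. The payoff numbers below are purely illustrative placeholders; in practice they would come out of the partnership with the business that Ghassemi describes.

```python
# Sketch: choose the risk threshold for approving customers by assigning a dollar
# value to each possible outcome and maximizing total profit on held-out data.
# y_bad and p_bad are NumPy arrays; all payoff figures are illustrative.
import numpy as np

def best_threshold(y_bad, p_bad,
                   gain_good_approved=100,   # profit from serving a customer with no bad event
                   loss_bad_approved=-500,   # loss when the bad event happens anyway
                   cost_good_declined=-50,   # opportunity cost of turning away a good customer
                   value_bad_declined=0):    # avoided loss, counted as neutral here
    thresholds = np.linspace(0.01, 0.99, 99)
    profits = []
    for t in thresholds:
        approve = p_bad < t                  # approve when predicted risk is below t
        profit = (gain_good_approved * np.sum(approve & (y_bad == 0)) +
                  loss_bad_approved  * np.sum(approve & (y_bad == 1)) +
                  cost_good_declined * np.sum(~approve & (y_bad == 0)) +
                  value_bad_declined * np.sum(~approve & (y_bad == 1)))
        profits.append(profit)
    return thresholds[int(np.argmax(profits))]
```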

Rachel Halversen (23:05):

That actually goes into the next question really well, because the next question is: how do you ensure the AI solutions you design align with a company's business objectives and deliver real value?

Mohammad Ghassemi (23:17):

You do not build an AI product. Hands should not touch a keyboard until and unless, A, it's very clear what you want the AI to do, and B, it's extremely clear who the human beings are that are necessary to action against what the AI provides. That is a necessary prerequisite. When I mentioned the human element before, that's exactly what I was speaking about. You have to have that, because you can build the best piece of artificial intelligence in the world, and if people don't touch it and play with it, it's useless. I'm actually going to give you a contemporary example. You know the technology around ChatGPT?

Rachel Halversen (24:04):

Yeah.

Mohammad Ghassemi (24:05):

Everybody knows it, right? Here's the funny thing: those chatbots, basically things that are very close to ChatGPT, have been around throughout my academic career. As I've told you, I also teach classes, write research papers, and so on. That's not new stuff, right? Within the more scientifically inclined community, we've been seeing it for a long time; we've seen demos of those tools and so on. But here's the difference: why did what OpenAI came up with spread like wildfire? From my perspective, it's because they thought through the user experience. They thought through how people would want to interact with an algorithm like that in a natural way, in a way that made their lives easier. Not clicking a bunch of configurations on a menu, selecting 5,000 sub-things, and then very carefully typing a prompt into an engine to get it to give you something that was still useful, but with a tedious, ridiculous amount of overhead that paid no respect to the ultimate downstream user who had to use it to accomplish a task.

So that user experience is critical. The AI is there to serve the user, and in this case, the user is the business. What the business needs is why the AI is built, and this is why, in most of the engagements we do within my firm, there's always a strategic phase that precedes any hands on keyboard. We're trying to figure out: what do you want to use this for to drive value? Sometimes they don't know, and we perform an opportunity prioritization to help them figure out where they can use this to get value. But then again, B: who are the people inside the organization who need to be involved if you want to action against that value? Who are the people we've got to bring in, so we can integrate these things into their workflow, get them on board, and make sure they understand that value? Because organizations are people, and getting those people to see the value is extremely critical.

Rachel Halversen (26:08):

So with that, AI adoption can be a challenge for organizations, especially if they're new to it. What are some strategies you suggest for smooth AI adoption, and how can you support it?

Mohammad Ghassemi (26:20):

Yeah, that's a great question. So I think what you want to do when you're starting down an AI journey (and I say this very authentically) is, if you're not an expert at something, leverage experts. It could be my consulting firm, it could be others that you know, it could be going out and doing your own research, but you want to leverage what experts can tell you about how to structure your artificial intelligence workflows. I can be a little bit more clear about what that means; I'll double-click on it and then zoom back out. Building an AI system that achieves value requires these two components, the human element and deep technical knowledge, and there are not very many people at that Venn diagram intersection.

Most very technical people are buried somewhere within Google, or Amazon, or Meta, and they write papers and do science and do research, and in many instances (not all, of course) they're not used to thinking about those other things: the very important human elements, the end user, how you impact them, and so on. So what you're looking for if you want to start an AI initiative within your organization is someone at that Venn diagram intersection who can talk the talk (they know how businesses work, they've got real experience) but who can also walk the walk: they deeply understand technically how these systems are built. Why? Because they can help set up the scaffolding within your organization on both of those fronts.

The first front is the opportunity prioritization: what should we use this AI for? Who are the people we need to work with? The second front is, even if you're starting from zero, who are the technical people we should bring in to help us build this? Because if you're going to partner with an external expert, there are one of two ways to do that: either you bring them into your organization, or, if you're working with someone like me, you bring them on to do contractual work for you, usually to help get your organization better at something. In that case, you want to leverage the experience we bring to identify who on your team is best suited to take on these tasks, and to scaffold them in a way where you can start taking advantage of the opportunities that exist within AI, but in a way that meets both of the necessary and sufficient criteria I spoke about.

Rachel Halversen (29:03):

That’s great. Thank you. So I’d like to turn now to a big question many might have: what are some of the potential risks of using AI, and how can companies work to minimize them?

Mohammad Ghassemi (29:14):

So I think the biggest risk comes from blindly trusting an algorithm. This is one of the reasons I mentioned that you want to have a confidence interval on any results you have. You don't want to blindly trust algorithms. A big risk that can come from this, and I actually think it's absolutely enormous, is using, let's call them, big data algorithms without big data. That's a huge one. You can get into all sorts of problems trying to use advanced analytic approaches when you don't have the data ecosystem to back them up; it leads to all sorts of terrible downstream problems. There are techniques, by the way, very principled statistical techniques, that allow you to do amazing things even with small data. But you have to know what those techniques are and when to use a hammer versus a scalpel.

So I think not using the right tool for the right job is a huge risk, because it leads you to algorithms that advise you the wrong way. They tell you to go after the wrong customer segment. They advise you to adjust prices in the wrong direction. They hallucinate, right? Even the best algorithms, as we've seen with ChatGPT, hallucinate. They're great in most instances, but even the best of them make mistakes. So that's why I keep coming back to this: if these are the risks and you want to mitigate them, you want to have a human as part of the AI workflow as well, so that experts can interact with the consequences of what the AI is generating and be that last check, that final pass in the process that makes sure this is right, this makes sense, let's go with it.
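One concrete instance of the "principled techniques for small data" mentioned above is a heavily regularized model evaluated with leave-one-out cross-validation, which gives an honest error estimate without needing a large held-out set. The synthetic data below is a stand-in used purely for illustration.

```python
# Sketch of a small-data "scalpel": a regularized linear model whose regularization
# strength is chosen by cross-validation, scored with leave-one-out CV.
# The data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((40, 5))                           # small dataset: 40 rows, 5 features
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(40)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))    # regularization strength chosen by CV
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print("Leave-one-out mean absolute error:", -scores.mean())
```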

Rachel Halversen (31:11):

So going a step beyond risk, what about some of the ethical implications and concerns about AI? How can leaders ensure that these tools are being used in an ethical way?

Mohammad Ghassemi (31:19):

This is a fantastic question. There's an entire field of study within AI devoted to this. There's actually a very close colleague of mine at the university I'm at, Michigan State, whose whole area of research this is. It's also an area where I have some experience working. To restate the question: how do you take an algorithm and prevent it from doing something unethical while it's trying to solve some objective task? I'll give you an example. Let's say you want to predict recidivism. There are these classical examples we've probably all heard of, where factors that clearly have nothing to do with recidivism, like your gender or your ethnic background, might come up in the model, right?

An algorithm, if not constrained properly, if not built by a person who knows this stuff, could learn those things errantly and start to do unethical things. It could take some statistical pattern in the data that's the result of very, very complex factors and forces and learn it as a truth, which is not actually true. So, to stop algorithms from learning and then acting on unethical things like that, there are principled techniques. There are ways to get an algorithm to optimize for an objective but be penalized for learning things that are ethically wrong, so it considers both. Now, typically, a rookie user or trainer of an algorithm would only get it to optimize against one objective. It would say, "Hey, go maximize profit," or, "Go figure out how we predict recidivism rates optimally," or whatever else it might be.

But there are actually formal techniques you can use to place constraints on these algorithms so that they learn not to violate ethical considerations in the course of generating their predictions, whether they're assessing people for risk, creditworthiness, mortality, or any of the other tasks you might be interested in. So there is a solution to the ethics problem that exists in AI. It's just that this stuff is growing and changing phenomenally fast, so oftentimes the deep technical expertise isn't there to advise people on how they can make those artificial intelligence algorithms ethically aware. The short version is that it's a solvable problem. It is a risk, but it's a solvable risk.
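One simple way to express "optimize the objective, but get penalized for learning ethically problematic patterns" is to add a penalty term to the training loss, for example on the gap in average predicted risk between two groups. The sketch below is a toy demographic-parity-style penalty on a hand-rolled logistic model, offered as an illustration of the idea rather than a complete fairness methodology.

```python
# Toy sketch: logistic regression trained by gradient descent, with an added penalty
# (weight lam) on the squared gap in mean predicted risk between two groups.
# Purely illustrative; real fairness work involves far more than this one term.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_penalized_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)                  # ordinary log-loss gradient
        gap = p[group == 1].mean() - p[group == 0].mean()   # difference in mean predicted risk
        s = p * (1 - p)                                     # derivative of the sigmoid
        grad_gap = (X[group == 1] * s[group == 1, None]).mean(axis=0) \
                 - (X[group == 0] * s[group == 0, None]).mean(axis=0)
        w -= lr * (grad_loss + lam * 2 * gap * grad_gap)    # gradient of loss + lam * gap**2
    return w
```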

Rachel Halversen:

And looking forward, what emerging trends in AI do you think businesses should be aware of and prepare for?

Mohammad Ghassemi (34:14):

Yeah, that's a great question. I think Generative AI is definitely one, but everybody's already aware of that. One thing some businesses may not be as aware of is what actually made the contemporary Generative AI technology so successful: a domain called reinforcement learning. I can tell you a little bit more about this. Have you heard of AlphaGo?

Rachel Halversen (34:41):

I have not.

Mohammad Ghassemi (34:43):

So there's a group called DeepMind, a subsidiary of Google, and the folks at DeepMind trained an algorithm to beat the world's best Go player, in Korea, I believe it was. Teaching an algorithm to make a sequence of strategic moves to reach a final end goal is a very complex problem, right? But what made the most recent generation of these Generative AI technologies cross the threshold where they started to become useful was that they used some of the principles of that domain, reinforcement learning, to get them to perform better. That domain of artificial intelligence can be applied to all sorts of problems: portfolio optimization, logistics, how you move goods from one place to another to minimize downstream costs or environmental impact, and so on. So I think reinforcement learning is one area people may want to pay some attention to; I think there's a lot of value in that domain.
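For readers who want to see what reinforcement learning looks like in its simplest form, the sketch below is bare-bones tabular Q-learning on a toy five-state environment: the agent learns, by trial and error, a sequence of moves that reaches a rewarded end state. The environment and all the constants are invented for illustration.

```python
# Bare-bones tabular Q-learning on a toy chain of states; the agent learns to keep
# moving right to reach the rewarded final state. Everything here is illustrative.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1            # learning rate, discount, exploration rate

def step(state, action):
    # Toy dynamics: action 1 moves right, action 0 moves left; reward only at the end.
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

rng = np.random.default_rng(0)
for episode in range(500):
    state = 0
    for _ in range(20):
        action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Q-learning update: nudge the value toward reward + discounted best future value
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))  # learned best action per state (should prefer moving right)
```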

Rachel Halversen (36:05):

Would you like to touch on how your firm is poised to help businesses through these challenges and changes?

Mohammad Ghassemi (36:10):

Yeah, absolutely. So my firm, as I mentioned a little earlier, does two things. First, we help you with the strategy. If you're interested in AI, you have an intuition that AI can help you, but you don't know how, we can help you answer that question: what can we do with AI? And we're not going to just come in and run a theoretical exercise; it's very targeted. What makes your business money, and how can AI come in and supercharge that? That's the question we answer when we have our strategic hats on. That's the first way. And then, as we flesh out exactly how AI can do that and lay out the key ingredients (the data, the algorithms, the insights you need, the three pillars I mentioned), we can help scaffold those for you or build the whole thing out entirely, end to end, depending on where you are and the extent to which you need external support. We can help with that whole process.

Rachel Halversen (37:12):

Perfect. My last question is: do you have any thoughts on what kind of impact this will have on the global economy, the business landscape, and the labor force?

Mohammad Ghassemi (37:22):

It's a great question. I'm probably a little bit more optimistic than most people about this, because I work on these things so deeply and technically. I think algorithms are not going to replace people; I think algorithms are going to increasingly supplement people. I'll give you an example. Microsoft Word has a spell checker and a grammar checker, and it hasn't eliminated writers just because Microsoft Word is there. In a similar way, even with GPT available to help write drafts, there's still the question of what you would ask GPT to help you create a draft for. You see, for any solution, a meta-problem is created, and human beings can always work one layer higher in the abstraction of problems. I don't think that's going to go away. Now, in terms of how that affects the total net number of jobs, that's an open question. It depends on a lot of economic factors and forces, and on how companies decide to change the total number of people doing certain tasks, and so on.

But I'm generally optimistic that this is going to create more opportunities than it's going to take away. Let me put a little bit more flavor on this, because I realize that was abstract. There was a time when, if you wanted to go start a business, let's not even talk about a crazy business, let's say a bakery, and you wanted to come up with a menu for that bakery, it would've taken a lot of work, right? Well, think about the fact that an AI tool like GPT-4 now gives you a way to cycle through ideas for your menu so that you, the person, can go start the bakery faster. Yes, on the one hand, maybe it took away time from a consultant or someone else who would've helped with that menu, but you, the person, can now more easily go start a bakery. So it doesn't change the fact that there are opportunities; it just moves them around a little. That's how I see it.

Rachel Halversen (39:36):

Your optimism's refreshing, honestly. It really has been so wonderful talking to you and learning from you today, Mohammad. Thank you so much for your time.

Mohammad Ghassemi (39:46):

I appreciate the invitation. It's been a pleasure talking to you as well.

Rachel Halversen (39:52):

As a reminder, our guest today has been Mohammad Ghassemi, the founder of Ghamut, a boutique AI consulting firm, and I'm Rachel Halversen of Business Talent Group. To start a project with Mohammad or thousands of our other highly skilled independent consultants, visit businesstalentgroup.com, or subscribe for more of our conversations with on-demand experts and future-of-work thought leaders wherever you find your podcasts. Thank you for listening.
