In this episode of BTG Insights on Demand, author, entrepreneur, and innovation expert Stephen Wunker of New Markets Advisors joins BTG’s Rachel Halversen to discuss generative AI, one of today’s most promising and disruptive technological developments. Together, they’ll explore the benefits companies may be able to unlock with generative AI, how business leaders can best begin to experiment with this rapidly developing technology, and the broader impacts it may present for the economy, labor market, and the world at large. Listen to the episode or read our lightly edited transcript of the chat below:
- The promise of generative AI for companies (02:04)
- How generative AI compares to other innovative technologies (03:24)
- How to think about generative AI in business (05:50)
- Limitations of generative AI (09:44)
- Generative AI best practices for businesses (12:22)
- Assessing the value of generative AI solutions (14:07)
- Risks of generative AI (15:09)
- The legal and regulatory impacts of generative AI (16:47)
- Ensuring data security with generative AI (17:47)
- Ethical concerns around generative AI (19:09)
- How leaders can ensure the ethical use of generative AI (22:00)
- Emerging trends in generative AI (22:52)
- The global impact of generative AI (25:36)
Rachel Halversen (01:11):
Steve, it’s so great to talk to you! Thanks for joining us today!
Stephen Wunker (01:14):
Oh, my pleasure to be here. Thanks for having me.
Rachel Halversen (01:16):
Now, we’ve worked with you quite a bit here at BTG, but for those in our audience who might not be familiar with your work, could you give us a quick introduction about yourself and your background?
Stephen Wunker (01:26):
You bet. So I'm a former Bain & Company consultant. I have led corporate innovation and corporate venturing programs in large organizations and have also built and sold a couple companies of my own. I was the head of the healthcare and financial services practice at Innosight, which is the consulting firm founded by [Harvard Business School] professor Clayton Christensen, who coined the term disruptive innovation. And then since 2009, I've led New Markets Advisors. We're a 16-person boutique firm with offices in Boston, Paris, and San Juan, Puerto Rico focused on innovation and innovative opportunities in business.
Rachel Halversen (02:04):
That’s great, thank you! So let’s just start with the basics. What’s generative AI, and what do you see as the promise it holds for companies?
Stephen Wunker (02:12):
Sure. So AI has been around a long time, and it's been used in typically algorithmic ways that take advantage of machine learning to hone models in ways that humans just never could, over a large number of iterations and very large data sets. The emergence of generative AI, however, is fairly new. It builds on machine learning, but it does so in several distinct ways that we don't need to go into. What it does is open up a whole range of use cases. Either in isolation or paired with more traditional, algorithmic forms of AI, it enables a lot of advantages for organizations that are looking to, for instance, bring structure to unstructured data, provide new kinds of insight, or find things based upon general intent, not just precise queries. There's a business opportunity in generative AI that prior, more algorithmic forms of AI struggled to unlock.
Rachel Halversen (03:24):
So as an expert on all things innovation, what really draws you to generative AI? And how does it compare to other types of innovative technologies you’ve seen?
Stephen Wunker (03:33):
So I recognize that platforms are the best way to unlock innovation. The internet was a platform. I was responsible for one of the first smartphones. That was one of the corporate ventures that I led. That was a platform for innovation. Generative AI is a platform, but it's an especially intriguing one because it can go in so many different directions. It's not just about communication like the internet or accessibility like smartphones. It unlocks a lot of different vectors of innovation all at once. And I think much as with prior major platforms that created disruptions, companies sort of get some of the initial implications of it. They're experimenting around. They have not thought through the business threat and business opportunities that are created by generative AI.
So I see this as akin to around 1997 or 1998 with the internet—and I'm old enough to have been at Bain during those times—whereas in maybe '95 and '96 people were experimenting around with the internet. They thought this could be somewhat significant for the business. And then all of a sudden the accelerator got slammed on, and they very quickly realized that they had to play offense, not just defense, and really start transforming their business models, the ways they worked internally, their customer relationships. It was a far-reaching disruption into so much that they had done, which created both threat and opportunity. So the difference with generative AI as opposed to the internet and the smartphone is that in the prior cases, we had several years to adjust. I'm not sure we're going to have several years in the case of generative AI. So there's even more urgency. For somebody who's staked their career on innovation, that is a fantastic moment and something that certainly fascinates me. Terrifies me a bit, but also fills me with hope.
Rachel Halversen (05:50):
Terrifying and hopeful, that sounds about right. So staying on the hopeful side, what are some of the best ways that companies are thinking about and using AI today?
Stephen Wunker (06:01):
The best companies are not really starting with the technology. Certainly they're cognizant about some of the major avenues that it can take, for instance, bringing structure to unstructured data, and they're on the hunt for that. But they are focused on use case needs, and they're choosing use cases to explore based upon some of the advantages of generative AI, but they want to be focused on the users first. So that is number one.
And then secondly, once they've determined the landscape of user needs, they're trying to think very expansively about not just revenue and cost opportunities, but fundamental business model changes that can be undertaken. So really trying to expand the number of vectors of innovation that they're looking at. Yeah, it's technology, but it's technology in the service of many other changes that will benefit some kinds of users or benefit the cost position of the organization. Hopefully both. And then finally, they're approaching it with some of the beneficial attributes of these platforms in mind, which is that they don't take a whole lot of coding. It's not some massive SAP implementation that we're talking about. You can be very scrappy and quick and low cost. None of us have this figured out, but they can be pretty quick and disciplined about how they enter into this experimentation.
This is the early, early days, right? And we are certainly going to figure this out. But if you look in healthcare, for instance, which is a field where I focus a lot of our energy, it's a tightly regulated domain, right? We can't just have—as much as the science fiction blogs may talk about doctor AI seeing you now, at least in most developed markets, that is simply not going to happen. But we've seen all sorts of interest from helping people structure their diet and exercise choices based on observed visual cues, what's in your refrigerator, what's on a menu, to understanding what patients might be saying about some of their symptoms, to recording the consultations that occur between patient and physician in an exam room and providing structure to that and summarizing that for other doctors who that patient might be seeing later on, through to R&D, to creating new designs for molecules and devices, to interpreting signals that are coming from remote monitoring devices, helping to triage patients for different sorts of care.
So all of this is being deployed now, and even that—which is going to be sort of the toddler days of AI—can be significantly transformative on labor costs, on the way people spend their time, on getting physicians and other healthcare professionals to practice to the top of their license and ensure that they're taking care of the things that require their professional judgment the most and not so much of the administrivia that actually consumes a whole lot of healthcare time, to giving patients a much more rewarding and ultimately effective experience. So that's in a super regulated domain. If you take that beyond, you're seeing all sorts of applications in financial services, in consumer goods, and certainly amongst a wide array of technology firms as well.
Rachel Halversen (09:44):
That's really interesting. So we've talked about the benefits, but what are some of the limitations of generative AI?
Stephen Wunker (09:51):
Of course there are many, right? And before we think about terminator scenarios, or at least the job terminator scenario, we have to realize that AI is only as good as the data sets that power it. So we recently talked with a nonprofit that was looking to use AI in many developing country situations, but the fact is that the data was really only solid for the urban and higher-income dwellers in these countries. So there is a bias that's cooked into the model, occasionally leading it to somewhat inappropriate conclusions. So that's one. There is certainly a danger that people think that the outputs of generative AI are good enough, and they just run with it.
I've seen people doing consumer research, for instance, using generative AI. Now, there is a difference in effectiveness of these approaches. One large organization we know has actually input the notes from hundreds of focus groups and consumer interviews that it's done over the years into an AI system. So there can be sort of a synthetic response, based upon all these prior observations, about how new concepts might be received or new commercialization approaches, for instance, might be received. Great. But just going out there and asking ChatGPT, what are the jobs to be done of people who are buying life insurance for their families? That's a really bad idea, because there are biases also cooked into that. Right? Most of what it's going to report out on are functional needs, not emotional needs. And we know that we are not computers, we are humans, and emotions are critical to our decision making. There's also a lot of post-hoc rationalization, and there's all sorts of dysfunction, not just in consumer markets but in business-to-business markets as well. And generative AI will focus on what is observed and not what's latent, whereas we know that the big innovation opportunities are lying in the latent needs that are sometimes obliquely expressed, but never or seldom in a fashion that a generative AI is going to ingest into its model and spit out in a way that has prominence for a researcher.
Rachel Halversen (12:22):
To that point, what are some of the best practices that companies should keep in mind as they explore these solutions?
Stephen Wunker (12:28):
So we are certainly still in the experimentation phase, right? I don't know of any of our clients that are going all-in yet on a solution. They're going all-in on experimentation, not on solutions. So that sort of goes without saying.
Ensuring that there is understandability—not of the model, because oftentimes even the best researchers will not understand how a neural network is coming to its conclusions—but understanding the data that's being used is going to be important. Having some sort of comparator, so you have things by traditional methods, and then you have things by AI. Sometimes that's just a historical precedent. Sometimes that's a control group, but that's useful as well.
And then being able to assess these experiments in a rigorous way. So the flip side of willy-nilly experimentation is you can be experimenting so much, you don't actually know what you're looking for. And the best experiments are designed with a hypothesis, something of a control group, and observation. It's just like we learned in grade school—the scientific method, right. So that you can compare the actual results to the hypotheses and then hopefully rapidly iterate. And again, one of the great benefits of generative AI is that there can be such rapid iteration of approaches, unlike with ERP systems or some of the more static IT transformations that underlay digital transformation efforts in the past.
Rachel Halversen (14:07):
That’s a great point. So while we’re in this experimentation phase, how can business leaders properly assess the value these solutions can deliver?
Stephen Wunker (14:16):
Well, look, it goes to having that control case and having clear hypotheses. So having measures associated with those hypotheses that ensure that you can understand the benefits, cost savings, time savings, premiumization, new customers, more loyal customers—having those sort of specific metrics will enable you to find those benefits. That being said, you're experimenting partly to be surprised. So if there are new effects that are happening, let's certainly be open to that. Right? So there should be a significant "other" bucket where you're looking for sort of the unintended consequences—positive and negative—and then that's going to affect the structure of your future experiments as well.
Rachel Halversen (15:09):
We’ve talked about the value of generative AI, but what about the risks? What should companies be mindful of, and how can they mitigate any risks that might arise from using these types of AI?
Stephen Wunker (15:40):
So there's been a lot of publicity about unconscious bias and limited data sets or just sort of offensive outputs that come from AI. So that goes without saying. And if you are giving customers outputs of generative AI—as opposed to synthesizing what they're telling you with generative AI—then there is absolutely a need for editing and control. But beyond that, understanding the real control and what is good enough and what is excellent are really important to assess whether generative AI is matching up.
There are few situations where we've seen people saying generative AI gets an A+. It may get a B+, but not an A+. So is that enough, given the other benefits? That is something that has to be measured both in advance and then through post hoc analysis to see if it clears those sorts of thresholds. But there is that need for control to de-risk, because you might get on average an A+, but once in a while when you get an F, that is tremendously bad. And those risks have to be stamped out, sometimes to a degree that humans don't have to be supervised.
Rachel Halversen (16:47):
Interesting. What about the legal and regulatory implications of generative AI? Things like copyright, IP, that sort of thing?
Stephen Wunker (16:56):
Well, so copyright is a bit of a wild west right now. That being said, there are tools being launched now for private data clouds and deploying generative AI on private data clouds. So keeping ownership of sensitive data is fine. Right? I mean, Epic, which is a large electronic medical records company, is summarizing patient consultations between doctor and patient. That's extremely private data, but it's okay, it's secured, so you don't have to worry about that too much. Just feeding your big confidential business problem into ChatGPT is an extraordinarily bad idea, so you shouldn't do that. But there are plenty of ways to do this in a secure and compliant manner.
Rachel Halversen (17:47):
That’s a great point. Do you have any advice on how users can ensure that security?
Stephen Wunker (17:52):
Well, I mean it depends on who your cloud provider is, right? You typically would want some sort of cloud provider or access to one in a way that you can upload your data in an appropriate manner, or you experiment around with things that may not be so sensitive. There's a company I profiled in a Forbes article recently called SOCi, which helps companies that manage large networks of retail branches to provide very localized social media posts or responses to reviews—and it does that through generative AI. Now, look, if a Subway franchise gets something wrong about its corner of New Jersey and refers to something in Delaware, it's not great, but it's not the end of the world. So they can experiment with that in a way that isn't going to require a huge amount of supervision or risk aversion. Whereas if you're at a bank or a healthcare institution and you're communicating with customers or patients, you need to do that in a much more structured and governed sort of way. But often there'll be regulations anyway that cover what you can and can't do.
Rachel Halversen (19:09):
Can we talk about the ethical implications and concerns about AI?
Stephen Wunker (19:13):
Well, we've talked about unconscious bias, and that is definitely an issue. I think there are broader concerns about what the job market is going to be. There are large IT services organizations I've spoken to who see a huge boon, for instance, in being able to get rid of fairly repetitive coding tasks. But the broader worry then for them is where do the senior coders come from? The supervisors of these systems? Sure, they have them now, but five years from now, 10 years from now, who are these people going to be if they didn't rise up through this process?
I mean, we have this challenge as well. We often will start a new associate right from college taking notes on customer interviews or on client interviews, and it's a way for them to sort of just really dive in depth. We'll then have them summarize it, and they can learn how to do that. Now, generative AI can do that reasonably well—it needs editing—but reasonably well, extraordinarily fast, and also inexpensively. So of course we're going to use that, but then I have to find new ways to train those associates. So we have to play this out a bit and assume that we don't just all free ride on somebody else's training, but there are ways to attain these very job specific skills.
And then look, of course there are other concerns about people taking shortcuts and having B+ work, and moreover, not being able to learn the skills. So we talked about workers. I've got three kids, I'm worried about those three kids. I know the school is worried about all their kids, and already it's very clear just speaking to my kids—well, like one in ninth grade will tell me, oh yeah, somewhere between 60 and 80% of my class is just using ChatGPT to write their responses on questions about a book.
That was fast! Right? And it's extremely disturbing. And for those 60 to 80%, they may not be doing A-level work, but it can produce B-level work. And because the engines produce unique output every time, it's extremely difficult to tell if that was synthesized or not. So for sure, there are sectors where this creates significant ethical concerns. I don't think the answer is just to ban it like New York City schools tried to do. That's like trying to ban the internet. We don't live in North Korea. That's just not going to work. So there have to be other approaches hopefully that draw out a lot of the benefits from these systems and accentuate those even as there are negatives that play out as well.
Rachel Halversen (22:00):
So what are some ways that leaders can ensure that AI is used ethically within their organizations?
Stephen Wunker (22:05):
Well, so there are watch-outs that need to be publicized and with case studies, right, that make it clear what people need to be watching out for. But you also need to ensure that there is a degree of supervision, and there is structured experimentation, so that you know what's happening and what's not. You have clear control groups and you have a very clear disciplined lens for understanding both what you're trying to do in an experiment and what actually happened so that you bring out lessons from that.
I'll tell you, companies are terrible at that by and large. They do it really well in the R&D lab, but even in pharma companies, whose business it is to do this in the R&D lab, you get over to the commercial side of the house and it doesn't occur very often or very well. And that's certainly the case in other sorts of situations, including ones where the product is created every time, like in service organizations. Getting better at experimentation is going to be one of those critical imperatives for companies that are going to succeed with generative AI, because the experimentation isn't going to be over, and we're just going to go execute and all be happy and singing and dancing in the streets.
Experimentation is never going to end because these platforms are going to continue building their capabilities, both in what they are today—because the data set changes—and in how these platforms are going to evolve in their capabilities as well. So you have to get good at experimentation. I'd say that's probably the biggest gap that most organizations, certainly large organizations, are going to face with the implementation of AI.
Rachel Halversen (22:52):
That makes sense. So looking ahead, what are some of the most exciting emerging trends in AI, and how should companies be thinking today to prepare for them?
Stephen Wunker (24:01):
So it goes along a lot of vectors, and of course it varies by industry and by situation. Customer service and customer engagement is one of those obvious ones. Getting people to talk naturally and then understand what's really going on and be able to synthesize that for people who can act rapidly either on that customer situation or more broadly, that's fantastic. That enables real-time intelligence and speedy decision making, as well as potentially tailored interactions back in ways that don't require a whole lot of staff or staff training or staff capabilities. Wonderful. But more broadly, bringing structure to unstructured data is a huge opportunity for organizations.
They have so much unstructured data that's out there, and being able to discern trends or lessons or patterns in that information unlocks a large amount of potential. We've been talking about big data for eons, but it's often big structured data sets. Okay, great…but most data out there isn't structured—varies by industry, but generally that's the case. And so being able to actually bring that into the realm of data analysis and fast data actionability that doesn't require an army of super well-trained data analysts to execute, that's a terrific opportunity as well.
Rachel Halversen (25:36):
So pulling up, on a macro level, where do you think this is going? What kind of impact is this going to have on the global economy and the world of businesses at large?
Stephen Wunker (25:45):
Look, we are already seeing an impact in the global economy. Axel Springer, a very large European publisher, just announced some very significant layoffs of not only its editorial staff, but a lot of its production staff, to be replaced by generative AI. Now they're at the vanguard. They acted very quickly. A lot of other organizations will follow. They may be a tiny bit risk averse at first. They want others to go first. That will happen, and they're going to follow because those are just the economic realities. So there could be a very significant reordering of professional and non-professional roles out there—in a way that maybe happened in other realms, like the replacement of manufacturing by services, but in a super time-compressed sort of fashion. So that can create a whole lot of dislocation. I guess that's my number one worry. And then you break it down into realms like education, where there is, yes, a lot of upside and ability to tailor educational programs, but a lot of downside as well.
I guess my biggest concern about all of this is that the rate of change is likely to be faster than human, organizational, or legislative decision-making can keep pace with. It's a lot faster to speed up technology than it is to speed up organizations. So that's a very negative framing, right? There's a lot of positive framing around the opportunity for customization, personalization, giving voice to people who are often voiceless. There are tremendous things that can be done and unlocked with generative AI, so we need to look at both sides of the coin. Much as I think we looked at the internet. There, maybe it was a little sequential. We thought all about the positives first, and the negatives only hit us later on. Here, they're going to be occurring in parallel.
Rachel Halversen (27:53):
It’s been wonderful talking to you today, Stephen. Thank you so much for your time.
Stephen Wunker (27:57):
Thank you! I really enjoyed it.
Rachel Halversen (28:00):
As a reminder, our guest today has been Stephen Wunker of New Markets Advisors, and I’m Rachel Halversen of Business Talent Group. To start a project with Stephen or thousands of other highly skilled independent consultants, visit businesstalentgroup.com. Or subscribe for more of our conversations with on-demand experts and future of work thought leaders wherever you find your podcasts. Thanks for listening.