In this latest Harvard Business School Healthcare Alumni Association Q&A, Ayelet Israeli, PhD, explains the potential, as well as the limitations, of these offerings and offers pragmatic guardrails for deploying them in organizations.
With at least 13 different companies announcing staff reductions resulting in hundreds of layoffs during Q1 2023,1 many biopharma C-suites have doubled down on evolving technologies like generative artificial intelligence (AI) as panaceas to turn around their fortunes. I sat down with Ayelet Israeli, PhD, the Marvin Bower associate professor of business administration at the Harvard Business School Marketing Unit, to discuss how businesses can successfully utilize these technologies.
Michael Wong: Big data, machine learning, next best action, next best experience, and generative AI; if we only had a dollar for every time one of these terms was referenced in a biopharma's annual report. Technology buzzwords have been around for decades, so, given your research findings over the years, what is the realistic potential for companies to leverage and scale these evolving areas and deliver quantifiable returns?
Israeli: At the heart of these evolving technologies is the simple principle of leveraging data for informed decision making, whether in a pilot or in scaled production. For example, I wrote a case study on how Gap's CEO considered how the company might leverage big data, versus creative directors, to predict what consumers want to wear next.2 What was interesting to me was thinking about which tasks big data and AI are suited to, and which tasks are less appropriate for data and AI. Tasks such as store and supply chain operations, integrating a seamless online and offline shopping experience, and restocking sizes across stores can all be better executed with algorithms. However, predicting trends and the success of new products is an extremely challenging task, especially in fashion, and we cannot expect that simply using historical data will help us predict future preferences and trends.
To your point, tech buzzwords have been around for a while, but what is exciting to me is that generative AI could be something as big as the first Industrial Revolution. This assertion is based upon my own research and other field studies in which large language models (LLMs) have successfully created value in everyday tasks like improving coding, enhancing professional writing (such as emails), and driving greater end-user self-sufficiency via sophisticated call-center chatbots. These examples are no longer just pilots; they have been deployed in some Fortune 500 companies like Microsoft and Google, which have integrated LLMs into office products and internet search.
Wong: While it is great to hear of the potential, what are your top three recommendations to help biopharma C-suites pragmatically deploy evolving solutions like LLMs and other technologies?
Israeli: While different industries within the Fortune 500 will have various strategies, I would recommend a three-step playbook for your biopharma colleagues, given the safety aspects of this vertical.
First, as many of these firms have thousands of employees spread around the globe, their C-suites need to provide simple but clear directives about what their teams can and cannot do with generative AI. Frankly, given the risks of well-intended but dangerous employee actions (for example, around data security, privacy, and protection of proprietary data), I would recommend that most employees be banned from using generative AI on their company-provided computers. While there might be some closely supervised test areas, start small during this learning process. Assign specific teams to evaluate different use cases across the company and see where these new tools can be deployed responsibly.
Second, provide continuous training and exploration of these new technologies for your employees. While it is great that many people are enthusiastic about the potential of LLMs to reduce the time it takes to bring life-saving drugs to patients, it is crucial to remember that many of the generative AI tools we are seeing these days (such as ChatGPT) were not built to reveal the truth or to display correct knowledge. Rather, LLMs were built to generate content, such as text, by predicting the words most likely to come next. In turn, one cannot expect the output to necessarily consist of true statements. Managers and employees need to understand that LLMs are known to make things up, or hallucinate, and that their output therefore cannot be used without auditing for correctness.
Finally, invest in change management and communications during this journey. For the former, recognize that pilots trained on data containing underlying biases may reproduce those biases, and figure out how to circumvent these issues. For the latter, continuous C-suite communication of behavioral expectations, ideally with the CEO's and the CIO's or CDO's signatures on the email, should be the norm. This simple approach will help set the guardrails for the entire organization to carefully crawl, walk, and eventually run with these powerful new tools to drive profitable growth.
1. Bayer, Max, Layoffs Continue to Batter Biotech, with Big Pharmas Piling on the Pain, Fierce Biotech, February 1, 2023.
Ayelet Israeli is the Marvin Bower associate professor of business administration at the Harvard Business School Marketing Unit. She is the co-founder of the Customer Intelligence Lab at the D^3 Institute at Harvard Business School. She received her PhD from Northwestern University and her MS, MBA, and BS from The Hebrew University of Jerusalem.
Michael Wong is an emeritus board member of the Harvard Business School Healthcare Alumni Association.