Getting value from Big Data

Pharmaceutical Commerce, July/August 2014

Setting a foundation for long-term success: How the pharmaceutical industry can transform Big Data into actionable insight

The term “Big Data” has exploded within the healthcare industry, and it is unlikely to go away anytime soon. In fact, these large sets of information have the potential to affect almost every facet of the life sciences value chain, including research and development, finance and administration, manufacturing, supply chain, and marketing and sales. There is, however, a disconnect between the availability of data and the ability to translate that information into actionable insight.

Historically, business intelligence has dealt with the analysis of transactional data—order volumes, production costs, sales—but Big Data’s true value lies in the ability to evaluate interactions among data, not simply the individual transactions. By connecting information across the enterprise and throughout the healthcare ecosystem—such as sales and manufacturing—pharma and medical device organizations can support business decisions with real-time, granular-level data that not only streamlines workflows but also improves development and delivery of products.

Because pharma operations cover every facet of the supply chain up to the retail point of sale, pharmaceutical organizations are uniquely positioned to capitalize on Big Data. Pharma may not be party to the retail transaction itself, but regulatory mandates require pharmaceutical and medical device companies to track what is sold and to whom, information that largely went unrecorded in the past. This detailed consumer information can further enhance an organization’s ability to evaluate market opportunities and refine its research, product development, manufacturing and logistics functions.

The flow of information throughout a pharma enterprise is circular: research and clinical data support product development, which in turn guides manufacturing processes. Marketing, sales and distribution information provides insight into provider and patient interactions, and the newest form of data, social media, enables companies to gauge consumer sentiment about products directly. Analysis of that sentiment can, in turn, reveal new opportunities for research. (Fig. 1)

It is no surprise, then, that pharma companies have access to more information than ever before. While Meaningful Use (MU) Stage 1 increased the number of healthcare providers capturing patient information electronically, MU Stage 2’s mandate to exchange data among multiple providers has created a treasure trove of structured and unstructured data. All of it can be used to complete clinical trials in a shorter time frame, conduct compliance and outcomes studies, or move more quickly toward personalized medicine. The same increase in the volume and velocity of previously unavailable data is present throughout the pharma enterprise.

Unfortunately, much of this data is captured in silos across the healthcare community. The missing piece for many pharmaceutical organizations is a platform that gathers information from partners and disparate internal sources and makes it available to users once it has been integrated, aggregated and harmonized in a central location. This foundation is critical to providing the breadth of information that can help executives reap rewards such as faster time to market, cost savings, new efficiencies and greater customer understanding.

Connecting disparate data sources

The ecosystem across which data must be integrated includes the entire pharma enterprise as well as providers and payers. Not only must a large volume of data from disparate sources be integrated, but the data must also be aggregated and harmonized so that a wide range of users can understand it in the right context.

The traditional approach to integration is a point-to-point, script-driven process. As files travel from server to server or application to application, embedded code directs the movement of the information. This approach works well when data transfer is limited to a few localized systems and users, when data volume is manageable and when data requires little “translation” for each user to access.

In the recent past of IT development, it was common to hear of programs that performed “extract, transform and load” (ETL) functions, which worked well for the point-to-point applications just mentioned. Today, however, the combination of vastly larger datasets and new programming technology allows the user to “come to” the data, rather than the data coming to the user. You could call it “extract, load and transform” (ELT) instead. With Big Data, you extract and load the data into the repository (e.g., Hadoop or Cassandra) and then, once you know what questions you want to answer, you transform the data and perform the analysis. This means you no longer need to figure out all of the questions you are going to ask in advance of the load, which is a huge change.
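To make the ETL/ELT distinction concrete, the sketch below lands raw records in a staging table first and defers all transformation until a question is asked. It is a minimal illustration only: Python’s built-in sqlite3 module stands in for a Big Data repository such as Hadoop or Cassandra, and the records and field names are hypothetical.

```python
import json
import sqlite3

# Stand-in for a Big Data repository (Hadoop, Cassandra, etc.).
conn = sqlite3.connect(":memory:")

# EXTRACT + LOAD: land the raw records as-is. No schema design or
# transformation yet, because the questions are not yet known.
conn.execute("CREATE TABLE staging_raw (record TEXT)")
raw_feed = [  # hypothetical point-of-sale records
    {"product": "DrugA", "region": "EU", "units": 120},
    {"product": "DrugA", "region": "US", "units": 340},
    {"product": "DrugB", "region": "US", "units": 75},
]
conn.executemany(
    "INSERT INTO staging_raw VALUES (?)",
    [(json.dumps(r),) for r in raw_feed],
)

# TRANSFORM (later, once a question arrives): shape the raw records
# to answer "units sold per region" without re-running the load.
totals = {}
for (record,) in conn.execute("SELECT record FROM staging_raw"):
    row = json.loads(record)
    totals[row["region"]] = totals.get(row["region"], 0) + row["units"]
print(totals)  # {'EU': 120, 'US': 415}
```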

Effectively managing Big Data, however, requires more of a “hub and spoke” approach, in which data is aggregated in a central repository, whether structured or unstructured, and different rules can then be applied to empower the various parts of the organization, the “spokes.” This centralized approach enables access to information across the enterprise, no matter where the data originated. An additional benefit of this model is less risk of broken scripts that must be tracked down and repaired to keep automated data gathering running.
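A rough sketch of the hub-and-spoke idea, in Python: one central store of harmonized records, with a different rule set applied for each “spoke” at read time. The spoke names and rules here are hypothetical, chosen only to show how the hub stays unchanged while each part of the organization sees the view it needs.

```python
from typing import Callable, Dict, List

# The hub: one central, aggregated store of harmonized records.
hub: List[dict] = [
    {"product": "DrugA", "region": "US", "units": 340, "patient_id": "P-001"},
    {"product": "DrugB", "region": "EU", "units": 75, "patient_id": "P-002"},
]

# Each spoke registers its own rules; the hub itself never changes.
spoke_rules: Dict[str, Callable[[dict], dict]] = {
    # Marketing sees demand data but no patient identifiers.
    "marketing": lambda r: {k: v for k, v in r.items() if k != "patient_id"},
    # Manufacturing needs only product and volume.
    "manufacturing": lambda r: {"product": r["product"], "units": r["units"]},
}

def read_for(spoke: str) -> List[dict]:
    """Serve the central data through the requesting spoke's rule set."""
    rule = spoke_rules[spoke]
    return [rule(record) for record in hub]

print(read_for("marketing"))
print(read_for("manufacturing"))
```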

Moving to this model ensures that data integration, aggregation and harmonization take place in a much more strategic manner. Manual steps related to coding rules for point-to-point transfers are removed in favor of a higher-level operational view. Access to data throughout the enterprise, and among partners, also optimizes the use of analytics applications, which are often limited by the volume and format of the data available. When access to Big Data is automated, staff time is spent more effectively: analysts can focus on interpreting the data rather than overseeing its acquisition.

Although it makes sense to redesign a company’s information technology infrastructure so that Big Data can be turned into quality business intelligence, a fiscally sound approach is necessary. The most financially feasible option is a cloud-based solution that serves as the platform to integrate systems and to aggregate and harmonize the data for the appropriate applications. A cloud-based system also provides flexibility: analysis can take place in the cloud, or portions of the data can be compartmentalized and packaged for use in different parts of the enterprise. These options not only ensure access to the right data for specific applications but also let the pharma organization determine how, and by whom, data will be used. This is especially critical in research and development, where proprietary information and patient information demand additional levels of security.

Benefits of integration

When all silos of information are integrated and accessible throughout the organization, data that was previously unavailable to pharma can be analyzed alongside traditional data. Social media is one example of new data that enhances point-of-sale data. With social media, pharma marketers and researchers now have a tool that measures not only what is purchased but also what consumers think about the product. Traditional focus groups and reviews of sales data have provided some insight into consumer reactions, but social media provides real-time, product-specific feedback that enables a company to reevaluate a product, make marketing expenditure decisions or even adjust pricing if needed.

Social media also informs the research function by providing an overview of consumer preferences and reactions to existing products. Research and development further benefits from integrated Big Data in a number of ways, including enabling many researchers, rather than a few, to collaborate at the same time. This expanded collaboration can help fill clinical trials faster and document outcomes more quickly, which means products go to market in a shorter time frame.

Moving products through development and manufacturing in a timely manner contributes to the financial success of a pharmaceutical organization, but integration and analysis of Big Data open two other financial opportunities for pharma. Typically, diagnoses become visible to pharma only through payer data; access to patient information, along with historical sales data, before claims are submitted enables pharma to anticipate needs more quickly and ensure the right medication is available at the time it is needed. Not only does this streamline manufacturing and distribution processes, but inventory costs can also be better managed with stronger insight into demand.

Although healthcare providers have been the more eager adopters of Big Data for monitoring patient compliance with medication therapy, pharma can also benefit from access to adherence data. For example, a patient with a heart condition may fill the first prescription but neglect to fill subsequent ones. An alert from the patient’s pharmacy to the physician prompts a follow-up conversation to determine the reason for the missed refills, to provide additional education and to connect the patient with resources that may be needed to purchase the medication. Not only does this process result in a better patient outcome and a lower cost of care for a collaborative care organization, but pharma benefits from more predictable product sales.

Another advantage of centrally integrated data for the manufacturing and logistics facets of the pharma supply chain is the ability to obtain global point-of-sale information more quickly, improving the efficiency of the manufacturing process. Because different countries or regions mandate slightly different formulas for the medications sold in their markets, the ability to see how demand in one region changes relative to others allows pharma companies to better manage drug production, in terms of both formula “recipe” and volume, creating cost efficiencies in manufacturing and distribution.
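As a simple illustration of comparing demand across regions, the sketch below computes a week-over-week change in units per region from a hypothetical point-of-sale feed; in practice, a signal like this would drive production volume and formula selection.

```python
from collections import defaultdict

# Hypothetical weekly point-of-sale totals pulled from the central hub.
pos = [
    {"region": "US", "week": 1, "units": 300},
    {"region": "US", "week": 2, "units": 390},
    {"region": "EU", "week": 1, "units": 210},
    {"region": "EU", "week": 2, "units": 189},
]

# Collect units per region in week order.
by_region = defaultdict(list)
for row in sorted(pos, key=lambda r: r["week"]):
    by_region[row["region"]].append(row["units"])

# Week-over-week demand change per region guides production planning.
for region, units in by_region.items():
    change = (units[-1] - units[0]) / units[0]
    print(f"{region}: {change:+.0%}")  # US: +30%, EU: -10%
```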

Perhaps the business units that benefit the most from the ability to aggregate and analyze multiple layers of data are marketing and sales. Access to information not previously available, such as more direct consumer preference and adherence data, provides a more robust interpretation of trends to better identify opportunities to evaluate and improve sales.

Integration expertise required

One of the major concerns associated with integration of data from different proprietors’ datasets is delineating who owns the data. With federally mandated requirements for data privacy and security, data ownership is an evolving discussion with different opinions on the use of protected health information—identified and de-identified.

As the conversation about data ownership continues, no organization can ignore the need for multiple layers of security, including encryption and well-defined access controls. This is not to say access should be limited—that negates the concept of having access to more data—but appropriate access must be defined. In fact, a central integration platform that applies security protocols at a global level is an effective and fiscally responsible way to protect data from outside threats as well as unauthorized internal access. Partnering with a cloud-based provider ensures access to staff expertise that is difficult to maintain internally. Not only are security protocols continually updated but all processes required for seamless interoperability among all users are maintained on an ongoing basis to avoid use of obsolete solutions that may affect the integrity of data analysis.
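One way to picture security protocols applied “at a global level” is a single access-control layer that every request must pass through before reaching the central repository. The sketch below is illustrative only; the roles, dataset names and policy are hypothetical.

```python
# Hypothetical global access policy, defined once at the platform level
# and enforced for every request, internal or external.
POLICY = {
    "researcher": {"clinical", "research"},
    "marketing_analyst": {"sales", "social"},
}

def fetch(role: str, dataset: str) -> str:
    """Single enforcement point in front of the central repository."""
    if dataset not in POLICY.get(role, set()):
        raise PermissionError(f"{role} may not read {dataset}")
    return f"records from {dataset}"  # placeholder for actual retrieval

print(fetch("researcher", "clinical"))     # allowed
# fetch("marketing_analyst", "clinical")   # raises PermissionError
```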

Developing and maintaining a platform that integrates and aggregates Big Data is not pharma’s core expertise; analyzing the information to produce actionable insight is. Choosing an expert to integrate and harmonize data is similar to choosing between changing your car’s oil yourself and taking the car to an oil change center. Because changing oil is all the center’s staff does, they can perform the task better, faster and less expensively than you might once the cost of your time and supplies is included.

The technological transformation of the healthcare industry has contributed to the dramatic increase in data available to the pharmaceutical industry. The industry recognizes the need to break down information silos and enable enterprise-wide access, but what impact can Big Data have if there is no way to assemble and condition structured and unstructured information into actionable insight? Adopting a cloud-based, centralized integration platform provides a way to manage Big Data, produce more robust analysis of business trends and opportunities, and support business decisions that ensure long-term financial and operational success.

ABOUT THE AUTHOR

Gary Palgon is the vice president of healthcare solutions for Liaison Technologies, which provides healthcare organizations with innovative solutions to complex integration and data management needs. He can be reached at [email protected] or on Twitter at @GaryPalgon.
