I recently finished reading an excellent white paper by David Stokes on compliance and cloud computing (1). In it he gives a clear outline of what the cloud is (and is not), using the National Institute of Standards and Technology (NIST) framework (2). The paper also highlights what is new and what is not about cloud computing, and why the pharmaceutical and allied industries are concerned about getting our heads in the cloud.
What is it then?
At its most basic, the cloud is just about using someone else’s computing resources. That’s not really very new – we have had outsourced IT, even in the risk-averse pharmaceutical industry, for some time. There is also the notion that resources are pooled between consumers of those resources. Again, conceptually or technically, that is not unique to cloud solutions – large corporate IT infrastructures already use virtual servers, for example. Other NIST characteristics seem even more like everyday business: the ability to access resources from a variety of clients (laptops, phones, workstations etc.), the measurement of network/resource usage and performance, and self-service allocation of resources. What is strikingly different in the NIST definition is “rapid elasticity” – the ability to seamlessly acquire and release resources as needed, such that to the consumer the computing power can seem infinite. That characteristic points towards specialized providers and their leverage of economies of scale.
So what’s the fuss about?
Glass half-full, cloud computing is about reduced IT infrastructure costs for companies alongside increased reliability and resources. It is also about the connected world: the constant background interaction between systems and programs that we take for granted may not be necessitated by cloud computing, but it is certainly facilitated by it. Glass half-empty, adopting the cloud is about losing, or at least ceding, control.
I’m a control freak
David Stokes’ second objective (after defining the cloud) is to emphasize the importance of the risks inherent to the cloud. And that’s about control. As is often the case in our regulated industry, the conversation is couched in terms of compliance. This, to me, puts the cart before the horse. It is straightforward for a company to put in place all the measures necessary (e.g. QA, procedural and technical controls, training) to appear compliant with regulatory expectations – we have a recipe for that. However, that does not mean the system of controls is actually effective in producing the results intended by those expectations. You can appear compliant without being effective (i.e. without meeting all business requirements). On the other hand, if you meet all business requirements you will, as part of that effort, be compliant, because regulatory expectations are one element of the business requirements and because many regulatory expectations are simply common-sense good business practice (such as the security elements of 21 CFR Part 11). Whilst the conversation is framed as one of compliance, in reality David’s point is broader: it clearly makes the case for a cloud strategy (like any business strategy) to consider all stakeholders, and not to allow key purchasing decisions to be driven solely by headline cost.
I don’t want my data playing with your data
David also points to using a risk management methodology to help develop your company’s approach to cloud computing, considering risks from various stakeholder perspectives and the corresponding mitigation strategies. That brought me back to the control question. Let us say I employ Infrastructure as a Service (IaaS). As the cloud is defined by NIST, I am using an environment where my data could be anywhere within the physical infrastructure. My critical and confidential efficacy data could be sitting on the same physical server as Facebook’s user profiles. I cannot expect to control the qualification and change control around hardware, for example, because I don’t own the servers and I am only one of many customers using them. By the very nature of the ‘elasticity’ characteristic, where I get my computing and storage resources from cannot be easily controlled. (Note: there are ways to modify the concept of the cloud to mitigate this, but more on that further down.) So what can we do when we do not directly control? There is of course some precedent. There are many examples of pharmaceutical companies and CROs using Software as a Service (SaaS) platforms over the years. Take Interactive Response Technology (IRT) systems, which don’t even really fulfil the SaaS model: they are typically set up so that the provider owns the applications and servers, configures and manages the project applications, and virtually no direct IT control is ceded to the customer. However, those companies exist to service the regulated industry, so we can audit them and require them to meet our standards. That doesn’t work so well in the world of cloud computing, where perhaps only a tiny part of the provider’s business is with regulated companies, and thus the need to meet those expectations could seem more trouble than it is worth. The David and Goliath image is heightened for a small life sciences company.
How does a 50-person company force an audit at Amazon, Oracle, or Microsoft? (Hint: you don’t – stop tilting at windmills, because they don’t and won’t let you.)
Behind the Great and Powerful Oz’s curtain
Medidata is a good example of how the growth in outsourced cloud is changing things in our industry. In the past you might have put their EDC system on your own servers in house. They might have hosted their IRT system in house. But now, for example, if you engage those systems or their CTMS as SaaS, Medidata is itself employing Amazon to provide IaaS (3).
My head hurts, give me some solutions
David Stokes points to one model that may show the way forward – the virtual private cloud. You gain the advantages of outsourced computing resources, but you have (logical) walls around your data and applications. For example, see Pfizer’s application of a hybrid model, where they dip into their virtual private cloud when their own server capacity doesn’t meet demand (4). Assuming you are not a behemoth company with the financial resources to partner with a behemoth cloud provider to build a compliant cloud solution, another option is to look for providers that have gone through exercises to show how they can meet your standards (similar to how many existing SaaS providers in the industry have 21 CFR Part 11 white papers available). There are cloud providers that have specialized in the regulated industry, such as Validated Cloud (www.validatedcloud.com). Aris Global, already a leading provider of technology solutions, has diversified into providing a compliant solution, AGCloud (http://arisglobal.com/regulated-cloud/agcloud/). Another interesting development is specialized validation/compliance companies providing validation solutions for a major industry player’s cloud services, such as Montrium’s approach with Microsoft Azure (http://www.montrium.com/montrium-cloud-0).
Let’s not emulate Canute
Whatever the solution, the move towards computing resources being centralized and owned by specialized companies, with the consequent benefits of scale and reliability, seems inevitable to me. Going back to David Stokes’ white paper, some of his solutions are reasonable enough but are only feasible for large companies with deep pockets (such as building your own cloud in house). Yet the 21st century is being defined as the era where data, and the effective mining of those data, are a necessary core competency for companies of all sizes (and in all industries) to compete. The pooling of resources to create enormous computing power is, after all, not new, and not limited to cloud computing. It is at the heart of peer-to-peer protocols such as BitTorrent – and, going back further, to when two guys in my lab gave up their PCs every night to aid the search for extraterrestrial intelligence, which, by the way, you can still sign up for.