One of the top concerns about Serverless, especially among enterprise users, is the idea of vendor lock-in. Functions as a Service (FaaS) offerings from public cloud providers definitely carry a certain level of vendor lock-in. The key question is what the impact of this lock-in actually is. Two factors need to be discussed while addressing this topic.
- The cost of lock-in and whether it matters.
- Will a Multi-Cloud Serverless platform mitigate this lock-in risk?
In this post, let us take a look at these two factors and understand where the real lock-in risk lies.
How does FaaS lock you in?
When you use FaaS offerings like AWS Lambda or Azure Functions, you tie yourself tightly to their APIs and surrounding services. If you want to migrate, your application must be re-architected, because your functions are coupled to the provider's API Gateway, event sources, and so on. That said, IBM Cloud Functions and Azure Functions offer a little more flexibility than AWS Lambda or Google Cloud Functions. IBM Cloud Functions is based on the open source OpenWhisk project, and with some caution your applications can remain loosely coupled on top of it. Azure Functions can run on-premises, which gives you some flexibility to move applications around in hybrid cloud and edge computing scenarios. In short, if you are using native FaaS offerings from public cloud providers, be prepared for vendor lock-in.
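To make the coupling concrete, here is a minimal sketch contrasting the entry-point conventions of two platforms. The `greet` helper is hypothetical; the handler shapes follow the documented AWS Lambda (API Gateway proxy) and OpenWhisk Python conventions. Notice that the business logic is identical, but each provider imposes its own event and response envelope, and that envelope is what you rewrite when you migrate.

```python
import json

# Hypothetical business logic, identical on every provider.
def greet(name):
    return f"Hello, {name}!"

# AWS Lambda entry point: the event shape comes from the trigger
# (here, an API Gateway proxy integration), and the response must
# follow API Gateway's expected {statusCode, body} structure.
def lambda_handler(event, context):
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": greet(name)})}

# OpenWhisk / IBM Cloud Functions entry point: a plain dict in,
# a plain dict out -- a thinner, looser envelope around the same logic.
def main(params):
    return {"message": greet(params.get("name", "world"))}
```

The logic inside `greet` ports for free; it is the envelope code, plus the triggers and event sources configured outside the function, that locks you in.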
Does this lock-in matter?
The short answer is: it depends. As you move from a monolith to microservices, you shrink the "service perimeter" dramatically because each service has a narrow functional definition, and the perimeter shrinks further still as you bring functions into the mix. If you are an agile organization, you might find it more convenient to dispose of the functions in one cloud provider and build a new set in another than to migrate the original functions from one cloud to the other. Disposable applications, a term we can apply to both functions and microservices, make vendor lock-in less impactful. A useful metric in this context is whether the cost of building the functions from scratch is less than the cost of migrating them.
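The rebuild-versus-migrate metric above can be sketched as a one-line comparison. All figures here are hypothetical, chosen only to illustrate how narrowly scoped functions tilt the decision toward rebuilding:

```python
# Illustrative decision metric with hypothetical cost figures:
# is it cheaper to rebuild each narrowly scoped function on the
# target cloud than to migrate the existing set wholesale?
def rebuild_is_cheaper(num_functions, cost_per_rebuild, migration_cost):
    """True when rebuilding every function from scratch costs less
    than migrating the existing set of functions."""
    return num_functions * cost_per_rebuild < migration_cost

# Ten small functions at a (made-up) $800 each, versus a (made-up)
# $12,000 migration effort: the disposable-application path wins.
assert rebuild_is_cheaper(num_functions=10, cost_per_rebuild=800,
                          migration_cost=12_000)
```

The point is not the arithmetic but the framing: the smaller each function's perimeter, the lower the per-function rebuild cost, and the less the migration cost of lock-in matters.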
Multi-Cloud Serverless platforms and lock-in
To mitigate vendor lock-in risks, there are many multi-cloud serverless platforms, including OpenWhisk, Nuclio, and Kubeless. While these platforms make it easier to move applications around, data gravity will still be the predominant factor in deciding where the applications reside. If your data volume is low, having the ability to seamlessly port applications across cloud providers with multi-cloud abstractions makes sense. With petabytes of data, data gravity will be a big deterrent even with a multi-cloud abstraction layer. Application architecture and the underlying platform abstraction solve only part of the portability problem; the data architecture and data volume play an equally important role. I have already given an example of how data gravity can shape the choice of cloud provider.
Portable APIs like the Open Service Broker API help with portability by fixing some of the issues around data architectures and how applications access data. They still don't solve the data gravity issues associated with data volume. This is not FUD I am unleashing to make a case against portability; if anything, we are big proponents of portability as an underlying philosophical driving force in enterprise modernization. Just look at the investments cloud providers are making in data services. Google Cloud has focused on building powerful data services from its early days, and AWS and, now, Microsoft are also spending resources on data services. These cloud providers know that the only way to lock people in is through data gravity.
Is portability a pipedream?
Not necessarily. Data gravity makes portability difficult even with platform abstractions that make applications portable. The key point is this: don't restrict your portability considerations to the abstractions provided by application platforms. Think beyond applications to data architectures and data storage, including data replication strategies. Your portability journey should span different areas, from applications to data to even abstractions at the network level.
PS: I am not arguing that data gravity makes lock-in concerns meaningless. Rather, I am arguing that lock-in goes beyond application lock-in to data gravity issues.