A decade later, “vendor lock-in” remains a hurdle that the technical sales rep has to overcome. But the landscape has changed significantly: the lock-in argument has shifted to the Cloud, and I often hear it raised in the context of Amazon Web Services. AWS competitors use the lock-in factor to create FUD among customers in the hope of improving their sales prospects. But how much of this is a real concern? Should businesses really be worried about getting locked into one platform?
In my opinion, every technology decision involves some level of lock-in. My Hyundai car came with a built-in MP3 player that makes it difficult for me to upgrade to a better audio player. For a long time, I lived with an HTC phone whose proprietary headphone socket prevented me from plugging in my powerful Bose headset. All of us tolerate a certain level of lock-in across the devices and technologies we use.
Vendor lock-in can be avoided to a large extent in the IT world, but it comes down to a trade-off between performance and portability. Consider Microsoft SQL Server as an example. Like most enterprise databases, SQL Server ships with many built-in optimizations that can be exploited for better performance. To leverage them, one has to encapsulate the business logic in stored procedures written in T-SQL, the SQL dialect native to SQL Server. This gives database developers and DBAs a chance to extract the best possible performance from SQL Server. So, what’s the trade-off? It becomes extremely hard to move the data tier to Oracle or DB2 at a later point. Migrating the schema may be relatively easy, but not the business logic and the extended stored procedures written to take advantage of native SQL Server capabilities. The developers could have chosen not to exploit these features and instead complied with the ANSI SQL standard that is common across relational databases, by moving the business logic to the client and using the database only for persistence. But that would seriously limit the performance of the application. So, it is clearly performance or portability!
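To make the trade-off concrete, here is a minimal sketch using Python’s DB-API with an in-memory SQLite database standing in for the portable path: the business logic (a discount rule) lives in the client and only ANSI-style SQL reaches the database. The vendor-specific alternative is shown in comments and assumes a hypothetical T-SQL stored procedure named `dbo.CalcOrderTotal`; the table and column names are illustrative, not from the original article.

```python
import sqlite3

# Portable path: business logic lives in the client, and only plain SQL
# is sent to the database. The same function runs unchanged against any
# engine reachable through a DB-API driver.
def order_total_portable(conn, order_id):
    cur = conn.execute(
        "SELECT quantity, unit_price FROM order_items WHERE order_id = ?",
        (order_id,),
    )
    # Discount rule implemented in application code, not in the database.
    total = sum(qty * price for qty, price in cur.fetchall())
    return total * 0.9 if total > 100 else total

# Vendor-specific path (illustrative only, assuming a hypothetical T-SQL
# stored procedure deployed on SQL Server):
#
#   cursor.execute("EXEC dbo.CalcOrderTotal @OrderId = ?", order_id)
#
# Here the discount logic would sit inside the procedure, close to the
# data and the query optimizer, but the code no longer ports to Oracle
# or DB2 without rewriting the procedure in their dialects.

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE order_items (order_id INTEGER, quantity INTEGER, unit_price REAL)"
)
conn.executemany(
    "INSERT INTO order_items VALUES (?, ?, ?)",
    [(1, 2, 30.0), (1, 1, 60.0)],  # total 120.0, so the discount applies
)
print(order_total_portable(conn, 1))  # prints 108.0
```

The portable version pays for its portability: every row crosses the wire to the client, whereas the stored procedure would compute the total next to the data.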
AWS offers most of the services required to build Internet-scale applications. Static content can be off-loaded to Amazon S3, reducing the burden on the web server, and further cached across edge locations by leveraging Amazon CloudFront. Session management can be moved to Amazon ElastiCache to scale the web tier better. Data that is written once but read often can be persisted to Amazon DynamoDB. All these services are exposed through REST APIs that are proprietary to AWS. The moment you start consuming these APIs in your application, it becomes extremely hard to move to another Cloud platform. The fact is that most of these services have alternatives that will not lock you into AWS. For example, you can set up a Memcached server to manage in-memory objects and session data in place of Amazon ElastiCache. You can set up and configure a MongoDB or Cassandra cluster to handle non-transactional data without depending on Amazon DynamoDB. You can deploy HAProxy instead of relying on AWS Elastic Load Balancing. That makes your application independent of the underlying Cloud infrastructure, and you can then move the whole deployment, which may consist of a couple of dozen VMs, to another Cloud. But remember that you are taking control of the infrastructure and will need to handle failover and high availability all by yourself.
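One common way to keep that independence is to put a thin interface between the application and whichever store it uses, so the web tier never calls a vendor API directly. A minimal sketch in Python, assuming hypothetical names (`SessionStore`, `InMemoryStore`, `save_session`); the Memcached-backed variant in the comments assumes the third-party `pymemcache` client and is not required to run the example:

```python
from typing import Optional, Protocol

class SessionStore(Protocol):
    """The narrow interface the web tier depends on, instead of a vendor API."""
    def set(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> Optional[str]: ...

class InMemoryStore:
    """Portable fallback: a plain dict, handy for tests and single-node setups."""
    def __init__(self) -> None:
        self._data = {}

    def set(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

# A Memcached-backed implementation would satisfy the same interface.
# Since Amazon ElastiCache speaks the Memcached protocol, the same class
# could point at either a self-managed Memcached server or the AWS
# service just by changing the host name (hypothetical sketch, assumes
# the 'pymemcache' package):
#
# class MemcachedStore:
#     def __init__(self, host: str, port: int = 11211):
#         from pymemcache.client.base import Client
#         self._client = Client((host, port))
#     def set(self, key, value): self._client.set(key, value)
#     def get(self, key):
#         raw = self._client.get(key)
#         return raw.decode() if raw is not None else None

def save_session(store: SessionStore, session_id: str, payload: str) -> None:
    # Application code sees only the interface, never the vendor SDK.
    store.set(f"session:{session_id}", payload)

store = InMemoryStore()
save_session(store, "abc123", '{"user": "jane"}')
print(store.get("session:abc123"))  # prints {"user": "jane"}
```

The cost of this indirection is exactly the point of the article: the abstraction can only expose features common to all backends, so anything DynamoDB- or ElastiCache-specific is off the table.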
The implications are very similar to the tight database integration we discussed earlier. If you need performance and productivity, you will have to come to terms with the proprietary features of the platform. If you want to avoid vendor lock-in and insist on portability, you have to sacrifice the power and performance the platform offers.
So, is it performance or portability? You decide!
- Janakiram MSV, Chief Editor, CloudStory.in