API versioning scheme for cloud REST services

Traditional approach

Traditionally it is considered good practice to version a REST API in the path, like “/api/v1/xxx”, and when you transition to the next version of your API you introduce “/api/v2/xxx”. This works well for traditional web application deployments where an HTTP reverse proxy / load balancer sits in front. In such a setup the application-level load balancer can also act as an HTTP router, and you can configure where requests go: for example, “v1” requests go to one rack, “v2” to another, and static files to a third.
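
To illustrate, here is a minimal sketch (in Python, with hypothetical backend addresses) of the routing rule such a front end expresses – requests are dispatched to a server pool by path prefix:

    # Minimal sketch of path-based HTTP routing, the way an application-level
    # load balancer / reverse proxy would do it. Pool addresses are hypothetical.
    BACKEND_POOLS = {
        "/api/v1/": ["10.0.1.10", "10.0.1.11"],   # rack serving v1
        "/api/v2/": ["10.0.2.10", "10.0.2.11"],   # rack serving v2
        "/static/": ["10.0.3.10"],                # static file servers
    }

    def pick_backend_pool(path):
        """Return the pool of servers that should handle this request path."""
        for prefix, pool in BACKEND_POOLS.items():
            if path.startswith(prefix):
                return pool
        return BACKEND_POOLS["/static/"]  # arbitrary fallback for the sketch

    print(pick_backend_pool("/api/v2/users/42"))  # ['10.0.2.10', '10.0.2.11']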

Cloud services tend to simplify things, and they often don’t have path-based HTTP routing capabilities. For example, Microsoft Azure load-balances at the TCP level, so any request can hit any VM, and your application must be prepared to handle every type of request on each VM instance. This introduces a lot of issues:

  • No isolation. If you have an exploitable bug in the ‘v1’ code, all your instances are vulnerable;
  • Development limitations. You need to maintain the whole code base, making sure ‘v1’ and ‘v2’ live together nicely. Consider the situation where some dependency is used in both ‘v1’ and ‘v2’ but each requires a different version of it;
  • Technology lock-in. It is either impossible or very hard to have ‘v1’ in C# and ‘v2’ in Python;
  • Deployment hurdle. You need to update every VM in your cluster on each release;
  • Capacity planning and monitoring. It is hard to understand how many resources are consumed by ‘v1’ calls vs. ‘v2’ calls.

Overall this is very risky, and it can eventually become completely unmanageable.

Cloud aware approach

To overcome these and probably other difficulties, I suggest separating APIs at the domain name level: e.g. “v1.api.mysite.com”, “v2.api.mysite.com”. It is then fairly easy to set up DNS mapping from these names to the particular cloud clusters handling the requests. This way your clusters for “v1” and “v2” are completely independent in deployment and monitoring; they can even run on different platforms – JS, Python, .NET – or even in different clouds – Amazon and Azure, for example. You can scale them independently: scale the v1 cluster down as its load decreases, and scale the v2 cluster up as its load increases. You can deploy “v2” into multiple locations and enable geo load balancing while still keeping the legacy “v1” setup compact and cheap. You can have “v1” in house and “v2” hosted in the cloud. Overall such a setup is much more flexible and elastic, and easier to manage and maintain.
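
From the client’s point of view the scheme is trivial – the version goes into the host name and DNS does the routing. A rough sketch in Python (the host names are the hypothetical ones used above):

    # Sketch: each API version lives under its own host name, and DNS maps
    # that name to an independent cluster.
    def api_url(version, resource):
        """Build the request URL for a given API version ("v1", "v2", ...)."""
        return "https://{0}.api.mysite.com/{1}".format(version, resource)

    # "v1" and "v2" resolve, via DNS, to completely separate clusters:
    print(api_url("v1", "users/42"))   # https://v1.api.mysite.com/users/42
    print(api_url("v2", "users/42"))   # https://v2.api.mysite.com/users/42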

Of course this technique applies not only to HTTP REST API services but to other kinds of services as well – plain HTTP content, RTMP, etc.

Brief reference on cloud storage

This is a very brief and shallow comparison of the data models and partitioning principles of Amazon S3 and Azure Storage. Please also see my feature comparison post on various storage platforms: https://timanovsky.wordpress.com/2012/10/26/comparison-of-cloud-storage-services/
Amazon S3
Getting the most out of Amazon S3: http://aws.typepad.com/aws/2012/03/amazon-s3-performance-tips-tricks-seattle-hiring-event.html
Their storage directory is lexicographically sorted, and the leftmost characters of the key are used as the partition key. It is not stated explicitly, but it looks like you need to keep your prefix tree balanced for partition balancing to work optimally. I.e., if you prefix keys with 0-9A-F as suggested in the article, the number of requests going to each of the 16 prefixes must be roughly the same. Underneath, this might mean that the key space is always partitioned evenly – split into a fixed number of equal key ranges. That is purely my speculation, but otherwise I cannot explain why such prefixes would matter.
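
A common way to get such a balanced prefix distribution is to prepend a short hash of the key, roughly like this (a Python sketch; the key names are made up):

    # Sketch: spread S3 object keys evenly across hex prefixes so that no single
    # key range (partition) takes all the traffic.
    import hashlib

    def prefixed_key(original_key, prefix_len=2):
        """Prepend the first hex digits of an MD5 hash to the object key."""
        digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
        return digest[:prefix_len] + "/" + original_key

    # Something like "3f/2012/10/26/report.csv" instead of a purely
    # date-ordered key that would pile up in one key range.
    print(prefixed_key("2012/10/26/report.csv"))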

Microsoft Azure Storage
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/05/10/windows-azure-storage-abstractions-and-their-scalability-targets.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/12/30/windows-azure-storage-architecture-overview.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2011/11/20/windows-azure-storage-a-highly-available-cloud-storage-service-with-strong-consistency.aspx
Having glanced over the MS docs, I am under the impression that Azure Storage can split key ranges dynamically, based on load and size.
Update: The following quote shows that Azure is similar to S3, and I was wrong:

A downside of range partitioning is scaling out access to sequential access patterns. For example, if a customer is writing all of their data to the very end of a table’s key range (e.g., insert key 2011-06-30:12:00:00, then key 2011-06-30:12:00:02, then key 2011-06-30:12:00:10), all of the writes go to the very last RangePartition in the customer’s table. This pattern does not take advantage of the partitioning and load balancing our system provides. In contrast, if the customer distributes their writes across a large number of PartitionNames, the system can quickly split the table into multiple RangePartitions and spread them across different servers to allow performance to scale linearly with load (as shown in Figure 6). To address this sequential access pattern for RangePartitions, a customer can always use hashing or bucketing for the PartitionName, which avoids the above sequential access pattern issue.
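
In practice, that bucketing advice amounts to something like the following (a Python sketch; the entity names and bucket count are hypothetical, not from the paper):

    # Sketch: avoid an append-only PartitionKey (e.g. a plain timestamp) by
    # prefixing it with a hash bucket, so writes fan out over many RangePartitions.
    import hashlib

    NUM_BUCKETS = 16

    def partition_key(device_id, timestamp):
        """Prefix the natural key with a hash bucket so sequential writes spread out."""
        bucket = int(hashlib.md5(device_id.encode("utf-8")).hexdigest(), 16) % NUM_BUCKETS
        return "{0:02d}-{1}".format(bucket, timestamp)

    print(partition_key("sensor-7", "2011-06-30:12:00:00"))  # e.g. "05-2011-06-30:12:00:00"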