Thought Leaders in Cloud – Sebastian Stadil, Scalr
Thought Leaders in Cloud is a series that brings you insights from the most influential leaders in the Cloud computing world. We present an exclusive interview with Sebastian Stadil, Chairman of the Board at Scalr.
Please tell us about yourself
The story starts in 1999, when my poor shared hosting server was slashdotted. I vowed to one day fix the scaling problem for everyone. In 2007, I did so with Scalr!
Aside from founding Scalr, I sit on the Google Cloud Advisory Board, am a lecturer at Carnegie Mellon, and run Silicon Valley’s largest Cloud Computing user group.
How did Scalr happen?
Pretty much by luck. We started very small with a simple solution to a simple problem. But the community wanted more, and was willing to pay for it. So we created a company to support the project and here we are today!
With so many multi-Cloud management tools in the market, why should customers choose Scalr?
It’s the one Cloud management platform that IT and developers can agree on. I can’t stress this enough.
DevOps is the uncomfortable union of developers and IT operations, and tooling needs to cater to both categories of users. If IT ops chooses a tool that devs don’t want, they won’t get internal adoption, and hence will have nothing to control, so they won’t be able to fulfill their mission. Conversely, if developers choose a tool that doesn’t allow IT any oversight or control, IT won’t be able to fulfill their mission either. IT’s mission is important: they are in charge of identity and access controls, complying with regulatory and corporate obligations, and often optimizing performance and reducing costs.
That’s why choosing a CMP that both IT and developers can agree on is vital for DevOps in the Enterprise.
How do you see Cloud brokerage evolving? Can customers really choose a Cloud based on their workload requirements and specifications, rather than being forced to select what’s available?
I haven’t seen this happening, though I see IT loving the concept. The reality is that where you place a workload depends more on where you’ll get your job done faster (i.e., the Cloud you are familiar with) than on what’s best for the workload based on some elaborate assessment of cost-performance.
That’s because workload placement based on a cost-performance profile is hard: workload performance profiles evolve over time as they get tuned and usage patterns change; new workload requirements are largely unknown (will it grow? will the architecture change? will the new Cloud functionality we leverage affect its performance profile?); and the planning effort and risk of porting a production workload over to another home (migrating data, ensuring a no-downtime transition, finding and reconfiguring hardcoded variables like Elastic IPs) are too high.
This means workloads can’t and don’t move for pennies. The only criterion left to developers for workload placement is what will let them ship code to customers faster and give them the agility to adapt to changing conditions.
Why did you choose Chef instead of Puppet for Scalr’s configuration management?
We had a customer work with us to do it. We haven’t had anyone using Puppet push as hard, and haven’t been able to get Puppet Labs to help. I get asked that question weekly, so Puppet is clearly loved and growing. Maybe 2014 will see it come to Scalr?
Scalr has an open source edition. What are the key differences between the commercial and the open source build?
None. There’s a single codebase, and it’s all open source. Scalr Enterprise is just that codebase plus support, updates, and that fuzzy good feeling of contributing to the community!
Deploying a Hybrid Cloud requires granular control and customisation. How can Scalr help deploy Hybrid Clouds?
With the right architecture, this is easier than you’d expect. The trick is choosing which Clouds to support: building multi-Cloud abstractions means going for the highest common denominator of functionality, and choosing to support a Cloud that should really be called an “on-demand VPS” dramatically reduces that common denominator.
For very advanced Clouds like AWS, we provide extended options, which of course break portability. This gives users the choice to trade off portability to leverage AWS innovation.
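The pattern described above — a common abstraction covering every supported Cloud, plus opt-in provider-specific extensions — can be sketched roughly as follows. This is a minimal illustration, not Scalr’s actual code; all class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """The common denominator: operations every supported Cloud must offer."""

    @abstractmethod
    def launch_instance(self, image: str, size: str) -> str:
        """Start a VM and return its identifier."""

    @abstractmethod
    def terminate_instance(self, instance_id: str) -> None:
        """Stop and remove a VM."""


class HypotheticalAWSProvider(CloudProvider):
    """AWS-style implementation, plus extensions that break portability."""

    def launch_instance(self, image: str, size: str) -> str:
        # Stand-in for a real API call; returns a fake instance id.
        return f"i-{abs(hash((image, size))) % 10**8:08d}"

    def terminate_instance(self, instance_id: str) -> None:
        pass  # stand-in for a real API call

    def associate_elastic_ip(self, instance_id: str, ip: str) -> None:
        """Extended, AWS-only option (e.g. Elastic IPs). Code that calls
        this is no longer portable across providers."""


def deploy(provider: CloudProvider, image: str, size: str) -> str:
    """Portable deployment logic: touches only the common interface."""
    return provider.launch_instance(image, size)
```

Code written against `CloudProvider` runs on any supported Cloud; code that reaches for `associate_elastic_ip` knowingly trades that portability for provider-specific capability.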
I am sure you get asked this question a lot: why don’t you support Microsoft Windows Azure?
We’re just not seeing the demand. People seem content with Amazon, and increasingly with Google too.
Amazon is launching services that directly compete with its partners. How will the introduction of services like AWS OpsWorks impact Scalr?
To clarify, they don’t compete with their partners. They launch new services to serve their customers. This is fair game.
To answer the question specifically, we were initially concerned by OpsWorks. But with a year behind us and 135% revenue growth from 2012 to 2013, it clearly hasn’t slowed us down. See our blog post.
You published a few benchmarks that show GCE as the fastest IaaS platform. What impact will GCE have on the Cloud market?
The industry needs an AMD to Amazon’s Intel. I believe GCE has the datacenter expertise and technical know-how to become that contender. This will help large enterprises manage supplier risk, which has been a barrier to adoption. And when the enterprise moves to Cloud, the trillion-dollar IT market will come with it. IBM and HP know this, and are acting accordingly. More change will happen in the next 10 years than in the past 40.
What is the roadmap of Scalr? What can we expect from it in the near future?
We’re completely open source, and our roadmap is equally transparent. We’re building the CMP that developers want to use, and that IT can control. That means we’ll continue to build tools that make developers more productive through more expressive objects, and continue to provide IT with the oversight they need.