Currently it is very difficult for a cloud consumer to tell the difference between a good and a poor-quality cloud provider. On-premises outages can drag on for hours behind the corporate curtain, yet people do not hesitate to demand SLAs and warranties from the cloud. Public cloud providers are more exposed because any outage becomes public information. If you read the SLAs offered by AWS EC2 or Rackspace Cloud, you will find it difficult to correlate the SLA with actual availability; to me, SLAs are more about legal brawling than a useful basis for choosing one vendor over another on technical merit. At least for the cloud, the current way of evaluating 'uptime' must be rethought. You may be better off looking at a company's track record than at its SLA.
Amazon is widely appreciated for openly disclosing how its SLAs work, but that same openness makes its SLAs susceptible to gaming.
From Hacking the Amazon S3 SLA:
| Strategy | Expected error rate | Minimum p to get a 25% refund | Minimum p to get a 10% refund |
|---|---|---|---|
| One request per 5-minute interval | p | 1% | 0.1% |
| Stop after 1 failure | p ln(1/p) / (1 − p) | 0.154% | 0.0110% |
| Stop when error rate > p | p ln(1/p) + 0.191p (approximately) | 0.149% | 0.0107% |
| Optimal strategy | p ln(1/p) + 0.292p (approximately) | 0.147% | 0.0106% |
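The "stop after 1 failure" row can be sanity-checked numerically: if each request fails independently with probability p and you stop an interval at the first failure, that interval's error rate is 1/k for k requests made, and the expectation works out to p ln(1/p) / (1 − p). Here is a minimal Monte Carlo sketch; the function names and the choice of p = 0.01 are my own illustrative assumptions, not from the original post.

```python
import math
import random

def analytic_error_rate(p: float) -> float:
    """Expected per-interval error rate for 'stop after 1 failure':
    p * ln(1/p) / (1 - p)."""
    return p * math.log(1 / p) / (1 - p)

def simulate_error_rate(p: float, intervals: int, seed: int = 42) -> float:
    """Monte Carlo estimate: in each interval, issue requests (each
    failing independently with probability p) and stop at the first
    failure; the interval's error rate is 1/k for k requests made."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(intervals):
        k = 1
        while rng.random() >= p:  # request succeeded; keep going
            k += 1
        total += 1.0 / k  # exactly one error out of k requests
    return total / intervals

p = 0.01  # assumed per-request failure probability (hypothetical)
print(analytic_error_rate(p))        # ~0.0465, i.e. a 4.65% measured error rate
print(simulate_error_rate(p, 50_000))
```

Note how a true per-request failure rate of only 1% is inflated to a measured error rate of roughly 4.65% just by choosing when to stop sampling, which is exactly why the refund thresholds in the table drop so sharply.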
A 2008 study by the Radicati Group on on-premise email solutions presents some very interesting data.
Image source: The Downtime Dilemma: Reliability in the Cloud
If you ask me, “If I move services out of my data centre, how will I guarantee availability and performance?”, my answer would be, “How do you currently measure that?”