
SOASTA's 10,000 Hours in the Cloud

Lessons Learned from SOASTA’s Experiences in Cloud Computing


“Practice makes perfect” is an adage heard by everyone from budding pianists to ambitious athletes. In his book Outliers, Malcolm Gladwell focuses on exceptional people who don’t fit into our normal definition of achievers. A common theme in explaining the successes of these individuals is the “10,000 Hour Rule.” Gladwell’s theory suggests that high performance is, in large part, the result of putting in an enormous amount of time, and that it takes about 10,000 hours to gain the experience necessary to achieve greatness.

The rule, according to Gladwell, applies to more than just individuals. He points out that the Beatles performed live in Hamburg, Germany over 1200 times between 1960 and 1964, amassing the hours needed to catapult the band to stardom. Clearly, it applies to companies as well. That’s not to say that talent, intelligence and good timing don’t play a role; but nothing can replace good old-fashioned experience.

Having now executed well over 10,000 hours of testing in the Cloud, SOASTA has experienced many successes and more than a few challenges. This article is intended to share our experiences to help others understand the opportunities and avoid the pitfalls associated with cloud computing. Gartner has positioned cloud computing as being at the “Peak of Inflated Expectations”, and the fall into the “Trough of Disillusionment” can be a steep one, if you don’t know what to expect. According to Gartner it will be two to five years before cloud computing’s mainstream adoption. So how do you get ahead of the curve?

SOASTA has provisioned over 250,000 servers, most in Amazon EC2. Our application, CloudTest™, is a load and performance testing solution designed to deliver real-time, actionable Performance Intelligence that builds confidence in the performance, reliability and scalability of websites and applications. It leaves the baggage of client-server alternatives behind and is extremely well suited to the cloud. In fact, dev/test has been acknowledged as a logical entry point for leveraging cloud computing due, in large part, to the variability of resource requirements. It also improves web application testing quality by making it viable to simulate web-scale user volumes from outside the firewall.

The question isn’t if cloud computing will become mainstream, but when and how? What issues need to be addressed, and how do you make the decision to move to the cloud?

What is the Cloud?

No, we’re not going to create yet another definition for cloud computing. While both vendors and users often spin the definition(s) to match their agendas or points of view, most tend to describe it in terms of services, typically in three “buckets”:

  • Software-as-a-Service
  • Platform-as-a-Service
  • Infrastructure-as-a-Service

Software-as-a-Service is characterized by providing application capabilities, such as backup, mail, CRM, billing or testing. Platform services, such as Force.com, Google App Engine, Microsoft Azure and Engine Yard, are designed to make it easier to build and deploy applications in a specific multi-tenant environment without having to worry about the underlying compute stack. Infrastructure services allow companies to take advantage of the compute and storage resources delivered by vendors such as Amazon, Rackspace and GoGrid, while providing a great deal of flexibility in how those resources are used.

According to a draft definition by the National Institute of Standards and Technology, which is heading up a government-wide cloud computing initiative, Infrastructure-as-a-Service is “the capability provided to the consumer to rent processing, storage, networks and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications and possibly select networking components (e.g., firewalls, load balancers).”

SOASTA provides a software-based service, centered on reliability, that tests and measures the software, platform and infrastructure of both cloud-based and on-premise applications. This article focuses primarily on infrastructure, since the bulk of SOASTA’s experience is as a consumer of cloud services.

The diagram below illustrates how SOASTA leverages the cloud to deploy its Global Test Cloud components for on-demand testing.

Public and Private Clouds
Internal clouds are usually about optimizing a private infrastructure, which is why they’re generally referred to as private clouds. Public clouds, by contrast, provide access to universally available resources. Today, public cloud vendors are offering hybrid alternatives that leverage their compute and storage resources yet require proper authorization for access. Extending an internal or managed infrastructure by renting virtualized resources on demand has become an increasingly viable option. As a result, the line between public and private clouds is starting to blur.

With a private cloud, IT has greater control over issues, such as security and performance, that are common concerns about shared resources in the public cloud. Instead of renting the infrastructure provided by others, companies use commercial or open-source products to build their own cloud. Of course, this means they also need to address challenges such as repurposing servers, choosing a virtualization platform, image management and provisioning (including self-service) and capturing data for budgeting and charge back.

Thus, even companies that build their own internal cloud infrastructure will identify applications well suited to deployment in the public cloud, including externally managed hybrid environments like Amazon’s Virtual Private Cloud. Most IT shops start with applications that take advantage of the benefits delivered by Infrastructure-as-a-Service: applications that are deployed for short periods of time, see increased use based on business cycles, need to scale on demand to respond to known or unanticipated traffic spikes, have limited privacy and security concerns, and are simple to deploy.

Deploying Applications in the Cloud
Generally, an organization will start with a low-risk, tactical application before leveraging the cloud for strategic, business-critical applications. Typically, these initial apps require little, if any, integration with other data or systems in the internal infrastructure (integration adds significant complexity and severely hinders the goal of quickly deploying a stack and uploading an application). We are also starting to see hybrid applications: apps that are deployed on-premise but respond to spikes by automatically provisioning instances of stressed components of the infrastructure into the public cloud.

SOASTA CloudTest’s first deployment environment was in Amazon EC2. The requirements for load and performance testing fit almost all of the characteristics above: a need measured in hours, not months or years; an often significant and varying amount of compute resources; and minimal security issues. EC2 was the first to provide a platform that dramatically changed the cost equation for compute resources and delivered an elastic API for speed of deployment.

For an application that depends on swiftly provisioning and releasing servers, this meant we had to quickly identify bad instances and bring up replacements. In addition, because EC2 was initially confined to a single location, it reduced some of the benefits of external testing. For companies deploying business-critical applications in the cloud, a single location might create concerns over failover and disaster recovery. Amazon Web Services has since added geographic options (regions and availability zones), providing very attractive options for failover.

Speaking of Challenges...
So what challenges might you face in moving to the cloud? Today, provisioning has become substantially easier and more reliable. Zombie instances still exist across all vendors, although the frequency with which they appear continues to decline. Still, if on-demand, quick deployment is important to your application, you need to account quickly for systems that are dead on arrival (DOA). Handling this can be built into your deployment methodology, or you can leverage the capabilities of third-party provisioning tools and services.
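Accounting for DOA systems in your deployment methodology can be as simple as a provision-probe-replace loop. Below is a minimal, provider-agnostic sketch; the `provision`, `is_healthy` and `terminate` callables are hypothetical stand-ins for your cloud API and post-boot health check, not any real SDK:

```python
def provision_healthy_pool(provision, is_healthy, terminate, count, max_rounds=3):
    """Provision `count` instances, discarding and replacing any that are DOA.

    The three callables are hypothetical stand-ins for your cloud API:
      provision()      -> returns a new instance id
      is_healthy(inst) -> post-boot probe (e.g., an SSH or HTTP check)
      terminate(inst)  -> tears down a bad instance
    """
    pool = []
    for _ in range(max_rounds):
        needed = count - len(pool)
        if needed == 0:
            return pool
        for inst in [provision() for _ in range(needed)]:
            if is_healthy(inst):
                pool.append(inst)
            else:
                terminate(inst)  # zombie: discard now, replace next round
    raise RuntimeError(f"only {len(pool)}/{count} healthy after {max_rounds} rounds")
```

Capping the number of rounds matters: a region that is out of capacity will otherwise keep you retrying forever instead of failing over to another region or provider.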

This issue has been further mitigated by automation and the packaging of applications. Companies such as RightScale, Appistry, 3Tera and others provide tools to package and deploy applications using relatively simple and intuitive user interfaces.

Pure infrastructure vendors provide tremendous flexibility but little application awareness. Others provide more platform-oriented capabilities and include value-added features such as monitoring, auto-scaling, pre-configured stacks and application containers.

Virtualization technology has allowed infrastructure vendors and enterprises to optimize their use of physical resources and is one of the fundamental building blocks that has distinguished the cloud from what was often referred to as “grid” or “utility” computing. The second key building block is the implementation of APIs, popularized by Amazon as elastic APIs. A prerequisite for today’s cloud infrastructure vendors, these interfaces enable the ability to dynamically start, stop, query the status of and manage the available resources programmatically and without the intervention of the infrastructure vendor.
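The elastic-API vocabulary described above is small: start resources, query their status, stop them, all programmatically. The toy in-memory client below illustrates that shape; the class and method names are made up for illustration and do not belong to any vendor's real SDK:

```python
import uuid

class ToyElasticClient:
    """In-memory illustration of the elastic-API pattern: every lifecycle
    operation is programmatic, with no human at the infrastructure vendor
    in the loop. Names are illustrative, not any real SDK's."""

    def __init__(self):
        self._instances = {}  # instance id -> state

    def run_instances(self, count=1):
        """Start: allocate `count` new instances and return their ids."""
        ids = [uuid.uuid4().hex[:8] for _ in range(count)]
        for i in ids:
            self._instances[i] = "running"
        return ids

    def describe_status(self, instance_id):
        """Query: report the current state of one instance."""
        return self._instances.get(instance_id, "not-found")

    def terminate_instances(self, instance_ids):
        """Stop: release instances back to the provider's pool."""
        for i in instance_ids:
            if i in self._instances:
                self._instances[i] = "terminated"
```

Real provider APIs follow this same start/query/stop outline, which is what makes it possible to script the entire lifecycle of a deployment.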

There’s much discussion about portability, lock-in and the need for an open API. A standard elastic API is probably years away, causing many of the added-value deployment vendors (such as those listed above) to do the work for you: build your app using their tools and you don’t have to worry about infrastructure vendor lock-in.

Of course, that raises the question of where lock-in occurs. You’re never really locked in if you’re willing to do the work to change. That could mean changing your processes for deployment, writing to a new API, repackaging your application or changing the stack. There will always be choices to make, and to minimize the impact of lock-in, you need to determine which layer is going to be easiest to change should you need to switch vendors. Ultimately, the only way vendors should lock you in is by providing superior service.

When working with public cloud vendors, planning certainly helps in meeting large-scale resource requirements. In SOASTA’s case, one engagement may require only two or three servers, while another may need hundreds, or even thousands, of server cores. We often schedule our compute requirements a week or two in advance, which helps our infrastructure vendors respond when we’re provisioning hundreds of servers at a time. In most cases, you’ll know your resource requirements ahead of time. However, since on-demand access to a pool of resources is one of the benefits of the cloud, you’ll want to understand the limits of your vendor to respond to ad hoc requests.

One way to minimize this issue is to spread the risk among multiple vendors. With CloudTest’s distributed architecture, we have the capability to concurrently provision across multiple cloud providers to help mitigate availability issues. Commercial deployment technologies can help reduce the complexity of deploying components of an application to more than one provider if this is a design criterion.
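Concurrent provisioning across providers can be sketched with a thread pool: fire a request at every provider at once and keep whatever comes back, treating a provider that errors out as unavailable. The provider callables here are hypothetical stubs standing in for per-vendor SDK calls:

```python
from concurrent.futures import ThreadPoolExecutor

def provision_across_providers(providers, count_each, timeout=30):
    """Request a batch of servers from every provider concurrently.

    `providers` maps a provider name to a hypothetical callable that
    provisions `count_each` servers and returns their ids. A provider
    that raises is treated as unavailable and contributes an empty
    batch, so the others can make up the shortfall.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max(1, len(providers))) as pool:
        futures = {name: pool.submit(fn, count_each)
                   for name, fn in providers.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=timeout)
            except Exception:
                results[name] = []  # provider unavailable; rely on the rest
    return results
```

The caller can then inspect which batches came back short and top up from the providers that did respond.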

Another variable to consider is persistence of the application: for how long and how often does it need to be up and running? This has a particular impact on pricing. Is it only going to be in place for a few weeks, or is it a business-critical application that needs to be available 24/7 for an extended period of time?

Cloud vendors are trying to strike the balance between the traditional offerings of a managed service provider and a more flexible, scalable platform that is available and billed on demand. For example, Amazon now offers reserved instances, as well as other offerings, that are a blend of on-demand and monthly rental; while traditional hosting companies, like Rackspace and Savvis, race to provide fast, affordable and easy access to short-term and elastic compute and storage resources to complement their standard hosted offerings. Some, it should be noted, are much further along than others in providing a range of options.
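Persistence is what decides between these pricing models, and the arithmetic is simple enough to check on the back of an envelope: find the utilization at which a reserved instance beats pure on-demand. The rates in the example are illustrative, not any vendor's actual prices:

```python
def breakeven_hours(on_demand_rate, upfront_fee, reserved_rate, term_hours):
    """Hours of use per term at which a reserved instance becomes cheaper.

    on-demand cost = hours * on_demand_rate
    reserved cost  = upfront_fee + hours * reserved_rate
    They cross at hours = upfront_fee / (on_demand_rate - reserved_rate).
    """
    if reserved_rate >= on_demand_rate:
        return float("inf")  # the reservation never pays for itself
    hours = upfront_fee / (on_demand_rate - reserved_rate)
    return min(hours, term_hours)  # can't run more hours than the term has
```

With illustrative rates of $0.10/hr on demand versus a $227.50 upfront fee plus $0.03/hr reserved, the reservation pays off after roughly 3,250 hours of a one-year (8,760-hour) term, i.e., at about 37% utilization; an application that's only up a few weeks never gets close.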


Security
Security is the number one concern when it comes to the public cloud. Testing, particularly load and performance testing, rarely requires the use of or access to real customer data. Since our application is in the cloud for the express purpose of simulating external load against a web application, our experience has not often included the governance, compliance, data or other security concerns an organization might have when moving applications to the cloud. As a result, we won’t cover security in depth in this article.

There are organizations, such as the Cloud Security Alliance and many commercial and open-source companies that are working to address security concerns. Private and hybrid clouds, VPN access to cloud resources and industry-specific cloud initiatives are just some of the ways to minimize risk.

It’s worth noting that many organizations have become comfortable running their applications and storing their data at managed service providers because they comply with various regulations and auditing statements, such as SAS 70.

Cloud providers have also focused on securing the assets of their customers. For the most part, cloud and managed service providers have far more secure environments than many small and medium-sized businesses, which don’t always invest in basic security measures, much less comply with standards such as PCI DSS, the Payment Card Industry’s Data Security Standard, which is intended to protect customers’ personal information. As a result, for many organizations the cloud is actually a more secure environment than their own on-premises infrastructure.


Governance
The politics of, and competition for, resources and funding within an organization are always a challenge. Managing issues like compliance with corporate standards and protection of data requires careful attention in larger IT shops. In the past, the unavailability of hardware has been a limiting factor in the potential sprawl of application development. With the cloud, the ease of use and low cost described earlier make “skunk works” projects far more feasible. This means individual businesses or groups within the company can be nimble and responsive, but they can also increase fragmentation and management risk.

It’s not dissimilar to the challenges posed years ago by the proliferation of the PC, or of programs like Excel, which led to a wide variety of issues of their own. However, we’ve learned to manage those challenges because the value proposition is compelling enough to justify dealing with the governance concerns.


Performance
According to Forrester, 74% of application performance problems are reported by end users to a service/support desk rather than found by infrastructure management. If that’s true of applications using discrete resources under the control of the IT department, what happens when the infrastructure is in the cloud? In addition, many of today’s web applications are delivered in part by third parties (CDNs, web analytics, news feed providers and other aggregators) and are thus outside IT’s direct control.

On the subject of comparing cloud vendors, Alan Williamson, editor-in-chief of Cloud Computing Journal, wrote: “The first comparison point is performance. This is a common question asked at our bootcamps, and it is important to understand that you are never going to get the same level of throughput as you would on a bare-metal system. As it’s a virtualized world, you are only sharing resources. You may be lucky and find yourself the only process on a node and get excellent throughput. But it’s very rare. At the other extreme, you may be unlucky and find yourself continually competing for resources due to some ‘noisy neighbor’ that is hogging the CPU/IO. The problem is that you never know who your neighbors are.”

Spec-based comparisons are difficult as even machines that ostensibly have the same specifications may be using different processors, disk subsystems, etc. These differences are further obscured by how some cloud vendors refer to the options they provide. It can be like decoding the difference between venti, grande and tall if you’re not a Starbucks regular. The more specifics you can get from your provider, the better.

Ultimately, the best way to ensure good results is to understand and measure the factors that impact performance. Not all virtualized resources are created equal. While access to elastic resources is good for responding to peak requirements, simply scaling hardware does not solve underlying performance problems or the impact of a poorly designed application. It doesn’t tell you if you have too few load balancers, missing database indexes or improper settings in your firewall.
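Measuring rather than assuming can start small: collect latency samples from each candidate configuration and compare percentiles, which expose the noisy-neighbor variance that averages hide. A minimal sketch using nearest-rank percentiles, offered as a quick comparison aid rather than a real load-testing tool:

```python
def latency_summary(samples_ms):
    """Summarize latency samples (ms) with nearest-rank percentiles.

    Percentiles, unlike averages, expose the variance a "noisy neighbor"
    introduces. A quick comparison aid, not a substitute for a load test.
    """
    if not samples_ms:
        raise ValueError("no samples")
    s = sorted(samples_ms)

    def pct(p):
        # nearest-rank index into the sorted samples
        return s[int(round(p / 100 * (len(s) - 1)))]

    return {"p50": pct(50), "p95": pct(95), "max": s[-1]}
```

Two instance types with identical average latency can show very different p95 figures, and it's the p95 your users feel.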

Just as with infrastructure you own, test and measurement is the only way to confidently optimize your application’s performance in the cloud: an area where SOASTA can help.

Criteria for Selecting a Cloud Provider
The list of reasons to choose one vendor over another for any service is pretty consistent, and certainly true of cloud infrastructure vendors:

Range of Offerings
There are a surprising number of options beyond simply providing compute power and storage. As infrastructure becomes commoditized, vendors continue to add capabilities to differentiate their offerings. Look for things like a broad set of rental alternatives, load balancers and other network features, content distribution, management and control, geographic locations, VPN access, managed services, and storage options.

Quality of Service
Based on our experience, there’s not much of a gap between the major infrastructure vendors when it comes to quality of service as measured by uptime. The differences are often found in the transparency provided by the vendor and their customer support. For example, is it important for you to know the exact configuration of the image you’re buying? How much control do you have over deployment and management? Are there different levels of support (such as chat, phone and email) and access to support resources based on customer profile and willingness to pay?
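When weighing uptime claims, it helps to translate availability percentages into a concrete downtime budget. A small helper, assuming a 30-day billing month:

```python
def downtime_budget_minutes(availability_pct, period_hours=720):
    """Minutes of allowed downtime for a given availability percentage.

    Defaults to a 30-day (720-hour) month: 99.9% allows about 43 minutes
    of downtime, 99.99% about 4.3 minutes.
    """
    return period_hours * 60 * (1 - availability_pct / 100.0)
```

Framed this way, the gap between two vendors' SLA numbers becomes a concrete question: does an extra nine justify the price difference for your application?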

Simplicity
While closely related to the overall quality of service and range of offerings, simplicity deserves to stand on its own as a point of consideration. Whether you’re writing directly to APIs or using a vendor-provided management dashboard, ease of use is one of the reasons the cloud has become so popular.

Price
The other primary driver of the cloud’s popularity is price. As noted above, it can be a double-edged sword: since it’s so much easier and cheaper to get access to hardware, larger companies are struggling to contain the unchecked proliferation of projects. However, the fundamental motivation to move to the cloud is financial and, for the right applications, the savings in hardware, energy and management overhead are very real and substantial.

SOASTA’s Test Cloud is designed to leverage all of the major infrastructure providers. We have access to multiple locations around the world and can provision servers across geographies in minutes. We bring up hundreds of servers simultaneously and tear them down hours later. And, of course, we endeavor to do so at the lowest possible cost for our customers. While perhaps a bit unusual in its scope, we hope that sharing our 10,000+ hours in the cloud will help you when considering your unique situation.

More Stories By SOASTA Blog

The SOASTA platform enables digital business owners to gain unprecedented and continuous performance insights into their real user experience on mobile and web devices in real time and at scale.

