


SOASTA's 10,000 Hours in the Cloud

Lessons Learned from SOASTA’s Experiences in Cloud Computing

SOASTA at Cloud Expo

“Practice makes perfect” is an adage heard by everyone from budding pianists to ambitious athletes. In his book Outliers, Malcolm Gladwell focuses on exceptional people who don’t fit into our normal definition of achievers. A common theme in explaining the successes of these individuals is the “10,000 Hour Rule.” Gladwell’s theory suggests that high performance is, in large part, the result of putting in an enormous amount of time, and that it takes about 10,000 hours to gain the experience necessary to achieve greatness.

The rule, according to Gladwell, applies to more than just individuals. He points out that the Beatles performed live in Hamburg, Germany, over 1,200 times between 1960 and 1964, amassing the hours needed to catapult the band to stardom. Clearly, it applies to companies as well. That’s not to say that talent, intelligence and good timing don’t play a role; but nothing can replace good old-fashioned experience.

Having now executed well over 10,000 hours of testing in the Cloud, SOASTA has experienced many successes and more than a few challenges. This article is intended to share our experiences to help others understand the opportunities and avoid the pitfalls associated with cloud computing. Gartner has positioned cloud computing at the “Peak of Inflated Expectations,” and the fall into the “Trough of Disillusionment” can be a steep one if you don’t know what to expect. According to Gartner, it will be two to five years before cloud computing reaches mainstream adoption. So how do you get ahead of the curve?

SOASTA has provisioned over 250,000 servers, most in Amazon EC2. Our application, CloudTest™, is a load and performance testing solution designed to deliver real-time, actionable Performance Intelligence that builds confidence in the performance, reliability and scalability of websites and applications. It leaves the baggage of client-server alternatives behind and is extremely well suited to the cloud. In fact, dev/test has been acknowledged as a logical entry point for leveraging cloud computing due, in large part, to the variability of resource requirements. It also improves web application testing quality by making it viable to simulate web-scale user volumes from outside the firewall.

The question isn’t if cloud computing will become mainstream, but when and how? What issues need to be addressed, and how do you make the decision to move to the cloud?

What is the Cloud?

No, we’re not going to create yet another definition for cloud computing. While both vendors and users often spin the definition(s) to match their agendas or points of view, most tend to describe it in terms of services, typically in three “buckets”:

  • Software-as-a-Service
  • Platform-as-a-Service
  • Infrastructure-as-a-Service

Software-as-a-Service is characterized by providing application capabilities, such as backup, mail, CRM, billing or testing. Platform services, such as Force.com, Google App Engine, Microsoft Azure and Engine Yard, are designed to make it easier to build and deploy applications in a specific multi-tenant environment without having to worry about the underlying compute stack. Infrastructure services allow companies to take advantage of the compute and storage resources delivered by vendors, such as Amazon, Rackspace and GoGrid, while providing a great deal of flexibility in how those resources are used.

According to a draft definition by the National Institute of Standards and Technology, which is heading up a government-wide cloud computing initiative, Infrastructure-as-a-Service is “the capability provided to the consumer to rent processing, storage, networks and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications and possibly select networking components (e.g., firewalls, load balancers).”

SOASTA provides a software-based service, centered on reliability, that tests and measures the software, platform and infrastructure of both cloud-based and on-premise applications. This article focuses primarily on infrastructure, since the bulk of SOASTA’s experience is as a consumer of cloud services.

The diagram below illustrates how SOASTA leverages the cloud to deploy its Global Test Cloud components for on-demand testing.

Public and Private Clouds
Internal clouds are often about optimizing a private infrastructure. As a result, they’re usually referred to as private clouds. Public clouds provide access to universally accessible resources. Today, public cloud vendors are offering hybrid alternatives that leverage their compute and storage resources, yet require the proper authorization for access. Extending an internal or managed infrastructure by renting virtualized resources on demand has become an increasingly viable option. As a result, the line between public and private clouds is starting to blur.

With a private cloud, IT has greater control over issues such as security and performance that are common concerns about shared resources in the public cloud. Instead of renting the infrastructure provided by others, companies use commercial or open-source products to build their own cloud. Of course, this means they also need to address challenges such as repurposing servers, choosing a virtualization platform, image management and provisioning (including self-service) and capturing data for budgeting and chargeback.

Thus, even companies that build their own internal cloud infrastructure will identify applications well suited to deployment in the public cloud, including externally managed hybrid environments like Amazon’s Private Cloud. Most IT shops start with applications that take advantage of the benefits delivered by Infrastructure-as-a-Service. This includes applications that are deployed for short periods of time, have increased use based on business cycles, need to scale on-demand to respond to known or unanticipated traffic spikes, have limited privacy and security concerns, and are simple to deploy.

Deploying Applications in the Cloud
Generally, an organization will start with a low-risk, tactical application before leveraging the cloud for strategic, business-critical applications. Typically, these initial apps require little, if any, integration with other data or systems in the internal infrastructure (integration adds significant complexity and severely hinders the goal of quickly deploying a stack and uploading an application). We are starting to see hybrid applications: that is, apps that are deployed on-premise but respond to spikes by automatically provisioning instances of stressed components of the infrastructure into the public cloud.
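For illustration only, the kind of scale-out decision such a hybrid app makes can be sketched in a few lines. The function names, throughput figures and headroom factor below are all invented for this example, not taken from any vendor or from SOASTA's product:

```python
import math

def should_burst(current_rps, on_premise_capacity_rps):
    """True when on-premise capacity is exceeded and the overflow
    should be provisioned into the public cloud."""
    return current_rps > on_premise_capacity_rps

def instances_needed(current_rps, rps_per_instance, headroom=1.25):
    """Rough sizing rule for a stressed tier: how many cloud instances
    to provision to absorb the current request rate, with some headroom.
    All numbers are illustrative, not guidance."""
    return math.ceil(current_rps * headroom / rps_per_instance)

# e.g., 900 req/s against a 600 req/s on-premise ceiling triggers a
# burst, and at 250 req/s per instance (with 25% headroom) the app
# would ask its provider for five instances.
```

The real decision is usually messier (warm-up time, billing granularity, session affinity), but the shape of the logic is the same: a trigger condition plus a sizing rule.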

SOASTA CloudTest’s first deployment environment was in Amazon EC2. The requirements for load and performance testing fit almost all of the characteristics above: a need measured in hours, not months or years; an often significant and varying amount of compute resources; and minimal security issues. EC2 was the first to provide a platform that dramatically changed the cost equation for compute resources and delivered an elastic API for speed of deployment.

For an application that depends on the swift provisioning and releasing of servers, it meant that we had to quickly identify bad instances and bring up replacement instances. In addition, because EC2 was initially confined to a single location, it reduced some of the benefits of external testing. For companies deploying business-critical applications in the cloud, a single location might create concerns over failover and disaster recovery. Amazon Web Services has since created geographic options, regions and availability zones, providing very attractive options for failover.

Speaking of Challenges...
So what challenges might you face in moving to the cloud? Today, provisioning has become substantially easier and more reliable. Zombies (instances that are provisioned but never become usable) still exist across all vendors, although the frequency with which they appear continues to decline. Still, if on-demand, quick deployment is important to your application, you need to make sure you quickly account for systems that are dead on arrival (DOA). Handling this can be built into your deployment methodology, or you can leverage the capabilities of third-party provisioning tools and services.
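Built into a deployment methodology, the check can be as simple as the following sketch. The function and the data shape are hypothetical, not any vendor's API: after launching a batch, health-check each instance and treat anything still unreachable past a boot timeout as a zombie to terminate and replace.

```python
import time

def partition_instances(instances, boot_timeout_s=300, now=None):
    """Split freshly provisioned instances into healthy ones and likely
    zombies. Each instance is a dict with 'id', 'launched_at' (epoch
    seconds) and 'reachable' (whether it answered a health check).
    Anything unreachable past the boot timeout is treated as DOA and
    should be terminated and replaced; anything unreachable but still
    within the timeout is left to finish booting."""
    now = time.time() if now is None else now
    healthy, doa = [], []
    for inst in instances:
        if inst["reachable"]:
            healthy.append(inst["id"])
        elif now - inst["launched_at"] > boot_timeout_s:
            doa.append(inst["id"])
    return healthy, doa
```

A deployment loop would call this repeatedly, requesting one replacement per DOA instance, until the healthy count matches the requested count.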

This issue has been mitigated by automation and packaging of applications. Companies, such as RightScale, Appistry, 3Tera and others, provide tools to package and deploy applications leveraging relatively simple and intuitive user interfaces.

Some infrastructure vendors provide tremendous flexibility but little application awareness. Others provide more platform-oriented capabilities and include value-added features, such as monitoring, auto-scaling, pre-configured stacks and application containers.

Virtualization technology has allowed infrastructure vendors and enterprises to optimize their use of physical resources and is one of the fundamental building blocks that has distinguished the cloud from what was often referred to as “grid” or “utility” computing. The second key building block is the implementation of APIs, popularized by Amazon as elastic APIs. A prerequisite for today’s cloud infrastructure vendors, these interfaces enable the ability to dynamically start, stop, query the status of and manage the available resources programmatically and without the intervention of the infrastructure vendor.
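The start/stop/query cycle those elastic APIs enable can be sketched as a thin wrapper. The `ElasticFleet` class below is our own illustration, not part of any SDK; it assumes a client object exposing EC2-style `run_instances`, `describe_instance_status` and `terminate_instances` calls (boto3's EC2 client is one real-world example of that shape):

```python
class ElasticFleet:
    """Minimal sketch of programmatic control over an elastic API:
    start a fleet, query its state, tear it down. Any client with
    EC2-style call signatures can be injected."""

    def __init__(self, client):
        self.client = client
        self.instance_ids = []

    def start(self, count, image_id, instance_type):
        # Provision `count` instances in one call; no vendor human in the loop.
        resp = self.client.run_instances(
            ImageId=image_id, InstanceType=instance_type,
            MinCount=count, MaxCount=count)
        self.instance_ids += [i["InstanceId"] for i in resp["Instances"]]
        return self.instance_ids

    def status(self):
        # Map each instance id to its current state name.
        resp = self.client.describe_instance_status(
            InstanceIds=self.instance_ids)
        return {s["InstanceId"]: s["InstanceState"]["Name"]
                for s in resp["InstanceStatuses"]}

    def stop(self):
        # Release everything; with hourly billing, idle fleets cost money.
        if self.instance_ids:
            self.client.terminate_instances(InstanceIds=self.instance_ids)
        self.instance_ids = []
```

Injecting the client rather than constructing it inside the class also makes the logic testable without touching a live account.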

There’s much discussion about portability, lock-in and the need for an open API. A standard elastic API is probably years away, causing many of the added-value deployment vendors (such as those listed above) to do the work for you: build your app using their tools and you don’t have to worry about infrastructure vendor lock-in.

Of course, that raises the question of where lock-in occurs. You’re never really locked-in if you’re willing to do the work to change. That could mean changing your processes for deployment, writing to a new API, repackaging your application or changing the stack. There will always be choices to make, and to minimize the impact of lock-in, you need to determine which layer is going to be easiest to change should you need to switch vendors. Ultimately, the only way vendors should lock you in is by providing superior service.

When working with public cloud vendors, planning certainly helps in meeting large-scale resource requirements. In SOASTA’s case, one engagement may require only two or three servers, while another may need hundreds, or even thousands, of server cores. We often schedule our compute requirements a week or two in advance, which helps our infrastructure vendors respond when we’re provisioning hundreds of servers at a time. In most cases, you’ll know your resource requirements ahead of time. However, since on-demand access to a pool of resources is one of the benefits of the cloud, you’ll want to understand the limits of your vendor to respond to ad hoc requests.

One way to minimize this issue is to spread the risk among multiple vendors. With CloudTest’s distributed architecture, we have the capability to concurrently provision across multiple cloud providers to help mitigate availability issues. Commercial deployment technologies can help reduce the complexity of deploying components of an application to more than one provider if this is a design criterion.
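As an illustration of spreading that risk, here is a hypothetical round-robin allocator. The provider names and per-provider capacities are invented; in practice, capacity would reflect what each vendor has reliably delivered on short notice:

```python
def spread_across_providers(total, capacity):
    """Greedy round-robin allocation of a server request across
    providers. `capacity` maps provider name -> the most instances we
    trust that provider to deliver on short notice. Filling one
    instance at a time per provider means no single vendor outage or
    shortfall takes out the whole grid."""
    alloc = {name: 0 for name in capacity}
    remaining = total
    while remaining > 0:
        progressed = False
        for name, cap in capacity.items():
            if remaining == 0:
                break
            if alloc[name] < cap:
                alloc[name] += 1
                remaining -= 1
                progressed = True
        if not progressed:
            # Every provider is at capacity; surface the shortfall
            # rather than silently under-provisioning the test.
            raise RuntimeError("short by %d instances" % remaining)
    return alloc
```

A smarter allocator would weight by price, region or past reliability, but the failure mode it protects against is the same: concentration on one vendor.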

Another variable to consider is persistence of the application: for how long and how often does it need to be up and running? This has a particular impact on pricing. Is it only going to be in place for a few weeks, or is it a business-critical application that needs to be available 24/7 for an extended period of time?

Cloud vendors are trying to strike the balance between the traditional offerings of a managed service provider and a more flexible, scalable platform that is available and billed on demand. For example, Amazon now offers reserved instances, as well as other offerings that are a blend of on-demand and monthly rental, while traditional hosting companies, like Rackspace and Savvis, race to provide fast, affordable and easy access to short-term and elastic compute and storage resources to complement their standard hosted offerings. Some, it should be noted, are much further along than others in providing a range of options.
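The on-demand versus reserved trade-off reduces to simple arithmetic: a reservation trades an upfront fee for a lower hourly rate, so it pays off once usage passes a break-even number of hours. The rates in the example below are invented for illustration, not actual prices:

```python
def breakeven_hours(on_demand_rate, reserved_hourly, reserved_upfront):
    """Hours of use per reservation term at which a reserved instance
    becomes cheaper than pure on-demand. Rates are illustrative."""
    saving_per_hour = on_demand_rate - reserved_hourly
    if saving_per_hour <= 0:
        raise ValueError("reserved hourly rate must undercut on-demand")
    return reserved_upfront / saving_per_hour

# With a $0.10/hr on-demand rate, a $0.04/hr reserved rate and a $300
# upfront fee, reservation wins past 5,000 hours of use in the term.
```

An application that runs 24/7 (roughly 8,760 hours a year) clears that bar easily; a test grid that runs for hours at a time does not, which is why persistence matters so much to pricing.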


Security
Security is the number one concern when it comes to the public cloud. Testing, particularly load and performance testing, rarely requires the use of or access to real customer data. Since our application is in the cloud for the express purpose of simulating external load to a web application, our experience has not often included governance, compliance, data or other security concerns an organization might have when moving applications to the cloud. As a result, we’ll not cover security in-depth in this article.

There are organizations, such as the Cloud Security Alliance, as well as many commercial and open-source companies, working to address security concerns. Private and hybrid clouds, VPN access to cloud resources and industry-specific cloud initiatives are just some of the ways to minimize risk.

It’s worth noting that many organizations have become comfortable running their applications and storing their data at managed service providers because they comply with various regulations and auditing statements, such as SAS 70.

Cloud providers have also focused on securing the assets of their customers. For the most part, cloud and managed service providers have far more secure environments than many small- and medium-sized businesses that don’t always invest in basic security measures, much less comply with standards such as PCI, the Payment Card Industry’s Data Security Standard, which is intended to protect customers’ personal information. As a result, for many organizations, the cloud is actually a more secure environment than on-premises infrastructure.


Governance
The politics and competition within an organization for resources and funding are always a challenge. Managing issues like compliance with corporate standards and protection of data require careful attention in larger IT shops. In the past, the unavailability of hardware has been a limiting factor in the potential sprawl of application development. With the cloud, the ease-of-use and low cost described earlier make “skunk works” projects more feasible. This means individual businesses or groups within the company can be nimble and responsive, but they can also increase fragmentation and management risks.

It’s not dissimilar to the challenges posed years ago by the proliferation of the PC, or programs like Excel that clearly led to a wide variety of issues. However, we’ve learned to manage these challenges because the value proposition is compelling enough to deal with the governance concerns.


Performance
According to Forrester, 74% of app performance problems are reported by end users to a service/support desk rather than found by infrastructure management. If this is true of applications using discrete resources under the control of the IT department, what happens when the infrastructure is in the cloud? In addition, many of today’s web applications are delivered in part by third parties (CDNs, web analytics, news feed providers and other aggregators) and are thus outside IT’s direct control.

When talking about comparing cloud vendors, Alan Williamson, editor-in-chief of Cloud Computing Journal, wrote, “The first comparison point is performance. This is a common question asked at our bootcamps, and it is important to understand that you are never going to get the same level of throughput as you would on a bare-metal system. As it’s a virtualized world, you are only sharing resources. You may be lucky and find yourself the only process on a node and get excellent throughput. But it’s very rare. At the other extreme, you may be unlucky and find yourself continually competing for resources due to some ‘noisy neighbor’ that is hogging the CPU/IO. The problem is that you never know who your neighbors are.”

Spec-based comparisons are difficult as even machines that ostensibly have the same specifications may be using different processors, disk subsystems, etc. These differences are further obscured by how some cloud vendors refer to the options they provide. It can be like decoding the difference between venti, grande and tall if you’re not a Starbucks regular. The more specifics you can get from your provider, the better.

Ultimately, the best way to ensure good results is to understand and measure the factors that impact performance. Not all virtualized resources are created equal. While access to elastic resources is good for responding to peak requirements, simply scaling hardware does not solve underlying performance problems or the impact of a poorly designed application. It doesn’t tell you if you have too few load balancers, missing database indexes or improper settings in your firewall.

Just as with infrastructure you own, test and measurement is the only way to confidently optimize your application’s performance in the cloud: an area where SOASTA can help.
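One crude but practical measurement, offered here purely as a sketch (the workload is arbitrary and the results say nothing about any particular vendor): time a fixed piece of CPU-bound work on each nominally identical instance and compare the spread. A wide spread across "identical" machines is the noisy-neighbor and mixed-hardware effect described above, made visible.

```python
import time

def cpu_benchmark(iterations=200_000):
    """Crude CPU throughput probe: time a fixed amount of integer work
    and return operations per second. Run on each nominally identical
    instance in a fleet and compare results."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    elapsed = time.perf_counter() - start
    return iterations / elapsed

def spread_pct(samples):
    """Relative spread (percent) between the slowest and fastest
    instance in a set of benchmark results."""
    lo, hi = min(samples), max(samples)
    return 100.0 * (hi - lo) / lo
```

A single number like this won't diagnose missing indexes or misconfigured firewalls, which is exactly why it complements, rather than replaces, full load testing of the application.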

Criteria for Selecting a Cloud Provider
The list of reasons to choose one vendor over another for any service is pretty consistent, and certainly true of cloud infrastructure vendors:

Range of Offerings
There are a surprising number of options beyond simply providing compute power and storage. As infrastructure becomes commoditized, vendors continue to add capabilities to differentiate their offerings. Look for things like a broad set of rental alternatives, load balancers and other network features, content distribution, management and control, geographic locations, VPN access, managed services, and storage options.

Quality of Service
Based on our experience, there’s not much of a gap between the major infrastructure vendors when it comes to quality of service as measured by uptime. The differences are often found in the transparency provided by the vendor and their customer support. For example, is it important for you to know the exact configuration of the image you’re buying? How much control do you have over deployment and management? Are there different levels of support (such as chat, phone and email) and access to support resources based on customer profile and willingness to pay?

Simplicity
While closely related to the overall quality of service and range of offerings, simplicity deserves to stand on its own as a point of consideration. Whether you’re writing directly to APIs or using a vendor-provided management dashboard, making it easy is one of the reasons the cloud has become so popular.

Price
The other primary driver for the popularity of the cloud is price. As noted above, it can be a double-edged sword. Since it’s so much easier and cheaper to get access to hardware, larger companies are struggling to contain the unchecked proliferation of projects. However, the fundamental motivation to move to the cloud is financial and, for the right applications, savings in hardware, energy and management overhead are very real and substantial.

SOASTA’s Test Cloud is designed to leverage all of the major infrastructure providers. We have access to multiple locations around the world and can provision servers across geographies in minutes. We bring up hundreds of servers simultaneously and tear them down hours later. And, of course, we endeavor to do so at the lowest possible cost for our customers. While perhaps a bit unusual in its scope, we hope that sharing our 10,000+ hours in the cloud will help you when considering your unique situation.

More Stories By SOASTA Blog

The SOASTA platform enables digital business owners to gain unprecedented and continuous performance insights into their real user experience on mobile and web devices in real time and at scale.

