SOASTA's 10,000 Hours in the Cloud

Lessons Learned from SOASTA’s Experiences in Cloud Computing

“Practice makes perfect” is an adage heard by everyone from budding pianists to ambitious athletes. In his book Outliers, Malcolm Gladwell focuses on exceptional people who don’t fit into our normal definition of achievers. A common theme in explaining the successes of these individuals is the “10,000 Hour Rule.” Gladwell’s theory suggests that high performance is, in large part, the result of putting in an enormous amount of time, and that it takes about 10,000 hours to gain the experience necessary to achieve greatness.

The rule, according to Gladwell, applies to more than just individuals. He points out that the Beatles performed live in Hamburg, Germany over 1200 times between 1960 and 1964, amassing the hours needed to catapult the band to stardom. Clearly, it applies to companies as well. That’s not to say that talent, intelligence and good timing don’t play a role; but nothing can replace good old-fashioned experience.

Having now executed well over 10,000 hours of testing in the Cloud, SOASTA has experienced many successes and more than a few challenges. This article is intended to share our experiences to help others understand the opportunities and avoid the pitfalls associated with cloud computing. Gartner has positioned cloud computing at the "Peak of Inflated Expectations", and the fall into the "Trough of Disillusionment" can be a steep one if you don’t know what to expect. According to Gartner, mainstream adoption of cloud computing is still two to five years away. So how do you get ahead of the curve?

SOASTA has provisioned over 250,000 servers, most in Amazon EC2. Our application, CloudTest™, is a load and performance testing solution designed to deliver real-time, actionable Performance Intelligence that builds confidence in the performance, reliability and scalability of websites and applications. It leaves the baggage of client-server alternatives behind and is extremely well suited to the cloud. In fact, dev/test has been acknowledged as a logical entry point for leveraging cloud computing due, in large part, to the variability of resource requirements. It also improves web application testing quality by making it viable to simulate web-scale user volumes from outside the firewall.

The question isn’t if cloud computing will become mainstream, but when and how? What issues need to be addressed, and how do you make the decision to move to the cloud?

What is the Cloud?

No, we’re not going to create yet another definition for cloud computing. While both vendors and users often spin the definition(s) to match their agendas or points of view, most tend to describe it in terms of services, typically in three “buckets”:

  • Software-as-a-Service
  • Platform-as-a-Service
  • Infrastructure-as-a-Service

Software-as-a-Service is characterized by providing application capabilities, such as backup, mail, CRM, billing or testing. Platform services, such as Force.com, Google App Engine, Microsoft Azure and Engine Yard, are designed to make it easier to build and deploy applications in a specific multi-tenant environment without having to worry about the underlying compute stack. Infrastructure services allow companies to take advantage of the compute and storage resources delivered by vendors, such as Amazon, Rackspace and GoGrid, while providing a great deal of flexibility in how those resources are used.

According to a draft definition by the National Institute of Standards and Technology, which is heading up a government-wide cloud computing initiative, Infrastructure-as-a-Service is “the capability provided to the consumer to rent processing, storage, networks and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications and possibly select networking components (e.g., firewalls, load balancers).”

SOASTA provides a software-based service, centered on reliability, that tests and measures the software, platform and infrastructure of both cloud-based and on-premise applications. This article focuses primarily on infrastructure, since the bulk of SOASTA’s experience is as a consumer of cloud services.

[Diagram: how SOASTA leverages the cloud to deploy its Global Test Cloud components for on-demand testing.]

Public and Private Clouds
Internal clouds are often about optimizing a private infrastructure; as a result, they’re usually referred to as private clouds. Public clouds, by contrast, offer resources that anyone can rent. Today, public cloud vendors are offering hybrid alternatives that leverage their compute and storage resources yet require the proper authorization for access. Extending an internal or managed infrastructure by renting virtualized resources on demand has become an increasingly viable option. As a result, the line between public and private clouds is starting to blur.

With a private cloud, IT has greater control over issues such as security and performance that are common concerns about shared resources in the public cloud. Instead of renting the infrastructure provided by others, companies use commercial or open-source products to build their own cloud. Of course, this means they also need to address challenges such as repurposing servers, choosing a virtualization platform, image management and provisioning (including self-service), and capturing data for budgeting and chargeback.

Thus, even companies that build their own internal cloud infrastructure will identify applications well suited to deployment in the public cloud, including externally managed hybrid environments like Amazon’s Virtual Private Cloud. Most IT shops start with applications that take advantage of the benefits delivered by Infrastructure-as-a-Service. This includes applications that are deployed for short periods of time, have increased use based on business cycles, need to scale on demand to respond to known or unanticipated traffic spikes, have limited privacy and security concerns, and are simple to deploy.

Deploying Applications in the Cloud
Generally, an organization will start with a low-risk, tactical application before leveraging the cloud for strategic, business-critical applications. Typically, these initial apps require little, if any, integration with other data or systems in the internal infrastructure (which adds significant complexity and severely hinders the goal of quickly deploying a stack and uploading an application). We are starting to see hybrid applications: that is, apps that are deployed on-premise but respond to spikes by automatically provisioning instances of stressed components of the infrastructure into the public cloud, as sketched below.
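To make that bursting pattern concrete, here is a minimal sketch of a threshold-based controller in Python. It is not any particular product's implementation; `queue_depth()`, `provision_cloud_worker()` and `release_cloud_worker()` are hypothetical hooks standing in for whatever monitoring and provisioning APIs an organization actually uses.

```python
import time

# Hypothetical hooks: wire these to your real monitoring and provisioning APIs.
def queue_depth() -> int:
    raise NotImplementedError

def provision_cloud_worker() -> str:          # returns a new instance identifier
    raise NotImplementedError

def release_cloud_worker(instance_id: str) -> None:
    raise NotImplementedError

BURST_THRESHOLD = 500   # queued requests at which on-premise capacity is saturated
DRAIN_THRESHOLD = 100   # queued requests below which rented capacity can be released
CHECK_INTERVAL = 30     # seconds between checks

def burst_controller() -> None:
    """Rent public-cloud workers during spikes; release them when the spike passes."""
    burst_instances: list[str] = []
    while True:
        depth = queue_depth()
        if depth > BURST_THRESHOLD:
            burst_instances.append(provision_cloud_worker())
        elif depth < DRAIN_THRESHOLD and burst_instances:
            release_cloud_worker(burst_instances.pop())
        time.sleep(CHECK_INTERVAL)
```

The two thresholds are deliberately far apart so the controller does not thrash, provisioning and releasing instances on every small fluctuation in load.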

SOASTA CloudTest’s first deployment environment was in Amazon EC2. The requirements for load and performance testing fit almost all of the characteristics above: a need measured in hours, not months or years; an often significant and varying amount of compute resources; and minimal security issues. EC2 was the first to provide a platform that dramatically changed the cost equation for compute resources and delivered an elastic API for speed of deployment.

For an application that depends on the swift provisioning and releasing of servers, this meant that we had to quickly identify bad instances and bring up replacements. In addition, because EC2 was initially confined to a single location, it reduced some of the benefits of external testing. For companies deploying business-critical applications in the cloud, a single location might create concerns over failover and disaster recovery. Amazon Web Services has since added geographic options (regions and availability zones), providing very attractive options for failover.

Speaking of Challenges...
So what challenges might you face in moving to the cloud? Today, provisioning has become substantially easier and more reliable. Zombie instances still exist across all vendors, although the frequency with which they appear continues to decline. Still, if on-demand, quick deployment is important to your application, you need to make sure you quickly account for systems that are DOA. Handling this can be built into your deployment methodology, or you can leverage the capabilities of third-party provisioning tools and services.
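As a minimal sketch of "accounting for systems that are DOA", the Python below launches a batch of EC2 instances, polls their status checks, and replaces any that never become healthy. It uses the boto3 SDK, which post-dates the period described here and is shown purely for illustration; the AMI ID, instance type and timing values are placeholders.

```python
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

AMI_ID = "ami-0123456789abcdef0"   # placeholder: substitute a real image
INSTANCE_TYPE = "m5.large"
STATUS_DEADLINE = 600              # seconds to wait before declaring an instance DOA

def launch(count):
    """Request `count` on-demand instances and return their IDs."""
    resp = ec2.run_instances(ImageId=AMI_ID, InstanceType=INSTANCE_TYPE,
                             MinCount=count, MaxCount=count)
    return [i["InstanceId"] for i in resp["Instances"]]

def replace_doa(instance_ids):
    """Terminate instances that fail to become healthy in time and launch replacements."""
    deadline = time.time() + STATUS_DEADLINE
    healthy = set()
    while time.time() < deadline and len(healthy) < len(instance_ids):
        resp = ec2.describe_instance_status(InstanceIds=instance_ids,
                                            IncludeAllInstances=True)
        for status in resp["InstanceStatuses"]:
            if status["InstanceStatus"]["Status"] == "ok":
                healthy.add(status["InstanceId"])
        time.sleep(15)
    dead = [i for i in instance_ids if i not in healthy]
    if dead:
        ec2.terminate_instances(InstanceIds=dead)
        return sorted(healthy) + launch(len(dead))   # replace the DOA instances
    return sorted(healthy)

fleet = replace_doa(launch(10))
```

In practice you would also cap the number of replacement attempts; a region that keeps handing out impaired instances is itself a signal to fail over elsewhere.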

This issue has been mitigated by automation and packaging of applications. Companies, such as RightScale, Appistry, 3Tera and others, provide tools to package and deploy applications leveraging relatively simple and intuitive user interfaces.

Some vendors provide tremendous amounts of flexibility but little application awareness. Others provide more platform-oriented capabilities and include value-added features, such as monitoring, auto-scaling, pre-configured stacks and application containers.

Virtualization technology has allowed infrastructure vendors and enterprises to optimize their use of physical resources and is one of the fundamental building blocks that has distinguished the cloud from what was often referred to as “grid” or “utility” computing. The second key building block is the implementation of APIs, popularized by Amazon as elastic APIs. A prerequisite for today’s cloud infrastructure vendors, these interfaces make it possible to start, stop, query the status of, and manage the available resources programmatically, without the intervention of the infrastructure vendor.
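As a small illustration of those primitives, the sketch below queries, stops and restarts instances through such an elastic API, again using boto3 purely for illustration; the instance IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_ids = ["i-0abc1234def567890"]   # placeholder instance IDs

# Query: what state is each instance in?
resp = ec2.describe_instances(InstanceIds=instance_ids)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["State"]["Name"])

# Stop and later restart the same instances -- no call to the vendor's support desk required.
ec2.stop_instances(InstanceIds=instance_ids)
ec2.get_waiter("instance_stopped").wait(InstanceIds=instance_ids)
ec2.start_instances(InstanceIds=instance_ids)
```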

There’s much discussion about portability, lock-in and the need for an open API. A standard elastic API is probably years away, which is why many of the value-added deployment vendors (such as those listed above) do the work for you: build your app using their tools and you don’t have to worry about infrastructure vendor lock-in.

Of course, that raises the question of where lock-in occurs. You’re never really locked in if you’re willing to do the work to change. That could mean changing your processes for deployment, writing to a new API, repackaging your application or changing the stack. There will always be choices to make, and to minimize the impact of lock-in, you need to determine which layer is going to be easiest to change should you need to switch vendors. Ultimately, the only way vendors should lock you in is by providing superior service.

When working with public cloud vendors, planning certainly helps in meeting large-scale resource requirements. In SOASTA’s case, one engagement may require only two or three servers, while another may need hundreds, or even thousands, of server cores. We often schedule our compute requirements a week or two in advance, which helps our infrastructure vendors respond when we’re provisioning hundreds of servers at a time. In most cases, you’ll know your resource requirements ahead of time. However, since on-demand access to a pool of resources is one of the benefits of the cloud, you’ll want to understand the limits of your vendor to respond to ad hoc requests.
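One way to stay within a vendor's ability to respond to ad hoc requests is to ask for capacity in modest batches and back off when the provider reports a shortfall. The hedged sketch below does this against EC2 with boto3; the AMI ID and batch size are placeholders, and `InsufficientInstanceCapacity` is the error code EC2 returns when it cannot satisfy a request (other vendors signal this differently).

```python
import time
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")
AMI_ID = "ami-0123456789abcdef0"   # placeholder image
BATCH_SIZE = 50                    # ask for servers in modest chunks

def provision(total):
    """Launch `total` instances in batches, backing off when capacity is short."""
    instance_ids, delay = [], 30
    while len(instance_ids) < total:
        want = min(BATCH_SIZE, total - len(instance_ids))
        try:
            resp = ec2.run_instances(ImageId=AMI_ID, InstanceType="m5.large",
                                     MinCount=want, MaxCount=want)
            instance_ids += [i["InstanceId"] for i in resp["Instances"]]
            delay = 30                                    # reset backoff after a success
        except ClientError as err:
            if err.response["Error"]["Code"] != "InsufficientInstanceCapacity":
                raise
            time.sleep(delay)                             # capacity shortfall: wait and retry
            delay = min(delay * 2, 600)
    return instance_ids
```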

One way to minimize this issue is to spread the risk among multiple vendors. With CloudTest’s distributed architecture, we have the capability to concurrently provision across multiple cloud providers to help mitigate availability issues. Commercial deployment technologies can help reduce the complexity of deploying components of an application to more than one provider if this is a design criterion.
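The idea of spreading risk across vendors can be sketched as a small provider abstraction with concurrent fan-out. The `CloudProvider` interface and the even split below are illustrative assumptions, not CloudTest's actual architecture.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Protocol

class CloudProvider(Protocol):
    """Minimal interface a per-vendor wrapper would implement (hypothetical)."""
    name: str
    def provision(self, count: int) -> list[str]:
        ...

def provision_across(providers: list[CloudProvider], total: int) -> dict[str, list[str]]:
    """Split a request evenly across providers and provision in parallel."""
    share = total // len(providers)   # remainder handling omitted for brevity
    results: dict[str, list[str]] = {}
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        futures = {pool.submit(p.provision, share): p for p in providers}
        for future, provider in futures.items():
            try:
                results[provider.name] = future.result()
            except Exception:
                results[provider.name] = []   # one vendor failing should not sink the test
    return results
```

Any provider that fails simply contributes zero servers; the caller can then decide whether to re-request the shortfall from the vendors that did respond.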

Another variable to consider is persistence of the application: for how long and how often does it need to be up and running? This has a particular impact on pricing. Is it only going to be in place for a few weeks, or is it a business-critical application that needs to be available 24/7 for an extended period of time?

Cloud vendors are trying to strike a balance between the traditional offerings of a managed service provider and a more flexible, scalable platform that is available and billed on demand. For example, Amazon now offers reserved instances, as well as other offerings that blend on-demand and monthly rental, while traditional hosting companies like Rackspace and Savvis race to provide fast, affordable and easy access to short-term and elastic compute and storage resources to complement their standard hosted offerings. Some, it should be noted, are much further along than others in providing a range of options.
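As a back-of-the-envelope illustration of why persistence matters, the sketch below compares a year of on-demand cost against a reserved instance at a given utilization. The hourly rates and upfront fee are made-up numbers, not any vendor's actual pricing.

```python
HOURS_PER_YEAR = 8760

# Hypothetical pricing -- substitute your vendor's actual rates.
ON_DEMAND_RATE = 0.10        # $ per instance-hour
RESERVED_RATE = 0.04         # $ per instance-hour after the upfront fee
RESERVED_UPFRONT = 300.00    # $ one-time fee per reserved instance

def yearly_cost(hours_used):
    """Return (on_demand_cost, reserved_cost) for a year with `hours_used` of uptime."""
    on_demand = hours_used * ON_DEMAND_RATE
    reserved = RESERVED_UPFRONT + hours_used * RESERVED_RATE
    return on_demand, reserved

# A short-lived test rig vs. a 24/7 business-critical application:
for hours in (200, HOURS_PER_YEAR):
    od, rs = yearly_cost(hours)
    print(f"{hours:>5} h/yr   on-demand ${od:8.2f}   reserved ${rs:8.2f}")
```

With these assumed rates, 200 hours of use clearly favors on-demand, while a full year of uptime favors the reserved option; the crossover point depends entirely on the real rates and your application's persistence.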

Security

Security is the number one concern when it comes to the public cloud. Testing, particularly load and performance testing, rarely requires the use of or access to real customer data. Since our application is in the cloud for the express purpose of simulating external load against a web application, our experience has not often included the governance, compliance, data or other security concerns an organization might have when moving applications to the cloud. As a result, we’ll not cover security in depth in this article.

There are organizations, such as the Cloud Security Alliance, as well as many commercial and open-source companies, working to address security concerns. Private and hybrid clouds, VPN access to cloud resources and industry-specific cloud initiatives are just some of the ways to minimize risk.

It’s worth noting that many organizations have become comfortable running their applications and storing their data at managed service providers because they comply with various regulations and auditing statements, such as SAS 70.

Cloud providers have also focused on securing the assets of their customers. For the most part, cloud and managed service providers have far more secure environments than many small- and medium-sized businesses that don’t always invest in basic security measures, much less comply with standards such as PCI DSS, the Payment Card Industry Data Security Standard, which is intended to protect customers’ personal information. As a result, for many organizations the cloud is actually a more secure environment than their own on-premises infrastructure.

Governance

The politics and competition within an organization for resources and funding are always a challenge. Managing issues like compliance with corporate standards and protection of data requires careful attention in larger IT shops. In the past, the unavailability of hardware has been a limiting factor in the potential sprawl of application development. With the cloud, the ease of use and low cost described earlier make “skunk works” projects more feasible. This means individual businesses or groups within the company can be nimble and responsive, but they can also increase fragmentation and management risks.

It’s not dissimilar to the challenges posed years ago by the proliferation of the PC, or of programs like Excel, which clearly led to a wide variety of issues of their own. However, we’ve learned to manage these challenges because the value proposition is compelling enough to justify dealing with the governance concerns.

Performance

According to Forrester, 74% of application performance problems are reported by end users to a service or support desk rather than found by infrastructure management. If this is true of applications using discrete resources under the control of the IT department, what happens when the infrastructure is in the cloud? In addition, much of today’s web application content is delivered by third parties (CDNs, web analytics, news feed providers and other aggregators) and is thus outside IT’s direct control.

When talking about comparing cloud vendors, Alan Williamson, editor-in-chief of Cloud Computing Journal, wrote, “The first comparison point is performance. This is a common question asked at our bootcamps, and it is important to understand that you are never going to get the same level of throughput as you would on a bare-metal system. As it’s a virtualized world, you are only sharing resources. You may be lucky and find yourself the only process on a node and get excellent throughput. But it’s very rare. At the other extreme, you may be unlucky and find yourself continually competing for resources due to some ‘noisy neighbor’ that is hogging the CPU/IO. The problem is that you never know who your neighbors are.”

Spec-based comparisons are difficult as even machines that ostensibly have the same specifications may be using different processors, disk subsystems, etc. These differences are further obscured by how some cloud vendors refer to the options they provide. It can be like decoding the difference between venti, grande and tall if you’re not a Starbucks regular. The more specifics you can get from your provider, the better.

Ultimately, the best way to ensure good results is to understand and measure the factors that impact performance. Not all virtualized resources are created equal. While access to elastic resources is good for responding to peak requirements, simply scaling hardware does not solve underlying performance problems or the impact of a poorly designed application. It doesn’t tell you if you have too few load balancers, missing database indexes or improper settings in your firewall.
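A minimal sketch of that "measure it" advice: time repeated requests from a cloud-hosted vantage point against your application and look at the latency distribution, not just the average. The URL is a placeholder, and this trivial probe is only an illustration, not SOASTA's CloudTest.

```python
import time
import urllib.request

TARGET = "https://www.example.com/"   # placeholder: the application under test
SAMPLES = 50

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()                   # include transfer time, not just the first byte
    latencies.append(time.perf_counter() - start)

latencies.sort()
p50 = latencies[len(latencies) // 2]
p95 = latencies[min(len(latencies) - 1, int(len(latencies) * 0.95))]
print(f"median {p50 * 1000:.0f} ms, 95th percentile {p95 * 1000:.0f} ms")
```

Running the same probe from instances in different regions, or on different instance types, is the quickest way to see the "noisy neighbor" variance described above before it shows up in production.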

Just as with infrastructure you own, test and measurement is the only way to confidently optimize your application’s performance in the cloud: an area where SOASTA can help.

Criteria for Selecting a Cloud Provider
The list of reasons to choose one vendor over another for any service is pretty consistent, and certainly true of cloud infrastructure vendors:

Range of Offerings
There are a surprising number of options beyond simply providing compute power and storage. As infrastructure becomes commoditized, vendors continue to add capabilities to differentiate their offerings. Look for things like a broad set of rental alternatives, load balancers and other network features, content distribution, management and control, geographic locations, VPN access, managed services, and storage options.

Quality of Service
Based on our experience, there’s not much of a gap between the major infrastructure vendors when it comes to quality of service as measured by uptime. The differences are often found in the transparency provided by the vendor and their customer support. For example, is it important for you to know the exact configuration of the image you’re buying? How much control do you have over deployment and management? Are there different levels of support (such as chat, phone and email) and access to support resources based on customer profile and willingness to pay?

Ease-of-Use
While closely related to the overall quality of service and range of offerings, simplicity deserves to stand on its own as a point of consideration. Whether you’re writing directly to APIs or using a vendor-provided management dashboard, making it easy is one of the reasons the cloud has become so popular.

Price
The other primary driver for the popularity of the cloud is price. As noted above, it can be a double-edged sword. Since it’s so much easier and cheaper to get access to hardware, larger companies are struggling to contain the unchecked proliferation of projects. However, the fundamental motivation to move to the cloud is financial and, for the right applications, savings in hardware, energy and management overhead are very real and substantial.

SOASTA’s Test Cloud is designed to leverage all of the major infrastructure providers. We have access to multiple locations around the world and can provision servers across geographies in minutes. We bring up hundreds of servers simultaneously and tear them down hours later. And, of course, we endeavor to do so at the lowest possible cost for our customers. While perhaps a bit unusual in its scope, we hope that sharing our 10,000+ hours in the cloud will help you when considering your unique situation.
