The Art & Science of Load Testing on a Budget

It's worthwhile to load-test before a crisis

Load testing your applications is much like flossing your teeth or taking out the garbage: you know you should do it; you know (vaguely, perhaps) that there will be consequences if you don't do it - and yet somehow it always seems to slip to the last item on the to-do list. Until, of course, there's a crisis of some kind.

It's useful to reiterate why it's worthwhile to load-test before a crisis. In the first place, you want to be able to give yourself and/or your client confidence that the application you're building can handle the expected traffic - and the only way to do that is to actually simulate that traffic. It's a little too easy to assume that an application that performs well under "normal" testing conditions will behave the same way (if only, perhaps, a little slower) under higher-traffic conditions. In many cases, however, that's just not so.

Second, as a developer it's important to have confidence in your application. Are there any hidden problems in your code? Load testing will often bring to light problems lurking in the corners that functional testing has missed. This is obviously important for intangible reasons (who wouldn't like to brag about their application?), but it's also important for internal and external business reasons:

  • Does the application need to be re-designed in any way?
  • Can the current database schema support the application under heavy load?
  • Do the queries need to be optimized?
  • Can this code base be reused for other high-traffic applications?
  • Does the application meet current expectations? Will this application scale well for future growth?
  • If it will scale, then what are the budget considerations for you and/or your client? Do you need additional hardware? Load-balancing software?
  • Will this application make you and/or your client happy?
This last item may sound like an intangible, but it's really not; a happy client is often a source of recurring revenue.

And yet, despite all those very good reasons in favor of load-testing applications prior to launch, my own company, Duo Consulting, didn't do it. Budgets were always too tight, it seemed, to make this part of the project plan - and besides, doesn't everyone know that you have to be a very large corporation to be able to afford any load testing? As a small company, we felt that we just weren't equipped to do it.

The Crisis
The crisis inevitably came. One of our clients is a local park district that offers online registration four times a year. The number of online registrations had been steadily increasing each season, and in particular, the first few hours of each registration period were becoming increasingly problematic. At 9:00 a.m. on the first day of registration, parents all around the greater Chicago area were poised over their keyboards to try to get their children into a limited number of slots in the park district programs. Finally, this past year, the intense traffic became too much and our production server hung right in the middle of the heaviest registration period.

Load testing was now no longer optional; it was paramount. Could our application handle any kind of serious traffic, or was the recent registration experience somehow exceptional? Did we need to rewrite any major portions of the code? Did we need to throw more hardware at the problem? What were the limits of the application, and could we scale up in time for the next registration period? And how could we afford the actual load testing itself?

In desperation, we turned to Microsoft's Web Application Stress (WAS) Tool. It's a free tool that we could download immediately, and although it's not a high-end tool like SilkPerformer, it turned out to be just right for our needs. This shouldn't be taken as a final recommendation - there are many options out there, so there's likely to be something suited to your budget and/or platform. One quick place to look for a rundown of tool options is the Software QA/Test Resource Center at www.softwareqatest.com/qatweb1.html#LOAD. There are also services that will perform load testing for you, and we even briefly considered bringing in Macromedia to help us with analysis and load testing. The cost for such a service can be prohibitive (the Macromedia consult, for instance, would have cost almost $15,000), however, and we felt that ultimately we were in the best position to analyze the site in depth. We already knew the code and the database architecture; we just needed the tools to get started.

That didn't mean we could start immediately. Setting up the environment and conditions for load testing - especially if it's the first time you've done it - does take a certain amount of time. First, we needed to set up a Web server that could approximate the production environment as closely as possible.

It's not necessary to seek perfection in this regard - it may be difficult to find a machine to dedicate to testing that has comparable hardware and memory to what's in production - but it is important to get as close to the production environment as possible. At a minimum, make sure the following match production:

  • The code base
  • The version of ColdFusion (including any updates and/or hotfixes)
  • The IIS settings
  • The OS (including any patches)
  • The ODBC drivers
  • The ColdFusion Administrator settings

If you do have hardware, memory, etc., that mimics your production site, that's even better - the closer you can get to your production environment, the more accurate and useful your tests will be. But don't despair if you can't create an exact duplicate; for instance, we didn't have an available machine that was the exact hardware and memory equivalent of our production server, yet our testing was still very, very useful.

You will also need to be able to point the site to a test database server (i.e., repurpose your development database). As far as your test clients are concerned, you should plan to set up multiple clients with WAS - one client machine will probably not approximate the kind of traffic you'll want to test against, and beyond a certain point, you're testing load on the client machine rather than the Web server.

We initially tried setting up a single designated client machine that would run multiple clients, but we found that, at least with the WAS tool, we repeatedly risked hanging the client machine rather than the Web server we were trying to test against, which meant that we weren't really load-testing at all because the "load" wasn't always getting from the client machine to the Web server. Once we moved to a scenario where we had multiple machines running the WAS clients, we could see from the Web server activity that we were finally getting the kind of load testing we wanted.

Here are the general steps we followed after setting up the Web server itself and determining which machines would act as WAS client machines. Your mileage will almost certainly vary, but these are good starting points:

First, make sure your test site is set up so that it will be accessible by all your designated clients. What exactly this means will be largely determined by what you want to test. If you want to be able to load-test from both inside and outside your organization, then your setup will obviously be different than if you want to test solely from within your organization.

I'd suggest strictly internal testing at first; our experience shows that once you move to external testing, you're simultaneously testing your application and your clients' bandwidth, which makes it harder to narrow down what your actual problems might be. If you're testing internally - that is, from machines on your local network to machines on your local network - all the test machines are on a level playing field as far as bandwidth goes. Once you move to external testing you have additional variables to contend with, many of which may be outside your control: connection speed (dial-up versus broadband), service providers, etc. External testing is certainly quite useful; it's just that it's probably not the first approach you should use.

It's also important to make sure that the URL for the test site is unique and doesn't conflict with any other versions of the site that you may have running, as you want to be certain you get clean data from your tests. In our case, we set up a domain that follows this convention: preprod.[site_name].duoconsulting.com.

Second, make sure all the machines involved in the testing process are time-synced. You need to be able to compare apples to apples, using the cleanest data possible - and that means getting log files from all the machines involved in which the timing of particular events and/or errors can be matched up easily.
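
On a Windows network, the simplest approach is to point every machine at the same time source. One way to do it from the command line (the server name here is a placeholder for whatever machine you designate as the authority):

    net time \\TIMESERVER /set /yes

Run that on each Web server and WAS client machine before a test session, and the timestamps in your various logs will line up.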

Third, make sure the WAS clients are configured and set up properly on each machine. This is not necessarily as straightforward as it sounds. Although the WAS tool is very useful, the setup instructions are, as a colleague of mine put it, "written by developers, for developers."

In particular, I found two documents very helpful: Microsoft's "HOW TO: Install and Use the Web Application Stress (WAS) Tool" (http://support.microsoft.com/default.aspx?scid=kb;en-us;313559) and "HOW TO: Measure ASP.NET Responsiveness with the Web Application Stress Tool" (http://support.microsoft.com/default.aspx?scid=kb;en-us;815161). The installation article will walk you through the IE configuration (don't ignore the proxy setting information - and know that for these purposes, "localhost" works where "127.0.0.1" does not). The second article provides tips on script configuration - in particular, why it's important to build in a warmup period and enable random delay. None of these items are intuitively obvious from the WAS tool itself, so be sure to read these articles, both of which are available on the Microsoft Web site.

WAS saves its scripting and report data in an Access database, so we found it useful to designate one client machine as the "parent." The parent client was then used to create the scripts needed for testing, and that database was then copied to the other client machines. WAS runs as a Windows Service, so be sure to choose the File > Exit & Stop Service command from WAS after you're done constructing your scripts and before you attempt to copy the database.

Set Aside Some Time
Next, you need to reserve the time to do the testing. Again, this may not be as straightforward as it sounds. You first need to determine who will be involved, and if there are multiple people, make sure that they're available for the duration of your testing plan.

In our case, two people really needed to be present: me, as lead programmer (and the person most familiar with the application), and our systems administrator (who monitored the machines during the tests). You should plan on the testing process itself taking longer than you expect (especially on your first try). There will be stops and starts, unexpected results and delays, not to mention environment bugginess, so don't expect the process to be completed in a single day.

Another equally important time consideration is the availability of equipment and network resources - and the question of when you can abuse them. If the point of the testing exercise is to find the limits of your application, you'll need to be able to hang the machines involved in the testing (probably multiple times) without any major repercussions.

If you're sharing a development server with someone who has a project deadline to meet, don't plan on testing when it will interfere with that deadline. If you're using a staging server that other clients have access to, then make sure that they aren't caught off guard by your testing. Ideally, you'll have other options so you won't have to work around either of the scenarios outlined above, but you may not have that luxury. If you must work around other people using your test machines, be extraordinarily conservative about timing on these machines - don't schedule the usage back-to-back, because if a machine goes down because of testing it may take some time to get the machine back in shape for its other purposes.

You then need to prepare to capture data from your WAS client testing. This will depend on what you're looking to find, of course, but in our case we cast our net as wide as possible precisely because we weren't sure what we were looking for. So we gathered reports from the WAS tool, database traces (you can reach SQL Profiler from the Tools menu in SQL Enterprise Manager; see "HOW TO: Troubleshoot Application Performance with SQL Server," http://support.microsoft.com/default.aspx?scid=kb;en-us;224587, for instructions on how to enable this), and the Web server performance monitor. We found that gathering data from all three of these sources really gave us a full picture of what was going on with the application: the WAS reports from the client provided information on timeouts, socket errors, and hits/requests per second; the database traces allowed us to track longer-running queries; and the reports from the Web server performance monitor gave us insight into simultaneous users, queue request times, and average page response times.
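
If your Web server runs Windows Server 2003 (or another version that ships with logman), you can script the performance monitor collection so that each test gets its own counter log. A sketch only: the counter names below come from the ColdFusion MX Server performance object on a default install and may vary by version, so verify them against Perfmon's Add Counters dialog before relying on the log.

    rem Create a counter log sampling every 5 seconds, then start it.
    rem Counter names assume a default ColdFusion MX install - verify in Perfmon.
    logman create counter CFLoadTest -si 5 -o C:\PerfLogs\CFLoadTest ^
        -c "\ColdFusion MX Server\Running Requests" ^
           "\ColdFusion MX Server\Queued Requests" ^
           "\ColdFusion MX Server\Avg Queue Time (msec)" ^
           "\ColdFusion MX Server\Avg Req Time (msec)" ^
           "\Processor(_Total)\% Processor Time"
    logman start CFLoadTest

(logman stop CFLoadTest ends the collection. Windows 2000 lacks logman, but the same counters can be logged through the Performance Monitor UI.)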

Using these sources in conjunction with each other, we could really narrow down what was happening and when. You should plan on running far more tests than you might initially expect, so it's important to organize the data as well. We numbered each test, and all data captured corresponded to those numbered tests; we ran more than 60 tests over the course of several weeks, so this was crucial when it came time to prepare comparative reports.

It is equally crucial to record your anecdotal observations of each test. You may think that the data you capture will speak for itself, but that may not be true (or at least obvious).

For instance, we at first thought that our main data point would be the number of simultaneous users that the Web server could support, but it turned out that that particular statistic didn't really speak for itself. According to the Web server performance monitor logs, our early tests showed a high number of simultaneous users on the site (good) - but those same tests also produced very high numbers of timeouts and socket errors in the WAS reports (bad), as well as very slow page response times in other portions of the performance monitor logs (very bad).

We would have had a far more difficult time figuring out what data we should be focusing on if we hadn't kept our own personal notes as well. We made notes as to whether the site was fast or slow or erratic, and at roughly what points during the test these things were happening. And again, this is especially important if you're doing lots of testing over a period of time: if you don't have your own notes about Test 4, you'll have a very difficult time comparing it to Test 61 three weeks later - or even finding good starting points for comparison.

Creating a Script
Finally, with the testing environment in place it's time to create a script. Again, what this means will vary depending on your needs. In our case, we recorded what we felt would be a fairly typical user session over the course of a couple of minutes, including what we thought might be the problem areas. Don't get too attached to the idea of the perfect script - it may be that you'll need several different scripts over the course of your testing process as you narrow down your problem areas.

You will probably also want to keep your initial scripts fairly short, and expand them only later. Our first scripts were only 10 minutes long (in other words, we were looping over our recorded script several times) - which was certainly more than enough to see where the weak points were, especially as we added more users to the mix. Longer endurance scripts (ones you might run overnight or over the course of a weekend) should probably be employed only after you've squashed all the obvious bugs you've found in your short scripts; ideally, longer scripts can provide another, more realistic benchmark, but only if you can run them without quickly hanging the servers involved.

After you've created the scripts you think you'll need, copy the Access database that holds those scripts from the parent client to all the child clients - that way, you're sure that everyone has the same script data, and that only the generated reports will be unique.

Testing
Once the clients are set and time is reserved for people and machines, coordinate your time and set the scripts running. You should plan on scaling your tests by adding increasing numbers of clients rather than heavier scripts. In other words, we found that it's better to progress your tests as follows:

  • Test 1: 1 script on 1 client machine, 100 users x 1 thread
  • Test 2: 1 script on 2 client machines, 100 users x 1 thread
  • Test 3: 1 script on 3 client machines, 100 users x 1 thread
  • Test 4: 1 script on 4 client machines, 100 users x 1 thread
  • Test 5: 1 script on 5 client machines, 100 users x 1 thread
rather than trying to scale up testing with:
  • Test 1: 1 script on 1 client machine, 100 users x 1 thread
  • Test 2: 1 script on 1 client machine, 200 users x 1 thread (or worse, 100 users x 2 threads)
  • Test 3: 1 script on 1 client machine, 300 users x 1 thread (or worse, 100 users x 3 threads)
  • Test 4: 1 script on 1 client machine, 400 users x 1 thread (or worse, 100 users x 4 threads)
  • Test 5: 1 script on 1 client machine, 500 users x 1 thread (or worse, 100 users x 5 threads)
Both scenarios look like they're testing 100-500 users, but if you follow the second scenario rather than the first you're very quickly going to be testing the limits of your client machine (CPU, in particular) rather than the application on your Web server - and your results will be skewed accordingly.

The number of users multiplied by the number of threads equals the number of sockets being created, and we found that creating 500 sockets on a single client machine just bogged down that machine; even the WAS Help notes that you should "be careful not to increase the stress level on the clients such that these boxes spend more time context switching between threads than doing actual work." And the more threads you have, the more work your client machines will be doing simply switching between them. Obviously, if you have only a single client machine available to you, then your options are limited; just be aware that this will then be an additional factor in your testing.
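
To make that arithmetic concrete, compare the two Test 5 configurations above:

    100 users x 1 thread on each of 5 client machines = 500 sockets, 100 per client
    100 users x 5 threads on 1 client machine         = 500 sockets, all on one client

Both scenarios open 500 sockets against the Web server, but only the first spreads the client-side work thinly enough that the machines generating the load don't themselves become the bottleneck.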

With your first series of tests, you're really looking to get some initial benchmark scripts, conditions, and results for comparison purposes later. Those may come with the very first scripts you try, or it may take, as in our case, several attempts to get something usable for a baseline. When we began testing, for instance, we had lockups and crashes at alarmingly low user levels. We had to tweak the script iteratively until we had eliminated some of our longer-running queries from it. We were not ignoring those problematic queries - we returned to tune them as soon as we could eliminate some of the other underlying problems we were seeing - but it wasn't useful in the beginning to try to slay all our dragons at once.

Refining the Testing Process
You will also, inevitably, be refining the set of data you're going to focus on as the most important. As I noted above, when we began our testing process we assumed that we would be using the "average number of users on site" as our first and most important measure of comparison, because we knew (roughly) the number of users we needed to be able to support. But as it turns out, that particular set of data was far less helpful in measuring the user experience we were after than "average page response time." Be flexible here: this is when you should carefully compare your results data with your anecdotal team notes.

So what exactly was our specific experience? As I mentioned above, we first spent some time tweaking our baseline test scripts, and we got some pretty horrible (if revealing) numbers. At 100 simultaneous users, the site performed just as expected - fairly fast page loads of just a few seconds. This was the "normal" mode for the site. However, at 500 simultaneous users from five separate client machines going against a single dedicated ColdFusion MX 6.1 Enterprise server, we had:

  • Average page response times well over 1 minute
  • Average queue request times well over 1 minute
  • Hundreds of timeouts and socket errors
This meant that if users were actually lucky enough to get in the queue to reach the site, they might be waiting a couple of minutes before getting any response. Obviously, we had to get those numbers down.

The first thing we did was to try to track down the worst offender - and that was clearly the database. We found that even with our short, basic scripts, eventually we would get database locks because we were using database-stored client variables. Because we had separated out our client variable storage for this application into a discrete database, we could easily see that there was far more activity there than we would have expected. Even though we had disabled the global updates to our client variable storage for the site, the application was still making unnecessary trips to the database server with each page hit.

Further research showed that in our particular instance, we could very easily switch from database-stored client variables to cookie-only client variables. This may or may not be true for others: if you are storing a great deal of information in your client variables, then database storage is probably most appropriate. If you're not storing very much information (less than 4K) and cookies won't be a problem for the site - and you're prepared with a P3P policy - then using cookie storage for your client variables may be the way to go. Once we made the change to cookie storage, site performance increased considerably.
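
In CFML terms, the switch amounts to a single attribute on the cfapplication tag. A minimal before-and-after sketch - the application and datasource names here are invented, and your own tag will likely carry additional attributes:

    <!--- Before: client variables stored in a dedicated datasource --->
    <cfapplication name="parkreg"
        clientmanagement="Yes"
        setclientcookies="Yes"
        clientstorage="clientVarsDSN">

    <!--- After: client variables stored in the browser's cookie.
          Keep the total stored data under 4K, and have your P3P
          policy in place. --->
    <cfapplication name="parkreg"
        clientmanagement="Yes"
        setclientcookies="Yes"
        clientstorage="Cookie">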

We could then restore to our scripts some of the problematic long-running queries we had excluded earlier. (We had excluded them by comparing the database traces against the lines in our initial scripts that seemed to correspond to those queries, and then simply deleting those page calls from the script.) We reran the tests with the modified scripts (that is, with the page calls added back in), capturing the database trace as we did so. From the traces we could easily identify the queries that ran most often, as well as those that grabbed the most database CPU.

This tracking really gave us bang for our buck - we were able to identify just a few problem queries and concentrate our efforts on those. We optimized those queries as much as we could, and even devised a new caching strategy (one way to implement such a strategy is sketched below) to eke out more performance gains. By this time, we could see the following numbers for 500 simultaneous users on the same machine:

  • Average page response times under 20 seconds
  • Average queue request times under 20 seconds
  • No timeouts, and only a few socket errors
Although this was a significant improvement over where we had started our testing, it still wasn't going to meet the needs of our client, so we then set up a load-balanced environment and reran our tests. The load-balancing environment we set up was a combination software-hardware solution: we used additional machines controlled by load-balancing software from Coyote Point. Again, there are many other options possible here, including setting up multiple instances of ColdFusion and load-balancing between those instances. Not surprisingly, load balancing brought us significant gains as well. And because we had run the earlier tests, we also got a fairly good sense of how much gain we would get with each additional machine (Web server and ColdFusion server) - and we could then project how many additional servers we would need to add to reach the goals that we and the client had set together.
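
The caching strategy is worth a quick illustration. One inexpensive way to cache in CFML is the cachedwithin attribute, which holds a query's result set in server memory for a specified interval - ideal for data that is read on every page but changes rarely, such as the program catalog itself. A sketch with invented query, datasource, and table names:

    <!--- Serve this result set from RAM for up to 30 minutes instead of
          hitting the database on every page view. Query caching must be
          enabled in the ColdFusion Administrator, and note that in
          ColdFusion MX cachedwithin can't be combined with cfqueryparam,
          so the input is guarded with val() instead. --->
    <cfquery name="getPrograms" datasource="parkreg"
        cachedwithin="#CreateTimeSpan(0, 0, 30, 0)#">
        SELECT program_id, program_name, age_group
        FROM programs
        WHERE season_id = #val(url.seasonID)#
    </cfquery>

Data that must be accurate to the second - the number of slots remaining in a program, for instance - is a poor candidate for this treatment; reserve the cache for the read-mostly queries.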

The Nature of the Beast
As you can see from just the short summary above, our testing was a highly iterative process, run by art at least as much as by science. In part, this is the nature of the beast - it takes a certain amount of trial and error before you hit upon the right problems and their corresponding solutions. But this also happens in part because as you refine your application environment, the source of your problems will change.

For instance, in our first tests the database CPU was maxing out during most of the script, but the Web server CPU would hardly ever rise above 10%. Why? Because of the client variable problem - it was overloading the database so much (as well as frequently locking it up) that the Web server didn't have that much to work with. Once we eliminated the client variable problem we could see from the traces that the database usage had eased significantly, but that the Web server CPU usage then rose to over 70% during certain portions of the scripts. Fix one problem, and the application bottlenecks somewhere else.

Since the process is so iterative, you'll have to clarify with your team fairly quickly what your specific endpoint will be. Of course, it has to be realistic - our client initially wanted to be able to support an entire season's possible registrants all at once, potentially 75,000 simultaneous users, which, given the budget and the actual needs of the site, didn't make sense (the site had never experienced more than 1,500 simultaneous users). Upon reflection, our client agreed to more realistic goals.

Even with realistic goals, however, it would be very easy to load-test yourself out of existence if they're not specific enough, because there's always more testing and tweaking that you could do. At some point, you and your team will need to decide something along the lines of, "we will tune the application so that all pages respond within 2 seconds when there are 500 simultaneous users on the site." In our case, we ultimately wanted to reduce the average page-response time and reduce the average queue time so that we could reach that 2-second goal. But whatever your particular goal is, once you get there, stop the testing.

Recode, Retest, Relaunch
The testing, after all, is just the first part of what you need to do. Now you have a game plan for refining the application or the database, or both, but you still need to recode, retest, and relaunch (or launch) the site. And that, obviously, takes time. So again, be cognizant of any looming deadlines so that the initial load-testing phase doesn't take up so much time that you won't be able to improve your production application. Once we got reasonably close to our 2-second page-load goal with our internal testing, we stopped our testing and did the actual recoding and regression testing we needed to do before relaunching the application.

Once we had recoded and relaunched our application, we did one final set of load tests - first, to verify our expectations; and second, to allow the client to experience the site while we load-tested. This second reason may seem like an afterthought, but it's not.

Remember that one of the main goals of load testing is to establish client confidence. Although we had been reporting our progress to the client throughout the process, this would be the first time for them to actually experience the faster version of the site. There's nothing that will establish confidence like setting up a test scenario and having your client experience the site at the same time. Having said that, be prepared for slightly different results than you may have had with strictly internal testing - because again, you'll also be testing bandwidth limitations, which throws another set of variables into the mix.

We set up specific times for our external, preproduction load tests, and let our client know ahead of time when those would be. As a result, many members of the organization were able to use the site while we were load-testing. They knew what to expect, they could see where the weak points were, and they could clearly see that the site performed better. We got client buy-in - and that's invaluable.

Going Live
The day of reckoning finally arrived - the next registration period. But this time things went smoothly. In fact, things went even better in production than in some of our final load tests, partly because we had constructed our tests so conservatively, and partly because of the low latency over the network during our internal tests (which created many more requests per unit of time). Not only were there no crashes in production, but the site performed without any slowdowns, even when we were processing nearly 300 orders a minute with well over 500 simultaneous users:

  • The Web servers' CPU usage was consistently 10% or less.
  • The database server used 15% or less of its CPU.
  • Pages responded in well under 2 seconds, on average.
  • There were 0 queued requests (and therefore, the average queue request time was 0!).
It was a completely different user experience, and both the client and the end users were very pleased with the results.

Conclusion
In the end, the load testing wasn't free, but the expense we incurred was worth it. There are many different options for testing, and I've discussed only a small number of the available tools and approaches here. You should review the tools and/or service options that seem best for your organization's needs and budget. For Duo Consulting, pursuing the load testing in-house with the necessary time, patience, and resources gave us client confidence, developer confidence, and a roadmap for scaling the application as usage increased.

More Stories By Kelly Tetterton

Kelly Tetterton is the technical lead at Duo Consulting (www.duoconsulting.com) in Chicago and has been designing and programming for the Web since 1993. She is a Certified Advanced ColdFusion MX Developer with expertise in content management systems and Fusebox methodology.
