Web 2.0: Why Has Web Testing Failed Us?

Customers have become the digital equivalent of 'Test Crash Dummies'

Ironically, despite all the negative press, many service providers' market share continues to grow rapidly. This growth has been earned by delivering new services that offer unique value and convenience in their delivery model. Because of this convenience, occasional hiccups in service have so far been tolerated and for the most part overlooked. Take, for example, the recent web site crashes at Netflix, Yahoo, and Apple: customers did not stop downloading iPhone applications (now over 300 million downloads).

But as the web services market continues to grow (Merrill Lynch predicts that over $100B (USD) in annual web services revenue will be generated by 2012), competing web services will surely be on the rise as well. As they multiply, all will offer similarly "convenient" features, thereby minimizing "convenience" as a leading criterion for choosing a particular web service.

How then do we choose our services in the future? If history is an indicator of things to come, we will soon use "quality" (as in reliability) of service as a leading decision criterion. Take my Skype example...even though there was significant economic justification and convenience in using Skype's service overseas, a service outage or poor service quality could send many users back to a more stable yet more expensive alternative. Bottom line: competition makes quality and reliability matter.

So why are sites failing at such an increasing rate?

eBay, Yahoo, and Apple are big companies...surely they can afford to thoroughly test their web applications and web networks before they go live with a service? For that matter, even smaller SaaS companies should be spending a disproportionate amount of time on testing: their web sites are often their entire distribution channel. If the site is down, the company is down. Doesn't everyone exhaustively test their web applications before going live? The simple (yet scary) answer is no, they do not.

This is not to say that no testing is being done. Many Web 2.0 applications like Netflix, Google Maps, and iTunes are tested daily for defects in new functionality. So why are they still underperforming or breaking? The problem is created when those "tested" applications go into the real world.

What makes the real world so difficult and costly to replicate when these applications appear to test well in pre-production? First, many companies do very little testing at all: it has typically been too expensive and time-consuming, or they simply lack the resources to test deeply and broadly. Second, applications have changed so much that today you can't get away with testing only inside your firewall. To simulate real-world web traffic, you must test outside the firewall and in the cloud. Consider everything a realistic test environment has to cover (see the sketch after this list):

- The individual components of an application, some of which (such as content management systems) may not even reside inside your firewall yet must still be tested. Many components may eventually reside inside a public or private cloud, and the network they run in has many complex parts.
- The assembled applications themselves, designed and built using various new development technologies such as AJAX, Flash, and Ruby, each providing unique testing challenges of its own.
- The production environment: web and application servers, memory, CPU speeds, load balancers, bandwidth, and databases, all relevant to making applications run smoothly.
- The actual users accessing your site from all around the world, using various browsers like Safari, Firefox, IE and, soon, Google Chrome.
- And, last, the various service providers with their own networks and varying connection speeds.

Creating a "real-world" environment to test in is not only daunting, but also very expensive.
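As one small illustration of that browser variability, here is a minimal sketch that replays the same request under several browser identities, since pages built with AJAX or Flash can behave differently per browser. The target URL and User-Agent strings are illustrative assumptions, not an exhaustive matrix:

```python
# Hedged sketch: probe one page under several browser identities.
# URL and User-Agent values are illustrative, not era-accurate or exhaustive.
from urllib.request import Request, urlopen

TARGET_URL = "https://example.com/"  # hypothetical application under test

BROWSERS = {
    "Safari":  "Mozilla/5.0 (Macintosh) AppleWebKit/605.1.15 Safari/605.1.15",
    "Firefox": "Mozilla/5.0 (Windows NT 10.0; rv:115.0) Gecko/20100101 Firefox/115.0",
    "IE":      "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)",
    "Chrome":  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/120.0 Safari/537.36",
}

for name, agent in BROWSERS.items():
    # Send the same request, varying only the declared browser.
    req = Request(TARGET_URL, headers={"User-Agent": agent})
    with urlopen(req, timeout=10) as resp:
        body = resp.read()
    print(f"{name:8s} -> HTTP {resp.status}, {len(body)} bytes")
```

A real test matrix would also vary geography, connection speed, and device, which is exactly what makes full coverage so expensive in-house.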

Most companies have decided it is just too costly to ensure quality and reliability of their web sites using traditional test solutions. Even testing a fraction (typically only 5%) of anticipated user activity can cost millions of dollars annually to maintain. Thus, web site performance issues continue to grow at a faster rate than actual adoption.

In fact, the dirty little secret inside most Web 2.0 companies today is that, more often than not, no performance testing is actually done before a new web site goes live. That means most companies are relying on their customers to find their performance issues for them...literally turning their users into the auto industry's equivalent of "Test Crash Dummies."

How long before consumers say "enough"?

One of the biggest drivers of the Web 2.0 generation has always been convenience. Access and ease of use created the 300 million downloads of Apple's App Store applications, so what happens when convenience is undermined by slow downloads, applications that don't work correctly, and site outages? Customers go elsewhere. In addition, first-generation applications had little or no competition, so to date these companies have enjoyed a bit of a honeymoon period with their users.

However, with success comes competition, and with competition comes a much higher standard. Take Apple and their relatively new App Store. On day one of shipping the new 3G iPhone, there were a few hundred terrific new applications available. Many of these applications shared similar functionality, just with different pricing models. You would think free would win, right? Not really. Many of these applications could not even be downloaded; they simply crumbled under the stress of demand, never to be seen again. In another example, Amazon estimates that every 100ms of latency on their site could cost as much as 1% of their sales...that's $180M/yr. Further, Google estimates that for every half second it takes to load a search page, they lose 20% of their traffic. The bottom line with competition: latency and performance matter, proving once again that even if you have the best functionality in your newest game or service, you may not be successful. If it takes 10 minutes to download or 30 seconds to access, people will look elsewhere.
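Those two Amazon figures are mutually consistent, as a quick back-of-the-envelope check shows. The $18B annual revenue baseline below is an assumed round number implied by the quoted claim, not a figure from the article:

```python
# Sanity check on the latency-cost claim above.
# Assumption: annual revenue of roughly $18B, implied by "1% = $180M/yr".
annual_revenue_usd = 18_000_000_000   # assumed baseline, not a cited figure
loss_fraction_per_100ms = 0.01        # 1% of sales per 100 ms of latency

loss_per_year = annual_revenue_usd * loss_fraction_per_100ms
print(f"Estimated cost of 100 ms of latency: ${loss_per_year / 1e6:.0f}M/yr")
# -> Estimated cost of 100 ms of latency: $180M/yr
```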

Help is on the way.

Cloud Computing Offers Some Hope in Improving Web Site Performance
With the advent of cloud computing, new testing methods are becoming both affordable and available where previously they did not exist. Unstable economic times and the burden of maintaining large IT resources are driving companies to the cloud to save costs, and testing in the cloud is an ideal application. New testing tools from companies like SOASTA, which leverage the access, availability, and affordability of cloud computing networks, enable companies that rely on the web to simulate hundreds of thousands of users hitting web applications located behind firewalls. With this ability, companies can now thoroughly test a web application's performance and capacity before putting it into production. They can even simulate user experiences from all around the world: it is now possible to simulate a student in Paris, a bank manager in Hong Kong, and a mother in Florida all sending the same eCard from Hallmark. That gives Hallmark a much better understanding of future customer experiences, without turning actual customers into "Test Crash Dummies." No longer are sites limited to testing only a fraction of expected traffic under limited conditions; companies can now test their applications' performance in the real world, not just in an internal test lab.
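To make the idea concrete, here is a minimal, hedged sketch of the kind of load generation such platforms perform at far larger scale. It is an illustrative toy, not SOASTA's product or API: the target URL and user counts are assumptions, and a real cloud test would distribute these virtual users across many regions and connection speeds:

```python
# Minimal concurrent load-test sketch (illustrative only).
# A cloud platform would run many of these workers from nodes worldwide.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "https://example.com/"   # hypothetical endpoint under test
CONCURRENT_USERS = 50                 # scaled into the hundreds of thousands in the cloud
REQUESTS_PER_USER = 10

def simulate_user(_: int) -> list[float]:
    """One virtual user issuing sequential requests, recording latencies."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    # Run all virtual users concurrently and pool their measurements.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(simulate_user, range(CONCURRENT_USERS))
    all_latencies = sorted(t for user in results for t in user)

    print(f"requests: {len(all_latencies)}")
    print(f"median:   {statistics.median(all_latencies) * 1000:.0f} ms")
    print(f"p95:      {all_latencies[int(len(all_latencies) * 0.95)] * 1000:.0f} ms")
```

The design point is the percentile report at the end: capacity planning turns on tail latency under concurrent demand, which is precisely what an internal test lab running a fraction of real traffic fails to expose.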

More Stories By SOASTA Blog

The SOASTA platform enables digital business owners to gain unprecedented and continuous performance insights into their real user experience on mobile and web devices in real time and at scale.



Most Recent Comments
J.A. Watson 01/09/09 01:00:08 PM EST

You should not be surprised that Skype service was unavailable for 12-24 hours without explanation. In August 2007 it was unavailable worldwide for several DAYS, and they have never bothered to give an explanation for that. The Skype program crashes, hangs, and freezes video, and they don't bother to provide any rational customer support. They block customer accounts and freeze prepaid funds, without bothering to answer queries as to why it was done or how to rectify it. User accounts are hacked, and money is stolen, and all Skype does is blame the users for not being careful enough.

The only thing that works well at Skype is their propaganda machine, as is evidenced by the number of Skype users cited in your article. Skype has nothing even remotely approaching 300 million users, although they insist on constantly citing that number. That is the total number of user accounts that have ever been registered at Skype, since the very first day, and it includes millions and millions of accounts that are abandoned, inactive, have never been used, or were created by the innumerable spammers and pornographers who infest Skype space today. The best estimates of real Skype users are about 10% of that number.