APM Spectrum: Frontend vs Backend Application Monitoring

This post was originally published on the Rigor Web Performance blog.

In conversations with clients and prospective clients, the same question inevitably comes up: Where does Rigor sit on the application performance management (APM) spectrum?

There are many ways to answer this question, but it is difficult to explain well without first understanding the technology background of the person asking.

When I first joined Rigor, I asked the same question, and for a long time I was told that we were a “frontend monitoring solution.” That was all well and good, but I didn’t really understand what that meant or how our technology differed from other “APM” technologies designated as “backend.”

So in this blog, I will explain the difference between frontend and backend APM technologies for anyone unfamiliar with the myriad buzzwords being thrown around the web performance industry today.

Backend Monitoring

Backend monitoring provides visibility into the performance of a client’s infrastructure, including the HTTP server, middleware, database, third-party API services, and more. Components can have multiple instances and can live in a single data center or be spread across data centers around the globe. Synonyms for “backend monitoring” include data center monitoring, infrastructure monitoring, and application performance monitoring.

Backend monitoring is helpful for resolving problems around the following:

  • Code Bugs
  • System Problems (Operating system issues, security issues)
  • Hardware Problems (CPU failure, disk failure, out of disk space)
  • Software Performance Problems

Frontend Monitoring

Frontend monitoring provides the finished view of your web application’s performance from the perspective of an end user, encompassing all third-party content. It provides insight into what your users actually experience when they visit your website, an experience that varies dramatically based on device, network, location, and a host of other variables. Synonyms for “frontend monitoring” include end user monitoring, user experience monitoring, and web performance monitoring.

Unlike backend monitoring, frontend monitoring offers more than one technique for measuring the end user experience, each with its relative strengths and weaknesses.

The first technique is called synthetic monitoring. Synthetic monitoring allows you to test and measure the experience of your web application by simulating traffic with set test variables (network, browser, location, device). Benefits of using synthetic monitoring can be found here.

The second technique is called Real User Monitoring, or simply RUM. RUM is a JavaScript tag that site owners insert on their web pages to track users’ interactions with the site. RUM tags report high-level metrics back to administrators, such as response time, server time, and the location and device each user is accessing from. Benefits of using RUM can be found here.
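To make “response time” and “server time” concrete, here is a minimal sketch of the kind of arithmetic a RUM beacon performs. The field names are modeled on the browser’s Navigation Timing API; the `mockTiming` object is a stand-in for `window.performance.timing`, so the numbers are illustrative, not measurements.

```javascript
// Sketch of RUM beacon math (hypothetical helper; `mockTiming` mocks
// the browser's window.performance.timing object).
function summarize(timing) {
  return {
    // time from navigation start until the page finished loading
    responseTime: timing.loadEventEnd - timing.navigationStart,
    // time the server spent producing the response ("server time")
    serverTime: timing.responseStart - timing.requestStart,
  };
}

const mockTiming = {
  navigationStart: 0,
  requestStart: 120,
  responseStart: 340,
  loadEventEnd: 2450,
};

console.log(summarize(mockTiming)); // { responseTime: 2450, serverTime: 220 }
```

A real beacon would gather these fields in the browser and POST them back to the monitoring vendor along with location and device details.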

Frontend monitoring is helpful for resolving problems around the following:

  • Third Party Content
  • Web page structure, organization, and weight
  • Location, network, or browser-related performance problems
  • Troubleshooting the effectiveness of mobile websites or responsive designs

What to Use?

Both frontend and backend monitoring technologies offer valuable insight into the performance of your application. Ideally, you should be leveraging all of the technologies listed above. However, if your budget makes employing all of them prohibitive, start with a frontend technology to test only your mission-critical web pages and scale your monitoring with your business. Frontend technologies tend to be the cheapest and simplest to implement, and understanding and improving your end users’ experience should be your goal from day one.

More of a visual learner? Check out this infographic!

Rigor vs APM

How to Identify and Resolve JavaScript Performance Problems


Websites today are larger than ever before, with average page size and number of page requests growing every month. As websites become increasingly JavaScript-heavy, it is important for site owners to understand the impact of JavaScript on their sites.

In this blog, I will outline how to identify whether poorly performing JavaScript files are a problem for your website and share some best practices for reducing the resulting performance issues.

To determine whether JavaScript is slowing down your site, run a performance test at webperformancegrader.com. Just plug in the URL for your home page and hit “Start Performance Test.” Within a few seconds, you will have a free report showing all of the sources of latency on your site. You can also run tests for any other highly trafficked pages, such as a landing or product page on an eCommerce site. The results will tell you a lot about your website, including information on each of the JavaScript files on your page.

Website Speed Test - Coastal

In the above picture, I ran a test on Coastal Contacts’ eCommerce homepage. At first glance, the thing that popped out to me was the number of resources on the page: over 300 assets were loaded. This number is exceedingly high, particularly for an eCommerce website, as nearly 75% of the top 1,000 websites make fewer than 150 requests.

The second item that stuck out was the page size breakdown. Nearly three-quarters of the page’s total size was JavaScript. This is in stark contrast to the average breakdown of the top 1,000 pages on the web:

Average Page Load Breakdown by Content Type

The above picture is from HTTP Archive and shows the content breakdown for the average website as of its last scan (2/15/2015). The average web page today is around 2 MB in size, and only about 300 KB of that (or 15%) is attributable to JavaScript, significantly less than the JavaScript share on Coastal Contacts’ page.
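The share math behind that 15% figure is simple; here is a small sketch using the HTTP Archive numbers above (figures in KB, and the helper name is mine, not a library function):

```javascript
// JavaScript's share of total page weight, rounded to whole percent.
function jsShare(jsKB, totalKB) {
  return Math.round((jsKB / totalKB) * 100);
}

// ~300 KB of JavaScript on a ~2 MB (2000 KB) average page:
console.log(jsShare(300, 2000)); // 15 (percent)
```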

How many JavaScript calls?

The next metric I checked was the number of JavaScript calls made during page load. By hovering over the JavaScript line of the graph key provided by webperformancegrader.com, I could see the number of JavaScript calls made for this specific page load: 92 pieces of JavaScript were loaded on the page. This is significantly higher than the average website, which loads 6–10 pieces of JavaScript.

We’ve established that there are entirely too many JavaScript files on this site and that, in aggregate, JavaScript accounts for an unusually large portion of the page load. These stats are inconsistent with modern performance optimization best practices. When optimizing your website for performance, there are three front-end techniques that you want to leverage:

  • Look to reduce the number of requests
  • Look to reduce the overall size of the content
  • Promote the simultaneous download of assets


Let’s tackle the first technique by reducing the number of JavaScript requests: concatenating multiple JavaScript files into fewer files. You may be asking, “Why is it important to reduce the number of files if the overall file size remains unchanged?”

Concatenate this

The answer is that reducing the number of requests reduces the amount of time a browser spends fetching assets. As you can see in the above waterfall chart of the page load, every request a browser makes takes a minimum of 20 ms. While that may not seem like much, by concatenating assets on sites like coastal.com with hundreds of requests, site owners can shave seconds off load time. There are many tools to help site builders with this; a great open-source tool for JavaScript concatenation is minify.
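A back-of-the-envelope sketch of that savings, using the ~20 ms minimum fetch overhead observed in the waterfall (the bundle count is illustrative, not a recommendation):

```javascript
// Rough estimate of per-request overhead eliminated by concatenation.
const MIN_FETCH_MS = 20; // minimum observed fetch overhead per request

function concatSavingsMs(originalRequests, bundledRequests) {
  return (originalRequests - bundledRequests) * MIN_FETCH_MS;
}

// Bundling 92 JavaScript files into, say, 4 bundles:
console.log(concatSavingsMs(92, 4)); // 1760 ms of request overhead avoided
```

Real savings vary with connection reuse and parallelism, but the direction is clear: fewer requests, less overhead.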

Size Reduction by Minifying Large JavaScript Files

The next step in the optimization process is to identify any JavaScript files that appear unoptimized or excessively large. Once again, taking a look at a page’s waterfall chart can help with this.

Candidate for Minification

When scanning down the waterfall chart, look for JavaScript files that take longer than 200 ms to load; any that do are likely candidates for minification. Once again, there are many tools that can minify your JavaScript automatically, including the tool I recommended above (minify).
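That scan is easy to automate if you have the waterfall data in hand. A minimal sketch, with mock entries shaped like simplified waterfall rows (the field names are assumptions, not any particular tool’s export format):

```javascript
// Flag JavaScript files slower than a threshold as minification candidates.
function minifyCandidates(entries, thresholdMs = 200) {
  return entries
    .filter((e) => e.type === "js" && e.durationMs > thresholdMs)
    .map((e) => e.url);
}

const waterfall = [
  { url: "/js/app.js", type: "js", durationMs: 540 },
  { url: "/js/tiny.js", type: "js", durationMs: 35 },
  { url: "/css/site.css", type: "css", durationMs: 310 },
];

console.log(minifyCandidates(waterfall)); // [ '/js/app.js' ]
```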

What to do with 3rd Party JavaScript?

When dealing with JavaScript files managed by a 3rd-party provider, it is best to be conservative and judicious in deciding which files to host on your website. Last week I wrote a blog demonstrating the need for organizations to require SLAs for all 3rd-party JavaScript tags hosted on their sites, as these scripts are some of the worst performance offenders.

Dealing with 3rd party JavaScript can be a tricky task, but it is an important one. Scripts from 3rd party providers can be a serious source of performance problems, particularly when the provider in question experiences a service disruption or downtime. Here are some questions (and answers) you might want to ask yourself once you’ve made the decision to utilize 3rd party scripts:

Q: Where do I want to host the script(s)?

A: Overall, there is no appreciable benefit to hosting JavaScript files on someone else’s servers. However, there is data suggesting that a small subset of IP addresses serve a large volume of JavaScript files very efficiently; the majority of these addresses belong to Google and Akamai servers. (Note: some newer CDNs offer file-optimization services that can noticeably improve the delivery of JavaScript files.)

Q: Are the scripts fully optimized and concatenated?

A: Looking at the script in the context of a waterfall chart can help you identify the total weight and performance impact of the file. If there is an opportunity for optimization, then minification or concatenation techniques outlined above can be impactful. These techniques can be done manually before deploying the script, or you can leverage a CDN such as Instart Logic that can perform these and other techniques automatically.

Q: Are the scripts loading asynchronously?

A: If the answer is no, find out if your vendor has an asynchronous version of the script that you can use instead. Many vendors now offer widgets that load asynchronously, so make sure you are using the latest version. There are two major reasons to use asynchronous scripts. First, if the 3rd party goes down or is slow, your page won’t be held up trying to load that resource. Second, it can make that resource more efficient and speed up page loads.
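For illustration, here is a common pattern for loading a 3rd-party tag asynchronously instead of with a blocking script tag. In a browser, `document` is the real DOM; the stub below exists only so the sketch runs outside a browser, and the vendor URL is hypothetical.

```javascript
// Async script-injection sketch. The stub stands in for the browser DOM.
const doc = typeof document !== "undefined" ? document : {
  createElement: () => ({}),
  head: { appendChild: () => {} },
};

function loadAsync(src) {
  const s = doc.createElement("script");
  s.src = src;
  s.async = true; // don't block parsing/rendering while the vendor responds
  doc.head.appendChild(s);
  return s;
}

const tag = loadAsync("https://vendor.example.com/widget.js");
console.log(tag.async); // true
```

The key property is `async = true`: the browser fetches the script in the background and executes it when it arrives, rather than pausing page construction.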

We’ve gone over a lot of information about improving your site’s performance by identifying and fixing JavaScript-related performance issues. To recap, try leveraging tools such as webperformancegrader.com to identify the performance cost of each piece of JavaScript on your website. Once you identify key offenders, leverage techniques to reduce the size of the JavaScript and the number of browser requests your website requires. Understanding the impact of 3rd-party JavaScript is also important: ensure that you are using the most recent versions of all 3rd-party resources, and look at how hosting providers or CDNs can improve the delivery of these resources.

Service Outages Illustrate Need For 3rd Party SLAs


It is still early in 2015, but there have already been numerous instances of web technology platforms experiencing service disruptions lasting over an hour. These disruptions often have a ripple effect across the internet, impacting anyone that integrates with the service.

A couple of weeks ago, I wrote an article detailing one such outage related to New Relic’s RUM beacon. Earlier this year, a Facebook outage caused major service interruptions for its major partners Instagram, Vimeo and many others. A Facebook spokesperson told The Verge that the outage “occurred after we introduced a change that affected our configuration systems.”

facebook outage

Outages like this that have a wide-ranging impact bring to light the need for organizations to have service level agreements (SLAs) with all 3rd-party integrations and services that they host. These agreements are already commonplace for “mission critical” services such as cloud infrastructure, CDNs, and ISPs. However, seemingly harmless integrations with services such as social media networks, ad-serving networks, trackers, or marketing analytics plugins have largely escaped this practice.

These “harmless” integrations can quickly become the source of serious performance problems for your website or web application. This is why I encourage sites to have clearly defined SLAs in place for all 3rd-parties that they host and to monitor these services with a neutral 3rd-party monitoring platform to report on the availability of the service.

Right now, most 3rd parties do not offer real-time monitoring of their scripts, and the few that do monitor their scripts internally. At Rigor, we often advise customers to be wary of reports on vendor performance that originate from the vendors themselves. We operate under the simple mantra “trust, but verify.”

When working with 3rd-party providers that require hosted scripts on your site, here are some items to discuss when negotiating a clear SLA:

  • An annual percentage uptime guarantee
  • A process for reimbursing site owners if uptime drops below the guarantee
  • A neutral 3rd party monitoring platform to report on the availability of the service
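When negotiating the uptime guarantee, it helps to translate the percentage into concrete hours of permitted downtime per year. A quick sketch of that conversion (the SLA percentages shown are examples, not recommendations):

```javascript
// Convert an annual uptime guarantee into allowed downtime per year.
function allowedDowntimeHours(uptimePct) {
  const hoursPerYear = 365 * 24; // 8760
  return hoursPerYear * (1 - uptimePct / 100);
}

console.log(allowedDowntimeHours(99.9).toFixed(2));  // '8.76' hours/year
console.log(allowedDowntimeHours(99.99).toFixed(2)); // '0.88' hours/year
```

The difference between "three nines" and "four nines" is nearly eight hours of downtime a year, which is why the exact number in the SLA matters.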

Hopefully as site owners become more aware of the impact of “harmless” 3rd-party integrations and services, the demand for properly optimized scripts, improved monitoring and reporting, and greater organizational accountability will rise.

The Hidden Cost of Real User Monitoring (RUM)


Every day, Rigor robots make 1.5 million visits to over 350,000 web pages and load 23 million assets in browsers around the world. Because of the large amount of data we sift through each day, we are able to detect many internet outages in real time. Yesterday we saw a sizable outage across many of our customer and research accounts.

When we dug into specific runs, we kept seeing the same issue pop up: a piece of JavaScript on each page was unable to connect back to its server and blocked other content from loading. Below is a waterfall chart from one of our customers who experienced latency due to the beacon. (Not familiar with waterfall charts? Check out “How to Read a Waterfall Chart.”)

waterfall picture rum latency

Websites such as TOMS, Citrix, Airbnb, and hundreds more experienced multi-second delays in their page loads due to issues loading a JavaScript beacon (also known as a tag). Ironically, this beacon is used to track end users’ web performance experience.

After some digging, we quickly discovered that the culprit was New Relic’s browser collection beacon. This piece of JavaScript is how New Relic collects its Real User Monitoring (RUM) data. Our suspicion was confirmed shortly thereafter via New Relic’s public status page, as seen in the picture below:

New Relic Browser Collection

In all, the New Relic outage lasted nearly two full hours and caused significant UX problems for end users. This got me thinking about the two blogs I wrote in December describing the benefits and shortcomings of RUM and how it complements synthetic monitoring.

One key differentiator between RUM and synthetics that I failed to mention in those blogs is the external nature of synthetics. Despite all of the benefits of RUM, loading any external content on your site carries risk. Unlike RUM, synthetic monitoring is completely external and does not require any client-side installation or the insertion of a web tag. As the New Relic outage so clearly illustrated, RUM at its worst can be the source of serious performance problems, and at its best it is still another asset your users must load at a time when web pages are already growing and slowing at an alarming rate.

Black Friday and Cyber Week eCommerce Results


The holiday shopping season is almost over as we kick off Christmas week. Hopefully you took advantage of Black Friday and Cyber Week to knock out some of your Christmas shopping at a bargain. Now that the dust has settled from the holiday shopping bonanza, it’s time to look at some of the results and the key eCommerce trends to watch in 2015.

eCommerce Holiday Results 2014 vs 2013

Here are some of the key takeaways from IBM’s annual Holiday Digital Analytics Benchmark Reports:

  • Overall Online Sales Grew: Online sales set new records on Thanksgiving Day and Black Friday with $1.33 billion and $2.4 billion respectively. Cyber Monday remained the busiest eCommerce day of the year, with sales growing 8.5 percent over 2013.
  • Big Year for Mobile: Mobile traffic accounted for over 50% of total site traffic this year, an increase of 25%!
  • Desktop is still King: Despite the heavy uptick in mobile traffic, 72% of sales on Cyber Monday and 78% of sales on Black Friday were made from a desktop device.
  • Smartphones vs Tablets: Smartphones accounted for a larger share of traffic than tablets (34.7% vs 14.6%), but tablets accounted for more sales and a larger average order value (16% of sales and $126.50 for tablets vs 11.8% and $107.55 for smartphones).
  • iOS vs Android: iOS users were far more active online than Android users. Android users accounted for just 4.4% of online sales against iOS’s 17.4%.

Key Performance Trends

  1. Third Party Providers: Every year, numerous websites encounter performance and capacity issues on Black Friday and Cyber Week. This is inevitable as record numbers of buyers turn to online retailers for their holiday purchases. This year, the root of many issues was tied to the prevalence of 3rd-party providers. For example, troubles related to leading video advertiser LiveRail caused over 400 websites to experience performance problems including page load latency, dropped packets, and in some cases downtime.
  2. Large Page Size = Slower Load Times: Many have reported on the increasing bloat of web pages. In fact, since 2010 the average page size of the top 1,000 visited web pages is up 186%. This trend shows when we compare the page sizes of top eCommerce sites on Black Friday 2014 vs 2013. According to metrics published shortly after the holiday weekend, desktop and mobile eCommerce sites were respectively 25% and 88% larger than on Black Friday 2013. This correlated with a median load time increase of 20% for desktop websites and 57% for mobile sites over the same time frame.
  3. Optimize for Mobile: Despite the record amount of holiday traffic from mobile devices (over 50% of total traffic), less than a third of total online purchases were made on smartphones and tablets. Could the low conversion rates be a function of the slower load times of mobile sites?

Lesson from the Holiday Season: Prepare for Downtime

While all retailers should strive to eliminate downtime and minimize performance problems during the holiday season, it is important to have recovery plans in place to mitigate downtime when it does occur. These plans should be cross-departmental and include action items for PR, marketing, and customer service. Building and maintaining brand equity is about more than ensuring a perfect end-user experience at all times; that is impossible. Organizations must have plans to build customer loyalty even when their systems are failing. Here is an example of two retailers, one that was prepared for downtime and one that was not:

Examples of Downtime Preparedness

On the left we see Cabela’s error page during Black Friday weekend. Cabelas.com experienced extended periods of downtime throughout the holiday shopping season, but the messaging on its error page was never updated. All users were told that the site was “down for updates” and “routine system maintenance” and that orders should be placed via Cabela’s customer service number. Not the best message to be serving to your customers on the busiest shopping weekend of the year. Contrast that with Staples’ error message on the right. Staples understood that downtime during the holiday rush was a very real possibility and created an error screen that acknowledged its performance problems and provided a customer service number and the weekly sales ad for customers to peruse during periods of downtime.


It’s clear that eCommerce buyers are trending toward mobile devices and are becoming increasingly comfortable shopping online. eCommerce retailers need to focus on optimizing their mobile pages to deliver a more consistent and expeditious shopping experience. If possible, retailers should reduce page size by optimizing images, reducing the number of requests, and moving non-essential elements to the end of the page load (you can check out this presentation for more page optimization suggestions). Lastly, retailers need to prepare for downtime and maintain transparency with their customers to build trust and preserve customer equity even in times of crisis.

Benefits of Using RUM with Synthetic Monitoring


Last week I wrote a blog that defined and analyzed differences between the two predominant end-user monitoring techniques leveraged in the market today: Real User Monitoring (RUM) and Synthetic Monitoring.

I ended that post with a list of benefits provided by synthetic monitoring tools. This week I want to detail some of the shortcomings of using a “synthetic-only” approach and discuss how leveraging synthetic monitoring coupled with RUM is the most effective and holistic monitoring practice.

Rum heart Synthetic

Synthetic Deficiencies

Last week, I detailed many of the benefits offered by synthetic monitoring tools. When it comes to alerting on performance problems or major service disruptions, testing pre-production environments, or baselining performance, synthetic monitoring is unrivaled. However, synthetic monitoring is not without deficiencies. As its name suggests, it does not measure the experience of actual users. Instead, it creates synthetic traffic from data centers owned by the vendor and/or customer specifically for the purpose of measuring performance.

Consequently, visibility into performance is limited to the number of “synthetic tests” built and managed by the user. For example, if CNN creates a synthetic test for the homepage of their site they will have visibility into the performance of that specific page, but will be blind to performance problems elsewhere.

As the above example suggests, the primary problem with synthetic monitoring is scaling the scope of your application monitoring. For synthetic monitoring to be valuable, you must understand and monitor all of your business-critical web pages, services, and transactions. With a synthetic-only approach, failing to create tests for any of your high-traffic or mission-critical pages and services leaves you in the dark about performance issues elsewhere.

RUM Benefits

RUM is valuable because, unlike synthetic monitoring, it captures the performance of actual users of your website or web application regardless of their devices, browsers, or geography. In this sense, it is great for building the business’s understanding of performance.

Because there is no need to pre-define your important use cases (à la synthetic), RUM is great for generating reports and analyzing trends. As users go through the application, all of the performance timings are captured, so no matter which pages they see, performance data will be available. This is particularly important for large sites or complex apps where the functionality or content is constantly changing.

By leveraging RUM, a business can better understand its users and identify the areas of its site that require the most attention. Moreover, RUM can reveal the geographic or channel distribution trends of your users. Knowing these trends helps you better define your business plan and, from a monitoring perspective, identify key areas to target for optimization and performance improvements.
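As a sketch of that geographic analysis, here is how RUM samples might be sliced by region to find where users see the slowest loads. The samples are mock data; in practice they would come from the RUM beacon, and the field names are assumptions.

```javascript
// Median page-load time per region from RUM samples (mock data).
function medianLoadByRegion(samples) {
  const byRegion = {};
  for (const s of samples) {
    (byRegion[s.region] = byRegion[s.region] || []).push(s.loadMs);
  }
  const median = (xs) => {
    const a = [...xs].sort((x, y) => x - y);
    const mid = Math.floor(a.length / 2);
    return a.length % 2 ? a[mid] : (a[mid - 1] + a[mid]) / 2;
  };
  const out = {};
  for (const [region, xs] of Object.entries(byRegion)) out[region] = median(xs);
  return out;
}

const samples = [
  { region: "US", loadMs: 1200 },
  { region: "US", loadMs: 1800 },
  { region: "EU", loadMs: 2600 },
  { region: "EU", loadMs: 3000 },
  { region: "EU", loadMs: 2800 },
];

console.log(medianLoadByRegion(samples)); // { US: 1500, EU: 2800 }
```

A report like this is exactly what points you to the regions worth covering with dedicated synthetic tests.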

Geo-Location GA

Summary of RUM Benefits

  • Understand HOW your application is being used
  • Understand the real geographic distribution of your users and the impact of that distribution on the end user experience
  • Understand network or channel distribution and flow of your users
  • Ensure full visibility of application usage and performance

Synthetic and RUM: Better Together

If your organization has access to synthetic tools, RUM allows you to create synthetic tests that are more representative of your end users’ experience. A best practice is to use RUM to identify target areas for optimization and then create synthetic tests to monitor those pages from relevant geographic areas and channels going forward. This allows you to isolate and diagnose the root cause of latency or intermittent performance problems that are impossible to pin down with RUM data alone. A good illustration of this process can be found on this blog published by the performance team at Gilt Groupe.

In short, leveraging RUM with synthetic monitoring allows for a more cost-effective, accurate, and comprehensive monitoring experience.

Synthetic Monitoring vs RUM


Recently one of our customers, Gilt Groupe, posted a case study on their tech blog evaluating the value and merits of using synthetic monitoring (Rigor) to understand client-side performance trends and problems in real time.

At Rigor, we often find ourselves educating potential customers and business users on the differences between some of the different web performance monitoring methodologies available in the market and their various use cases. In particular, we have seen an uptick in interest for Real User Monitoring (RUM).

For those that are unfamiliar with either performance monitoring methodology, here is a brief definition of how each technique works:

  • Synthetic Monitoring – vendors provide remote (often global) infrastructure that visits a website periodically and records the performance data for each run. The measured traffic is not from your actual users; it is traffic synthetically generated to collect data on page performance.
  • Real User Monitoring (RUM) – vendors provide an agent (JavaScript) that is injected on each page and reports the page load data for every request made on each page. As the name suggests, this technique monitors an application’s actual user interactions.

Synthetic vs RUM

So which technique is more useful? The reality is that these two technologies are incredibly complementary. In Gilt’s case study, Eric Shepherd (Gilt’s principal frontend engineer) does an excellent job defining the benefits of using both technologies:

Both RUM and synthetic monitoring give different views of our performance, and are useful for different things. RUM helps us understand long-term trends, and synthetic monitoring helps us diagnose and solve shorter-term performance problems.

Eric gives us a great start when detailing the benefits of each solution. Below I dive into specific benefits synthetic monitoring tools provide that RUM does not.

Top Benefits of Using Synthetic Monitoring:

  1. Monitor in a Controlled Environment: Synthetic monitoring allows users to measure the performance of their websites or applications with a set of controlled variables (geography, network, device, browser, cached vs. uncached) over time. This is valuable because it blocks out much of the noise reported with RUM. As a result, users can identify latency and downtime promptly and scientifically isolate and diagnose the root cause of performance issues.
  2. Understand the Performance of 3rd Parties: Unlike RUM, synthetic monitoring tools provide waterfall charts for every website visit generated by the vendor. These charts give full page asset load times, allowing users to attribute every millisecond of load time to a piece of web content. For example, users can understand the impact of switching ad providers, switching content delivery networks, or adding a new marketing analytics plugin.
  3. Benchmarking: Synthetic monitoring doesn’t require any installation or code injection on your website to start. Consequently, users can leverage synthetic tools to monitor the competition and effectively benchmark performance against key competitors over time.

Staples Benchmark

  4. Test at Every Stage of Development: Synthetic monitoring can be used to test websites and web applications in pre-production. Pre-production test results can be used to baseline performance and set alert thresholds once applications are live.
  5. 24/7 Monitoring: If an issue arises during off-hours or other low-traffic periods, synthetic monitoring provides the insight you need to quickly identify, isolate, and resolve problems before they affect users and negatively impact revenue and brand equity.
  6. Baseline and Analyze Performance Trends Across Geographies: With synthetic monitoring, baseline tests can be set up to mirror the way your end users access your applications. These baseline tests can monitor key transactions across geographic locations while testing from multiple browsers and devices.
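The controlled variables a synthetic test fixes (location, browser, device, and so on) combine multiplicatively into the matrix of checks to schedule. A small sketch of that expansion, with illustrative variable values (not any vendor’s actual configuration format):

```javascript
// Expand controlled test variables into the full matrix of checks.
function testMatrix(vars) {
  return Object.entries(vars).reduce(
    (acc, [key, values]) =>
      acc.flatMap((combo) => values.map((v) => ({ ...combo, [key]: v }))),
    [{}]
  );
}

const matrix = testMatrix({
  location: ["us-east", "eu-west"],
  browser: ["chrome", "firefox"],
  device: ["desktop", "mobile"],
});

console.log(matrix.length); // 8 scheduled checks (2 × 2 × 2)
```

The multiplicative growth is also why synthetic coverage must be scoped to business-critical pages: every new variable doubles or triples the number of checks.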

After this quick overview, you can see how synthetic monitoring provides value in many ways that RUM cannot, but synthetic monitoring alone may fall short in some areas. In my next post, I look at the benefits of RUM and how its strengths complement the gaps of a “synthetic-only” monitoring approach.



Understanding eCommerce Site Performance for the Holidays


It’s that time of year again. Imagery of turkeys and cornucopias has started to litter sidewalks and clutter aisles in retail outlets. Children build lists for Santa Claus, and parents scour the web in search of this year’s hottest new toy.

Yes, it’s the holiday season, meaning consumers are about to open their wallets in a big way over the next two months as consumer spending reaches its highest point of the year. eCommerce retailers must ensure that they are adequately prepared for the load of traffic about to hit their websites.

The State of eCommerce in 2013

IBM’s seventh annual holiday readiness report detailed many startling trends related to eCommerce buyers:

  • eCommerce sales have increased 10% year over year
  • Mobile sales for Black Friday and Cyber Monday leapt 43% and 55% respectively over the prior year
  • Average order value and items per order reached new peaks
  • Consumer attention hit all-time lows as average session length declined and the percentage of single-page sessions increased

Addressing Consumer Attention Concerns

Wait, what was that last piece? Yes, today’s eCommerce market is more vibrant than ever, with a multitude of niche and specialized outlets and storefronts for consumers to choose from. However, while online shopping has grown in popularity, user expectations of online retailers have similarly increased. Check out these numbers:

  • In 2013, the average bounce rate per website was over 34%, up from the 2011 average of 28%.
  • The average session length on an eCommerce site declined 40 seconds from the previous year.
  • Page views per session declined from 9 pages in 2011 to 7 in 2013.

As online sales make up a larger percentage of a company’s revenue, a functioning and fast website is essential to improving customer conversion and retention rates. Let’s take a look at some of the ways website performance can affect conversion rates and revenue.

Optimization and Speed

Slow speeds play a huge role in user abandonment on a website. A Kissmetrics study found that 40% of shoppers who experience load times of 3 seconds or more will abandon a website.

Competitor infiltration due to website abandonment, loss of repeat purchases and customer loyalty, negative brand equity, impaired employee productivity and loss of advertising revenue are all hidden costs that occur as a result of a broken or slow website.

More than $3 billion in lost sales due to poor performance.

Of the total shopping carts abandoned on the web every year, 18% can be attributed to slow pages, which translates to more than $3 billion in lost sales across US eCommerce sites due to poor performance. The revenue an eCommerce company can lose from poor performance makes investing in performance tools a worthwhile necessity.
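To make the arithmetic behind a figure like that concrete, here is a minimal sketch. The 18% share is the statistic cited above; the total abandoned-cart value passed in is a hypothetical input (the post does not state it), chosen only to show how a dollar estimate of this kind is derived.

```javascript
// Rough estimator for sales lost to slow pages.
// slowShare (18%) is the figure cited above; the total abandoned-cart
// value is a hypothetical input -- the post does not state it.
function lostToSlowPages(totalAbandonedValue, slowShare = 0.18) {
  return totalAbandonedValue * slowShare;
}

// Example: a hypothetical $17B in abandoned carts would imply
// roughly $3.06B attributable to slow pages.
const estimate = lostToSlowPages(17e9);
```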

Downtime and Reliability

Downtime on a website results in a significant loss of sales and customer equity. If a customer experiences downtime the first time they use a website, they are highly likely to leave for a competitor and may never return. Even for returning customers, downtime closes the door on future sales.

An unreliable website that experiences downtime is similar to closing the doors of a brick-and-mortar retail store during operating hours. Simply put, if the doors are closed, consumers can’t shop.

Improving Performance: What’s out there?

Once you’ve decided that improving web performance is worth the investment, what do you invest in? Two predominant solutions exist in the front-end web performance space to monitor and measure the ongoing health and reliability of your site as it relates to the end-user: Real User Monitoring and Synthetic Monitoring.

Real User Monitoring (RUM)

Real User Monitoring (RUM) is a technology that uses JavaScript injected into the browser to passively collect information from users as they engage with your website. This provides accurate and comprehensive consumer data that is beneficial when addressing the needs of your website from the customer’s perspective. For example, if you can focus on the aspects of your site that are used most, you have a better shot at improving the customer experience.
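As a rough illustration of what such an injected script collects, here is a minimal sketch built on the browser’s Navigation Timing data. The metric names, the `/rum-collect` endpoint, and the beaconing approach are illustrative assumptions, not any vendor’s actual implementation.

```javascript
// Hypothetical RUM sketch: derive a few load metrics from a
// Navigation Timing-style object (all values are millisecond timestamps).
function pageLoadMetrics(t) {
  return {
    ttfb: t.responseStart - t.requestStart,                   // time to first byte
    domReady: t.domContentLoadedEventEnd - t.navigationStart, // DOM ready
    fullLoad: t.loadEventEnd - t.navigationStart,             // full page load
  };
}

// In a browser, a RUM snippet might beacon these once the page loads:
//   window.addEventListener("load", () => {
//     const m = pageLoadMetrics(performance.timing);
//     navigator.sendBeacon("/rum-collect", JSON.stringify(m)); // endpoint is hypothetical
//   });
```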

RUM provides value in an omni-channel environment, as it captures the experience of all users accessing your site from any device and geographic location. Most eCommerce marketers already have access to a form of real user monitoring if they use Google Analytics.


There are apparent limitations to this solution when it comes to diagnosing the source of issues on your site: this approach does not generate waterfall charts, and it lacks the ability to alert you to problems rapidly.

For websites that receive heavy traffic on a regular basis, such as YouTube or Amazon, a RUM solution will gather so much data that it can hamper your ability to decipher what is actually important.

During peak periods, such as the holiday season, the vast amount of information being gathered can prevent you from being able to pinpoint certain problems due to the variety and scale of user sources interacting with the website or application.

Lastly, because RUM is a piece of JavaScript that users load when they access your site, it can itself be a cause of latency and poor performance.

Synthetic Monitoring 

Another technique for monitoring web performance is synthetic monitoring. Synthetic monitoring uses real browsers to access a website or application and mimic consumer actions. These solutions leverage browsers in multiple locations with varying internet connections to mock the end-user experience, allowing organizations to quickly and preemptively diagnose website problems and errors.

Because synthetic monitoring tools can perform transactional tests, organizations can also identify functional issues on their site (such as the inability to complete a checkout process). Most tools, like Rigor, provide alerting to notify users if the site is slow or experiencing downtime.
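The alerting behavior described above can be sketched in a few lines. This is an illustrative simplification under assumed conventions (a 200 status for success, a 3-second slow threshold), not how any particular tool implements it.

```javascript
// Hypothetical sketch of synthetic-check alerting: classify one
// probe result so the monitor knows whether to raise an alert.
function classifyProbe(statusCode, loadSeconds, slowThreshold = 3.0) {
  if (statusCode !== 200) return "down";          // error or unreachable -> downtime alert
  if (loadSeconds > slowThreshold) return "slow"; // loaded, but too slowly -> speed alert
  return "ok";                                    // no alert
}
```

A real synthetic monitor would run a probe like this on a schedule from multiple locations and browsers, and notify the team whenever a result classifies as “down” or “slow”.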


This preventative approach puts the company in the driver’s seat of its performance monitoring and reduces the probability that end users and customers will be negatively affected.

A limitation of synthetic monitoring is that, unlike RUM, it does not capture actual user interactions. However, due to the controlled nature of synthetic monitoring, organizations can more quickly isolate and identify the causes of performance issues to ensure that any service disruption that occurs is resolved expeditiously.


In an increasingly competitive eCommerce landscape where holiday sales account for nearly 20 percent of total annual revenue, retailers must ensure that their websites are adequately prepared for the holiday rush. Consumer attention is at an all-time low, while market saturation is at an all-time high, meaning websites must perform at a high level relative to their peers.

Leveraging performance tools is a must and understanding your options is the first step to preparing your website for the chaos that is the holiday season.

Unsure how fast your site is? Try our free web tool, webperformancegrader.com, to see how your website performance stacks up against key competitors.


In Honor of Sarah “Tay” Lever

In today’s world it is commonplace to embellish or exaggerate when discussing the qualities or character of another. How many times a day do we hear someone say, “He/she is one the most” followed by some complimentary adjective, such as “loving, selfless, compassionate”, and the list goes on and on. We have all been guilty of this simple indulgence, and while in a singular act, it may appear to be a harmless cordiality, when this exaggeration is enacted on a grand scale, it slowly erodes the weight that these phrases were intended to carry.

We don’t take the time to realize that our actions have created unintended victims: the individuals among us who are truly exemplary; those who truly deserve all the gravity that these words were meant to hold. Anyone who knew Sarah Lever, or “Tay” as she was affectionately known by her family, understands the tragedy that results from our overindulgence of powerful rhetoric, for she deserves so much more than our now-empty words can give her.

I love you with all of my heart and soul.

So here I am, tasked with what amounts to painting the Mona Lisa, armed with no more than a pencil. As I find myself trying to put into words just how special Tay truly was, I am reminded of how limiting language can be and find that the best words I can muster are the same overused words and expressions that will be lost on those who did not know her. Yet, I must at least make an attempt, even if it falls short of capturing just how wonderful a human being she truly was.

The word that I found most fully describes Tay is love. I cannot begin to describe the depth of the love that she had for God and all of his creations. Nor can I fully express the unending love and compassion that she showered freely and equally upon strangers and family alike. Though the number of people loved by her was vast, we all still received more love than we could ever return, for she loved with a vigor that could only be matched by our Lord God. She delighted in bestowing praise and thanks to those around her for their compassion and love, yet she never realized that we were returning a mere fraction of the love she had shared with us. This was what was special about her love: it inspired more love. Anyone who was touched by her was compelled to touch others in a similar way. As a result, her legacy of love can be seen and felt on this Earth across city, state, and country lines. Still, despite all my praise, I can never fully describe the beauty, joy, and compassion that she exuded every day of her life, nor can I express the joy that I have experienced from having her in mine; I will forever love and miss her until the day I am finally reunited with her in heaven.