September 23rd, 2011 by Jude Allred

Experiments with Google Page Speed Service

Our oldest product

Let me tell you a little bit about http://www.fogcreek.com/:

  • It’s old; it’s hosted on IIS 6 alongside a horde of other websites.
  • It’s the first place people go when deciding whether to use FogBugz or Kiln.  It’s pretty important to us.
  • Recently, we started to treat the website as a product. It has received substantial attention, primarily in the form of A/B tests.

We’re building some fancy new web servers for fogcreek.com, but in the meantime we’ve been accepted into the Google Page Speed Service beta. It seemed reasonable that it might provide us with some useful tools and, in the short term, give us a workaround for some of our IIS 6 woes.

There were four things that we hoped Page Speed Service would take care of (a quick way to spot-check them is sketched after the list):

  1. GZIP all of fogcreek.com’s static and dynamic content.
  2. Set far-future expires headers on our static content.
  3. Distribute our static content via a CDN.
  4. Fix or remove the broken ETags that IIS 6 is sending out.
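
Whether a given page already does these things (and, once PSS is live, whether the proxy actually added them) is easy to spot-check from a script. Here’s a minimal sketch covering items 1, 2, and 4, using only the Python standard library; the static-asset path is illustrative rather than a real Fog Creek URL:

```python
# Minimal sketch: spot-check compression, expiry, and ETag headers for a URL.
# Uses only the Python standard library; the static-asset path below is
# illustrative, not a real Fog Creek URL.
import urllib.request

def check_headers(url):
    request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(request) as response:
        headers = response.headers
        print(url)
        print("  Content-Encoding:", headers.get("Content-Encoding"))  # want "gzip"
        print("  Expires:         ", headers.get("Expires"))           # want far-future on static files
        print("  Cache-Control:   ", headers.get("Cache-Control"))
        print("  ETag:            ", headers.get("ETag"))              # the header IIS 6 gets wrong

check_headers("http://www.fogcreek.com/")
check_headers("http://www.fogcreek.com/css/site.css")  # hypothetical static asset
```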

Enter Google Page Speed Service

For this test, we brought Google Page Speed Service to life at w.fogcreek.com. This is a totally unused alias of www.fogcreek.com, so we were able to test PSS on an isolated mirror of our production environment.

After a small amount of setup, Page Speed Service was live!  (Setup basically consisted of adding a Google site-verification TXT record and a CNAME record mapping w.fogcreek.com to ghs.google.com.)
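
Those two DNS records are easy to confirm from a script as well. Here’s a sketch using the third-party dnspython package; exactly where the verification TXT record lives (the root domain rather than the subdomain) is an assumption on my part:

```python
# Sketch: confirm the two DNS records the setup required, using the third-party
# dnspython package. The location of the verification TXT record is an assumption.
import dns.resolver

# The CNAME that points the test subdomain at Google's proxy.
for record in dns.resolver.resolve("w.fogcreek.com", "CNAME"):
    print("CNAME:", record.target)      # expect ghs.google.com.

# The TXT record Google uses to verify ownership of the site.
for record in dns.resolver.resolve("fogcreek.com", "TXT"):
    print("TXT:  ", record.to_text())   # look for a google-site-verification=... entry
```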

The comparison tool that Page Speed Service provides was inconsistent in a couple of ways:

  • Its page load times differ from those reported by GTmetrix and by local testing
  • Its measurements change significantly across repeated runs

But the results look promising, and there might be a lot to love here. Here’s a report on Fog Creek’s home page load time. On the left is the load time without Page Speed Service, and on the right with it. PSS gave us a 10% improvement in initial page load time and a 25% improvement in repeat page load time. We got that for very little effort. Nice!

Here are GTmetrix reports on www.fogcreek.com from 9/23/2011.  First, the results without the Page Speed Service:

And the same tests on w.fogcreek.com, with Page Speed Service enabled:

Overall scoring improvements:

  • The Fog Creek landing page: Google Page Speed score +1%, YSlow score +10%, page weight reduced from 466 KB to 365 KB
  • The FogBugz landing page: Google Page Speed score +1%, YSlow score +12%, page weight reduced from 608 KB to 379 KB
  • The Kiln landing page: Google Page Speed score +0%, YSlow score +10%, page weight reduced from 520 KB to 411 KB

Interestingly, GTmetrix shows page load times (as measured by the onload event) that are consistently slower for w.fogcreek.com than for http://www.fogcreek.com/, usually by about 0.7 seconds.

GTmetrix’s tests also don’t line up well with webpagetest.org’s (the testing service Google promotes), which show consistently faster page loads than GTmetrix does. But it’s hard to draw conclusions, since incremental rendering of the page is an important factor that isn’t captured by the onload time.

For those of you who wish to dig deeper, take a look at the waterfall diagrams of a given page rendered with and without PSS. You’ll notice that in the PSS timelines the content is proxied across four of Google’s servers, the overall number of requests has decreased, and the waterfall has been rearranged significantly.

For example:

www.fogcreek.com/fogbugz, Without Google Page Speed Service:

 

w.fogcreek.com/fogbugz, With Google Page Speed Service:

So how did PSS handle the goals we wanted to meet?

  • PSS has compressed (GZIP) the content that it sends down
  • PSS has far-futured the vast majority (but not all) of our static content
  • PSS has properly configured our ETags
  • PSS is distributing our content via its CDN

Pitfalls

1 - PSS’s lossless image compression is lossy.  Our website uses a tiled, textured background image.  Page Speed Service attempted to optimize it, and in the process altered it.  For example, this image:

Became this one:

I’ve put together a test case at w.fogcreek.com/ImageTestCaseForPSS/

As part of the lossy conversion, PSS converted the image from a PNG into a JPEG.  I altered the source image files to be JPEGs instead of PNGs, and then PSS was able to optimize and bundle them without any difficulty.  I’d also note that the image optimization step (like all of PSS’s other optimizations) can be disabled to work around problems like this.
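
If you want to check whether a supposedly lossless optimization actually touched your pixels, diffing the decoded images is enough. A quick sketch, assuming the third-party Pillow imaging library and hypothetical filenames:

```python
# Sketch: verify that an "optimized" image is still pixel-identical to the
# original. Assumes the third-party Pillow library; the filenames are hypothetical.
from PIL import Image, ImageChops

original = Image.open("background-original.png").convert("RGB")
optimized = Image.open("background-optimized.jpg").convert("RGB")

# difference() requires images of the same dimensions; getbbox() returns None
# when every channel of every pixel matches.
diff = ImageChops.difference(original, optimized)
if diff.getbbox() is None:
    print("Pixel-identical: the optimization really was lossless.")
else:
    print("Pixels changed inside bounding box:", diff.getbbox())
```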

2 - PSS supports blacklisting URLs, but not blacklisting HTTPS URLs.  I knew from the start that PSS didn’t support HTTPS, and therefore our trial signup forms, e.g., https://www.fogcreek.com/kiln/try/, would have to be exempt from PSS’s optimizations.

PSS’s blacklist lets you exempt a specific page from being handled by PSS’s proxy; requests for that page go directly to your source server instead.  Unfortunately, you can’t configure HTTPS pages this way.  If you have a website with HTTPS content, you’ll have to move all of it to a separate subdomain, e.g. secure.fogcreek.com, in order to exempt it from PSS.
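
The move itself is mostly DNS and hosting work, but old links to the HTTPS pages still need to land somewhere. Purely as an illustration of the idea (our site actually runs on IIS 6, and this isn’t code we deployed), here is the shape of a redirect shim that could forward such requests to the dedicated secure subdomain:

```python
# Illustrative only: a tiny WSGI app that permanently redirects requests hitting
# the main host over to a dedicated HTTPS subdomain, preserving the path and
# query string. The production site runs on IIS 6; this is just a sketch.
def redirect_to_secure(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    query = environ.get("QUERY_STRING", "")
    location = "https://secure.fogcreek.com" + path
    if query:
        location += "?" + query
    start_response("301 Moved Permanently", [("Location", location)])
    return [b""]

if __name__ == "__main__":
    # Serve the shim locally for a quick look.
    from wsgiref.simple_server import make_server
    make_server("", 8000, redirect_to_secure).serve_forever()
```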

This was an inconvenient change to make, although not a hard one. Still, PSS’s blacklisting was put to good use.  We use a heavily modified version of the FairlyCertain A/B testing framework on our fogcreek.com properties.  Server-side A/B test frameworks and CDNs don’t play well together (a caching proxy is liable to store one variant of a page and serve it to everyone), but PSS’s blacklist lets us exempt any pages that contain active A/B tests.

3 - For the duration of the beta, once you bring PSS live on a given subdomain (e.g., w.fogcreek.com), you cannot reconfigure it to work with a different subdomain (e.g., www.fogcreek.com) without an additional beta acceptance.  I’ve reapplied to include www.fogcreek.com in the PSS beta, but have yet to be approved.  For this (and only this) reason, we’re blocked from using Google PSS in production.  If a Googler is reading this and would like to whitelist www.fogcreek.com for the beta, we’d be much obliged.

Impressions

Pitfalls and onload-event oddities aside, I’m very impressed with Google Page Speed Service.  I’m happy to see that they’re abstracting away the ever-present need to minify, compress, and bundle your static website content, as well as to optimize your images.  In our specific case we don’t really care about those features (we already optimize our images and bundle our scripts), but they’re a great fit for most sites.  Our case is probably more edge than common: we’re using PSS primarily to work around IIS 6, and it appears to succeed at that.  PSS’s CDN seems promising, but I don’t yet have any data on how it compares to, say, Amazon’s CloudFront.  Setting up PSS on our site was an absolute breeze, though, and I think it may turn out to be a great tool for web developers and site administrators.

In retrospect, setting up Page Speed Service on w.fogcreek.com instead of our production subdomain was a useful mistake.  It made it easy to use tools like GTmetrix to compare the two subdomains directly, it kept PSS’s configuration changes completely separate from our production environment, and it let us share the PSS version of the site with other people without making them configure their browsers to proxy traffic through PSS.