
Avoiding the feature graveyard – Part 1

(This post was originally featured on GeekWire)

Launching a new feature set is seldom a home run on the first pitch. It can take inning after inning of foul tips, singles, and doubles to iterate your way around the bases. Without the right approach to analytics, that new experience you’ve carefully crafted is more likely to end up in the feature graveyard — adding to your team’s technical debt rather than gracing your site’s homepage.

While there’s no shortage of analytics offerings, each with their own turnkey reporting capabilities, meaningful product analysis requires more than out-of-the-box answers. Anyone can copy and paste a report into a slide deck. Simply regurgitating data, or “data pukes,” as coined by analytics guru Avinash Kaushik, isn’t helpful. In fact, “data pukes” can do more harm than good because they leave the numbers open to whatever interpretation the reader brings.

Analysis, when done well, has a narrative style that considers its constituents, from the C-level to the individual contributor. It creates a visceral desire to know what happens next – a clear path through a sea of data that moves us forward and never leaves us feeling lost. It provides the backup for how conclusions were reached, particularly when contesting strongly held views. It blends the quantitative with the qualitative to supplement the ‘what’ and ‘how much’ with the ever-important ‘why’ behind the data. And, most importantly, complete analysis has actionable recommendations.

All of this might sound complex, like something suited only for a well-trained analyst. And while thoughtful analysis does take some rigor, it doesn’t require an advanced degree — just a curious mind and a systematic approach.

Here’s my battle-tested process for effective, in-depth feature analysis, from resourcing to presentation. Since there’s a lot of ground to cover, this is a two-part post. Part I covers planning, resourcing, tracking integration, and testing – the crucial first steps that make effective analysis feasible. Part II covers analysis and presentation. Because examples are helpful, let’s assume the feature is a multi-step process like an e-commerce product finder experience, but any multi-step flow will do. As for analytics packages, I’ll reference Google Analytics (GA) since it’s ubiquitous, but most of the top players have similar capabilities.

Resourcing

Proper analysis starts at a feature’s inception. Any product manager will tell you to define your success criteria early, whether it’s improved conversion, new accounts, or better retention rates. However, the core metric is just the tip of the iceberg. The first, and arguably most crucial, phase in analysis planning is resourcing, which happens on two levels. When you’re pitching a feature and advocating for time on the roadmap, block out additional development time beyond what’s needed for the initial launch: first, time to integrate your metrics (typically measured in hours, not days); second, time to iterate post-launch. Roadmaps fill up fast. If you don’t carve out time for iteration in advance, you’ll lose out. Set this expectation early; otherwise you’ll end up with valuable data and no capacity to act on it.

Planning & Integration

Thoughtful tracking planning and integration are crucial to effective analysis. For a complex feature set, I break tracking into two groups: business metrics and UX metrics. Your business metrics are typically pretty easy to measure thanks to GA’s Goals and Advanced Segment capabilities. User-experience metrics, like funnel completion rates, take a bit more configuration but are no less important. After all, a successful experience (in our example, one where the overwhelming majority of users complete the multi-step process) is far more likely to produce a successful business result. What’s key to both is clearly defined site Goals.

The free GA version offers up to 20 configurable Goals, which can be defined along a few dimensions: reaching a destination URL (great for our example), hitting an engagement threshold (like page views per session or time on site), or triggering a custom ‘Event’ (more on that later). Destination URL Goals can also include a funnel report that measures completion rates page-by-page – perfect for measuring the efficacy of multi-step processes. Since Goals are limited, reserve them for the high-level business and user-experience objectives that don’t change from release to release.
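If your multi-step flow lives on a single page, the funnel still needs a distinct URL per step. One common approach is sending a virtual pageview as the user advances. Here’s a minimal sketch, assuming the analytics.js ga() snippet is loaded; the /product-finder/* paths are hypothetical examples:

```typescript
// Minimal sketch: virtual pageviews that feed a destination-URL Goal funnel.
// Assumes the analytics.js ga() snippet is loaded on the page; the
// /product-finder/* paths are hypothetical examples.
declare const ga: (...args: unknown[]) => void;

// Send a virtual pageview as the user advances each step, so the Goal's
// funnel report can measure completion rates page-by-page.
function trackFinderStep(step: number): void {
  ga('send', 'pageview', { page: `/product-finder/step-${step}` });
}

// The Goal itself is configured in GA's admin UI: destination
// /product-finder/complete, with the step URLs listed as the funnel.
function trackFinderComplete(): void {
  ga('send', 'pageview', { page: '/product-finder/complete' });
}
```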

Detailed Usage Tracking

While GA Goals and segmentation analysis will tell you whether users are successfully completing your process and driving your business objectives, it’s the detailed usage tracking that will put you on the path to iterating on your feature. If you see a 20% drop between two stages of your funnel, knowing things like scroll depth, cursor insertion, error rates, or time on page will point to what to iterate on to bring up the larger success metrics. In GA, these interactions are measured as explicitly crafted ‘Events’, which must be custom-configured in the client-side code. Map your Events out field-by-field, and be sure to establish an intuitive labeling scheme so your reports align logically with your interface. Avoid generic names like “homepage-test” in favor of specific names like “primary-module-hover”. Remember, your data has multiple audiences, so try not to make the organization too complex. An approachable core data source is extremely valuable, especially as your team scales.
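As a sketch of what that field-by-field mapping might look like (again assuming the analytics.js ga() snippet; the category, action, and label names below are hypothetical examples):

```typescript
// Sketch of Event tracking with an intuitive labeling scheme. Assumes the
// analytics.js ga() snippet; the names below are hypothetical examples.
declare const ga: (...args: unknown[]) => void;

// Category identifies the feature, action the interaction, and label the
// specific control, so reports align logically with the interface.
function trackFinderEvent(action: string, label: string): void {
  ga('send', 'event', 'product-finder', action, label);
}

// Specific, interface-aligned names beat generic ones like "homepage-test".
trackFinderEvent('hover', 'primary-module');
trackFinderEvent('validation-error', 'email-field');
trackFinderEvent('scroll-depth', 'step-2-75-percent');
```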

Of course, most site changes don’t require such an extensive approach. You may just want to influence a single metric, like improved scroll rates or a reduced bounce rate. If you find yourself questioning how much to invest in tracking, ask yourself whether the item is something a user can experience. If it is, track it. Better to track it and not use it than to skip it and regret it later.

Qualitative Metrics

When preparing your integration, consider what your analytics package can’t tell you. Most quantitative analytics packages won’t tell you how helpful users find a feature, nor will they capture customer suggestions. And while a ‘faster horses’ approach rarely leads to great experiences, knowing the emotional drivers behind behaviors can produce powerful insights. That’s where qualitative research comes in handy — layering attitudinal findings on top of the behavioral.

The good news is that there are a number of qualitative research options, many of them fairly turnkey and inexpensive. From targeted email surveys and on-site intercepts to traditional focus groups or one-on-one interviews, you’ll find an option for your budget and skill set. Whichever method you choose, include structured questions around key topics like helpfulness and ease of use. Having answers on a numerical scale is great for benchmarking as you iterate. If your audience includes management consultants or board members, consider integrating something with industry benchmarks, like a Net Promoter Score (NPS). While qualitative research takes time, much of it can be done with little to no engineering resources, and you’ll find the investment time well spent. Quantitative findings invariably lead to qualitative questions, and vice versa.
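For reference, an NPS is straightforward to compute once you have the standard 0–10 “how likely are you to recommend us?” responses. A minimal sketch:

```typescript
// Net Promoter Score: promoters answer 9-10, detractors 0-6, passives 7-8.
// NPS = (% promoters) - (% detractors), yielding a score from -100 to 100.
function netPromoterScore(responses: number[]): number {
  const promoters = responses.filter((r) => r >= 9).length;
  const detractors = responses.filter((r) => r <= 6).length;
  return ((promoters - detractors) / responses.length) * 100;
}

// Example: 3 promoters and 2 detractors out of 7 responses -> NPS of ~14.
console.log(netPromoterScore([10, 9, 8, 6, 10, 3, 7]));
```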

Testing Your Tracking

Like any good feature, you need to test your tracking before launch. There are a couple of ways to approach this. First, you can simply use a browser debugging tool to make sure your Events are firing appropriately as you test the experience. Google offers a Chrome browser extension called the ‘Event Tracking Tracker’, which is worth the free download. Second, you can create a staging or test instance of your GA profile that monitors your test infrastructure rather than your production servers. I recommend doing both. There’s nothing worse than launching with botched tracking data. Finally, make a friend on the QA team. Not only can they help test your integration, they also contribute great ideas on what to measure.
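One way to set up that staging instance is to select the tracking ID by hostname, so test traffic never pollutes production data. A sketch, assuming analytics.js; the property IDs and hostnames are placeholders:

```typescript
// Sketch: route hits to a staging GA property instead of production based
// on hostname. Assumes analytics.js; the UA-XXXXXX-* property IDs and the
// hostname are placeholders.
declare const ga: (...args: unknown[]) => void;

const isProduction = window.location.hostname === 'www.example.com';

// Staging and QA traffic reports to a separate test property, leaving the
// production profile clean.
ga('create', isProduction ? 'UA-XXXXXX-1' : 'UA-XXXXXX-2', 'auto');
ga('send', 'pageview');
```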

With your metrics properly integrated and tested, you’re ready for launch (assuming all the other planets have aligned, of course). As a sanity check, verify that the data you want is being captured in your production GA instance the day after the feature goes live, but hold off on any meaningful analysis until at least a few days have passed. The amount of time and traffic you need varies depending on your business and the nature of the feature.

Check back soon for Part II, where I’ll cover conducting the analysis and presenting the results.