
Avoiding the feature graveyard – Part 2

This is Part II in a two-part series on feature analysis. You can read Part I here.

Now that you’ve integrated your metrics, tested them, and launched your new offering, it’s time to dive in and prepare your analysis. For multi-step features, like the product ‘finder’ experience in our example, I typically divide feature analysis into five buckets:

1. Traffic & Merchandising
2. Completion & Utilization
3. Customer Perception
4. Executive Summary
5. Recommendations

This structure serves as the outline for your presentation deck and nicely follows a user’s path from discovery to completion, which can be grounding should you have any linear thinkers in your audience.

Traffic & Merchandising

Before diving into a feature’s performance, offer some context on the efficacy of its promotion. There are two benefits to this approach. First, starting with feature discovery is a nice way to get your audience thinking about the customer’s journey. Second, feature merchandising, and the resulting traffic, is just as important as the feature itself. If your users cannot find the feature or, worse, don’t find its positioning compelling, then it doesn’t matter how great it is.

To tell this story effectively, weave screenshots of each promotional placement alongside the corresponding click data into your presentation. Include other, similarly treated promotional elements and their respective click-conversion numbers as well. Without this context, raw numbers standing alone are simply data and have little value. “Great, 50k unique visitors saw the ‘Finder’ last week. Is that good?” Finally, make sure you check your traffic sources. Understanding the nature of your traffic is key to understanding utilization. A large email blast to your house file can drive lots of less-qualified traffic, which can result in significantly different behaviors than a simple on-site promo.
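To make the point concrete, here is a minimal Python sketch of the kind of context table this section argues for: raw clicks turned into click-through rates so placements can be compared. The placement names and counts are hypothetical, invented purely for illustration.

```python
# Hypothetical impression/click counts per promotional placement.
# These numbers are invented for the example, not real data.
placements = {
    "homepage_hero":   {"impressions": 50_000, "clicks": 1_250},
    "category_banner": {"impressions": 30_000, "clicks": 540},
    "footer_link":     {"impressions": 50_000, "clicks": 150},
}

def click_through_rates(placements):
    """Return each placement's CTR (%) so raw click counts have context."""
    return {
        name: round(p["clicks"] / p["impressions"] * 100, 2)
        for name, p in placements.items()
    }

# Present placements side by side, best performer first.
for name, ctr in sorted(click_through_rates(placements).items(),
                        key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {ctr}% CTR")
```

The same 50k impressions yield very different stories at 2.5% versus 0.3%, which is exactly the comparison that turns a raw number into an insight.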

Completion & Utilization

When it comes to in-feature behaviors, I recommend starting with the headline. In multi-step processes, that’s your funnel completion rate. Funnel reports are powerful and shouldn’t be reserved for checkout metrics alone. Show your abandonment rate at each layer of the experience along with the top exit choices. Where your customers head next can illuminate why they left – your customers are voting with their clicks. Again, don’t show your funnel alone. Marry the data to the on-screen experience. Chances are your audience doesn’t have each screen memorized like you do. Keep the data and the experience apart and you’ll find yourself pogo-sticking back and forth from slide to site – which isn’t a great way to keep your audience from looking at their phones.
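The funnel report itself reduces to simple arithmetic. Here is a minimal Python sketch with invented step names and counts, computing per-step abandonment and the overall completion rate described above:

```python
# Hypothetical step counts for a four-step product 'finder' funnel.
# Each tuple is (step name, users who entered that step).
funnel = [
    ("Step 1: Choose category", 10_000),
    ("Step 2: Set preferences",  6_200),
    ("Step 3: Review matches",   4_100),
    ("Step 4: View results",     3_300),
]

def funnel_report(steps):
    """Return per-step abandonment rows and the overall completion rate (%)."""
    rows = []
    for (name, entered), (_, advanced) in zip(steps, steps[1:]):
        abandonment_pct = round((entered - advanced) / entered * 100, 1)
        rows.append((name, entered, abandonment_pct))
    completion_pct = round(steps[-1][1] / steps[0][1] * 100, 1)
    return rows, completion_pct

rows, completion = funnel_report(funnel)
print(f"Overall completion: {completion}%")
for name, entered, abandoned in rows:
    print(f"{name}: {entered} entered, {abandoned}% abandoned")
```

In the deck, each of these rows would sit beside a screenshot of the step in question, plus the top exit destinations for the users who abandoned there.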

With completion rates addressed, your audience is ready to dive into more detailed performance metrics. As I stated in Part I, your Google Analytics (GA) Goals will tell you whether you’re succeeding against your business and broad UX goals, but it’s the detailed usage tracking that will show you how to iterate on your feature. For GA users, this is where you’ll rely heavily on custom ‘Event’ tracking. What volume of users experience an error, and on which fields? What is the usage rate for inline help content? Which UI elements go unused? Don’t offer raw numbers – give percentages. With some simple math, every UI element has a conversion number, so make sure you can measure yours. Each of these is a dial you can influence through refined design. And sometimes those refinements are the difference between a feature’s sunset and its continued promotion.
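Turning raw event counts into percentages is simple once you pick a denominator. The Python sketch below assumes GA-style (category, action, label) event tuples and a hypothetical count of sessions that reached the feature; all names and numbers are invented for illustration.

```python
# Hypothetical GA-style event counts, keyed by (category, action, label).
# All names and figures are invented for the example.
events = {
    ("finder", "error", "email_field"):   480,
    ("finder", "error", "zip_field"):      90,
    ("finder", "help",  "tooltip_open"):  650,
    ("finder", "click", "reset_button"):   12,
}

# Assumed denominator: sessions that reached the feature.
feature_sessions = 12_000

def usage_rates(events, feature_sessions):
    """Express each event as a percentage of feature sessions, not a raw count."""
    return {
        label: round(count / feature_sessions * 100, 2)
        for (_category, _action, label), count in events.items()
    }

for label, pct in usage_rates(events, feature_sessions).items():
    print(f"{label}: {pct}% of feature sessions")
```

A 4% error rate on the email field reads very differently from “480 errors,” and gives you a dial whose movement you can track across design iterations.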

For each of your slides, make sure you have your back-up. In site-analytics terms, your back-up is the path to replicate your findings. Anyone who has spent time in GA will tell you that a carefully crafted query can be hard to replicate if you’re not paying attention. There are often several ways to approach essentially the same ‘ask’ of your data, so it’s easy to come up with a different approach and a different result. This becomes particularly important when you revisit queries as you iterate and look for performance improvements. Additionally, if you’re challenging someone’s pet project or a widely held notion, expect challenges to your methodology. Be ready.

For bonus points, consider running a cohort analysis. A cohort is a defined group of users at a point in time. In our example you can build Advanced Segments of users who did, and did not, use the product finder and compare the two cohorts. From there you can see whether users of your feature exhibit other positive behaviors, like stronger retention, longer session duration, increases in other goal completions, and, of course, transactions.
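Once the two segments are exported, the comparison itself is straightforward. A minimal Python sketch, using invented session-level records rather than a real GA export:

```python
# Hypothetical session-level records: whether the visitor used the finder,
# plus two outcome metrics. Invented data for illustration only.
sessions = [
    {"used_finder": True,  "duration_s": 420, "transacted": True},
    {"used_finder": True,  "duration_s": 310, "transacted": False},
    {"used_finder": False, "duration_s": 150, "transacted": False},
    {"used_finder": False, "duration_s": 200, "transacted": True},
]

def compare_cohorts(sessions):
    """Split sessions into finder users vs. non-users and compare key metrics."""
    out = {}
    for used in (True, False):
        group = [s for s in sessions if s["used_finder"] is used]
        out["finder" if used else "no_finder"] = {
            "avg_duration_s": sum(s["duration_s"] for s in group) / len(group),
            "conversion_pct": 100 * sum(s["transacted"] for s in group) / len(group),
        }
    return out

print(compare_cohorts(sessions))
```

With real segment exports you would add retention and other goal completions as columns, but the shape of the comparison stays the same.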

Customer Perception

After you’ve walked through the usage data, your audience is apt to have questions. As stated in Part I, quantitative findings address the ‘what’ and ‘how many’ questions, but often lead to qualitative questions of the ‘why’ variety. This is the circular defense of good metrics: qualitative backs up quantitative, and quantitative backs up qualitative. Absent any qualitative findings, snap judgements will be espoused, hidden agendas may bubble up, and office politics can rear its ugly head. Hearing directly from customers on the usefulness and appeal of your offering, in addition to their suggestions, can dispel many a pet theory and put your meeting (and your product) back on track.

Executive Summary

If you’ve crafted a clear narrative backed by key insights and data points, heads should already be nodding. Obviously, your initial business goal is the headline. If you’re not hitting your initial objective, don’t shy away from it. Do you need a handful of tweaks or a substantial pivot? Your summary is your opportunity to demonstrate a thorough understanding of what’s transpiring and why. Your supporting points should follow the flow of the story to drive home the key messages, from promotion to completion and customer regard to cohorts. Fortunately, thanks to your diligent preparation, you’ve avoided the worst possible outcome – immeasurable results.


Recommendations

Finally, it’s time for your recommendation set. As with the summary, these should be nearly foregone conclusions. Keep your recommendations targeted and specific. Each should address the questions of “what,” “why,” and “how much.” If you have time, engage your design team and sketch UI modifications to bring the ideas to life. Even if you have pre-approved time to iterate on the feature post-launch, come prepared to defend your ideas. If there’s one thing you can count on, it’s that everyone wants a piece of the roadmap. If you’re in a mid-size or large organization, make sure you have a groundswell of alignment before the meeting. The larger and more political your organization, the more the meeting before the meeting matters.

As for the presentation itself, don’t get too worked up about it. If you’ve done your homework, you’ll know ten times more than your colleagues. Focus on making sure your analysis tells a clear story. Start by sketching slides by hand to rough out the flow and basic contents; this will keep you from wasting time on polish before you’re ready. Once you have a draft in place, pilot your findings with a friendly (but critical) audience to catch any head-slapping gaps or errors before facing the larger group. Even with a dry run, though, you’ll undoubtedly get some questions you hadn’t considered. Visually capture any inquiries as you go so your audience feels heard, then circle back later. Following up will build trust in you as an analyst over time.

If you present a slide deck like this to an audience that has only seen data pukes in the past, they will be floored by the depth of analysis possible with modern analytics techniques. You’ve presented them with a data-driven story that details how your feature was received by its audience, why it performed against its goals the way it did, and where it needs to go next. The feature is now in the seventh-inning stretch, and you know which pitcher to put on the mound.

About the author: Alex Berg is the Director of Strategy & Analytics for Fell Swoop, a digital design firm in Seattle. Prior to Fell Swoop, Alex held leadership roles with Ritani, Wetpaint, Expedia, and Blue Nile. Follow him on Twitter @alexwberg.