Thursday, November 20, 2014

Quantified and Qualified Data



I'm back on the data bandwagon, so please excuse my persistence. In my last post I made some statements about the importance of data, and I listed some ways we, as product managers, should assemble data to support our product actions. I'm continuing with some related thoughts I was remiss in pointing out last time. In that post I made some assumptions about the data itself that you could infer, but I wasn't very explicit, so this is a bit of a clarification: what I called "Data" last time should have been called "Quantified and Qualified Data." I'll explain.

Monday night (2014.11.17) I attended the Atlanta Mobile Developer's Group meeting in Buckhead (held at Alliance One and hosted by eHire). The presentation, "The Tau of Mau: How to turn meaningless app downloads into engaged users," was given by Jeff Steinke, and it was one of the better talks I've attended this year. In this case "MAU" means Monthly Acquired Users and refers to a trend in the mobile industry to measure success by registrations. Jeff opened with a graph running from 0 to 700K that hockey-sticks over a period of several months and, with little explanation, asked us to assume the room was full of potential investors: based on that little bit of information alone, would we be willing to invest?

Without giving too much more of his presentation away, and to get to the point of today's post: several slides in, Jeff talked about how data should be both Quantified AND Qualified, and how that first exercise put all the reliance on quantity rather than on how qualified the data was. For mobile app users (and really for most B2C web users), downloads mean very little without engagement. For one of Jeff's companies (Less Meeting), he listed three things that gave them a better picture of success: download (registration); completion of a short tutorial; and finally use of the app to schedule a meeting. The talk itself boiled down to engagement and the definition of engagement (for most it's conversion: if the user isn't using your product, then a free download has little meaning). Jeff had done enough analysis to determine that if a new user accomplished the three things on his list, there was a high degree of certainty that the newbie would become a real, paying customer. Back to the initial example Jeff used to illustrate unqualified data: the company had a lot of MAU but very little actual ongoing engagement. It's hard to monetize users if your application is a "one trick pony."
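To make that concrete, here's a minimal sketch (my own illustration, not Jeff's actual analysis) of how you might compute that three-step funnel from an event log. The event names and the data are hypothetical.

```python
# A toy funnel: given a log of user events, count how many registrants
# also completed the tutorial and scheduled a meeting -- the "qualified"
# users, as opposed to raw MAU. Event names and data are made up.

from collections import defaultdict

# Hypothetical event log: (user_id, event_name) pairs.
events = [
    ("u1", "registered"), ("u1", "tutorial_completed"), ("u1", "meeting_scheduled"),
    ("u2", "registered"), ("u2", "tutorial_completed"),
    ("u3", "registered"),
]

FUNNEL = ["registered", "tutorial_completed", "meeting_scheduled"]

events_by_user = defaultdict(set)
for user_id, event in events:
    events_by_user[user_id].add(event)

# A user only counts at a funnel step if they hit every prior step too.
counts = {step: 0 for step in FUNNEL}
for user_events in events_by_user.values():
    for i, step in enumerate(FUNNEL):
        if all(s in user_events for s in FUNNEL[: i + 1]):
            counts[step] += 1

for step in FUNNEL:
    print(f"{step}: {counts[step]} users "
          f"({counts[step] / counts[FUNNEL[0]]:.0%} of registrants)")
```

Run against real event data, the drop-off between steps is the qualification: the registrants who make it all the way through are the ones worth counting.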

From my own personal experience, I've worked on several applications where the project decision was made for me, and ultimately that decision was flawed. The problem lies in looking at the raw data without applying some sound reasoning to filter it into something qualified. In the earlier days of the web there was a lot of emphasis on getting application registrations. This was based on an old-school notion that when people buy software they have skin in the game and, as a result, they become users. The issue with that assumption is that the web changed the paradigm: all those early companies (most now defunct) based their logic on sheer numbers, and it was relatively easy to get funding (everyone wanted to invest in the next new startup and become an internet millionaire!). Saying you had millions of "users" (meaning registrations) sounded awesome to investors who would hand over money just for an opportunity, without any sound reasoning behind what would power the monetization of those users. When the dot-bomb dropped and there was a rush to convert all those free registrants to paying customers, the companies fell like dominoes. The analysis was flawed.

The other metric often used to sell a company is the number of site visits. The argument is that if you have a lot of visitors, you can always build a revenue model on page views and click-throughs. As someone who has also worked in this type of environment, I can say this too can be a flawed statistic. When you look at the actual number of views you need to make any appreciable money from this model, you're not making much until you get into the hundreds of millions. The corresponding likelihood of a click-through is likewise flawed: if the keywords driving those ads aren't relevant to the user (meaning things have to align just right - user type, application type, paid-for words, and the gods!), the revenue gains can be significant as single instances but unreliable in the aggregate. Also, these types of campaigns are cyclical in nature, so they can rarely be relied upon (one exception is to create a "key accounts" model where you have broad-spectrum advertisers with already established brands). One technique to help you qualify these numbers is to tag pages to ensure the user stays on the page long enough to actually see the ads placed. Another is to use SEM to aid in placing inbound, specialty pages, which tends to have a synergistic effect on organic search.
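To put rough numbers behind that claim, here's a back-of-the-envelope calculation. The rates are assumed purely for illustration (a $2 CPM and a 0.1% click-through at $0.50 per click); your actual rates will vary wildly.

```python
# Back-of-the-envelope ad revenue math with assumed, illustrative rates.

monthly_page_views = 100_000_000   # hundreds of millions before it matters
cpm = 2.00                         # dollars per 1,000 impressions (assumed)
ctr = 0.001                        # 0.1% click-through rate (assumed)
revenue_per_click = 0.50           # dollars per click (assumed)

impression_revenue = monthly_page_views / 1000 * cpm
click_revenue = monthly_page_views * ctr * revenue_per_click

print(f"Impression revenue: ${impression_revenue:,.0f}/month")  # $200,000
print(f"Click revenue:      ${click_revenue:,.0f}/month")       # $50,000
```

Even at a hundred million views a month, under these assumptions you're looking at roughly a quarter-million dollars, which is why the model only works at enormous scale.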

So what else can you do? I think that using experts to help you decide can go a long way toward qualifying the data (I'm a fan of data science). I also think it's very important to use both the experience of your team and the information you garner from existing customers to determine how that data rolls up into something usable. If you look at something and don't understand it, re-examine the data to see if it fits some patterns or anti-patterns that make sense. Another idea is to leverage your network of technical experts; I'm sure we all know and have worked with professionals who have "the eye" for gaining insight into data. Ask lots of questions, gather your data, and make sense of it. Put monetary figures against what's happening and compare it to what you know and possibly don't know. Strive for understanding, and make the data work for you.

More information regarding the Atlanta Mobile Developer's Group: http://www.meetup.com/Atlanta-Mobile-Developers-Group/

Jeff Steinke's blog: http://www.jeffsteinke.com/

(also published on LinkedIn)

Saturday, November 15, 2014

Good Product Management is Based on Good Data

I harp on this all the time, and I'm sure my colleagues are tired of me saying it, but I'm of the firm opinion that we should DO NOTHING in product management or product development without the data to support our decisions. The foundation of any change is underpinned by data supporting that change; whimsical changes, or even changes backed by some perceived need, mean very little without supporting analytics. Forgoing due diligence, even for small changes, can not only be detrimental to the application but can also ultimately impact your company's bottom line. You do not want to be in the position of defending your actions, even if they seem innocuous, against a product backlog that has real, calculated ROI.

Even dev prospecting (the idea that sometimes we need to push a change that could garner new business or customers, with some nebulous "maybe" result through innovation) should have a minimal data construct projecting what may come from doing so. For this I look at data models in parallel or similar-affinity vertical markets.

If you aren't paying attention to the data, making some effort to understand the trends, and filtering the noise to make decent projections based on what you see, you're doing more harm than good. So as a product manager, what can you do to get this data?


Research: Public Searches. The first avenue for any good product manager is to start doing some searches online using probable keywords. I think most product managers already have some skill at finding public information; it's a skill one develops over the years, and it's a good starting point for just about any knowledge gathering. Google is your friend, but this is only a starting point. I'd suggest you start brainstorming, and as you broaden your searches, additional keywords will suggest themselves in the results you find. Make sure you note what you find, but stay relatively focused when chasing those additional keywords, as they can get you really distracted (ask me how I know!).

Research: Examine Your Internal Data. The second tool in every product manager's tool-kit is internal data. Most of us have applications that have been running for several years, and the data is sitting there for the taking. Look at the data points you have and see if there's information that can be used for modeling or to suggest avenues that support your case. Just be careful to take this data with a grain of salt, especially if you're forecasting. Using existing data works great to support cost savings; it's much more dangerous to use it to support revenue opportunities, as shown in the sketch below.
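As a sketch of what "supporting cost savings" might look like, here's a toy example that counts support tickets by feature and prices out the worst offender. The ticket counts and the cost-per-ticket figure are invented for illustration.

```python
# A toy sketch of building a cost-savings case from internal data:
# count last month's support tickets by feature, then price out the
# top offender at an assumed handling cost. All figures are invented.

from collections import Counter

# Hypothetical monthly ticket counts, tagged by feature.
tickets_by_feature = Counter({"export": 420, "login": 180, "billing": 95})

COST_PER_TICKET = 25.00  # assumed fully-loaded support cost, in dollars

for feature, count in tickets_by_feature.most_common():
    print(f"{feature:>8}: {count:>4} tickets/month, "
          f"~${count * COST_PER_TICKET:,.0f}/month")

# If a fix eliminated even half the top feature's tickets, the
# projected annual saving is concrete enough to defend in a backlog review.
feature, count = tickets_by_feature.most_common(1)[0]
saving = count * 0.5 * COST_PER_TICKET * 12
print(f"Fixing '{feature}' could save roughly ${saving:,.0f}/year")
```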



Research: Use Your Existing Customer Pool. It has always blown my mind when I've come to a company and realized there is almost no interaction with existing customers beyond simple support and account management. If you have enough data to identify problem areas in your application via support calls and emails, what's more useful than reaching out to your customers and starting a conversation? Begin with what they like, move to what they don't like, then start suggesting things you'd like to do. The information you receive can be quite compelling and can lead you toward good application decisions.

Research: Use Your Existing Sales and Support Pool. As above, we often receive feedback about what's wrong or bad at the application level. What's harder is getting a sense of what really needs to be changed or fixed. Use data as the foundation, then use interviews with your coworkers to gauge what will have the most impact; you'll often be surprised at what you find out, and once again, these are leads that can direct you toward real innovation. I love getting sales figures and using the information to defend a case for doing something, or even better, against doing something being driven by someone with influence but no clear understanding of what's needed.

Big Data and Data Science. The last tool is something I'm a big fan of: hopefully your company has embraced data science and hired a good Python developer to parse through your tables. It's amazing what can be discovered by trending data and looking for outliers or anti-trends. As product managers, we need to better understand how this last tool can be used effectively to support our case when making product decisions.
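As a small taste of what that looks like, here's a minimal sketch that flags outliers in a weekly signup trend with a simple z-score. A real data scientist would use better models, and the numbers here are made up.

```python
# Flag points in a trend that sit far from the series mean.
# The signup figures are hypothetical; real work would use more
# robust methods (rolling windows, seasonality adjustment, etc.).

from statistics import mean, stdev

weekly_signups = [120, 130, 125, 140, 135, 410, 128, 132]

mu = mean(weekly_signups)
sigma = stdev(weekly_signups)

for week, value in enumerate(weekly_signups, start=1):
    z = (value - mu) / sigma
    if abs(z) > 2:  # more than two standard deviations from the mean
        print(f"Week {week}: {value} signups (z={z:.1f}) -- worth a closer look")
```

An outlier like that week-6 spike isn't an answer by itself; it's a prompt to go find out what happened (a campaign, a press mention, a bug) before you build a case on it.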
I think most product managers understand all of this, if only at a subconscious level. At minimum, keep your mind open and don't simply disqualify ideas being promoted by your coworkers. I know we're all busy and that this is easy to do, but you do yourself an injustice and really exhibit a lack of respect for those you depend upon the most.

"I get it...I get it"...

-- John