Saturday, December 6, 2014

Revolution vs Evolution






I've recently been in a few discussions with colleagues about software innovation and how it relates to product management, so I thought I would share a few thoughts. We always talk about opportunity, and frankly, we usually use the word to defend or soften a product decision to an end user (aka client). In this way we use words like "opportunity" to describe a problem that needs fixing - since many of us bill for the work, "opportunity" equates to making more money, at least to a consulting company, and the term is much more palatable than "problem." Instead we should identify problems as problems and spend our time working through solutions - and really, isn't "solution" a better term than "opportunity"? In any case, I digress, as the topic I want to explore is innovation.


What got me thinking about this is an encounter I had earlier this year with a process improvement group. This is the scenario - I'm working within a project to rewrite existing software using new technology and better design principles. Most of the requirements were derived from interviews with the management teams using the existing software (they in turn presumably worked with their teams to come up with innovations and improvements). Fairly straightforward, right? We then took those requirements and began exploring implementation, converting the requirements into epics and user stories to get the conversation going with our team, derive executable bits, and present them back to the business users as we moved from bits, to MVP, to production release and consumption. We take a very agile approach to software development, so the first hurdle was to get our business users to adopt this practice - not an easy task. Our BUs tend to throw everything into the requirements, and their initial expectation was that everything listed would get done - which made it very difficult to get things through UAT, even when we constantly explained that it was a planned cycle of development moving from simple to complex. In their minds, "working software" meant that everything they wanted in the requirements was done. Once again I digress, but this time to help provide some context.

In actuality, there's an inherent problem with this requirements gathering method. First, the innovations asked for by the business user usually relate to the existing software platform and amount, for the most part, to tweaks or small process improvements. Second, when those requirements come through the management filter they tend to be broad-stroke in some regards and myopic in others (depending on what the manager perceives as important). Third, the methods the business user defines in requirements are usually the same as in the original software, without thought for how the entire application could be improved by changing the underlying approach - what I mean is, when you have staff trained on software and used to it, it's hard for them to think about an entirely new methodology outside that experience and context. So to continue my story, some of the things we introduced in the rewrite had to do with using an off-the-shelf BPM engine. Our thinking was "why reinvent the wheel when there are already so many companies out there who have figured it out?" This became the first point of contention - even though a BPM package may satisfy the baseline requirements, there are usually some fairly rigid rules within a fixed methodology that are imposed by the software itself. It's a tradeoff - you get better accuracy and many of the bells-and-whistles you're looking for, but to take advantage of them you need to conform to the methodology designed into the software. Once implemented, the next challenge was to get our business users to understand that the methods had changed - the outcomes are the same, but by standardizing, it becomes easier to use the same software for similar tasks. This change is rather disruptive; it produces a lot of user angst and confuses people who are still used to the original, legacy software.

So around the time we first got our users accustomed to the changes introduced with the BPM technology, we were introduced to a new process improvement group. Seeing this as an "opportunity," the group was brought in to look at the existing methods and suggest improvements that could be incorporated into the newly designed software. This included analysts actually sitting down with the business users and observing how they use the software. We welcomed the additional resources and hoped that a few "magic bullets" would be identified to really make our newly rewritten software fantastic, hopefully saving money and time by identifying process improvements through the application of technology. So now we're getting to the crux of my article.

The report that was delivered was interesting, not in the suggested changes and aggregate analysis defending changes to the overall workflow, but in that it copied almost requirement-by-requirement what we had already defined and planned in the rewrite. It basically validated what we already knew and made the same improvement suggestions we had already planned. What we received was evolution rather than the hoped-for revolution. And please don't think I'm diminishing the importance of what the process improvement group did - it's nice to have validation of what we are doing and have planned, and there's certainly value in that. I think that as a group we had much greater hopes and that our expectations were unrealistic. I also think that in our world of software development we spend a lot of time trying to improve things rather than searching for new solutions - that outside-of-the-box solution so new it can redefine what we are doing and how we are doing it. Not that there's no value in evolution. One of the easiest things to defend is an improvement that cuts the bottom line - even small process improvements can do this, and the accumulated savings can be quite significant.

So which is better? In my opinion, revolution is thinking about a problem and coming up with a solution so novel it doesn't fit within the original problem statement. The downside is that it can be very risky. And no, the ideas don't often happen overnight. Evolution, on the other hand, involves small, incremental changes and carries less risk, but when that's all you rely upon, you can get trumped by a new market contender with revolutionary ideas. I really think we should use both methods, but in general try to think in revolutionary terms - more a mindset than a strict method. We should embrace the good ideas backed by real data, and defer on ideas that offer little to the bottom line and have no data to support them.

How I long for a time when I receive information that is filtered for relevancy (important to me and customized for my needs), from several sources, all in one place, using a single sortable and filterable delivery mechanism. There's been a metamorphosis of thinking when it comes to communication that started with email, then moved to distribution lists, forums, communities, wikis, and now messaging aggregation with software like Slack. This last example is something transformative that's been happening for several years and is only now beginning to get attention outside of software development. You would think this is an example of evolution, but when it moves outside of the development world it becomes revolution.

Thursday, November 20, 2014

Quantified and Qualified Data



I'm back on the data bandwagon - please excuse me for being persistent. In my last post I made some statements about the importance of data and listed some ways we, as product management, should assemble data to support our product actions. I'm continuing with some related thoughts I was remiss in leaving out last time. In that post I made some assumptions about the data itself that you could infer, but I wasn't very explicit, so this is a bit of a clarification. What I called "Data" last time should have been called "Quantified and Qualified Data" - I'll explain.

Monday night (2014.11.17) I attended the Atlanta Mobile Developer's Group meeting in Buckhead (at Alliance One, hosted by eHire). The presentation, "The Tau of Mau: How to turn meaningless app downloads into engaged users," was given by Jeff Steinke and was one of the better ones I've attended this year. In this case "MAU" is Monthly Acquired Users and refers to a trend in the mobile industry to measure success by registrations. He opened with a graph running from 0 to 700K that hockey-sticks over a period of several months and, with little explanation and the assumption that the room was full of potential investors, asked whether - based on that little bit of information - we would be willing to invest.

Without giving too much more of his presentation away, and to get to the point of today's post: several slides in, Jeff talked about how data should be both Quantified AND Qualified, and how that first exercise put all the reliance on quantity and none on how qualified the data was. For mobile app users (and really for most B2C web users), downloads mean very little without engagement. For one of Jeff's companies (Less Meeting), he listed three things that gave them a better picture of success: download (registration); completion of a short tutorial; and finally use of the app to schedule a meeting. The talk itself boiled down to engagement and the definition of engagement (for most it's conversion - if the user isn't using your product, then a free download has little meaning). Jeff had done enough analysis to determine that if a new user accomplished the three things on his list, there was a high degree of certainty that the newbie would become a real, paying customer. Back to the initial example Jeff used to illustrate unqualified data: the company had a lot of MAU but very little actual ongoing engagement. It's hard to monetize users if your application is a "one trick pony."
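Jeff's three milestones amount to a simple engagement funnel: each stage qualifies the raw top-line number a little further. A minimal sketch in Python - the milestone names follow his list, but the counts (and the function itself) are hypothetical figures of my own:

```python
def qualified_users(funnel):
    """Walk an ordered funnel of (milestone, count) pairs and report
    each stage's count and its conversion from the raw top-line number."""
    top = funnel[0][1]
    return [(stage, count, count / top) for stage, count in funnel]

# Hypothetical counts; the milestone names follow Jeff's list.
funnel = [
    ("downloaded / registered", 700_000),
    ("completed tutorial",       90_000),
    ("scheduled a meeting",      35_000),
]

for stage, count, rate in qualified_users(funnel):
    print(f"{stage:<26} {count:>8,}  ({rate:.1%} of downloads)")
```

With numbers like these, a 700K "MAU" figure qualifies down to a far smaller pool of users who actually did the thing the product exists for - and that's the number worth showing an investor.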

From my own personal experience, I've worked on several applications where the project decision was made for me, and ultimately that decision was flawed. The problem lies in looking at the raw data without applying some sound reasoning to filter it into something qualified. In the earlier days of the web there was a lot of emphasis on getting application registrations - this was based on the old-school thought that when people buy software they have skin in the game and as a result become users. The issue with that assumption is that the web changed the paradigm - all those early companies (most now defunct) based their logic on sheer numbers, and it was relatively easy to get funding (everyone wanted to invest in the next new startup and become an internet millionaire!). Saying you had millions of "users" (meaning registrations) sounds awesome to investors who would hand over money just for an opportunity, without any sound reasoning behind what would power the monetization of those users. When the Dot-bomb dropped and there was a rush to convert all those free registrants to paying customers, the companies fell like dominoes. The analysis was flawed.

The other metric often used to sell a company is the number of site visits. The argument is that if you have a lot of visitors, you can always build a revenue model on page views and click-throughs. As someone who has also worked in this type of environment, I can say this too can be a flawed statistic. When you look at the actual number of views you need to make any appreciable money from this model, you're not making much until you get into the hundreds of millions. The corresponding likelihood of a click-through is likewise shaky: if the keywords driving those ads aren't relevant to the user (meaning things have to align just right - user type, application type, paid-for words and the gods!), individual click-throughs may look significant but the sum disappoints. Also, these types of campaigns are cyclical in nature, so they can rarely be relied upon (one exception is to create a "key accounts" model where you have broad-spectrum advertisers who already have established brands). One technique to help qualify these numbers is to tag pages to ensure that the user stays on the page long enough to actually see the ads placed. Another is to use SEM to aid in placing inbound specialty pages, which tends to have a synergistic effect on organic search.
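The arithmetic behind the page-view claim is easy to check. The sketch below assumes an illustrative display rate of $2 CPM (dollars per thousand impressions) and two ads per page - assumed figures for the sake of the example, not market data:

```python
def ad_revenue(page_views, ads_per_page=2, cpm=2.00):
    """Estimate display ad revenue: impressions / 1000 * CPM.
    cpm is dollars per thousand impressions (an assumed figure)."""
    impressions = page_views * ads_per_page
    return impressions / 1000 * cpm

for views in (1_000_000, 10_000_000, 100_000_000):
    print(f"{views:>11,} views -> ${ad_revenue(views):>10,.2f}")
```

At those rates a million monthly views earns $4,000; you really do need hundreds of millions of views before this model produces appreciable revenue.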

So what else can you do? I think that using experts to help you decide can go a long way toward qualifying the data (I'm a fan of Data Science). I also think it's very important to use both the experience of your team and the information you garner from existing customers to determine how that data rolls up into something usable. If you look at something and don't understand it, re-examine the data to see if it fits some patterns or anti-patterns that make sense. Another idea is to leverage your network of technical experts - I'm sure we all know and have worked with professionals who have "the eye" for gaining insight into data. Ask lots of questions, gather your data and make sense of it. Put monetary figures against what's happening and compare it to what you know and possibly don't know. Strive for understanding, and make the data work for you.

More information regarding the Atlanta Mobile Developer's Group: http://www.meetup.com/Atlanta-Mobile-Developers-Group/

Jeff Steinke's blog: http://www.jeffsteinke.com/

(also published on LinkedIn)

Saturday, November 15, 2014

Good Product Management is Based on Good Data

I harp on this all the time and I'm sure my colleagues are tired of me saying it - but I'm of the firm opinion that we should DO NOTHING in regards to product management or product development without the data to support our decisions. The foundation of any change is underpinned by data supporting that change - whimsical changes, or even changes backed only by some perceived need, mean very little without supporting analytics. Foregoing the due diligence, even for small changes, can not only be detrimental to the application but can also ultimately impact your company's bottom line. You do not want to be in the position of defending your actions, even if they seem innocuous, against a product backlog that has real calculated ROI.

Even Dev Prospecting (the idea that sometimes we need to push a change that could garner new business or customers with some nebulous "maybe" result through innovation) should have a minimal data construct projecting what may come from doing so. For this I look at data models in parallel or similar affinity vertical markets.

If you aren't paying attention to the data, making some effort to understand the trends, and filtering the noise to make decent projections based on what you see, you're doing more harm than good. So as a Product Manager, what can you do to get this data?


Research: Public Searches. The first avenue for any good product manager is to start doing some searches online using likely keywords. I think most product managers already have some skill at finding public information - it's a skill one develops over the years and a good starting point for just about any knowledge gathering. Google is your friend, but this is only a starting point. I'd suggest you start brainstorming; as you broaden your searches, additional keywords will suggest themselves in the results you find. Make sure you note what you find, but stay relatively focused and treat the additional words as possibilities - they can get you really distracted (ask me how I know!).

Research: Examine Your Internal Data. The second tool in every product manager's tool-kit is internal data. Most of us have applications that have been running for several years, and the data is sitting there for the grabbing. Look at the data points you have and see if there's information that can be used for modeling or to suggest avenues that support your case. Just be careful to take this data with a grain of salt, especially if you're forecasting. Using existing data works great to support cost savings; it's much more dangerous to use it to support revenue opportunities.



Research: Use Your Existing Customer Pool. It's always blown my mind when I've come to a company and realized that there is almost no interaction with the existing customers beyond simple support and account management. If you have enough data to identify problem areas in your application via support calls and emails, what's more useful than reaching out to your customers and starting a conversation? Begin with what they like, move to what they don't like, then start suggesting things you'd like to do. The information you receive can be quite compelling, leading you down paths toward good application decisions.

Research: Use Your Existing Sales and Support Pool. As above, we often receive feedback about what's wrong or bad with what we're doing at the application level. What's harder is to get a sense of what really needs to be changed or fixed. Use data as the foundation, then interviews with your coworkers to gauge what will have the most impact - you'll often be surprised at what you find out, and once again, these are leads that can direct you towards real innovation. I love getting sales figures and using the information to defend a case for doing something - or even better, against doing something being driven by someone with influence but no clear understanding of what's needed.

Big Data and Data Science. The last is something I'm a big fan of - hopefully your company has embraced Data Science and hired a good Python developer to parse through your tables. It's amazing what can be discovered by trending data and looking for graph outliers or anti-trends. As product managers, we need to better understand how this last tool can be used effectively to support our case when making product decisions.
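As a sketch of the kind of pass a data scientist might run over your tables, the snippet below flags outliers using the modified z-score (based on the median absolute deviation, which a single extreme value can't inflate the way it inflates the standard deviation). Plain-stdlib Python with made-up usage numbers; real work would use pandas or similar:

```python
from statistics import median

def outliers(values, threshold=3.5):
    """Return (index, value) pairs whose modified z-score exceeds the
    threshold. Uses the median absolute deviation, so one extreme
    point can't mask itself by dragging the spread estimate up."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [(i, v) for i, v in enumerate(values)
            if mad and 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical daily active-user counts with one anomalous spike.
daily_users = [510, 498, 523, 505, 4900, 515, 502, 497, 508, 511]
print(outliers(daily_users))  # flags the 4900 spike
```

A spike like that is exactly the kind of graph outlier worth investigating before it quietly distorts a trend line.
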
I think most product managers understand all of this, if only at a subconscious level. At minimum, keep your mind open and don't simply disqualify ideas promoted by your coworkers - I know that we're all busy and that this is easy to do, but you do yourself an injustice and really exhibit a lack of respect for those you depend upon the most.

"I get it...I get it"...

-- John

Tuesday, October 28, 2014

A Look at Products vs Features

My thoughts on Products vs Features
Or how to think about developing products rather than features.
(Also published to LinkedIn)

Borrowed from marketoonist.com
 In the course of "usual" product development, we product managers ideally work from the user outward - basically, what problem are we trying to solve and what's the fastest way to get there? Since we think of problems in terms of users, personas and entities/actors, the solution is often one that incorporates technology and heuristics at a fairly granular level. Think of this as feature fulfillment. What I've learned over the course of the last decade is that this isn't enough. Too often we develop something so specific that we later have to "refactor" or otherwise redesign the software underpinnings due to unanticipated discovery over time. We've all been there and done that, I'm sure - especially anyone involved in software development for any length of time.

Instead, we should be thinking of solutions that have broader application, or at least open up the conversation toward reuse that could expand into a product. In classic development, we often approach things with the end goal in mind: What's the MVP? How quickly can we get this into production, delighting the user or otherwise providing a feature solution? Subsequent development often introduces similar concepts and provides an avenue to "reuse the code" due to the similarities. At this point, if your coding partners have some experience they may suggest a refactor and reuse, which is good. What I'm suggesting is that we should try to lead with this idea, then do some analysis to decide whether we spend the time upfront in design or shelve the broader product concept for quick delivery.

An example - you may have worked on an application that needed some calculation utility, such as an amortization calculator. You basically provide a utility to calculate interest and principal payments to determine how different loan lengths impact your monthly payment or time to pay off the loan. The following year you need a different but similar calculator in another area of the application, so initially you may think to just copy the code for the second app. By the time you get to the third calculator, someone in dev says "hey, maybe we should combine the apps and make a configurable solution that can be used anywhere" - which is a great idea, except that now you face the task of absorbing the technical debt to rework everything.
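To make the calculator example concrete, the utility in question is the standard amortization formula, M = P·r / (1 − (1 + r)^−n) for principal P, monthly rate r and n monthly payments. A sketch of such a utility (the function name and sample loan figures are my own):

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortizing loan.
    annual_rate is a decimal, e.g. 0.05 for 5%."""
    n = years * 12
    r = annual_rate / 12
    if r == 0:                       # zero-interest edge case
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Compare loan lengths for a $200,000 loan at 5% APR.
for years in (15, 30):
    print(f"{years} years: ${monthly_payment(200_000, 0.05, years):,.2f}/mo")
```

Three slightly different copies of a ten-line function like this is exactly how the technical debt in the story accumulates.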

Instead of "going the normal route" as described above, let's look at this from a product perspective. The problem you're trying to solve is the need to make some calculations via a utility available to the user. Doesn't that sound like something that's likely to be needed in more than one instance? Is something available that's inexpensively licensed as a plug-in, rather than writing the code yourself? If so, how much complexity is involved in incorporating the component, and how useful is it? Can it be applied to different types of calculations? Really, is it any good or just a hack? If we're doing our jobs as product managers we should be thinking along these lines and, through the process of discovery and affinity comparison, decide whether we can do a better job than what's available off-the-shelf - if so, there's probably an opportunity hidden there. We should also do some basic research and analysis on any potential opportunities so we aren't doing things in a whimsical manner. In my opinion, all of our product decisions should be based on data - otherwise we're just guessing.

So if you decide that there is indeed an opportunity, how do you defend the additional up-front development cost? This is the tricky part - you can certainly over-engineer a "widget" (ask me sometime about building a super-configurable application so complex that adoption was very limited) - there's a need to balance the planned intent against what's being executed at any given time. The start of all this should be a conversation with your devs, and in particular your architects, about the approach. If they know up-front that you're thinking about this new "widget" as a potential product, they will certainly build some "hooks" into the design so it can be flexible, plus some baseline configuration that doesn't entail too much of an engineering effort. You have to get your team involved early and do some basic product planning before getting involved in the execution of code. You also need to rein in some of those efforts so your guys aren't building a nuclear power plant when you're trying to supply a battery.

Bottom line? Start thinking of what you're building as something more than simple components - start thinking of them as features of potentially stand-alone applications (only skinnied-down to solve a specific problem). Second, talk to your devs and discuss what you're thinking - you'll be surprised at what they can come up with. Necessity is indeed the mother of invention, but don't rely on yourself alone as a guide to the approach.

-- John

Tuesday, October 7, 2014

Get Active!



Having done Product Management for many years, I found myself recently in a bit of a slump - at some point I reached a threshold where what I was doing just wasn't very satisfying. This was due to several factors:
  1. Post-Release Blues - the "1.0" of my most recent responsibility had gone to market, which is really exciting. All those months of creative energy expended in a giant burst ultimately ended up being anti-climactic - to achieve an MVP, the product was left lacking in several key areas needed to really make it functional. These gaps were the result of a lack of understanding regarding the real needs of the user - we had made some assumptions, and as usual when you make decisions based on guesses rather than real data, those assumptions led to a less-than-usable product.
  2. Post-Release Follow-up - so the next several months were spent bringing the product up to what is/was the actual MVP. This included bugs, but mostly it was a lack of processing functionality (not calculative capacity, but business process management function). We finally got the product to a point where it was relatively safe to release to the public, and more importantly to our internal business users.
  3. New Priorities - of course things changed during the overall process of releasing 1.0, and these new changes (support for an entirely new conceptual product line) also added complexity that mired our teams for several months (it didn't help that the requirements changed several times before we got this right). Luckily the agile process allowed us to be nimble and adjust our product roadmap to bring these new features to the forefront. It did cause a bit of a stir company-wide as our roadmap and commitments changed - tough to do in a relatively heavy top-down managed company.
  4. Not Celebrating Small Successes - nothing like feeling like you're behind to keep you from enjoying the things that go right. We all need to keep things in perspective, and it's often difficult to see the trees when you're in the middle of the Sequoia National Forest.
So in total, my overall morale had been low. What to do about it? In my case I became more active in activities meant to expand the mind - and no, I'm not talking about mind-expanding drugs. I started going to local industry meets (it helps if your company is part of the local tech scene - in Atlanta there's the Technology Association of Georgia (TAG), which is useful since there are always meetings held by its various societies; I joined the Product Management society within TAG). But don't limit yourself there. I also joined the local ProductCamp, started going to the yearly unconference, and at some point will participate in a presentation. But getting involved in those two groups was like having an appetizer during a nice meal - you want more! Beyond the local, more formalized groups, look at MeetUp and what it has to offer. What to do?

My suggestion is to expand outside of your normal circles - start on the technology side and move outward - see where it goes. I have a firm belief that from a product management perspective, anything that we do to further a product should be first supported by good data. If you don't have data how do you know that what you're doing meets what the market wants, or through extrapolation, what the market could want (emerging or untapped markets)? Everything that we do as technologists should be supported by data. So what's more natural than to get involved in the local data groups? I joined both the Microsoft BI group and the Data Scientist ATL group - the former is a bit more on the technical application side while the latter is more about theory and the procedure/process of collection and filtering. Both are interesting in different ways and a byproduct is that you will meet people that may not be in your normal circles. I really enjoy both groups and they have expanded my horizons. The latter group is also very social - I think this is an important aspect of what we do as technologists that becomes lost in all the work we become saturated within - meeting people at a more casual level lets the mind explore new possibilities.

An example - during the last Data Science ATL meetup I met several new people: one was a consultant who really liked to connect people of similar mindsets to promote technology and innovation; the second was a PHP UI developer (not a group I normally associate with); the third was a data scientist. All three were co-mingling, having snacks and a couple of beers, with discussions roving all over the place: work, technology, possibility, networking - you get the idea. Somehow I got into a discussion regarding SpaceX and what Elon Musk is doing in the private sector, and this got things rolling into what new innovations along those lines were being explored here in Atlanta. The result was unobstructed idea flow, laughter and an expansion of awareness. It's also how the beginnings of an idea happen - often through casual conversation - you just need to find the right people to involve in those conversations.

Taking some of the thinking from these groups and what I found there, I started my own MeetUp group - this one around Social Business applications (meaning the application of social networking tools to the enterprise) - something that interests me. I've had one meetup so far, with a little over half of the members attending. It's a start and it gets things going - if nothing else it gets my own mind moving. Details are at Atlanta Social Business Product Management. If you look at my last couple of posts you'll see references to the MeetUp and what this topic is all about.

So where does this take you? I don't think it matters - what matters is getting your mind into new situations and possibilities so you aren't so buried in the day-to-day grind of achieving sprint goals. Step away from your desk occasionally and have a conversation with those in your office you don't normally interact with. Join your office leadership committee or volunteer to help plan office events. Do things to break up your normal activities so the job doesn't wear on you and you'll be glad you did - your employer will be too (as long as you don't do anything to get arrested!).

-- John

Monday, August 11, 2014

The Argument for Social Business

There's currently quite a bit of buzz around the term Social Business - especially around HCM (Human Capital Management) - so I thought I would address the term. I think what makes it a bit confusing is that the term is already in use as defined by Muhammad Yunus in his books "Creating a World Without Poverty: Social Business and the Future of Capitalism" and "Building Social Business: The New Kind of Capitalism That Serves Humanity's Most Pressing Needs." The books in general are about changing business prime directives from maximizing profits to financial self-sustainment and re-investment. This is much different from the Social Business used to describe a socialization trend happening in businesses, which refers to the use of social tools in everyday business practices.

A better term might be "Collaborative Business," which in general refers to the use of some of those abstract constructs from social networks, applied at the business level. Most of these constructs involve the "group think" concepts of shared posts, transparent threading of messages/emails/posts, the aggregation of interactivity across multiple tools, and the community aspect of collaboration as embraced by businesses - at minimum internally, and ultimately with external clients and experts. Ideally Social Business incorporates the various communities involved with the business and introduces a higher degree of interaction between them. The stress is on engagement, and the result can be measurable depending on how things are defined (aligning the goals of the business with the efforts involved in sharing information). HCM has begun adopting some of these constructs, especially for internal communication, but I believe that to be effective these tools should be implemented beyond the enterprise - still, internal adoption is a good first step.

IBM has done a rather remarkable job introducing the concept into its core communication - much of this came about through a corporate initiative allowing employees to blog articles to help disseminate information, which then led to hosting both internal and external blogs, followed by the extension of the old Lotus Notes into the current Domino product. I recently attended a TAG Social Society event (and if you haven't joined TAG - the Technology Association of Georgia - this single group may make it worthwhile) on the topic What Will My Email Look Like “Tomorrow”? (it was actually a combined event of separate TAG groups, but the presentations were mostly around Social Business). Louis Richardson from IBM was the keynote - he started out with the quote "Email is where messages end up to die" and spoke about the relative waste of using email to communicate, especially for a group trying to reach consensus on a decision. His example had 6 people who all had to copy the entire group every time a comment was made. By the time several people commented (and some copied yet other interested parties) there were quite a few emails in total (I believe he said the average is 150 emails to make one decision). He then referenced the way Domino addresses the problem - I won't go into detail, but if you're really interested you may want to look up the feature description.
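Richardson's reply-all example is easy to model. Here's a minimal sketch - my own back-of-the-envelope arithmetic, not a model from the talk - where every comment generates one email to each of the other participants:

```python
# Back-of-the-envelope model of the reply-all pattern: each comment
# is emailed to every other member of the group.

def emails_for_decision(group_size: int, comments: int) -> int:
    """Total emails generated when every comment goes to the whole group."""
    return comments * (group_size - 1)

# With 6 people, a mere 30 rounds of comments already produces the
# ~150 emails Richardson cited as the average per decision.
print(emails_for_decision(6, 30))  # 150
```

A threaded discussion, by contrast, holds those 30 comments in one shared place - the count of things to read grows with comments, not with comments times recipients.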

I'm using the IBM and TAG Social Society references above to illustrate my point - many people are thinking about the future of communication and collaboration and how it will be addressed by business, especially enterprise business. Why all the interest? If you look at the demographics, particularly of the Millennials and post-Millennials making their way into business, you'll understand that this latest generation interacts differently with information than most of the "gray hairs" currently controlling business. This new group is used to multitasking across a variety of electronic devices, combining messaging, video, text-in-email, social media - you name it, they use it, often all at once. It's been observed that the group in general has become "wired" differently in the way they think, with information coming into the funnel (so to speak) in multiple work streams. The biggest change is the way information is collated by multiple participants into threads (the advantage is that information is shared with a larger group without the multiple posts of the email example provided by L. Richardson above).

Think about being a Millennial used to this work-stream way of thinking, then hitting the plodding methods traditionally (well, at least for the last 20 years) used by business, where email is king - I can imagine the frustration and lack of enablement. This may explain why companies that have adopted a more collaborative way of communicating using social tools have become so successful - Google is a good example. It might also indicate why so much of the talent pool is moving towards these "hot" companies (yeah, you get free lunches and you always hear about the company culture - but is that enough?). My theory is that if we don't provide a communicative atmosphere of collaborative tools, we aren't just missing the boat - we're basically handed a paddle and told to make the best of it.

To that end, I think the beginnings of a solution have already been adopted by technology teams (they tend to be the first to adopt good ideas) in most software development shops through collaboration tools like Campfire and GitHub. Many of the tools and ceremonies have also been adopted by the Agile and scrum shops to improve team communication - and it works! So I've pitched the idea to those I know who facilitate the agile/scrum model. To further examine how Social Business might be implemented, I've created a new MeetUp group called Atlanta Social Business Product Management (yeah, it's obnoxious, but I couldn't think of anything with more zing). Please join if you're interested.

Monday, July 28, 2014

The Three Laws of Software Development - Conway's Law

During a recent class, Peter Saddington referenced three laws which collectively are known as the Three Laws of Software Development:
  1. Humphrey's Law
  2. Ziv's Law
  3. Conway's Law
I wrote about Humphrey's and Ziv's Laws in my previous posts. Conway's Law is perhaps the best known of the three.
Looking up "Conway's Law" you'll find it in Jeff Sutherland's book "Exploring Scrum: The Fundamentals." Conway's Law is an adage named after Melvin Conway, who first put forward the idea in 1968. It states "Organizations which design systems...are constrained to produce designs which are copies of the communication structures of these organizations," and it is also rendered as "If you have four groups working on a compiler, you'll get a 4-pass compiler" (sic). That second version is a bit dated by today's standards. I believe Peter stated it a bit more colloquially as "Software always mirrors the group that built and designed it - if the company is dysfunctional due to poor communication then the software will have the same dysfunction," or simply, "everyone needs to communicate better."
This law is as proverbial as the previous two and, like them, is more a sociological observation than a formal law. I found this diagram by Manu Cornet on Bonkers World that I thought was pretty funny and actually a fair representation of Conway's Law...


Manu's diagram isn't specific to Conway, but you can infer what you like. The Microsoft and Oracle diagrams are especially telling. Peter tied this law into the need for excellent communication and collaboration as a necessity in producing great software. I've personally seen this law at work in the many start-up companies I've had the honor of working with and, at the opposite end of the spectrum, in large corporate software endeavors (at some point I'll relate a project that had over 100 sign-offs).
Early-stage start-ups are typically small with a tiny organizational chart - the software produced is usually fairly simple structurally, and communication is more one-to-one due to the tight org structure. This ability to be nimble produces many advantages, especially the ability to go to market quickly. Larger organizations struggle to produce software with any speed - there are so many checkpoints and stakeholders involved that even small changes add complexity to the software and slow the company's ability to get to market. Large corporate entities are often bound to many revisions, checks-and-balances, approvals - you get the idea. Not only does the software fill with complexity, but releases become a nightmare.
Tying back to the class, the emphasis is on making small incremental changes and exposing those changes for approval to stakeholders early and often, so the team can adjust and do a better job meeting expectations. Not only do you produce better and more efficient software, but the release becomes less problematic - there are fewer surprises. I don't know how many times I've worked on a project where the smaller changes truly delighted the customer while the larger changes produced confusion and revolt - this is usually ascribed to poor training, but even taking that as a viable excuse, doesn't it amount to a lack of communication?

Monday, July 21, 2014

The Three Laws of Software Development - Ziv's Law

(previously published on LinkedIn)
During a recent class, Peter Saddington referenced three laws which collectively are known as the Three Laws of Software Development:
  1. Humphrey's Law
  2. Ziv's Law
  3. Conway's Law
I wrote about Humphrey's Law in my previous post.


Looking up "Ziv's Law" you'll find it in Jeff Sutherland's book "Exploring Scrum: The Fundamentals." Ziv's Law states that software development is unpredictable and that documented artifacts such as specifications and requirements will never be fully understood. First, we all need to agree that new product design is the fundamental nature of software development - whenever we're creating something new, no matter how close it is to something we've done before, it's impossible to be confident of all the constraints and needs of the new development. Sure, we can do some affinity comparisons to something we've done previously, but there's always some variability (if there weren't, why wouldn't we just reuse what was built the first time?), and that variability makes it virtually impossible to predict the outcome with much accuracy. Further, the more the new development diverges from the previous work, the less likely an accurate prediction of the outcome becomes.


This law, much like Humphrey's Law, is about uncertainty - specifically, the precision of functional and non-functional requirements in software development projects. No matter how well they are understood and written at the time of scoping and initial refinement, things tend to change while the actual development is going on (whether because the business need changes or because of changes inherent to the development process itself that impact complexity). This law is important because it's fundamental to understanding how the agile process, and scrum in particular, addresses this issue.


With scrum, because the intervals are fixed and the stages of development are divided into bite-sized chunks that fit into the interval, there's more room to make adjustments in subsequent intervals (sprints) to allow for what is discovered. This also brings into play the experience of those working on the user story (i.e., your technical team), so you're already getting additional input into the outcome. All this leads to a much more satisfying final product, both for those involved in the process and for those who are supporting or needing the results of those efforts - and after all, that's what we are trying to do: delight the customer (a general term for whomever we're doing the effort for).


So why is this law important? It has to do with failure and what we do as a result of failure. If we accept that what we are building in any one sprint isn't perfect, then we'll understand that the feedback we receive from the customer for what's been done can be incorporated into the next sprint, allowing us to make adjustments to a final, satisfying outcome. Small failures are OK and should be expected - it's the ultimate result adjusting to those small failures that will make us successful.


-- John

The Three Laws of Software Development - Humphrey's Law

(previously published on LinkedIn)
During a recent class, Peter Saddington referenced three laws which collectively are known as the Three Laws of Software Development:
  1. Humphrey's Law
  2. Ziv's Law
  3. Conway's Law
I had never heard of these observations referenced as "laws" and wanted to explore them further with my own research. Apparently Conway's Law is used in some scholastic software development classes but the other two are a bit obscure to me. We'll start with Humphrey's Law.




Looking up "Humphrey's Law" you first encounter a Wikipedia entry regarding something called "The Centipede's Dilemma" - what psychologists call the centipede effect or centipede syndrome. It describes what happens when a normal action is interrupted by the person's own awareness of the action. An example might be a baseball player who normally uses his training, muscle memory and a bit of instinct to swing and hit a home run, but because he's consciously aware of what he's doing, he over-thinks the action and strikes out. The effect is also called hyper-reaction or Humphrey's Law, named for English psychologist George Humphrey, who wrote about it in 1923 using a poem by Katherine Craster, usually titled "The Centipede's Dilemma" - yeah, I know I'm getting fairly esoteric here, but I think it's good to understand where this stuff comes from.


The Centipede's Dilemma
A centipede was happy – quite!
Until a toad in fun
Said, "Pray, which leg moves after which?"
This raised her doubts to such a pitch,
She fell exhausted in the ditch
Not knowing how to run.


The poem is whimsical, but it illustrates the main point, which the Wikipedia entry summarizes: "Thus, the eponymous "Humphrey's law" states that once performance of a task has become automatized, conscious thought about the task, while performing it, impairs performance. Whereas habit diminishes and then eliminates the attention required for routine tasks, this automaticity is disrupted by attention to a normally unconscious competence." Rather obscure, right? So the question remains: how does this fit with Humphrey's Law as referenced by Jeff Sutherland, co-creator of Scrum, in whose work the Three Laws of Software Development idea was presented?


Jeff Sutherland's version of Humphrey's Law states that "users don't know what they want until they see working software" or, to paraphrase Peter Saddington, "People don't know what they want until they see what they don't want." This seems to be the opposite of what "Humphrey's Law" means to psychologists, although the two do share some common threads:
  1. Both ideas are about human response to exterior stimuli
  2. Both ideas are observations which seem true.
  3. Both contain an element of subconscious objectivity
  4. Both share the same name
I had originally concluded that they are the same law, only viewed from the perspective of the observer in the software development version and from the perspective of the subject (the actor, aka the centipede) in the psychologists' version. Of course this was purely conjecture - and someone commented on my thread on LinkedIn to say that it's actually named for a different Humphrey - a software developer.


So what about application? These types of laws represent an observable problem, so using the adage that one must first recognize a problem before it can be addressed, the next step is to identify how Humphrey's Law impacts the development of good software and to use various methods to resolve the issue. Peter did a great job illustrating the problem using an "I want a car" technique that I hope to demonstrate to the product guys and my teams at Altisource Labs. I'm working through this now in the form of an interactive class and demonstration which I'll share in a subsequent post. In the meanwhile, I'll continue exploring the other two Laws in my next posts.


-- John

Building Personas

(also published on LinkedIn)
From my last post you may have garnered my desire to begin using some of the Pragmatic Marketing training and to incorporate the best ideas into my current projects. Specifically, I began creating personas to use within the user stories and to support each development effort by providing additional context for the team members. When I broached the idea with the development team I was initially met with silence. On further discussion with some of the team members I received a rather interesting response: "Well, yeah, those are fine for UI but they really aren't useful to a developer." I met this with a bit of dismay - I really thought they would buy into the idea of providing additional context, as the current development groups (we went from a highly efficient team to several inexperienced teams) need all the context they can get. The counter-argument I received had to do with security settings and how typical UI-type personas don't provide enough information to determine access controls. It's interesting how some of the same members who desire efficient agile teams argue against using tools to make them so.




To reinforce the idea, a recent CSPO class emphasized the same thing - provide personas that can be used to help get the team into the mind of the user. At this point I could use some hard data to support the idea (mostly to get buy-in from the team), so anyone out there reading: if you have some fact-based analysis I can use, it would be helpful in supporting my ideas. In the meanwhile, I'm continuing to build my personas to support epics/stories going forward - hopefully this will nudge the team into being more accepting. As to the question of security settings and access controls, it seems that if the persona has a well-defined description and those details are important, the information can be added to the persona description, no?




In any case, I'm currently defining personas for about a dozen different user-interacting roles in the current application. More updates to follow.




-- John

Saturday, July 19, 2014

Pragmatic Marketing and Scrum

(a slightly different post published on LinkedIn)
For the past 13 or so years I've been working in a variety of software development shops practicing Agile, generally some modification of scrum. My focus has always been to take the ideal scrum implementation and continue to make it better - I can say that I've found only a very few shops that practice the "ideal" version of scrum as conceived by those who coined the term. At some point I'll get into that a bit more, but the purpose of this article and a few to follow is to take what I've learned most recently from the Pragmatic Marketing training and apply bits and pieces to my current job situation.

First a bit about Pragmatic Marketing.
I believe I first found Pragmatic Marketing in the early 2000's. At the time I was a Director of Technology (my role was more of a Product Manager one, I just didn't realize it yet) for a small start-up that had begun making some headway - in fact, we were just turning a profit and had begun to expand the various departments of the company, including Product Management. We hired a very good upper-management PdM who, when presenting his ideas, suggested that we take a look at the materials provided by Pragmatic Marketing (I'll use PM going forward) to guide our expansion plans. At the time you could sign up for a free PM print magazine, published I think either quarterly or bi-monthly, and by signing up you had access to the other materials and past issues as PDFs (I may still have those stored in my files somewhere). After reviewing most of what was there we started to employ many of the ideas and the terms associated with the course - it basically gave us a base of terminology that was previously missing. Of course, being a start-up it was hard to justify taking the paid course, but I had always been curious about it.

Next, a bit about the implementation of scrum in my current position.
I function as a Product Manager for two different development groups. The first works on a legacy platform that is primarily in maintenance mode, so it's a rather small team practicing kanban, mostly doing user stories relating to reporting needs for compliance and regulatory changes. At some point this team will be absorbed into other efforts, but at present there's enough work to keep them busy. The legacy platform has been running for over 10 years and frankly many of the efficiencies from 10 years ago have been exceeded by company demand - which leads to the second group. Group two is a complete redesign using a different technology stack and a common services platform. The second group has gone from one team to five, shrunk back to a single consolidated team, and is now expanding again into three teams. I share the product owner role with another product manager and we're led by a Director who acts more as a principal product owner (his focus is more on strategic direction and he has responsibilities tied to the P&L). On my scale, these newly defined teams are about 70% of where I think they should be from a classic scrum perspective (it continues to get better but there's still a bunch of work to do). The group two team started with scrum and was relatively successful up to the initial release of the new product. Recently the expansion into two teams (and building to three) has led to inefficiencies, and our velocity is shot.

A few weeks ago I attended three Pragmatic Marketing courses: Foundation, Focus and Build.
The main thing I took away from the classes was that I already knew much of the information (you would expect that after 13 or so years doing product, right?) - what the classes actually provided was a more organized framework for what I already knew. They also provided some insights and approaches that I hadn't tried. Now we get into what I've learned and what I intend to implement to improve our teams.

There was an interesting graph in the Build class that I want to share, which is relevant to what I've described so far. It has to do with context, and how context impacts the effectiveness of teams.

Basically, the first tenet is that the more context a team has, the more efficient it becomes in executing what the market needs.
The better the context, the less that needs to be expressed as requirements for a team. That's not to say there doesn't have to be documentation, but due to the past shared experiences of the core team, the high trust levels between all participants, and the familiar processes, the teams reacted well, quickly and accurately to requests coming from the market. This is really the ideal state of a good scrum team, where from a PdM perspective just enough detail is included in a user story. This was well illustrated by the velocity of the teams while executing the initial release of the product.

The second tenet is that the less context the team has, the more content and requirements product needs to produce for the team.
The analogy is taking a software feature to a contract shop - because they have very little context, they need much more information in order to be successful, which ends up being lots of artifacts, requirements and specifications. You can see that by expanding our previous team with many new, novice members, we should have adjusted as product managers by providing much more context to help them ramp up and be successful. We had the unrealistic expectation that these newly formed teams would magically become as quick and efficient as the original teams within a couple of sprints (the idea is to spread the experienced members across the new teams as leadership, so they can impart the technical information needed to get the teams up to speed), and we didn't consider that the amount of context provided by product needed to increase correspondingly. So one simple graph and about 5 minutes of discussion in my PM class provided a world of context for me - first in identifying the problem and then in taking some steps to improve the situation.

So what to do?
There were several recommendations that I drew from the classwork - remember that the main thing I'm trying to fix is a problem of context: how to get all the business information that product has packed into our brains and disseminate it to the new team members. When I describe a new feature idea I usually use flow diagrams to show how the process could be defined, but I wasn't spending much time describing the why - I'm now scheduling more time to go over the market need so that the teams are all on the same page. That's a start and seems obvious, but until you recognize there's a problem, how can you improve?

One thing that was emphasized by the instructor, Steve Gaynor (who was fantastic, by the way) was that there was a lot of information addressed in the class, and instead of trying to apply everything, just pick out 2 or 3 things and focus on making them work. So the other classroom idea I found interesting in regards to the problem was the concept of personas.

As with most of my Product Management friends, I usually include the "actor" (a representation of who is interacting with the feature) in my user stories. The concept of a persona is a bit different - it's to create a fictitious "who" that's more fully defined. Using real data (market surveys were one suggestion for gathering the information), the product group creates a persona or personas representing the most likely candidate(s) for the use of the product. Because there's much more background information provided, the user becomes more than an actor, and the background suggests the needs of the user to the team doing the user story. So instead of "As an admin user" you might begin your story statement with "As Greg Jones." There's an entire persona called "Greg Jones" that represents the characteristics of a typical admin user, including experience with the software and the business, background and expertise, the amount of time spent on tasks, etc. - whatever is relevant. By expressing personas that read as real people, the team gets context from the very beginning of the story and has a better shot at satisfying or delighting the persona when developing the feature. Sounds good, right?

So I started stubbing in the personas for a particular business unit late last week and will continue this coming week. I'll report on this and some of the other ideas that fell out of the PM training in my next post.

-- John Eaton

Tuesday, July 1, 2014

The Accidental Product Manager

I've been doing Product Management for over 13 years. When I reflect on my entry into the field, I find that there isn't any single event that led me there. Even my entry into software development was roundabout, having come from a graphics background. I've found this is a common trait among other product managers I've met or worked with - it seems we all started elsewhere and then gravitated towards what we like best about software development: the actual design and creation of something new. Continuing on this thread, I haven't seen any specific courses for product management (sure, there are certification courses from companies such as Pragmatic Marketing, but I've never seen course work offered in universities to prepare a student for software product management). I have, however, encountered some common characteristics of the best PdMs that are worth sharing:
  1. Most of the professional product people I've met enjoy a high degree of creativity. The best excel at designing new products that not only provide a great deal of interactivity with the user, but also exceed the perceived need - it takes genuine creativity to accomplish both goals.
  2. Most product managers are super-detail oriented, with the ability to talk high-level, but then dive deep into the minutiae. There's no room for sloppiness or a lack of detail, and you'll rarely find a PdM who exhibits those traits.
  3. The best product managers make the user the primary driver of the application and strive to enhance the user-experience. If you haven't heard of heuristics you probably shouldn't be a product manager.
  4. Contrary to the belief of many, a good PdM also talks frequently with his/her technology teams to make sure what's being asked for doesn't have an extreme cost (not just money, but also time, resources and a consideration of the overall technical design). If you're not talking to your teams you should consider doing something else.
  5. Product managers should strive to stay abreast of the latest technology - at minimum to keep things fresh and ideally to take advantage of new opportunities. This also includes training in disciplines outside of his/her comfort zone. If you haven't done Agile you should look into it and learn about being a product owner in a scrum team.
  6. I don't think I could ever take myself seriously, or call myself successful without having actually taken a product from design all the way to market. I both count my wins and learn from my mistakes.
Just a few thoughts - thanks for reading!

-- John Eaton