If you want to be a bad product manager, don’t bother measuring the results of product development work. Just put new features in there and don’t see whether they make a difference. If a customer asked for it, it must be worth doing. If people really don’t like it or if it’s hurting the product, you’ll probably hear about it pretty quickly. Plus, the market and competition are changing so quickly that you don’t have time to think about measuring the impact of new features after they are implemented. Once the work is done, you need to focus all your attention on the next set of features to add.
If you want to be a good product manager, measure the impact of the product changes you implement. Product managers need to constantly evaluate the changes being made to a product and measure whether they are successful.
Too often, product managers implement new features, functionality, or make other changes to a product without a true understanding of why these changes are being made. The product manager may think he or she has a logical reason for requesting the change — a specific customer asked for the feature, an engineer suggested the change, senior management requested it — though that is just part of the picture.
Even if there is a legitimate reason why the change should be made (and good product managers should know that none of the above reasons are truly legitimate in and of themselves), the product manager has a responsibility to go several steps further and quantify the impact of the changes. Though this may seem to create more work for the product manager, it will in fact make his or her job much easier. Product managers need to be able to quantify “success” for any given change, rule out changes that are less likely to be successful, and measure all work which is implemented. This allows the product manager to ensure a higher likelihood of success and also demonstrate the impact of the change to gain support for future changes.
Beyond just coming up with an idea and implementing it, product managers need to go through several steps before engaging in product development:
- Define the expected impact of the changes. Different products and different changes will have different impacts, though some popular measures are:
- increased usage
- increased revenue from existing customers
- new customer acquisition
- improved customer retention rate
- improved customer satisfaction
- increased market share
- references to change in blogs, media coverage, or analyst reports
- Establish goals for the changes. Once you have defined the specific measures, the next step is to explicitly state what you hope this change will achieve. Good goals are SMART goals (or MT goals, if you prefer), making it clear whether the goal was met or not.
- Determine how you will measure the impact. Though you may know the impact you expect or hope to see and have a specific quantitative goal in mind, you must be able to measure it to evaluate its success. For example, if you are considering adding a new feature to your web site, and your goal is to have 10% of your customers using the feature within 30 days of its release, you need to have web analytics in place to measure whether the goal is met or not (a rough sketch of this kind of check follows this list). Though this seems obvious, the work required to measure the impact of a change is often not considered until too late in the development process for measurements to be put in place, or, even worse, until after the change has already been implemented.
- Measure the impact and objectively evaluate. After the change is implemented, compare your actual results to your expected results. Were the results achieved? Why or why not? What could have caused the results? Are there additional changes which are needed? What was done well that should be replicated for future product changes? What did you learn which could improve your success in the future?
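To make the 10%-within-30-days example above a little more concrete, here is a minimal sketch, in Python, of the kind of check it implies. Everything in it is assumed for illustration: the event log, its field names, the event name, and the customer counts are invented rather than taken from any particular analytics tool.

```python
from datetime import date, timedelta

# Hypothetical event log: one record per customer action. The field names
# (customer_id, event, occurred_on) and the event name are invented for this sketch.
events = [
    {"customer_id": 1, "event": "new_feature_used", "occurred_on": date(2008, 5, 3)},
    {"customer_id": 2, "event": "login", "occurred_on": date(2008, 5, 4)},
    {"customer_id": 3, "event": "new_feature_used", "occurred_on": date(2008, 6, 20)},
]

total_customers = 40                 # assumed size of the active customer base
release_date = date(2008, 5, 1)      # when the feature shipped
goal_rate = 0.10                     # goal: 10% of customers use the feature...
window = timedelta(days=30)          # ...within 30 days of release

# Distinct customers who used the feature inside the measurement window.
adopters = {
    e["customer_id"]
    for e in events
    if e["event"] == "new_feature_used"
    and release_date <= e["occurred_on"] <= release_date + window
}

adoption_rate = len(adopters) / total_customers
print(f"Adoption after 30 days: {adoption_rate:.1%} (goal: {goal_rate:.0%})")
print("Goal met" if adoption_rate >= goal_rate else "Goal not met")
```

The code itself is trivial; the point is that the instrumentation behind it (the event log and the release date) has to exist before launch for the goal to be measurable at all.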
Many may resist this process for various reasons. There are several common objections and responses:
- Objection: “There is just too much work required to go through all these steps for each feature! We can’t define the impact, establish goals, figure out measurements, and then actually measure everything we do! If we did, we would only be able to release a fraction of the number of changes that we do now.” Good! The job of a product manager is not to make changes to the product just for the sake of making changes. Products must have goals, and the product manager must focus on meeting and exceeding those goals. Adding new features is not a goal; increasing revenue is a goal. If this slows the process down a bit, that may be a good thing. Instead of money being spent on 10 mediocre new features, money may be spent on 1 good one which has a much bigger impact than all 10 mediocre ones combined.
- Objection: “We don’t want to set goals if we’re not sure we can meet them.” If the goal of a new feature is to increase revenue by 10%, and your new feature increases revenue by 5%, you may not have reached your goal, though you still increased revenue by 5%! Sure, it fell short of the target, though it is still well above where you were before. Instead of criticizing the inability to meet the goal, evaluate whether the goal was realistic, what could have been done differently to meet the goal, what changes can be made now to get to the goal, and what can be done differently in the future.
- Objection: “We don’t have a way of measuring the impact of our changes.” Some changes are harder to measure than others, and it may not be practical or worthwhile to measure every single minuscule product change. However, without metrics and measures, product managers are “flying blind.” Product changes require an organization to invest time, money, and other resources, and there is an expected return on that investment which — sooner or later — you will need to demonstrate. Establishing and tracking metrics will allow you to create a better product and identify problems earlier. It is in the best interest of your product, your organization, your customers, and you as a product manager to determine how those measurements can be put in place.
- Objection: “We can’t agree on what the impacts and goals should be.” If that is indeed the case, then avoiding these steps completely will not solve that problem. This process may be difficult to start, though a team will only get better at it over time. It may not be possible to get complete agreement on all of the details, though going through the process will identify where goals are not aligned. For example, the marketing manager may want to implement changes which will generate more new customers for a web application, while an engineer may want to implement changes which will make the application run faster. In this case, getting even general agreement on areas on which to focus would be beneficial.
Still, in light of these objections, there is value in going through the first 2 steps outlined above even if there is no way to effectively follow through on steps 3 and 4 just yet. Discussing expected impact and defining goals with the relevant stakeholders is an incredibly useful exercise. Rather than just delving into the details of how a change will be made, as often happens, you are really focusing the conversation on why the change should be made at all. Getting in the habit of going through this process is beneficial, even if it is not possible to completely track or follow through on measurements, as it can establish the mindset for approaching product development going forward.
Implementing metrics and measurements may be an intimidating and overwhelming step for a product manager. However, if done properly, it can potentially lead to enormous improvements in the product. It will make product development less contentious and more evidence-based, leading to a more efficient and effective management process. Additionally, it can make you as a product manager more effective, since your time and efforts are focused on areas which will provide value, and you will be able to show the value you have created — a true measure of a good product manager.
Your article is very interesting, and this topic is one that I cannot emphasize enough when talking to stakeholders in my company.
Just to add to your article – in real life it does not always work out. A typical case is in certain new markets where users themselves are not sure of what they want. Investing too much in trying to find the right solution may not be a worthwhile exercise. A way to deal with this situation is to work closely with engineers to come up with quick prototypes and let customers play with them until it is possible for all involved to measure the impact and applicable use cases. The key here is that there should be much less resource/cost investment – minimal QA. Once the prototype and the features therein get validated, you have your next candidate to be the product (or a new product feature). This has always worked for me …
Wow, you have pretty good steps on product management.
When you work in a SaaS environment, every view has its own page, so you can use web analytics to see if the view is used, and potentially what would need to be changed in the view. Hopefully, you have delivered one and only one minimal marketable feature in a given view. Seeing use also translates into knowing the financial value of the view.
Jeff,
Excellent post.
Your opening statement is especially true: “Too often, product managers implement new features, functionality, or make other changes to a product without a true understanding of why these changes are being made.”
I think one of the reasons is that the discipline of Product Management in high-tech is relatively young. Whereas other disciplines such as Engineering and Sales are quite structured, Product Management is often far less so.
But I think this is changing for the better. I believe articles such as this play an important role in taking the practice of Product Management to the next level. Keep ’em coming!
– Raj
Accompa – Affordable Requirements Tool for Product Managers
Somewhere here I read someone saying that changes to their infrastructure were impacting their product roadmaps. Separating your technology, platforms, products, and form factors with adaptor patterns should solve this problem.
Changes to your technologies and those of your suppliers may impact your delivery dates, but the platform floats on top of the technology, and your products float on top of your platforms, so functionality should remain stable. This should give you stability at the UI, while everything else is changing underneath it.
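A toy sketch of the layering this comment describes may help. All class and function names are invented for illustration; the point is only the shape of the adaptor idea: product code depends on a stable platform interface, while adapters absorb technology changes underneath it.

```python
# A toy illustration of the layering described above: product code talks only to a
# stable platform interface, and adapters absorb changes in the underlying technology.
# All names here are invented for the example.

class LegacyStoreV1:
    """An underlying technology with its own API."""
    def save_record(self, key, value):
        print(f"v1 store: {key}={value}")


class ReplacementStoreV2:
    """A replacement technology with a different API."""
    def put(self, key, value):
        print(f"v2 store: {key}={value}")


class StorageAdapter:
    """Platform-level interface the product depends on; this is what stays stable."""
    def store(self, key, value):
        raise NotImplementedError


class V1Adapter(StorageAdapter):
    def __init__(self, backend):
        self._backend = backend

    def store(self, key, value):
        self._backend.save_record(key, value)


class V2Adapter(StorageAdapter):
    def __init__(self, backend):
        self._backend = backend

    def store(self, key, value):
        self._backend.put(key, value)


def save_user_preference(storage, user, pref):
    """Product-level functionality: unchanged no matter which technology sits underneath."""
    storage.store(f"pref:{user}", pref)


# Swapping the technology only requires a new adapter, not product or UI changes.
save_user_preference(V1Adapter(LegacyStoreV1()), "alice", "dark_mode")
save_user_preference(V2Adapter(ReplacementStoreV2()), "alice", "dark_mode")
```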
Excellent post! Thank you.
I also agree with the mitigating comments made regarding some specific situations (a new product has a different evolution curve than an already existing one). David, your comment regarding SaaS is a good way to know. But making decisions based only on this measure can lead to wrong conclusions, as some information may only be used once a month but still be more than useful… 🙂
I love this topic and agree that this step is so important if you want to maximize the intellectual benefit your company derives from each and every release. Moreover, beyond measuring and evaluating the results, I have noticed that such results have a way of getting buried in the flurry of post-project paperwork and post-mortems. I like to fold this information into the long-term record that the product roadmap provides, recording quantitatively and qualitatively what worked and how we could even better target our objectives.
Do you really hear this? “We don’t want to set goals if we’re not sure we can meet them.” That’s being silly – but maybe it’s common in an organization that does something like bonus people on absolute status against metrics.
And on this one, “We can’t agree on what the impacts and goals should be” – I feel like that’s a bigger problem than just measuring it – it would tell me they maybe shouldn’t be building something until someone makes a decision over what problem they are solving and how they are to solve it.
On a related topic, I have some thoughts on how you measure a product manager’s performance – there is a lot of overlap here with your ideas, I think.
http://requirements.seilevel.com/blog/2006/01/measuring-product-manager-performance.html
Anyway, it’s a great topic.
The heroes in a company I used to work for were said to have said, “we can’t get there with the people we have.” Translate that to: lay off the current crowd and get a new crowd.
The primary goal in any company is hitting some dollar figure, and making a growing contribution towards that every quarter. This goal is always there even if nobody sets it. Otherwise, you don’t have to wait for the next crowd.
Joy — I’ve heard all of these objections, though usually not verbatim.
No one ever comes out and says “We don’t want to set goals if we’re not sure we can meet them” directly — it’s usually part of a discussion where they talk about how their group can do the right work to meet the goals; they just don’t trust that another group will pull their weight. Or, there will be caveats about the methods we’re using to estimate revenue or traffic or ROI, and how they’re not tested or reliable.
When I hear that, it’s always just shorthand for someone not being willing to sign up for a goal since they’re not sure they can meet it. If they were 100% confident they could meet it, they wouldn’t have all these objections. And it’s not just about whether you get a bonus based on it — it’s about not wanting to look bad, part of a bigger issue about people not admitting mistakes and not working towards continuous self-improvement.
And yes, “We can’t agree on what the impacts and goals should be” is part of a bigger discussion. I’ve found that often you don’t have that bigger discussion if you don’t bring up metrics. A new idea is proposed, everyone gets excited, you start building it, and the only way to get people to stop for a second and think about “why” we’re doing it is to ask about metrics, which leads into the discussion about goals.
When implementing a requirement, you might find that it impacts some other programmer’s dependencies and code. All of the impacted programmers are supposed to sit down and negotiate a solution. That solution may include partitioning the relevant problem and solution spaces, transferring responsibilities, or partitioning the requirement itself. This has to be done.
The same can be said about goals. If a goal is the responsibility of a larger organization, it will typically be broken down and allocated across the organization. The metrics have to be broken down and allocated as well.
Coupling and cohesion can be applied to goals to the same degree that they are applied to code. Appropriate coupling at the code level provides for encapsulation. Likewise goals, responsibilities, and metrics.
There doesn’t have to be a larger discussion. There does have to be someone that decides how the goal will be partitioned. That someone is ultimately responsible for the goal. The people or unit allocated a partition are responsible for the subgoal.
Without an intervening project manager, the partition within a team falls to the team lead, and across teams falls to the product manager.
Individual programmers in separate teams should be able to negotiate around their code without escalation.
If they do not want to take on the work because of some fear of failure, then the underlying issue is risk. In the insurance industry, risk is spread out and shared. Each entity holding some portion of the now spread-out risk will feel less impact if that risk is realized. So, like the responsibility negotiations between programmers allocating a requirement, the same kind of breaking down and allocating reduces the risk.
Another approach to the risk is evolutionary development where several teams address the same risk independently. Then, at different points in time, the progress of each approach is weighed. Some approaches can be eliminated. Then, continue and iterate this process until the functionality is delivered.
Ultimately, the delivered functionality will have to meet the non-functional (“How Well”) requirements, or the release criteria might have to be reduced.
The reality is that nobody fails. Failure is up to management, not up to the people doing the work. Sometimes you can meet the goals at the specified level of performance and still fail. Failure is not tied to having goals or realizing risks, so the developers should be fearless in their pursuit of functionality.
No no no…
If you are going to measure your impact via platitude-scale metrics like “increased customer retention”, then this is obviously only applicable to really, really big new features… otherwise the relationship between feature and customer retention rate is tenuous at best.
This is fine for the big features that come along (for most of us) not that often, but what about the 300 small-to-medium features that make up the other half of the PM’s day?
Lauren has a point. Does the overhead involved in measuring a feature’s impact make sense for the smaller features? Since you suggest measuring the impact of features, do you have any metrics on the cost of this process? I’d be very interested in this information. Thank you.
Since revenue and other financial metrics are generally important to a company, can you offer suggestions on how to accurately measure the financial impact of a feature? For example, how do you measure the financial impact of adding a new report to a product? It certainly is buyer-facing, but how can we measure the feature’s effectiveness in signing up new customers (especially when the report is not on the product list as a separate item, but rather simply bundled into the existing product)?
If you release in an Agile manner, you will end up releasing one minimal marketable feature at a time. That would be a feature set focused on enabling one task. In other methodology environments, you would release larger feature sets.
Each release should have its own sales cycle. How many upgrades did you sell? How fast? How many new customers or returning customer did you have?
How many features fell from points of difference to points of parity? How many new features added to your points of difference? How many features moved from points of contention to points of difference, or points of parity?
You should be able to score each release in these ways. You may not get it down to a feature, but you would know how your feature mix is driving revenue.
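As a rough illustration of what scoring a release in these ways might look like, here is a small sketch. The fields mirror the questions above (upgrades, speed, new vs. returning customers, and features moving between points of parity, difference, and contention); all of the names and numbers are invented placeholders, not real data.

```python
# Illustrative per-release scorecard combining the sales-cycle and feature-mix
# questions above. All figures and field names are invented placeholders.
release_scorecard = {
    "release": "2.4",
    "upgrades_sold": 180,
    "days_to_half_of_upgrades": 45,   # how fast the upgrades came in
    "new_customers": 22,
    "returning_customers": 158,
    "difference_fell_to_parity": 3,   # features that fell from difference to parity
    "added_to_difference": 2,         # new features that added points of difference
    "contention_resolved": 1,         # features that moved out of points of contention
}


def summarize(card):
    # A simple roll-up: are we adding differentiation faster than we are losing it?
    net_differentiation = card["added_to_difference"] - card["difference_fell_to_parity"]
    print(
        f"Release {card['release']}: {card['upgrades_sold']} upgrades, "
        f"{card['new_customers']} new customers, "
        f"net differentiation change {net_differentiation:+d}"
    )


summarize(release_scorecard)
```

Whether that particular roll-up is the right one is a judgment call; the value is in tracking the same handful of numbers release after release so the feature mix can be tied back to revenue.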
Start with “Software by Numbers.” Then, read “The Executive Guide to Boosting Cashflow and Shareholder Value,” by V. Rory Jones.
You have to know the value of a feature before it gets put into the development pipeline. It might be that the feature is a parity feature that provides no competitive value, so you could skip it entirely.