[PMC Weekly Insight] The tyranny of the easily-measured

Philip Morgan

(Programming note: I'm taking a short 2-week break from my lengthy series on survey marketing to talk about today's topic, which is contextually relevant, but is still a break from reporting directly on the research project.)

What gets measured gets managed (or improved), they say.

What does this mean about what is easily measured vs. what's not easily measured? We have this conventional wisdom that in order to control or improve something, it must be measured. But does this actually mean what's easily measured becomes more important, or will become institutionalized as more important?

Just the framing of that question suggests the answer: no, of course not. Ease of measurement doesn't correlate to importance of what's measured. In fact, I'd bet that ease/difficulty of measurement correlates directly to frequency of measurement, not importance.

It's probably obvious: what's most easily measured is what's going to get measured most regularly. But this seems to me like a better way to decide what to measure:

Reducing the risk on high-impact things is a great way to think about what to measure. Even if all you knew about was actual impact -- and the risk component was a mystery -- you'd still have a good pointer to where to deploy measurement effort. Look for what's actually impactful, and measure that.

Ideally, measure what's high impact and high risk, even if it's not easy to measure. Here are some examples of this high-impact, high-risk combination:

  • As I'm writing this, it's the 50th anniversary of the Apollo moon landing, a classic high-impact, high-risk project. In fact, the term "moonshot" has come to stand for high-impact, high-risk efforts.
  • Extending into a new market or wholesale re-positioning.
  • Building something new where that new thing might affect an important metric or aspect of the business's performance. For example, imagine a large subscription-based business. Small changes could have large revenue effects, because those small changes are multiplied across a large subscriber base. This is the principle by which conversion rate optimization produces leverage.

If it's not easy to measure this high-impact, high-risk stuff for your clients, help them measure it anyway! Or invent ways for them to measure it.

TEI participant Stephen Kuenzli has just published a very interesting piece of original research that illustrates inventing a way to measure the difficult-to-measure: nodramadevops.com/state-of-application-secret-delivery-and-audit-practices/

He started by asking some questions about secret key handling in a cloud environment, and found a strong correlation between how securely this is done and the level of satisfaction that DevOps engineers have with how it's done. This suggests that you can ask the safer, easier-to-answer question "How satisfied are you with your secret-management practices?" and, if the answer is negative, believe with some confidence that there are problems with the security of secret key handling at that organization.

In simple terms, Stephen has uncovered a useful, valid proxy measurement. This is a great example of what I mean when I say you can invent ways for clients to measure what's important, even if it's not easy to measure. Also... read the whole report! It's interesting, and a great example of how you do a scrappy, self-directed research project: nodramadevops.com/state-of-application-secret-delivery-and-audit-practices/

Stephen took on this research as a speculative investment in his expertise and authority. If a client with a decent budget had commissioned it, I could see Stephen charging $50k or more to do the exact same research, but with the client owning the resulting IP. Will doing this as in-house research produce similar or greater ROI for Stephen's business? We shall see! This is speculative stuff, after all, and research like this is an investment that takes time to pay off. But if Stephen's client LTV is, let's say, $50k or greater, then all it takes is the research influencing one prospect to become a client for it to produce ROI similar to or greater than if he had done it as work-for-hire for a single client.

I've noticed a few places where the tyranny of the easily-measured prevails:

Email marketing software. Opens, clicks, subscribes, and unsubscribes are the most easily measured. But how valuable are they?

Here's one suggestion of a not-so-easily-measured thing that would -- in the context of your consulting work -- be far more valuable: how many companies have 3 or more people on your email list? Or here's a similar metric, but expressed as a time-based event: notify me when 3 or more people at the same organization join my list within a defined period of time, like 1 month.
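If your email platform doesn't offer this, it's not hard to build against an export of your subscriber list. Here's a minimal sketch of the time-based version, assuming a hypothetical list of (email, signup time) records; a real version would pull these from your platform's API and filter out free-mail domains like gmail.com, which don't indicate an organization:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical subscriber log -- in practice this would come from your
# email platform's API or a CSV export.
signups = [
    ("ana@orga.com",  datetime(2019, 7, 1)),
    ("ben@orga.com",  datetime(2019, 7, 10)),
    ("cal@orga.com",  datetime(2019, 7, 20)),
    ("dee@orgb.com",  datetime(2019, 7, 2)),
    ("eve@gmail.com", datetime(2019, 7, 3)),
]

def org_signup_alerts(signups, threshold=3, window=timedelta(days=30)):
    """Return domains where `threshold` or more people joined within `window`."""
    by_domain = defaultdict(list)
    for email, ts in signups:
        by_domain[email.split("@")[1].lower()].append(ts)

    alerts = []
    for domain, times in by_domain.items():
        times.sort()
        # Compare each signup to the one `threshold - 1` positions later;
        # if that span fits inside the window, the domain qualifies.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                alerts.append(domain)
                break
    return alerts

print(org_signup_alerts(signups))  # orga.com qualifies: 3 signups in 19 days
```

The threshold and window are the interesting knobs: tighten the window and you're detecting active word of mouth; widen it and you're measuring accumulated interest at an organization.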

Why is this not-so-easily-measured thing 1 valuable? Well, ask yourself what it might mean if 3 or more people from the same org join your email list within a relatively short period of time. We can't know for sure in every instance, but in some instances it happens because Person 1 at Org A joins, finds your email list relevant, and either forwards an email to someone else at Org A who then joins, or verbally tells others at Org A to join. Either way, your message and PoV are spreading by word of mouth at Org A. What does this WoM mean? It might mean the org has a need that relates to what you talk about on your email list, and there's an opportunity to serve that need more deeply with a paid engagement. 2

There are a lot of "might" and "may" qualifiers in the previous paragraph, but that's OK. The core idea is valid, which is that the most easily measured stuff in email marketing software is less valuable than things that are less easy to measure.

The less easy to measure stuff also tends to be specific to a niche use case, which might also explain why it's not baked into the software. For example, a retail e-commerce site might have no use for reporting or alerting that tells them when 3 or more people from the same organization have joined their email list. That might be noise to them, despite being valuable signal to a consultant. I could certainly see why a huge email marketing platform like Mailchimp might avoid baking niche reporting needs into the core software. It's too weird, or confuses the question of who the software is for (Mailchimp's answer: "everyone!"), or creates some other undesirable second-order consequence. If Mailchimp functions like a conventional platform, then even if they do recognize the value of measuring less easy-to-measure things, they'll choose to leave the task of actually doing that measuring to third-party integrations.

To summarize: I understand many of the good reasons why the not-so-easy-to-measure stuff is left unmeasured. But there's still significant value in looking past the easily measured to the important-to-measure. 3

Next example: social media. The easy-to-measure stuff here is: quantity of likes, comments, and re-shares. The not-so-easy-to-measure stuff is: the actual impact of your thinking as delivered through social media posts or your commenting on other folks' posts. 4

I could imagine a world in which a social network like LinkedIn deploys natural language processing to run sentiment analysis on the comments on each post and, in addition to simple, dumb "engagement" metrics based on likes, comments, shares, and reach, adds a positive/negative/controversial sentiment metric to its reporting on the performance of a post. This would not get us much closer to understanding the actual impact of the thinking expressed in a LinkedIn post, but it would be an incremental move away from the stuff that's easy to measure and towards the more important but harder-to-measure stuff.
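To make the imagined metric concrete, here's a toy lexicon-based sketch of the idea. The word lists and the comments are entirely made up for illustration; a real platform would use a trained model, but the reporting concept -- rolling per-comment sentiment up into a post-level label -- is the same:

```python
# Tiny made-up sentiment lexicons -- a real system would use a trained model.
POSITIVE = {"great", "insightful", "agree", "love", "helpful"}
NEGATIVE = {"wrong", "disagree", "nonsense", "bad", "misleading"}

def post_sentiment(comments):
    """Classify each comment, then label the post's overall reception."""
    pos = neg = 0
    for comment in comments:
        words = set(comment.lower().split())
        pos += bool(words & POSITIVE)  # comment contains a positive cue
        neg += bool(words & NEGATIVE)  # comment contains a negative cue
    if not pos and not neg:
        return "neutral"
    if pos and neg:
        return "controversial"
    return "positive" if pos else "negative"

comments = ["Great point, very insightful", "I disagree completely", "Love this"]
print(post_sentiment(comments))  # mixed reactions -> "controversial"
```

Even something this crude distinguishes a post that provoked argument from one that was politely ignored, which is already more informative than a raw comment count.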

There are certainly other places where what is easily measured wins out over what is important to measure, but those are two easily understood examples.

I want to encourage you to push through what's easily measured to what's important to measure.

Let me leave you with this, which I see as a wonderful allegory about the importance of knowing what to measure, which we could also think of as the importance of knowing where to look when things go wrong: www.nytimes.com/interactive/2019/07/16/world/europe/notre-dame.html


Here's what's been happening on my paid Daily Insights email list:

[display-posts posts_per_page="3" include_excerpt="true" category="daily-insight"]


  1. What exactly is the threshold beyond which something is not-so-easy-to-measure?
    For most SaaS products, it's measurement that involves a third-party tool or custom code. In other words, if it's built into the software, it's easy to measure; if it's not, it's not easy to measure -- again seen from the perspective of most users.
  2. Those who suffer from imposter syndrome will first invent a negative explanation for why multiple people at Org A are joining your list. The Resistance will say: "It's actually so they can all laugh at this fraud with an email list!" It doesn't take too long to begin recognizing The Resistance's very distinctive accent and start to overlook its lame attempts at keeping you safe.
  3. Incidentally, I'm very proud of Paul Jarvis for turning off tracking pixels and link redirection on his Mailchimp account. He talks a bit about that in Liston's and my interview with him: offline.simplecast.com/episodes/paul-jarvis-on-building-an-audience-and-a-company-of-one
  4. I'd say measuring social media's role within a funnel is not trivially easy, but it's also not as difficult to measure as what I'm about to propose in the next paragraph.
    But, critically, few to no high-dollar consulting engagements are actually sold using social media, or a funnel involving social media. So that's why I'm avoiding talking about this exact measurement.