Weekly Insight

Think through three critical questions with me

My work is an ongoing effort to answer these questions, and to make the ever-improving answers I generate into actionable advice:

  1. How can self-employed professionals use specialization to create outsized value?
  2. How can ordinary folks cultivate and sell extremely valuable self-made expertise?
  3. How can average firms build a systematic innovation capacity to--over time--become exceptional firms?

If you'd like to look over my shoulder as I think, research, and iterate my way through these questions, I welcome you to do so by joining my email list.

I email this list once a week, on Tuesdays or Wednesdays, with a blend of my thinking, findings from my research and experiments, occasional irreverent humor, and interesting links. Of course, every email contains a 1-click unsubscribe link.

The goal of this list is to share my learnings, mistakes, and growing body of insight in a regular fashion. If you enjoy that kind of spectator sport, do join. If you'd prefer more frequent emails, you can subscribe for Daily Insight. If you'd rather wait for the books that come out of this effort every 3 to 5 years or the long-form articles every few months, do that instead. There's an RSS feed, if you prefer that kind of distance and anonymity.

If you do join my list, I hope you ultimately join me in the exploration it represents; I hope you move from spectator to teammate or member-in-spirit. Being a spectator is fine, of course. But wouldn't it be so much more fun to join me in the thinking, questioning, and growth I'm engaged in? To that end, I hope if you join this list you'll interact with me, question my assumptions, challenge my conclusions, contribute your experience & point of view, and generally expand the depth and nuance of what's happening within my list's digital walls.

All you need do is opt in, get a sense for the vernacular, and then press REPLY whenever you're ready to challenge yourself to think deeply and articulate your perspective on the issue at hand.

If you do this, we'll both win.

Sincerely,

--Philip.

Start getting weekly emails from me. They will help you generate leads for advisory services.

"I’m sure people have told you this before, but I would pay a subscription for your email insights. There really isn’t anything like it anywhere." -- Frank McClung, https://drawingonthepromises.com

"I feel bad not paying you for this email! This is amazingly valuable content that you sent me." -- Sasha Jolich, SDA Software Associates

Archive

[PMC Weekly Insight] Coding, and altered states

By Philip | July 10, 2019

Executing a simple research project of your own is SOOOOO EYE-OPENING1.

It’s not the first thing this will teach you, but eventually you will learn that research is akin to how George Bernard Shaw once described photography: like salmon swimming upstream, we make a lot of photographs because so few of them make it. With research, there are just so many opportunities to distort things with seemingly small mistakes. I’ve been seeing all the opportunities-to-screw-up that show up when I’m abstracting detail, which is a necessary part of this process.

As I’ve been comparing the coded results of my two samples, I’ve seen this over and over again.

Please bear in mind that I need to re-code some of this data, so the picture I’m seeing here will change, but let me show you the coded data from my two samples as it exists now.

The LinkedIn sample

Full resolution version of this file: pmc-dropshare.s3-us-west-1.amazonaws.com/Photo-2019-07-10-06-27.JPG

The list sample (meaning people from this email list who responded to the same set of questions as the LinkedIn sample)

Full resolution version of this file: pmc-dropshare.s3-us-west-1.amazonaws.com/Photo-2019-07-10-06-26.JPG

Altered states

This morning, I printed out my counted and ordered list of codes from my two samples (the two images I shared above), and just sat there for a long time comparing them. Comparing them, and asking questions about what I was seeing there, and looking for patterns in the data.

It’s fascinating to notice my emotions as I do this. Perhaps for some folks, the emotional layer of this kind of research isn’t consequential, but for me, it is. It’s really hard to get over this “you’re doing it wrong!” feeling when looking at the LinkedIn data because that group seems to invest in their career in ways I don’t get excited about. Then I swing back to a more objective state and start letting myself look past the emotions and trying to see what the data is actually saying.

I did the following: I looked at the underlying detail behind several of the codes that were at the top of the list of codes for my LinkedIn sample. For example, you’ll notice that in the LinkedIn sample, a-coding-learning — which means “the action of learning about coding” — is at the top of the list of codes for question 4. You’ll also notice that a-online-courses is at the top of the list of codes for question 4 when you look at my email list sample. I was curious about this.

What I found when I re-examined the underlying responses was that I’d coded things differently between the two samples. On the LinkedIn sample, which was the first one I coded, I had abstracted things more than with the email list sample. Said slightly differently, my email list sample used a more granular set of codes to represent the underlying data. The LinkedIn sample used a less granular set of codes, which means I abstracted the detail in that sample more. Same coder, different approach to coding.

Once I looked back at the underlying detail, this became clear. On the email list sample, when someone referenced reading a book or participating in an online course, I coded those as a-reading-books and a-online-course. But when someone made the same sorts of references in the LinkedIn sample, I coded both the books and course responses as a-coding-learning. Again, I abstracted responses more with the LinkedIn sample.
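If you hit this kind of granularity mismatch and don’t want to re-code everything from scratch, one option is to roll the granular codes up to the abstract ones so both samples sit at the same level of abstraction. Here’s a minimal Python sketch of that idea; the rollup map covers only the two codes mentioned above, and a real one would need an entry for every granular code:

```python
from collections import Counter

# Map the granular codes used on the email-list sample up to the more
# abstract code used on the LinkedIn sample. Rolling up loses detail;
# re-coding the LinkedIn sample downward would preserve it, but that
# means revisiting the raw responses.
ROLLUP = {
    "a-reading-books": "a-coding-learning",
    "a-online-course": "a-coding-learning",
}

def normalize(codes):
    """Replace each granular code with its abstract parent, if one exists."""
    return [ROLLUP.get(code, code) for code in codes]

# Invented example input, for illustration only.
list_sample_codes = ["a-reading-books", "a-online-course", "a-networking"]
print(Counter(normalize(list_sample_codes)))
# Counter({'a-coding-learning': 2, 'a-networking': 1})
```

The tradeoff is one-directional: you can always abstract granular codes mechanically, but you can’t recover granularity from abstract codes without going back to the underlying responses.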

Why this difference? It wasn’t intentional. It might have come out of some biased view based on my initial — mostly emotional — response to seeing the data for the first time a few weeks back. But I think it’s more likely an accidental outcome of coding the two sample groups at different times when I was in a different headspace. I really do think it’s just that innocent and simple2.

If I hadn’t walked backwards from the codes to the underlying data, I would have started trusting my codes. I would have trusted them too much, and started to see them as an un-distorted miniature of the original.

Those of you who are as old as I am or older, or those of you who are younger but have given yourselves the gift of caring about history, will remember microfiche machines.

Coding responses to a survey does not create a microfiche image of the underlying responses. Instead, it creates an abstraction of the underlying responses.

Next steps for me in this project are cleaning up my coding and figuring out next steps based on what I see in the cleaner data resulting from better coding.

And also, I’ll never be able to look at a news headline that says “New study shows $THING” without remembering the ability of abstraction to distort.

-P


Here’s what’s been happening on my Daily Insights list:

  • [PMC Daily Insight] Identifying your point(s) of view - (I wrote this up for a client recently, and thought it'd be useful to share with y'all as well.) A point of view is an opinion you can support with evidence, data, or a good argument. A PoV can be any of the following: A perspective on how something should be done. For example, consider:…
  • [PMC Daily Insight] Tom Waits’ 2-step career master plan - I really love the 2-step career master plan outlined by Tom Waits in his song, Goin' Out West. Here's a partial excerpt from a song that's hard to pull a partial excerpt from because the whole thing is so amazing: "They got some money out thereThey're givin' it awayI'm gonna do what I wantAnd I'm…
  • [PMC Daily Insight] More degrees of freedom than you realize - I can barely contain my excitement about Dan Oshinsky's new website, which is here: inboxcollective.com Some context first, before I explain what I find so exciting about Dan's site. Context: Dan Oshinsky is leaving a job as Director of Newsletters at The New Yorker, and setting himself up as a consultant helping clients with... newsletters.…

Notes

  1. If you want to read up on this experiment, check out what might be one of my longest — and potentially most boring — series of emails:
    1. philipmorganconsulting.com/pmc-survey-marketing/
    2. philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
    5. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-qualitative-analysis-of-the-de-biasing-survey/
    6. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-coding-and-counting/
    7. philipmorganconsulting.com/pmc-weekly-insight-its-a-grind-grind/
    8. philipmorganconsulting.com/pmc-weekly-insight-slogging-away-the-morning/
  1. I was about to cite the “hungry judges” research as an example of how time of day and other factors can affect decision making, but some quick Googling uncovered a raft of issues with this research and how it’s been interpreted, so I’ll hold off on using that research as evidence.
    But really, you don’t need research evidence to know that all sorts of things can affect our behavior, and with something like coding survey responses, it’s easy to see how these external or internal mental/emotional effects can alter how we do the coding.

[PMC Weekly Insight] Slogging away the morning

By Philip | July 3, 2019

Quick tophat: Could you help a client of mine with some research they’re doing? If you think of yourself as a consultant and know something about how you or your firm generates leads, please share your experience here: emailforexperts.com/clg-survey/. It’s a 5-minute commitment that will enrich a very valuable dataset Tom is developing and sharing back with those who participate.


A belated happy Wednesday to you!

It’s belated because I spent the entire morning coding responses from my list sample for my survey marketing experiment1.

It was more work than coding the responses to the LinkedIn sample because 1) more responses and 2) more verbose, thoughtful responses from y’all. And then I had to do some re-work because I got sloppy and overwrote a bunch of columns in my spreadsheet where I was doing the coding and had to do some of that over again.

I think y’all will enjoy the following two snapshots of the data from this coding. Remember, this is you, meaning these are coded responses that came from people on this email list who responded to my survey.

In response to the question “2. Please list ways you have you spent time and money for career development.”:

And one more that’s particularly interesting, this one in response to the question “7. Consider your entire career as a self-employed software developer and times you have gotten new opportunities, better projects, or other forms of career improvement. What do you think led to these improvements in your career?”:

I’ll have a more robust analysis for you in a week or two.

In the meantime, some interesting stuff that came up while coding these responses:

Do you code clearly bad-faith responses?

This dingaling took the survey while also insulting the survey questions (highlighted row). I ignored these responses because they are noise, not signal.

If this happens to you while doing something similar, don’t sweat it. If you sample enough people, you’ll get stuff like this. Just exclude these obviously bad faith answers and move on.
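Mechanically, the exclusion step is trivial once you’ve made the judgment call. Here’s a minimal Python sketch; the bad_faith flag is a hypothetical field you’d set by hand while reviewing responses, and the response texts are invented:

```python
# Exclude responses flagged as bad faith before coding. The flagging
# itself is a human judgment; the code just filters on the flag.
responses = [
    {"id": 1, "text": "Online courses and conference talks", "bad_faith": False},
    {"id": 2, "text": "What a dumb question", "bad_faith": True},
    {"id": 3, "text": "Reading books, side projects", "bad_faith": False},
]

usable = [r for r in responses if not r["bad_faith"]]
print(len(usable))  # 2
```

The point of keeping the flag in the data (rather than deleting rows) is that your exclusions stay visible and reversible if you second-guess a call later.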

UI matters

When coding responses, seemingly minor UI details (column width and how text wraps in a spreadsheet) really matter. There’s a surprisingly big difference between fighting the UI, popping back and forth between the column you’re trying to read and the column you’re writing codes into, and being able to read and code without that back-and-forth. So take the time to set up the UI to make the process easy.

Look-ahead responses

It’s interesting to me that multiple people seem to have reviewed all the survey questions before answering them. So far it’s 4 out of 56 responses that evidence this behavior.

I don’t think this is a problem; it’s simply interesting to be able to see this.

Tough stuff

Man, some responses are really tough to code! Ex: “Meet with people to discuss future career changes.” Is this networking, informal mentoring, or something else? Or all of the above?

I just give it a few minutes of thought, come up with the best code I can, and move on. I’m not sure it’s worth more effort than that.

Now that I’ve got all my survey responses coded, I’m much closer to being able to write up a first draft of my findings. Woot! I won’t lie: it’s been a slog getting here.

There might be ways of doing research that don’t involve some slog work. But if you’re willing to embrace a bit of slogging, you can generate some absolutely fascinating, unique insights based on proprietary datasets that you collect and own. I think that’s worth some slogging.

If you’re in the US and celebrate Independence Day, then happy Independence Day to you! If you don’t, happy Get A Lot Done Because The Phone and Inbox are Quiet Day! If you’re outside the US and wonder if this country is going crazy and taking the rest of the world to hell with it, you have a valid concern.

I’m kind of kidding with the fatalistic pray for Mojo GIF.

You know that William Gibson quote, right? “The future is here, it’s just not evenly distributed.” The same, I believe, is true of positive change. It’s here and happening and there’s lots of it, it’s just not evenly distributed. Don’t over-focus on the negative and on sources that profit from focusing on the negative.

Happy Wednesday,

-P




  1. If you want to read up on this experiment:
    1. philipmorganconsulting.com/pmc-survey-marketing/
    2. philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
    5. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-qualitative-analysis-of-the-de-biasing-survey/
    6. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-coding-and-counting/
    7. philipmorganconsulting.com/pmc-weekly-insight-its-a-grind-grind/

[PMC Weekly Insight] It’s a grind-grind

By Philip | June 26, 2019

“It’s a grind-grind
It’s a grind
It’s a grind-grind”

— “Bus to Beelzebub”, Soul Coughing

That’s Soul Coughing, singing about the act of coding survey responses.

I’m kidding, but yet… it can be a grind. The coding part, I mean.

I’m continuing to work on my survey marketing experiment1. I’m not done coding the list sample responses.

In particular, the list sample is some real work to code because y’all were much more verbose in your responses than the LinkedIn sample. My hot take on this difference: this is because y’all are much more actively investing in your careers. That’s probably why you’re on this list in the first place! So you simply have more to say on the subject.

Getting help with the grind

What about getting help with the high-effort parts of a research project like I’m conducting? Does that make sense? Is it worth doing? If yes, how would you go about getting help?

Let’s start with the how question and then get to the should you question.

Partner with a researcher

You can partner with someone who has research experience. You’ll most easily find this type of experience in academia.

Graduate-level students are one option. They bring the research rigor while you bring the business context and connections needed for the project. The collaboration may help them with their progress towards a degree, or with their publication needs, or with something else that’s important to them. And the collaboration helps you with your client work or marketing. So there’s a shared incentive in this arrangement.

Professors or departments are another option, though they may be more selective because — at a departmental level — they’d be committing more resources to the project and so need to be more discerning about what they say yes to. At the professor or department level, you may gain worthwhile credibility because you’re involving a greater level of research rigor in your project, along with the brand of the professor or department’s institution.

Outsource

You can find freelance researchers outside of academia. They might be able to help you by taking on high effort work.

By outsourcing parts of your study to a freelance researcher, you can buy back some of your time, but at what cost? Yes, there’s the financial cost, which is fine. But there’s also the cost of you being at least partially removed from parts of the process, and this might cost you insight and confidence in the outcome.

Should you get help with the grind?

This is all good stuff, but you need to evaluate whether the following costs are worth it:

  • Loss of control and flexibility. In embracing a greater level of research rigor, you will be giving up certain forms of control and flexibility. You might be committing to a larger sample size, or a more expensive recruitment process, for example. For a high-profile, important study, this could be worth it. For others, it might not be, especially combined with the potential loss of flexibility. More on this below.
  • Loss of insight. In getting outside help, you’ll necessarily be less involved in all aspects of the study. This might cause you to feel less confident in the insight your results generate. To be clear, it might not cause this outcome, depending on how you handle it. But the risk is there.
  • Collaboration. In so many contexts, teamwork is presented as an unalloyed good, but in some contexts it is a cost that doesn’t pay off. In innovation work, the value of a collaborative team needs to be closely scrutinized. Yes, the team approach might produce value. “Many hands make for light work.” This is true. But also, many hands make for a lack of agility, additional expensive communication overhead, and a potential lack of focus and clarity. So specifically in the context of innovation work, a collaborative approach may be less effective.

In a large, high-profile research project, the benefits of putting together a team are really worth considering. But in a small research project like mine, it’s possible to nearly ruin the whole thing by building an unnecessary team in order to avoid a few hours of unpleasant work. Much better to just do the effing work myself and avoid all those costs that would come from assembling a team.

Mixed methods and qualitative/inductive flexibility

I’ve been reading a freaking fantastic book on research, and now I feel like I have a foundational reading list for you if you’re interested in doing research in a business context. The list:

  1. “Mixed Methods: A short guide to applied mixed methods research”, by Sam Ladner
  2. “How to Measure Anything: Finding the Value of Intangibles in Business”, by Douglas Hubbard

The book I’ve been reading recently is the first on the above list. It’s a short, highly readable, largely jargon-free book. And it’s just excellent. It helps you understand the inherent contradiction — and the resulting power — that comes from blending quantitative and qualitative methods in the same study.

One of the points Dr. Ladner makes is that qualitative methods — which are inductive in nature (generating new theories) rather than deductive (attempting to test the truth of a theory) — are also more agile and usually involve less up-front cost.

Something I believe but can’t prove: academics default to quantitative/deductive approaches rather than qualitative/inductive approaches. This might result in a mismatch if you partner with an academic on your research.

You may begin with an ill-defined question, a strong but vague sense of what you want to learn, or what simply amounts to the wrong question to ask2. So if you use a high-up-front-cost method to answer a question like this, what you really have is an expensive boondoggle. A lean, iterative approach might have been much better, and starting with a small qualitative-dominated study might be a much better match between the maturity of your question and the method you use to get answers. In other words, a small qualitative-dominated study is the better tool to help improve your question.

Wrap-up

All this to say, I’m an advocate for the following process:

  1. Do your best to define a good question for your research project.
  2. Embrace the grunt work you’re about to deal with. Begin with a small-scale study that uses agile, flexible methods. This might mean avoiding getting outside help.
  3. Use the results from your initial small-scale study to refine your question. Also use these results as assets that help you connect and build trust with prospective clients. In other words, use the results of your study as marketing material.
  4. (Possibly) cycle through a few more small-scale iterations as you refine your question.
  5. Only once you’ve proven the value and clarity of your question would you consider scaling it up, and at this point you could benefit quite a lot from partnering with an academic or freelance researcher.

Questions?

Responses to these emails in this series about research have been, like, crickets, except for a few folks. This suggests I’m talking about stuff that’s relevant to only a small portion of my list.

Your thoughts?

-P



Notes

  1. If you want to read up on this experiment:
    1. philipmorganconsulting.com/pmc-survey-marketing/
    2. philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
    5. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-qualitative-analysis-of-the-de-biasing-survey/
    6. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-coding-and-counting/
  2. Douglas Hubbard talks a lot about how common and disastrous this is. It’s a simplification of his position, but a pretty fair one, to say that we are excellent at choosing the wrong things to measure, and some lean iteration helps us to arrive at better things to measure.

[PMC Weekly Insight] Survey marketing: Coding and counting

By Philip | June 19, 2019

My survey marketing experiment1 continues. As a quick reminder, I’m experimenting with using a survey to connect and build trust with a group of people.

Today, we arrive at some further simple data munching. The first round of simple data munching was made up of counts and averages of the quantitative data the survey contains. This was both conceptually and mechanically simple.

The next round of data munching is also conceptually simple, but a bit more mechanically involved. This part involves:

  1. Coding the survey open-ended responses so that they are more standardized and easy to analyze.
  2. Counting the frequency of the resulting coded responses. For example, once I did the coding of the LinkedIn sample, the action of networking showed up quite a lot in the open-ended responses. But how often? And where does this activity show up in a sorted list of all activities mentioned? This is why I want to count the frequency of the coded responses.

Doing this was not difficult, and was all accomplished using Google Sheets2. The coding involved making some judgment calls. For example, when a survey respondent says “I do a lot of experimenting with programming patterns to find alternate solutions”, how do I best code that? That’s the kind of judgment call I’m talking about. (I went with “a-coding-learning” in this case.)

Then I used a pivot table to count the frequency of the coded responses. Then I sorted the results of the pivot table by the count of each response.
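If you wanted to do this code-then-count step outside of a spreadsheet, the whole pipeline fits in a few lines of Python, with collections.Counter playing the role of the pivot table. A rough sketch; the response texts below are invented for illustration, though the codes match the ones discussed in this series:

```python
from collections import Counter

# Hand-assigned codes for each open-ended response. Assigning the code
# is the human judgment-call step; everything after it is mechanical.
coded = [
    ("I take Pluralsight courses", "a-online-course"),
    ("I experiment with programming patterns", "a-coding-learning"),
    ("I read books on architecture", "a-reading-books"),
    ("Meetups and conferences", "a-networking"),
    ("Online tutorials", "a-online-course"),
]

# Count code frequency and list codes in descending order of count --
# the same result a pivot table sorted by count gives you in a sheet.
counts = Counter(code for _, code in coded)
for code, n in counts.most_common():
    print(code, n)
```

For a handful of questions a spreadsheet is plenty, but a script like this becomes handy once you’re re-coding and re-counting repeatedly, because the counting step reruns for free.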

Here’s what I came up with (get ready to do some SCROLLIN’!):

In case it’s not easy to read, the above is: “Coded responses to ‘2. Please list ways you have you spent time and money for career development.'”

The above is: “Coded responses to ‘4. Please list ways you have you spent time and money for developing your technical skills.'”

The above is: “Coded responses to ‘6. Please list ways you have you spent time and money for business or self-employment skills?'”

The above is: “Coded responses to ‘7. Consider your entire career as a self-employed software developer and times you have gotten new opportunities, better projects, or other forms of career improvement. What do you think led to these improvements in your career?'”

I experimented with putting the un-summarized list of coded responses into a word cloud generator, which was fun and something I might include in the report I write up about this, but I think the word cloud obscures more than it reveals when compared to a simple table.

Elevator pitch summary of your research

If I had just 20 seconds or so to summarize what this research is teaching me, I’d say the following:

The self-employed software developers I’ve surveyed “in the wild” invest in career development with a heavy usage of online learning platforms like Pluralsight and IRL events, and they find new or better opportunities primarily through networking and experimenting with their own business. They invest about 300% more in cultivating technical skills than they do in cultivating business skills. In my study, they used the word “marketing” exactly zero times.

I want to point out that the next-to-last sentence is purposefully constructed to be provocative, but its support in my data is questionable. Or rather, “they invest about 300% more in cultivating technical skills than they do in cultivating business skills” is one of several possible framings for the underlying data. Here are a few other possible framings:

  • Super factual: “When asked about how they invest in technical skills, respondents are about three times more verbose than when asked about how they invest in business or self-employment skills.”
  • Less provocative, still attempting to be factual: “I don’t have data on exactly how much time or money self-employed devs invest in technical vs. business skills, but my data does show that they clearly emphasize investing in technical skills over business skills.”
  • Simple, but suggesting a motive that the data might not support: “Self-employed devs seem way more interested in technical skills than business or self-employment skills like marketing.”
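For what it’s worth, the verbosity framing behind these statements is easy to check mechanically: compare average word counts of responses across the two questions. A rough sketch, using invented placeholder responses in place of the actual survey data:

```python
# Rough verbosity check: average words per response for the
# technical-skills question vs. the business-skills question.
# These response strings are invented placeholders.
technical = [
    "Online courses, books, conference talks, side projects, pairing",
    "Pluralsight subscriptions and lots of deliberate practice on real code",
]
business = ["Read one marketing book", "Nothing really"]

def avg_words(responses):
    """Mean word count per response."""
    return sum(len(r.split()) for r in responses) / len(responses)

ratio = avg_words(technical) / avg_words(business)
print(round(ratio, 1))  # 3.0
```

Note that this measures verbosity, not investment; the ratio supports the “three times more verbose” framing but not, by itself, the “300% more investment” one.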

To be clear: I’m not talking about how I’d interpret or frame this data in the body of a report, but rather in a time-compressed situation where being memorable and somewhat provocative is more important than being accurate or nuanced. In that time-compressed situation, impact is created differently than it is in a less time-bound situation.

This all points to the actually difficult part: interpreting this data. Some of the key difficulties include:

  • Attributing intent or motive. What do I make of the fact that my respondents are more verbose in responding to questions about technical skills?
  • Interpreting importance. When I code and summarize the responses to open-ended questions, my list of codes for the “technical skills” and “where does opportunity come from” questions are much longer lists than the list of codes for the “business/self-employment skills” question. Maybe this does not mean that respondents actually emphasize the technical skills. Maybe it means they get lower ROI from that investment, so there’s more of that investment but less result from it. Maybe it means it simply requires more words to describe how they invest in tech skills, but if we look at the investment in terms of ROI or something else, a different picture would emerge.

Interviews with some of my respondents would help clarify the questions above. In fact, I think it would be irresponsible of me to draw firm conclusions from this data without conducting around 5 interviews to better understand my respondents’ motives and thinking.

Next week: I’ll have had time to code and summarize the responses to the other sample, which is folks from this very email list, and so I’ll be able to compare the two samples.

Questions on this stuff? I’d love to hear ’em. Hit REPLY 🙂

-P




  1. If you want to read up on this experiment:
    1. https://philipmorganconsulting.com/pmc-survey-marketing/
    2. https://philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. https://philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. https://philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
    5. https://philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-qualitative-analysis-of-the-de-biasing-survey/
  2. There are better tools, like Delve, for doing this, but Google Sheets is good enough for my needs here.

[PMC Weekly Insight] Survey marketing: Qualitative analysis of the de-biasing survey

By Philip | June 11, 2019

My survey marketing experiment [1] continues. As a quick reminder, I’m experimenting with using a survey to connect and build trust with a group of people.

It’s been a few weeks since I’ve updated you on this [2], so a quick recap seems helpful. As I previously wrote:

The three things that must simultaneously be true for survey marketing to work:

1. Question my audience can answer about themselves

2. Question is one the audience is curious about RE: their peers/the industry

3. Question is one I am very interested in the answer to

This confluence of factors makes it possible for me to serve my audience by answering a question both they and I care about, then sharing the answer back with them. The survey that generates the answer also creates the permission mechanism for sharing it.

I started this project with a very open-ended survey that I am referring to as my “de-biasing survey”. The purpose of this survey is to align my thinking with how my sample group thinks about the question of investing in their career.

I distributed this survey to a sample I recruited from LinkedIn using scrappy, inexpensive methods. I also forked the survey and sent the fork to a sample recruited from my email list.

I shared some quantitative results here, and in that data there’s a pretty clear difference between my LinkedIn sample and my email list sample, with the email list sample seemingly much more interested in and active in career development. And younger, too, you good-looking lot!

That pretty much brings us current. Now to dig into the qualitative data this research has yielded.

The qualitative data

For this project, I’m thinking of the responses to my open-ended questions as qualitative data. It’s not as rich or nuanced a qualitative dataset as realtime audio or video or IRL interviews would yield, but it’s still useful because it adds context to the quantitative data.

Here’s an example of a few responses to one of my open-ended questions. The responses come from the LinkedIn sample, and the question was:

Consider your entire career as a self-employed software developer and times you have gotten new opportunities, better projects, or other forms of career improvement. What do you think led to these improvements in your career?

  • Coincidence. It’s much harder now because the applicant pool is overloaded.
  • taking many shots
  • capacity to focus and deal with problems
  • How to get more customers
  • “Being curious and open. I tell people about the things I’m interested in and the projects I hack together in my own time. Every time that has come up in a ‘9-to-5’ work environment it has led to me getting more money and interesting conversations (e.g. would you like to work here, would you like this project)”
  • Networking & experience.
  • I am not a full-time software developer. I started because my place of work needed certain applications not commercially available.
  • I haven’t been very successful in finding good projects.
  • longevity
  • Being friendly, honest, hard-working and producing quality results.

This list is the first 10 responses to that question, in the chronological order they came in. You really get a sense of the range here, from quick 1-word responses–some seeming to be non sequiturs or misreadings of the question–to lengthier, seemingly more thoughtful responses. This is totally normal in the context of a survey like this one.

Bias alert!

At this point, I’m on the lookout for a subtle bias in myself: the temptation to discount the shorter responses in some way. To assume they’re less valuable, less thoughtful, or less meaningful to my question. Remember, when you are starting with no data, the marginal value of additional data is huge until you get to about 30 data points; then it starts tapering off pretty quickly. I’m referencing Douglas Hubbard here, who has said that beyond about 30 samples you need to quadruple the sample size to reduce error–which we can think of as uncertainty–by half. I can’t find online the nifty graph Douglas shared in a recent webinar for the Military Operations Research Society, but the graph below, from a different source, conveys the same idea. Notice how the curve flattens out pretty quickly around the 30-sample mark:

This shows the decreasing marginal value of additional data. The biggest gains happen between 0 and ~30 samples.
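Hubbard’s quadrupling rule falls out of basic sampling statistics: the standard error of a sample mean shrinks as 1/√n, so you must quadruple n to halve the remaining error, and the biggest absolute reductions happen in the first ~30 samples. Here’s a quick generic sketch (not Hubbard’s actual graph or data):

```python
import math

def standard_error(n, sigma=1.0):
    """Standard error of a sample mean: shrinks as 1 / sqrt(n)."""
    return sigma / math.sqrt(n)

for n in (1, 5, 10, 30, 120, 480):
    print(f"n={n:>3}  SE={standard_error(n):.3f}")

# Quadrupling the sample size halves the error:
print(standard_error(30) / standard_error(120))  # -> 2.0
```

Most of the error reduction happens before n≈30; after that, each halving of uncertainty costs 4x the data.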

Anyway! I think it would be a mistake for me to discount the value of any of my responses here, even if the responses are super short or don’t make a lot of grammatical sense. They’re still data, and they’re still moving me from massive uncertainty to greatly-reduced uncertainty.

Cleaning the qual data

In order to address this bias in myself, and to make this qual data more useful, I need to normalize the responses to the open-ended questions. This process is called coding the responses. I’ll do it right here, as an example, for a few of the responses above.

  1. First example:
    1. Actual response: “Coincidence. It’s much harder now because the applicant pool is overloaded.”
    2. Coded to: “s-chance, s-competition”
  2. Second example:
    1. Actual response: “taking many shots”
    2. Coded to: “a-volume”
  3. Third example:
    1. Actual response: “capacity to focus and deal with problems”
    2. Coded to: “a-problemsolving”

You’ll notice each of my codes begins with a letter, which is a shorthand for one of two things: “a-” means action/activity, and “s-” means sentiment, or a sort of feeling/worldview being expressed. This allows me to sort and filter more easily, and it’s a meaningful distinction here.

Any open-ended tagging or categorization system, such as my coding system here, presents a challenge: you can invent an infinite number of categories and become arbitrarily granular in your categorization. This is why almost every time I’ve set up a CRM for myself, I’ve ended up abandoning it. It collapses under the weight of its own complexity which, ironically, I created with a too-granular category/tagging system!

So… be careful with your coding system. 🙂 You want it to be expressive, not concealing too much granularity and nuance, but you also want it to be useful, which means avoiding excessive complexity, detail, and granularity.

What I’ll be doing next with this research is coding the qualitative answers, and I’ll do so in an iterative way. I’ll read through each column of responses to open-ended questions and put my codes in an adjacent column of the same spreadsheet: if column C contains responses to an open-ended question, I’ll add a column D for my codes, and so on. Theoretically an RDBMS–or perhaps Airtable–would be better here, but I’m sticking with a spreadsheet at this point because it’s good enough.

For each new action or sentiment I find in the qual data, I’ll create a new code. Then, I’ll pull out a list of all the codes and look for opportunities to simplify the coding schema by collapsing sufficiently similar codes into one, and then search for the old codes in my spreadsheet and replace them with the new codes based on the now-simplified schema. This is where the process becomes more art than science.
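As a minimal sketch of that collapse-and-replace step (the merge decisions below are hypothetical examples, not my actual schema), assuming the coded column has been pulled out of the spreadsheet into a list:

```python
# Hypothetical example: collapse sufficiently similar codes into one canonical
# code, then rewrite each cell of the coded column under the simplified schema.
merge_map = {
    "a-referrals": "a-networking",   # fold 'referrals' into 'networking'
    "a-wordofmouth": "a-networking",
    "s-luck": "s-chance",            # treat 'luck' and 'chance' as one sentiment
}

def simplify(cell):
    """Apply the merge map to one cell's comma-separated codes, de-duplicating."""
    out = []
    for code in (c.strip() for c in cell.split(",")):
        canonical = merge_map.get(code, code)
        if canonical not in out:
            out.append(canonical)
    return ", ".join(out)

coded_column = ["s-chance, s-competition", "a-volume", "a-referrals, a-wordofmouth"]
print([simplify(cell) for cell in coded_column])
# -> ['s-chance, s-competition', 'a-volume', 'a-networking']
```

The art-not-science part is deciding what goes into the merge map; the search-and-replace itself is mechanical.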

Next steps

Here’s my list of next steps for this research project:

  • Code the open-ended responses into one of two categories:
    • Crisply defined activities (somewhat objective on my end. More normalizing than interpreting.)
    • Sentiments (quite subjective on my end. More interpreting than normalizing. This is where I can skew objective or skew towards “rack the shotgun style filtering” where I apply my own worldview.)
      • Interesting to note my personal emotional reaction to some of the sentiments expressed. Judgey! 🙁
  • Analyze the coded responses:
    • Word cloud to facilitate easy, “cotton candy” sharing of results like those shitty infographics everywhere online do. 🙂
    • Simple quant analysis (“what % of respondents list this activity/sentiment?”)
    • Really, really think about what the patterns I see with the above analysis methods might be saying. Is there a story in the data?
  • Compare the LinkedIn sample vs. the list sample
  • And of course, write up my findings into a report to share back with those who left their email address for me.
  • Decide whether to extend or pivot based on what I’ve learned.
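The “simple quant analysis” step above is just a frequency count over the coded responses. A toy sketch with made-up codes (each inner list is one respondent’s codes):

```python
from collections import Counter

# Hypothetical coded responses; one inner list per respondent.
coded_responses = [
    ["s-chance", "s-competition"],
    ["a-volume"],
    ["a-problemsolving"],
    ["a-networking", "s-chance"],
]

n = len(coded_responses)
# Count each code once per respondent (set() guards against duplicates in a cell).
counts = Counter(code for resp in coded_responses for code in set(resp))
for code, k in counts.most_common():
    print(f"{code}: {k}/{n} respondents ({k / n:.0%})")
```

With real data, `coded_responses` would come straight from the codes column of the spreadsheet.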

Interesting derivative questions

Thus far, this research–even though I’m nowhere near “done”–has raised some very interesting questions for me:

  • Wow, the responses from my email list sample are so immediately strikingly different. “Better” in my view. What filters for these kinds of people? What “racks the shotgun” for them? Where do they hang out in an already-filtered group?
  • Do I create two reports–one per sample group–or just one?
  • Was the structure of my survey questions redundant? I got a few comments to the effect that it was. I saw the later survey questions as drill-downs going deeper on earlier questions, but a few participants saw them as redundant. I need to be sensitive to this if I design a second survey to go bigger with this research.

Wrap-up

I think I’ll use the next few free Weekly Insight articles to update you on the continuation of this research and, incidentally, give myself helpful deadlines to keep it going. 🙂

Looking forward to sharing my method and what I learn with you.

-P



  1. If you want to read up on this experiment:
    1. https://philipmorganconsulting.com/pmc-survey-marketing/
    2. https://philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. https://philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. https://philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
  2. Moving is hella disruptive! ๐Ÿ™‚ Cheryl and I are partially moved into our long-term rental, waiting for the moving company to deliver our furniture and the possessions we didn’t bring with us by car.

[PMC] The Opportunity Early Warning System

By Philip | June 7, 2019

List member Ben pointed me to this, and it’s quite interesting. It’s a short piece from AngelList on the opportunity in the funeral business and how some startups are moving on that opportunity.

It causes one to think: what creates opportunity in the first place?

Not what created this specific opportunity, but what creates opportunity in general?

In the case of the ~$20 billion funeral market, two things had to happen: 1) broad changes in customer sentiment and 2) changes in legislation. It’s possible the former drove the latter, but even if not, both had to happen in order to create the kind of opportunity we’re seeing today.

That opportunity, by the way, can be very simply thought of as “green burials”: burials done with less (or no) preservation and durable hardware.

I’m getting very speculative here, but it’s interesting to try to imagine how one might have seen this opportunity coming and moved on it just early enough to build up a leading market position, but not too early, because that often doesn’t work out well. To build a model of how one might spot opportunity coming, let’s first think about market awareness.

The more aware a market is, the less education cost you’ll bear.

Circa 2006, the consumer market was largely unaware of the value of a $600 pocket-sized computer that could also make phone calls. This meant that Apple and other smartphone pioneers had a lot of expensive education and awareness-building work to do. You could think of this as an investment that lowered their future cost of sale, as a cost that early movers have to bear in order to earn their first-mover advantage, or simply as a cost that’s unavoidably bundled with innovation. But what you can’t do is ignore this cost, because it can be significant enough to kill innovations that arrive too early. The short 5-year lifespan of the Apple Newton MessagePad was certainly not a single-factor failure, but the state of awareness of the circa-1993 market certainly played a role. Even at an inflation-adjusted $1129 price, it would have been extremely expensive to educate the market on the value of the Newton and acquire customers beyond those who already “got it”.

So this first part of our model is market awareness. This is critical.

The second part of our model for spotting opportunity coming at the right time is… well, I don’t have a great term for it, so let’s think of it as adjacency or context.

Sometimes new opportunity in Industry X is created by changes that originated in… Industry X. And sometimes, it’s not.

Sometimes new opportunity in Industry X is created by changes that originated outside Industry X. This is what I mean by adjacency/context. Sometimes the changes that create opportunity originate elsewhere, in either an adjacent industry or in the larger cultural context.

The reading I’ve done on the opportunity in the burial business suggests that it’s broader customer sentiment about “green” issues that’s driving the current opportunity in the burials business. This is a change in the larger cultural context. It affects not just the burials business but also retail, auto, CPG, and a multitude of other sectors. And this change did not originate within the burials business. It came from the outside.

A quick recap of our model:

  1. Market awareness: Increases in customer awareness (awareness of the value of a new innovation) drive down the cost of selling that innovation.
  2. Adjacency/Context: Change that creates opportunity can come from within an industry, but–quite importantly–it can also come from adjacent industries or the larger cultural context.
  3. Curiosity: Your ongoing curiosity drives a recurring process of inquiry into the above 2 factors.

Back to our larger thought experiment here: How might one have seen this funeral business opportunity coming and moved on it just early enough to build up a leading market position, but not too early?

In a general sense, and following our model here, I think you might have paid attention to broad changes in the culture and repeatedly asked of each change: “I wonder how this might affect my customers?”

It might seem like I’ve made an argument against specialization here, but I don’t see it that way, because specialization is not myopia; it’s strategic focus. This kind of focus doesn’t magically deprive you of the ability to pay attention to the big picture.

In fact, specialization makes this “opportunity early warning system” work better. Specialization gives you the ability to predict or imagine how external changes could affect your area of specialization. Without that specialized expertise, you might see the external changes just fine, but you won’t fully grasp the implications of those changes for your area of focus (because you haven’t chosen and pursued a single area of focus!). [1]

This process of repeated inquiry, for our innovative funeral business person, might have looked something like this:

  1. “Huh, seems like every third article I read about these days is about the green movement.”
  2. “What would the funeral business look like if our customers insisted we were ‘green’ also?”
  3. “I suppose they might want fewer chemicals and less hardware involved in the process. I wonder what that might look like?”
  4. “OK, I’ve taken this as far as I can inside my head. I need to talk to customers and get their perspective on this. I especially need to understand their state of awareness and whether they need to be educated about the value of a burial with fewer chemicals and less hardware, or whether they just ‘get it’ already because of the broader cultural green movement.”

Is this process economically efficient? Not in the short term. You might burn through quite a few non-starter ideas because the adjacent/cultural change is insufficiently relevant to your clients, or because the market awareness isn’t high enough.

But in the long term, I can’t imagine running a great expertise-driven business without regularly investing in this process. Because this kind of thing is what keeps you creating new, exceptional value over the long haul.

-P


1: I didn’t have time here to go into the notion that as a more profitable specialist, you can simply afford more time to pay attention to adjacent/contextual changes. In other words, you’re not working so damn much, so you can read and learn and inquire more broadly.



[PMC] Social risk, and dating models

By Philip | June 6, 2019

Let’s imagine you’re a person interested in dating someone who works as a model.

Quora has advice for you. Here’s one of the better answers:

(source: www.quora.com/How-does-one-date-a-model, please note there’s stuff at that link some might find sexist or otherwise objectionable.)

I spent a good long time trying to figure out what part of the list above doesn’t apply directly to lead generation for your services and couldn’t come up with anything. Here’s that list, adapted to answer the question: “How do you generate leads you find more desirable for some reason? Better clients, more valuable work, advisory instead of implementation, etc.”

  1. Find employment in a business that deals with the kind of leads you want to generate, intentionally build up a network (access) and credibility (trust), and then move into generating leads from that network as an indie.
  2. Hang out in places where the leads you want to attract socialize with each other. Your “hanging out” will be intentional and oriented around helpful service.
  3. Become close friends with the kind of person you want to have as a lead and have them introduce you to friends and co-workers.
  4. Become a celebrity and then generate leads through the fame and attention brought on by that. (This is the fundamental idea behind content marketing and thought leadership.)
  5. Become the kind of person you want to attract as leads, and use your insider status and insight into this community to attract leads from within the community.

Notice what’s not on this list: place an ad in Craigslist or set up a profile on a dating site and troll (in the fishing sense, not in the internet hater sense) for the kind of person you’re looking for.

Ads for sure can work. But their effectiveness is limited compared to other approaches that do more to build trust and social access.

The bottom line: Ads limit your risk exposure to financial risk only.

Lead generation approaches that include an element of social risk work better for attracting more desirable clients–or advisory services leads–because they build trust and social access.

-P


[PMC] Pareto, Power Laws, Patterns, and Onions

By Philip | June 5, 2019

The Pareto Principle article on Wikipedia is a really valuable read.

Unless you’ve been living under a rock and not reading the hundreds (thousands, perhaps?) of articles online that over-apply the idea of the Pareto Principle, you already get the main idea.

20% of something is responsible for 80% of something else.

That’s the core idea, expressed in the most general possible terms.

Here’s the actual principle as described by Wikipedia:

“The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes.”

The critical caveat there is: “for many events”. Not for all events. Not for all phenomena.

That said, the Pareto Principle is impressively explanatory. It fits lots of phenomena, and in so doing provides a reassuringly simple sense of order to the world.

It, however, does not explain why 80% of the effects come from 20% of the causes. And it does not predict causation; meaning it does not predict which causes will have that outsized contribution to effects.

The Pareto Principle is not truly universal either. You can’t just pick any pair of cause and effect and say 20% of this cause will be responsible for 80% of this other effect. I mean you could say that, but you would often be wrong. 🙂

If you read the Wikipedia article (and I think you should), you’ll see lots of examples of situations where the Pareto Principle does and does not apply. Here is one interesting place where it does not:

“However, it is important to note that while there have been associations of such with meritocracy, the principle should not be confused with farther reaching implications. As Alessandro Pluchino at the University of Catania in Italy points out, other attributes do not necessarily correlate. Using talent as an example, he and other researchers state, ‘The maximum success never coincides with the maximum talent, and vice-versa’, and that such factors are the result of chance.”

Digging deeper into Pluchino’s research is fascinating; this helpful summary of their research from the MIT Technology Review is a good place to start.


Quick aside: it was amusingly ironic to me that the above-linked MIT Tech Review article was published in early 2018, around the same time that one of the biggest recent “black swan” events involving complex systems was kicking the crap out of the world economy. The Pluchino paper’s authors talk about how luck is the determining factor in success, and they were doing so at a time when something like luck was an outsized factor in the health of the world economy, but in a negative–rather than positive–direction.


From the MIT Tech Review summary:

“The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest. And this has significant implications for the way societies can optimize the returns they get for investments in everything from business to science.”

Just like with the Pareto principle, there’s a certain sense of comfort we get from ideas that are simple to express, but in this case the explanation is: “They’re wealthy because they’re lucky”, not “They’re getting outsized results because they know which 20% of inputs to focus on”. Both ideas suggest a simple correlation between inputs and results.

Here’s the actual paper Pluchino and his team published, BTW. The MIT Tech Review summary (necessarily, I imagine) omits lots of juicy, interesting detail from the source paper. For example, the following:

“In fact, from the micro point of view, following the dynamical rules of the TvL model, a talented individual has a greater a priori probability to reach a high level of success than a moderately gifted one, since she has a greater ability to grasp any opportunity will come. Of course, luck has to help her in yielding those opportunities. Therefore, from the point of view of a single individual, we should therefore conclude that, being impossible (by definition) to control the occurrence of lucky events, the best strategy to increase the probability of success (at any talent level) is to broaden the personal activity, the production of ideas, the communication with other people, seeking for diversity and mutual enrichment. In other words, to be an open-minded person, ready to be in contact with others, exposes to the highest probability of lucky events (to be exploited by means of the personal talent).”

There’s a whole series of articles waiting to be written elaborating on just that one paragraph! And there are echoes of the widely distributed quote: “The harder I work, the luckier I get.”

What’s our bottom line here?

I hope you’re a little less quick to apply the Pareto Principle to any ole situation, and more thoughtful about it when you do. Going forward, I certainly will be.

There’s a deeper question I’m grappling with here:

If you’re curious what contributes to success, who do you model?

I mean sure, you can just throw up your hands and say: “Luck matters. So…. good luck! See ya!”

But if you’re in the business of advising clients, surely you’re interested in more systematic ways of helping them improve their condition, right? Surely they’d like to get a little more for their money than an elaborately fashioned statement of: “Good luck, kid!”.

So that raises the question: who do you study and attempt to model as you’re looking for the patterns that contribute to success?

I don’t mean who specifically do you look at to understand what contributes to success. I mean where in the distribution of outcomes do you look?

Let’s think in Pareto terms, and let’s think about 3 layers of an “onion”:

  • In the outer layer of the “onion”, 80% of the effects come from 20% of the causes. Ex: 80% of dollars of global wealth come from 20% of people.
  • In the middle layer of this onion, we apply the same 80/20 distribution to the top 20% from the previous layer of the onion. Ex: 64% of global wealth comes from 4% of people.
  • In the innermost layer of this onion, we go again with the 80/20 applied to the previous layer and get: 51.2% of the global wealth comes from 0.8% of the people (and in this–roughly half the wealth from just under 1% of people–we hear echoes of Occupy Wall Street’s “the 1%”).
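Peeling the onion is one multiplication per layer: each step keeps 80% of the previous layer’s effects and 20% of its causes. A quick sketch of the arithmetic:

```python
# Iteratively apply the 80/20 split: each layer keeps 80% of the previous
# layer's effects and 20% of its causes.
effects, causes = 1.0, 1.0
for layer in (1, 2, 3):
    effects *= 0.80
    causes *= 0.20
    print(f"Layer {layer}: {effects:.1%} of effects from {causes:.1%} of causes")
# Layer 1: 80.0% of effects from 20.0% of causes
# Layer 2: 64.0% of effects from 4.0% of causes
# Layer 3: 51.2% of effects from 0.8% of causes
```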

If you want to understand what contributes to success in some endeavor, which layer of this onion do you look at to discover patterns?

It’s tempting to look at the innermost layer of the onion. It’s tempting because this is the smallest group, so, questions of access and transparency aside, it should be the easiest to study. It’s also the group with the most emotionally impressive ratio between inputs and outcomes. After all, these people have the same 24 hours in a day and many of the same fundamental constraints we all do. If they need to fly from New York to London for business, they can’t do it 10 times faster on a $50 million Learjet than someone who buys a $500 coach ticket on Norwegian Air. Other aspects of their experience might be 10 or 100 times better, but they won’t get there 10x faster. Yet despite sharing many constraints with the rest of us, they get such outsized results!

This innermost layer of the Pareto Onion is a seductive place to look for patterns that contribute to success, but I am unconvinced that it’s the best place for such lessons.

My hunch is that, like Pluchino’s team claims, idiosyncratic factors like luck–but not limited to luck–play an outsized role in what gets you to that inner layer of the Pareto Onion.

I am respectful of the role that idiosyncratic factors play in your life and career. And like the Pluchino paper says, you can (and should!) do things that increase the chances that these idiosyncratic factors work in your favor.

But I’m more interested in understanding systematic factors that contribute to success for small expertise-driven businesses. And that means I’m not looking at the inner layer of the Pareto Onion. In fact, I’m not sure I’m even looking at any layer of this onion. In other words, the best lessons for systematic success might not lie in the top 20% at all!

I hope this is at least a little bit shocking to you.

I can’t prove this yet. It’s just a hunch.

But stay tuned as I follow this thread. I think it’s important.


On a personal note, is anyone else psyched that Apple is adding mouse support to iPadOS 13 or whatever they’re calling it? When my wife and I hit the road 2 months ago I committed to using an iPad as my primary work computer (and quickly upgraded to the larger size iPad Pro).

Working primarily on an iPad has added productive friction that I’ve appreciated. And I’ve been surprised how quickly I’ve habituated to touching the screen. Even when I pull out my laptop for something I can only do there, I find myself instinctively reaching to touch the screen.

And yet, I’ll be super happy to have mouse support on the upgraded iPad this fall. I hope it supports this Swiftpoint GT mouse I have.

-P

[PMC] Survey marketing: initial data from the de-biasing survey

By Philip | May 30, 2019

(Readin’ time: 3m 36s)

My survey marketing experiment [1] continues!

Here’s the initial “de-biasing” survey summary data from my two samples.

The LinkedIn Sample

My LinkedIn sample was a convenience opt-in sample, recruited using a Sales Navigator search for:

  • Self-employed
  • In United States
  • More than 10 years in current position
  • Keyword “software developer” in LI profile

I invited these folks to connect with me, and in my LI connection request message, I said the following:

Hi @firstname, my name is Philip. I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/4B6uBroa

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

The automation tool I used–LinkedProspect–sent 1537 connection requests. 364 (23.68%) of them accepted, 38 (10.44%) of those started the survey, and 22 (57.9%) of those completed the survey, which means 1.43% of the sample I attempted to recruit from LinkedIn actually completed the survey.
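If you want to sanity-check that funnel arithmetic, here’s a minimal Python sketch using the counts reported above (each stage’s rate uses the previous stage as its denominator, except the last line, which is the end-to-end conversion rate):

```python
# Funnel math for the LinkedIn sample, using the counts reported above.
sent = 1537      # connection requests sent
accepted = 364   # connection requests accepted
started = 38     # surveys started
completed = 22   # surveys completed

print(f"accepted / sent:     {accepted / sent:.2%}")      # ~23.68%
print(f"started / accepted:  {started / accepted:.2%}")   # ~10.44%
print(f"completed / started: {completed / started:.2%}")  # ~57.89%
print(f"completed / sent:    {completed / sent:.2%}")     # ~1.43%
```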

The list sample

I also recruited a sample from my email list. I sent this sample to a fork of the survey so I could compare the two samples.

This sample was also a convenience opt-in sample. Across three daily emails, I invited this sample using the following text:

Quick tophat: I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/fNWSDcfj

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

An average of 1898 people received this invitation 3 times. There’s no way for me to know how many actually saw, noticed, or really thought about this invitation, but let’s use the same funnel math as with the LinkedIn sample. So from 1898 “connection requests”, 56 (2.96%) started the survey and 33 (58.93% of starters) completed it, which means that 1.74% of the sample I attempted to recruit from my email list actually completed the survey.

Summary data

Here’s the first picture of the data I’ve collected, looking at it through a quantitative lens:

The next step is to dig through the responses to the open-ended questions and look for patterns there, but even at this summary level there seems to be a distinct difference between y’all (if you’re reading this in your inbox) and my LinkedIn sample.

This is what I would have guessed. An email list that’s about marketing and expertise is likely to filter for self-employed people who are interested in investing in their career. This is a biased sample.

Having a LinkedIn profile and responding to a connection request there is also a filtering mechanism, and it also gives me a biased sample.

These biases are totally OK! I’m trying to understand how self-employed devs think about investing in their career, but implicit in that question is an assumption: I only really care about the subset of this group that I can actually reach! The bias in my sample matches the bias I’d experience everywhere else in my business, so it isn’t going to skew my results in a way that undermines my decision-making (and yes, I do have a decision I need to make about how I message what I do).

How statistically valid is my data?

Wrong question!

Before I started this project, I had zero quantitative data on my question. I was operating on intuition and the sense of the market I gained through lots of conversations and casual, qualitative research. That’s not low-value information at all. But! It’s not quantitative.

So even a small amount of not-exactly-rigorously-collected quantitative data represents a significant increase in what I know. And that’s super valuable.

Next steps on this project, which I’ll report back to you on as I complete them:

  1. Analyze the qualitative data I’ve collected.
  2. (Likely, but not 100% sure) Ask the respondents who provided an email address for a brief interview. Aim for 5 such interviews, with the goal of going deeper on the dataset and getting even more qualitative data.
  3. Write and deliver the promised report.
  4. Decide if I want to invest in writing the “real” survey and recruiting a broader sample, or instead pivot to an improved question and start the process over again. [2]

-P


Notes

1: If you want to read up on this experiment:

  1. https://philipmorganconsulting.com/pmc-survey-marketing/
  2. https://philipmorganconsulting.com/pmc-the-de-biasing-survey/
  3. https://philipmorganconsulting.com/pmc-survey-marketing-recruitment/

2: I’m working hard to resist having a strong, emotional reaction to this initial collection of data. One hot-take that wouldn’t be totally unreasonable would be: “Crap!! Only 36% of my LinkedIn survey respondents even care about the superset of services that my business lives in!!” But 36% of a massive market is… still massive. So I’m working to temper my initial emotional reaction to the data so that I can take an objective look at the qualitative data that accompanies this quant data.

[PMC] Survey marketing recruitment

By Philip | May 15, 2019

Quick tophat: I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/fNWSDcfj

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

(Readin’ time: 8m 54s)

“If a survey falls in the forest and no one fills it out, does the survey produce data?” — The Senile Prophet

Let’s talk about recruitment. This is the fancy word for finding folks who might respond to my survey.

Recently I hired Heather Creek, a PhD and internal surveys consultant at the Pew Charitable Trust, to give a presentation to TEI on surveying. It was fantastically informative.

We learned from Heather that there are 8 distinctly different kinds of sample populations you might survey, grouped into 3 categories:

  • Probability samples
    • Random general public
    • Random from a curated list (like a customer list)
    • Intercept
    • Panel
  • Convenience or opt-in samples
    • Non-probability panel
    • Convenience intercept
    • Snowball
  • Census
    • Every member of a population

For my research project on how self-employed devs invest in their career, I’m recruiting a convenience intercept sample.

I asked the folks at Qualtrics, which has a very robust survey and survey analytics platform and a research services department, what they would charge to do this kind of recruiting. I believe they would recruit from a bunch of panels they have access to, meaning the sample they recruit for me might be a probability panel which is considered a more rigorous type of sample.

They quoted me something like $40 per recruit, and said the cost per recruit ranges from small (like $10/per) to over $100 per recruit for people who are hard to find or have very specific characteristics.

Is one sampling method better than the others? Are the more rigorous (probabilistic and census) sampling methods more desirable? You can’t answer that without knowing what your research question and other parameters are.

For my purposes (reducing uncertainty in a business context), my less rigorous and less probabilistic method is fine. But my approach would not work for other research projects with different questions being asked or greater uncertainty reduction needs.

Chances are, if you’re doing research to benefit your own business or help a client make better decisions or help all your future clients make better decisions, you can assemble a sample using less rigorous methods just like I am. Your question is likely to be very focused (and if it’s not, that’s a problem you need to fix first before surveying or interviewing) and you can recruit from a small but pretty homogenous group to assemble your sample. Both of these things help you produce more impactful findings.

To expand on this, what question you choose is certainly the most impactful variable in this whole process! No amount of rigor in your survey design, recruitment, and sampling methodology can compensate for asking the wrong question.

Last week I sat in on a webinar hosted by the Military Operations Research Society, where Douglas Hubbard gave a really fascinating almost-2-hour-long presentation. Douglas offered numerous examples of asking the wrong question. He made the general claim–and I have no reason to doubt this–that in most business situations, the economic value of measuring a variable is usually inversely proportional to the measurement attention it typically gets.

In other words, we are reliably bad at choosing what things to study (or reliably good at misplacing our investigative effort)! Here’s one example he gave, specific to IT projects:

  1. Initial cost
  2. Long-term costs
  3. Cost saving benefit other than labor productivity
  4. Labor productivity
  5. Revenue enhancement
  6. Technology adoption rate
  7. Project completion

This list is ordered from lowest to highest information value, meaning the value of knowing #1 on this list is significantly lower than the value of knowing #7. So, want to guess what most folks will spend the most effort on measuring?

You guessed it. Not #7. The effort is focused on the first few items on this list, meaning the effort is focused on the lowest impact stuff.

I tell you this to contextualize the discussion of recruiting a sample for my survey.

Me asking the right question is so dramatically much more important than using highly rigorous methods downstream in my research.

We generally use the phrase “good enough for government work” in a somewhat pejorative way, but it fits here in a more neutral way. In other words, there’s no need to strive for extremely high levels of rigor in the context of research for business purposes. Neither should we be sloppy. Horses for courses.

How I’m recruiting

I’m recruiting the sample for my de-biasing survey in two ways.

The first is a method I learned from Ari Zelmanow. This approach uses LinkedIn connection requests to ask a group of people to fill out my survey. I honestly didn’t think this would work at all, much less work well.

Here are some numbers I captured mid-project for a recent recruitment project:

  • Connection requests sent: 155
  • Connections accepted: 55 (35.48% of connection requests)
  • Surveys completed: 17 (10.97% of connection requests)

If you have some experience recruiting for surveys, you know those numbers are very, very good. Like, eyebrow-raising good.

I can’t take credit for this; I was simply running Ari’s playbook here.

I will note that the numbers I’m seeing for my current project (understanding how self-employed devs invest in their career) are much less impressive. 🙂

  • Connection requests sent: 1537
  • Connections accepted: 21.28% of connection requests
  • Surveys completed: 20 (1.301%)

Those last two numbers will climb a bit over the next week or two, but you can see they’re much lower than the previous set (and unlikely to ever close the 10x gap in performance). The previous recruitment outreach was for a client project investigating developer sentiment around a specific platform. Again, the question you’re investigating matters. A lot!

The LinkedIn connection message I’m using is not just your standard “Hi, let’s connect” message. Instead, it’s a message that explains the purpose of my research and asks folks to fill out my survey. So the connection message is not really about connecting on LinkedIn, it’s about recruiting for my survey.

You’ve seen the message before. It’s almost identical to the message at the top of this and yesterday’s email:

Hi @firstname. I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/fNWSDcfj

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

The @firstname field is a variable that’s personalized at runtime by LinkedProspect, the tool I’m using to automate my outreach. I like LinkedProspect over Dux-Soup because LinkedProspect runs in the cloud and doesn’t require me to babysit the connection automation process the way Dux-Soup does.

I’m identifying candidates for my recruitment pool using a LinkedIn Sales Navigator search. I’ve put in time making sure the results of this search are as relevant as possible to the survey. This is another important variable in this process. If I define a pool of candidates that doesn’t find my research question relevant or interesting, it will affect my results.

In fact, if I wasn’t able to continue this research project for some reason, I already have data that supports a low-confidence conclusion: the question of career development or investing in one’s own career is less interesting or relevant to self-employed software developers than a question about a technology platform is to developers with experience in that platform. Even if I couldn’t look at the results of the survey for some reason, I could still reasonably (again, with low confidence) draw this conclusion based on the response rate I’m seeing.

As you also know, I’m recruiting for this survey from a second pool of candidates. What I haven’t mentioned yet is that I’m pointing this second pool of candidates to a fork of the survey. It’s identical: same questions, same delivery platform (GetFeedback). But I forked the survey so I could compare the two candidate pools and hopefully answer this question: is my email list different from self-employed devs who have LinkedIn profiles?

In more colloquial terms: I suspect y’all are special. Will the survey data support this belief?

You’ll remember that I’m using a convenience intercept sampling method. This is not a probabilistic sampling method, which means… probably not much in the context of this research. But a more rigorous research project would suffer from this less rigorous recruiting method.

Let’s look at how my email list as a group is performing in terms of response to my survey. I had to think a bit about the question of which number to use as my “top of funnel” number. Is it the total number of people on my list who are sent these daily emails, or is it that number multiplied by my global average 27.36% open rate?

Well, for the LinkedIn outreach I’m using the total number of people I reached out to, so for a fair comparison I should use the total number of people each of the last two emails got sent out to.

  • Email addresses exposed to my survey request: 1,906
  • Surveys completed: 23 (1.207%)

Again, that last number will climb over time as I repeat my CTA to take the survey for the rest of this week. It’s surprisingly close to my LinkedIn recruitment numbers with one notable difference: It’s taken me about 2 weeks to get 20 responses from the LinkedIn candidate pool. It took me 2 days to get 23 responses from my email list.
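To make that denominator choice concrete, here’s a quick sketch using the counts and the 27.36% open rate quoted above; the open-rate-adjusted figure is shown only to illustrate why it wouldn’t be comparable to the LinkedIn funnel, which also uses total outreach as its denominator:

```python
# Two candidate "top of funnel" denominators for the email-list sample.
recipients = 1906   # total addresses the survey invitation went to
completed = 23      # surveys completed so far
open_rate = 0.2736  # global average open rate mentioned above

# Apples-to-apples with the LinkedIn funnel (total outreach as denominator):
print(f"completed / sent:   {completed / recipients:.3%}")                # ~1.207%
# Open-rate-adjusted denominator, a much smaller (and fuzzier) base:
print(f"completed / opened: {completed / (recipients * open_rate):.3%}")  # ~4.4%
```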

Another fundamental difference between these two recruitment methods is the LinkedIn method gives me one shot at getting a response, while my email list gives me multiple opportunities to get a response.

On that webinar I mentioned earlier, Douglas Hubbard shared some info about the Student-T method that I don’t really understand yet, but he boiled it down to this easier to understand takeaway:

As your number of samples increases, it gets easier to reach a 90% confidence level. Beyond about 30 samples, you need to quadruple the sample size to cut error in half.
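That takeaway follows from the standard error of a mean shrinking with the square root of the sample size. A tiny illustration (the standard deviation value is arbitrary, chosen just for demonstration):

```python
import math

def standard_error(sd, n):
    # Standard error of the sample mean: sd / sqrt(n)
    return sd / math.sqrt(n)

sd = 10.0  # arbitrary sample standard deviation, for illustration only
ratio = standard_error(sd, 30) / standard_error(sd, 120)
print(ratio)  # quadrupling n (30 -> 120) halves the error: ratio is 2 (up to floating point)
```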

Remember that we’re talking about my de-biasing survey here, which is not really measuring anything. It’s using open-ended questions to explore the problem space and make sure my thinking about the question aligns with the way my sample population thinks about the question.

All that to say that at this stage of my research, I’m less interested in confidence level in my findings and more interested in having enough data to do a good job of de-biasing myself. In other words, the de-biasing survey’s purpose is to make sure I ask the right question(s) in the second survey I’ll use in this project. The de-biasing survey is less of a measuring tool and more of a making-sure-I-don’t-screw-up-the-measurement-question tool. 🙂

When I get to the second survey in this project, I’ll be more interested in error and confidence and sample size.

I’ll end with this:

This is the only dick-ish response I’ve ever gotten to LinkedIn outreach, and I’ve reached out to thousands of people using the method described above.

So I’m way over that 30 sample size threshold, which gives me an extremely high confidence level when I say this: almost every human will either want to help with my research (at best) or ignore my request (at worst). It’s exceedingly rare to encounter hostile jerks, and such people are extreme outliers.

I think I’ve got you up-to-the-minute with this research project! I haven’t looked at the survey responses yet, so I don’t think there’s anything more for me to say about this, unless y’all have questions. Please do hit REPLY and let me know.

This email series will continue as I have more to share about this project. I’m on a plane to SFO tomorrow to participate in a money mindset workshop and then supervise the movers packing up our house (we’re moving to Taos!!), so I won’t have a ton of time for this research project until next week anyway. I’ll keep repeating my “take the survey” CTA to this list for the remainder of this week, and then turn off the de-biasing survey, work through the results, construct my measuring survey, and then update you.

-P