[PMC Weekly Insight] It’s a grind-grind

“It’s a grind-grind
It’s a grind
It’s a grind-grind”

— “Bus to Beelzebub”, Soul Coughing

That’s Soul Coughing, singing about the act of coding survey responses.

I’m kidding, and yet… it can be a grind. The coding part, I mean.

I’m continuing to work on my survey marketing experiment.1 I’m not done coding the list sample responses.

In particular, the list sample is some real work to code because y’all were much more verbose in your responses than the LinkedIn sample. My hot take on this difference: this is because y’all are much more actively investing in your careers. That’s probably why you’re on this list in the first place! So you simply have more to say on the subject.
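For anyone curious what the mechanical part of this looks like, here’s a minimal sketch of the counting step that follows hand-coding. The response data and code labels are entirely made up for illustration; the actual codes in my study are different.

```python
from collections import Counter

# Hypothetical data: each survey response has already been hand-coded
# with one or more category labels ("codes"). These labels are invented
# for illustration only.
coded_responses = [
    ["career-investment", "specialization"],
    ["career-investment"],
    ["pricing", "specialization"],
    ["pricing"],
]

# Tally how often each code appears across all responses.
code_counts = Counter(code for codes in coded_responses for code in codes)

# Print codes from most to least frequent.
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

The hand-coding itself is the grind; the counting afterward is nearly free, which is part of why it’s tempting to think about outsourcing the former.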

Getting help with the grind

What about getting help with the high-effort parts of a research project like the one I’m conducting? Does that make sense? Is it worth doing? If so, how would you go about getting help?

Let’s start with the how question and then get to the should-you question.

Partner with a researcher

You can partner with someone who has research experience. You’ll most easily find this type of experience in academia.

Graduate-level students are one option. They bring the research rigor while you bring the business context and connections needed for the project. The collaboration may help them with their progress towards a degree, or with their publication needs, or with something else that’s important to them. And the collaboration helps you with your client work or marketing. So there’s a shared incentive in this arrangement.

Professors or departments are another option, though they may be more selective because — at a departmental level — they’d be committing more resources to the project and so need to be more discerning about what they say yes to. At the professor or department level, you may gain worthwhile credibility, because you’re bringing a greater level of research rigor to your project along with the brand of the professor’s or department’s institution.


Hire a freelance researcher

You can also find freelance researchers outside of academia. They might be able to help you by taking on high-effort work.

By outsourcing parts of your study to a freelance researcher, you can buy back some of your time, but at what cost? Yes, there’s the financial cost, which is fine. But there’s also the cost of you being at least partially removed from parts of the process, and this might cost you insight and confidence in the outcome.

Should you get help with the grind?

This is all good stuff, but you need to evaluate whether the following costs are worth it:

  • Loss of control and flexibility. In embracing a greater level of research rigor, you will give up certain forms of control and flexibility. You might be committing to a larger sample size or a more expensive recruitment process, for example. For a high-profile, important study, this could be worth it. For others, it may not be, especially when combined with the potential loss of flexibility. More on this below.
  • Loss of insight. In getting outside help, you’ll necessarily be less involved in all aspects of the study. This might cause you to feel less confident in the insight your results generate. To be clear, it might not cause this outcome, depending on how you handle it. But the risk is there.
  • Collaboration overhead. In so many contexts, teamwork is presented as an unalloyed good, but in some contexts it is a cost that doesn’t pay off. In innovation work, the value of a collaborative team needs to be closely scrutinized. Yes, the team approach might produce value. “Many hands make for light work.” This is true. But also, many hands make for a lack of agility, additional expensive communication overhead, and a potential lack of focus and clarity. So specifically in the context of innovation work, a collaborative approach may be less effective.

In a large, high-profile research project, the benefits of putting together a team are really worth considering. But in a small research project like mine, it’s possible to nearly ruin the whole thing by building an unnecessary team in order to avoid a few hours of unpleasant work. Much better to just do the effing work myself and avoid all those costs that would come from assembling a team.

Mixed methods and qualitative/inductive flexibility

I’ve been reading a freaking fantastic book on research, and now I feel like I have a foundational reading list for you if you’re interested in doing research in a business context. The list:

  1. “Mixed Methods: A short guide to applied mixed methods research”, by Sam Ladner
  2. “How to Measure Anything: Finding the Value of Intangibles in Business”, by Douglas Hubbard

The book I’ve been reading recently is the first on the above list. It’s a short, highly readable, largely jargon-free book. And it’s just excellent. It helps you understand the inherent contradiction — and the resulting power — that comes from blending quantitative and qualitative methods in the same study.

One of the points Dr. Ladner makes is that qualitative methods — which are inductive in nature (generating new theories) rather than deductive (attempting to test the truth of a theory) — are also more agile and usually involve less up-front cost.

Something I believe but can’t prove: academics default to quantitative/deductive approaches rather than qualitative/inductive approaches. This might result in a mismatch if you partner with an academic on your research.

You may begin with an ill-defined question, a strong but vague sense of what you want to learn, or what simply amounts to the wrong question to ask.2 If you use a high-up-front-cost method to answer a question like this, what you really have is an expensive boondoggle. A lean, iterative approach, starting with a small qualitative-dominated study, is a much better match between the maturity of your question and the method you use to get answers. In other words, a small qualitative-dominated study is the better tool for improving your question.


All this to say, I’m an advocate for the following process:

  1. Do your best to define a good question for your research project.
  2. Embrace the grunt work you’re about to deal with. Begin with a small-scale study that uses agile, flexible methods. This might mean avoiding getting outside help.
  3. Use the results from your initial small-scale study to refine your question. Also use these results as assets that help you connect and build trust with prospective clients. In other words, use the results of your study as marketing material.
  4. (Possibly) cycle through a few more small-scale iterations as you refine your question.
  5. Only once you’ve proven the value and clarity of your question would you consider scaling it up, and at this point you could benefit quite a lot from partnering with an academic or freelance researcher.


Responses to these emails in this series about research have been, like, crickets, except from a few folks. This suggests I’m talking about stuff that’s relevant to only a small portion of my list.

Your thoughts?




  1. If you want to read up on this experiment:
    1. philipmorganconsulting.com/pmc-survey-marketing/
    2. philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
    5. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-qualitative-analysis-of-the-de-biasing-survey/
    6. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-coding-and-counting/
  2. Douglas Hubbard talks a lot about how common and disastrous this is. It’s a simplification of his position, but a pretty fair one, to say that we are excellent at choosing the wrong things to measure, and that some lean iteration helps us arrive at better things to measure.