Survey marketing: initial data from the de-biasing survey

(Readin’ time: 3m 36s)

My survey marketing experiment [1] continues!

Here’s the initial “de-biasing” survey summary data from my two samples.

The LinkedIn sample

My LinkedIn sample was a convenience opt-in sample, recruited using a Sales Navigator search for:

  • Self-employed
  • In United States
  • More than 10 years in current position
  • Keyword “software developer” in LI profile

I invited these folks to connect with me, and in my LI connection request message, I said the following:

Hi @firstname, my name is Philip. I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/4B6uBroa

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

The automation tool I used, LinkedProspect, sent 1537 connection requests. 364 (23.68%) of those requests were accepted, 38 (10.44%) of the people who accepted started the survey, and 22 (57.89%) of those completed it, which means 1.43% of the sample I attempted to recruit from LinkedIn actually completed the survey.

The list sample

I also recruited a sample from my email list. I sent this sample to a fork of the survey so I could compare the two samples.

This sample was also a convenience opt-in sample. Across three daily emails, I invited this sample using the following text:

Quick tophat: I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/fNWSDcfj

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

An average of 1898 people received each of the three invitation emails. There’s no way for me to know how many actually saw, noticed, or really thought about the invitation, but let’s use the same funnel math as with the LinkedIn sample: of 1898 “connection requests”, 56 (2.95%) started the survey and 33 (58.93%) of those completed it, which means that 1.74% of the sample I attempted to recruit from my email list actually completed the survey.
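If you want to double-check that funnel math for both samples, here’s a quick Python sketch. The funnel() helper and the stage labels are mine, purely for illustration; the counts are the ones reported above.

# Sanity-check the recruitment-funnel percentages reported in this post.
def funnel(stages):
    # Print each stage's conversion from the previous stage and from the top.
    top = stages[0][1]
    prev = top
    for name, count in stages:
        print(f"{name:>9}: {count:>4}  step {count / prev:7.2%}  overall {count / top:7.2%}")
        prev = count

print("LinkedIn sample")
funnel([("invited", 1537), ("accepted", 364), ("started", 38), ("completed", 22)])

print()
print("List sample")
funnel([("invited", 1898), ("started", 56), ("completed", 33)])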

Summary data

Here’s the first picture of the data I’ve collected, looking at it through a quantitative lens:

[Chart: summary data from both survey samples]

The next step is to dig through the responses to the open-ended questions and look for patterns there, but even at this summary level there seems to be a distinct difference between y’all (if you’re reading this in your inbox) and my LinkedIn sample.

This is what I would have guessed. An email list that’s about marketing and expertise is likely to filter for self-employed people who are interested in investing in their careers. This is a biased sample.

Having a LinkedIn profile and responding to a connection request there is another filtering mechanism, one that also gives me a biased sample.

These biases are totally OK! I’m trying to understand how self-employed devs think about investing in their careers, but implicit in that question is an assumption: I only really care about the subset of this group that I can actually reach! The bias in my sample matches the bias I’d experience everywhere else in my business, so this sampling bias isn’t going to skew my results in a way that undermines my decision-making (and yes, I do have a decision I need to make about how I message what I do).

How statistically valid is my data?

Wrong question!

Before I started this project, I had zero quantitative data on my question. I was operating on intuition and the sense of the market I gained through lots of conversations and casual, qualitative research. That’s not low-value information at all. But! It’s not quantitative.

So even a small amount of not-exactly-rigorously-collected quantitative data represents a significant increase in what I know. And that’s super valuable.

Next steps on this project, which I’ll report back to you on as I complete them:

  1. Analyze the qualitative data I’ve collected.
  2. (Likely, but not 100% sure) Ask the respondents who provided an email address for a brief interview. Aim for 5 such interviews, with the goal of going deeper on the dataset and getting even more qualitative data.
  3. Write and deliver the promised report.
  4. Decide if I want to invest in writing the “real” survey and recruiting a broader sample, or instead pivot to an improved question and start the process over again. [2]

-P


Notes

1: If you want to read up on this experiment:

  1. https://philipmorganconsulting.com/pmc-survey-marketing/
  2. https://philipmorganconsulting.com/pmc-the-de-biasing-survey/
  3. https://philipmorganconsulting.com/pmc-survey-marketing-recruitment/

2: I’m working hard to resist having a strong, emotional reaction to this initial collection of data. One hot take that wouldn’t be totally unreasonable: “Crap!! Only 36% of my LinkedIn survey respondents even care about the superset of services that my business lives in!!” But 36% of a massive market is… still massive. So I’m working to temper my initial emotional reaction so that I can take an objective look at the qualitative data that accompanies this quant data.