[PMC] Survey marketing recruitment

Quick tophat: I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/fNWSDcfj

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

(Readin’ time: 8m 54s)

“If a survey falls in the forest and no one fills it out, does the survey produce data?” — The Senile Prophet

Let’s talk about recruitment. This is the fancy word for finding folks who might respond to my survey.

Recently I hired Heather Creek, a PhD and internal surveys consultant at The Pew Charitable Trusts, to give a presentation to TEI on surveying. It was fantastically informative.

We learned from Heather that there are 8 distinct kinds of sample populations you might survey, grouped into 3 categories:

  • Probability samples
    • Random general public
    • Random from a curated list (like a customer list)
    • Intercept
    • Panel
  • Convenience or opt-in samples
    • Non-probability panel
    • Convenience intercept
    • Snowball
  • Census
    • Every member of a population

For my research project on how self-employed devs invest in their career, I’m recruiting a convenience intercept sample.

I asked the folks at Qualtrics, which has a very robust survey and analytics platform plus a research services department, what they would charge to do this kind of recruiting. I believe they would recruit from a bunch of panels they have access to, meaning the sample they’d recruit for me might be a probability panel, which is considered a more rigorous type of sample.

They quoted me something like $40 per recruit, and said the cost per recruit ranges from small (around $10) to over $100 for people who are hard to find or have very specific characteristics.

Is one sampling method better than the others? Are the more rigorous (probabilistic and census) sampling methods more desirable? You can’t answer that without knowing what your research question and other parameters are.

For my purposes (reducing uncertainty in a business context), my less rigorous and less probabilistic method is fine. But my approach would not work for other research projects with different questions being asked or greater uncertainty reduction needs.

Chances are, if you’re doing research to benefit your own business, help a client make better decisions, or help all your future clients make better decisions, you can assemble a sample using less rigorous methods just like I am. Your question is likely to be very focused (and if it’s not, that’s a problem you need to fix before surveying or interviewing), and you can recruit from a small but pretty homogeneous group to assemble your sample. Both of these things help you produce more impactful findings.

To expand on this, what question you choose is certainly the most impactful variable in this whole process! No amount of rigor in your survey design, recruitment, and sampling methodology can compensate for asking the wrong question.

Last week I sat in on a webinar hosted by the Military Operations Research Society, where Douglas Hubbard gave a fascinating, almost-2-hour-long presentation. Douglas offered numerous examples of asking the wrong question. He made the general claim (and I have no reason to doubt this) that in most business situations, the economic value of measuring a variable is usually inversely proportional to the measurement attention it typically gets.

In other words, we are reliably bad at choosing what things to study (or reliably good at misplacing our investigative effort)! Here’s one example he gave, specific to IT projects:

  1. Initial cost
  2. Long-term costs
  3. Cost saving benefit other than labor productivity
  4. Labor productivity
  5. Revenue enhancement
  6. Technology adoption rate
  7. Project completion

This list is ordered from lowest to highest information value, meaning the value of knowing #1 on this list is significantly lower than the value of knowing #7. So, want to guess what most folks spend the most effort on measuring?

You guessed it. Not #7. The effort goes into the first few items on this list, meaning it’s focused on the lowest-impact stuff.

I tell you this to contextualize the discussion of recruiting a sample for my survey.

Asking the right question is dramatically more important than using highly rigorous methods downstream in my research.

We generally use the phrase “good enough for government work” in a somewhat pejorative way, but it fits here in a more neutral sense. In other words, there’s no need to strive for extremely high levels of rigor in the context of research for business purposes. Neither should we be sloppy. Horses for courses.

How I’m recruiting

I’m recruiting the sample for my de-biasing survey in two ways.

The first is a method I learned from Ari Zelmanow. This approach uses LinkedIn connection requests to ask a group of people to fill out my survey. I honestly didn’t think this would work at all, much less work well.

Here are some numbers I captured midway through a recent recruitment project:

  • Connection requests sent: 155
  • Connections accepted: 55 (35.48% of connection requests)
  • Surveys completed: 17 (10.97% of connection requests)
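(For the curious, here’s the arithmetic behind those percentages as a tiny Python sketch. The numbers are the ones above; the script itself is just my illustration, not part of Ari’s playbook.)

```python
# Funnel arithmetic for the LinkedIn recruitment numbers above.
# Each stage is expressed as a share of connection requests sent.
requests_sent = 155
connections_accepted = 55
surveys_completed = 17

print(f"Accept rate:   {connections_accepted / requests_sent:.2%}")  # 35.48%
print(f"Response rate: {surveys_completed / requests_sent:.2%}")     # 10.97%
```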

If you have some experience recruiting for surveys, you know those numbers are very, very good. Like, eyebrow-raising good.

I can’t take credit for this; I was simply running Ari’s playbook here.

I will note that the numbers I’m seeing for my current project (understanding how self-employed devs invest in their career) are much less impressive. 🙂

  • Connection requests sent: 1537
  • Connections accepted: 21.28% of connection requests
  • Surveys completed: 20 (1.301% of connection requests)

Those last two numbers will climb a bit over the next week or two, but you can see they’re much lower than the previous set (and unlikely to ever close the 10x gap in performance). The previous recruitment outreach was for a client project investigating developer sentiment around a specific platform. Again, the question you’re investigating matters. A lot!
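If you want a rough way to check that a gap like that isn’t just noise, one option is a two-proportion z-test. To be clear, this is my own back-of-the-envelope sketch, not something from Ari’s playbook, and with convenience samples it’s only indicative:

```python
# Rough two-proportion z-test comparing the two LinkedIn campaigns' response rates.
# Only indicative here: these are convenience samples, not probability samples.
from math import sqrt, erfc

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Client project: 17 responses from 155 requests.
# Current project (so far): 20 responses from 1537 requests.
z, p = two_proportion_z(17, 155, 20, 1537)
print(f"z = {z:.1f}, p = {p:.1e}")  # z is around 7.8 -- far too large a gap to be chance
```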

The LinkedIn connection message I’m using is not just your standard “Hi, let’s connect” message. Instead, it’s a message that explains the purpose of my research and asks folks to fill out my survey. So the connection message is not really about connecting on LinkedIn, it’s about recruiting for my survey.

You’ve seen the message before. It’s almost identical to the message at the top of this and yesterday’s email:

Hi @firstname. I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

-> https://www.getfeedback.com/r/fNWSDcfj

I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

The @firstname field is a variable that’s personalized at runtime by LinkedProspect, the tool I’m using to automate my outreach. I like LinkedProspect over Dux-Soup because LinkedProspect runs in the cloud and doesn’t require me to babysit the connection automation process the way Dux-Soup does.
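In case the @firstname mechanics aren’t clear, here’s an illustrative sketch of the kind of merge-field substitution that happens at send time. This is not LinkedProspect’s actual template syntax or API, just a stand-in to show the idea:

```python
# Illustration only: the kind of merge-field substitution outreach tools perform
# when a message is sent. Not LinkedProspect's real internals.
TEMPLATE = (
    "Hi {firstname}. I am working to better understand how self-employed devs "
    "improve their career. Would you be willing to spare 3m for a survey? "
    "It will mean the world to me.\n\n"
    "-> https://www.getfeedback.com/r/fNWSDcfj\n\n"
    "I'm not selling anything; you have my NO SALES PITCH GUARANTEE."
)

def personalize(first_name: str) -> str:
    return TEMPLATE.format(firstname=first_name)

print(personalize("Jordan"))  # "Jordan" is a made-up example name
```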

I’m identifying candidates for my recruitment pool using a LinkedIn Sales Navigator search. I’ve put in time making sure the results of this search are as relevant as possible to the survey. This is another important variable in this process. If I define a pool of candidates that doesn’t find my research question relevant or interesting, it will affect my results.

In fact, if I weren’t able to continue this research project for some reason, I already have data that supports a low-confidence conclusion: the question of career development, or investing in one’s own career, is less interesting or relevant to self-employed software developers than a question about a technology platform is to developers with experience in that platform. Even if I couldn’t look at the survey results at all, I could still reasonably (again, with low confidence) draw this conclusion based on the response rate I’m seeing.

As you also know, I’m recruiting for this survey from a second pool of candidates. What I haven’t mentioned yet is that I’m pointing this second pool of candidates to a fork of the survey. It’s identical: same questions, same delivery platform (GetFeedback). But I forked the survey so I could compare the two candidate pools and hopefully answer this question: is my email list different from self-employed devs who have LinkedIn profiles?

In more colloquial terms: I suspect y’all are special. Will the survey data support this belief?

You’ll remember that I’m using a convenience intercept sampling method. This is not a probabilistic sampling method, which means… probably not much in the context of this research. But a more rigorous research project would suffer from this less rigorous recruiting method.

Let’s look at how my email list as a group is performing in terms of response to my survey. I had to think a bit about the question of which number to use as my “top of funnel” number. Is it the total number of people on my list who are sent these daily emails, or is it that number multiplied by my global average 27.36% open rate?

Well, for the LinkedIn outreach I’m using the total number of people I reached out to, so for a fair comparison I should use the total number of people each of the last two emails was sent to.

  • Email addresses exposed to my survey request: 1,906
  • Surveys completed: 23 (1.207%)
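To make the denominator question concrete, here’s a quick sketch using the numbers above. The 27.36% figure is the global average open rate I mentioned; the opens-based rate is hypothetical, since I’m not using that denominator:

```python
# Two ways to define "top of funnel" for the email list: everyone the emails were
# sent to, or an estimate of how many actually opened them. I'm using the first,
# for a fair comparison with the LinkedIn numbers.
list_size = 1906
global_open_rate = 0.2736
surveys_completed = 23

rate_vs_recipients = surveys_completed / list_size
rate_vs_estimated_opens = surveys_completed / (list_size * global_open_rate)

print(f"vs everyone sent the email: {rate_vs_recipients:.3%}")       # ~1.207%
print(f"vs estimated opens only:    {rate_vs_estimated_opens:.3%}")  # ~4.41%
```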

Again, that last number will climb over time as I repeat my CTA to take the survey for the rest of this week. It’s surprisingly close to my LinkedIn recruitment numbers, with one notable difference: it took me about 2 weeks to get 20 responses from the LinkedIn candidate pool, and only 2 days to get 23 responses from my email list.

Another fundamental difference between these two recruitment methods is the LinkedIn method gives me one shot at getting a response, while my email list gives me multiple opportunities to get a response.

On that webinar I mentioned earlier, Douglas Hubbard shared some info about Student’s t-distribution that I don’t really understand yet, but he boiled it down to this easier-to-understand takeaway:

As your number of samples increases, it gets easier to reach a 90% confidence level. Beyond about 30 samples, you need to quadruple the sample size to cut error in half.
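I can at least illustrate that takeaway with a small sketch of my own (not from the webinar): the 90% margin of error for a sample mean is roughly t × s / √n, so multiplying n by 4 divides the √n part by 2, and the t multiplier barely changes once you’re past about 30 samples.

```python
# Why quadrupling the sample size roughly halves the error: the margin of error of a
# sample mean is t_crit * s / sqrt(n), and sqrt(4 * n) = 2 * sqrt(n).
from math import sqrt
from scipy.stats import t  # assumes SciPy is available

def margin_of_error(n, s=1.0, confidence=0.90):
    """Half-width of a `confidence`-level t interval for a mean with sample std s."""
    t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return t_crit * s / sqrt(n)

for n in (30, 120, 480):
    print(n, round(margin_of_error(n), 3))
# 30  -> 0.31
# 120 -> 0.151
# 480 -> 0.075   (each 4x jump in n roughly halves the error)
```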

Remember that we’re talking about my de-biasing survey here, which is not really measuring anything. It’s using open-ended questions to explore the problem space and make sure my thinking about the question aligns with the way my sample population thinks about the question.

All that to say that at this stage of my research, I’m less interested in confidence level in my findings and more interested in having enough data to do a good job of de-biasing myself. In other words, the de-biasing survey’s purpose is to make sure I ask the right question(s) in the second survey I’ll use in this project. The de-biasing survey is less of a measuring tool and more of a making-sure-I-don’t-screw-up-the-measurement-question tool. 🙂

When I get to the second survey in this project, I’ll be more interested in error and confidence and sample size.

I’ll end with this:

This is the only dick-ish response I’ve ever gotten to LinkedIn outreach, and I’ve reached out to thousands of people using the method described above.

So I’m way over that 30-sample threshold, which gives me an extremely high confidence level when I say this: almost every human will either want to help with my research (at best) or ignore my request (at worst). It’s exceedingly rare to encounter hostile jerks, and such people are extreme outliers.

I think I’ve got you up-to-the-minute with this research project! I haven’t looked at the survey responses yet, so I don’t think there’s anything more for me to say about this, unless y’all have questions. Please do hit REPLY and let me know.

This email series will continue as I have more to share about this project. I’m on a plane to SFO tomorrow to participate in a money mindset workshop and then supervise the movers packing up our house (we’re moving to Taos!!), so I won’t have a ton of time for this research project until next week anyway. I’ll keep repeating my “take the survey” CTA to this list for the remainder of this week, and then turn off the de-biasing survey, work through the results, construct my measuring survey, and then update you.

-P