Insight for Independent Consultants

I’d like to help you thrive as an indie consultant

My Indie Experts email list is where I do that. If getting better at attracting opportunity via your expertise interests you, please join. There are two ways to get this insight: inbox or RSS.

    The curse

    (Readin’ time: 2m 3s)

    On the shuttle from Taos to ABQ, I met an aeronautical engineer who had a theory on the root cause of the recent 737 Max crashes.

    He was a dapper-looking guy with white hair, probably in his late 50s, and had a charming British accent.

    He started his career as a flight engineer, which he told me was the last of the non-pilot roles to be automated out of existence. (Automation got to the navigator’s job before it got the flight engineer’s job.)

    Those of you who have seen the movie “Airplane!”, or those of you with even more gray hair (or less hair, period) than me, will remember the third seat in the cockpit of larger passenger planes, behind the two pilot seats. That’s where the flight engineer would sit and do their work of monitoring and regulating the plane’s systems.

    Now that work is largely automated. As they say, never compete against a computer!

    So this guy from the shuttle moved into an airplane design engineering role after his job got automated away.

    His theory about the 737 Max issue is that the test pilots did not perceive any problem with the plane’s flight performance because their relatively high skill level masked it. The 737 Max is a more difficult plane to fly, but the test pilots easily absorbed this higher difficulty level because they’re more highly trained and experienced.

    The everyday operations pilots, however, don’t have this level of experience and training. So their lack of advanced skill made possible the situations that resulted in several crashes and numerous fatalities.

    Again, this is the theory of the guy I met on the shuttle. He’s in a better position than I to construct theories like this, but who knows if his theory is actually a good one.

    In essence, this guy was saying that the curse of knowledge played a significant role in the problems with the 737 Max. The test pilots weren’t aware that their higher skill was absorbing a higher difficulty level of operating the 737 Max, and so they didn’t communicate the need for additional training for the pilots who fly the production aircraft every day.

    Does this remind you of the situations you help your clients improve? I’d bet it does, even if your work has nothing to do with designing and testing aircraft!

    As I reviewed Wikipedia’s list of cognitive biases, I noticed several that might play a role in the guy on the shuttle’s theory. In other words, his theory might be the result of a biased view wherein he overweights the role of test pilots in general, or underweights the role of engineering (his “tribe”) in aircraft malfunction.

    Bottom Line: This stuff is all part of being a good consultant. Helping your clients see past their biases–and seeing past your own–helps you effect better diagnoses and recommendations.

    -P

    PS – I laughed at almost every joke in the above-linked supercut of “Airplane!” scenes. And then felt a general sense of amazement at how many of them are in some way sexist. We’ve come a long way since the ’70s.

    Subtle sampling bias

    (Readin’ time: 1m 15s)

    A few days back, list member Josh Earl was kind enough to point out a form of bias that can affect your efforts to validate an idea.

    This got me thinking…

    Your reputation filters what you hear and experience, so you might think the thing you specialize in is more common than it really is. This is a form of sampling bias.

    Social media is another source of sampling bias.

    I see the folks I follow on Twitter talking a lot about the long-running MacBook Pro keyboard issue and a noticeable decline in the customer experience at Apple stores. Is this a broad-based public perception issue for Apple? I have no idea. My Twitter “sample” suggests it is, but if I gathered a different sample using something like a census survey of everyone who walks out of an Apple store for a multi-day period, I’d get different data that might lead to a different conclusion. And if I gathered yet another sample, like a census of every person in Texas who has interacted in any way with the Apple brand or products, I’d reach yet another conclusion.
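
    If you want to see this effect in miniature, here’s a toy simulation. Every number in it is invented; the point is just that the estimate you get is entirely a function of the frame you sample from.

    ```python
    import random

    random.seed(42)

    # Invented population: 10% of all Apple customers are unhappy.
    population = [1] * 1_000 + [0] * 9_000    # 1 = unhappy, 0 = happy

    # A hypothetical Twitter feed curated toward critics: 60% unhappy.
    twitter_feed = [1] * 600 + [0] * 400

    def estimate(frame, n=200):
        """Estimate the unhappy share from n draws out of a sampling frame."""
        return sum(random.sample(frame, n)) / n

    print(f"Random sample of everyone: {estimate(population):.0%}")    # ~10%
    print(f"Sample of my Twitter feed: {estimate(twitter_feed):.0%}")  # ~60%
    ```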

    Bottom line: think about all the filters you have put in place, purposefully or not. These may have good effects (filtering for good clients, etc.), but they also act as a sort of sampling bias. This isn’t necessarily a problem you need to fix, but it’s good to be aware you are seeing a filtered view of the world. If you are trying to get an accurate picture of some part of the world that intersects with your area of expertise or are trying to be highly objective in your advice to clients, remember that you might need to compensate for sampling bias.

    -P

    The $12,500 pourover

    (Readin’ time: 1m 48s)

    While in Sebastopol for the move, I went to a new coffee shop a few times.

    It’s called ACRE, their product is wonderful and seemingly ethically produced, and you can order beans from them if you, like me, don’t have a bead on a local roaster you like (yet): www.acrecoffee.com/shop

    On a funny note, I tried some of this local New Mexico Pinon coffee (nmpinoncoffee.com/products/Traditional%20Piñon?category=coffee0) and it was “LOL, who snuck a pound of maple syrup into my coffee when I wasn’t looking?” Not my preference.

    Anyway, ACRE coffee in Sebastopol has a $12,500 pourover machine.

    [Photo: ACRE’s Poursteady pourover machine]

    At an estimated $15/hour barista wage, that machine costs 833.3 hours of barista time, not including operating costs, etc. If a pourover takes (let’s estimate) 10 minutes of a barista’s time and this machine cuts that down to 2 minutes, it’s a net savings of 8 minutes of barista time per pourover, meaning it’ll take at least 6,250 pourovers to pay for the machine.
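
    If you want to check that arithmetic yourself, here’s a minimal sketch. The wage and timing numbers are my estimates from above, not ACRE’s actual figures.

    ```python
    # Back-of-the-envelope payback math for the pourover machine.
    machine_cost = 12_500     # USD
    barista_wage = 15         # USD per hour (estimated)
    manual_minutes = 10       # estimated barista time per manual pourover
    machine_minutes = 2       # estimated barista time with the machine

    saved_minutes = manual_minutes - machine_minutes     # 8 min per pourover
    saved_dollars = saved_minutes / 60 * barista_wage    # $2.00 per pourover

    print(f"Barista-hours of cost: {machine_cost / barista_wage:.1f}")    # 833.3
    print(f"Break-even pourovers:  {machine_cost / saved_dollars:,.0f}")  # 6,250
    ```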

    It’s a gorgeous, well-built machine made in Brooklyn. It automates the pouring-water part of the pourover process, and it works quite well.

    It has this motorized pouring spout that zooms around to all the active drippers, dispensing hot water in precisely measured and timed amounts. It’s really fun to watch.

    I asked the owner why he invested in it, and he pretty casually answered that it was to free up his staff to do other tasks. He had this borderline “eh, it’s only money, so why not?” vibe as he talked about the investment.

    You do *not* want your entire job or business to be on the receiving end of that attitude. 🙂

    In the case of ACRE and their baristas, there’s plenty of other stuff for their staff to do, so this $12,500 pourover machine ain’t puttin’ nobody out of a job.

    But maybe it’s keeping them from having n+1 baristas working, even during busy times. Maybe it’s making the business more profitable?

    For sure, the Poursteady machine has mostly commoditized the making of pourover coffees at ACRE by automating the most time-consuming part of the process.

    Two points for you to ponder:

    • If we think of this as innovation, do you consider this internal-facing or external-facing innovation? Why?
    • What investments in your business could you leverage in a similar way? Do you consider this a good investment for ACRE? Obviously we don’t have all the context here, but knowing what we know: is this the best investment they could have put $12,500 towards?

    -P

    Ooooh that smell! Can’t you smell that smell!

    (Readin’ time: 3m 49s)

    The difference between direct response and brand marketing has recently been particularly interesting to me.

    My wife and I have just moved to Taos, NM, so I’m in that headspace where you don’t ignore the stuff you usually ignore [1]. In this case, I didn’t ignore the big billboard at the front of the local market where anybody can post a paper advertisement.

    Here’s one that caught my eye:

    [Image: the direct response ad that caught my eye]

    This advertisement is classic direct response (DR) marketing, and I want to objectively describe what it’s doing and then comment on how these DR techniques relate to selling expertise.

    A: Attention-grabbing headline

    If there’s one thing I’ve heard more than anything else about DR headlines, it’s that their job is to give the reader a reason to keep reading.

    In fact, that’s often a meta-framework for how every line of DR copy is written: each line’s job is to give you a reason to read the next line. Each paragraph’s job is to give you a reason to read the next paragraph, until you reach the point in the copy where the sale is made.

    DR headlines often try to make use of vivid imagery (“greased lightning” in the example above).

    B: Use of curiosity

    Curiosity is often a key element in DR marketing. In the example above, you see it used quite heavily. If we could measure this example’s use of curiosity on a 5-point scale, I’d say it measures around a 4.

    I’ve seen more intense uses of curiosity, but not by much. Brennan Murphy is pumping up the volume on the curiosity pretty high in our example here.

    Generally this curiosity “recipe” has two ingredients:

    1. A description of a result, outcome, or benefit of the thing being sold.
    2. A missing detail/details about how this result, outcome, or benefit is achieved.

    So you’ve got two levers you can pull here. If you pull on them both real hard, you can produce a strong sense of curiosity in your reader.

    Or… you can give off a very strong “scent” of DR marketing. You know how some people love cilantro, some are meh about it, and to some people it tastes like soap? DR is kind of like that. With cilantro, it’s a genetic variation that makes it taste like soap to some people. With DR, I think it’s heavy usage of curiosity that is part of the “scent” of DR that some people can pick up on.

    C: Attempts to demonstrate value

    DR marketing will often attempt to anchor the value of the thing being sold against large dollar amounts. Sometimes these anchoring attempts withstand scrutiny, sometimes they don’t.

    D: Attempts to create urgency

    DR marketing often attempts to create a feeling of urgency or a sense the reader will miss out if they do not take action immediately.

    Common ways of doing this include describing a limited supply of the thing being sold or imposing a deadline for taking action. These limits can be real, as they are with events like an IRL seminar, or they can be artificial, as they often are with uses of DR marketing to sell zero marginal cost digital goods or recorded events like webinars.

    DR and selling expertise

    I don’t have a complete answer to this question: is DR marketing compatible with selling expertise?

    The partial, needs-to-be-explored-more answer is: yes and no.

    If we strip away the pressure and as much as possible of the scent that generally accompanies DR, then it’s a valuable tool for selling expertise.

    But it can be a slippery slope! It’s easy to take any one of the elements I’ve described above and start to pull on that lever incrementally harder and harder, and before you know it, your audience is smelling the distinct scent of overdone DR marketing.

    And if that happens, then they’re going to question the value of your expertise.

    Because if it was truly, self-evidently valuable, why do you need to act like those who are selling something much less valuable?

    Important nuance

    To be clear, direct response marketing doesn’t have to involve pressure. It’s merely predicated on the idea that people will be able to directly respond to a specific piece of content, and that you can measure and optimize based on the data those responses generate. And often, the goal is to acquire data, not immediately sell something.

    All this to say: the key quality of direct response marketing is that it involves someone being able to respond to a specific piece of content, and you being able to track or measure their response somehow.
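
    As a concrete (and entirely hypothetical) example of what “trackable response” means in practice, here’s a campaign-tagged link. The URL and tag values are made up; UTM parameters are just the common convention for this.

    ```python
    from urllib.parse import urlencode

    # A click on this exact URL can be attributed to this one piece of
    # content. That attribution is the "direct response" part: a response
    # you can count, measure, and optimize against.
    base = "https://example.com/workshop"
    tags = {
        "utm_source": "newsletter",
        "utm_medium": "email",
        "utm_campaign": "dr-test",
    }
    print(f"{base}?{urlencode(tags)}")
    # -> https://example.com/workshop?utm_source=newsletter&utm_medium=email&utm_campaign=dr-test
    ```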

    I also want to add that–lest you think I’m “against” DR–DR is critical for most of us when bootstrapping our businesses. Most of us would be completely unable to generate leads online without DR.

    That said, DR can have a “scent” and it definitely will have that undesirable scent if it’s used in tandem with pressure or unrealistic claims.

    So the key to using DR effectively in bootstrapping an expertise-based business is to utilize its strengths without over-using them. Overuse of DR’s strengths leads inevitably to the stanky scent we see in the poster I shared above.

    -P

    1: I continue to wonder if this is one of the simplest but most important keys to innovation: overcoming our normal hard-wired tendency to conserve cognitive resources through schema bucketing and ignoring that which is familiar.

    Mailbag: SPoF

    (Readin’ time: 1m 47s)

    List member Josh Earl said something really crucial in response to my SPoF email. Josh graciously gave me permission to share this with you:


    Hey Philip, good email. I generally agree with you about doing many smaller validations along the way.

    One caution that I wanted to add, based on my own experience, is that there’s a strong danger of selection bias here.

    The top 1% of your email subscribers tend to love you so much, they will literally sign up for anything, do anything, and buy anything you create.

    This can really skew your perception of demand for a product if you’re not aware of how strong the effect is.

    Here’s a scenario that I’ve seen multiple times before:

    Say you have an email list of 10,000 people.

    You have an idea for a product that comes to you, proverbially naked in the proverbial shower.

    You send out a couple of emails with a link to opt in to a pre-launch list. You get 100 signups.

    Yay! People are interested.

    You start building the product, and you do a beta offer to this tiny little list. You make 25 sales.

    Yay! People are REALLY interested.

    You can’t help yourself, and you start doing the math—”Wow, I got a 25% conversion rate on this, and I only had 100 people signed up. If even 5% of my main list buys this, I’d make 500 sales!”

    You build the final product, and you do a launch to your entire list, and…

    You end up making another 15 sales.

    WHHHAAA?!?

    When this happens the reason is that almost everyone who had enough interest in this offer to consider buying had already signed up for the pre-launch list.

    When launch time comes, there aren’t a lot of people left on your main list who would consider buying.

    One way to account for this is to raise the bar in the early validation steps.

    And assume that whatever response you get early on captures the majority of the interested people.

    For example, you might set a threshold of getting 500 or more signups to a pre-launch list instead of 100.

    You want to see a very strong response from your entire list early on, not just the hyper-responsives.

    I think I’ll do my email about this today. 🙂


    Josh is right. I missed this really important manifestation of bias that you need to watch out for in validating an idea.
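
    To make the arithmetic in Josh’s scenario concrete, here’s a sketch using his hypothetical numbers:

    ```python
    # Josh's scenario: the hyper-responsives front-load the signal.
    list_size = 10_000
    beta_sales = 25                    # 25% of the 100-person pre-launch list
    launch_sales = 15                  # what actually happens at launch

    naive_forecast = 0.05 * list_size  # "if even 5% of my main list buys..."
    total = beta_sales + launch_sales

    print(f"Forecast: {naive_forecast:.0f} sales")                           # 500
    print(f"Reality:  {total} sales ({total / list_size:.1%} of the list)")  # 40 (0.4%)
    ```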

    Consider signing up for Josh’s email list: joshuaearl.com

    It’s great stuff.

    -P

    [PMC Weekend Edition] “Specialists like us are the future.”

    (Readin’ time: 27 seconds)

    Thanks to all the list members who sent me this: www.theguardian.com/cities/2019/apr/29/are-the-hyper-specialist-shops-of-berlin-the-future-of-retail

    It’s a really interesting read, and it suggests at least one path forward for retailers threatened by Amazon.

    Of course, this path forward has already been mapped out by Kim and Mauborgne in general terms: don’t compete with the 800-lb gorilla at all. Find a “blue ocean” to compete in instead. The article has quite a few interesting specific examples of businesses that are implementing this general strategy.

    Happy Saturday,

    -P

    SPoF

    (Readin’ time: 2m 49s)

    We work to avoid single points of failure in software. What about when validating an idea?

    This question came up recently thanks to a client I was helping think through the validation of an online workshop/course.

    It’s useful to model this situation along a spectrum with two poles.

    On one extreme, you do a one-shot validation based on pre-sales.

    On the other extreme, you think about validation as a group or series of micro-validations that you work through in an iterative fashion.

    Let’s look at each extreme of this spectrum.

    One-shot validation

    The common case here is pre-selling your thing (digital product, workshop, course, etc.) while you’re building it; if your pre-sales don’t meet some goal (usually defined as a total number of sales) by some date, you refund the money and shut the project down, usually before you’ve built the whole thing.

    If this is truly the only way you’re validating, then this is one-shot validation. You either achieved your sales goal by your deadline, or you didn’t.

    I have concerns about one-shot validation, but let’s talk about the other extreme of this spectrum first before we get to those concerns.

    Multiple micro-validations

    On the multiple micro-validations end of the spectrum, you are deploying multiple smaller ways of validating your idea.

    This might look like the following:

    1. The idea muse visits you in the shower. You, of course, are naked and wet, but you still welcome this visit.
    2. You think about the idea a while and can’t find a reason it’s an obviously bad idea. You’re motivated to build it.
    3. You publish a simple landing page where anyone who wants to hear more about this thing as you build it can opt in with an email address. You share this landing page in some way, perhaps to an email list you have, perhaps on social media, or perhaps some other way.
    4. You get some opt-ins. Yay! This is the first micro-validation!
    5. You start communicating with the lovely people who have opted in to your landing page. They give you feedback. One person asks when the product will be available; they’re excited about it. Yay! Second micro-validation!
    6. You get closer to having something you can ship. You offer discounted beta access to the folks who have opted in. A few buy. Yay! Third micro-validation.

    It’s a pretty clear difference, isn’t it, between the multiple micro-validation model and the one-shot validation.

    I have concerns

    I have concerns about the one-shot validation model because it incorporates less data into your decision-making and affords you fewer opportunities to iterate your idea.

    At its worst, it shields your work from critique because it doesn’t emphasize 2-way communication with a group of early adopters, and shielding your work from critique has the side-effect of cutting off your access to growth and greatness.

    At its best, it’s a quick and easy validation method that might protect you from pursuing a losing idea but might also cut off the air supply from an idea that’s almost great.

    For a while I had a sort of informal mentorship with the fantastically good art photographer John Wimberley.

    He once said that he’s very reluctant to critique younger photographers’ work because he feels they might be just a few experiments away from something really great. If they come to him asking for critique at that point, the weight of a negative critique from him could slow or stall their progress, which would be a loss to the medium and harmful to that person.

    I think this expresses something important and relevant to idea validation.

    On the one hand, shielding yourself from critique is harmful to you, so you have to embrace feedback from the market.

    On the other hand, I worry that one-shot validation embraces the wrong kind of critique: one that might kill off potentially good ideas because the feedback itself is ham-fisted and un-nuanced.

    What do you think?

    -P

    Can’t see it to fight it

    (Readin’ time: 1m 51s)

    Jason Molina wrote this song, “Ring the Bell”, that has this line that just guts me every time I hear it.

    I tried and tried to pull out the right excerpt for you, but each attempt lost critical context, so instead here’s the whole song:


    Help does not just walk up to you
    I could have told you that
    I’m not an idiot
    I could have told you that
    In every serpent’s eye watch you go where you go
    Every serpents double tongue takes a turn with your soul
    If you let them ring your bell (x2)
    They’re ringing the bell (x2)
    Why wouldn’t i be trying to figure it out
    Everyone tells you that
    Everyone tells you not to quit
    I can’t even see it to fight it
    If it looks like i’m not trying i don’t care what it looks like
    Cause i stood at the altar and everything turned white
    All I heard was the sound… of the world coming down around me (x2)
    Why wouldn’t i be trying
    Why wouldn’t i try (x2)
    Them double tongues are singing hear the wail of the choir through the fog (x2)
    They’re always close
    They’re always so close
    Always close
    Always so close
    If there’s a way out it will be step by step through the black (x2)
    Why wouldn’t i be trying to figure it out
    It don’t mean i’m not trying if i don’t make it back
    I know serpents will cross universes to circle around our necks
    I know hounds will cross universes to circle around our feet
    They’re always they’re close
    Always so close
    Step by step one’s beside me to kill me or to guide me
    Why wouldn’t i be trying to figure which one out (x2)


    And really, you have to listen to it to understand the feeling of it: youtu.be/5xgKpUKPpmc

    The line that guts me:

    I can’t even see it to fight it

    Anyone who has ever struggled with depression understands this. The song, like many good songs, is somewhat ambiguous, but if you’ve ever dealt with depression, you know instantly what Jason is singing about when he reaches that line. You feel it in your bones.

    This “can’t even see it” idea applies to other things too!

    • Positioning
    • Mindset
    • Habits

    While depression is pretty destructive, these other hard-to-see items can be harmful, neutral, or helpful.

    What they have in common is how they become sort of invisible to us. We “can’t even see it”, even though these things are very impactful on our businesses.

    So today’s question: what has become invisible to you?

    -P

    Survey marketing recruitment

    Quick tophat: I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

    -> https://www.getfeedback.com/r/fNWSDcfj

    I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

    (Readin’ time: 8m 54s)

    “If a survey falls in the forest and no one fills it out, does the survey produce data?” — The Senile Prophet

    Let’s talk about recruitment. This is the fancy word for finding folks who might respond to my survey.

    Recently I hired Heather Creek, a PhD and internal surveys consultant at The Pew Charitable Trusts, to give a presentation to TEI on surveying. It was fantastically informative.

    We learned from Heather that there are eight distinct kinds of sample populations you might survey, grouped into three categories:

    • Probability samples
      • Random general public
      • Random from a curated list (like a customer list)
      • Intercept
      • Panel
    • Convenience or opt-in samples
      • Non-probability panel
      • Convenience intercept
      • Snowball
    • Census
      • Every member of a population

    For my research project on how self-employed devs invest in their career, I’m recruiting a convenience intercept sample.

    I asked the folks at Qualtrics, which has a very robust survey and survey-analytics platform and a research services department, what they would charge to do this kind of recruiting. I believe they would recruit from a bunch of panels they have access to, meaning the sample they recruit for me might be a probability panel, which is considered a more rigorous type of sample.

    They quoted me something like $40 per recruit, and said costs range from small (around $10 per recruit) to over $100 per recruit for people who are hard to find or have very specific characteristics.

    Is one sampling method better than the others? Are the more rigorous (probabilistic and census) sampling methods more desirable? You can’t answer that without knowing what your research question and other parameters are.

    For my purposes (reducing uncertainty in a business context), my less rigorous and less probabilistic method is fine. But my approach would not work for other research projects with different questions being asked or greater uncertainty reduction needs.

    Chances are, if you’re doing research to benefit your own business or help a client make better decisions or help all your future clients make better decisions, you can assemble a sample using less rigorous methods just like I am. Your question is likely to be very focused (and if it’s not, that’s a problem you need to fix first before surveying or interviewing) and you can recruit from a small but pretty homogenous group to assemble your sample. Both of these things help you produce more impactful findings.

    To expand on this, what question you choose is certainly the most impactful variable in this whole process! No amount of rigor in your survey design, recruitment, and sampling methodology can compensate for asking the wrong question.

    Last week I sat in on a webinar hosted by the Military Operations Research Society, where Douglas Hubbard gave a really fascinating almost-2-hour-long presentation. Douglas offered numerous examples of asking the wrong question. He made the general claim–and I have no reason to doubt this–that in most business situations, the economic value of measuring a variable is usually inversely proportional to the measurement attention it typically gets.

    In other words, we are reliably bad at choosing what things to study (or reliably good at misplacing our investigative effort)! Here’s one example he gave, specific to IT projects:

    1. Initial cost
    2. Long-term costs
    3. Cost saving benefit other than labor productivity
    4. Labor productivity
    5. Revenue enhancement
    6. Technology adoption rate
    7. Project completion

    This list is ordered from lowest to highest information value, meaning the value of knowing #1 on this list is significantly lower than the value of knowing #7. So, want to guess what most folks spend the most effort measuring?

    You guessed it. Not #7. The effort is focused on the first few items on this list, meaning the effort is focused on the lowest impact stuff.

    I tell you this to contextualize the discussion of recruiting a sample for my survey.

    Asking the right question is dramatically more important than using highly rigorous methods downstream in my research.

    We generally use the phrase “good enough for government work” in a somewhat pejorative way, but it fits here in a more neutral way. In other words, there’s no need to strive for extremely high levels of rigor in the context of research for business purposes. Neither should we be sloppy. Horses for courses.

    How I’m recruiting

    I’m recruiting the sample for my de-biasing survey in two ways.

    The first is a method I learned from Ari Zelmanow. This approach uses LinkedIn connection requests to ask a group of people to fill out my survey. I honestly didn’t think this would work at all, much less work well.

    Here are some numbers I captured mid-project for a recent recruitment project:

    • Connection requests sent: 155
    • Connections accepted: 55 (35.48% of connection requests)
    • Surveys completed: 17 (10.97% of connection requests)

    If you have some experience recruiting for surveys, you know those numbers are very, very good. Like, eyebrow-raising good.

    I can’t take credit for this; I was simply running Ari’s playbook here.

    I will note that the numbers I’m seeing for my current project (understanding how self-employed devs invest in their career) are much less impressive. 🙂

    • Connection requests sent: 1537
    • Connections accepted: 21.28% of connection requests
    • Surveys completed: 20 (1.301%)

    Those last two numbers will climb a bit over the next week or two, but you can see they’re much lower than the previous set (and unlikely to ever close the 10x gap in performance). The previous recruitment outreach was for a client project investigating developer sentiment around a specific platform. Again, the question you’re investigating matters. A lot!
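
    For what it’s worth, here’s that comparison as a snippet; the counts are the ones reported above, and the percentages just fall out of them.

    ```python
    # Survey-completion rates for the two LinkedIn recruitment campaigns.
    campaigns = {
        "platform sentiment (client project)": (155, 17),
        "self-employed dev careers (current)": (1537, 20),
    }
    for name, (requests_sent, surveys_completed) in campaigns.items():
        print(f"{name}: {surveys_completed / requests_sent:.2%}")
    # -> 10.97% vs. 1.30%: roughly a 10x gap running the same playbook,
    #    which points at the research question, not the method.
    ```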

    The LinkedIn connection message I’m using is not just your standard “Hi, let’s connect” message. Instead, it’s a message that explains the purpose of my research and asks folks to fill out my survey. So the connection message is not really about connecting on LinkedIn, it’s about recruiting for my survey.

    You’ve seen the message before. It’s almost identical to the message at the top of this and yesterday’s email:

    Hi @firstname. I am working to better understand how self-employed devs improve their career. Would you be willing to spare 3m for a survey? It will mean the world to me.

    -> https://www.getfeedback.com/r/fNWSDcfj

    I’m not selling anything; you have my NO SALES PITCH GUARANTEE.

    The @firstname field is a variable that’s personalized at runtime by LinkedProspect, the tool I’m using to automate my outreach. I like LinkedProspect over Dux-Soup because LinkedProspect runs in the cloud and doesn’t require me to babysit the connection automation process the way Dux-Soup does.

    I’m identifying candidates for my recruitment pool using a LinkedIn Sales Navigator search. I’ve put in time making sure the results of this search are as relevant as possible to the survey. This is another important variable in this process. If I define a pool of candidates that doesn’t find my research question relevant or interesting, it will affect my results.

    In fact, if I wasn’t able to continue this research project for some reason, I already have data that supports a low-confidence conclusion: the question of career development or investing in one’s own career is less interesting or relevant to self-employed software developers than a question about a technology platform is to developers with experience in that platform. Even if I couldn’t look at the results of the survey for some reason, I could still reasonably (again, with low confidence) draw this conclusion based on the response rate I’m seeing.

    As you also know, I’m recruiting for this survey from a second pool of candidates. What I haven’t mentioned yet is that I’m pointing this second pool of candidates to a fork of the survey. It’s identical: same questions, same delivery platform (GetFeedback). But I forked the survey so I could compare the two candidate pools and hopefully answer this question: is my email list different from self-employed devs who have LinkedIn profiles?

    In more colloquial terms: I suspect y’all are special. Will the survey data support this belief?

    You’ll remember that I’m using a convenience intercept sampling method. This is not a probabilistic sampling method, which means… probably not much in the context of this research. But a more rigorous research project would suffer from this less rigorous recruiting method.

    Let’s look at how my email list as a group is performing in terms of response to my survey. I had to think a bit about the question of which number to use as my “top of funnel” number. Is it the total number of people on my list who are sent these daily emails, or is it that number multiplied by my global average 27.36% open rate?

    Well, for the LinkedIn outreach I’m using the total number of people I reached out to, so for a fair comparison I should use the total number of people each of the last two emails got sent out to.

    • Email addresses exposed to my survey request: 1,906
    • Surveys completed: 23 (1.207%)

    Again, that last number will climb over time as I repeat my CTA to take the survey for the rest of this week. It’s surprisingly close to my LinkedIn recruitment numbers with one notable difference: It’s taken me about 2 weeks to get 20 responses from the LinkedIn candidate pool. It took me 2 days to get 23 responses from my email list.

    Another fundamental difference between these two recruitment methods is the LinkedIn method gives me one shot at getting a response, while my email list gives me multiple opportunities to get a response.

    On that webinar I mentioned earlier, Douglas Hubbard shared some info about the Student’s t method that I don’t really understand yet, but he boiled it down to this easier-to-understand takeaway:

    As your number of samples increases, it gets easier to reach a 90% confidence level. Beyond about 30 samples, you need to quadruple the sample size to cut error in half.
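
    That takeaway follows from the standard error of a mean shrinking like 1/√n: quadruple n and the error halves. A quick sketch:

    ```python
    import math

    # Standard error of a mean scales like sigma / sqrt(n), so multiplying
    # the sample size by 4 cuts the error in half.
    def standard_error(n, sigma=1.0):
        return sigma / math.sqrt(n)

    for n in (30, 120, 480):
        print(f"n = {n:3d}  ->  relative error ~ {standard_error(n):.3f}")
    # n = 30 -> 0.183, n = 120 -> 0.091, n = 480 -> 0.046
    ```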

    Remember that we’re talking about my de-biasing survey here, which is not really measuring anything. It’s using open-ended questions to explore the problem space and make sure my thinking about the question aligns with the way my sample population thinks about the question.

    All that to say that at this stage of my research, I’m less interested in confidence level in my findings and more interested in having enough data to do a good job of de-biasing myself. In other words, the de-biasing survey’s purpose is to make sure I ask the right question(s) in the second survey I’ll use in this project. The de-biasing survey is less of a measuring tool and more of a making-sure-I-don’t-screw-up-the-measurement-question tool. 🙂

    When I get to the second survey in this project, I’ll be more interested in error and confidence and sample size.

    I’ll end with this:

    [Image: screenshot of that one hostile LinkedIn reply]

    This is the only dick-ish response I’ve ever gotten to LinkedIn outreach, and I’ve reached out to thousands of people using the method described above.

    So I’m way over that 30-sample threshold, which gives me an extremely high confidence level when I say this: almost every human will either want to help with my research (at best) or ignore my request (at worst). It’s exceedingly rare to encounter hostile jerks, and such people are extreme outliers.

    I think I’ve got you up-to-the-minute with this research project! I haven’t looked at the survey responses yet, so I don’t think there’s anything more for me to say about this, unless y’all have questions. Please do hit REPLY and let me know.

    This email series will continue as I have more to share about this project. I’m on a plane to SFO tomorrow to participate in a money mindset workshop and then supervise the movers packing up our house (we’re moving to Taos!!), so I won’t have a ton of time for this research project until next week anyway. I’ll keep repeating my “take the survey” CTA to this list for the remainder of this week, and then turn off the de-biasing survey, work through the results, construct my measuring survey, and then update you.

    -P