[PMC Weekly Insight] Coding, and altered states

Executing a simple research project of your own is SOOOOO EYE-OPENING [1].

It’s not the first thing this will teach you, but eventually you will learn that research is akin to photography as George Bernard Shaw once described it: we make a lot of photographs because, like salmon swimming upstream, so few of them make it. With research, there are just so many opportunities to distort things with seemingly small mistakes. I’ve been seeing all the opportunities-to-screw-up that show up when I’m abstracting detail, which is a necessary part of this process.

As I’ve been comparing the coded results of my two samples, I’ve seen this over and over again.

Please bear in mind that I need to re-code some of this data, so the picture I’m seeing here will change, but let me show you the coded data from my two samples as it exists now.

The LinkedIn sample

Full resolution version of this file: pmc-dropshare.s3-us-west-1.amazonaws.com/Photo-2019-07-10-06-27.JPG

The list sample (meaning people from this email list who responded to the same set of questions as the LinkedIn sample)

Full resolution version of this file: pmc-dropshare.s3-us-west-1.amazonaws.com/Photo-2019-07-10-06-26.JPG

Altered states

This morning, I printed out my counted and ordered list of codes from my two samples (the two images I shared above), and just sat there for a long time comparing them. Comparing them, and asking questions about what I was seeing there, and looking for patterns in the data.
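
If “counted and ordered list of codes” sounds abstract, here’s a rough sketch of that operation in Python. The data layout (a simple mapping of each question to the flat list of codes assigned across responses) and the example code lists are stand-ins for illustration, not my actual files.

  from collections import Counter

  # Stand-in layout: for each question, the flat list of codes assigned
  # across all responses in one sample (one entry per time a code was applied).
  linkedin_sample = {
      "q4": ["a-coding-learning", "a-coding-learning", "a-networking"],
  }
  list_sample = {
      "q4": ["a-online-courses", "a-reading-books", "a-online-courses"],
  }

  def counted_and_ordered(sample):
      # Tally each code per question, then sort by frequency, highest first.
      return {q: Counter(codes).most_common() for q, codes in sample.items()}

  for name, sample in [("LinkedIn", linkedin_sample), ("List", list_sample)]:
      print(f"--- {name} sample ---")
      for q, ranked in counted_and_ordered(sample).items():
          for code, count in ranked:
              print(f"  {q}  {code}: {count}")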

It’s fascinating to notice my emotions as I do this. Perhaps for some folks, the emotional layer of this kind of research isn’t consequential, but for me, it is. It’s really hard to get over this “you’re doing it wrong!” feeling when looking at the LinkedIn data because that group seems to invest in their career in ways I don’t get excited about. Then I swing back to a more objective state and start letting myself look past the emotions and trying to see what the data is actually saying.

I did the following: I looked at the underlying detail behind several of the codes at the top of the list for my LinkedIn sample. For example, you’ll notice that in the LinkedIn sample, a-coding-learning (which means “the action of learning about coding”) is at the top of the list of codes for question 4. You’ll also notice that a-online-courses is at the top of the list for question 4 when you look at my email list sample. I was curious about this.
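
In case the mechanics of “looking at the underlying detail behind a code” aren’t obvious: it just means keeping a link from each code back to the raw responses it was applied to, so you can walk backwards from the count to the actual words. A tiny sketch, with made-up responses, assuming the coded data is stored as (response, codes) pairs:

  # Made-up coded data: each raw response paired with the codes applied to it.
  coded_responses = [
      ("I took an online course on Python", ["a-online-courses"]),
      ("Mostly reading books and blog posts", ["a-reading-books"]),
      ("Spent my evenings learning to code", ["a-coding-learning"]),
  ]

  def responses_behind(code, coded):
      # Walk backwards from a code to the raw responses it was applied to.
      return [text for text, codes in coded if code in codes]

  print(responses_behind("a-online-courses", coded_responses))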

What I found when I re-examined the underlying responses was that I’d coded things differently between the two samples. On the LinkedIn sample, which was the first one I coded, I had abstracted things more than with the email list sample. Said slightly differently, my email list sample used a more granular set of codes to represent the underlying data. The LinkedIn sample used a less granular set of codes, which means I abstracted the detail in that sample more. Same coder, different approach to coding.

Once I looked back at the underlying detail, this became clear. On the email list sample, when someone referenced reading a book or participating in an online course, I coded those as a-reading-books and a-online-course. But when someone made the same sorts of references in the LinkedIn sample, I coded both the books and course responses as a-coding-learning. Again, I abstracted responses more with the LinkedIn sample.
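
One way to see the two samples at the same level of abstraction without immediately re-reading everything is an explicit roll-up that maps the granular codes to the more abstract one. A sketch of that idea, with a made-up mapping; note that it only works in the lossy direction (granular to abstract), which is exactly why my real fix has to be re-coding rather than a lookup table.

  # Made-up roll-up: granular codes -> the more abstract code used in the
  # LinkedIn sample.
  ROLLUP = {
      "a-reading-books": "a-coding-learning",
      "a-online-courses": "a-coding-learning",
  }

  def normalize(codes, rollup=ROLLUP):
      # Replace each granular code with its abstract parent (if it has one)
      # so both samples get counted at the same level of abstraction.
      return [rollup.get(code, code) for code in codes]

  print(normalize(["a-online-courses", "a-reading-books", "a-networking"]))
  # -> ['a-coding-learning', 'a-coding-learning', 'a-networking']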

Why this difference in coding? It wasn’t intentional. It might have come out of some biased view based on my initial (mostly emotional) response to seeing the data for the first time a few weeks back. But I think it’s more likely an accidental outcome of coding the two sample groups at different times when I was in a different headspace. I really do think it’s just that innocent and simple [2].

If I hadn’t walked backwards from the codes to the underlying data, I would have started trusting my codes. I would have trusted them too much, and started to see them as an undistorted miniature of the original.

Those of you who are as old as I am or older, or those of you who are younger but have given yourselves the gift of caring about history, will remember microfiche machines.

Coding responses to a survey does not create a microfiche image of the underlying responses. Instead, it creates an abstraction of the underlying responses.

Next steps for me in this project: clean up my coding, then figure out where to go from there based on what I see in the cleaner data that results from better coding.

And also, I’ll never be able to look at a news headline that says “New study shows $THING” without remembering the ability of abstraction to distort.

-P


Here’s what’s been happening on my Daily Insights list:

  • [PMC] You’ll want to use the 3/8″ spanner for that… - (Readin’ time: 3m 10s) I have a confession: I don’t have any idea what software my CPA uses to file my taxes. Shocking, I know. What would it say about the situation if I did know what kind of software he uses? It would mean that I’ve accidentally or intentionally learned about the software CPAs…
  • [PMC] A restaurant for rule-breakers - (Readin’ time: 2m 37s) At some point in history, someone broke the rules and ate a meal in their car. At some later point, Sonic set up a restaurant specifically designed for just these kinds of rule-breakers. Think about it: cars haven’t been around nearly as long as food has. This means that humans have…
  • [PMC Weekend Edition] “5,000 generalists are not capable of running a country of over 1 billion people” - (Readin’ time: 52 seconds) This is interesting: http://marginalrevolution.com/marginalrevolution/2019/04/the-indian-school-of-public-policy.html Here’s an excerpt: India is changing very rapidly and launching new programs and policies at breakneck pace–some reasonably well thought out, others not so well thought out. Historically, India has relied on a small cadre of IAS super-professionals–the basic structure goes back to Colonial times when a…

Notes

  1. If you want to read up on this experiment, check out what might be one of my longest — and potentially most boring — series of emails:
    1. philipmorganconsulting.com/pmc-survey-marketing/
    2. philipmorganconsulting.com/pmc-the-de-biasing-survey/
    3. philipmorganconsulting.com/pmc-survey-marketing-recruitment/
    4. philipmorganconsulting.com/pmc-survey-marketing-initial-data-from-the-de-biasing-survey/
    5. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-qualitative-analysis-of-the-de-biasing-survey/
    6. philipmorganconsulting.com/pmc-weekly-insight-survey-marketing-coding-and-counting/
    7. philipmorganconsulting.com/pmc-weekly-insight-its-a-grind-grind/
    8. philipmorganconsulting.com/pmc-weekly-insight-slogging-away-the-morning/
  2. I was about to cite the “hungry judges” research as an example of how time of day and other factors can affect decision making, but some quick Googling uncovered a raft of issues with this research and how it’s been interpreted, so I’ll hold off on using that research as evidence.
    But really, you don’t need research evidence to know that all sorts of things can affect our behavior, and with something like coding survey responses, it’s easy to see how these external or internal mental/emotional factors can alter how we do the coding.