Last week I was reading a scholarly article on polling and the problems it creates for the democratic process.  In the article, the authors note many of the problems with polling, and there are many.  I worked for a major national polling firm in Canada for a couple of years whilst in undergrad.  There, I learned just how dodgy supposedly ‘scientific’ polling can be.

My issues have less to do with the sampling methodology (random digit dialling, in which computer-generated phone numbers are called) than with the wording of questions and the manner in which they are asked.  I should also note that the rise of cell phones complicates random sampling: something like 48% of American adults only have cell phones (I have not had a landline since 2002, a decade before I emigrated to the US), and in the US it is illegal to place random computer-generated calls to cell phones.
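For readers unfamiliar with the mechanics, random digit dialling works roughly like the sketch below: fix a frame of known area-code and exchange prefixes, then randomize the remaining digits so that unlisted numbers still have a chance of being sampled.  This is a minimal illustration in Python; the prefixes are hypothetical placeholders, not any firm’s actual sampling frame.

```python
import random

# Hypothetical illustration of random digit dialling (RDD):
# fix a set of known area-code/exchange prefixes, then generate
# the last four digits at random, so that unlisted numbers are
# just as reachable as listed ones.
KNOWN_PREFIXES = ["514-842", "514-845", "438-380"]  # placeholder prefixes

def generate_rdd_sample(n):
    """Return n randomly generated phone numbers within known prefixes."""
    sample = []
    for _ in range(n):
        prefix = random.choice(KNOWN_PREFIXES)
        suffix = random.randint(0, 9999)
        sample.append(f"{prefix}-{suffix:04d}")
    return sample

if __name__ == "__main__":
    for number in generate_rdd_sample(5):
        print(number)
```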

The authors of the study I read commented on the manner in which questions were worded, and the ways in which this could affect results.  For example, last year during the great debate about the repeal of Obamacare, it became very obvious that a not insignificant proportion of Americans did not realize that the Affordable Care Act, or ACA, was the legislative act that created what we call Obamacare.  So you had people demanding the repeal of Obamacare, thinking they would still have their ACA.  Obamacare was originally a pejorative term created by (mostly Republican) opponents of the ACA.  They figured that by tying the legislation to a president wildly unpopular amongst their constituency (if not the population as a whole), they could whip up public opposition to the ACA.  It worked.

But now consider a polling question concerning the popularity or unpopularity of Obamacare/ACA.  Does a pollster ask people about their thoughts on Obamacare or on the ACA?  Or does that pollster construct a question that includes the slash: Obamacare/ACA?  How, exactly, does the pollster tackle this issue?  Having worked on a team that attempted to create neutral-language questions for a variety of issues at the Canadian polling firm, I can attest that this is a difficult thing to do, whether the poll was asking consumers their thoughts on a brand of toothpaste or about the policies and behaviours of the government.
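One standard way to measure how much the choice of name matters is a split-ballot experiment: randomly assign each respondent one of the competing wordings and compare the two groups’ answers.  I should stress this is a generic sketch of the technique, not a description of my old firm’s practice; the wordings and sample size here are assumptions for illustration.

```python
import random
from collections import Counter

# Split-ballot sketch: each respondent is randomly shown ONE of the
# competing wordings, so any gap between the two groups' answers
# measures the wording effect itself, not a real shift in opinion.
WORDINGS = [
    "Do you support or oppose Obamacare?",
    "Do you support or oppose the Affordable Care Act?",
]

def assign_wordings(n_respondents):
    """Randomly assign each of n respondents to one wording."""
    return [random.choice(WORDINGS) for _ in range(n_respondents)]

assignments = assign_wordings(1000)
print(Counter(assignments))  # roughly 500 respondents per wording
```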

But this was only one part of the problem.  I started off with the polling firm working evenings on the phones, conducting surveys.  We were provided with scripts on our computer screens that we were to follow word-for-word.  We were also actively monitored by someone, to make sure we were following the script as we were meant to, and to make sure that the person we were interviewing was taking the poll seriously.  More than once, I was instructed by the monitor to abandon a survey.  But the monitor didn’t listen to all the calls.  There were something like 125 workstations in the polling room.  And those 125 individuals were not robots.  Each person had different inflections and even accents in their voices.  Words did not all sound the same coming out of the mouths of all 125 people.

When I had an opportunity to work with the monitor to listen in on calls, I was struck by how differently the scripts sounded.  One guy I worked with was from Serbia and had a pretty thick Serbian accent, so he emphasized some words over others; in most cases, I don’t think his emphasis made a difference.  But sometimes it could.  Another guy had a weird valley-girl accent; the result was the same as the Serbian’s.  And some people just liked to mess with the system.  It was easy to do.  They did it through the way they spoke certain words: spitting them out, using sarcasm, or making their voices brighter and happier in some spots than in others.

Ever since this work experience in the mid-90s, I have been deeply sceptical of polling data.  There are already statistical reasons for scepticism, most notably sampling error: most national polls carry a margin of error of around plus or minus 3 percentage points.  That doesn’t sound like a lot, but the difference between 47% and 53% is significant when it comes to matters of public policy.  Or support for candidates.  And more to the point, the media does not report the margin of error, or if it does, does so in a throwaway sentence, while the headline reads that 47% of people support/don’t support this or that.
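To put that plus-or-minus 3% in context, here is the standard margin-of-error calculation for a simple random sample, sketched in Python.  The sample size of 1,000 is an assumption, chosen because it is typical of national polls.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents reporting 50% support:
moe = margin_of_error(0.5, 1000)
print(f"Margin of error: +/- {moe:.1%}")  # prints roughly +/- 3.1%
```

And note that this is the best case: the formula assumes a truly random sample, which the cell-phone problem described above already undermines.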

But, ultimately, it is the wording of questions and the manner in which they are asked that make me deeply suspicious of polling data.  And as polling data becomes ever more obsessed over by politicians, the media, and other analysts, I can’t help but think that polling is doing more than most things to damage democracy, and not just in the United States, but in any democracy where polling is a national obsession.

Source: Matthew Barlow