In an effort to shed more light on how we work, The Times is running a series of short posts explaining some of our journalistic practices. Read more of this series here.
For the 2018 midterm elections, The Upshot partnered with Siena College to conduct polls in key districts and publish the results in real time (a first for any news organization).
Siena has worked with The Times on other polling projects, including a handful of races in the 2016 election season. For this project, Siena employed four other call centers in the United States and Canada to call more than 2.6 million people.
Back at The Times, a large team of journalists in graphics, digital design and interactive news worked together to present the results in a way that was clear and engaging.
Below, Amanda Cox, the editor of The Upshot, and Nate Cohn, an Upshot domestic correspondent, answer questions about the project. Their answers have been lightly edited and condensed for clarity.
How did The Upshot come up with this idea?
NATE COHN Amanda wanted to know whether it was possible to do polling live, in real time. At first I thought it was totally crazy, on a number of levels.
I remembered that when we were doing polling with Siena in 2016 they had given me access to this very simple interface that lets you see how many people have been called. It was actually a really educational experience for me. I know things like the margin of error as abstract concepts, but when you look at a poll that has 120 people in it and it says something totally different from what you expect, you freak out a little bit.
How did you decide to cap the completed responses at 500?
COHN There are diminishing returns to polling in general. If you had a curve of what the margin of error looks like, at 100 respondents it’s 10, at 500 respondents it’s like four and a half, at 1,000 respondents you’re at like three. So you get most of your gain in the first 500 respondents and then you have pretty limited returns after that. That’s true for everything.
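Cohn's numbers track the textbook formula for a 95 percent margin of error in a simple random sample, which shrinks with the square root of the sample size. A quick sketch of that arithmetic (illustrative only, not The Times's own code):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random
    sample of size n, at the worst case p = 0.5."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1000):
    print(n, round(margin_of_error(n), 1))
```

At 100 respondents that gives about 9.8 points, at 500 about 4.4, and at 1,000 about 3.1, matching the diminishing returns Cohn describes: going from 100 to 500 respondents cuts the error by more than half, while going from 500 to 1,000 barely moves it.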
How do you know if polling works? If a candidate wins?
COX I think it’s more complicated than that. People do sometimes use that metric and I think it’s a bad metric.
Does that margin of error mean what it says it does? When we get voting data back and we know who actually voted, does the electorate look like we thought it would? That’s an even more sophisticated check, and one that Nate uses.
You do know some real answers about who voted — not who they voted for but the fact that they did vote. So that’s a higher-order way of judging: Is it truthful or not?
What are the risks of publishing incomplete data as it comes together?
COX Results of small samples are noisy, but a poll that’s 10 people short of being complete isn’t really that different from a poll that’s complete. You just cross some arbitrary threshold that you set. All polls are incomplete. All polls until Election Day are just polls.
What are some challenging parts of this project?
COHN For a decade, people have been saying telephone polling is about to break. It could be tomorrow, it could be in 10 years, it could be never.
It is totally true that response rates have continued to decline.
I think that cellphone spam-calling is a major factor. I will confess that I have a zero percent response rate on unknown numbers, even though I am entirely dependent on people not following that practice.
Our samples are just not as good as they were two years ago; if we don’t do anything to control them, they are less representative. So we have had to take some steps to preserve the quality of our samples beyond what we had to do four years ago.
For instance, we make sure, at the end of our poll, that we have the right number of Democrats from an urban area and the right number of Republicans from an urban area. Before, we kind of just let it go and it landed in the right place.
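The adjustment Cohn describes is a form of weighting: respondents in underrepresented groups count for more, so the sample's composition matches known targets. A minimal sketch, using hypothetical group labels and target shares (not The Times's actual categories, data or method):

```python
from collections import Counter

# Hypothetical respondents tagged by a party-by-urbanicity cell.
respondents = (["urban_dem"] * 60 + ["urban_rep"] * 20 +
               ["rural_dem"] * 20 + ["rural_rep"] * 100)

# Assumed target shares for the electorate (illustrative numbers).
targets = {"urban_dem": 0.25, "urban_rep": 0.15,
           "rural_dem": 0.20, "rural_rep": 0.40}

counts = Counter(respondents)
n = len(respondents)

# Weight = target share / sample share for each cell, so the weighted
# sample matches the target composition.
weights = {g: targets[g] / (counts[g] / n) for g in targets}
```

Here rural Democrats, who make up 10 percent of the sample but 20 percent of the target, get a weight of 2.0, and the weighted shares sum back to the full sample.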
COX Some types of people are more likely to answer their phone — older people have higher response rates. So that is a challenge.
Figuring out who is going to vote is a big question for all pollsters; it’s not unique to us. Do you just trust people if they say they’re going to vote or do you look at what they’ve done in the past? That’s another huge way that polls can be wrong or not representative of reality.
COHN I think after 2016 everyone in the world was asking: Why and how were the polls wrong? And the answer is there’s no shortage of reasons why the polls could be wrong. And that’s okay, too. Obviously, no one likes it when the polls are wrong on a systematic basis, like they were in 2016, but that’s just the sort of thing that inevitably happens and will happen again. But we did feel like the public’s understanding and expectations of polling were off.
If you go through the live polling page, it’s basically step by step a look at every component of where error comes from that we can describe for people in a concrete way.
Our hope, at the end of it, is that people come away understanding what polls can and can’t do.
Melina Delkic is a senior staff editor. @MelinaDelkic