Survey Mode and Polling Accuracy 

Kevin Collins, Survey 160

One of the common questions we get from clients is “How accurate are surveys using text messages?” It’s a question we have been focused on since the founding of Survey 160. In 2018, we fielded surveys that were about as accurate as the New York Times’ congressional-district polling. Those results were produced with around 70% of the sample size of those polls and ten times as many completed interviews per interviewer hour, implying a far lower cost per completed interview for the text message surveys.

Most of the work we do – and most of the ways that text messages are used in survey research – is part of a mixed-mode approach employing multiple data collection methods. This complicates the task of parceling out the accuracy that the text mode adds to the bundle of tools employed within a single poll. But we can look at the accuracy of these bundles. Fortunately, the team at FiveThirtyEight has captured survey modes with a high degree of detail. We recently dug into those data (see the methodological notes at the bottom), analyzing statewide and national polls separately.

We see that in state-level polls, mixed-mode surveys that include text are the second most common group of pre-election polls (and most mixed-mode polls do include a text message component). In national polls, by contrast, online methods – especially opt-in panels, but also other types of online surveys – predominate. This disparity reflects the difficulty that online surveys can have in reaching sufficient numbers of respondents in smaller geographies.

Turning to the accuracy of these methods, we see different patterns across state and national results in terms of mean absolute error on the Democrat-minus-Republican margin. The national polls, broken out by mode, are noisier, with many fewer observations except for those conducted via online panel providers. The graphs below show individual poll error estimates (in grey, jittered for greater visibility) as well as the average of those estimates (in black).
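For readers who want to reproduce this, here is a minimal sketch of the error metric in Python with pandas. The column names (race, dem_pct, rep_pct, mode) are illustrative placeholders, not the actual FiveThirtyEight field names:

```python
import pandas as pd

def margin_errors(polls: pd.DataFrame, results: pd.DataFrame) -> pd.DataFrame:
    """Attach the absolute error on the D-R margin to each poll.

    Column names (race, dem_pct, rep_pct, mode) are placeholders,
    not the actual FiveThirtyEight field names.
    """
    merged = polls.merge(results, on="race", suffixes=("_poll", "_actual"))
    poll_margin = merged["dem_pct_poll"] - merged["rep_pct_poll"]
    actual_margin = merged["dem_pct_actual"] - merged["rep_pct_actual"]
    merged["abs_error"] = (poll_margin - actual_margin).abs()
    return merged

# Mean absolute error by fielding mode (the black averages in the graphs):
# margin_errors(polls, results).groupby("mode")["abs_error"].mean()
```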

In state polling, live phones (once considered the gold standard for survey research) are actually the least accurate on average, with an average error of 3.6 pp on the D-R margin. The best-performing group is “Online non-panel”, which is almost entirely composed of AtlasIntel’s river-sampled polls. But probability panels (typically, respondents recruited by mail and interviewed online as part of an ongoing panel), mixed-mode surveys with and without text messages, and even text-only surveys each perform better on average than live phone surveys and opt-in online panels.

These findings on accuracy are consistent with our own internally conducted experimental work. In 2023 we ran a mode experiment comparing live phone and text, with and without an IVR supplement, in the lead-up to that year’s Kentucky gubernatorial election. The results, which we presented at AAPOR last year, found text to be far more accurate than live phone.

We also looked back to the 2020 polls to provide an additional point of comparison. Using the analogous set of polls, we see that more polls were then still conducted exclusively by live phone interview, and a smaller proportion used text message methods, either alone or in combination with other modes.

But as in 2024, the 2020 polls conducted exclusively by live phone interview were the least accurate in terms of absolute error on the Democrat-Republican margin. In both state and national polls that year, text-only polls and mixed-mode surveys that included text messages outperformed both live phone and opt-in panel surveys on this measure.

While there are different ways to cut these data, other researchers have come to similar conclusions. For example, Nate Silver reviewed polls from 2016 to 2020 and found that text-only polls were the most accurate method. More recently, G. Elliott Morris, writing at FiveThirtyEight, found that use of text messages was one mode correlated with greater accuracy in 2024. Of course, such correlational research does not necessarily imply causation – researchers using text messages (as well as other methods, like online non-panel surveys) often use more complex weighting schemes – but together these studies show that texting can produce high-quality results without relying on ever-more-costly live phone interviews.

Methodological Notes

The polls included in this meta-analysis are those in the FiveThirtyEight database that were conducted from October 1 through the day before Election Day of their respective election year. We further limited these to likely voter polls for consistency (and because some firms offer multiple estimates for alternative populations or likely voter models), and we removed polls for which FiveThirtyEight could not provide methodological information on fielding mode. In 2024, these criteria produced 518 state polls and 160 national polls; in 2020, they produced 724 state polls and 243 national polls.
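A sketch of that filtering step, again with simplified placeholder column names (end_date, population, methodology) rather than the actual FiveThirtyEight fields:

```python
import pandas as pd

def select_polls(df: pd.DataFrame, year: int, election_day: str) -> pd.DataFrame:
    """Apply the inclusion criteria described above to a poll table.

    Column names (end_date, population, methodology) are simplified
    placeholders for whatever the database actually uses.
    """
    df = df.copy()
    df["end_date"] = pd.to_datetime(df["end_date"])
    in_window = (df["end_date"] >= pd.Timestamp(f"{year}-10-01")) & (
        df["end_date"] < pd.Timestamp(election_day)  # through the day before
    )
    likely_voters = df["population"] == "lv"   # likely voter estimates only
    known_mode = df["methodology"].notna()     # drop polls with no mode info
    return df[in_window & likely_voters & known_mode]
```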

We recoded the listed methodology into the categories shown above. In 2020, the data collected by FiveThirtyEight and provided to readers were much less extensive and specific, especially about the variety of online survey methods, so we manually coded AtlasIntel polls as “Online Opt-in” and Ipsos polls (Ipsos had purchased the KnowledgePanel from GfK in 2019) as “Probability Panel” to increase comparability across election cycles.
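In code, the recoding amounts to a lookup table plus the two manual overrides. The raw labels and category names below are illustrative guesses, except for the AtlasIntel and Ipsos overrides described above:

```python
# Hypothetical mapping from raw methodology strings to the categories
# used in the graphs; the keys are examples, not an exhaustive list.
MODE_MAP = {
    "Live Phone": "Live phone",
    "Text": "Text only",
    "Live Phone/Text": "Mixed with text",
    "Online Panel": "Online opt-in panel",
    "Probability Panel": "Probability panel",
}

def recode_mode(raw_method: str, pollster: str, year: int) -> str:
    # Manual overrides for 2020, where FiveThirtyEight's mode detail
    # was sparser (see the note above).
    if year == 2020 and pollster == "AtlasIntel":
        return "Online Opt-in"
    if year == 2020 and pollster == "Ipsos":
        return "Probability Panel"
    return MODE_MAP.get(raw_method, "Other")
```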
