Survey Mode and Polling Accuracy 

One of the most common questions we get from clients is “How accurate are text message surveys?” It’s a question we have focused on since the founding of Survey 160. In 2018, we fielded surveys that were about as accurate as the New York Times’ congressional-district polling. Those results were produced with around 70% of the sample size and ten times the completed interviews per interviewer hour, implying a far lower cost per completed interview for the text message survey.

But most of the work we do – and most of the use of text messages in survey research – is part of a mixed-mode approach that employs multiple data collection methods. This complicates the task of parceling out the accuracy that the text mode adds to the bundle of tools employed within a single poll, but we can look at the accuracy of these bundles. Fortunately, the team at FiveThirtyEight has captured survey modes in a high degree of detail. We recently dived into those data (see the methodology section at the bottom), breaking statewide and national polls out separately.

We see that in state-level polls, mixed-mode surveys that include text are the second most common group of pre-election polls (and that most mixed survey methods do include a text message component). But in national polls, online methods – especially opt-in panels but also other types of online surveys – predominate. This disparity reflects the difficulty that online surveys can have in reaching sufficient numbers of respondents in smaller geographies.  

Turning to the accuracy of these methods, we see different patterns across state and national results in terms of mean absolute error on the Democrat-minus-Republican margin. The national polls, broken out by mode, are noisier, with far fewer observations except for those conducted via online panel providers. The graphs below show individual poll error estimates (in grey, jittered for greater visibility) as well as the average of those estimates (in black).
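The metric described above can be sketched in a few lines: take the absolute difference between each poll's Democrat-minus-Republican margin and the actual margin, then average those errors within each mode. The poll numbers below are hypothetical, purely to illustrate the calculation.

```python
# Mean absolute error on the D-R margin, grouped by survey mode.
# All figures are made-up illustrations, not real poll results.
polls = [
    # (mode, poll D-R margin in points, actual D-R margin in points)
    ("live phone",   +4.0, +1.2),
    ("live phone",   +2.5, +1.2),
    ("text + mixed", +2.0, +1.2),
    ("text + mixed", +0.8, +1.2),
]

errors = {}
for mode, poll_margin, actual_margin in polls:
    # Absolute error of this poll's margin against the certified result.
    errors.setdefault(mode, []).append(abs(poll_margin - actual_margin))

# Average the per-poll errors within each mode.
mean_abs_error = {mode: sum(errs) / len(errs) for mode, errs in errors.items()}
print(mean_abs_error)
```

Note that this measures error on the margin, not on each candidate's vote share, so a poll that overstates both candidates equally scores as perfectly accurate on this metric.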

In state polling, live phone surveys (once considered the gold standard for survey research) are actually the least accurate on average, with a mean error of 3.6 percentage points on the D-R margin. The best-performing group is “Online non-panel”, which is almost entirely composed of AtlasIntel’s river-sampled polls. But probability panels (typically, respondents recruited by mail and interviewed online as part of an ongoing panel), mixed-mode surveys with and without text messages, and even text-only surveys each perform better on average than live phone surveys and opt-in online panels.

These findings on accuracy are consistent with our own internally conducted experimental work. In 2023, we ran a mode experiment comparing live phone and text interviews, with and without an IVR supplement, in the lead-up to that year’s Kentucky gubernatorial election. The results, presented at AAPOR last year, found text to be far more accurate than live phone.

We also looked back to the 2020 polls for an additional point of comparison. Using the analogous set of polls, we see that more polls were then still conducted exclusively by live phone interview, and a smaller proportion used text message methods, either alone or in combination with other modes.

But as in 2024, the 2020 polls conducted exclusively by live phone interview were the least accurate in terms of absolute error on the Democrat-Republican margin. In 2020, in both state and national polls, text-only polls and mixed-mode surveys that include text messages outperformed both live phone and opt-in panel surveys on this measure.
