What people really think of artificial intelligence

Kevin Collins, Survey 160

The technological capabilities of Large Language Models (LLMs) and other generative AI technologies have been evolving rapidly, but much of the public discussion of these tools has focused on what the technology can do rather than what users want. In this research, we wanted to know what the public actually thinks about these tools and how people are using them. To that end, we recently fielded an internally funded text-to-web survey (n=546) to assess interest in and perceptions of generative AI tools.

This survey found that the general public is less interested in, and more wary of, this technology than is commonly seen in the tech community. Usage is relatively low, and frequent or paid usage is even lower, while concerns about accuracy, and even about the probability that this technology will end human life, are fairly high. The public on average wants more regulation and greater protection of intellectual property, but does not want higher taxes levied on these companies and systems. And while the public on net opposes both export controls and government investment in AI, respondents also favored maintaining an American lead over China on the technology.

Key Takeaway #1: Interest in using AI has not yet taken off among the mass public.

We first asked about usage of commonly used platforms that have AI technology built into them, as these may be a point of entry for much of the public. Among users of Gmail, Google Search, and Bing, usage of the integrated AI tools is modest: 80% of self-reported Gmail users say they do not use the AI tools, and 57% of Google Search users say the same. Only 11% of our sample self-reports using Bing Search, and 27% of those users report using its integrated AI tools. For each of Gmail, Google Search, and Bing, among platform users who are aware of the AI tools, only a minority say the tools make the platform better.

Usage of ChatGPT is the highest (35%), followed by Gemini (19%) and Copilot (12%). But these numbers overstate engagement: most of this usage comes from sporadic users rather than people who say they use these tools weekly or more frequently.

Overall interest in paying for these tools is low. Just 3% of the public reports having a paid subscription to ChatGPT, or about 10% of all people who say they have used it.

We also asked about two hypothetical use cases. There is low interest in paying $20 a month for “personable, intelligent, and caring-feeling conversation about any topic on demand” (7% overall, though higher among men than women), but somewhat more interest in paying $20 a month for “high quality tailored instruction and feedback in any topic of your choice” (16%).

Key Takeaway #2: Among respondents who are interested in AI tools, the desired uses are diverse but not work-focused.

We asked people who reported using LLMs what they are using them for. Overall, more people are using LLMs for personal and recreational purposes than for work, though the reverse is true for those who self-identify as early adopters of technology.

We also asked all respondents, regardless of their experience with generative AI systems, what they would like these systems to do. The most common answer was either uncertainty or nothing at all (or for the technology to just go away). Among those who did provide a substantive answer, the most common themes were medical applications (either medical research, such as curing cancer, or diagnostic tasks) and assistance with personal tasks, including both things that can be completed online (such as researching a vacation or planning dinner) and things that cannot (such as cooking dinner or cleaning the house). Some respondents also identified researching and writing as desired uses, though again these were often personal. Others reported that they wanted the systems to be more accurate or honest, and less biased (though different respondents had different views about how the systems are currently biased).

Key Takeaway #3: Trust in LLM technology remains low.

This relatively low interest in AI tools may reflect low trust in the accuracy of the products. When asked “How much do you trust current generative AI technology to provide accurate results?”, only 13% said either “Trust entirely” or “Trust mostly”, compared with 35% who said “Distrust entirely” or “Distrust mostly”; the remainder were either split or unsure. Self-described early adopters of technology were more trusting, evenly split at 30% trusting to 31% distrusting.

When asked “In a few words, regardless of how much you use AI tools now, what if anything would you like them to be able to do in the future?”, the most common answer (about 54% of respondents) was nothing, don’t know, or a wish that AI would go away. Among those who did provide substantive answers, however, a relatively common theme was making AI tools more accurate and honest, or less biased (about 5%).


Methodology Statement

This is a survey of US adults fielded from February 5 through February 11, 2025. The survey was sponsored by Survey 160 for methodological research purposes. The surveys were fielded as probability-sampled text-to-web interviews. Respondents were sampled from, or matched to, the TargetSmart commercial file. Up to two contact attempts were made per respondent, except for those who refused on the first attempt. The median survey completion time was 10 minutes and 32 seconds. The response rate was 0.56%. To weight these data, we used a raking algorithm, weighting on age, gender, race, Census region, educational attainment, and 2024 turnout and vote choice. Age, gender, race, and region are measured with sample frame variables, using the frame distribution as weighting targets. Educational attainment, 2024 turnout, and vote choice are all measured through self-reports. Educational attainment weighting targets come from 2024 Current Population Survey public microdata, and turnout and vote choice weighting targets come from the Federal Election Commission. Accounting for the design effect of weighting, the margin of error is 9.84 percentage points. However, this margin of error may not incorporate other forms of non-sampling error, such as coverage error, measurement error, or non-response error.
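For readers unfamiliar with raking, the sketch below illustrates the general technique (iterative proportional fitting): weights are repeatedly adjusted so that the weighted share of each category on each dimension matches a population target. This is an illustration only, with fabricated dimensions and targets, not the actual weighting code or targets used for this survey.

```python
# Minimal sketch of raking (iterative proportional fitting) for survey
# weighting. Illustration only: the dimensions, labels, and target
# shares below are fabricated, not those used for this survey (which
# raked on age, gender, race, Census region, education, and 2024
# turnout and vote choice).
import numpy as np

def rake(categories, targets, max_iter=100, tol=1e-6):
    """categories: dict mapping dimension -> per-respondent labels.
    targets: dict mapping dimension -> {label: population share}."""
    n = len(next(iter(categories.values())))
    w = np.ones(n)  # start from uniform weights
    for _ in range(max_iter):
        max_shift = 0.0
        for dim, raw_labels in categories.items():
            labels = np.asarray(raw_labels)
            total = w.sum()  # fix the total before adjusting this dimension
            for label, share in targets[dim].items():
                mask = labels == label
                current = w[mask].sum()
                if current > 0:
                    factor = share * total / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:  # stop once every marginal matches its target
            break
    return w * n / w.sum()  # normalize to a mean weight of 1

# Fabricated example with two dimensions and five respondents.
cats = {"gender": ["F", "M", "F", "F", "M"],
        "region": ["South", "West", "South", "Northeast", "West"]}
tgts = {"gender": {"F": 0.51, "M": 0.49},
        "region": {"South": 0.38, "West": 0.24, "Northeast": 0.38}}
w = rake(cats, tgts)

# Unequal weights inflate the margin of error. Kish approximation of
# the design effect, then the worst-case (p = 0.5) margin of error:
deff = len(w) * (w ** 2).sum() / w.sum() ** 2
moe = 1.96 * np.sqrt(deff * 0.25 / len(w))
```

As a check on the reported margin of error: with n = 546, the unweighted worst-case margin would be about 4.2 points (1.96 × √(0.25/546)), so a 9.84-point margin implies a design effect of roughly (9.84/4.2)² ≈ 5.5.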
