The Public’s Concerns about AI (and “Probability of Doom”)

In our last post, we reported the results of a survey showing that the public is relatively unenthusiastic about current large language model (LLM) artificial intelligence (AI) technology, including skepticism about the accuracy of the answers LLMs provide. We also asked respondents about their concerns regarding this technology and which government policy responses they would favor.

Key Takeaway #1: The public is concerned about AI and has a high “p(Doom)”

One common way that folks in the AI industry talk about what could go wrong is the “Probability of Doom”, or “p(Doom)” for short. There is no agreed-upon question wording for measuring this belief in surveys. In this survey, respondents were asked: “What is the probability that the development of AI technology will someday lead to the extinction of all human life, on a scale of 0 to 100, where 0 means no possible chance, and 100 means absolute certainty?”

The mean response is 35.6 (median 20), and 31% of respondents report a p(Doom) greater than 50. The gap between the mean and the median reflects a right-skewed distribution: a minority of very high answers pulls the average up. If anything, p(Doom) is higher among respondents who reported having used an LLM, though the difference is not statistically significant at this sample size.
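To see how a skewed distribution produces this pattern, here is a minimal sketch with made-up responses (illustrative only, not the actual survey data): a cluster of low answers plus a tail of very high ones yields a mean well above the median, much like the figures reported above.

import statistics

# Hypothetical p(Doom) responses on a 0-100 scale -- illustrative only,
# NOT the actual survey data. A cluster of low answers plus a tail of
# very high answers pulls the mean well above the median.
responses = [0, 5, 10, 10, 15, 20, 20, 25, 60, 80, 90, 100]

mean = statistics.mean(responses)      # pulled up by the high tail
median = statistics.median(responses)  # resistant to the tail
share_over_50 = sum(r > 50 for r in responses) / len(responses)

print(f"mean={mean:.1f}, median={median}, share>50={share_over_50:.0%}")
# -> mean=36.2, median=20.0, share>50=33%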

This average is somewhat higher than in past surveys of AI safety researchers, which found a mean of 30%. It should be noted that the public does not generally excel at such probabilistic assessments, and systematically over-estimates both the probabilities of rare events and the size of small proportions in the population.

Key Takeaway #2: The public generally supports more regulation, but has mixed views on how to implement AI policy

A majority of the public, including a plurality of Republicans, favors more regulation of AI. Specifically, when offered a choice between two statements, respondents were more likely to agree with “Development of AI systems should be more tightly regulated by the government to keep the public safe from harms of a new and not yet fully understood technology” than with “Government regulation would only interfere with the development of valuable AI technology and should therefore be limited” (53% to 15%, with 32% unsure).

Half of the public also believes it is important to maintain an American national advantage in this technology, agreeing with “It is important for American national security to promote development of AI technology in the US faster than it is developed by China” over “American national security is not threatened if China develops AI technology faster than it is developed within the US” (50% to 17%, with 33% undecided). Here, however, the partisan divide is clearer, with Democrats less inclined to support the view that AI should be developed in the US faster than in China for national security reasons.

There is also a strong majority in favor of intellectual property protections for writers and artists whose work is used to train AI systems: 58% of respondents agreed with the statement “Writers and other artists should be able to prevent AI systems from being trained on their work unless they are compensated”, compared to only 12% agreeing with “AI systems should be permitted to be trained on any publicly accessible writing or artistic work without being required to pay copyright fees to the original creators”, with 30% undecided.

When asked which of a menu of possible government policies on AI they would support, majorities opposed investment in AI infrastructure, export restrictions, and higher taxes on AI company profits or computational usage. Majorities favored liability frameworks for civil lawsuits, disclosure requirements for AI-generated content in media and advertising, limits on government use of AI (such as in criminal sentencing decisions), and the creation of an AI oversight agency.

Methodology Statement

This is a survey of US adults fielded from February 5 through February 11, 2025. The survey was sponsored by Survey 160 for methodological research purposes and was fielded via probability-sampled text-to-web interviews. Respondents were sampled from, or matched to, the TargetSmart commercial file. Up to two contact attempts were made per respondent, except for those who refused on the first attempt. The median survey completion time was 10 minutes and 32 seconds, and the response rate was 0.56%. To weight these data, we used a raking algorithm, weighting on age, gender, race, Census region, educational attainment, and 2024 turnout and vote choice. Age, gender, race, and region were measured using sample frame variables, with the frame distributions serving as weighting targets. Educational attainment, 2024 turnout, and vote choice were all measured through self-reports. Educational attainment weighting targets come from 2024 Current Population Survey public use microdata, and turnout and vote choice weighting targets come from the Federal Election Commission. Accounting for the design effect of weighting, the margin of error is 9.84 percentage points. However, this margin of error may not incorporate other forms of non-sampling error, such as coverage error, measurement error, or non-response error.
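For readers who want to sanity-check that figure: assuming the conventional worst-case formula (95% confidence, p = 0.5), a 9.84-point margin of error implies an effective sample size of roughly 99 respondents once the weighting design effect is absorbed. A minimal sketch of that back-of-the-envelope calculation (the formula is standard; the exact inputs the authors used are not stated, so this is only an approximation):

# Design-effect-adjusted margin of error:
#   moe = z * sqrt(deff * p * (1 - p) / n)
# Solving for the effective sample size n_eff = n / deff:

Z_95 = 1.96    # critical value for a 95% confidence interval
moe = 0.0984   # reported margin of error (9.84 percentage points)
p = 0.5        # worst-case proportion, maximizing the variance

n_eff = (Z_95 / moe) ** 2 * p * (1 - p)
print(f"implied effective sample size: {n_eff:.0f}")  # ~99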
