
Foolish Pollsters Are Now Just Asking AI What Voters Would Say in Response to Questions and Publishing It at Face Value

Last month, Axios was forced to issue a bizarre correction for a blog post about a growing maternal health crisis in the United States.

The story quoted new poll findings by a company called Aaru, representing them as research based on the feedback of American adults. But according to an editor’s note, the piece had to be “updated to note that Aaru is an AI simulation research firm.”

In other words, Axios had failed to disclose that it was citing alleged "polling data" that wasn't drawn from human respondents at all. Instead, it was dreamed up by a large language model, the latest sign of every imaginable industry trying to leverage AI, even when doing so makes absolutely no sense.

As Digital Theory Lab director Leif Weatherby and University of California, Berkeley, computer science professor Benjamin Recht explain in a guest essay for the New York Times, the practice that tricked Axios is called "silicon sampling," and it's a recipe for disaster.

“The idea behind silicon sampling is simple and tantalizing,” they write. “Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use AI agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.”
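In concrete terms, the practice they describe amounts to prompting a model to role-play a respondent with a given demographic profile, then tallying the generated answers as if they were poll results. The sketch below is purely illustrative: the persona fields, the prompt wording, and the stubbed `ask_llm` function are assumptions for demonstration, not Aaru's actual pipeline, and `ask_llm` returns a random answer where a real system would call a chat-completion API.

```python
import random
from collections import Counter

def persona_prompt(profile: dict, question: str) -> str:
    """Build a prompt asking the model to answer as a synthetic respondent."""
    traits = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"You are a survey respondent with this profile: {traits}. "
        f"Answer with exactly one of: agree, disagree, unsure.\n"
        f"Question: {question}"
    )

def ask_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call; a random choice here
    # just lets the sketch run end to end.
    return random.choice(["agree", "disagree", "unsure"])

def silicon_sample(profiles: list[dict], question: str) -> Counter:
    """'Poll' a list of synthetic personas and tally their generated answers."""
    answers = [ask_llm(persona_prompt(p, question)) for p in profiles]
    return Counter(answers)

profiles = [
    {"age": 34, "region": "Midwest", "party": "independent"},
    {"age": 61, "region": "South", "party": "Republican"},
    {"age": 27, "region": "Northeast", "party": "Democrat"},
]
tally = silicon_sample(profiles, "Is maternal health care adequate in the US?")
print(tally)
```

The cheapness is obvious from the sketch: a loop over fabricated profiles replaces the expensive work of recruiting and interviewing real people, which is exactly the appeal, and exactly the problem, that Weatherby and Recht identify.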

If that sounds like vast overreach that could undermine the value of opinion polling itself, you may be correct. The data only has value “insofar as it summarizes the beliefs and opinions of actual humans,” as Weatherby and Recht argue. “Using simulations of human opinions in place of the real thing will only worsen our broken information ecosystem, and sow distrust.”

Pollsters have long relied on statistical models to make up for a relatively small pool of responses while addressing possible variables that could skew the data. After all, convincing people to answer questions on the phone or online isn’t exactly easy.

But making up responses wholesale using AI is obviously a terrible alternative: it can easily introduce biases and, as Weatherby and Recht warn, "influence public opinion itself, rather than merely to report what the public thinks."

Silicon sampling supercharges those risks by introducing the biases of the AI models themselves. In a 2025 paper, researchers from Northeastern University found that silicon samples are "generally not reliable substitutes for human respondents, especially in policy settings."

“The models struggle to capture nuanced opinions and often stereotype groups due to training data bias and internal safety filters,” the paper reads. “Therefore, the most prudent approach is a hybrid pipeline that uses AI to improve research design while maintaining human samples as the gold standard for data.”

A separate paper by University of Bern psychology postdoc Jamie Cummins, which has yet to be peer reviewed, found that generating “silicon samples” involves making “many analytic choices” that could have a significant “impact on sample quality.”

Even a “small number of decisions can dramatically change the correspondence between silicon samples and human data,” Cummins found.

Despite these widespread concerns, Aaru and other companies like it are raising hundreds of millions of dollars in funding, according to Weatherby and Recht, and striking partnerships with the likes of Stanford University and public opinion polling heavyweight Gallup.

It’s an alarming new trend, highlighting how AI tools continue to erode public trust by presenting often hallucinated fiction as fact. It’s especially concerning given its potential to sway public opinion with polls based on AI slop, further entrenching the values of AI models that have long been found to suffer from inherent biases.

“Pure fictions are on the brink of being treated as scientific and political knowledge,” Weatherby and Recht concluded in their essay. “If we do not pull back, our understanding of society might become artificial, too.”

More on AI slop: Wall Street Journal Editor-in-Chief Instructs Staff to Welcome AI Sloplords

The post Foolish Pollsters Are Now Just Asking AI What Voters Would Say in Response to Questions and Publishing It at Face Value appeared first on Futurism.
