Nonprofit Research Groups Disturbed to Learn That OpenAI Has Secretly Been Funding Their Work

If you’re running a frontier AI company, now’s not the time to rest on your laurels. The stakes could hardly be higher: whichever corporation manages to outmaneuver its rivals stands to capture not just enormous wealth, but significant political influence over what we’re told is one of the most consequential technologies in human history.

As some child safety advocates recently discovered, that kind of pressure is manifesting in corporate jockeying that is morally bankrupt, to put it lightly.

Organizers at several child safety nonprofits told the San Francisco Standard they were blindsided to learn that the Parents and Kids Safe AI Coalition, a mysterious if wholesome-sounding group, was not the up-and-coming grassroots organization it appeared to be. It was, in fact, a front group founded by lawyers working for OpenAI, the company behind ChatGPT.

The scheme was straightforward enough. The Safe AI Coalition reached out to activist organizations across the country, soliciting their endorsement for a set of child safety policy proposals. Coincidentally, those proposals were eerily similar to the ones found in child safety legislation in California that OpenAI itself had co-signed, which would have protected AI companies from liability associated with their products.

Outside organizers — whose endorsements gave the coalition the veneer of a popular front — said they’d been given no indication that the coalition was founded, funded, and directed by OpenAI. The reveal only came after the groups joined together to challenge the policy initiative they had signed on to support, which led at least two organizations to pull their support.

“It’s a very grimy feeling,” an anonymous organizer told the Standard. “To find out they’re trying to sneak around behind the scenes and do something like this — I don’t want to say they’re outright lying, but they’re sending emails that are pretty misleading.”

Josh Golin, executive director of the nonprofit FairPlay for Kids, declined to join the coalition after discovering OpenAI’s involvement. He told the Standard he’d like OpenAI to step aside so that “advocates and parents and public health professionals” can decide how to regulate AI, not the tech industry. “I don’t want OpenAI to write their own rules for how they interact with children,” Golin said.

There’s a simple explanation for OpenAI’s seemingly duplicitous actions: its regulatory demands aren’t so much about safety as about currying favor with the state. The AI company spent some $3 million on political lobbying in 2025, up from $1.76 million in 2024. Insiders have alleged that the company’s research teams, which previously shared work on all things AI, good or bad, have begun to act as an advocacy arm for the AI industry.

By using the Parents and Kids Safe AI Coalition as a front group to appeal to federal regulators, OpenAI ensures it can both influence the conversation at the highest levels and forestall heavier legislation that would no doubt come from the states. It’s a crowded field, with major tech giants like Microsoft, Google, Amazon, Meta, Anthropic, xAI, and IBM all jockeying for supremacy. Weaponizing child safety as a lobbying tool may be a bad look, but when you’re a multibillion-dollar tech company plotting an IPO, letting someone else pick up that weapon first could be fatal.

More on OpenAI: Panicked OpenAI Execs Cutting Projects as Walls Close In

The post Nonprofit Research Groups Disturbed to Learn That OpenAI Has Secretly Been Funding Their Work appeared first on Futurism.
