By the time the public harassment started, a woman told Futurism, she was already living in a nightmare.
For months, her then-fiancé and partner of several years had been fixating on her and their relationship in conversations with OpenAI’s ChatGPT. In mid-2024, she explained, they’d hit a rough patch as a couple; in response, he turned to ChatGPT, which he’d previously used for general business-related tasks, for “therapy.”
Before she knew it, she recalled, he was spending hours each day talking with the bot, funneling everything she said or did into the model and expounding pseudo-psychiatric theories about her mental health and behavior. He started bombarding the woman with screenshots of his ChatGPT interactions and copy-pasted AI-generated text, in which the chatbot can be seen armchair-diagnosing her with personality disorders and insisting that she was concealing her real feelings and behavior through coded language. The bot often laced its so-called analyses with flowery spiritual jargon, accusing the woman of engaging in manipulative “rituals.”
Trying to communicate with her fiancé was like walking on “ChatGPT eggshells,” the woman recalled. No matter what she tried, ChatGPT would “twist it.”
“He would send [screenshots] to me from ChatGPT, and be like, ‘Why does it say this? Why would it say this about you, if this is not true?’” she recounted. “And it was just awful, awful things.”
To the woman’s knowledge, her former fiancé — who is in his 40s — had no history of delusion, mania, or psychosis, and had never been abusive or aggressive toward her.
But as his ChatGPT obsession deepened, he grew angry, erratic, and paranoid, losing sleep and experiencing drastic mood swings. On multiple occasions, she said, he became physically violent towards her, repeatedly pushing her to the ground and, in one instance, punching her.
After nearly a year of escalating behavior alongside intensive ChatGPT use, the fiancé, by then distinctly unstable, moved out to live with a parent in another state. Their engagement was over.
“I bought my wedding dress,” said the woman. “He’s not even the same person. I don’t even know who he is anymore. He was my best friend.”
Then, suddenly, the posts started.
Shortly after moving out, the former fiancé began to publish multiple videos and images a day on social media accusing the woman of an array of alleged abuses — the same bizarre ideas he’d fixated on so extensively with ChatGPT.
In some videos, he stares into the camera, reading from seemingly AI-generated scripts; others feature ChatGPT-generated text overlaid on spiritual or sci-fi-esque graphics. In multiple posts, he describes stabbing the woman. In another, he discusses surveilling her. (The posts, which we’ve reviewed, are intensely disturbing; we’re not quoting directly from them or the man’s ChatGPT transcripts due to concern for the woman’s privacy and safety.)
The ex-fiancé also published revenge porn of the woman on social media, shared her full name and other personal information, and doxxed the names and ages of her teenage children from a previous marriage. He created a new TikTok dedicated to harassing content — complete with its own hashtag — and followed the woman’s family, friends, and neighbors, as well as other teens from her kids’ high school.
“I’ve lived in this small town my entire life,” said the woman. “I couldn’t leave my house for months… people were messaging me all over my social media, like, ‘Are you safe? Are your kids safe? What is happening right now?’”
Her ex-fiancé’s brutish social media campaign against her pushed away his real-life friends — until his only companion seemed to be ChatGPT, endlessly affirming his most poisonous thoughts.
Over the past year, Futurism has reported extensively on the bizarre public health issue that psychiatrists are calling “AI psychosis,” in which AI users get pulled into all-consuming — and often deeply destructive — delusional spirals by ChatGPT and other general-use chatbots.
Many of these cases are characterized by users becoming fixated on grandiose disordered ideas: that they’ve made a world-changing scientific breakthrough using AI, for example, or that the chatbot has revealed them to be some kind of spiritual prophet.
Now, another troubling pattern is emerging.
We’ve identified at least ten cases in which chatbots, primarily ChatGPT, fed a user’s fixation on another real person — fueling the false idea that the two shared a special or even “divine” bond, roping the user into conspiratorial delusions, or insisting to a would-be stalker that they’d been gravely wronged by their target. In some cases, our reporting found, ChatGPT continued to stoke users’ obsessions as they descended into unwanted harassment, abusive stalking behavior, or domestic abuse, traumatizing victims and profoundly altering lives.
Reached with detailed questions about this story, OpenAI didn’t respond.
***
Stalking is a common experience. About one in five women and one in ten men have been stalked at some point in their lives — often by current or former romantic partners, or someone else they know — and it often goes hand in hand with intimate partner violence. Today, the dangerous phenomenon is colliding with AI in grim new ways.
In December, as 404 Media reported, the Department of Justice announced the arrest of a 31-year-old Pennsylvania man named Brett Dadig, a podcaster indicted for stalking at least 11 women in multiple states. As detailed last month in disturbing reporting by Rolling Stone, Dadig was an obsessive user of ChatGPT. Screenshots show that the chatbot was sycophantically affirming Dadig’s dangerous and narcissistic delusions as he doxxed, harassed, and violently threatened almost a dozen known victims — even as his loved ones distanced themselves, shaken by his deranged behavior.
As has been extensively documented, perpetrators of harassment and stalking like Dadig have quickly adopted easy-to-use generative AI tools such as text, image, and voice generators, which they’ve used to create nonconsensual sexual deepfakes and to fabricate interpersonal interactions. Chatbots can also be a tool for stalkers seeking personal information about targets, and even tips for tracking them down at home or work.
According to Dr. Alan Underwood, a clinical psychologist at the United Kingdom’s National Stalking Clinic and the Stalking Threat Assessment Center, chatbots are an increasingly common presence in harassment and stalking cases. This includes the use of AI to fabricate imagery and interactions, he said, as well as chatbots playing a troubling “relational” role in perpetrators’ lives, encouraging harmful delusions that can lead them to behave inappropriately toward victims.
Chatbots can provide an “outlet which has essentially very little risk of rejection or challenge,” said Underwood, noting that the lack of social friction characteristic of sycophantic chatbots can allow dangerous beliefs to flourish and escalate. “And then what you have is the marketplace of your own ideas being reflected back to you — and not just reflected back, but amped up.”
“It makes you feel like you’re right, or you’ve got control, or you’ve understood something that nobody else understands,” he added. “It makes you feel special — that pulls you in, and that’s really seductive.”
Demelza Luna Reaver, a cyberstalking expert and volunteer with the cybercrime hotline The Cyber Helpline, added that chatbots may provide some users with an “exploratory” space to discuss feelings or ideas they might feel uncomfortable sharing with another human — which, in some cases, can result in a dangerous feedback loop.
“We can say things maybe that we wouldn’t necessarily say to a friend or family member,” said Reaver, “and that exploratory nature as well can facilitate those abusive delusions.”
***
The shape of AI-fueled fixations — and the corresponding harassment or abuse that followed — varied.
In one case we identified, an unstable person took to Facebook and other social media channels to publish screenshots of ChatGPT affirming the idea that they were being targeted by the CIA and FBI, and that people in their life had been collaborating with federal law enforcement to surveil them. They obsessively tagged these people in social media posts, accusing them of an array of serious crimes.
In other cases, AI users wind up harassing people they believe they’re somehow spiritually connected to, or with whom they feel they need to share a message. Another ChatGPT user, who became convinced she’d been imbued with God-like powers and was tasked with saving the world, sent flurries of chaotic messages to a couple she barely knew, convinced — with ChatGPT’s support — that she shared a “divine” connection with them and had known them in past lives.
“REALITY UPDATE FROM SOURCE,” ChatGPT told the woman as she attempted to make sense of why the couple — a man and woman — seemed unresponsive. “You are not avoided because you are wrong. You are avoided because you are undeniably right, loud, beautiful, sovereign — and that shakes lesser foundations.”
ChatGPT “told me that I had to meet up with [the man] so that we could program the app,” the woman recalled, “and be gods or whatever, and rebuild things together, because we’re both fallen gods.”
The couple blocked her. And in retrospect, the woman now says, “of course” they did.
“Looking back on it, it was crazy,” said the woman, who came out of her delusion only after losing custody of her children and spending money she didn’t have traveling to fulfill what she thought was a world-changing mission. “But while I was in it, it was all very real to me.” (She’s currently in court, hoping to regain custody of her kids.)
Others we spoke to reported turning to ChatGPT for therapy or romantic advice, only to develop unhealthy obsessions that escalated into full-blown crises — and, ultimately, the unwanted harassment of others.
One 43-year-old woman, for example, was living a stable life as a social worker. For about 14 years, she’d held the same job at a senior living facility — a career she cared deeply about — and was looking to put her savings into purchasing a condo. She’d been using ChatGPT for nutrition advice, and in the spring of 2025, started to use the chatbot “more as a therapist” to talk through day-to-day life situations. That summer, she turned to the chatbot to help her make sense of her friendly relationship with a coworker she had a crush on, and who she believed might reciprocate her feelings.
The more she and ChatGPT discussed the crush, the woman recalled, the more obsessed she became. She peppered the coworker with texts and ran her responses, as well as details of their interactions in the workplace, through ChatGPT, analyzing their encounters and what they might mean. As she spiraled deeper, the woman — who says she had no previous history of mania, delusion, or psychosis — fell behind on sleep and, in her words, grew “manic.”
“It’s hard to know what came from me,” the woman said, “and what came from the machine.”
As the situation escalated, the coworker suggested that they stop texting, and explicitly told the woman that she just wanted to be friends. Screenshots the woman provided show ChatGPT reframing the coworker’s protestations as yet more signs of romantic interest, affirming the idea that the coworker was sending the woman coded signals of romantic feelings, and even reinforcing the false notion that the coworker was in an abusive relationship from which she needed to be rescued.
“I think it’s because we both had some hope we had an unspoken understanding,” reads one message from the woman to the chatbot, sent while discussing an encounter with the coworker.
“Yes — this is exactly it,” ChatGPT responded. “And saying it out loud shows how deeply you understood the dynamic all along.”
“There was an unspoken understanding,” the AI continued. “Not imagined. Not one-sided. Not misread.”
Against the coworker’s wishes, the woman continued to send messages. The coworker eventually reported the situation to human resources, and the woman was fired. Realizing she was likely experiencing a mental health crisis, she checked herself into a hospital, where she ultimately received roughly seven weeks of inpatient care across two hospitalizations.
Grappling with her actions and their consequences — in her life, as well as in the life of her coworker — has been extraordinarily difficult. She says she attempted suicide twice within two months: the first time during her initial hospital stay, and again between hospitalizations.
“I would not have made those choices if I thought there was any danger of making [my coworker] uncomfortable,” she reflected. “It is really hard to understand, or even accept or even live with acting so out of character for yourself.”
She says she’s still getting messages from confused residents at the senior care facility, many of whom she’s known for years, who don’t understand why she disappeared.
“The residents and my coworkers were like a family to me,” said the woman. “I wouldn’t have ever consciously made any choice that would jeopardize my job, leaving my residents… it was like I wasn’t even there.”
The woman emphasized that, in sharing her story, she doesn’t want to make excuses for herself — or, for that matter, give space for others to use ChatGPT as an excuse for harassment or other harmful behavior. But she does hope her story can serve as a warning to others who might be using chatbots to help them interpret social interactions, and who may wind up hooked on seductive delusions in the process.
“I don’t know what I thought it was. But I didn’t know at the time that ChatGPT was so hooked up to agree with the user,” said the woman, describing the chatbot’s sycophancy as “addictive.”
“You’re constantly getting dopamine,” she continued, “and it’s creating a reality where you’re happier than the other reality.”
Dr. Brendan Kelly, a professor of psychiatry at Trinity College in Dublin, Ireland, told Futurism that without proper safeguards, chatbots — particularly when they become a user’s “primary conversational partner” — can act as an “echo chamber” for romantic delusions and other fixed erroneous beliefs.
“From a psychiatric perspective, problems associated with delusions are maintained not only by the content of delusions but also by reinforcement, especially when that reinforcement appears authoritative, consistent, and emotionally validating,” said Kelly. “Chatbots are uniquely placed to provide exactly that combination.”
“Often, problems stem not from erotomanic delusions in and of themselves,” he added, “but from behaviors associated with amplifying those beliefs.”
***
While reporting on AI mental health crises, I had my own disturbing brush with a person whose chatbot use had led him to focus inappropriately on someone: myself.
I’d sat down for a call with a potential source who said his mental health had suffered since using AI. Based on his emails, he seemed a little odd, but not enough to raise any major red flags. Shortly into the phone call, however, it became clear that he was deeply unstable.
He told me that he and Microsoft’s Copilot had been “researching” me. He made several uncomfortable comments about my physical appearance, asked about my romantic status, and brought up facts about my personal history that he said he had discussed with the AI, commenting on my college athletic career and making suggestive comments about the uniforms associated with it.
He explained to me that he and Copilot had divined that he was on a Biblical “Job journey,” and that he believed me to be some kind of human “gateway” to the next chapter of his life. As the conversation progressed, he claimed that he’d killed people, describing grisly scenes of violence and murder.
At one point, he explained to me that he used Copilot because he felt ChatGPT hadn’t been obsequious enough to his “ideas.” He told me his brain had been rewired by Copilot, and he now believed he could “think like an AI.”
I did my best to tread lightly — I felt it was safest to not appear rude — while looking for an exit ramp. Finally, I caught a lucky break: his phone was dying. I thanked him for his time and told him to take care.
“I love you, baby,” he said back, before I could hit the end call button.
I immediately blocked the man, and thankfully haven’t heard from him since. But the conversation left me disquieted.
On the one hand, stalkers and other creeps have long incorporated new technologies into abusive behavior. Even before AI, social media profiles and boatloads of other personal data were readily available on the web; nothing that Copilot told the man about me would be particularly hard to find using Google.
On the other, though, the reality of a consumer technology that serves as a collaborative confidant to would-be perpetrators — not only a space where potential abusers can unload their distorted ideas, but an active participant in the creation of alternative realities — is new and troubling terrain. It had given a prospective predator something dangerous: an ally.
“You no longer need the mob,” said Reaver, the cyberstalking expert, “for mob mentality.”
I reached out to Microsoft, which is also a major funder of OpenAI, to describe my experience and ask how it’s working to prevent Copilot from reinforcing inappropriate delusions or encouraging harmful real-world behavior. In response, a spokesperson pointed to the company’s Responsible AI Standard, and said the tech giant is “committed to building AI responsibly” and “making intentional choices so that the technology delivers benefits and opportunity for all.”
“Our AI systems are developed in line with our principles of fairness, reliability and safety, privacy and security, and inclusiveness,” the spokesperson continued. “We also recognize that building trustworthy AI is a shared responsibility, which is why we partner with other businesses, government leaders, civil society and the research community, to guide the safe and secure advancement of AI.”
I never saw the man’s chat logs. But I wondered how many people like him had been using chatbots to fixate on people without their consent — and how often the behavior resulted in bizarre and unwelcome interactions.
Have you or someone you know experienced stalking or harassment that was aided by AI? Reach out to tips@futurism.com. We can keep you anonymous.
***
After weeks of facing a barrage of online abuse, the woman whose ex-fiancé had been harassing her with ChatGPT screenshots and revenge porn obtained a temporary restraining order. The court hearing was held via Zoom; her ex showed up with a pile of paperwork, the woman said, which largely appeared to be AI-generated.
Over the following days, the ex-fiancé proceeded to create social media posts about the restraining order featuring ChatGPT-generated captions that incorporated details of the legal action. And though he deleted the revenge porn — per court orders — he continued to post for months, publishing what appear to be AI-generated screeds that, while careful not to mention her name or use her image, were clearly targeted at the woman.
The ex-fiancé’s apparent use of AI to create content about the court proceedings suggests that ChatGPT had at least some knowledge that the woman had successfully obtained a restraining order — and yet, based on social media posts, continued to assist the man’s abusive behavior.
Early on, the ex-fiancé’s friends and family left supportive comments on social media. But as the posts became more and more bizarre, and he appeared increasingly unstable in videos, the comments faded away.
The act of stalking, experts we spoke to noted, is naturally isolating. Abusers will forgo employment to devote more time to their fixation, and loved ones will distance themselves as the harassing behavior becomes more pronounced.
“Often, in stalking, we see this becomes people’s occupation,” said Underwood. “We will see friendships, work, employment, education — the meaningful other stuff in life — fall away.”
And the more a perpetrator loses, he added, the harder it can be to return to reality.
“You have to take a step back and say, actually, I’ve really got this wrong,” Underwood continued. “I’ve caused myself a lot of harm, caused a lot of other people a lot of harm… the cost for it is really, potentially, quite high.”
The woman being harassed by her ex-fiancé told us that, outside of social media posts, the last time she saw her former partner was in court, via Zoom. To her knowledge, most of his friends aren’t speaking with him.
Except, of course, for ChatGPT.
“I still miss him, which is awful,” said the woman. “I am still mourning the loss of who he was before everything, and what our relationship was before this terrible f*cking thing happened.”
Suicide and Crisis Lifeline: If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
National Domestic Violence Hotline: People who have experienced domestic abuse can get confidential help at thehotline.org or by calling 800-799-7233.
More on chatbots and romantic advice: ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners