What would you do if you found out a for-profit AI assistant had been trained on your work — and was using your name — without your permission?
Superhuman, the company formerly known as Grammarly, has forced many writers to ask that precise question. Last August, it quietly debuted a feature called “Expert Review,” through which users could get feedback on their writing from what the company styled as AI clones of professional writers.
The feature was discontinued in early March after explosive criticism, but not before Superhuman-née-Grammarly caught itself a class-action lawsuit led by investigative journalist Julia Angwin. At The Verge, editor-in-chief Nilay Patel was another of the writers aped by the AI company — a fact that loomed large in the latest taping of his podcast Decoder, featuring Superhuman CEO Shishir Mehrotra as a guest.
“You do not have our permission to use our names to do this,” Patel challenged early in the interview. “You had little check marks next to the name that indicated it was somehow official. People did not like this, I did not like this, and you removed the feature.”
“First off, I’d say I understand and respect how challenging a world it is for experts and idea generators these days,” Mehrotra replied in a display of highly sanitized corporate-speak. “It deeply pained me to feel that we under-delivered for them. And I’d really like to apologize for that. That was not our intention.”
Mehrotra continued the odd apology, rationalizing that, from a CEO’s point of view, it wasn’t even that good a feature.
“It wasn’t good for experts, it wasn’t good for users. It was a fairly buried feature. It had very little usage,” the executive said. “You mentioned it last week and talked about it. It took months for anybody to even sort of find it. All that doesn’t really matter. We can do much, much better. I believe we can and we will do better.”
When Patel directly challenged how much Superhuman should pay human writers it cloned for its feature, Mehrotra became indignant, arguing that AI clones are a matter of attribution, not impersonation.
“When somebody uses your content, should they attribute you? Of course. And to attribute you, you have to use your name,” the CEO declared. “There’s a different line which is, should people be able to impersonate you? And I think that is a very different standard. And we saw the lawsuit. Respectfully, we believe the claims are without merit. The idea that the feature is impersonation is quite a big stretch.”
As the back-and-forth went on, it became clear that Mehrotra has a very different view of impersonation than most people — even though, as the Verge editor pointed out, the lawsuit is really about using writers’ names and identities for commercial purposes without their consent.
“And so, here you did have a commercial purpose here,” said Patel, who isn’t involved in the class action. “You were selling the software and names were appearing as inspired by our names.”
“I’ll have to leave the legal arguments for the lawsuit and for the court case,” came Mehrotra’s reply.
More on AI: Novel Pulled From Shelves After Author Is Accused of Using AI