Superhuman CEO Addresses AI Impersonation Controversy in Heated Interview


The CEO of Superhuman (formerly Grammarly), Shishir Mehrotra, sat down for a tense interview to discuss the company’s controversial “Expert Review” feature, which used the names of prominent figures—including journalists and authors—without their consent to generate AI-driven writing suggestions. The discussion, conducted by journalist Casey Newton, delved into the decision-making process behind the feature’s launch and subsequent removal, as well as the broader implications of AI’s encroachment on creative work.

The Feature and the Backlash

Mehrotra acknowledged the outrage sparked by Expert Review, which displayed the names of real people—including Casey Newton, Julia Angwin, and even bell hooks—alongside checkmarks that implied some form of official endorsement. While the feature saw minimal user engagement, it triggered a swift backlash, culminating in a class-action lawsuit. Mehrotra apologized but defended the team’s intentions, claiming they were attempting to bridge the gap between users seeking expert-level feedback and experts struggling to maintain direct connections with their audiences.

“The feature was not a good feature. It wasn’t good for experts, it wasn’t good for users. It was a fairly buried feature… We can do much, much better.”

The team’s rationale revolved around the idea that users wanted AI assistance that mirrored real-world mentorship: a sales manager providing feedback, a support agent offering contextual advice. However, this vision clashed with the ethical concerns of using individuals’ names without permission.

Decision-Making at Superhuman

Mehrotra described Superhuman’s decision-making process as rooted in soliciting diverse feedback to avoid groupthink, referencing a company ritual called “Dory and Pulse.” Yet he admitted that the potential for backlash over unauthorized name usage never surfaced during internal discussions. The team believed the feature would be seen as attribution rather than impersonation, given that it clearly linked back to the experts’ original works.

Superhuman employs approximately 1,500 people; the decision to launch Expert Review was made by a small team consisting of a product manager and a few engineers.

The Future of AI Integration

The broader conversation shifted toward Superhuman’s ambition to integrate AI seamlessly into users’ workflows, across platforms like Google Docs, Slack, and mobile apps. Mehrotra argued that their strength lies in ubiquity, providing a consistent AI experience regardless of the tool being used. The company’s new platform, Superhuman Go, aims to empower others to build AI agents that function similarly to Grammarly, effectively expanding their AI footprint.

The Financial Question

When pressed on compensation for using Newton’s own likeness, Mehrotra skirted a direct answer. He reiterated the importance of attribution when using someone’s work but drew a distinction between attribution and impersonation, defending Expert Review as the former rather than malicious mimicry. He suggested the company believes the lawsuit is unfounded.

The interview concluded with a stark exchange about financial compensation, leaving unresolved the question of whether Superhuman would pay for unauthorized use of individuals’ identities.

Conclusion

The interview highlighted Superhuman’s aggressive push to embed AI into every facet of digital work, while also exposing the ethical gray areas of leveraging personal brands without consent. The incident serves as a cautionary tale about the rapid deployment of AI tools and the need for clearer boundaries around intellectual property and personal identity in the age of generative technology.