“OpenAI Is Deploying a Technology that Manipulates Users at No Cost,” Writes a Former Researcher in the NYT

IBL News | New York

Zoë Hitzig, a researcher who recently resigned from OpenAI, wrote an Op-Ed in The New York Times denouncing how the San Francisco-based company is deploying a technology that manipulates users at no cost. “I have deep reservations about OpenAI’s strategy,” she wrote.

Her concerns increased after OpenAI decided to include ads in ChatGPT, “creating a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

“Tech companies can pursue options that limit incentives to surveil, profile, and manipulate their users.”

“The erosion of OpenAI’s own principles to maximize engagement may already be underway. It’s against company principles to optimize user engagement solely to generate more advertising revenue, but it has been reported that the company already optimizes for daily active users anyway, likely by encouraging the model to be more flattering and sycophantic. This optimization can make users feel more dependent on A.I. for support in their lives. We’ve seen the consequences of dependence, including psychiatrists documenting instances of ‘chatbot psychosis’ and allegations that ChatGPT reinforced suicidal ideation in some users.”

Hitzig suggests three possible alternatives to an ad-driven model. The first avoids ads through cross-subsidy, using profits from one service or customer base to offset losses from another:

  • “If a business pays A.I. to do high-value labor at scale that was once the job of human employees — for example, a real-estate platform using A.I. to write listings or valuation reports — it should also pay a surcharge that subsidizes free or low-cost access for everyone else.”
  • “A second option is to accept advertising but pair it with real governance.”
  • “A third approach involves putting users’ data under independent control through a trust or cooperative with a legal duty to act in users’ interests.”