In a dramatic upheaval at OpenAI, more than 500 employees left or threatened to resign after a controversial board decision. The board voted to remove chief executive Sam Altman, a move that stunned the tech industry and blindsided investors and staff alike. The decision sparked talk of an internal coup, with Altman's leadership abruptly called into question just days after the company's high-profile product announcements.
The weekend that followed was chaotic. Most of OpenAI's roughly 700-person workforce joined the revolt or signaled they would leave unless the board reversed course, with many threatening to follow Altman to a new advanced AI research group that Microsoft said he would lead. Greg Brockman, another OpenAI co-founder, was removed as board chair in the same decision and promptly resigned from the company in solidarity with Altman. The abrupt split exposed deep concerns about governance, strategy, and the direction of artificial intelligence development at the firm.
An open letter signed by employees stated that the board's actions had demonstrated a lack of competence, judgment, and care for the mission and the people involved. It asserted that the board had failed to act in line with OpenAI's stated aims and its broader responsibility to society and to the field of AI safety.
Sutskever apologizes.
Among the hundreds who signed the open letter were prominent figures including Mira Murati, the chief technology officer; Brad Lightcap, the chief operating officer; and Ilya Sutskever, the chief scientist and a co-founder who has long advocated a cautious approach to advancing artificial intelligence. Sutskever was the executive who informed Altman of his termination by video call on Friday, a move that drew scrutiny over how the decision was communicated and carried out.
In the days that followed, Sutskever issued a public note expressing regret for his participation in the board's actions. He said he had never intended to harm OpenAI, affirmed his affection for the company and what it had built, and pledged to do everything he could to reunite it and restore stability. The episode has prompted broader discussion about governance structures at high-stakes AI companies and the balance between rapid innovation and careful attention to safety. It now stands as a case study in leadership, loyalty, and the fragility of startup culture under intense scrutiny.