It was reported this week that OpenAI has disbanded its mission alignment team and fired a woman (ostensibly because she discriminated against men) who opposed adding an “Adult Mode” to ChatGPT. Meanwhile, a former OpenAI researcher published a New York Times op-ed about the erosion of OpenAI’s principles.
Notably, these moves come after OpenAI’s contentious restructuring into a Delaware public benefit corporation, which required assurances to the attorneys general of California and Delaware that the new structure would remain true to OpenAI’s original nonprofit mission to develop AI for humanity’s benefit. The mechanism for honoring that promise was a golden share held by OpenAI-the-nonprofit, giving it control over OpenAI-the-benefit-corporation’s board.
The available evidence suggests … the mission may have been redirected.
Now, maybe that’s because of the identity of the individuals appointed to OpenAI-the-nonprofit’s board, whose members include current and former tech execs, a private equity guy, a corporate lawyer, and Sam Altman. And certainly, there may be a broader lesson here about the general toothlessness of the benefit corporation form – we’re seeing similar issues at Anthropic, which is also organized as a benefit corporation.
But the problem likely runs deeper. For one thing, we all remember when OpenAI’s board tried to fire Altman, only to be forced into retreat by an employee revolt. That wasn’t surprising: OpenAI (and Anthropic) compensate their employees with equity, incentivizing them to prioritize the company’s financial value. OpenAI and Anthropic have operated much more like VC-backed startups than social enterprises, and that may ultimately be rooted in the fact that AI requires such enormous capital investment that it simply is not practical to expect anything other than the prioritization of profit.