I’m genuinely surprised and concerned that so many people seem comfortable with OpenAI’s decision to abandon its foundational values in favor of profit. Wasn’t OpenAI originally established as a non-profit organization, dedicated to bringing together the best minds in AI research for the greater good?
It feels almost ironic that the original mission statement can still be found in OpenAI’s Introducing OpenAI post (link).
From the original mission statement, I’d like to highlight the following two paragraphs:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.
This statement was signed by Greg Brockman and Ilya Sutskever, and it also lists other founding members: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, Wojciech Zaremba, Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, Vishal Sikka, Sam Altman, and Elon Musk.
Of this group, only three remain at OpenAI today: Sam Altman, Greg Brockman, and Wojciech Zaremba. Greg Brockman himself seems to have one foot out the door.
While I’m less sure about Sam Altman, Greg Brockman, and Elon Musk, I believe the rest of the founders genuinely held these original principles at the time.
So, what happened?
There’s a lot of speculation, but I think it’s not that hard to understand. Long ago, they must have concluded that their original goals couldn’t be achieved without selling something in return (donations and grants can only take a non-profit so far). Elon Musk, Sam Altman, and probably Greg Brockman offered to solve this, and everyone just went along without thinking about the long-term consequences.
They created a for-profit arm to help the non-profit monetize its work. It makes sense, and at the same time it doesn’t.
This is where the ethics of the situation start to concern me. The shift happened gradually, and nobody raised alarms until it was too late. When some finally tried to intervene, it backfired: the for-profit entity was already too entrenched. Asking employees on the for-profit side to sacrifice their jobs for ethical reasons is unrealistic; they were hired under the promise of this structure, and it is the for-profit setup inside the non-profit that is problematic, not the people themselves. I also doubt the founders were prepared to reverse course after committing to this model for so long.
When they realized their original goals were unachievable, I believe they should have released their research into the public domain and dissolved the organization, acknowledging it as a failed project. Furthermore, regulations should have prevented the creation of the for-profit entity in the first place.
What’s worse is that by choosing to continue, they effectively betrayed the community and the people who worked for them and supported them. I strongly believe that OpenAI’s early accomplishments were driven by a noble purpose—to build a better future for all of us. But in the end, it seems that only a few with the means to buy into their business model will truly benefit.