An interactive report of more than 50 pages and over 10,000 words traces the entire process of OpenAI's transformation from a non-profit research laboratory into a for-profit giant. Recently, two non-profit technology watchdog organizations, the Midas Project and the Tech Oversight Project, jointly released an in-depth investigative report called 'The OpenAI Files'.
The report was led by Tyler Johnston, executive director of the Midas Project, and drew on nearly a year of gathering public information followed by a month of concentrated writing. It is described as "the most comprehensive compilation of documented concerns about OpenAI's governance practices, leadership integrity, and organizational culture to date".
Drawing on the company's disclosed documents, legal proceedings, open letters, and media reports, the interactive report concludes that OpenAI is systematically and deliberately completing a fundamental transformation from "serving human welfare" to "serving investor profits". It finds that CEO Sam Altman has a long-standing, well-documented pattern of inconsistent statements, information manipulation, and avoidance of oversight, and that his personal investments are deeply entangled with the company's business. It also finds that OpenAI's record on safety and transparency is inconsistent, with a serious disconnect between its public commitments and internal practices.
The report is organized around four themes: restructuring, executive leadership integrity, transparency and safety, and conflicts of interest.
Of particular concern is the extent to which OpenAI executives and board members stand to benefit, directly or indirectly, from the company's success. This includes an analysis of CEO Altman's investment portfolio, which involves companies such as Retro Biosciences, Helion Energy, Reddit, and Stripe that maintain business relationships with OpenAI.
Restructuring: A Carefully Planned 'Mission Betrayal'
The report argues that OpenAI is systematically and deliberately dismantling the core ethical and structural pillars established at its founding, through actions that seriously contradict its public declarations and amount to a fundamental transformation from "serving human welfare" to "serving investor profits".
First, the report documents the simultaneous collapse of OpenAI's two core pillars: the profit cap and non-profit oversight.
The initial "Capped-Profit" model was its core philosophy, intended to ensure that the enormous wealth created by AGI could be shared with all of humanity and prevent excessive wealth concentration. However, this promise has been gradually hollowed out: from seemingly strengthening the mission through profit multiple reductions, to secretly introducing a clause of "automatically growing 20% annually" that renders it functionally ineffective, to ultimately planning to completely remove the cap, marking the complete termination of the wealth-sharing concept.
At the same time, the oversight mechanism has been quietly weakened. OpenAI is converting from an entity fully controlled by a non-profit into a Delaware public benefit corporation (PBC), shifting its legal obligation from "mission first" to "balancing shareholder interests and public interests". The report notes that there is no historical precedent of shareholders successfully suing to protect the public interest, making public-benefit commitments almost unenforceable in practice. The PBC's "public benefit" commitment may therefore become an empty shell, providing ample legal cover for profit maximization.
Image source: openaifiles.org
The report also rebuts OpenAI's official narrative that it abandoned these commitments because of "intense industry competition". Citing the company's early charter and internal emails, it shows that OpenAI had fully anticipated and prepared for fierce competition from the beginning, so invoking competition as a reason to betray its commitments is untenable "revisionist history". The real motivation, the report argues, is that both investors and company leadership believe in OpenAI's enormous profit potential, which makes removing the cap essential.
Leadership Integrity: Altman's Behavior Pattern Triggers a Trust Crisis
The report documents a long-standing, well-documented pattern in which CEO Altman makes inconsistent statements, manipulates information, avoids oversight, and prioritizes personal interests over organizational responsibilities.
The report lists multiple instances where Altman publicly lied or misled on major issues. For example:
On employee non-disparagement agreements: Altman publicly claimed to be unaware of the clause that allowed the company to claw back departing employees' vested equity, but documents show he explicitly authorized it.
Testifying before the Senate, he claimed to hold no OpenAI equity, but later admitted to holding shares indirectly through a fund.
He long concealed from the board the fact that he personally owned the OpenAI Startup Fund.
Former board member Helen Toner directly accused Altman of obstructing the board's work by "withholding information, misrepresenting facts, and in some cases outright lying". The report also shows that this behavior pattern has run throughout his career:
At Loopt, senior employees twice attempted to have the board dismiss him over what they described as "deceptive and chaotic" behavior.
At Y Combinator, founder Paul Graham removed him for neglecting his duties in favor of his personal projects.
Most dramatically, after being fired by the OpenAI board, he used his influence to fight back, making his return conditional on removing the board members who had fired him and installing his own allies, successfully turning the tables on the oversight system.
Operational and Safety Risks: The Systematic Failure of Safety Commitments
The report reveals a systematic disconnect between OpenAI's public commitments and its internal practices on safety and transparency. The company culture exhibits a "speed over everything" tendency, systematically weakening, sidelining, or even punishing internal safety oversight and dissent in pursuit of commercial interests and competitive advantage.
For example, the company promised to dedicate 20% of its computing resources to the "Superalignment" safety team, but according to the team's former head, Jan Leike, those resources were never actually allocated. During GPT-4o development, the safety team was pressed to "quickly complete" testing before the product's release, and the company planned launch celebrations before the assessment had even begun.
More seriously, the company pressured departing employees with harsh exit agreements under which they stood to lose millions of dollars in equity if they criticized the company. Employee Leopold Aschenbrenner was fired after submitting a memo on national-security risks to the board, with the company explicitly telling him that "going over leadership's heads" to report safety concerns was the reason for his dismissal.
The report also notes that OpenAI suffered a serious security breach in 2023, in which a hacker stole details of its AI technology, yet the company did not report the incident to authorities or the public for an entire year. Multiple current and former employees accuse the company of a "reckless and secretive culture" that prioritizes "profits and growth" over its safety mission.
Conflicts of Interest: The CEO's Personal Investments Are Deeply Intertwined with Company Business
The report details how Altman has built a vast, interconnected personal investment network whose conflicts of interest with OpenAI's business, technology, and strategic partnerships are profound and direct, fundamentally challenging the company's claimed mission of "benefiting humanity".
Here are several typical cases:
Helion (nuclear fusion energy): Altman is Helion's chairman and a principal investor while simultaneously serving as OpenAI's CEO, and he personally led OpenAI's deal to purchase large amounts of energy from Helion. There are reasonable grounds to ask whether the transaction primarily serves to protect his massive personal investment in Helion.
Worldcoin (cryptocurrency project): Altman co-founded Worldcoin, and OpenAI has established an official partnership with it (for example, providing free GPT-4 services). Critics question whether this is an arm's-length business collaboration or a case of Altman using OpenAI's resources and brand to support and promote his own high-risk project.
Humane (AI hardware): Altman is Humane's largest shareholder, and Humane's products rely heavily on OpenAI's models. As OpenAI's CEO, he has a strong personal financial incentive to ensure that Humane receives preferential terms or priority technical support, potentially at the expense of other customers and market fairness.
These deeply intertwined interests severely erode Altman's fiduciary duty as CEO: are his decisions truly made for OpenAI's mission, or for the growth of his personal wealth? The report ultimately portrays Altman less as a mission-driven leader than as a shrewd capital operator who has deftly positioned OpenAI at the center of his personal business empire, systematically converting the company's technology, resources, and strategic relationships into growth fuel for his own investment portfolio.