To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday.
OpenAI, the company, finds itself in a bit of a precarious position. It's battling the perception that it's ceding ground in the AI race to Chinese companies like DeepSeek, which OpenAI alleges might've stolen its IP. The ChatGPT maker has been trying to shore up its relationship with Washington while simultaneously pursuing an ambitious data center project and reportedly laying the groundwork for one of the largest financing rounds in history.
Altman admitted that DeepSeek has lessened OpenAI's lead in AI, and he also said he believes OpenAI has been "on the wrong side of history" when it comes to open-sourcing its technologies. While OpenAI has open-sourced models in the past, the company has generally favored a proprietary, closed-source development approach.
"[I personally think we need to] figure out a different open source strategy," Altman said. "Not everyone at OpenAI shares this view, and it's also not our current highest priority [...] We will produce better models [going forward], but we will maintain less of a lead than we did in previous years."
In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said that OpenAI is considering open-sourcing older models that aren't state-of-the-art anymore. "We'll definitely think about doing more of this," he said, without going into greater detail.
Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, like the o3-mini model released today, show their "thought process." Currently, OpenAI's models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. In contrast, DeepSeek's reasoning model, R1, shows its full chain of thought.
"We're working on showing a bunch more than we show today – [showing the model thought process] will be very very soon," Weil added. "TBD on all – showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we'll find the right way to balance it."
Altman and Weil attempted to dispel rumors that ChatGPT, the chatbot app through which OpenAI launches many of its models, would increase in price. Altman said that he'd like to make ChatGPT "cheaper" over time, if feasible.
Altman previously said that OpenAI was losing money on its priciest ChatGPT plan, ChatGPT Pro, which costs $200 per month.
In a somewhat related thread, Weil said that OpenAI continues to see evidence that more compute power leads to "better" and more performant models. That's in large part what's necessitating projects such as Stargate, OpenAI's recently announced massive data center project, Weil said. Serving a growing user base is fueling compute demand within OpenAI as well, he continued.
Asked about recursive self-improvement that might be enabled by these powerful models, Altman said he thinks a "fast takeoff" is more plausible than he once believed. Recursive self-improvement is a process where an AI system could improve its own intelligence and capabilities without human input.
Of course, it's worth noting that Altman is notorious for overpromising. It wasn't long ago that he lowered OpenAI's bar for AGI.
One Reddit user asked whether OpenAI's models, self-improving or not, would be used to develop destructive weapons, specifically nuclear weapons. This week, OpenAI announced a partnership with the U.S. government to give its models to the U.S. National Laboratories in part for nuclear defense research.
Weil said he trusted the U.S. government.
"I've gotten to know these scientists and they are AI experts in addition to world class researchers," he said. "They understand the power and the limits of the models, and I don't think there's any chance they just YOLO some model output into a nuclear calculation. They're smart and evidence-based and they do a lot of experimentation and data work to validate all their work."
The OpenAI team was asked several questions of a more technical nature, like when OpenAI's next reasoning model, o3, will be released ("more than a few weeks, less than a few months," Altman said), when the company's next flagship "non-reasoning" model, GPT-5, might land ("don't have a timeline yet," said Altman), and when OpenAI might unveil a successor to DALL-E 3, the company's image-generating model.

DALL-E 3, which was released around two years ago, has gotten rather long in the tooth. Image generation tech has improved by leaps and bounds since DALL-E 3's debut, and the model is no longer competitive on a number of benchmark tests.
"Yes! We're working on it," Weil said of a DALL-E 3 follow-up. "And I think it's going to be worth the wait."