OpenAI is losing another key member of its AI safety team as Lilian Weng, a lead safety researcher and VP of Research and Safety, announced her departure. After a seven-year tenure that saw her spearhead major projects like the creation of OpenAI’s safety systems unit, Weng revealed on social media that her last day with the company would be November 15. “After seven years at OpenAI, I’m ready to reset and explore new challenges,” Weng wrote in her farewell post.
Weng’s career at OpenAI began in 2018 when she joined the startup’s robotics team. Her early work included developing a robotic hand capable of solving a Rubik’s cube—a project that took two years to complete and showcased OpenAI’s innovative approach to robotics and machine learning. As OpenAI shifted its focus to language models, Weng transitioned to applied AI research, eventually leading efforts to build the company’s dedicated safety systems team after the launch of GPT-4 in 2023.
Under Weng’s leadership, the safety systems team grew significantly, expanding to over 80 experts in AI safety, research, and policy. The team has played a critical role in developing safeguards for OpenAI’s increasingly powerful AI systems, addressing concerns around reliability and safety for products used by millions globally.
Despite Weng’s departure, OpenAI expressed confidence in the future of its safety systems team. “We deeply appreciate Lilian’s contributions and are confident that the team will continue to play a key role in ensuring our systems are both safe and reliable,” an OpenAI spokesperson said in a statement.
Weng’s exit comes amid a broader wave of high-profile departures from OpenAI in recent months, many citing concerns over the company’s priorities. Other notable exits include CTO Mira Murati, Chief Research Officer Bob McGrew, and Research VP Barret Zoph. Some of these executives have criticized OpenAI for prioritizing commercial growth over rigorous AI safety practices, raising questions about the company’s long-term strategic direction.
Weng’s move also follows the departures of Ilya Sutskever and Jan Leike, leaders of the now-dissolved Superalignment team focused on steering superintelligent AI development. Leike has since joined Anthropic, an OpenAI competitor emphasizing safety-first AI development, while Sutskever co-founded a new AI startup, Safe Superintelligence.
As OpenAI faces continued scrutiny over how it balances safety and innovation, Weng’s exit underscores a broader tension within the AI industry between rapid commercialization and the need for robust safety protocols. How OpenAI responds to these challenges will likely shape its trajectory as it competes in a rapidly evolving field.
The departure of key personnel like Weng, who played an instrumental role in shaping OpenAI’s safety measures, may also impact the company’s ability to maintain its leadership in AI safety research. With executives and researchers leaving for competitors or starting new ventures, OpenAI will need to bolster its efforts to attract top talent and retain its focus on safety, even as it navigates the demands of expanding its product lineup.
In the wake of Weng’s announcement, industry observers are questioning the future of OpenAI’s safety strategy and the implications of this leadership shake-up. As AI technology continues to advance rapidly, the company’s commitment to safety will be tested, and its response may determine its standing in the industry.