Another of OpenAI’s top safety researchers, Lilian Weng, announced on Friday that she will be leaving the startup. Weng had served as VP of research and safety since August and, before that, was the head of OpenAI’s safety systems team.
In a post on X, Weng said that “after 7 years at OpenAI, I feel ready to reset and explore something new.” Weng said her last day will be Nov. 15, but did not specify where she will go next.
“I made the extremely difficult decision to leave OpenAI,” Weng said in the post. “Looking at what we have achieved, I’m very proud of everyone on the Safety Systems team and have great confidence that the team will continue to thrive.”
Weng’s departure marks the latest in a long string of AI safety researchers, policy researchers, and other executives who have left the company in the last year, several of whom have accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike, the leaders of OpenAI’s now-dissolved Superalignment team (which tried to develop methods to steer superintelligent AI systems), who also left the startup this year to work on AI safety elsewhere.
Weng first joined OpenAI in 2018, according to her LinkedIn, working on the startup’s robotics team, which ended up building a robot hand that could solve a Rubik’s Cube, a feat that took two years to achieve, according to her post.
As OpenAI began to focus more on the GPT paradigm, so did Weng. The researcher transitioned to help build the startup’s applied AI research team in 2021. Following the launch of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI’s safety systems unit has more than 80 scientists, researchers, and policy experts, according to Weng’s post.
That’s a lot of people working on AI safety, yet many have raised concerns about OpenAI’s focus on safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was disbanding its AGI readiness team, which he had advised. On the same day, The New York Times profiled a former OpenAI researcher, Suchir Balaji, who said he left OpenAI because he thought the startup’s technology would bring more harm than good to society.
OpenAI tells TechCrunch that executives and safety researchers are working on a transition to replace Weng.
“We deeply appreciate Lilian’s contributions to groundbreaking safety research and building rigorous technical safeguards,” an OpenAI spokesperson said in an emailed statement. “We are confident the Safety Systems team will continue to play a key role in ensuring our systems are safe and reliable, serving hundreds of millions of people around the world.”
Other executives who have left OpenAI in recent months include Chief Technology Officer Mira Murati, research chief Bob McGrew, and VP of Research Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they were leaving the startup. Some of them, including Leike and Schulman, left to join OpenAI competitor Anthropic, while others started their own companies.