Former OpenAI Chief Scientist Launches ‘Safe’ Rival AI Lab

In a move that underscores growing concerns about AI safety, former OpenAI Chief Scientist Ilya Sutskever has launched a new AI research firm, Safe Superintelligence Inc. (SSI). Sutskever's departure from OpenAI stemmed in part from internal tensions over how the company prioritized safety.

With SSI, Sutskever aims to sidestep the distractions of management overhead and commercialization pressure, focusing solely on safe AI development. The company's stated mission is to build a "safe superintelligence" that will not harm humanity and will operate according to values such as liberty and democracy. Unlike OpenAI, SSI will remain a pure research organization rather than pursue commercial products, and it plans to recruit talent in the United States and Israel.

The launch follows OpenAI's disbanding of its Superalignment team, which prompted the departure of several key members. Sutskever and other former OpenAI researchers have criticized the company, citing concerns that profitability was prioritized over safety. The emergence of SSI and other rival firms reflects the growing weight of safety considerations in the development of artificial intelligence.