Top Names Join Biden in AI Safety Group, Including OpenAI, Microsoft, Google, Apple, and Amazon

Leading tech companies, including OpenAI, Microsoft, Google, Apple, and Amazon, have joined the Biden administration's U.S. AI Safety Institute Consortium (AISIC), a group dedicated to promoting the development and deployment of safe and trustworthy artificial intelligence (AI). The consortium, established under President Biden's executive order on safe, secure, and trustworthy AI, brings together more than 200 representatives from AI developers, academia, government and industry researchers, civil society organizations, and users of AI systems.

Commerce Secretary Gina Raimondo emphasized that the consortium’s goal is to set safety standards while protecting the innovation ecosystem. The consortium will develop guidelines for evaluating AI models, managing risk, and ensuring safety and security, as well as for watermarking AI-generated content. By collaborating with leaders from industry, civil society, and academia, it aims to confront these challenges and maintain America’s competitive edge in AI development.

In addition to the tech giants, representatives from healthcare, academia, labor unions, the banking sector, and state and local governments have joined the consortium. International partners are also expected to participate, with a focus on developing tools for AI safety worldwide.

Notably absent from the list of participating firms are Tesla, Oracle, and Broadcom; TSMC, which is not a U.S.-based company, is also not listed. The consortium’s formation comes as the misuse of generative AI tools and the proliferation of AI-generated deepfakes pose significant challenges, including the spread of misinformation and the rise of fraudulent communications.

The AISIC builds on the commitments made by AI and tech companies during a meeting with the Biden administration last year, at which they pledged to develop AI responsibly. The participating companies recognize the need for collaboration and information sharing to ensure the responsible development and deployment of AI technologies.

Overall, the formation of the AISIC represents a significant step toward promoting AI safety and responsible innovation, with a diverse group of stakeholders working together to establish standards and guidelines for the safe and trustworthy use of artificial intelligence.