AI PLEDGE

The White House-brokered pledge lacked detail on key issues in generative AI development

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI signed up to a safety pledge

Google launched Bard in March as tech companies compete to dominate the AI space.
Photo: Shannon Stapleton (Reuters)

Seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—pledged to adopt voluntary safeguards on the development of artificial intelligence systems, the White House said in a press release today (July 21). The voluntary commitment comes ahead of any actual regulation in the US, which the Biden administration said it is exploring via both executive orders and bipartisan legislation.

The release of OpenAI’s ChatGPT in November marked the first time the general public gained easy access to generative AI for creating new content, throwing the technology into the spotlight. The AI bot sparked both awe at its potential and concerns about AI taking away jobs, spreading disinformation at scale, or otherwise harming society.


Those concerns soon led to calls for regulation around the world. The companies that committed to the White House pledge are driving the technology’s rapid development as they race to dominate the AI space. They have also been vocal about embracing regulation, in part to help shape the fast-growing industry.

AI pledge’s pros and cons

The companies’ pledges touch on various aspects of the technology’s development, including:

🧪 internal and external testing of their AI systems for misuse, societal risks, and national security concerns before release;

🫱‍🫲 sharing information on managing AI risks across the industry and with governments, the public, and academics;

❗ developing robust mechanisms, such as watermarks, to ensure that users know when content is AI-generated;


📣 and publicly reporting their AI systems’ capabilities and limitations.

But the details of the pledges are vague. For instance, it’s not clear which types of AI systems should undergo testing, what information the companies are expected to share across the industry and with governments, how the pledges will be enforced, or how the seven companies were chosen.

Governments around the world are racing to regulate the fast-moving AI industry

While the US lags in actual regulation, the EU and China have either set rules or are close to doing so. Last week, China became one of the first countries to regulate the nascent industry, though it softened some restrictions from its initial April draft, pointing to how countries grapple with balancing guardrails against encouraging innovation.


Meanwhile, the EU—which has a track record of stricter rules than other regions in areas such as data privacy and antitrust—last month released its latest draft of the AI Act. The act proposes classifying AI systems by risk, with higher-risk systems facing more compliance rules than lower-risk ones. Some critics still find the proposed regulation too restrictive. The final version of the AI Act is not expected to pass until later this year, and much can happen between now and then in regulating the novel industry.