Last Friday, seven leading A.I. companies in the United States agreed to voluntary safeguards on the development of the technology, according to the White House. A New York Times article states that the companies have agreed to manage the risks of the new tools even as they compete over the potential of artificial intelligence.
The seven companies in question (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) made their commitments to this agreement, and to new standards for safety, trust, and security, during a meeting with President Biden at the White House last Friday.
From the Roosevelt Room at the White House, Mr. Biden said, “We must be clear-eyed and vigilant about the threats emerging technologies can pose — don’t have to but can pose — to our democracy and our values… This is a serious responsibility; we have to get it right… And there’s enormous, enormous potential upside as well.”
The announcement comes as these companies, and others, race to advance their versions of the technology, which can generate new content without human input. As these tools have advanced, however, public concern has grown, including fears of A.I. replacing human art and writing, the spread of disinformation, and worries about under-regulation.
The safeguards put into place on Friday are an early, tentative step as governments across the world seek to construct frameworks for the safe and legitimate development of A.I. and production of A.I.-generated material.
The Biden administration stressed that companies must ensure that “innovation doesn’t come at the expense of Americans’ rights and safety.”
The administration added, “Companies that are developing these emerging technologies have a responsibility to ensure their products are safe.”