The Group of Seven (G7), a bloc of the world's advanced economies, has agreed to a code of conduct for companies engaged in developing artificial intelligence technologies.
While the code of conduct is voluntary, it is meant to serve as guidance for companies developing AI technologies, acting as a stopgap until formal regulations are in place.
What Is The Agreement?
The code of conduct is an 11-point agreement meant to promote safe and responsible practices.
According to the report, the document includes the following wording:
“…aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”
Many of the leading AI companies have already developed their own voluntary guidelines and have funded organizations that study the safety of AI development.
Anthropic, Google, Microsoft, and OpenAI announced a safety forum for studying possible harms from AI and recently pledged $10 million to fund the organization.
The same organizations, plus IBM, Meta, Nvidia, and Palantir, have also agreed to a pledge of safety and security in the development of AI.
The G7 agreement follows along those same contours.
Nevertheless, the agreement is important because it underlines the importance of evaluating possible risks and asks companies to take concrete steps before formal regulations are imposed.
Group of Seven (G7)
The Group of Seven is made up of seven countries plus one region (the European Union).
Member countries are:
- Canada
- France
- Germany
- Italy
- Japan
- United Kingdom
- United States
United States Presidential Executive Order
President Biden has reportedly drafted an executive order directing federal agencies to set standards and to encourage companies that develop AI technologies to adhere to safe and secure practices.
The United States Federal Trade Commission is reported to be specifically charged with scrutinizing AI companies.
Featured Image by Shutterstock/Lano4ka