COUNTRIES FROM ALL CONTINENTS SIGN AN AGREEMENT FOR THE RESPONSIBLE DEVELOPMENT OF AI
The United States, China, the European Union and giants such as Brazil and India, among others, have agreed to monitor the safe and responsible development of Artificial Intelligence.
Twenty-eight countries, together with the European Union, from five continents have made this commitment. To this end, they signed the so-called Bletchley Declaration.
As reported by Europa Press, an EditoRed partner, the AI Safety Summit was held at Bletchley Park Museum, located in Buckinghamshire (England). The meeting lasted two days, on November 1 and 2, 2023.
The summit proposed “a safer future for both AI and the world”, according to the UK Department of Science, Innovation and Technology, as reported by Europa Press.
The signatories also include Chile, Germany, Spain, France, Japan, South Korea, Saudi Arabia, the United Arab Emirates and Australia (the full list is given below).
This is the first global agreement that sets out both the opportunities and risks posed by AI, as well as the need for governments to work together to address the most relevant challenges.
With this agreement, the signatories “accept the urgent need to collectively understand and manage the potential risks through a new joint global effort to ensure that AI is developed and deployed safely,” according to a statement from the UK government.
Europa Press reports that at the summit, “those present agreed that substantial risks may arise from the intentional misuse of AI and that there is particular concern about those related to cybersecurity, biotechnology and disinformation.”
The Declaration states that there is “a potential for serious, even catastrophic harm, whether deliberate or unintended, arising from the most significant capabilities of these AI models.”
Because the summit participants believe that understanding of the risks and capabilities of this technology, which are “not fully understood,” must be deepened, they have also agreed to work together to support a scientific network on AI safety, building on the AI Safety Institute announced a few days earlier by British Prime Minister Rishi Sunak.
INCREASING BENEFITS, DECREASING RISKS
The Declaration specifies that this is “a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes public services such as health and education, food security, science, clean energy, biodiversity and climate, to realize the enjoyment of human rights and strengthen efforts to achieve the UN Sustainable Development Goals.”
The signatories assert that AI also poses significant risks, including in areas of everyday life. To that end, they say, “we welcome relevant international efforts to examine and address the potential impact of AI systems in existing forums and other relevant initiatives, and the recognition of (the need for) the protection of human rights, transparency and explainability, fairness, accountability, regulation, security, adequate human oversight, ethical aspects, mitigation of bias, privacy and data protection.”
The risk of dissemination of false information has not gone unnoticed in the Declaration. It says: “We also note the potential for unforeseen risks arising from the ability to manipulate content or generate misleading content. All of these issues are of critical importance and we affirm the need and urgency to address them.”
LOCAL AND GLOBAL APPROACHES
In the Declaration, the signatory countries indicate that they seek to move towards AI that is humanistic, trustworthy, responsible and safe. To this end, they indicate that this technology must be supported by a proportionate regulatory and governance environment that also ensures innovation.
As realities differ from country to country, the Declaration proposes that, where appropriate, “risk classifications or categorizations can be developed based on national circumstances and applicable legal frameworks”. But it also emphasizes “the importance of cooperation, where appropriate, on approaches such as common principles and codes of conduct”.
The statement also calls on all relevant actors in this field to be transparent and accountable in measuring, monitoring and mitigating potentially harmful capabilities and associated effects that may arise from their application.
The idea is that, once the risks have been identified, progress can be made on policies in the different countries that guarantee safety and the public good. To this end, the signatories have proposed the establishment of an international research network on AI safety.
ANNUAL MEETINGS
The participating countries have determined, as part of this declaration, that in order to make further progress on safety, they will meet in person every year, with France as the next venue for the summit. Before that, in six months, South Korea will co-host a smaller-scale virtual summit on AI.
“This ensures a lasting legacy of the Summit and continued international action to address AI risks, including informing national and international risk-based policies in these countries,” the release noted.
THE LIST OF COUNTRIES
In alphabetical order, the countries and organizations that signed the Bletchley Declaration are listed below:
- Australia
- Brazil
- Canada
- Chile
- China
- European Union
- France
- Germany
- India
- Indonesia
- Ireland
- Israel
- Italy
- Japan
- Kenya
- Kingdom of Saudi Arabia
- Netherlands
- Nigeria
- The Philippines
- Republic of Korea
- Rwanda
- Singapore
- Spain
- Switzerland
- Türkiye
- Ukraine
- United Arab Emirates
- United Kingdom of Great Britain and Northern Ireland
- United States of America