An International AI Treaty is a global agreement that seeks to regulate the development, deployment, and governance of artificial intelligence (AI) technologies across nations.
There has been growing momentum towards creating such a treaty, driven by concerns over the rapid advancement of AI technologies, their ethical implications, and the potential for misuse.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius.
It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.
International AI Treaty
The international AI treaty, formally called the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, was drawn up by the Council of Europe.
The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States of America, and the European Union.
- The treaty provides a legal framework covering the entire lifecycle of AI systems.
- It promotes AI progress and innovation while managing the risks AI may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral.
- The Framework Convention was adopted by the Council of Europe Committee of Ministers on 17 May 2024.
- The 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay) negotiated the treaty.
- Representatives of the private sector, civil society and academia contributed as observers.
- The treaty will enter into force on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it.
- Countries from all over the world will be eligible to join it and commit to complying with its provisions.
Key aspects of the treaty
- Human-Centric AI: The treaty mandates that AI systems must be designed and operated in alignment with human rights principles, ensuring they support and uphold democratic values.
- Transparency and Accountability: The treaty stipulates that AI systems, particularly those interacting with humans, must operate transparently. It also requires governments to provide legal recourse when AI systems infringe on human rights.
- Risk Management and Oversight: The treaty establishes frameworks for assessing and managing the risks associated with AI and oversight mechanisms to ensure adherence to safety and ethical standards.
- Protection Against Misuse: The treaty incorporates safeguards to prevent AI from being used to undermine democratic processes, including the preservation of judicial independence and ensuring public access to justice.
Enforcement Mechanisms
- Legal Accountability: Signatory nations are required to enact legislative and administrative measures to ensure AI systems adhere to the treaty’s principles, such as human rights protections and accountability in AI deployment.
- Monitoring and Oversight: The treaty establishes oversight mechanisms to monitor compliance with AI standards.
- International Cooperation: The treaty promotes collaboration among signatories to harmonise AI standards, share best practices, and address transnational AI issues, recognizing the global nature of AI technologies.
- Adaptability: The framework is designed to be technology-neutral, enabling it to evolve alongside advancements in AI, ensuring that standards remain relevant and enforceable as AI technologies rapidly progress.
- Exception in the Treaty: The treaty applies to all AI systems except those used in national security or defence, though it still requires that these activities respect international laws and democratic principles.
Need for an International AI Treaty
- Ethical Concerns: The development of AI technologies has raised questions about privacy, bias, and accountability. An international treaty could establish ethical guidelines for ensuring that AI systems respect human rights, prevent discrimination, and avoid harmful consequences.
- AI in Warfare: The use of AI in military applications, such as autonomous weapons, is a contentious issue. A treaty could address the risks of AI-driven warfare, promoting peaceful use and preventing an AI arms race.
- Standardization of Regulations: Currently, countries are developing their own AI regulations, leading to fragmented approaches. A treaty could help create uniform standards, ensuring that AI development is aligned with international norms and that systems developed in different countries are interoperable and ethically governed.
- Transparency and Accountability: Governments and corporations developing AI could be required to maintain transparency in their processes and be held accountable for their AI systems’ societal impact. This would be crucial for trust-building.
- Global Collaboration and Research: A treaty could promote the sharing of AI research, encouraging cooperation across countries, while safeguarding sensitive information related to national security or proprietary technology.
Global Attempts Towards an International AI Treaty
- UNESCO adopted recommendations on AI ethics in 2021, marking the first global framework that governments could follow in creating national regulations.
- OECD Principles on AI (2019): These principles provide a basis for trustworthy AI, focusing on inclusive growth, human-centred values, transparency, and accountability.
- EU AI Act: The European Union’s AI Act classifies AI applications based on their risk levels and could serve as a model for an international treaty.
AI regulations in India
While India does not have specific AI laws, the regulation of AI is currently embedded within broader data protection, IT, and sector-specific laws. Further AI-specific regulations may emerge soon, as AI becomes more integrated into governance and society.
Data Protection Laws:
- Digital Personal Data Protection Act (DPDP) 2023: India has introduced the DPDP Act, which governs the collection, processing, and storage of personal data. AI systems, especially those that use personal data, are required to comply with these provisions.
- IT Act, 2000: This law provides a legal framework for electronic governance and cybersecurity. It indirectly regulates AI, especially in cases related to online fraud, cybersecurity threats, and unauthorized use of AI technologies.
AI Ethics Guidelines:
- NITI Aayog’s AI Strategy (2018): In its report, “National Strategy for Artificial Intelligence,” NITI Aayog laid out a roadmap for AI adoption in India, emphasizing ethical AI usage, privacy, and security considerations. However, these are more policy guidelines than enforceable laws.
- Responsible AI for All: NITI Aayog also emphasizes a human-centric approach to AI that aligns with global norms on ethical use, fairness, and inclusivity.
Sector-Specific AI Regulations:
- Healthcare: The Indian Council of Medical Research (ICMR) has guidelines for the ethical use of AI in medical research, ensuring AI is used responsibly in diagnostics and patient care.
- Finance: AI use in financial services, such as fintech, is regulated by the Reserve Bank of India (RBI) guidelines, particularly concerning data protection, fraud detection, and automated credit decision-making.
Ongoing Developments:
- AI Legislation in Progress: India is reportedly working on sectoral regulations and a more comprehensive policy framework for AI, which could include laws specifically tailored to address AI-related concerns such as liability, transparency, and ethical usage.
- Expert Committees: The Ministry of Electronics and Information Technology (MeitY) has set up committees to explore how AI can be regulated in various sectors, considering ethical, legal, and social implications.
Challenges
- Geopolitical Tensions: AI is seen as a strategic asset by many countries, making it difficult to reach a consensus on restrictions that could limit national advantages.
- Differences in AI Capabilities: Countries with advanced AI capabilities may be reluctant to sign onto a treaty that could hamper their technological edge, while developing nations might push for more equitable access to AI technology.
- Enforcement: Ensuring compliance with an international treaty on AI would be challenging, particularly in areas like cyber defence, where nations may be unwilling to fully disclose AI capabilities.
Conclusion
Although discussions around an international AI treaty are still in the early stages, efforts by global organizations and governments point to increasing recognition of the need for a coordinated approach to governing AI responsibly.
Over time, we might see frameworks similar to nuclear non-proliferation or climate change agreements, setting boundaries on AI development and deployment at a global scale.
Frequently Asked Questions (FAQs)
Q. What are the international laws for AI?
Ans: The first international AI treaty was opened for signature on 5 September 2024, with signatories including the United States, Britain, and European Union members. It aims to protect human rights and promote responsible AI use, though experts have raised concerns about its broad principles and exemptions.
Q. What is the European Union AI treaty?
Ans: It promotes AI progress and innovation while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral. The Framework Convention was adopted by the Council of Europe Committee of Ministers on 17 May 2024.
Q. Does India have an AI law?
Ans: As of 2024, India does not have comprehensive, standalone laws specifically governing Artificial Intelligence (AI). However, AI-related issues are regulated indirectly through a variety of existing legal frameworks, guidelines, and policies.
Article by Swathi Satish