How can OSINT be used for the enforcement of the EU AI Act?
These are my preliminary thoughts on the potential of OSINT for policy enforcement. I would be very happy to discuss this idea further.
Executive Summary
This policy memo explores the potential of Open-Source Intelligence (OSINT) for the enforcement of the EU AI Act. Drawing a comparison between monitoring AI development and existing OSINT efforts in nuclear nonproliferation, this memo advocates for civil society organisations to support compliance with the EU AI Act through OSINT. Despite its limitations, OSINT is a useful method for supporting the regulation of high-risk systems in particular, and it should become part of the AI policy enforcement toolbox.
Background
Open-Source Intelligence (OSINT) is a popular method of legally gathering information from publicly available sources and analysing it. OSINT is used by individuals and organisations to identify potential threats, conduct investigations, and make informed decisions. OSINT pairs well with policy enforcement, as it enables interested parties to map the field and raise the alarm if gathered intelligence contradicts public commitments.
OSINT has already been successfully used to monitor the spread of nuclear technologies and to track compliance with nonproliferation treaties. Nuclear proliferation leaves traces and cues that can be exploited through open-source analysis to gain insights into a given nuclear programme.1 The Nuclear Threat Initiative found that ‘the use of publicly available data and analysis is effective at identifying high-risk nuclear trade’.2 Other civil society initiatives, such as the project Arms Control Wonk, confirm this. The deployment of OSINT for nuclear nonproliferation thus serves as a useful benchmark for testing the potential of OSINT in AI policy enforcement.
Despite the popularity of OSINT in other domains, its potential for monitoring the development of AI has not yet been explored. Rather, the literature has focused on the power of AI to boost the conduct of OSINT itself. This memo therefore outlines how OSINT can be used as one of the tools for AI policy enforcement. According to senior American intelligence figures, ‘upwards of 80 percent of intelligence needs can be met by OSINT.’3 A massive amount of information is publicly available, and data reflecting progress in the AI domain should be harnessed by governments and civil society organisations for good.
Key Findings and Recommendations
The development and deployment of AI has often been compared to nuclear technologies. While this metaphor can be misleading in some respects, it is quite productive for OSINT: both AI and nuclear technology are developed under high secrecy, and both rely on infrastructure and hardware that can be tracked.
While secrecy surrounding these potentially dangerous technologies is understandable, secrecy does not guarantee security.4 The susceptibility of AI models to jailbreaks, adversarial attacks, and other security challenges has not yet been remedied. Under the EU AI Act, providers of high-risk AI systems are required to report serious incidents caused by their AI systems. However, the EU AI Office responsible for the enforcement of the EU AI Act is expected to have only approximately 100 employees,5 and the capacities of national authorities are likely to be similarly limited. Monitoring all the key developments in the sprawling AI industry will be challenging. Greater security and compliance with the EU AI Act might therefore be achieved if civil society actors contribute to monitoring through OSINT.
The traceability of hardware has been proposed as an effective measure to ‘gain visibility into AI development and use’.6 Governments are already trying to take the lead and restrict access to powerful hardware, such as advanced semiconductors, through new legislation.7 To support this effort, civil society could monitor GPU clusters to identify potential threats. The Centre for the Study of Existential Risk (CSER) has already attempted to monitor the data centres of technology companies through open-source optical imagery.8 As one of the metrics used in the EU AI Act to identify potentially dangerous models is training compute (stricter rules apply to models trained with more than 10^25 FLOPs), tracking the hardware itself is a good proxy for monitoring AI development and compliance with the EU AI Act.
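To illustrate how the 10^25 FLOP threshold can be checked from publicly observable figures, the sketch below uses the common heuristic that total training compute is roughly 6 × parameters × training tokens. This is a minimal sketch: the heuristic is an approximation, and the model sizes used are hypothetical examples, not figures for any real system.

```python
# EU AI Act threshold above which general-purpose models face stricter rules.
EU_AI_ACT_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute using the standard 6ND heuristic."""
    return 6 * n_parameters * n_training_tokens


def exceeds_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """Check whether an estimated training run crosses the 10^25 FLOP threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= EU_AI_ACT_THRESHOLD_FLOPS


# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Exceeds 10^25 FLOP threshold:", exceeds_threshold(70e9, 15e12))
```

In practice, an OSINT analyst would work backwards from observable proxies instead, such as the size and duration of a GPU cluster deployment, but the same threshold comparison applies.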
While OSINT has great potential to support policy enforcement, it also raises some security concerns. Information on AI development gathered through OSINT could be misused by actors whose agendas are unrelated to effective policy enforcement. Therefore, organisations conducting OSINT in this domain must exercise a duty of care and adopt responsible internal policies to prevent misuse and keep potentially dangerous findings confidential. In addition, OSINT should not be considered a replacement for other forms of monitoring or for AI companies' legal obligations to submit information to the responsible regulatory bodies. Instead, its role should be viewed as supplementary: another compliance monitoring tool in the policy enforcement toolbox.
European Commission – JRC. "The Role of Open Source Information for Non-Proliferation." JRC Scientific and Policy Reports, 2016. Publications Office of the European Union. https://publications.jrc.ec.europa.eu/repository/bitstream/JRC102286/jrc102286_os_for_nonproliferation.pdf.
Dumbacher, Erin, et al. "Signals in the Noise: Preventing Nuclear Proliferation with Machine Learning & Publicly Available Information." NTI, 2021. https://www.nti.org/analysis/articles/signals-in-the-noise-preventing-nuclear-proliferation-with-machine-learning-publicly-available-information/.
Hobbs, Christopher and Matthew Moran. "Armchair Safeguards: The Role of Open Source Intelligence in Nuclear Proliferation Analysis." In: Hobbs, C., Moran, M., Salisbury, D. (eds) Open Source Intelligence in the Twenty-First Century. New Security Challenges. Palgrave Macmillan, 2014. https://doi.org/10.1057/9781137353320_5.
Alvara, Sanaa. "Nuclear Secrecy: A Case for Lifting the Veil." Center for Arms Control and Non-Proliferation, 2023. https://armscontrolcenter.org/nuclear-secrecy-a-case-for-lifting-the-veil/.
Uuk, Risto. "The EU AI Act Newsletter #45: AI Office in the European Commission, and More." The Artificial Intelligence Act, Substack, 2024.
Heim, Lennart et al. "Computing Power and the Governance of AI." Centre for the Governance of AI, 2024. https://www.governance.ai/post/computing-power-and-the-governance-of-ai.
European Commission. "European Chips Act." European Commission, 2022. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/european-chips-act_en.
Krawec, Christina. "A New Org Announcement: Would Your Project Benefit from OSINT?" Effective Altruism Forum, 2024. https://forum.effectivealtruism.org/posts/qqStyYJsTLgJJZmBr/new-org-announcement-would-your-project-benefit-from-osint?utm_source=substack&utm_medium=email.