At the time of writing, 1,377 people have signed the document 'Pause Giant AI Experiments: An Open Letter', which proposes a moratorium of at least six months on the development of artificial intelligence. The proposal is undoubtedly motivated by the rapid acceleration of artificial intelligence tools such as Midjourney, ChatGPT, or Abbrevia.me, and it responds, in essence, to the need to manage risk.
To any even minimally informed observer, AI presents significant threats and vulnerabilities. The document identifies the acceleration of technological competition as the greatest of these inherent threats. Furthermore, a hypothetical super-AI could pose systemic risks to our rights and freedoms and to democracy itself.
However, none of the considerations presented in this letter is even remotely new.
If the public and private sectors had instead taken the necessary steps to ensure compliance with existing regulations, this letter would not have been necessary. First and foremost, it is simply undeniable that technology must respect fundamental rights, at least in those states that define themselves as democracies. If the risk is so pressing as to warrant a six-month moratorium on development, only one explanation is possible: it is acknowledged that development labs are testing and/or launching products whose operation or use violates the set of values, principles, and rights that underpin our democracies and that should not be subject to commercialisation. We have ample examples of this in Cambridge Analytica or Team Jorge.
Why do scientists, philosophers, and jurists worry about a super-AI? The terrifying thought experiment offered by Max Tegmark in
Life 3.0, whose first chapter depicts a counterintuitive utopia-dystopia, explains this very well. A company designs a general-purpose intelligence that governs the world from an ethical standpoint of peace, justice, sustainable development, and fundamental rights by predetermining, promoting, and manipulating an entire generation of political professionals. A long-standing dream of humanity is achieved, and yet the example causes unease.
It demonstrates that we would need a new divinity, ‘the AI’, to predetermine us through probabilities that arise from a set of correlations that we were never able to design, intuit, or apply, but that are trivial for the machine.
It is essential to understand that the risks identified in the Open Letter are rooted in the business model that emerged from the late 1990s until the approval of the General Data Protection Regulation. Midjourney, ChatGPT, or Abbrevia.me have done nothing different from what Google, Facebook, Amazon, and dozens of social networks and mobile applications did before: they simply acted faster and more visibly. 'Move fast and break things' was the philosophy that led to this crisis. The competitive advantage of many AI companies rests on something as simple as having processed our data for decades 'to improve the user experience': under the benign umbrella of what Morozov called a semblance of the welfare state, a highly onerous contract was offered to us as a gift.
Every email and every search was rigorously analysed; every keystroke on our phones, every 'like' we gave, every audio message we dictated and every error we corrected, every step taken and every heartbeat recorded fed data into, and gave life to, the current generation of single-purpose artificial intelligence. Moreover, not long ago we discovered that someone was listening on the other side of our voice assistants, without ever having asked our permission to take part, as research subjects, in a language-analytics laboratory.
The comparison between this reality and the other great scientific achievement of the late twentieth century is significant. When we faced genetic engineering and biotechnology in the 1990s, the risk was evident. The mere suggestion of the possibility of creating chimeras and manipulating what defines us as human beings stirred our consciences. Thousands of years of religious and ethical tradition had defined very precisely what we understood by the concept of 'human'. This led us to establish a legal framework, the Oviedo Convention, which has inspired practice in this field.
In this area, it is worth emphasising the fundamental role of university research and of regulations that subject both basic and applied research to ethical oversight and controlled trials.
Unlike biomedical research, and regardless of the contribution of university research, the deployment of information and communication technologies has been highly dependent on entrepreneurial initiative. It is a business model in which investment rounds for startups, or growth targets in established companies, depend heavily on the timing of deploying and releasing a product to the market. Indeed, one must resort to analytical frameworks such as Gartner's Hype Cycle to understand the degree of maturity, disruption, and implementation of a technology.
The development of AI seems to have abandoned the purely academic context. Among the top ten conclusions of the 'Artificial Intelligence Index Report 2023' of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the first is that industry has taken over this sector and is far ahead of academia. In 2022, industry produced 32 significant machine learning models compared to only three from academia. And so, while academia applies the lessons learned from ethical biomedical research to both research design and oversight by ethics committees, industry applies criteria of ethical and legal self-regulation.
In genetics, our societies can visualise what defines the essence of being human, but the same has not happened in the areas of privacy or AI, which escape our mental frame of reference.
In a society whose users were unwilling to pay even one euro a year to avoid the monetisation of their data in the early version of WhatsApp, we never reached the maturity or the political will to promote an international framework on privacy, despite the efforts of the Spanish Data Protection Agency and of data protection authorities around the world, such as the Madrid Resolution (2009), which promoted global privacy standards, or the creation of a United Nations special rapporteur in 2015.
With our inaction, we have encouraged a race to accumulate and exploit personal information hoarded by very few providers. We are approaching three decades of monetising our data within a framework of asymmetry, one that Paul Schwartz emphasised as early as 1998, in which the individual always loses and where the 'explicit consent' of the GDPR is utterly ineffective.
This process has not only allowed certain companies operating in a quasi-monopoly regime in their segment of information society services to take off, but has also let them build the hardware (cloud), software (data analytics), and information infrastructure that has given them an advantage in deploying AI models devoid of any oversight. These companies, unlike their competitors, have more than enough resources, as Viktor Mayer-Schönberger notes in Access Rules (2022), to comply with the current regulatory framework, thanks to the financial muscle they acquired when regulations were weak. We must understand that this period has fed the very problem we face today in the field of artificial intelligence.
On the other hand, as professors Luis Moreno and Andrés Pedreño have pointed out, the deployment of AI occurs in a competitive environment that has recently taken on clearly geopolitical overtones. In fact, this has been one of the arguments used by prominent industry leaders against calls to halt development. It occurs in an asymmetric context in which the European Union is lagging. The United States leads in a generously lax, if not fluid, interpretation of the legal requirements applicable to information and communication technologies. Meanwhile, China is deploying its efforts through a state capitalism that combines significant monopolies over data sources with total social control, for which AI's predictive capacity is highly relevant. According to David Yang (AI-tocracy), the reinforcement is mutual: the state boosts investment in this technology, consolidating its capacity for control, and this places five of the country's companies in the top five positions in facial recognition AI (Harvard Gazette).
Meanwhile, the European Union positions itself as a regulatory giant and champion of fundamental rights, extraordinarily slow in its legislative process (although there is now a sense of urgency) and constrained by its own regulatory interpretation, which discourages research, innovation, and entrepreneurship.
The Open Letter that motivates this article is built on alarm and risk identification, and it finds only one solution: to press the panic button. Yet for quite some time, a sector of researchers has been applying risk-based design methodologies. In this approach, the relevant question is how, and the answer is twofold. First, obviously, to comply with the law. Thus, the first declaration of the Digital Rights Charter, promoted by the Spanish government, affirms the principle of the rule of law. Secondly, as the European Parliament pointed out, and as the Proposal for an AI Regulation incorporates, evolving legislation is necessary. In fact, the authors of our Charter, rather than a structure of principles or values, opted for a clearly proto-normative one that sent a clear message: there is no time; our speed and that of digital technologies are asymmetric; we cannot wait, in either the legal or the social sphere; and, as was expressly stated during the presentation of the document, we cannot leave anyone behind.
And we do not lack models. The 1981 Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, along with the regulatory framework for clinical trials, are clear examples of where we should be heading.
These rules create very precise legal and ethical ecosystems, open to self-regulation and to individual and business commitment, but they also define a framework of state protection guaranteed by independent administrative authorities with enforcement powers and judicial guarantees. It is no coincidence that data protection authorities in countries with solid data protection frameworks, such as Canada or Italy, have opened investigations into ChatGPT. This is perfectly legitimate. What is worrying, however, is that they are not promoting generalisable frameworks and collaboration, and that no concerted, predictable approach aimed at ordering the compliance conditions of the industry as a whole can be perceived.
Therefore, we need international and European laws capable of governing the design and use of artificial intelligence, together with a set of social policies ensuring that the weakest members of society are not sacrificed, that the rules are not bent, and that the resources of state powers and public liberties are not eroded in this revolution. We need AI for the common good, and this will require new ways of understanding work, business, society, and our democracies as a whole. Managing the deployment of AI is a common challenge for humanity that must be addressed with urgency. Proclamations and statements of principles are not enough; nor is simply warning about the risks, imposing a moratorium, or practising self-restraint. It is time for decisive action by public authorities to guarantee our rights and to advance international and national regulation.