WORLDCOIN

Biometrics: A goldmine approaching the frontier of privacy

Ricard Martínez

9 mins - 12 September 2023, 07:00

Both X, the social network formerly known as Twitter, and Worldcoin are clearly committed to biometrics. The objective is not new, and the forms and consequences are well known. Beneath the layer of an apparently necessary and even benign service, a race to stake claims in an emerging market has been unleashed. It is nothing new: it is the monetisation of users' privacy in exchange for value-added services. The obligatory question is obvious: will the policies of the European Union and its data protection authorities once again arrive too late?

As Evgeny Morozov lucidly pointed out, these initiatives are presented to the user with a benevolence verging on that of the welfare state. There are two arguments for their rapid implementation that the population finds easy to digest. The first, and most obvious, is security. A fingerprint or facial recognition is a simple, practical, and direct instrument to implement and, in fact, has been used for some time in online contracting processes in electronic banking and communications. The next logical step is its generalisation.

From a common-sense point of view, no one could object to having a verified account. Who would not want an instrument that prevents fake accounts and makes it possible to identify and prosecute haters? In a complementary way, Worldcoin has even added a humanitarian dimension to its initiative. The aim is to provide a standard of trust that will make the internet a safer place.


In this equation it is essential to include a crucial subjective element: convenience. We all hate username-and-password or PIN validation. It forces us to memorise keys or to keep some method for tracking them. Some people rotate three passwords; others use methods based on elements of the registration environment, standardised combinations of syllables that make sense to them, the list of the Gothic kings... But in the end, even for the most demanding, there is a limit to our tolerance for our own errors. We detest the three attempts; we hate having forgotten our password. Moreover, ordinary mortals simply use the same email account and the same password everywhere, and this poses a major security risk. Readers can visit "Have I been pwned?" and check whether their email account and password have been compromised. They are in for a bad time if the email and passwords they use at work appear on the list of compromised information.
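
How such a check works is easy to illustrate. The minimal Python sketch below queries the same service's public Pwned Passwords range API, which relies on k-anonymity: only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves the machine. The function name and sample password are illustrative, not part of any official client.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many known breaches a password appears in, via the
    Have I Been Pwned range API (k-anonymity: only a hash prefix is sent)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash suffix>:<breach count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("password123")  # illustrative weak password
    print(f"Seen in {hits} breaches" if hits else "Not found in known breaches")
```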

Since the risk of this having happened is very high, additional policies have been adopted. It has become necessary to use two-factor validation systems built, for example, on something you are, something you have, or something you know. This means that after the first access you must use an electronic signature, receive a PIN on your mobile phone, and so on. To make these policies effective, employees and civil servants have been forced to use their own means to validate their identity, such as a private mobile phone number, a practice that borders on the limits of legality.
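
As an illustration of the "something you have" factor, here is a minimal sketch of the time-based one-time password (TOTP) algorithm of RFC 6238, which underlies most authenticator apps and phone-generated PINs. The base32 secret shown is a well-known documentation example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # 30-second time window
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # documentation example secret
```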

Finally, the low uptake of electronic signature procedures is evident. In this context, "convenience is the key". Google and Facebook discovered this long ago. Hence, they have operated informally as trusted third parties in access and registration processes. Biometrics is the next step, and like any technology it is not without risks: poor-quality programming can lead to errors that expose threatening vulnerabilities. Even so, whatever procedure is minimally robust, simple, and user-friendly is likely to become the most widely implemented.
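
Technically, this informal trusted-third-party role rests on the OAuth 2.0 / OpenID Connect authorization-code flow behind every "sign in with..." button. The following is a minimal sketch of its first step, building the authorization request; the endpoint, client identifier, and redirect URI are hypothetical placeholders, not real registration values.

```python
import secrets
from urllib.parse import urlencode

# Hypothetical values; real ones come from the provider's developer console.
AUTH_ENDPOINT = "https://accounts.example-provider.com/o/oauth2/auth"
CLIENT_ID = "my-client-id"
REDIRECT_URI = "https://myapp.example.com/callback"

def build_login_url() -> str:
    """Build an OAuth 2.0 / OpenID Connect authorization-code request,
    the first step of a 'sign in with Google/Facebook'-style login."""
    state = secrets.token_urlsafe(16)  # CSRF protection, checked on callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email profile",
        "state": state,
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

print(build_login_url())
```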

At this point we cannot forget the lessons already learned. There is probably a lot of money at stake and a race to become a dominant market player. Nobody gives you five cents for four, as my grandmother used to say. If a company offers us security "for free", out of sheer goodwill, or even pays to process our image, we should be very demanding in verifying its true intentions. This diligence is necessary even when the model is genuinely altruistic and complies with the strictest legality. And this duty of diligence and verification can be demanded ex officio from the authorities that watch over our rights in the fields of competition, consumer protection, and data protection. From a risk perspective, there are many reasons that underscore this need.

The first of these concerns the definition of European reference frameworks for the provision of services linked to electronic identity authentication. The applicable regulations, which have been evolving over the last decade, have incorporated one constant from the outset: the system requires that the entities operating in the market act as trusted third parties. This implies considerable investment in robust application design, security, and data protection. Experience shows that the emergence of a disruptive technology or business model can kill entire markets. The deployment of new business models in this sector should therefore be subject to the rules. We should not be fooled once again by the highly profitable argument of innovation without rules and generosity bordering on a giveaway.

There is a second, well-known dimension: to what jurisdiction are the companies offering these services subject? Although I hope this prediction is wrong, we are well on our way to Schrems III, and the lawyer who gave those cases their name will not be far behind. The Court of Justice of the European Union has repeatedly called into question the guarantee of our fundamental right to data protection in the United States. Let us avoid hypocrisy here: the list of questionable countries would be rather long. There has simply been no controversy over the others.

From a risk perspective, we should ask ourselves what the implications will be if a company handles millions of biometric identifiers under rules such as FISA or the Cloud Act. An additional element must be underlined: the identification takes place in a concrete context. It is performed from a terminal that can be geolocated and linked to complementary identifiers such as IP, MAC, SIM, or IMEI numbers. It can also be correlated with cookies, fingerprints, or session traceability. From the point of view of national security and intelligence, this does not seem a negligible resource.

Finally, biometric data fall within the special categories of data. Without entering into the distinction between high-level and low-level biometrics, one element is indisputable: an element physically linked to a human person is used for a specific purpose. Nobody feels comfortable when a friendly officer at a police station helps us provide our fingerprints. But on the internet it is a different matter. There are no uniforms and no obvious coercion; it is a cordial affair, and someone is even willing to pay. That is precisely why the General Data Protection Regulation incorporates very precise safeguards, the best known being the risk analysis for rights and freedoms and the data protection impact assessment. Before offering a service such as the one in question, the data controller must analyse the risks and impacts of the technology.

From this last point of view, essential questions arise. Regardless of the fact that Organic Law 3/2018 of 5 December on the Protection of Personal Data and the Guarantee of Digital Rights allows the processing of biometric data with consent, there are other things to consider. When the European Data Protection Board analyses the implications of the use of facial recognition for law enforcement purposes, it details certain procedures that we should bear in mind. Firstly, the legal system as a whole should be taken into account. Processing biometric data simply because the user has consented is a very weak legal basis. Let us not forget that previous disruptive markets were characterised by monopolistic scenarios and an asymmetry in the position of the user. Moreover, as has been pointed out, if the objective is to offer services in the market as a trusted third party, consent by itself would be an insufficient basis.

It is also essential to apply a proportionality test, which begins with a requirement of normative predetermination. Given that fundamental rights are at stake, it is doubtful that the absence of a law authorises the processing. Moreover, it should be recalled that the measure must be fit for purpose, be the least invasive of all possible measures, and pass the public interest test. Reviewing whether this task has been undertaken seems more than sufficient to justify an ex officio audit by the data protection authorities. It may well be that all the tests would be passed. Even then, questions would remain about the process and about the implications when the technology is embedded in artificial intelligence environments, to which the latest version of the AI Act pays significant attention.

Lessons learned in the social media realm should be applied here. We cannot demonise innovation or artificially limit it. But neither can we wait for the first failure. Doing nothing leads to all too familiar results. When the judge or authority fails to act, entire markets can be wiped out. The disruptive entrepreneur occupies a position of dominance and accumulates ample resources to deal with any kind of liability, and eventually, when he enters the path of compliance, he does so in a territory without competition. On the other hand, reactive action leads to the imposition of regulatory barriers that subsequently prevent European companies from coming into being and competing. Will we make the same mistakes again?
 
The original article can be read in Spanish.
