
Hated in the Nation - Are We That Close to a Black Mirror Dystopian Future?

When considering facial recognition and digital identification, our minds naturally turn to our smartphones. 

"Bring your face closer to the screen to unlock the device," "Enable Face ID to proceed with payment," "Fingerprint not recognized, enter the code": how often do we read such notifications each day? They accompany routine activities facilitated by these tools.

This technology traces back to the work of Alphonse Bertillon, who developed the first scientific biometric identification method for the Paris police in the 1880s. From that system to the introduction of the now-famous "Touch ID," followed by "Face ID," progress has been swift. Over the last 100 years, and particularly the last 20, technology has advanced significantly to simplify human life. Yet these simplifications raise numerous legal and other questions.

Facial recognition and digital identification today represent some of the most advanced and controversial emerging technologies in the digital age. By analysing and interpreting human facial features, these technologies enable the identification, authentication, or tracking of individuals in various contexts, from mobile device access to public surveillance and national security.

Facial recognition plays a crucial role in many sectors, including security, e-commerce, financial services, and healthcare. While development of these technologies was gradual until 2019, the COVID-19 pandemic accelerated their adoption, amplifying existing questions. This adoption has been driven by the increasing availability of digital data and the growing computational power of information systems.

New solutions, new problems

However, the pervasive use of facial recognition raises numerous legal, ethical, and social issues that require careful consideration and regulation. The foremost issue concerns privacy and the protection of personal data, particularly regarding the collection, storage, and use of citizens' biometric information. A notable public interest case is the 2019 report by the Financial Times on the installation of surveillance cameras capable of monitoring more than 270,000 sqm at King's Cross in London, following the previous year's installation of facial recognition cameras in the busy areas of Soho, Piccadilly Circus, and Leicester Square during the holiday shopping season. Authorities claimed these installations were "in the interest of public safety and to ensure the best possible experience for visitors."

Since 2018, the justifications initially offered for this technology have clashed with fundamental human rights. Concerns have been raised about the unregulated use of facial recognition, potential discrimination, and violations of fundamental rights, especially since the data is rarely processed directly by public administrations but rather by the third parties that own the relevant software.

Another interesting case involves Clearview AI, a U.S. company in the AI sector whose facial recognition software is used by thousands of agencies and law enforcement bodies worldwide for operations, investigations, and other activities, allowing images to be compared against the company's database. Through a $50,000 project, it has invested in research on augmented-reality glasses capable of scanning people's faces, to be used in protecting airports and military bases. The project aims to speed up checks and save airport staff time while increasing security. The controversy, however, lies in the company's plans. Clearview, which already holds the largest image database in the world, has announced that it is seeking further investment to reach 100 billion facial images within a year, an average of 14 images for every person on Earth.

Social and ethical impact

Ethical questions arise about transparency and accountability in the use of this technology, its potential impact on individual freedom, and mass surveillance. In countries with authoritarian regimes, such technologies are notoriously used for repression. Socially, facial recognition raises questions about shifting power dynamics and the potential for abuse by public authorities or private companies, increasing the risk that the technology will be used for surveillance and social control, undermining civil liberties and human rights. There are also concerns about errors and false identifications, with potentially serious consequences for those involved.

New perspectives from the law

Legislatively, there is a pressing need for stricter regulation that reconciles technological development, and the profits it brings companies, with the protection of citizens' "digital" rights. At the European level, regulations already exist to balance the use of facial recognition against the protection of individuals' fundamental rights. The European Union's General Data Protection Regulation (GDPR) provides a robust regulatory framework for the collection and processing of personal data, including the biometric data used for facial recognition.

Additionally, the European Convention on Human Rights (ECHR) and national privacy and data protection laws define the rights and duties related to the use of this technology, including the right to privacy, data protection, and non-discrimination. The most significant recent application of these rights is the case of Glukhin v. Russia (application no. 11519/20), in which the European Court of Human Rights held that the use of facial-recognition technology had breached the rights of a Moscow underground protester. However, the application of these regulations, and their adaptation to the challenges posed by facial recognition, remain topics of ongoing legal and social debate.

AI Act

With the approval of the AI Act, questions arise about how AI will affect judicial contexts. The act was welcomed with enthusiasm by many, but significant criticisms have also been raised.

The use of AI, particularly for facial recognition, has advantages in combating crime, but Amnesty International highlights the risks to human rights. The regulation seeks a balance between security and privacy, but its practical application will be crucial.

The AI Act offers limited guarantees to those most affected, including marginalised communities. It does not prohibit the use and export of dangerous AI technologies and does not adequately protect migrants, refugees, and asylum seekers. Furthermore, its provisions on accountability and transparency are inadequate, which could exacerbate human rights violations.

One of the reasons for the debate over the use of facial recognition in the judicial context has been the fight against terrorism and organised crime, phenomena often linked to mass immigration in Europe in recent years.

The AI Act allows facial recognition in certain sectors and prohibits it in others, defining biometric identification in Recital 15 as "the automatic recognition of physical, physiological, and behavioural characteristics of a person to determine their identity by comparing their biometric data with those of others stored in a reference database, regardless of the person's consent." Systems intended for biometric verification, used for authentication and access to services, are excluded.

Article 3, points 34 and 35 of the regulation define "biometric data" as personal data obtained from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person, such as facial images or fingerprint data, and "biometric identification" as the automated recognition of human physical, physiological, behavioural, or psychological characteristics to determine the identity of a natural person by comparing their biometric data with those stored in a database.
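The core operation the regulation describes, comparing a person's biometric data against templates stored in a database, can be illustrated with a minimal sketch. This is not any vendor's actual system: the names, the tiny three-number "embeddings," and the threshold are all hypothetical, standing in for the high-dimensional vectors a real face-recognition model would produce.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """One-to-many biometric identification (Article 3, point 35):
    compare the probe embedding against every enrolled template and
    return the best match only if it clears the similarity threshold."""
    best_id, best_score = None, -1.0
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

# Hypothetical enrolled templates (toy values for illustration only).
database = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.9, 0.3],
}
print(identify([0.88, 0.12, 0.21], database))  # close to alice's template
print(identify([0.5, 0.5, 0.5], database))     # below threshold: None
```

By contrast, the "biometric verification" that the act excludes is a one-to-one check: the probe is compared against a single claimed identity's template (as when a phone unlocks), not searched against a whole reference database.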

Building on this foundation, the new regulation will need to be refined at EU level to avoid divergent treatment across member states.
