From two-dimensional systems to 3D algorithms to skin biometrics: facial recognition has reached an extraordinary degree of accuracy. Is it possible to deceive it?
From Mission: Impossible to airport security systems and the software that unlocks our devices and authorizes online payments, facial recognition has become one of the best-known technologies of the digital age. It all seems simple: a camera captures a face, the software compares it with a database and identifies the person in a few seconds. The reality, however, is more complex than scrolling through images in search of the perfect “match”.
The path that led computers to “see” like human beings began in the 1960s, when the first experiments tried to extract geometric information from facial features. Since then the field has made enormous strides, evolving from two-dimensional systems to 3D algorithms and skin biometrics, which today allow a degree of accuracy that was unthinkable just twenty years ago.
Nodes of the face. At the core of traditional facial recognition is the idea that each face has a set of unique, measurable points. The first companies in the sector, such as Identix, called these references “nodal points”: around 80 landmarks, including the distance between the eyes, the width of the nose, the shape of the cheekbones and the depth of the eye sockets, to name only the simplest to identify. The software first detects the face in an image, then separates it from the background and finally measures these points, transforming them into a numerical code, the so-called “faceprint”.
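The idea of turning nodal points into a numerical code can be sketched in a few lines. This is a toy illustration, not any vendor's actual algorithm: the landmark names and coordinates are invented, and real systems use many more measurements.

```python
import math

# Hypothetical 2D landmark coordinates (in pixels) for a detected face.
# The keys and values are purely illustrative.
landmarks = {
    "left_eye":  (120.0, 95.0),
    "right_eye": (180.0, 96.0),
    "nose_tip":  (150.0, 140.0),
    "chin":      (151.0, 200.0),
}

def dist(a, b):
    """Straight-line distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def faceprint(lm):
    """Encode a face as ratios of landmark distances.

    Dividing every measurement by the inter-eye distance makes the
    code independent of how large the face appears in the image.
    """
    eye_span = dist(lm["left_eye"], lm["right_eye"])
    return [
        dist(lm["nose_tip"], lm["chin"]) / eye_span,
        dist(lm["left_eye"], lm["nose_tip"]) / eye_span,
        dist(lm["right_eye"], lm["chin"]) / eye_span,
    ]

def match_score(fp1, fp2):
    """Euclidean distance between two faceprints: lower = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fp1, fp2)))
```

Because the code stores ratios rather than raw pixel distances, the same face photographed closer to or farther from the camera yields the same faceprint, which is exactly why such codes can be compared across images.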
Obstacles. For years this approach worked only in ideal conditions: controlled lighting, a frontal view and very few variations in expression. Small changes in perspective or light were enough to throw the comparison off track, and it was precisely to overcome these limits that three-dimensional models were born.
From science fiction to reality. 3D technology captures the real geometry of the face by exploiting curves and reliefs that are more stable over time, such as the contour of the eyes, nose and chin. A 3D system can recognize a face even in the dark, because it is not based on color but on depth, and it can do so even if the head is tilted up to 90 degrees. After detecting the face, the algorithm aligns it in space, measures the surfaces with sub-millimeter precision and generates a model that is encoded and compared with those present in the database.
The catch is that many databases still contain only 2D images: for this reason modern software “flattens” the three-dimensional model with dedicated algorithms to make it compatible with existing archives.
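The two steps described above, aligning a tilted head in space and then flattening the 3D model into a 2D image, can be sketched with elementary geometry. This is a minimal illustration under invented coordinates; real systems work on dense surface scans, not four points, and use more sophisticated projections.

```python
import math

# Hypothetical 3D landmark (x, y, z) coordinates in millimetres,
# purely illustrative: z is depth, with the nose most prominent.
face_3d = [
    (0.0, 0.0, 55.0),     # nose tip
    (-32.0, 25.0, 30.0),  # left eye corner
    (32.0, 25.0, 30.0),   # right eye corner
    (0.0, -60.0, 35.0),   # chin
]

def rotate_y(points, angle_deg):
    """Rotate a 3D point cloud about the vertical axis, e.g. to bring
    a turned head back to a frontal pose before comparison."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def flatten(points):
    """Orthographic projection: drop the depth coordinate to obtain a
    2D representation compatible with legacy 2D databases."""
    return [(x, y) for x, y, _ in points]
```

A head turned 30 degrees could be normalized with `rotate_y(face_3d, -30)` before flattening, which is the sense in which depth data lets a system tolerate poses that defeat purely 2D matching.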
Added to this is skin biometrics, a technique that analyzes pores, lines and micro-textures and can differentiate even monozygotic (identical) twins, further improving the accuracy of recognition.
Applications. Today this technology is used in many contexts: from immigration (where it compares travellers’ photos and fingerprints) to airport security systems, from corporate access control to banking services (which verify identity without documents or PINs). Some companies even use it to record employee attendance and working hours. As the software grows more effective, however, privacy concerns grow with it.
Risks. Many fear unauthorized use in public places, the risk of errors affecting innocent people and the possibility of identity theft, especially since these systems can work without the user realizing it. As facial recognition continues to improve, its impact on society will depend on how governments and companies balance innovation, security and the protection of individual rights.
