Facial Recognition

Facial recognition is a technique used to identify or verify a person from a digital image or a video frame. Facial recognition systems work by combining several facial features and comparing them against stored data, and they can make security considerably stronger. Facial recognition is used for security purposes by the army and other forces, in smartphones, for ID verification, and in other biometric applications.

History

The pioneers of automated face recognition were Woody Bledsoe, Helen Chan Wolf, and Charles Bisson.

During 1964 and 1965, Bledsoe, together with Helen Chan and Charles Bisson, worked on using the computer to recognize human faces. The project was funded by an unnamed intelligence agency. Based on the available references, part of the work involved the manual marking of various landmarks on the face, such as the eye centers and the mouth; these coordinates were then mathematically rotated by the computer to compensate for pose variation. The distances between landmarks such as the mouth and the eyes were also computed automatically and compared between images to determine identity.

Given a large database of images and a photograph, the problem was to select from the database a small set of records such that one of the records matched the photograph. The success of the method was measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe described the following difficulties:

“This recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, and so on. Some other attempts at face recognition by machine have allowed for little or no variability in these quantities. Yet the method of pattern matching of unprocessed optical data, which is often used by some researchers, is certain to fail in cases where the variability is great. In particular, the correlation is very low between two pictures of the same person taken with two different head rotations.”

The project was labeled man-machine because a human operator extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a graphics tablet, the operator would extract the coordinates of features such as the centers of the pupils, the inside and outside corners of the eyes, the point of the widow’s peak, and so on. From these coordinates, a list of 20 distances, such as the width of the mouth and the distance from pupil to pupil, was computed. An operator could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances for a new photograph was compared with the corresponding distances for each photograph in the database, yielding a distance between the photograph and each record. The closest records were returned.

Because it is unlikely that any two pictures would match in head rotation, lean, tilt, and scale (distance from the camera), each set of distances is normalized to represent the face in a frontal orientation. To achieve this normalization, the program first tries to determine the tilt, the lean, and the rotation of the face. Then, using these angles, the computer undoes the effect of these transformations on the computed distances. To compute the angles, the computer must know the three-dimensional geometry of the head. Because the actual heads were unavailable, Bledsoe in 1964 used a standard head derived from measurements of seven heads.
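As a rough illustration of this distance-based matching (not Bledsoe's actual code; the landmark distances, names, and normalization below are invented for the example), a minimal sketch in Python might look like this:

```python
import numpy as np

# Hypothetical sketch of Bledsoe-style matching: each face is reduced to a
# vector of landmark-to-landmark distances, normalized so overall face size
# cancels out, and compared to every stored record by Euclidean distance.

def normalize(distances):
    """Scale a distance vector so scale (distance from the camera) cancels out."""
    d = np.asarray(distances, dtype=float)
    return d / np.linalg.norm(d)

database = {
    "person_a": normalize([62, 48, 35, 71, 55]),   # e.g. pupil-to-pupil, mouth width, ...
    "person_b": normalize([58, 52, 31, 69, 60]),
}

def closest_records(query_distances, db, top_k=1):
    """Return the database entries whose distance vectors best match the query."""
    q = normalize(query_distances)
    scored = sorted(db.items(), key=lambda item: np.linalg.norm(item[1] - q))
    return [name for name, _ in scored[:top_k]]

print(closest_records([61, 47, 36, 70, 56], database))  # -> ['person_a']
```

The key point mirrored from the historical system is that only a short vector of normalized distances is stored and compared, not the photograph itself.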

After Bledsoe left the facial recognition project in 1966, the work was continued at the Stanford Research Institute in the USA, primarily by a researcher named Peter Hart. In experiments performed on a database of over 2,000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). In 1996, Peter Hart enthusiastically recalled the project with the exclamation, “It really worked!”

By about 1997, a facial recognition system developed by Christoph von der Malsburg and graduate students of the University of Bochum in Germany and the University of Southern California in the United States outperformed most systems, with those of MIT (Massachusetts Institute of Technology) and the University of Maryland rated next. The Bochum system was developed through funding by the United States Army Research Laboratory. The software was later sold as ZN-Face and used by customers such as Deutsche Bank and operators of airports and other busy locations. It was robust enough to make identifications from less-than-perfect face views and could often see through impediments to identification such as mustaches, beards, changed hairstyles, and sunglasses.

In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests. The results indicated that the new algorithms were 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins.

In the meantime, U.S. Government-sponsored evaluations and challenge problems have helped spur over two orders of magnitude of improvement in face-recognition system performance. Since 1993, the error rate of automatic face-recognition systems has decreased by a factor of 272. The reduction applies to systems that match people against face images captured in studio or mugshot environments.

Low-resolution images of faces can be enhanced using a technique called face hallucination.

Techniques For Facial Recognition

At present, face recognition is performed in two steps: the first step involves feature extraction and selection, and the second step is classification. Later developments introduced varying technologies to this procedure. Some of the most notable techniques include the following:

Traditional: 

Some face recognition algorithms identify facial features by extracting landmarks from an image of the subject's face, such as the eyes, nose, mouth, and the distances between them. For example, an algorithm may analyze the relative position, size, and shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images in the database with matching features.

Other algorithms normalize a gallery of face images and then compress the face data, saving only the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems was based on template matching techniques applied to a set of salient facial features, providing a sort of compressed representation of the face.
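As a hedged illustration of template matching on a salient facial feature, the sketch below uses OpenCV's normalized cross-correlation; the random array stands in for a real face image, and the crop stands in for an eye template, so this is only a toy stand-in for the early systems described above:

```python
import cv2
import numpy as np

# Placeholder data: a random array acts as the "face" image and a crop of it
# acts as the feature template (e.g. an eye patch).
rng = np.random.default_rng(0)
face = (rng.random((128, 128)) * 255).astype(np.uint8)
template = face[40:56, 30:62].copy()

# Slide the template over the image and score every position with normalized
# cross-correlation; the peak gives the best-matching location.
scores = cv2.matchTemplate(face, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_location = cv2.minMaxLoc(scores)

print(f"best match at {best_location} with score {best_score:.2f}")  # -> (30, 40), ~1.00
```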

Facial recognition algorithms can be divided into two main approaches: 1. geometric, which looks at distinguishing features, and 2. photometric, which is a statistical approach that distills an image into values and compares those values against templates to eliminate variances. Some classifications instead split the algorithms into two broad categories: holistic and feature-based models. The former attempts to recognize the face in its entirety, while the feature-based approach subdivides the face into components such as individual features and analyzes each of them, as well as their spatial location with respect to other features.

Popular facial recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, multilinear subspace learning using tensor representation, and the neuronal-motivated dynamic link matching.
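Of these, the eigenfaces approach is the easiest to sketch: project each face onto a small number of principal components and match a probe to the nearest gallery face in that subspace. The sketch below uses scikit-learn, with random arrays standing in for real face images, so it only illustrates the shape of the computation rather than a production system:

```python
import numpy as np
from sklearn.decomposition import PCA

# Random data stands in for 50 flattened 64x64 grayscale gallery faces.
rng = np.random.default_rng(0)
gallery = rng.random((50, 64 * 64))
labels = [f"person_{i}" for i in range(50)]

# Learn 20 "eigenfaces" (principal components) and express every gallery
# face as coordinates in that low-dimensional subspace.
pca = PCA(n_components=20)
gallery_coords = pca.fit_transform(gallery)

def identify(probe_image):
    """Return the gallery label whose eigenface coordinates are closest to the probe."""
    probe_coords = pca.transform(probe_image.reshape(1, -1))
    distances = np.linalg.norm(gallery_coords - probe_coords, axis=1)
    return labels[int(np.argmin(distances))]

# A slightly perturbed gallery face should still match its own identity.
print(identify(gallery[3] + 0.01 * rng.random(64 * 64)))  # -> 'person_3'
```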

3-Dimensional recognition

Three-dimensional face recognition uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of the face, such as the contour of the eye sockets, nose, and chin.

One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is further enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. These sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip, with each sensor capturing a different part of the spectrum.

Even accurate 3D matching techniques can be sensitive to facial expressions.

A newer method captures a 3D picture by using three tracking cameras pointed at different angles of the face: one camera points at the front of the subject, a second at the side of the face, and a third at an angle. All of these cameras work together so that the system can track a face in real time and detect and recognize it.

Skin texture analysis

Another emerging trend uses the visual details of the skin, as captured in standard digital or scanned images. This technique, called skin texture analysis, turns the unique lines, patterns, and spots apparent in a person’s skin into a mathematical space.

Surface texture analysis works much the same way as facial recognition. A picture is taken of a patch of skin, and any lines, pores, and the actual skin texture in that patch are distinguished. It can identify the contrast between identical twins, which is not yet possible using facial recognition software alone.

Tests have shown that with the addition of skin texture analysis, performance in recognizing faces can increase by 20 to 25 percent.

Facial recognition combining different techniques

Since every method has its advantages and disadvantages, technology companies have amalgamated traditional recognition, 3D recognition, and skin texture analysis to create recognition systems with higher rates of success.

Combined techniques have an advantage over other systems: they are relatively insensitive to changes in expression, including blinking, frowning, or smiling, and can compensate for mustache or beard growth and the appearance of eyeglasses. Such systems are also fairly uniform with respect to race and gender.
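One simple way such a combination can be realized is score-level fusion: each technique produces a similarity score, and the scores are combined with weights before a threshold decision. The matchers, weights, and threshold below are invented for illustration and are not any vendor's actual scheme:

```python
# Hedged sketch of score-level fusion across several recognition techniques.
# The per-technique scores, weights, and threshold are placeholders.

def fuse_scores(score_2d, score_3d, score_texture, weights=(0.4, 0.4, 0.2)):
    """Combine per-technique similarity scores (each in [0, 1]) into one fused score."""
    w2d, w3d, wtex = weights
    return w2d * score_2d + w3d * score_3d + wtex * score_texture

def is_match(score_2d, score_3d, score_texture, threshold=0.7):
    """Accept the identity claim only if the fused score clears the threshold."""
    return fuse_scores(score_2d, score_3d, score_texture) >= threshold

print(is_match(0.82, 0.76, 0.55))  # fused score 0.742 -> True with the default weights
```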

Thermal cameras

A different way of capturing input data for face recognition is to use thermal cameras; with this technique the cameras detect only the shape of the head and ignore accessories such as glasses, hats, or makeup. Unlike conventional 2D cameras, thermal cameras can capture facial imagery even in low-light and nighttime conditions without using a flash and exposing the position of the camera. However, a problem with using thermal pictures for face recognition is that the available databases of thermal face images are limited. Diego Socolinsky and Andrea Selinger (2004) researched the use of thermal face recognition in real-life and operational scenarios, and at the same time built a new database of thermal face images. The research used a low-sensitivity, low-resolution ferroelectric sensor capable of acquiring long-wave thermal infrared (LWIR). The results showed that a fusion of LWIR and regular visual cameras gives better results for outdoor probes. Indoor results show that visual imagery has 97.05% accuracy, LWIR 93.93%, and the fusion technique 98.40%; for outdoor probes, visual imagery has 67.06%, LWIR 83.03%, and fusion 89.02%. The study used about 240 subjects over a period of 10 weeks to create the new database.

In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique that allows facial imagery obtained with a thermal camera to be matched against databases of images captured with conventional cameras. The approach utilized artificial intelligence and machine learning to let researchers compare conventional and thermal facial imagery. Known as a cross-spectrum synthesis method, because it bridges facial recognition across two different imaging modalities, this method synthesizes a single image by analyzing multiple facial regions and details. It consists of a non-linear regression model that maps a specific thermal image into a corresponding visible facial image, and an optimization problem that projects the latent representation back into the image space.

ARL scientists have noted that the approach works by combining global information, i.e. features across the entire face, with local information, i.e. features of the eyes, nose, and mouth. In addition to enhancing the discriminability of the synthesized image, the system can be used to transform a thermal face signature into a refined visible image of the face. According to performance tests conducted at ARL, the multi-region cross-spectrum synthesis model demonstrated a performance improvement of about 30% over the baseline methods and about 5% over the state-of-the-art methods. It has also been tested for landmark detection on thermal images.
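The published method is more elaborate than can be shown here, but the core idea of a learned mapping from thermal to visible imagery can be sketched as a small convolutional regression network trained with a pixel-wise loss. The architecture, sizes, and random data below are assumptions made purely for illustration; this is not ARL's actual model:

```python
import torch
import torch.nn as nn

# Toy stand-in for a thermal-to-visible synthesis network: a shallow
# convolutional regressor trained with an L1 (pixel-wise) loss.
class ThermalToVisible(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, thermal):
        return self.net(thermal)

model = ThermalToVisible()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Random tensors stand in for paired thermal/visible training images (1x64x64).
thermal_batch = torch.rand(8, 1, 64, 64)
visible_batch = torch.rand(8, 1, 64, 64)

for step in range(5):  # a few toy training steps
    synthesized = model(thermal_batch)
    loss = loss_fn(synthesized, visible_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```

In practice, the synthesized visible image would then be fed to an ordinary visible-light face matcher, which is what makes cross-spectrum matching possible.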

Anti Facial Recognition

In January 2013, Japanese researchers from the National Institute of Informatics created “privacy visor” glasses that use near-infrared light to make the face underneath unrecognizable to face recognition software. The latest version uses a titanium frame, light-reflective material, and a mask that uses angles and patterns to disrupt facial recognition technology by both absorbing and bouncing back light sources. In December 2016, a form of anti-CCTV and anti-facial-recognition sunglasses called Reflectacles was invented by a custom spectacle craftsman based in Chicago named Scott Urban. They reflect infrared and, optionally, visible light, which makes the user’s face appear as a white blur to cameras.

Another method of protection from facial recognition systems uses specifically styled haircuts and face make-up patterns that prevent the algorithms from detecting a face, also known as computer vision dazzle. Incidentally, the makeup styles popular with Juggalos can also protect against facial recognition algorithms.

Advantages and Disadvantages of Facial Recognition

Compared to other biometric systems

One key advantage of a facial recognition system is that it is able to perform mass identification, as it does not require the cooperation of the test subject to work. Properly designed systems installed in airports, multiplexes, and other public places can identify individuals in a crowd without the passers-by even being aware of the system.

However, compared to other biometric techniques, face recognition may not be the most reliable and efficient. Quality measures play an important role, because large degrees of variation are possible in face images. Factors such as illumination, expression, pose, and noise during face capture can affect the performance of facial recognition systems. Among all biometric systems, facial recognition has the highest false acceptance and rejection rates, so questions have been raised about the effectiveness of face recognition software in cases of railway and airport security.

Weaknesses

Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, in 2008 described one obstacle related to the viewing angle of the face: “Face recognition has been getting pretty good at full-frontal faces and 20 degrees off, but as soon as you go towards profile, there’ve been problems.” Besides pose variation, low-resolution face images are also very hard to recognize. This is one of the main obstacles to face recognition in surveillance systems.

Face recognition is less effective if facial expressions vary. A big smile can render the system less effective. For example, Canada in 2009 began allowing only neutral facial expressions in passport photos.

There is also inconstancy in the datasets used by researchers. Researchers may use anywhere from several subjects to scores of subjects, and from a few hundred images to thousands of images. It is important for researchers to make the datasets they used available to each other, or at least to agree on a standard dataset.

Data privacy is the main concern when it comes to companies storing biometric data. Data stores about faces or other biometrics can be accessed by third parties if they are not stored properly or are hacked. Writing in Techworld in 2017, Parris adds, “Hackers will already be looking to replicate people’s faces to trick facial recognition systems, but the technology has proved harder to hack than fingerprint or voice recognition technology in the past.”

Ineffectiveness

Critics of the technology complain that the London Borough of Newham scheme had, as of 2004, never recognized a single criminal, despite several criminals in the system’s database living in the Borough, and despite the system having been running for several years: “Not once, as far as the police know, has Newham’s automatic face recognition system spotted a live target.” This seems to conflict with a claim that the system was credited with a 34% reduction in the crime rate. However, it can be explained by the notion that when the public is regularly told that they are under constant video surveillance with advanced face recognition technology, this fear alone can reduce the crime rate, whether the face recognition system technically works or not. This has been the basis for several other face recognition based security systems, where the technology itself does not work particularly well but the user’s perception of the technology does.

An experiment in 2002 by the local police department in Tampa, Florida, had similarly disappointing results. A system at Boston’s Logan Airport was shut down in 2003 after failing to make any matches during a two-year test period. In 2014, Facebook stated that in a standardized two-option facial recognition test, its online system scored 97.25% accuracy, compared to the human benchmark of 97.5%. In 2018, a report by the civil liberties and rights campaigning organization Big Brother Watch revealed that two UK police forces, South Wales Police and the Metropolitan Police, were using live facial recognition at public events and in public spaces. In September 2019, South Wales Police's use of facial recognition was ruled lawful.

Facial recognition systems are often advertised as having accuracy near 100%; this is misleading, as the studies often use sample sizes much smaller than would be necessary for large-scale applications. Because facial recognition is not completely accurate, it creates a list of potential matches. A human operator must then look through these potential matches, and studies show that operators pick the correct match out of the list only about half the time. This causes the issue of targeting the wrong suspect.

Applications

Social media

Social media platforms have adopted facial recognition capabilities to diversify their functionalities in order to attract a wider user base amidst stiff competition from other applications. Founded in 2013, Looksery went on to raise money for its face modification app on Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The application allows users to video chat with others through a special filter that modifies the look of their faces in real time. While there are image augmenting applications such as FaceTune and Perfect365, they are limited to static images, whereas Looksery allowed augmented reality on live videos. In late 2015, Snapchat purchased Looksery, which would then become its landmark lenses function.

Snapchat’s animated lenses, which use facial recognition technology, revolutionized and redefined the selfie by allowing users to add filters that change the way they look.

The selection of filters changes every day; some examples include one that makes users look like an old and wrinkled version of themselves, one that airbrushes their skin, and one that places a virtual flower crown on top of their head. The dog filter is the most popular and helped propel the continual success of Snapchat, with popular celebrities such as Gigi Hadid and Kim Kardashian regularly posting videos of themselves with it.

DeepFace

DeepFace is a deep-learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. It employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. The system is said to be 97% accurate, compared to 85% for the FBI’s Next Generation Identification system. One of the software's creators, Yaniv Taigman, came to Facebook via its acquisition of Face.com.

ID verification

An emerging use of facial recognition is in ID verification services. Many companies and software providers are now working in this market to provide these services to banks, ICOs, and other e-businesses.

Face ID

Apple introduced Face ID on its flagship iPhone X as a biometric authentication successor to Touch ID, a fingerprint-based system. Face ID has a facial recognition sensor that consists of two parts:

A “Romeo” module that projects more than 30,000 infrared dots onto the user’s face, and a “Juliet” module that reads the resulting pattern. The pattern is sent to a local “Secure Enclave” in the device’s central processing unit (CPU) to confirm a match with the phone owner’s face. The facial pattern is not accessible by Apple. The system will not work with eyes closed, in an effort to prevent unauthorized access.

Apple developed the technology to account for changes in a user’s appearance; it works with hats, scarves, glasses, many sunglasses, beard growth, and makeup.

The technology also includes a flood illuminator for use in the dark: an infrared flash that throws out invisible infrared light onto the user’s face so the Face ID system can properly read the 30,000 facial points.

Deployment in security services

Commonwealth

The Australian Border Force and New Zealand Customs Service have set up an automated border processing system called SmartGate that uses face recognition to compare the face of the traveller with the data in the e-passport microchip. All Canadian international airports use facial recognition as part of the Primary Inspection Kiosk program, which compares a traveller's face to their photo stored on the e-passport. This program first came to Vancouver International Airport in 2017 and was rolled out to all remaining international airports in 2018-2019. Tocumen International Airport in Panama operates an airport-wide surveillance system using hundreds of live facial recognition cameras to identify wanted individuals passing through the airport.

Police forces in the United Kingdom have been trialling live facial recognition technology at public events since 2015. However, a recent report and investigation by Big Brother Watch found that these systems were up to 98% inaccurate.

In May 2017, a man was arrested using an automatic facial recognition (AFR) system mounted on a van operated by the South Wales Police. Ars Technica reported that this appears to be the first time AFR has led to an arrest.

Live facial recognition has been trialled since 2016 in the streets of London and will be used on a regular basis by the Metropolitan Police from the beginning of 2020.

China

As of late 2017, China has deployed facial recognition and artificial intelligence technology in Xinjiang. Reporters visiting the region found surveillance cameras installed every hundred meters or so in several cities, as well as facial recognition checkpoints at areas such as gas stations, shopping centers, and mosque entrances. In 2020, China launched a grant to develop facial recognition technology that can identify people wearing surgical or dust masks by matching solely on the eyes and forehead.

The Netherlands

Like China, but a year earlier, the Netherlands has deployed facial recognition and artificial intelligence technology since 2016. The database of the Dutch police currently contains over 2.2 million pictures of 1.3 million Dutch citizens. This accounts for about 8% of the population. Hundreds of cameras have been deployed in the city of Amsterdam alone. Automatic facial recognition systems resemble other mobile CCTV systems.

South Africa

In South Africa, in 2016, the city of Johannesburg announced it was rolling out smart CCTV cameras complete with automatic number plate recognition and facial recognition.

Additional uses

At Super Bowl XXXV in January 2001, police in Tampa Bay, Florida used Viisage face recognition software to search for potential criminals and terrorists in attendance at the event. 19 people with minor criminal records were potentially identified.

In the 2000 Mexican presidential election, the Mexican government employed face recognition software to prevent voter fraud in polling booths. Some individuals had been registering to vote under several different names in an attempt to place multiple fake votes. By comparing new face images to those already in the voter database, the authorities were able to reduce duplicate registrations. Similar technologies are also used in the United States to prevent people from obtaining fake identification cards and driver’s licenses.

Face recognition has been leveraged as a form of biometric authentication for various computing platforms and devices; Android 4.0 “Ice Cream Sandwich” added facial recognition using a smartphone’s front camera as a means of unlocking devices, while Microsoft introduced face recognition login to its Xbox 360 video game console through its Kinect accessory, as well as to Windows 10 via its “Windows Hello” platform, which requires an infrared-illuminated camera. Apple’s iPhone X introduced facial recognition to the product line with its “Face ID” platform, which also uses an infrared-illuminated camera system.

Face recognition systems have also been used by photo management software to identify the subjects of photographs, enabling features such as searching images by person, as well as suggesting photos to be shared with a specific contact if their presence is detected in a photograph.

Facial recognition is used as added security in certain tech websites, phone applications, and some payment methods.

In the United States, pop and country music celebrity Taylor Swift surreptitiously employed facial recognition technology at a concert in 2018. A camera embedded in a kiosk near a ticket booth scanned concert-goers as they entered the facility, looking for known stalkers.

On August 18, 2019, The Times reported that the UAE-owned Manchester City had hired a Texas-based firm, Blink Identity, to deploy facial recognition systems in a pilot program. The club has planned a single super-fast lane for supporters at the Etihad stadium. However, civil rights groups cautioned the club against introducing the technology at the stadium, saying that it would risk “normalising a mass surveillance tool”. Hannah Couchman, the policy and campaigns officer at Liberty, said that Man City’s move is alarming, since fans will be obliged to share deeply sensitive personal information with a private company and could be tracked and monitored in their everyday lives.
