Easy surveillance? Your face is your boarding pass at Abu Dhabi airport

Amid rising concerns over the use of artificial intelligence (AI), Abu Dhabi International Airport has launched a biometric service allowing passengers to use their face as a boarding pass.

According to a report in the Khaleej Times, in the first phase of the project the solution will be implemented at select self-service baggage touchpoints, immigration e-gates, and boarding gates, before being rolled out across all passenger touchpoints at the Abu Dhabi airport.

The technology uses biometric cameras to verify passengers' identities at various points in the airport.

The implementation of this technology is expected to enhance the passenger journey and establish the Midfield Terminal Building as the first international airport with biometric capabilities at all customer touchpoints.

Abu Dhabi-based tech company NEXT50 will introduce its cutting-edge AI solutions alongside global artificial intelligence and technology solutions partners IDEMIA and SITA.

NEXT50 CEO Ibrahim Al Mannaee said the biometrics initiative is part of the Emirate's ambition for digital transformation.

“Once the project is fully realized, the airport will be the only airport in the region with biometric solutions implemented across all customer touchpoints, contributing to Abu Dhabi Airport’s vision to become the operator of the most technology-driven airport in the world, providing a seamless journey to all its passengers.”

Once delivered, the project will offer passengers a convenient, simplified, contactless and hygienic experience from 'curb-to-gate', reducing wait times and queuing.

AI use is becoming controversial

There are serious concerns about the use of AI, amplified by allegations such as China's reported use of the technology to track down protesters.

AI presents three major areas of ethical concern: privacy and surveillance, bias and discrimination, and the role of human judgment, said Harvard political philosopher Michael Sandel, who teaches a course on the moral, social, and political implications of new technologies.

“Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar,” said Sandel, referring to the conscious and unconscious prejudices of program developers and those built into the datasets used to train the software.

“But we’ve not yet wrapped our minds around the hardest question: Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”

Experts have also raised governance concerns: Whose ethical systems should be applied? Who gets to make that decision? Who is responsible for implementing ethical AI? Who might enforce ethical regimes once they are established, and how?

AI began as an engine of high-level STEM research and has since become essential across a vast array of industries, including health care, banking, retail, and manufacturing.

“But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good,” says The Harvard Gazette.

The Gazette also pointed to a lack of regulation: “With virtually no US government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.”

There is a human factor too: many worry that the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale.

Part of the appeal of algorithmic decision-making, they say, is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice.

But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.
