International travel is growing rapidly: 2017 set a record, with nearly 1.4 billion people visiting countries other than their own for tourism, a figure expected to reach 1.8 billion by 2030.
This growing number of travellers means longer queues at passport control. Although the vast majority of people stopped by border security pose no danger or threat, every stop slows an already lengthy process, one that everyone wishing to cross a border between any two countries must go through.
Border security officers, for their part, do a demanding job: every hour they must make hundreds of decisions about whether to let this or that person through, based on their personal judgement of each traveller. The pressure on them to make the right call is growing, given the threat of attacks and the risk of human smuggling and trafficking.
Although these officers can draw on the capabilities of their computer systems, they rely largely on intuition and experience as they check the documents of most of the passengers who pass before them.
Anyone who has ever been held up by a passport officer, even for a short time, probably knows how uncomfortable and unsettling the experience can be. Staring into the impassive eyes of an officer as they check your passport is always nerve-wracking.
But the future may hold passport officers of another, invisible kind, ones with a role in deciding whether or not someone may cross a border: officers with whom one cannot argue or negotiate, whose frown no smile can soften.
A number of governments around the world are currently funding research into systems based on artificial intelligence that could help border authorities decide whether or not a given traveller should be allowed in.
One of these systems is being developed by an American technology company that has worked with the country's border and customs authorities since the attacks of 11 September 2001 to create technologies that identify dangerous travellers long before they fly to the United States.
The company has a threat and risk assessment system called LineSight, which voraciously gathers data on travellers from various US government agencies and other sources, and uses it to calculate a risk assessment for each of them.
The program has since been expanded and enhanced to cover other types of travellers, as well as cargo, that may be of interest to border security officials. To explain how LineSight works, a company official named John Kendall gives the example of two hypothetical travellers, Roman and Sandra.
In this scenario, both women hold flawless, valid passports and valid entry visas, which would let them clear most security systems without question. But LineSight's artificial intelligence algorithms detect something suspicious in Roman's travel pattern in particular: she has visited her destination country many times over the past few years, each time accompanied by several children with different surnames, a pattern the system's predictive analysis links to human trafficking.
Moreover, she bought her ticket with a credit card issued by a bank linked to a human trafficking ring involved in prostitution in Eastern Europe. LineSight is able to obtain this information from the airline Roman is travelling with and verify it against the databases of the relevant law enforcement agencies.
An official at the company says all of this information can be gathered and sent to the customs officer before either of the two women has even confirmed her flight booking.
LineSight can also be used in a similar way to analyse cargo shipments, gathering information that helps detect and identify possible cases of smuggling.
The strength of the company's AI technology lies in its ability to absorb and evaluate a huge amount of data in a very short time: LineSight can process all the data relevant to a given case and complete its assessment of whether it poses a threat in no more than two seconds.
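The article does not describe how LineSight combines these signals internally, but the general idea of fusing data from several sources into a single risk score can be illustrated with a minimal, purely hypothetical sketch in Python. The features, weights and threshold below are invented for illustration and bear no relation to the real system.

```python
# Purely illustrative risk-scoring sketch; the features, weights and
# threshold are invented and do not reflect how LineSight actually works.
from dataclasses import dataclass

@dataclass
class TravellerRecord:
    visits_last_5_years: int          # from travel history
    minors_with_other_surnames: int   # from passenger manifests
    card_issuer_flagged: bool         # from law-enforcement watchlists

def risk_score(rec: TravellerRecord) -> float:
    """Combine signals from different data sources into one score in [0, 1]."""
    score = 0.0
    score += min(rec.visits_last_5_years, 10) * 0.03        # frequent repeat visits
    score += min(rec.minors_with_other_surnames, 5) * 0.10  # accompanying unrelated minors
    score += 0.40 if rec.card_issuer_flagged else 0.0       # payment linked to a flagged bank
    return min(score, 1.0)

# Example: the hypothetical traveller from the scenario above.
roman = TravellerRecord(visits_last_5_years=8,
                        minors_with_other_surnames=4,
                        card_issuer_flagged=True)
print(risk_score(roman))  # a high score would be flagged for a border officer to review
```

In practice the appeal of such a system lies less in the scoring rule itself than in the speed with which data from many agencies can be pulled together, which is what allows an assessment within a couple of seconds.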
However, there are concerns about using these technologies to analyse data in this way. Their algorithms are "trained" to recognise patterns or behaviours in stored data sets, and can therefore reproduce whatever biases those original data contain.
For example, algorithms trained on data from the US judicial system have been shown to reproduce unfair bias against black defendants, incorrectly labelling them as likely to reoffend at almost twice the rate of their white counterparts. In other words, the algorithms simply inherited the human bias already present in the US judicial system.
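How such bias propagates can be shown with a small, entirely synthetic sketch: if the historical labels a model learns from are skewed against one group, the model reproduces that skew even when the underlying risk is identical across groups. All of the data below is made up.

```python
# Toy demonstration (invented data): a model trained on biased historical
# labels reproduces the bias, even though the group attribute carries no real signal.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)     # 0 = group A, 1 = group B
true_risk = rng.random(n)         # the genuine (unobserved) risk, identically distributed

# Historical labels: group B was flagged "high risk" more often for the same true risk.
label = (true_risk + 0.25 * group) > 0.6

# "Training": estimate the flag rate per group, as a naive model would learn from these labels.
for g in (0, 1):
    rate = label[group == g].mean()
    print(f"group {g}: learned high-risk rate = {rate:.2f}")
# The learned rates differ between groups even though true risk does not.
```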
Erica Posey, of the Brennan Center for Justice at New York University School of Law, worries that similar biases could creep into the algorithms used to make decisions about immigration and passport control.
Officials at the company that developed LineSight hope to address this by letting the algorithms learn from their mistakes. If someone is wrongly refused entry to a country because of these AI techniques and the error later comes to light, they say, the algorithms automatically update themselves with the new data and become "smarter" still, particularly since the system does not rely on human intuition or prejudice.
The company also says the system does not treat some data as more important than others: all relevant information is put before border and customs officials.
But some teams working in this field want to go a step further, using artificial intelligence to determine whether a traveller should be trusted at all. Passport officers make such judgements by reading a traveller's body language and the way they answer questions, and some hope that AI may prove better than humans at spotting signs that a traveller is behaving deceptively.
Aaron Elkins, a computer scientist at San Diego State University, notes that humans usually manage to detect such cues in only about 54 per cent of cases, whereas machines equipped with these technologies have, according to several studies, achieved accuracy of more than 80 per cent.
Elkins is one of the inventors of a traveller-screening system called Avatar, which may soon be available to those responsible for checking passports at border crossings.
The system uses a screen on which a virtual security officer appears and asks travellers questions, while the technology built into the system monitors the person's posture, eye movements and changes in their voice.
The team behind Avatar believes it has succeeded in "teaching" the system to detect the outward signs that someone is being deceptive, after laboratory trials involving tens of thousands of people.
Meanwhile, a similar system called iBorderCtrl is about to be trialled at three land border crossings in Hungary, Greece and Latvia. It, too, features a robotic interrogator that questions travellers, having been trained on video clips of people, some of whom were lying and some telling the truth.
Among those who helped develop the system is Keeley Crockett, an expert in computational intelligence at Manchester Metropolitan University in the UK. Crockett says iBorderCtrl watches for very small gestures that might otherwise go unnoticed, picking up slight changes in facial expression as well as subtle, barely perceptible movements forwards or backwards.
Crockett has high hopes for the trials of the system in the three countries, which will be its first stage of field testing. She says the team hopes those tests will achieve 85 per cent accuracy.
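Neither team has published its models here, but the general recipe both describe, extracting behavioural cues from recorded interviews and then training a classifier on examples labelled truthful or deceptive, can be sketched roughly as follows. The feature names, the choice of classifier and the synthetic data are assumptions for illustration only, not details of Avatar or iBorderCtrl.

```python
# Rough sketch of the general approach described above: behavioural features
# extracted from interview recordings, labelled truthful/deceptive, fed to a
# classifier. The feature set and model are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500  # interviews in a hypothetical training set

# Each row: [eye-movement rate, voice-pitch variation, posture shifts, response delay]
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, n)  # 1 = labelled deceptive in the training videos, 0 = truthful

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
# With purely random synthetic features, accuracy hovers around 0.5 - which is
# the point: performance depends entirely on whether the behavioural cues
# genuinely predict deception.
```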
However, debate continues over whether AI lie-detection techniques can work effectively in the real world.
Vera Wilde, a researcher in the field of lie detection and one of the critics of iBorderCtrl, argues that science has yet to establish a definitive link between our outward behaviour and whether we are engaged in deception, which is why judicial systems do not recognise lie detectors or admit their results.
Even if science were to establish such a link, using technologies like this at border crossings raises thorny legal questions. Some legal experts believe lie-detection techniques could amount to an "unlawful search and seizure": subjecting a person to compulsory screening, they argue, is in effect a seizure of their thoughts and a search of their mind, which in the United States, for example, would require a warrant. Experts have warned that this could pose a legal problem in Europe as well.
It should be noted that the travellers on whom the system will be tested will take part of their own free will, and they will still have to pass a human passport officer before being allowed to enter their destination country.
Even so, it seems that artificial intelligence will not entirely replace the human element when it comes to border control. The companies and institutions behind the three systems described here all agree that, however advanced their technologies become, they will continue to rely heavily on humans to interpret the data.
Beyond all this, relying on machines to decide who may or may not enter a country raises serious concerns among human rights activists and privacy advocates, who ask whether border officials would ever explain, for example, why a technological system concluded that a traveller posed a major threat and should be refused entry.
"We need transparency in terms of how the algorithm itself was developed and implemented, as well as about how in its calculations it treats different types of data and gives each of its own weight or weight," says Erika Bossi.
Posey believes there also needs to be transparency about how the people entrusted with decisions at border crossings are trained to interpret and understand the conclusions the AI reaches, and about how the system as a whole is reviewed and its results audited.
For their part, officials at the American company behind LineSight believe artificial intelligence could be a major tool for tackling the challenges of border control, in the face of what they describe as a "complex set of threats" that looks different now from a few years ago.
Ultimately, the success of these advanced technologies in policing borders will depend not only on their ability to detect early who poses a threat, but also on whether they make travel easier for the 1.8 billion of us who love to roam the world.
You can read the original article on the BBC Future website