Four reasons a robot will not take your place at work just yet

We will all be working alongside robots sooner than we think. Here are four things that will help us better understand these machines and their role in the workplace.

Back in 1959, artificial intelligence, in the form of a set of dazzling algorithms, was used to solve a problem that had long plagued long-distance phone calls: echo on the line.

These algorithms fixed the problem by detecting when an incoming signal was identical to the signal that had just been sent out, and then electronically cancelling it. The solution was elegant, and it is still in use today.
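Madaline was, in modern terms, an adaptive filter. A minimal sketch of the same idea, a least-mean-squares (LMS) filter that learns the echo path and subtracts its prediction from the line, might look like the following; all signals and coefficients here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic far-end speech and an assumed 3-tap echo path.
speech = rng.standard_normal(2000)
echo_path = np.array([0.6, 0.3, 0.1])
echo = np.convolve(speech, echo_path)[: len(speech)]

# LMS adaptive filter: learn the echo path, then subtract the predicted echo.
taps = np.zeros(3)
mu = 0.01  # learning rate
residual = np.zeros(len(speech))
for n in range(3, len(speech)):
    x = speech[n - 2 : n + 1][::-1]   # the last 3 far-end samples
    predicted_echo = taps @ x
    residual[n] = echo[n] - predicted_echo
    taps += mu * residual[n] * x      # LMS weight update

# After adaptation, taps converge toward the true echo path [0.6, 0.3, 0.1]
# and the residual (what the listener would hear) is essentially silence.
print(np.round(taps, 2))
```

The filter never needs to be told the echo path in advance: it learns it from the correlation between the outgoing signal and what comes back, which is essentially what Madaline did for the phone network.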

The machines involved used a system of algorithms known as "Madaline". It was the first time artificial intelligence had been used in the workplace.

Today it is widely claimed that artificially intelligent computers are coming to take our jobs: a machine that could finish a week's worth of work before you have eaten your breakfast.

It needs no coffee break, no pension, not even sleep.

But although many jobs will be handed over to machines in the future, in the short term at least these advanced machines will be working side by side with us.

Despite impressive achievements in many professions, including the ability to stop fraud before it happens and to detect cancer accurately, even the most advanced artificial intelligence machines today have nothing close to the general intelligence of humans.

According to a 2017 McKinsey report, only five percent of jobs could be fully automated with currently available technology, but in 60 percent of jobs robots could take over roughly a third of the tasks.

It is important to remember that not all robots use artificial intelligence, and that the same shortcomings that stop AI-powered machines from taking over the world will also make them frustrating partners.

So before we write off our days as humans in the workplace, here are some rules you need to know about working with your new robot colleagues.

The first rule: robots do not think like humans

At the very time when the Madaline system was revolutionising long-distance phone calls, the Hungarian-British philosopher Michael Polanyi was thinking carefully about human intelligence. He realised that while some skills, such as using accurate grammar, can be broken down into rules that can be explained to others, many cannot.

Humans can perform what he called tacit abilities without knowing how we do it. That includes practical skills such as riding a bike or kneading dough, as well as higher-level tasks. Unfortunately, if we do not know the rules ourselves, we cannot teach them to a computer. This is Polanyi's paradox. Instead of trying to reverse-engineer human intelligence, computer scientists worked around the problem by developing artificial intelligence that thinks in a completely different way, driven by data rather than by ideas.

"You might think that the way artificial intelligence works is that we understand humans and then build artificial intelligence the same way," says Rich Caruana, a senior researcher at Microsoft Research.

"But it didn't happen that way," he adds. He gives the example of aeroplanes, which were invented long before we had a detailed understanding of how birds fly, and which therefore use different flight dynamics. Yet today we have planes that can fly higher and faster than birds.

For example, Facebook trained its face-recognition program, known as DeepFace, on a set of nearly four million pictures. By looking at images tagged with the same person's name, the program eventually learned to match faces correctly about 97 percent of the time.
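The matching step of a system like this can be caricatured as nearest-neighbour comparison of face "embeddings", numeric vectors produced by a network. The names and vectors below are invented for illustration; the real system is far more elaborate:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings, standing in for a face-recognition network's output.
gallery = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3]),
}

def identify(query, threshold=0.8):
    """Return the best-matching name, or None if nothing is close enough."""
    name, score = max(((n, cosine(query, e)) for n, e in gallery.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

print(identify(np.array([0.85, 0.15, 0.25])))  # matches "alice"
```

The heavy lifting in a real system happens earlier, in the network that turns a photo into a vector; the final "is this the same person?" decision is essentially a similarity comparison like the one above.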

Capabilities like these have made artificial intelligence programs such as DeepFace rising stars of Silicon Valley, outperforming their creators at driving cars, recognising speech, translating written text from one language to another and, of course, tagging images. Such programs are expected to spread into many more fields in the future, from healthcare to finance.

The second rule: your new robot friends are not infallible

These devices make mistakes too, and the fact that they work from data means they can make errors of a kind no human would, such as the time an automated program concluded that a 3D-printed turtle was a gun.

Such a program cannot, for example, think conceptually; it reasons through specific patterns, in this case visual patterns based on pixels (the smallest individual elements of a digital image).

As a result, changing even a single pixel in an image can cause the program to fail to recognise it.
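A toy model shows why pixel-level pattern matching is so brittle. The "classifier" below is an invented linear scorer, not a real neural network, but it fails in the same way: one heavily weighted pixel is enough to flip the label:

```python
import numpy as np

# A toy linear "image classifier": score = weights . pixels.
# Positive score -> "turtle", negative -> "gun" (labels purely illustrative).
weights = np.array([0.1, 0.1, -2.0, 0.1])  # one pixel carries huge weight

def classify(pixels):
    return "turtle" if weights @ pixels > 0 else "gun"

image = np.array([1.0, 1.0, 0.0, 1.0])
print(classify(image))        # "turtle": score = 0.3

# Flip a single heavily weighted pixel: the label changes completely,
# even though the image is almost identical to a human eye.
perturbed = image.copy()
perturbed[2] = 1.0
print(classify(perturbed))    # "gun": score = -1.7
```

Real adversarial attacks on deep networks are more sophisticated, but they exploit exactly this property: the decision depends on numeric pixel patterns, not on any concept of what a turtle is.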

The third rule: a robot cannot explain why it made a decision

Another problem with artificial intelligence lies in Polanyi's paradox. Because we do not fully understand how our own brains learn, we built artificial intelligence to think like a statistician instead. The irony is that we now have only a very limited idea of what goes on inside the "brain" of an artificial intelligence.

This is usually called the "black box problem": although we know what data we feed into the machine, and we can see the results it produces, we do not know how the box in front of us arrives at those results.

"So now we have two kinds of intelligence that we don't really understand," says Caruana. Automated neural networks of this kind have no linguistic skills, so they cannot explain to you what they are doing or why. And like all artificial intelligence devices, they lack the common-sense understanding and reasoning available to humans.

Several decades ago, Caruana applied an artificial intelligence program to some medical data. The data included things such as symptoms and their outcomes, and the goal was to calculate each patient's risk of dying on a given day, so that doctors could take preventive action.

Things seemed to be going well, until a graduate student ran the same data, line by line, through a much simpler algorithm whose decision-making logic could actually be read. One of its rules read: "asthma is good for you if you have pneumonia." The doctors were astonished by such an error and said it had to be fixed.

Asthma is a serious risk factor in pneumonia, since both affect the lungs. The doctors would never know for certain why the machine had learned that rule.
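The student's trick, rerunning the data through a model simple enough to read, can be sketched with synthetic records. In the invented data below, asthma patients happen to survive more often (whatever the real-world reason might be), so a plain logistic regression learns a negative, readable, and medically misleading weight for asthma:

```python
import numpy as np

# Synthetic patient records: column 0 = bias term, column 1 = has_asthma.
# In this made-up data, every asthma patient survived, while half of the
# non-asthma patients died, so the pattern in the data contradicts
# medical common sense.
X = np.array([[1, 1]] * 20 + [[1, 0]] * 40, dtype=float)
y = np.array([0] * 20 + [1] * 20 + [0] * 20, dtype=float)  # 1 = died

# Plain logistic regression, trained by gradient descent.
w = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted death probability
    w -= 0.1 * X.T @ (p - y) / len(y)      # gradient step

# The readable "rule" is the sign of the asthma weight: a negative weight
# means the model predicts asthma patients are LESS likely to die.
print(w[1] < 0)
```

Unlike a deep network, the fitted weight can be inspected directly, which is exactly how the strange rule in Caruana's story was caught. The model is faithfully reporting a pattern in its training data; whether that pattern should drive treatment decisions is a question the model cannot answer.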

As interest grows in using artificial intelligence in the public interest, many experts in the field are concerned. This year, the European Union introduced new legislation giving individuals the right to an explanation of the logic behind decisions made by artificial intelligence systems.

Meanwhile, the research arm of the American military, the Defense Advanced Research Projects Agency (DARPA), is investing $70 million in a new program to explain the decisions of artificial intelligence systems.

"Recently there has been a big improvement in the accuracy of these systems," says David Gunning, who runs the project at DARPA. He adds: "But the price we pay is that these systems are so complex that we don't know why they recommend a particular thing, or why they make a particular move in a game."

The fourth rule: robots can be biased

There is growing concern that some algorithms may hide unintended biases, such as racism or sexism. Recently, for example, a piece of software asked to advise on whether a convicted criminal was likely to reoffend turned out to be twice as harsh in its advice about black defendants.

It all comes down to how the digital systems are trained. If the data fed to the machines is sound and free of impurities, their decisions will usually be sound too. But human biases often creep in while the data is being prepared.

One stark example is easy to see in Google's translation service. As one scientist pointed out in Medium last year, if you translate "he is a nurse. She is a doctor" from English into Hungarian and then back into English, the result comes out as "she is a nurse. He is a doctor."

The algorithm had been trained on text from about a trillion web pages. But all such programs can do is find patterns, such as that doctors are more likely to be male and nurses more likely to be female.
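A pattern-matcher of this kind can be caricatured in a few lines: pick whichever pronoun co-occurs with an occupation most often in the training text. The tiny "corpus" below is invented, standing in for those trillion pages:

```python
from collections import Counter

# A tiny invented corpus standing in for a trillion web pages.
corpus = [
    "he is a doctor", "he is a doctor", "she is a doctor",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

def guess_pronoun(occupation):
    """Resolve a genderless pronoun by raw co-occurrence counts."""
    counts = Counter(s.split()[0] for s in corpus if s.endswith(occupation))
    return counts.most_common(1)[0][0]

print(guess_pronoun("doctor"))  # the majority pattern in the corpus wins
print(guess_pronoun("nurse"))
```

Real translation models are vastly more sophisticated, but when the source language has no gendered pronoun, they too fall back on which gender was statistically more common for that occupation in their training text.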

Bias can also sneak in through weighting. Like humans, our artificial intelligence colleagues analyse data by "weighting" it, that is, by judging which factors are more or less important.

An algorithm might decide that a person's zip code is related to their creditworthiness, something that happens in the United States, and thereby discriminate against ethnic minorities, who tend to live in poorer neighbourhoods.
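A toy version of this weighting problem: fit a linear scorer to historical decisions that already track zip code, and two applicants with identical incomes come out with different scores. All numbers below are invented:

```python
import numpy as np

# Historical lending data: columns = [income, lives_in_poor_zip_code].
# Past approvals (1/0) happened to track the neighbourhood, not just income.
X = np.array([[50, 0], [60, 0], [50, 1], [60, 1]], dtype=float)
past_approved = np.array([1, 1, 0, 1], dtype=float)

# A least-squares fit assigns each feature a weight, including the zip code.
weights, *_ = np.linalg.lstsq(np.c_[X, np.ones(4)], past_approved, rcond=None)

# Two applicants with IDENTICAL incomes but different zip codes:
score_a = np.array([55, 0, 1]) @ weights
score_b = np.array([55, 1, 1]) @ weights
print(score_a > score_b)  # the zip-code weight alone lowers one score
```

The model never sees ethnicity, but by giving weight to a feature that correlates with it, it reproduces the disparity baked into the historical decisions it was trained on.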

And this is not only about racism or sexism; there will also be biases nobody anticipated. The dilemma was well explained by the Nobel Prize-winning economist Daniel Kahneman, who has spent a lifetime studying irrational biases in the human mind, in a 2011 interview with Freakonomics.

"By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones."

Robots are coming, and they will change the future of work forever. But until they become more human-like, they will need us standing at their side. And, remarkably, it seems our robot colleagues may even make us look good by comparison.
