How to restrain robot morality?

Release date: 2019/03/28

On December 18, 2018, the EU High-Level Expert Group on Artificial Intelligence released draft ethics guidelines for artificial intelligence. The draft aims to guide the creation of "trustworthy artificial intelligence" amid concerns that artificial intelligence will replace humans and erode morality. How do you make robots more trustworthy? Can they be given moral cultivation? The author interviewed Colin Allen, Distinguished Professor in the Department of History and Philosophy of Science at the University of Pittsburgh and Yangtze River Chair Professor at Xi'an Jiaotong University.


Q: What model should be used to develop the morality of artificial intelligence?

Allen: In Moral Machines we reviewed approaches to developing machine morality and concluded that the best answer is a hybrid of "top-down" and "bottom-up" models. Let me start with what "top-down" and "bottom-up" mean; we use these terms in two different ways. One is an engineering perspective, that is, a technology and computer science perspective covering approaches such as machine learning and artificial evolution; the other is a moral perspective. Machine learning and artificial evolution do not start from any rules. They simply try to make machines match a particular description of behavior: given an input that should produce a certain kind of behavior, the machine learns to behave in that way. This is called "bottom-up." By contrast, a "top-down" approach builds an explicit model of the decision-making process and tries to write rules to guide it. We can say that in engineering terms, "bottom-up" means learning from data, while "top-down" means preprogramming with deterministic rules.
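To make the engineering distinction concrete, here is a minimal sketch in Python of the hybrid model Allen describes: a learned, "bottom-up" scoring function wrapped by explicit, "top-down" rules. Every name in it (the Action type, the stub scorer, the harm rule) is an illustrative assumption, not something from the interview.

```python
# A minimal sketch of a hybrid moral architecture: a "bottom-up"
# component that stands in for anything learned from data, wrapped by
# "top-down" rules written explicitly in advance.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    causes_harm_to_humans: bool  # a morally relevant fact about the action

# "Top-down": a deterministic, pre-programmed rule that vetoes actions.
def violates_rules(action: Action) -> bool:
    return action.causes_harm_to_humans

# "Bottom-up": a learned scoring function. This stub stands in for any
# model trained from data (e.g. a preference model over behaviors).
def learned_score(action: Action) -> float:
    return 1.0 if action.name == "assist_user" else 0.1

def choose_action(candidates: List[Action],
                  score: Callable[[Action], float] = learned_score) -> Action:
    # Filter with the top-down rules first, then let the bottom-up
    # component rank whatever survives.
    permitted = [a for a in candidates if not violates_rules(a)]
    if not permitted:
        raise RuntimeError("no rule-compliant action available")
    return max(permitted, key=score)

if __name__ == "__main__":
    options = [Action("assist_user", False), Action("unsafe_shortcut", True)]
    print(choose_action(options).name)  # -> assist_user
```

The design point is simply that the deterministic rules act as hard constraints, while the learned component only ranks the options that survive them.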


Q: How can AI be made ethical?

Allen: Well, the first thing to say is that human beings are not fully moral themselves. By nature, humans tend to act out of self-interest, regardless of the needs and interests of others, and a moral agent must learn to restrain its own desires for the sake of others. The robots we build today do not actually have desires or motivations of their own, because they have no selfish interests. So there is a big difference between training artificial intelligence and training human morality. The problem with training machines is how to give them the ability to be sensitive to what matters to human moral values. Furthermore, does a machine need to recognize that its behavior will cause suffering to humans? I believe it does. We can think about programming machines to behave in this way without worrying about how to make robots prioritize the interests of others, since machines have no self-interested nature to overcome.


Q: What are the "virtues" of AI?

Allen: There are many different meanings attached to the "moral character" of artificial intelligence, or to a "moral machine" or "machine morality". I have grouped these meanings into three senses. In the first sense, a machine has exactly the same moral qualities as a human being. In the second sense, machines do not have to be fully human, but they should be sensitive to morally relevant facts and able to make their own decisions based on them. In the third sense, a machine's designers consider morality at the lowest level of design, but do not give the robot the ability to weigh morally relevant facts and make its own decisions.

For now, a machine in the first sense is still science fiction. So in my book Moral Machines I set that sense aside and focus on machines that fall between the second and third senses. Today, we expect designers to take morality into account when designing robots, because robots are likely to do more and more work in the public domain without direct human supervision. This is the first time we have invented machines that can operate unsupervised, and that is the most substantial difference between the ethical problems of artificial intelligence and earlier ethical problems in science and technology. In such "unsupervised" situations, we expect machines to make ethical decisions, and we expect machines to be designed not only for safety, but also for the values that humans care about.