Their findings in brief:
- Robots that make autonomous decisions, such as those being designed to assist the elderly, may face ethical dilemmas even in seemingly everyday situations.
- One way to ensure ethical behavior in robots that interact with humans is to program general ethical principles into them and let them use those principles to make decisions on a case-by-case basis.
- Artificial-intelligence techniques can produce the principles themselves, using logic to abstract them from specific cases of ethically acceptable behavior (a toy sketch of this idea follows the list).
- The authors have followed this approach and for the first time programmed a robot to act based on an ethical principle.
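The authors' published work derives its principle with logic-based learning over duty-satisfaction profiles; as a rough illustration of the general idea only (not their implementation), here is a minimal Python sketch that abstracts a principle, in the form of duty weights, from a few labeled cases and then applies it to a new case. The duty names, scores, and cases are all invented for illustration, and the simple perceptron-style update stands in for the authors' logic-based abstraction.

```python
# Toy sketch (not the authors' actual system): learn an ethical
# principle from labeled cases, then apply it case by case.
# Each candidate action is scored on three prima facie duties; scores
# say how strongly the action satisfies (+) or violates (-) each duty.

from typing import Dict, List, Tuple

DUTIES = ["beneficence", "nonmaleficence", "autonomy"]

# Training cases: (duty scores for action A, duty scores for action B,
# which action is ethically preferred). All values are invented.
CASES: List[Tuple[Dict[str, int], Dict[str, int], str]] = [
    # Remind the patient again (A) vs. stay silent (B): skipping the
    # medication would cause serious harm, so remind despite the nag.
    ({"beneficence": 2, "nonmaleficence": 2, "autonomy": -1},
     {"beneficence": -2, "nonmaleficence": -2, "autonomy": 1}, "A"),
    # Harm from skipping is negligible; respect the patient's refusal.
    ({"beneficence": 1, "nonmaleficence": 0, "autonomy": -1},
     {"beneficence": 0, "nonmaleficence": 0, "autonomy": 1}, "B"),
]

def score(weights: Dict[str, float], action: Dict[str, int]) -> float:
    return sum(weights[d] * action[d] for d in DUTIES)

def learn_weights(cases, epochs: int = 100, lr: float = 0.1):
    """Abstract a principle (duty weights) from the labeled cases:
    nudge the weights until each case's preferred action scores higher."""
    w = {d: 1.0 for d in DUTIES}
    for _ in range(epochs):
        for a, b, preferred in cases:
            better, worse = (a, b) if preferred == "A" else (b, a)
            if score(w, better) <= score(w, worse):
                for d in DUTIES:
                    w[d] += lr * (better[d] - worse[d])
    return w

def decide(weights, a: Dict[str, int], b: Dict[str, int]) -> str:
    """Apply the learned principle to a new, unseen case."""
    return "A" if score(weights, a) > score(weights, b) else "B"

weights = learn_weights(CASES)
# New situation: mild harm from skipping, patient has firmly refused.
print(decide(weights,
             {"beneficence": 1, "nonmaleficence": 1, "autonomy": -2},
             {"beneficence": -1, "nonmaleficence": -1, "autonomy": 2}))
```

The point of the sketch is the division of labor the findings describe: the principle is not hand-coded rule by rule but abstracted from cases judged acceptable, and the robot then applies that one principle to situations it was never explicitly given.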
Simply programming a fixed set of rules into robots may work for a while, but only as long as they are less intelligent than humans and incapable of networking with one another (nobody wants a Geth rebellion). AIs with human-level complexity or above would probably need something like genuine compassion, and that is the hard part: merely programming a system to recognize emotional cues and pick from a list of "proper" responses would just create a sociopath that mimics empathy without feeling it.