
BLOG # 122

Can a Machine Learn Morality?


Researchers at a Seattle A.I. lab say they have built a system that makes ethical judgments. But its judgments can be as confusing as those of humans.

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.


Morality, it seems, is as knotty for a machine as it is for humans.


Opinion

Morality is based on human experience. Artificial intelligence is intelligent, but not human. Its processes are not based on morality, though they may be logical.

