Giving robots a conscience
By adamg on Wed, 12/16/2015 - 8:02am
Researchers at Olin College are looking at programming robots with what sounds a bit like Asimov's Three Laws of Robotics, the Boston Business Journal reports:
Think of a fire-fighting robotic dog that sacrifices itself to rescue people ... Or of a robot riveter that stops short of punching through a human worker's misplaced hand because the robot has been programmed to value human safety over assembly line efficiency.
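For the curious, here's a rough sketch (mine, not the researchers') of what "value human safety over assembly line efficiency" might look like as code: a hard safety check that filters actions before efficiency is even considered. All the names and numbers are made up for illustration.

# Illustrative only -- the names, numbers, and API here are invented,
# not anything from the Olin project.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    efficiency_gain: float  # how much the action helps throughput
    risk_to_humans: float   # estimated chance of hurting someone

def choose_action(candidates: list[Action], max_risk: float = 0.0) -> Action | None:
    # Safety is a filter, not a weight: no amount of efficiency buys back an unsafe action.
    safe = [a for a in candidates if a.risk_to_humans <= max_risk]
    if not safe:
        return None  # stop the line rather than punch through the misplaced hand
    return max(safe, key=lambda a: a.efficiency_gain)

riveter_options = [
    Action("drive rivet now", efficiency_gain=1.0, risk_to_humans=0.9),
    Action("pause and wait", efficiency_gain=0.0, risk_to_humans=0.0),
]
print(choose_action(riveter_options).name)  # -> pause and wait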
Comments
But the humans programming the robots need to agree
Factory managers have historically valued assembly line efficiency over human safety. If OSHA requires all robots to do the opposite, that can only be a good thing.
I hope somebody reads XKCD:
https://xkcd.com/1613/
(Incidentally a local institution)
XKCD'd
boring philosophical question
Is it really a conscience if it's just programming a robot to avoid injuring another human, or to go into a building to save someone's life? It wouldn't be a question of morality, or altruism, or any of the other mysterious processes operating in our brains. We've developed a conscience over however many gazillions of generations of evolution, presumably because it's been - for the most part - evolutionarily advantageous. (Recent political events may disprove that.) In some way, it has to have helped us survive. Programming a robot to race into a burning building isn't the same thing as programming in a conscience; it's programming in self-sacrifice. Morality and empathy, presumably, wouldn't factor into it at all.
If, however, they want to try to program empathy into their robots, I suggest they check out Ex Machina first. Bad idea. Real bad idea.
“Listen,” said Ford, who was
Machines don't have
the capacity to make moral or ethical judgments. AI is nowhere near the level where the appropriate reaction to these sorts of stories is anything other than laughter.
An appropriate analogy: us, in 2015, talking about the morality and ethics of the machine (as opposed to the engineer making the machine) is like the Montgolfier Brothers in the 1700s thinking about flying to the Moon in their hot air balloon. Sure, it's the first step in that direction, but a whole lot of stuff we can't even picture yet needs to happen between here and there.
It's coming up fast, though
Why Self-Driving Cars Must Be Programmed to Kill.
Maybe it's not "ethics" in the sense that we would ponder or act on it, but somehow the software operating a robot is going to have to make decisions based on the ethics rules that we set for it.
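To make that concrete, here's a toy sketch of my own (not from the linked article): the "ethics rules we set" could just be an ordered list of constraints the planner applies in priority order, relaxing a lower-priority rule only when nothing can satisfy it.

# Purely illustrative -- the rules, names, and numbers here are invented.
RULES = [
    lambda plan: plan["pedestrians_harmed"] == 0,  # highest priority
    lambda plan: plan["occupants_harmed"] == 0,
    lambda plan: plan["property_damage"] == 0,     # lowest priority
]

def pick_plan(plans):
    surviving = plans
    for rule in RULES:
        passing = [p for p in surviving if rule(p)]
        if passing:              # only narrow the field if something still satisfies the rule;
            surviving = passing  # otherwise relax this lower-priority rule and move on
    return surviving[0]

plans = [
    {"name": "swerve", "pedestrians_harmed": 0, "occupants_harmed": 1, "property_damage": 1},
    {"name": "brake",  "pedestrians_harmed": 0, "occupants_harmed": 0, "property_damage": 1},
]
print(pick_plan(plans)["name"])  # -> brake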
That's nothing different
Not something 'coming up fast'. Whether or not to create tools that are designed to encourage ethical operation has always been part of the story of technological evolution.
I know it's common nowadays, but I still get a bit squicked when this sort of language is used. Anthropomorphizing unthinking, unsentient tools is dangerous. It subtly divests both the creators and users of that technology of responsibility for how it is used. (cf. the business technology of the 'corporate entity')
I agree with Roman - self-driving cars are not alive, not self-aware, and not capable of making ethical decisions. Those attributes belong to the engineers & programmers that build them, and the people who choose to use them.