
Giving robots a conscience

Researchers at Olin College are looking at programming robots with what sounds a bit like Asimov's Three Laws of Robotics, the Boston Business Journal reports:

Think of a fire-fighting robotic dog that sacrifices itself to rescue people ... Or of a robot riveter that stops short of punching through a human worker's misplaced hand because the robot has been programmed to value human safety over assembly line efficiency.


Comments

Factory managers have historically valued assembly line efficiency over human safety. If OSHA requires all robots to do the opposite, that can only be a good thing.


I hope somebody reads XKCD: https://xkcd.com/1613/

(Incidentally a local institution)


[Image: XKCD, "The Three Laws of Robotics" (http://imgs.xkcd.com/comics/the_three_laws_of_robotics.png)]


Is it really a conscience if it's just programming a robot to avoid injuring another human, or to go into a building to save someone's life? It wouldn't be a question of morality, or altruism, or any of the other mysterious processes operating in our brains. We've developed a conscience over however many gazillions of generations of evolution, presumably because it's been - for the most part - evolutionarily advantageous. (Recent political events may disprove that.) In some way, it has to have helped us survive. Programming a robot to race into a burning building isn't the same thing as programming in a conscience; it's programming in self-sacrifice. Morality and empathy, presumably, wouldn't factor into it at all.

If, however, they want to try to program empathy into their robots, I suggest they check out Ex Machina first. Bad idea. Real bad idea.


“Listen,” said Ford, who was still engrossed in the sales brochure, “they make a big thing of the ship's cybernetics. A new generation of Sirius Cybernetics Corporation robots and computers, with the new GPP feature.”

“GPP feature?” said Arthur. “What's that?”

“Oh, it says Genuine People Personalities.”

“Oh,” said Arthur, “sounds ghastly.”

A voice behind them said, “It is.” The voice was low and hopeless and accompanied by a slight clanking sound. They span round and saw an abject steel man standing hunched in the doorway...

“Ghastly,” continued Marvin, “it all is. Absolutely ghastly. Just don't even talk about it. Look at this door,” he said, stepping through it. The irony circuits cut into his voice modulator as he mimicked the style of the sales brochure. “All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.”

As the door closed behind them it became apparent that it did indeed have a satisfied sigh-like quality to it. “Hummmmmmmyummmmmmm ah!” it said...

"Thank you the marketing division of the Sirius Cybernetics Corporation," said Marvin, and trudged desolately up the gleaming curved corridor that stretched out before them. "Let's build robots with Genuine People Personalities," they said. So they tried it out with me. I'm a personality prototype. You can tell, can't you?"


A conscience is the capacity to make moral or ethical judgments. AI is nowhere near the level where the appropriate reaction to these sorts of stories is anything other than laughter.

An apt analogy: us in 2015, talking about the morality and ethics of the machine (as opposed to of the engineer who builds it), is like the Montgolfier brothers in the 1700s contemplating a flight to the Moon in their hot-air balloon. Sure, it's a first step in that direction, but a whole lot of stuff we can't yet even picture has to happen between here and there.


Why Self-Driving Cars Must Be Programmed to Kill.

Maybe it's not "ethics" in the sense of something we would ponder or act on, but somehow the software operating a robot is going to have to make decisions based on the ethics rules we set for it.
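To make that concrete, here is a minimal sketch in Python of one way such rules might be encoded: a fixed, human-authored priority check that vetoes unsafe actions before efficiency is even considered. Everything here (the Action type, the rule list, the riveter scenario) is hypothetical for illustration; it is not from the Olin College work or the article above.

# Hypothetical sketch: hard safety rules screen every candidate action;
# efficiency is only optimized among the actions that survive the screen.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    endangers_human: bool = False
    throughput_gain: float = 0.0  # e.g., rivets per minute

# Human-authored rules, checked in priority order; a violated rule
# vetoes the action outright.
RULES = [
    ("never endanger a human", lambda a: not a.endangers_human),
]

def permitted(action):
    """True only if no rule vetoes the action."""
    return all(check(action) for _, check in RULES)

def choose(actions):
    """Pick the highest-throughput action among those the rules allow."""
    allowed = [a for a in actions if permitted(a)]
    return max(allowed, key=lambda a: a.throughput_gain, default=None)

# The riveter case: driving the rivet is more "efficient," but a hand is
# in the way, so the safety rule forces the less efficient choice.
best = choose([
    Action("drive rivet", endangers_human=True, throughput_gain=5.0),
    Action("pause line", throughput_gain=0.0),
])
print(best.name)  # -> "pause line"

The "ethics" here is nothing but a lookup in rules the engineers wrote down; the decision procedure can be mechanical even though the values behind it are ours.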


This is not something 'coming up fast.' Whether to create tools designed to encourage ethical operation has always been part of the story of technological evolution.

I know it's common nowadays, but I still get a bit squicked when this sort of language is used. Anthropomorphizing unthinking, unsentient tools is dangerous: it subtly divests both the creators and the users of that technology of responsibility for how it is used. (cf. the business technology of the 'corporate entity')

I agree with Roman - self-driving cars are not alive, not self-aware, and not capable of making ethical decisions. Those attributes belong to the engineers and programmers who build them, and to the people who choose to use them.
