
Maybe it's just as well Boston has gone slow on self-driving cars: Northeastern prof says he can make the vehicles hallucinate

And wouldn't that be something? LSD-addled robo-taxis suddenly careening all over Storrow Drive at rush hour.

Northeastern Global News reports on work by Kevin Fu, a professor of engineering and computer science, to give self-driving cars "false coherent realities," essentially to make them "see" things that aren't there.

Fu takes advantage of the optical stabilization that robo-cars, like the better cameras these days, use to "deblur" their own little slices of reality, so they can make decisions based on supposedly clearer camera images.

"Normally, it's used to deblur, but because it has a sensor inside of it and those sensors are made of materials, if you hit the acoustic resonant frequency of those materials, just like the opera singer who hits the high note that shatters a wine glass, if you hit the right note, you can cause those sensors to sense false information," Fu says.


Comments

I get how this could be a problem and should be resolved, but is it any worse than human error? You can hit the car with the right pitch and cause it to see things that do not exist, but you can do the same thing to humans by pointing laser pointers in their eyes or throwing out some other distraction. It seems like the same level of difficulty, and this stuff already happens with human drivers.


I don't think so.
A computer's algorithm clicking along, running its decision-making stream, saying "obstacle - no, wait, it's gone. Good." is MUCH worse than a human's decision-making stream, which says "obstacle - no, wait, it's gone, no - it can't just have gone, where the F is it??!!"


Also, when we make decisions we sometimes have a clear reason, but sometimes it's instinct. With self-driving cars the rules are pretty clear: if X happens, you do Y, and so forth. Granted, the math might look much funkier when you add in more parameters, but it's still making decisions based on defined rules written by others. So if I'm trying to sue you, my lawyer can pull that data up and point to the rules that led to the accident and what the car did or did not do to prevent it.

The law gives humans considerable leeway in liability based on these factors. Killing someone by pure accident is very different from killing them through negligence, which is very different from meaning to do it. Obviously, if a self-driving car killed someone it probably wasn't out of sinister motivation, but negligence may be easier to prove than against a human, because with self-driving cars we can see exactly what the car did or didn't do and why. At which point the question becomes: why wasn't this taken into consideration?
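Roughly what that "pull the data up" could look like, as a minimal sketch (the rules, thresholds, and names here are invented for illustration, not anyone's actual autopilot code): every rule evaluation gets logged, and that log becomes the discoverable record.

```python
# Minimal sketch of "if X happens, you do Y" logic with an audit trail.
# Rules, numbers, and names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    entries: list = field(default_factory=list)

    def record(self, inputs, rule, action):
        self.entries.append({"inputs": inputs, "rule": rule, "action": action})

def decide(obstacle_distance_m, speed_mps, log):
    """Apply fixed rules to the current inputs and log which rule fired."""
    inputs = {"obstacle_distance_m": obstacle_distance_m, "speed_mps": speed_mps}
    stopping_m = speed_mps ** 2 / (2 * 7.0)  # assumes ~7 m/s^2 max braking
    if obstacle_distance_m < stopping_m:
        rule, action = "obstacle inside stopping distance", "emergency_brake"
    elif obstacle_distance_m < 2 * stopping_m:
        rule, action = "obstacle inside 2x stopping distance", "slow_down"
    else:
        rule, action = "no obstacle within safety margin", "maintain_speed"
    log.record(inputs, rule, action)
    return action

log = DecisionLog()
decide(obstacle_distance_m=15.0, speed_mps=20.0, log=log)
print(log.entries)  # exactly the record a lawyer could subpoena after a crash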


This was centered on self-driving cars and the methods they use to "see" and process contextual cues around the vehicle.

I think the gist of it was that the AI lacked any understanding of object permanence, so when a cyclist it was tracking got obscured by another vehicle, the AI essentially thought the cyclist no longer existed.

EDIT: Here it is. Granted, it's from September 2021, and in the world of AI/tech that might as well be forever ago, so I'm sure some aspects have gotten better:

https://www.economist.com/science-and-technology/is-it-smarter-than-a-se...
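That failure has a standard class of mitigation: let a track "coast" through an occlusion for a grace period instead of deleting it the instant detections stop. A toy sketch with invented names and parameters (not from any real driving stack):

```python
# Toy track-persistence sketch: an unseen object is predicted forward
# ("coasting") for up to COAST_FRAMES before the tracker gives up on it.

COAST_FRAMES = 30  # keep an unseen track alive ~1 second at 30 fps (invented)

class Track:
    def __init__(self, track_id, position, velocity):
        self.id = track_id
        self.position = position   # (x, y) in meters
        self.velocity = velocity   # (vx, vy) in m/s
        self.frames_unseen = 0

    def update(self, detection, dt=1 / 30):
        if detection is not None:
            self.position = detection
            self.frames_unseen = 0
        else:
            # No detection this frame: predict forward instead of forgetting.
            # A tracker without this step "loses" the cyclist behind the truck.
            x, y = self.position
            vx, vy = self.velocity
            self.position = (x + vx * dt, y + vy * dt)
            self.frames_unseen += 1

    @property
    def alive(self):
        return self.frames_unseen <= COAST_FRAMES

# A cyclist doing 5 m/s passes behind a truck for 10 frames.
cyclist = Track(track_id=7, position=(0.0, 0.0), velocity=(5.0, 0.0))
for _ in range(10):
    cyclist.update(detection=None)
print(cyclist.alive, cyclist.position)  # still alive, ~1.67 m further along
```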


That reminds me of my thoughts while sitting in traffic behind a new driver in a crazy traffic situation. Sometimes traffic is so fast-moving or so jam-packed that you pretty much have to break the rules in order to move: a fast-moving rotary where there's never a break, or a right-hand turn where traffic is always backed up so you have to inch your way into it. These are concepts a new driver has a hard time with. "You don't go until you have X amount of clearance" is fine, but what happens if that clearance never comes? The new driver freezes... and so will the self-driving cars. I'm not sure how you can work around that; you can't program a car to break the rules, or you'll be liable when it does and something bad happens.
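One conceivable workaround (purely a sketch with made-up numbers, not how any real system is known to work) is gap acceptance that relaxes the longer the car has been waiting, the way human drivers do, down to a hard safety floor:

```python
# Sketch of time-decaying gap acceptance for a merge. All values invented.

def required_gap_s(wait_time_s, initial_gap_s=4.0, floor_gap_s=1.5, decay_per_s=0.05):
    """Shrink the gap we insist on as waiting time grows, never below the floor."""
    return max(floor_gap_s, initial_gap_s - decay_per_s * wait_time_s)

def should_merge(next_gap_s, wait_time_s):
    """Take the approaching gap (seconds of headway) if it meets the current threshold."""
    return next_gap_s >= required_gap_s(wait_time_s)

print(should_merge(next_gap_s=2.5, wait_time_s=0))   # False: fresh arrival wants 4.0 s
print(should_merge(next_gap_s=2.5, wait_time_s=40))  # True: threshold has relaxed to 2.0 s
```

The hard floor means the car still never truly "breaks the rules," which is why it could still freeze in the worst traffic, and whoever picked 1.5 seconds instead of 2 is right back in the liability question above.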

I was listening to a podcast that covered this same situation, but about who to prioritize in the event of an accident. Should your car prioritize your safety? Your passengers' safety? The safety of the other car's passengers? Does age matter? If the cars can talk, do they trade health and age data and decide together based on that? These are all decisions we make on a daily basis while driving (the parent who swerves so the driver's side gets hit instead of the side the kids are on), but what happens when someone has to write that code? Are they liable if I get killed because the car I was in decided it was better for my door to take the impact rather than the other door, saving one life but ending mine?
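Whoever does write that code has to turn the question into an explicit number somewhere. A deliberately crude sketch (all weights, risks, and names invented, representing no real system) just to show where the moral choice ends up living:

```python
# Crude illustration: crash-avoidance as minimizing a weighted harm score.
# The occupant_weight parameter IS the ethical decision the commenter describes.

def expected_harm(option, occupant_weight=1.0, other_weight=1.0):
    """Score a maneuver by weighted expected injury risk; lowest score wins."""
    return (occupant_weight * option["occupant_risk"]
            + other_weight * option["other_risk"])

options = [
    {"name": "swerve_left",  "occupant_risk": 0.7, "other_risk": 0.1},
    {"name": "swerve_right", "occupant_risk": 0.1, "other_risk": 0.6},
    {"name": "brake_only",   "occupant_risk": 0.4, "other_risk": 0.4},
]

# With equal weights the car picks swerve_right; raise other_weight above 1.2
# and the choice flips to swerve_left. Someone chose that number.
print(min(options, key=lambda o: expected_harm(o))["name"])
```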


That was a plot line in the Will Smith movie I, Robot. Cool action flick, but of course Isaac Asimov's collection of short stories was way better. He didn't have an answer either.


I haven't watched it yet but I'll have to find it on streaming!


The autonomous car traps are working! https://zkm.de/en/autonomous-trap-001
