
A philosophical question



The OP makes no reference to AI.

 

It's implicit in the poser

What do the designers of a driverless car decide the car should do in a similar situation?

Can they decide?

 

I had assumed that the OP wasn't talking about cars rolling down hills after the family labrador has knocked the handbrake. :)


Radio 5 this morning; I cannot remember the name of the guy who was being interviewed, but he was a professor from Huddersfield University. As the interview was on driverless cars and heavy trucks, it's safe to presume he must be well versed in these matters. According to him, driverless cars will not happen, but he can foresee trains of driverless trucks going down the motorways in convoy.

 

Angel1.

 

We already have that... it's called the railway :)


All agreed, but the OP was wondering about AI machines, not people, and an AI machine needs a decision matrix. It can't just be along for the ride.
Since no decision matrix can overrule the laws of physics, the OP could usefully be referred to Asimov's Three Laws of Robotics, pursuant to the first of which (if implemented adequately) an AI car would likely follow the same approach (brake and avoid, crash if needed) that I did out of reflex.
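As a minimal sketch of what such a "decision matrix" could look like, assuming invented harm scores (the Manoeuvre names and numbers below are made up for illustration, not taken from any real driving system):

```python
# A hedged sketch of "brake and avoid, crash if needed" as a decision matrix
# with Asimov-style ordering: harm to humans always outranks damage to the
# vehicle. All scores are invented assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    human_harm: float    # estimated probability of injuring a person
    vehicle_harm: float  # property damage only

options = [
    Manoeuvre("brake_only", human_harm=0.9, vehicle_harm=0.0),        # can't stop in time
    Manoeuvre("brake_and_swerve", human_harm=0.0, vehicle_harm=1.0),  # hit the bollard instead
]

# Lexicographic ordering: minimise human harm first; vehicle damage only breaks ties.
best = min(options, key=lambda m: (m.human_harm, m.vehicle_harm))
print(best.name)  # -> brake_and_swerve: crash the car rather than hit the child
```

Ordering the key as (human harm, vehicle damage) is what makes it First-Law-ish: no amount of property damage can ever outweigh a risk to a person.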

Since no decision matrix can overrule the laws of physics, the OP could usefully be referred to Asimov's Three Laws of Robotics, pursuant to the first of which (if implemented adequately) an AI car would likely follow the same approach (brake and avoid, crash if needed) that I did out of reflex.

 

Asimov's law breaks down immediately in the OP's very real scenario though.

 

From a commercial perspective (I know you'll like this as an IP chap), who is going to buy or use a car which kills the owner/occupant, as in my scenario where the algorithm says killing the occupant is better than killing the 6 children on the crossing?

 

---------- Post added 07-02-2018 at 13:56 ----------

 

Do children have less economic value than the OAPs who are costing the nation billions in healthcare costs, pensions and other benefits?

 

Yes, children do have less economic value than OAPs. By way of easy explanation: OAPs still pay 11% of income tax.


Asimov's law breaks down immediately in the OP's very real scenario though.
Not in the first combination:

You know that if you brake you will not be able to stop and you will kill the child

What do you do?

 

1) swerve to miss the child

 

BUT

 

2) swerve, you crash and probably die.

'will kill' > 'probably die' = 1st law says to swerve and avoid the child ;)

 

Scenario 3) as posted is irrelevant: the car occupant's family is not involved.

 

Scenario 4) as posted depends on the probability of killing the 6 OAPs at the time of impact vs the probability of killing the car occupant.

 

Scenario 5) as posted depends on the probability of killing the 6 children at the time of impact vs the probability of killing the car occupant (a sketch of that comparison follows below).
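Purely as an illustration of that "probability vs probability" comparison, here is a toy expected-deaths calculation. Every number is an invented assumption, and names like scenario_4 and expected_deaths are hypothetical; nothing here reflects any manufacturer's real algorithm:

```python
# Toy version of scenario 4): compare expected deaths per manoeuvre, where
# each manoeuvre maps to (people_at_risk, probability_each_dies) pairs.
# All probabilities are made up for the example.

def expected_deaths(outcomes):
    """Sum over groups of (number of people) x (probability each dies)."""
    return sum(people * p for people, p in outcomes)

scenario_4 = {
    "carry_straight_on": [(6, 0.7)],  # six OAPs, assumed 70% fatality risk each
    "swerve": [(1, 0.5)],             # one occupant, assumed 50% fatality risk
}

choice = min(scenario_4, key=lambda m: expected_deaths(scenario_4[m]))
print(choice, expected_deaths(scenario_4[choice]))  # -> swerve 0.5 (beats 4.2)
```

Scenario 5) is the same arithmetic with different (and far harder to defend) probabilities plugged in; the maths is trivial, the choice of numbers is the whole ethical problem.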

From a commercial perspective (I know you'll like this as an IP chap), who is going to buy or use a car which kills the owner/occupant, as in my scenario where the algorithm says killing the occupant is better than killing the 6 children on the crossing?
IP is not automatically synonymous with marketing USPs. It's more of a "corporate Swiss Army knife", and never a guarantee of commercial success. A fact of corporate life I'm constantly having to explain to MDs, CEOs, CFOs, <...> ;)

It's incredibly daft as a question: the reason it is brought up is to make us fear AI. But it is so exaggerated that it is clearly propaganda against machine logic. I nearly had a 12-year-old on the bonnet, and my right foot plus assisted braking saved her life without question.

 

If you are wondering about credentials: I have been involved with research into similar topics for a long time.

 

No, it's an ethics question. Ethics comes into our lives frequently, and some people have to make ethical decisions in their work: often people in medical professions, but also people working in AI. The recent debate about military robots is a good example. Ethics is a common module on philosophy courses, so the question is very relevant.


I was in that very situation over two decades ago. A kid ran out of a jennel between two cars, in a very narrow street with cars parked on one side. I was coming down at 25-30. Young and still inexperienced, I shouldn't have been doing any more than 15-20.

 

My first reaction was to swerve and brake at the same time. I (still) can't explain it, because it happened far too fast. I just knew instantly (and I mean nanosecond-instant) that there was no way I could emergency-stop in time. So as I braked, with the kid about the middle of the road as I came to his level, I swerved towards the jennel, about a car length into the line of parked cars.

 

I avoided the kid, probably by no more than a couple of feet. I knackered the left-side axle, including the wheel hub (which had the shape of the bollard I crash-stopped against stamped into it).

 

A good outcome, all things considered. Oh, and there's your answer OP: in real life, it's all reflexes, you're (consciously speaking) only just along for the ride ;)

 

Christ, how many near misses have you had? Remind me never to get in a car with you :bigsmile:


Once these 'driverless cars' get onto the road, I can foresee many problems.

 

For instance, a Bugatti's or Koenigsegg's 'algorithm' will be heavily weighted towards the preservation of itself rather than pesky humans and what-not. Quite rightly so.

 

Something like a Volvo will be programmed to top itself milliseconds before having to make the ethical decision, safe in the knowledge that whatever happens afterwards has absolutely nothing to do with it, thus remaining posthumously blameless... not to mention that Volvo's safety credentials remain unspoiled, a top selling point for the Swedish car-manufacturing bumpkins.

 

If car manufacturers are obliged to fit the same standardised 'decision chip' to their cars, will this open up a whole new market for 'customised decision chip' programmers? If so, I see no point in self-driving cars in the first place; just like Elon Musk, who appears to be figuring out how to put them safely into orbit around Mars. Top man!

 

 

Personally, I enjoy driving, so will never bother with such devices.

