Tesla Model S and the Three Laws of Robotics
On Wednesday, reports surfaced of a nearly brand-new Tesla Model S catching fire. See the video posted on YouTube:
Speculation was rampant that the highly touted and supposedly insanely safe battery pack was the culprit. The rumors did not go over well with TSLA stock, which, by coincidence, had also been downgraded by Baird earlier that morning.
Tesla quickly fired back, though, with an official statement:
As soon as the vehicle collided with the object, it started running diagnostic tests. It found a problem and told the driver to pull over and immediately get out of the vehicle. After that, the vehicle started to smoke, and then the fire started. The design of the Model S purposely vented the fire to the front and side of the vehicle so that the driver compartment was not compromised.
Did everybody fully digest that? The Model S, a piece of machinery, performed a self-diagnostic after hitting debris in the road. On its own, it determined that it was not safe to continue and notified the driver to pull over and exit the vehicle, saving the occupant.
The vehicle’s last act of life was to save a human.
Prolific science fiction writer Isaac Asimov came up with the notion of the Three Laws of Robotics (made famous by the movie I, Robot).
They are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
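The laws form a strict priority ordering: each law yields to the ones above it. As a toy illustration of that precedence (purely hypothetical, and nothing like actual vehicle software), you could encode each candidate action's law violations as a tuple and pick the action whose highest-priority violation is least severe:

```python
from dataclasses import dataclass

# Toy encoding of Asimov's Three Laws as a priority tuple.
# All names and fields here are illustrative assumptions, not real autopilot code.

@dataclass
class Action:
    name: str
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    destroys_robot: bool   # violates the Third Law

def law_cost(a: Action) -> tuple:
    # Violations of higher laws dominate: tuples compare element by
    # element, so a First Law violation outweighs any lower-law concern.
    return (a.harms_human, a.disobeys_order, a.destroys_robot)

# The Model S scenario: keep driving (risking the occupant) versus
# pulling over and letting the car burn (sacrificing itself).
keep_driving = Action("keep driving", harms_human=True,
                      disobeys_order=False, destroys_robot=False)
pull_over = Action("pull over and vent the fire", harms_human=False,
                   disobeys_order=False, destroys_robot=True)

best = min([keep_driving, pull_over], key=law_cost)
print(best.name)  # -> pull over and vent the fire
```

Under this ordering, self-destruction is the cheapest violation on the list, which is exactly the trade the Model S made: its own existence in exchange for its occupant's safety.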
The Tesla Model S demonstrates quite clearly that it follows the first of the three laws. The question then becomes: does it follow the rest?