The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. Of course, robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in and devise a means of doing so. But with the rise of machine learning and AI, this is an important and interesting philosophical concept to examine.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
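One way to see how the laws interact is to model them as a strict priority ordering over candidate actions: Law 1 is an absolute filter, and among the actions that pass it, obedience (Law 2) outranks self-preservation (Law 3). The sketch below is purely illustrative, with all names and the scenario invented for this post; it is not how any real robot or AI system is built.

```python
# Illustrative sketch only: the Three Laws as a priority-ordered decision rule.
# All class names, fields, and the scenario below are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool = False   # Law 1: direct harm to a human
    allows_harm: bool = False     # Law 1: harm to a human through inaction
    obeys_order: bool = False     # Law 2: follows a human's order
    preserves_self: bool = True   # Law 3: keeps the robot intact

def choose_action(candidates):
    """Pick an action under the Three Laws, applied in strict priority order."""
    # Law 1 is absolute: discard any action that injures a human
    # or allows a human to come to harm.
    safe = [a for a in candidates if not (a.injures_human or a.allows_harm)]
    if not safe:
        raise RuntimeError("no action satisfies the First Law")
    # Among safe actions, Law 2 (obedience) outranks Law 3 (self-preservation).
    safe.sort(key=lambda a: (not a.obeys_order, not a.preserves_self))
    return safe[0]

# Hypothetical scenario: a human is about to be struck by falling debris.
actions = [
    Action("shield the human", preserves_self=False),
    Action("obey the order to stay put", allows_harm=True, obeys_order=True),
    Action("retreat to safety", allows_harm=True, preserves_self=True),
]
print(choose_action(actions).name)  # -> "shield the human"
```

Note how the ordering does the work: the robot sacrifices itself and disobeys an order because both alternatives would let the human come to harm, which the First Law forbids outright.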
Here’s an illustration of why the laws are ordered the way they are. Tell us your thoughts in the comments.