An interesting point to discuss is the integrity of AI. Sci-fi movies have often predicted reality, and what worries me is that in almost all of them the AI joins the "dark side" and fights against humans. If a machine is self-learning and its goal is to outperform others (e.g. a self-driving car trying to beat its competitors), how long will it take to learn the old human vices and business tricks? Will we have good or bad AI?