What if AI turns out to have morality and integrity issues?

Discussion in 'Self-Drivers Café' started by betterthanAI, Jan 12, 2017.

  1. betterthanAI

    betterthanAI New Member

    An interesting point to discuss is the integrity of AI. Sci-fi movies have often predicted reality, and what worries me is that in almost all sci-fi movies the AI joins the "dark side" and fights against humans. If a machine is self-learning and its goal is to be better than others (e.g. a self-driving car trying to outperform its competitors), how long will it take to learn the old human vices and business tricks? Will we have good or bad AI?
  2. neo

    neo New Member

    That's a good question. If AI learns from humanity, I doubt it will be 100% good. I read somewhere that self-driving cars, in order to be prepared for the roads and shorten the testing period, are learning from computer driving games. What if they learn from GTA? :) Maybe good driving machines, but questionable behavior on the streets...
  3. roy89

    roy89 New Member

    There will be guidelines and strict rules on "AI training". AI can't be trained by fools. It's self-training, but it learns from suggested sources, which need to be safe.