The AI safety mindset: 12 rules for a safer AI future by Cassie Kozyrkov

https://www.youtube.com/watch?v=EjBXZrQ7fTs&feature=youtu.be&ab_channel=Robotex

  1. Don’t be distracted by science fiction

    The science-fiction version of AI is about personhood, i.e. autonomous entities. In reality, AI is built by humans, like every other technology.

    Every tool is better than humans at something: paper and pencil hold more memory, a bucket holds more water, and so on. The question is to identify what, specifically, AI is better at.

    It is about “explain by examples” rather than “explain by instructions”.

  2. Remember that the objective is subjective

    The objective is subjective to some context. For example, a cat/not-cat detector should treat a tiger differently if the context is “the detected cats will be kept as pets.”

  3. Strive for decision intelligence

    https://youtu.be/EjBXZrQ7fTs?t=494 Amplify the human decision-maker.

  4. Wish responsibly

    This is about the choice of datasets and the choice of objective. Be careful how you express the wish, so that its true spirit is captured.

  5. Think like a site reliability engineer

  6. Test everything!

  7. Always use pristine data for testing

    Just as we test humans in an exam with questions they haven’t seen before, AI systems need to be tested on data they haven’t seen before.

  8. Get in the habit of splitting your data
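    Rules 7 and 8 can be sketched as a three-way split (a minimal illustration with toy stand-in data, not from the talk): carve off a pristine test set first, then split the remainder into training and validation data.

    ```python
    import random

    examples = list(range(50))          # stand-in for labeled examples
    random.Random(0).shuffle(examples)  # fixed seed so the split is reproducible

    test = examples[:10]          # pristine: never looked at during development
    validation = examples[10:20]  # used to compare candidate models
    train = examples[20:]         # used to fit the model

    print(len(train), len(validation), len(test))  # 30 10 10
    ```

    The key habit is that the test set is set aside before any modeling starts and only used once, so it plays the role of exam questions the system has never seen.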

  9. Avoid jumping to conclusions

  10. Make sure your data are representative

  11. Open the textbook with analytics

    If the author of a textbook is prejudiced, the student will pick up the same prejudices. The teacher needs to analyze the textbook before giving it to the student to learn from.

    Datasets, like textbooks, also have human authors.

  12. Seek a diversity of perspectives