The AI safety mindset: 12 rules for a safer AI future by Cassie Kozyrkov
https://www.youtube.com/watch?v=EjBXZrQ7fTs&feature=youtu.be&ab_channel=Robotex
Don’t be distracted by science fiction
The science-fiction version of AI is about personhood - autonomous entities. In reality, AI comes from humans, like all technologies.
Every tool is better than humans at something: paper and pencil hold a longer, more reliable memory; a bucket holds more water than your hands; and so on. The question is to identify what, specifically, AI is better at.
What AI is better at: letting us "explain by examples" instead of "explain by instructions".
Remember that the objective is subjective
The decision depends on a subjective context, e.g. a cat/not-cat detector should treat a tiger differently if the context is that the detected cats will be kept as pets.
Strive for decision intelligence
https://youtu.be/EjBXZrQ7fTs?t=494 Amplify the human decision-maker.
Wish responsibly
This is about the choice of datasets and the choice of objective. Be careful about how the wish is expressed so that its true spirit is captured.
Think like a site reliability engineer
Test everything!
Always use pristine data for testing
Just as we test humans in exams with questions they haven't seen before, AI systems need to be tested the same way.
Get in the habit of splitting your data
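The splitting habit above can be sketched in plain Python; the dataset, split fraction, and seed below are illustrative assumptions, not anything prescribed in the talk:

```python
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Shuffle the data and hold out a "pristine" test set the model never sees."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = data[:]                 # copy so the original order is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

examples = list(range(100))            # stand-in for 100 labeled examples
train, test = train_test_split(examples)
print(len(train), len(test))           # 80 20
```

The key design point is that the held-out examples are set aside before any training or tuning happens, so evaluating on them is like giving the exam questions the student has never seen.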
Avoid jumping to conclusions
Make sure your data are representative
Open the textbook with analytics
If the author of a textbook is prejudiced, the student will pick up the same prejudices. The teacher needs to analyze the textbook before handing it to the student to learn from.
Datasets, like textbooks, also have human authors.
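One simple way to "open the textbook" is to audit the label distribution before training, so skew is visible up front. A minimal sketch (the labels and the `audit_labels` helper are hypothetical, for illustration only):

```python
from collections import Counter

def audit_labels(labels):
    """Summarize label frequencies so dataset skew is visible before training."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical labels for a heavily skewed "textbook" a model would learn from.
labels = ["cat"] * 90 + ["not_cat"] * 10
proportions = audit_labels(labels)
print(proportions)  # {'cat': 0.9, 'not_cat': 0.1}
```

A real audit would go further (per-group breakdowns, missing-value checks, source provenance), but even this basic count catches imbalances that would otherwise be learned silently.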
Seek a diversity of perspectives