Have you ever read a privacy policy?
The digital services we use are constantly gathering information about us, and the mere existence of this data makes us vulnerable in ways we can't yet anticipate. The worst part is that we keep agreeing to their terms because we don't really understand what privacy policies say – and neither do computers. Guard is an AI that reads privacy policies for you, but like any other AI, it needs to be taught. This site is in part an experiment to teach computers the meaning of human privacy, so they can protect us.
This experiment aims to gather a human perspective on moral decisions made by machine intelligence. In other words, we need to teach machines what is acceptable and what is not with regard to privacy.
To do that, we'll show you actual privacy practices from some of the world's best-known services, and you will choose which of two given sentences seems more ethical and more respectful of human privacy.
This is the first research experiment of its kind. Teaching artificial intelligence the meaning of ethics in privacy has never been done before, so join us and become a pioneer!
These surveys are meant to be ethical dilemmas: given two sentences, it may be that neither is better than the other, or that both are neutral or equally bad. You should still select the one that looks more ethical or more privacy-friendly based on your own judgement. There is no right or wrong answer.
The results of this experiment will be used to train an artificial intelligence that helps build a more privacy-friendly future for the internet. The data collected will be treated as privately and ethically as humanly possible.
Let’s teach machines what’s acceptable and what’s not.