What's this for?
Teaching machines through moral dilemmas
This is an academic experiment to gather a collective human perspective on privacy. We show you two privacy dilemmas, and you judge which one you think is more privacy-friendly. This data can help us understand privacy better and build an AI that understands privacy the way we do. Each data point helps the AI learn what is and is not acceptable with regard to human privacy. The end goal is to teach machines so they can keep us safe on an increasingly dangerous internet.
How does this work?
State of the art NLP
This is an experiment on Artificial Intelligence (AI) and Natural Language Processing (NLP). We use recent work on Recurrent Neural Networks (RNNs) to build a language model theoretically capable of telling privacy-friendly sentences apart from dangerous ones. For example, it could learn to predict that "we sell your data" is a threat, whereas "we anonymize your data before it reaches us" is good for privacy. To our knowledge, this approach has not been tried before. And like any deep learning model, it needs tens of thousands of training data points, hence this experiment.
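To make the idea concrete, here is a minimal sketch (not our actual model) of how an RNN can score a sentence for privacy-friendliness. The vocabulary, dimensions, and weights are illustrative assumptions; the weights are random and untrained, so the scores are meaningless until the model is trained on labeled examples like the ones this experiment collects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and dimensions, chosen purely for illustration.
VOCAB = ["we", "sell", "your", "data", "anonymize", "before", "it", "reaches", "us"]
EMB_DIM, HID_DIM = 8, 16

# Random (untrained) parameters; a real model learns these from labeled data.
E = rng.normal(0, 0.1, (len(VOCAB), EMB_DIM))   # word embeddings
W = rng.normal(0, 0.1, (HID_DIM, EMB_DIM))      # input-to-hidden weights
U = rng.normal(0, 0.1, (HID_DIM, HID_DIM))      # hidden-to-hidden (recurrent) weights
b = np.zeros(HID_DIM)                           # hidden bias
w_out = rng.normal(0, 0.1, HID_DIM)             # hidden-to-score weights

def privacy_score(sentence: str) -> float:
    """Run the RNN over the sentence, word by word, and return a score in (0, 1)."""
    h = np.zeros(HID_DIM)
    for word in sentence.lower().split():
        if word not in VOCAB:
            continue  # toy model: skip out-of-vocabulary words
        x = E[VOCAB.index(word)]
        h = np.tanh(W @ x + U @ h + b)  # recurrent state update
    logit = float(w_out @ h)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes the score into (0, 1)

print(privacy_score("we sell your data"))
print(privacy_score("we anonymize your data before it reaches us"))
```

Training would adjust the weights so that sentences humans judged privacy-friendly score near 1 and threatening ones near 0; that is exactly what the decisions collected here supply.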
What's in this for me?
Become a pioneer. Discover your own biases.
- At the end of the survey you will discover your cognitive biases when making privacy-related decisions
- You'll become one of the first humans to ever teach machines privacy
- Your name can optionally be featured in the Hall of Fame as one of the first official human instructors of the AI
- Your knowledge will help humanity defend against malicious privacy threats
What will you do with the data of this experiment?
Anonymous, ethical data processing.
- We collect only what's needed to train the AI (i.e., the result of each decision)
- This data is never linked to any personal information and always remains anonymous
- It is always handled in aggregate and never reveals personal info
- We ask for optional demographic data (age, ethnicity, education level...). This is needed to ensure our data is not biased toward any particular demographic, which is what makes the experiment academically valid.
- We will only use this data to (a) train the AI and (b) write a thesis about it. We will never sell it.