What's in this for me?
Find out your inner workings and become a pioneer
- Discover your cognitive biases when making ethical, privacy-related decisions
- Compare your ethics against the average human, get to know yourself better, and share your results
- You'll become one of the first humans to ever teach machines privacy. Learn more.
- Your name will be (optionally) featured in the Hall of Fame as one of the first official human instructors of the AI
- Your knowledge will help humanity defend against malicious privacy threats
What's this for?
Learn your biases – while helping build an AI that reads privacy policies for you
This is an academic experiment to build an Artificial Intelligence that will read privacy policies for you – and protect you against the threats it finds. It will keep you safe on an increasingly dangerous internet. Read more about what we're building and what it will look like.
What will you do with the data of this experiment?
Anonymous, ethical data processing.
At Guard, we're huge privacy advocates. We really do care about privacy, and that's not your typical marketing slogan. We believe privacy should be a human right, and we're working hard to empower everyone so this becomes a reality. The data gathered in this experiment will be handled in the most ethical and private way. Read how.
Here's what we will do with your data in plain English:
- We collect only what's needed to train the AI (i.e., the result of each decision)
- This data is never linked to any personal information and always remains anonymous
- This is always handled as aggregated data and never reveals personal info
- We ask for optional demographic data (age, ethnicity, education level...). We need it to ensure our data is not biased towards any particular demographic, which is what makes the experiment academically valid.
- We will only use this data to (a) train the AI and (b) write a thesis about it. We will never sell it.