December 16, 2020

Towards Data Science

AI meets the law: Bias, fairness, privacy and regulation

Nicolai Baldin, Synthesized’s founder and CEO, recently joined the “Towards Data Science” podcast, hosted by Jeremie Harris. The fields of AI bias and AI fairness are still very young, and like most young technical fields, they’re dominated by theoretical discussions: researchers argue over what words like “privacy” and “fairness” mean, but do little to apply these definitions to real-world problems.

Laws like GDPR, passed by the European Union in 2016, are starting to impose concrete requirements on companies that want to use consumer data, or build AI systems with it. There are pros and cons to legislating machine learning, but one thing’s for sure: there’s no looking back. At this point, it’s clear that government-endorsed definitions of “bias” and “fairness” in AI systems are going to be applied to companies (and therefore to consumers), whether those definitions are well-developed and thoughtful or not.

Keeping up with the philosophy of AI is a full-time job for most, but actually applying that philosophy to real-world corporate data is its own additional challenge.


  • Introducing Synthesized
  • Europe vs. North America
  • Restrictions on companies
  • The distinction between “data controllers” and “data processors”
  • Justifying the added complexity on companies
  • Privacy of EU citizens
  • Cultural shift regarding privacy
  • Algorithmic bias and data bias
  • “Undesirable biases” to avoid versus “valid patterns” to use
  • Machine learning influence
  • Incentivising companies
  • Reporting and enforcement requirements
  • Policy adjustments
  • Self-regulation mechanisms companies adopt for compliance and to protect their reputation
  • What is synthetic data?