The power of AI and its ever-growing prevalence in our lives rightly makes us question whether we, as a society, are ready for it. As practitioners, technologists and researchers, yes, we’re ready. However, our data, on which AI so heavily depends, tells an entirely different story.
We believe data is a true mirror of society. Used correctly, it can be a positive force for change, but it must also be handled with care, for whilst it is good on the whole, it also accurately reflects the bias that exists in every corner of the world. Discrimination is no longer accepted or tolerated, and if our data is rotten, then so may be the AI that readily consumes it.
To build responsible AI, we must first address the problem at its foundation: the bias that exists in all data. Bias in data leads to incorrect interpretations of a model's behaviour and, more importantly, to companies potentially falling foul of regulatory restrictions. We've recently witnessed a number of class-action lawsuits, including Google's age-discrimination suit and the lawsuit over Uber's biased rider ratings, and we expect more regulators, especially in the financial and insurance sectors, to start actively scrutinising the fairness of decision-making systems.
The Synthesized Data Platform is already used by numerous companies, from startups to international conglomerates, to deliver high volumes of high-quality data assets for development and testing in a privacy-preserving manner. The platform incorporates the crucial components for privacy-preserving data sharing, data curation, and the creation of entirely new synthesized data assets.
We're now releasing bias mitigation and fairness scoring capabilities to enable every organisation not only to understand the biases in its data, but also to correct them. They're available immediately as part of the Synthesized Data Platform.
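To make the idea of a fairness score concrete, here is a minimal sketch of one widely used metric, the demographic parity gap: the largest difference in positive-outcome rates between the groups of a protected attribute. This is an illustration only, not the platform's actual method, and the column names and toy data are entirely hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(rows, attr, outcome):
    """Return the largest difference in positive-outcome rates
    between any two groups of a protected attribute.
    0.0 means the groups receive positive outcomes at equal rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        group = row[attr]
        totals[group] += 1
        positives[group] += int(row[outcome])
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy loan-approval data (hypothetical schema).
data = [
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0}, {"gender": "F", "approved": 0},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 0}, {"gender": "M", "approved": 1},
]
# F is approved 25% of the time, M 75% of the time: a gap of 0.5.
print(demographic_parity_gap(data, "gender", "approved"))  # 0.5
```

A real scoring engine would compute many such metrics (equalised odds, disparate impact, and so on) across every protected attribute at once, but the underlying comparison is the same.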
Furthermore, we believe fairness and the ethical use of data should be key elements of any data-driven company. In keeping with Synthesized's core values, we're launching the Synthesized Community Edition, a freemium version of the platform, to enable any organisation to root out data bias and promote fairness, diversity, and inclusion. We believe this benefits not only businesses but also wider society.
You can apply here for the Community Edition.
Synthesized Community Edition requires no coding or deep technical expertise to get started. It couldn't be easier: simply upload a structured data file, such as a spreadsheet, to kick-start the process, and from there our intelligent platform takes over. On data bias, the platform is trained on a wide array of regulatory and legal definitions of contextual bias, and it automatically identifies issues across attributes such as gender, age, race, religion, sexual orientation, and more. Sign up using this link.
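Once a bias has been identified, one classic way to correct it is the "reweighing" scheme of Kamiran and Calders: assign each row a weight so that, in the weighted data, the protected attribute and the outcome become statistically independent. The sketch below is a simplified illustration of that general idea, not the platform's implementation, and the column names and data are hypothetical.

```python
from collections import Counter

# Toy hiring data with a skew: group "A" is hired far more often.
rows = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]

def reweigh(rows, attr, outcome):
    """Weight each row by (expected count under independence) /
    (observed count) for its (attribute, outcome) cell, so that the
    protected attribute and the outcome decouple in the weighted data."""
    n = len(rows)
    attr_counts = Counter(r[attr] for r in rows)
    out_counts = Counter(r[outcome] for r in rows)
    cell_counts = Counter((r[attr], r[outcome]) for r in rows)
    return [
        attr_counts[r[attr]] * out_counts[r[outcome]] / n
        / cell_counts[(r[attr], r[outcome])]
        for r in rows
    ]

weights = reweigh(rows, "group", "hired")
# The weighted hire rate is now identical (0.5) for both groups.
for g in ("A", "B"):
    num = sum(w * r["hired"] for w, r in zip(weights, rows) if r["group"] == g)
    den = sum(w for w, r in zip(weights, rows) if r["group"] == g)
    print(g, num / den)
```

A model trained on the weighted (or correspondingly resampled) data no longer inherits the original skew between the two groups, which is the essence of correcting bias at the data level rather than patching the model afterwards.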
The platform's inherent simplicity means it can be used in every industry with immediate results. Finance companies can create fairer credit ratings, insurers can assess claims more equitably, human resources teams can eliminate bias from their hiring processes, and universities can ensure all admission decisions are made on a fair and equitable basis.
And that's just the start. In the coming weeks and months, we're committed to providing the community with more functionality that unleashes and amplifies the value of data while adhering to the best data practices.