March 15, 2022

Act now to avoid ‘vicious cycle of AI bias’

Author:
Synthesized

Financial services and fintech businesses can gain a significant competitive advantage by acting now to identify and remove unfair bias in their data, ahead of potentially tougher regulations that could force companies to act in future.

That’s the view of Nicolai Baldin, founder of the award-winning data and DataOps firm Synthesized, speaking on “What are we doing to stamp out AI bias in financial services?”, the latest edition of the ‘Open Finance’ podcast, released today (Mon 14th) and produced by leading financial technology provider Finastra.

Baldin, CEO of Synthesized, said: “People are going to care more and more in the coming years about algorithms being fair and data being unbiased. It’s highly possible that regulations will become even stricter so it’s better for businesses to adjust now so that they can save resources and ensure that we protect users and customers sooner.”

“Many companies can quickly gain competitive advantage if they put fairness first. There are very clear incentives for companies to self-regulate but in terms of regulatory frameworks, we see different proposals regarding data bias and fairness. There’s a need for a practical solution to ensure data and AI is as free from bias as humanly possible.”

Adam Lieberman, Head of Artificial Intelligence and Machine Learning at Finastra, speaking on the same podcast, said failure by society to deal with data bias would mean “a vicious cycle of bias in artificial intelligence.”

“We need to seek an understanding of bias and prioritise it, otherwise we are just going to replicate it. Machine learning in a nutshell is about using historical data, drawing patterns and insights and making predictions.

“If we don’t define what bias is, and then look for it in our data to make sure it’s as bias-free as it can be, then we are just going to be learning to replicate this bias that lives in our datasets. It’s a cycle of bias that can be prevented.”

Data bias causes bias in AI and ML models, and that bias can in turn harm, and cause discrimination against, legally protected groups.
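To make the point concrete, here is a minimal sketch of what “measuring data bias” can mean in practice: comparing outcome rates across a protected attribute in a small, entirely hypothetical loan-approval dataset. The group labels, figures, and the disparate-impact metric shown are illustrative only and are not taken from Synthesized’s or Finastra’s products.

```python
# Hypothetical loan-approval records; "group" stands in for a protected attribute.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(rows, group):
    """Fraction of records in the given group with a positive outcome."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

rate_a = approval_rate(records, "A")  # 0.75
rate_b = approval_rate(records, "B")  # 0.25

# Disparate-impact ratio: values far below 1.0 flag a disparity that any
# model trained on this data would tend to learn and replicate.
disparate_impact = rate_b / rate_a
print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={disparate_impact:.2f}")
```

A check like this is the kind of dataset-level audit that tools such as FairLens automate across many attributes and metrics at once.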

Synthesized is an AI and data management business which is keen to promote ethical and responsible use of AI. Last year, Synthesized made its FairLens tool public and freely available: the world’s first data-centric open-source software for identifying, visualising and measuring data bias.

Synthesized has also developed the first agile test data transformation platform, which lets organisations generate realistic synthetic data in a matter of minutes. It enables organisations to accelerate digital transformation and cloud adoption with best practices in data management, and it removes personal attributes along with the risk of data breaches, reputational damage, or fines for non-compliance.

Synthesized’s platform can be used to detect and mitigate inequalities and bias in data, in AI and machine learning training, and in software development.

Synthesized recently welcomed UNESCO’s adoption of the first ever global agreement on ethics in AI, but is also calling for a UK-wide code of conduct to be established which would give a “green tick” to companies that develop ethical and responsible AI transparently and in line with the UNESCO standard.