June 3, 2021

The Future of AI: Holding Ourselves Accountable


We all have biases, some implicit and others explicit. The former can be especially dangerous, particularly as they creep into decision-making algorithms via incomplete or low-quality data. Artificial intelligence is a powerful tool, but it is only as good as the data it is fed.

If AI models are trained on poor-quality data, their performance deteriorates significantly. More importantly, individuals could face illegal and unjust discrimination in any number of decisions, from credit applications being turned down to insurance cover being denied. In other words, lives are affected.

It is no wonder conversations are being sparked about holding organisations liable when they deploy such technology, and about providing clarity on how decisions are made. In fact, just recently, the European Union proposed regulations on the use of AI to encourage exactly that. They are calling for Explainable AI. But what is it?

Explaining Explainable AI

In order to ensure that our AI systems are making decisions in line with our values and intentions, we need to understand how our models reach their conclusions. We need to know what the decision drivers are. AI is meant to be a tool that enhances our own decision-making processes, not one that replaces them altogether. To put it another way, choices should not be delegated to AI with full independence from human intervention.

Unfortunately, artificial intelligence traditionally runs complex calculations on a wealth of data in the backend to build models and determine outcomes. It can do so far more efficiently than a data scientist could, but the downside is that the decision-making process essentially becomes a black box. Data practitioners may see the information going in and out, but everything in between is obscured. Explainable AI attempts to transform that black box into a glass one: an aquarium, if you will.

Of course, for many organisations, the fear is that more regulations will only present further impediments to innovation.

Yet, this does not have to be the case.

Regulations Drive Responsible Innovation

In many ways, regulations simply push organisations to do better. Choosing to work towards smarter, more responsible AI is not just a societal obligation but a sound business decision too. Organisations that choose not to address bias put themselves at a disadvantage. For example, bias in data could lead to unattractive offerings and result in a loss of customers and, consequently, a loss of business.

Granted, achieving transparency is no easy feat and there is no silver-bullet solution. According to IBM’s ‘Global AI Adoption Index 2021’, 58% of organisations cited building models on data with inherent bias as among the biggest barriers to developing trusted AI.

Nevertheless, there is reason to be hopeful, as solutions already in development today can help address bias and improve trust in the technology.

Looking Towards the Future

There are platforms available today, such as Synthesized, that can assess datasets in real time, pinpoint instances of bias, and alter marginal distributions as desired to accommodate underrepresented classes. Rather ironically, with the help of AI, new datasets can also be generated to cover a much broader range of scenarios than the original dataset captured. In this way, a more complete and fair dataset can be built for robust AI models.
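To give a flavour of what rebalancing a marginal distribution can look like under the hood, here is a minimal sketch in pandas that oversamples an underrepresented class until the distribution matches a chosen target. The column name and target shares are hypothetical, and this is a generic illustration rather than Synthesized’s actual API.

```python
import pandas as pd

# Hypothetical dataset in which 'gender' is an imbalanced sensitive attribute.
df = pd.DataFrame({
    "gender": ["male"] * 900 + ["female"] * 100,
    "income": range(1000),
})

# Inspect the marginal distribution of the sensitive attribute.
print(df["gender"].value_counts(normalize=True))  # male 0.9, female 0.1

# Rebalance towards a target marginal distribution (here 50/50) by
# resampling each class, with replacement for the underrepresented one.
target = {"male": 0.5, "female": 0.5}
n = len(df)
rebalanced = pd.concat(
    df[df["gender"] == value].sample(int(share * n), replace=True, random_state=0)
    for value, share in target.items()
)

print(rebalanced["gender"].value_counts(normalize=True))  # now ~50/50
```

In practice, a generative approach would synthesise new, realistic records for the underrepresented class rather than simply duplicating existing rows, but the target-distribution idea is the same.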

Indeed, simulated data, or AI-generated data that looks and behaves like original data, might just play an important role in the future of Explainable AI. With simulated data, adjustments can easily be made by removing or generating data for each attribute. By experimenting this way and examining the changes in output, organisations can gain greater insight into what drives their model’s decisions.
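As a sketch of this kind of experiment, the example below uses scikit-learn’s permutation importance: each attribute is perturbed in turn and the resulting drop in model performance is measured, flagging the attributes that drive decisions. The dataset and model here are stand-ins, not a specific product workflow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data; in practice this would be the (simulated) business dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each attribute in turn and measure how much the model's
# accuracy drops: large drops flag the main decision drivers.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean score drop {drop:.3f}")
```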

With that said, choosing who you partner with is equally important. Tackling bias is not for the faint of heart, so it is key to ensure that your partners have, at their core, a genuine passion for achieving fairness. Your partner should seek not only to identify biases but also to mitigate them, focusing on their root cause: the data.

A Final Note

As a community, we need to work towards a more responsible use of AI, and this includes holding ourselves accountable. Businesses need to recognise that the decisions they make with the help of AI will come under mounting scrutiny. What is important to note, however, is that artificial intelligence is not the real problem. It is neither good nor evil. Rather, AI models are only as good as the data they are trained on. The key to responsible and widespread adoption of AI is high-quality, highly representative, and unbiased data. Synthesized can help you achieve this.