On the latest edition of our podcast, Mind the Data Gap, host Nicolai Baldin sits down with Gillian Docherty to discuss all things AI. Nicolai and Gillian explore people's fears about the development and use of AI, the demand for more transparency, and how the world might eliminate unfairness and bias in data.
The following conversation has been edited for clarity and brevity.
I am delighted to be joined by Gillian Docherty, a leader in British computer science, described as one of the most influential women in technology. She believes that Scotland will become a world-leading destination for data science. Gillian is the CEO of The Data Lab, Scotland's innovation centre for data science and artificial intelligence.
Gillian also chairs the Scottish AI Alliance leadership circle, which is guiding the work to implement the recommendations laid out in Scotland's AI strategy, published in March this year.
Gillian: The Data Lab is part of our broader set of innovation centres here in Scotland, some with sector domains and specialisms, and some that are more cross-cutting in technology. We were set up about eight years ago, in response to the challenge that our organizations were not investing in R&D to the same extent as those in some other countries around the world.
We had some world-leading academics in certain fields, domains and areas, who worked with industry partners but often didn't engage with those here in Scotland. So the innovation centre program was set up to encourage those links and get more organizations active in research.
At The Data Lab, we focus on the use of novel and innovative data science and AI to help our organizations flourish. And we do that through a range of services and collaborative innovation, where we link organizations with academics, our team of data scientists, funding opportunities or business advice.
The goal of The Data Lab is to help Scotland maximize the value of data to change lives and make us a robust and vibrant country through the use of data science and AI. Our three priorities are: driving economic benefit for the country by supporting organizations to grow and develop new solutions and capabilities in data and AI; driving societal benefit; and addressing the climate emergency.
Driving societal benefit means supporting our public sector and our voluntary sector in their use of data and AI and in the solutions they offer to citizens, users or clients. And finally, the climate emergency is a challenge that applies around the world: how does The Data Lab support organizations to build new capabilities or adopt new ways of working that help to address it?
Gillian: So that’s a really good question. I've been at The Data Lab since the beginning, so I've seen a lot of developments over the last seven years.
Predominantly at the start, there were a lot of organizations that were progressive in their thinking. Scotland has a very significant financial services sector and fintech sector, and those organizations understood that they needed to use data.
I think the journey we've come on seven years later is that understanding of the challenge and the opportunity is now much more pervasive across the SME community. We've seen a growing number of tech startups where data and AI are key to some of the capabilities they're building.
And I think public awareness of the impact of data has increased significantly across those seven years. Throughout that time we've seen the tech giants' expansive use of data in the solutions they build, and the impact of that has become much more prevalent in society.
From an AI perspective, we've unfortunately seen over the years some challenging uses of AI that are biased, not representative or fundamentally unethical. The spotlight on that has increased quite a bit, specifically over the last year or two, which is fantastic.
Nicolai: I just want to touch on data bias and data being used by AI. I think we agree that AI can't really operate without data: we need data to train machine learning models, and we need data to test machine learning models as well. As we discussed previously, in a recent poll Synthesized conducted in Scotland, we learned that over two-thirds of people in Scotland are worried about how their data is used by organizations.
Gillian: I think that awareness has risen significantly. So I'm not surprised by that number. In terms of what we should do about it, if you look at Scotland's AI strategy, it's about driving trustworthy, ethical, and inclusive AI. And it is clear that being open and transparent is going to help increase individuals’ confidence in the use of data.
Regulation has a part to play, and GDPR is there; I think that has helped significantly. It's also about holding organizations to account in terms of their use of data. An element that is needed, and that we could do a lot more on, is education: providing people with the capability to ask the right questions. We're not about turning everyone into data scientists. It's about how we enable people in society to understand a little bit more, so that when they sign up for a service, participate in a digital environment, or buy from certain organizations, they can ask the right questions, understand the answers, and make an informed choice about whether to engage or not.
Nicolai: That's a very good point, Gillian, in terms of making sure that those rules are transparent and very simple to understand. If I ask myself, “Do I trust AI and the organizations handling my data?”, I don't, because I don't know how it's handled. This needs to be very transparent, and the information needs to be provided to people so that they know how their data is used.
And also I think that when we think of AI there are so many different aspects of AI, right? So there are different machine learning methods, there are different prediction algorithms, there are different recommendation systems and all of them are different.
It also depends on how they are constructed, how they're built. Needless to say, there is a big topic of explainability. So we want to ensure that when decisions are made by AI, those decisions can be explained.
Gillian: I think the results of the poll are very interesting and indicate that there needs to be more education. The question is how do we organize that effectively and how do we work collaboratively on educating society.
And it needs to be a collaborative effort between organizations, universities and the government. We already see some telling results: the fact that around two-thirds of people living in Scotland are worried about how AI and data are being used is a clear indicator of the need.
Nicolai: Gillian, something we've touched on is data bias. We know that if the data used by AI is biased, the decisions made by AI will be biased as well. According to the same poll, one in three people in Scotland would support the use of AI to understand and mitigate data bias.
Gillian: I think that's an interesting result. One in three, by definition, means two-thirds don't support the use of AI in discovering and eliminating unfairness and bias.
So again, I think there are some things to explore in that. Are people as aware as they could be of how AI would discover and eliminate unfairness? If you're not aware of how it would do so, how could you support it? So there's probably a need for some education, and for real-life examples of how AI can discover and eliminate unfairness and bias in data. But I also think there's another big element here.
It often takes a human in the loop, and teams with a diversity of backgrounds and expertise. We are seeing that extensively in the work we do: teams working with AI need to be much broader than they were in the past, augmenting data engineers, data scientists and machine learning experts with social scientists, humanists and ethics experts. How do we build a more diverse, robust team structure? Because often it will take a human to spot the unintended consequences of an algorithm, or the part of society it doesn't represent, whether in the data or in the way the AI is developed.
So I'm not necessarily surprised by that number, but I do think there's work to be done on using AI, people and humans together to make sure as much as possible, AI that is developed is trustworthy, ethical, and inclusive.
Gillian: I don't think so, not from what I'm seeing. I think there's a real openness and willingness to explore and understand; I don't think bias is taboo anymore, and it's encouraging that more conversations are happening around it. The key thing is action in the development of AI. We're working in Scotland on the development of an AI playbook: how do we give people the capability to, first, understand, and second, do something about it?
Nicolai: I couldn’t agree more, Gillian. We also see many of our clients looking into data bias and AI bias, implementing different solutions and working with partners who introduce those technologies. There is also growing awareness of the difference between AI bias and data bias. It goes without saying that if the data is biased, the AI using that data is going to be biased as well. But many companies now understand that biased data is used not only by machine learning algorithms but also by marketing teams, product teams, software engineers and test engineers. We need to make sure that bias is eliminated at a fundamental level, in the data pipelines, not only at the machine learning level, and we see many companies looking into this area, which is quite promising.
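To make "eliminating bias at a fundamental level" a little more concrete, here is a minimal, hypothetical sketch (not part of the conversation) of one simple fairness signal teams often start with: the demographic parity difference, i.e. the gap in positive-outcome rates between two groups in a dataset. The record layout and group names are invented for illustration.

```python
# Illustrative sketch: measuring one simple form of data bias, the
# demographic parity difference between two groups in a dataset.

def positive_rate(records, group):
    """Share of records in `group` with a positive outcome."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["outcome"] for r in rows) / len(rows)

def demographic_parity_difference(records, group_a, group_b):
    """Absolute gap in positive-outcome rates between the two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Hypothetical loan-approval records: group "a" is approved far more often.
records = [
    {"group": "a", "outcome": 1}, {"group": "a", "outcome": 1},
    {"group": "a", "outcome": 1}, {"group": "a", "outcome": 0},
    {"group": "b", "outcome": 1}, {"group": "b", "outcome": 0},
    {"group": "b", "outcome": 0}, {"group": "b", "outcome": 0},
]
gap = demographic_parity_difference(records, "a", "b")
print(gap)  # 0.75 vs 0.25 -> prints 0.5
```

A check like this runs against the raw data pipeline, before any model is trained, which is exactly why it can catch bias that would otherwise reach marketing, product and engineering teams as well as the machine learning models.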
Something we touched on at the beginning is that data bias is intertwined with AI bias, and AI bias is intertwined with AI explainability. There is definitely a need for AI to be more explainable: we want to understand how it works and how decisions are made. However, there is a dilemma, in the sense that if you want to innovate and provide better services and solutions, the AI that many companies develop becomes so complex that, although very precise, it is difficult to explain how it works.
Gillian: That's a really good question, and we're probably still working on it, along with your organization and many others: how do we balance that trade-off? I don't think it's crystal clear at the moment. The fundamental foundation is that if an AI decision or algorithm has an impact on an individual or on society, on their ability to get a loan or a mortgage or to participate in some other social programs, then we should be able to explain it. There are potentially some areas, in industrial sectors such as manufacturing or machine intelligence, where explainability doesn't have a direct impact on an individual.
So we've got to ask: what is the AI being used for, and what is the impact on individuals? Where there is an impact, I think we do have to draw a line around it being explainable. Significant research is going on in this space, and I know organizations like yours are working extensively in it. We need to continue the work on making very complex AI more explainable than it currently is.
Nicolai: I can't agree more with you, Gillian. I think it's also down to society to say which areas of AI should be explainable, so maybe there is a need for another poll. That is fascinating. We've talked a lot about AI being responsible, and we want to ensure that AI and data are used responsibly. However, as we understand, businesses often think in terms of metrics.
Gillian: I think an organization needs to hold true to its own values in everything it does, whether it's AI or not. And I believe AI is just another tool in the toolbox of how an organization operates, functions and brings products to market, or if the public sector supports its citizens with services etc. So AI is another tool for organizations to use, but fundamentally an organization and what it does and how it does it needs to hold true to its own values. And our clients, citizens, society and our patients are holding organizations more accountable now than maybe they have in the past.
And I think that will drive through to engagement, sales and revenue. I'd probably bring it fundamentally back to organizational values and what they stand for, what they believe in and how they go about doing their business. And equally that also then needs to apply in how they use AI.
Nicolai: But also, which business sectors would you say are treating AI responsibly, and can be used as an example for other companies? I think it's important even for us to be constantly thinking internally about the way we treat and develop AI, and to make sure we use all our methods in a responsible manner.
Gillian: I think there are some fantastic examples of the use of different types of technology, including AI. NatureScot's automatic classification of species in the Highlands, for example, feeds into a biodiversity challenge, which clearly is huge alongside climate change. There are great examples out there, and equally, I applaud organizations who have uncovered unconscious bias or unintended consequences later down the line, stopped, and readdressed those particular challenges. When these things are found, how organizations address them matters.
Nicolai: I fully agree, Gillian, more work needs to be done in terms of companies collaborating and designing those metrics so that we are very clear with the way we measure AI being responsible and also making sure that it's understood by society and also aligned with the government initiatives.
Nicolai: So the area of data bias is quite close to us, in the sense that we've focused a lot on enabling companies to understand how data bias works, how to identify it, how to mitigate it, and how to use AI to do that.
It's also very important, when we work with companies, to think about the architectures used to identify bias: the architecture of the AI, the architecture of the machine learning models, and making sure those are available and shared across the organization so that everyone understands how they have been used and what the benefits are. And it's very important to work with stakeholders and different business units to make sure we are aligned on the core business metrics they would like to drive, and on what responsible AI means for a specific organization.
What explainable AI means for a specific business unit is a conversation to be had with stakeholders, to make sure we are aligned on that. But it's very important to have a unified framework that lets a company say that a specific model for mortgages, loans or fraud detection is explainable, and that the AI used to make decisions is doing so in a responsible manner: that it is fair, that it is not making many biased decisions, and that there is full visibility of that. We definitely see a huge need for this, especially in the US, where these issues are extremely sensitive right now.
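As a small, hypothetical illustration (not drawn from the conversation) of what explaining a complex model can look like in practice, the sketch below uses permutation importance, a common model-agnostic technique: shuffle one feature's values across the dataset and measure how much the model's accuracy drops. The toy lending model, feature names and data are all invented for the example.

```python
# Illustrative sketch: permutation importance as one model-agnostic way to
# probe which features a "black box" decision function actually relies on.
import random

def toy_model(row):
    # Hypothetical black-box lender: approves when income exceeds debt.
    return 1 if row["income"] > row["debt"] else 0

def accuracy(rows, labels):
    return sum(toy_model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    values = [r[feature] for r in rows]
    rng.shuffle(values)
    permuted = [{**r, feature: v} for r, v in zip(rows, values)]
    return accuracy(rows, labels) - accuracy(permuted, labels)

# Hypothetical applications; labels come from the model itself, so the
# baseline accuracy is 1.0 and importances can never be negative.
rows = [
    {"income": 50, "debt": 10, "postcode": "A"},
    {"income": 20, "debt": 40, "postcode": "B"},
    {"income": 70, "debt": 5,  "postcode": "C"},
    {"income": 15, "debt": 60, "postcode": "D"},
    {"income": 90, "debt": 20, "postcode": "E"},
    {"income": 10, "debt": 30, "postcode": "F"},
]
labels = [toy_model(r) for r in rows]

# The model never reads "postcode", so shuffling it changes nothing.
print(permutation_importance(rows, labels, "postcode"))  # prints 0.0
```

A report like this, shared across business units, is one concrete form the "unified framework" could take: it gives stakeholders a shared, auditable answer to which inputs a mortgage, loan or fraud model actually depends on.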
And we see many companies right now expanding their diversity and inclusion departments and making sure that there is comprehensive reporting and monitoring of those aspects across different business units. But of course, more work needs to be done on data bias, and on making sure that organizations have the right framework to understand what data bias is and how to mitigate it.
Gillian: Good work Nicolai, keep it up.
Gillian: As a woman who has been in the technology field for almost 30 years now, I can say we have always been challenged. In fact, a very sad statistic is that even fewer girls and young women are taking computer science, and that is hard for me to understand.
I think the onus is on all of us working in the sector and the area of AI, and more broadly in technology, to showcase role models: successful, diverse individuals who work in the sector, and the kinds of roles that we do.
I think there's often a misconception, whether about computer science, tech, or more broadly, that it's the type of job where you sit on your own in front of a screen coding, not working in teams (albeit for the last two years we have all been sitting in front of a screen, but that's been the pandemic rather than AI).
We all have an onus to showcase the wonderful, fabulous roles, jobs and teams that you can be part of. In terms of advice: take any opportunities that present themselves. Even if you don't think you can do it, say yes, step through the door and take the risk. Surround yourself with a supportive network; I honestly can't emphasize enough how much I've relied on my network over the years. They've kept me sane many times, they're always a great sounding board, and they will support you. The other aspect is that we need diverse thinkers, and diversity in all its kinds (gender, race, socioeconomic, political), working in this field to eliminate some of the challenges from bias that we've talked about.
Gillian: Even with a degree in computing science, I had no idea what I wanted to do. I was fortunate to join IBM as part of their graduate recruitment, with no clear vision of where it would take me, and I was fortunate to work for IBM for 22 years. I did different roles and jobs every three to four years and got to work with some amazing leaders, clients and technology over those years. Then I had the opportunity to come to The Data Lab and build it.
And I'll be honest, Nicolai, I was a little bit nervous about that. “I'm a corporate person, this is all I know, I've not done a start-up organization before”, or so I thought. My network and my mentors encouraged me to go and do it. One of my coaches once said, “if it doesn't scare you, then it's not big enough”. So I made the leap, resigned and came to do this, much to my mom's horror; she asked, who is this Data Lab? But it's been just a fantastic journey.
Nicolai: That's fascinating, the mentorship. It's also important for us to think about how we provide the right mentorship to the younger generation, women and men, and make sure that the opportunities are clear and the successful examples are visible. That's also why we're keen to do this podcast, to make sure that people have those examples in front of them.
Thank you very much for the opportunity to join you on this podcast. And, it's been great to discuss all things data, AI, bias and women.