Business
December 7, 2023

Decoding generative AI: Navigating the reality and noise

At the recent Microsoft Security | BlackHat Europe Reception Panel, Synthesized discussed where generative AI stands right now. What makes it particularly special? How can we establish trust and ensure responsible AI? And how do we secure generative AI from the ground up?

We sat down with Don Brown, Field CTO, Denis Borovikov, CTO, and Nicolai Baldin, CEO, to delve deeper into these topics, which resulted in the following blog post.

There is a lot of hype around generative AI right now – almost every company is now claiming to be an AI company. We're starting to hear that some customers are confused: they don't understand what differentiates generative AI from the early machine learning use cases that have been around for years, and they are having a hard time separating the noise from the real innovation. This question has two parts:

a) First, what makes generative AI so special – why should business, technology, and security leaders care about it, and what makes it different from the existing way of doing things?

Nicolai B: Generative AI holds the promise of revolutionizing global business operations. Unlike traditional machine learning, it emulates human thinking, and it often excels in domains such as customer success, code writing, translation, document summarization, and insightful data extraction. These tasks, which appear across multiple industries, are nearly at the point where they can be easily automated. The impact is palpable now, as evidenced by major banks investing heavily in the technology. How does it differ from traditional AI? Traditional machine learning produces relatively basic outputs, while generative AI comprehends its inputs deeply and generates entirely new data: text, images, video, and more. The potential to revolutionize industries hinges on continued technological innovation, particularly model optimization – for example, billion-parameter models that can run on smartphones. Running and testing sophisticated Large Language Models (LLMs) on smartphones has become remarkably accessible.

Denis B: Traditional machine learning methods are used mainly for classification and prediction tasks. They replaced the more traditional statistical methods that were used for business intelligence, but machine learning models are still viewed as a method of insight extraction within business intelligence.

Generative AI allows you to work with and generate practically any content, not just statistical insights. Modern generative AI models can work with text, images, video, audio, documents, and more. This makes generative AI suitable for practically every business domain beyond business intelligence. Businesses have the opportunity to automate routine tasks everywhere - legal, operations, product, customer support, and so on.

Don B: Security leaders generally think in terms of threat models: documenting the known threats and their relative probabilities of occurring and succeeding. AI changes these models massively, even without new offensive capabilities, simply by doing everything orders of magnitude faster and at a larger scale. If a trained team of offensive attackers can simply direct an AI, it will move faster than defenders can introduce mitigations.

b) Second, when it comes to security and generative AI, what is real versus noise – in what scenarios is AI having a real impact today?

Nicolai B: Traditionally, datasets have been masked and then made available. Generative AI allows you to identify the core of the data - what information it is fundamentally communicating. It can take these insights and generate new, better data with more reliable insights, which is useful for application development, workflows, and so on. This helps solve the security problem of keeping the original data safe: lock the data down, but keep the lights on and the business running.
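
To make this concrete, here is a minimal, self-contained sketch of the idea in Python. It fits a deliberately simple generative model (a multivariate normal) to the numeric columns of a sensitive table, then samples synthetic rows that preserve the broad statistical structure without exposing any real record. Production systems, Synthesized's included, use far richer models; every name and number here is purely illustrative.

```python
# Sketch: replace masking with synthesis. Fit a simple generative model to
# sensitive data, then share only the synthetic sample.
import numpy as np
import pandas as pd

def fit_and_sample(df: pd.DataFrame, n_rows: int, seed: int = 0) -> pd.DataFrame:
    """Fit a multivariate normal to the numeric columns and sample synthetic rows."""
    rng = np.random.default_rng(seed)
    numeric = df.select_dtypes(include="number")
    mean = numeric.mean().to_numpy()
    cov = np.cov(numeric.to_numpy(), rowvar=False)
    samples = rng.multivariate_normal(mean, cov, size=n_rows)
    return pd.DataFrame(samples, columns=numeric.columns)

# The original (sensitive) table stays locked down; only the synthetic copy circulates.
original = pd.DataFrame({
    "age": np.random.default_rng(1).normal(40, 10, 500),
    "balance": np.random.default_rng(2).normal(5000, 1500, 500),
})
synthetic = fit_and_sample(original, n_rows=500)
print(synthetic.describe())  # similar means and variances, no real customer rows
```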

Denis B: Sci-fi-like scenarios of AI rebelling against us are hypothetical, far from the present, and unlikely. They are not impossible in theory, but by that logic we could start worrying about many things, such as a potential asteroid hitting the Earth. Having said that, there are far less futuristic scenarios in which AI is simply abused by wrongdoers: content generation and reasoning abilities can be applied to organize scams, cyberattacks, and the like.

Don B: Right now it's mostly IP concerns, the weaponizing of existing pipelines, and research. We are on the cusp of a new world, but it's still the cusp. The biggest worry right now is the use of AI to find new, undisclosed vulnerabilities in software. Same as it ever was, but at scale.

Customers and governments are rightfully concerned about responsible and trusted AI – which are broad categories that include responsible use, regulation, transparency, audit, and more. We know that customers won’t use technology that they do not trust – so, with that in mind, what should this room of technology leaders be thinking about to build trust and ensure responsible AI from the start?

Denis B: There are two fundamental approaches to handling safety issues: bureaucratic and technological. I doubt that bureaucratic measures would be at all effective against systems as complex and opaque as generative AI. I believe generative AI technology can be audited and controlled only by other generative AI technologies, especially in cases where the reasoning abilities of such systems surpass human abilities. So regulators should stimulate the market for gen-AI-based systems for audit, content filtering, and so on. There might be some government certification for such systems - like certified fake-news monitors.
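
As a thought experiment, here is a hedged Python sketch of what "gen-AI auditing gen-AI" could look like in practice: one model drafts a response, and a second, independently operated moderation model gets a veto. `call_moderation_model` is a hypothetical stand-in for whatever endpoint a certified auditor might expose - here it is stubbed with a trivial keyword check, not a real API.

```python
# Sketch: a second model audits the first model's output before release.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

def call_moderation_model(text: str) -> Verdict:
    # Hypothetical: in practice this would query a separately trained,
    # possibly government-certified moderation LLM. Stubbed for illustration.
    banned = ["how to build a weapon", "credential dump"]
    for phrase in banned:
        if phrase in text.lower():
            return Verdict(False, f"matched banned phrase: {phrase!r}")
    return Verdict(True, "no policy violation detected")

def guarded_generate(prompt: str, generate) -> str:
    """Generate a response, then let the auditor model veto it."""
    draft = generate(prompt)
    verdict = call_moderation_model(draft)
    return draft if verdict.allowed else f"[blocked: {verdict.reason}]"

# e.g. guarded_generate("hi", lambda p: "hello there") -> "hello there"
```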

Don B: This is likely an area where governments will need to step in and set standards, given the competing pressures of the markets in various countries. Outside of AGI, perhaps the hardest problem in AI right now is striking the balance between safety and losing ground to competitors that don't care about safety.

Nicolai B: How can you be responsible with AI? The challenge is that the ethical and legal parameters have not yet been fully defined. The EU AI Act was only announced recently, and according to Rishi Sunak, the UK is not going to legislate in the short term.

There is growing interest in the security of AI – protecting the data, models, and output. This is certainly not a new concept, but it is becoming front-and-center as generative AI goes more mainstream. What are some of the questions you are hearing from customers? What should those building generative AI solutions consider to ensure they are built securely from the ground up?

Denis B: There is very strong demand for hosted solutions where the customer has full control over the data and the model.

Don B: Beyond the classic defenses for actually protecting IP, this truly is still an area of R&D, and probably where most of the new class of "AI vulnerabilities" will come from. For instance, researchers discovered that simply by making ChatGPT repeat a word forever, they could induce it to recall some of the material it was trained on. These are things no one foresaw, and vendors are still trying to wrap guardrails around them. The current solution is to make it against the T&Cs - a truly soft control.
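
For illustration, here is a minimal sketch of a harder control than terms and conditions: an output-side guardrail that halts generation when the stream degenerates into the kind of single-token repetition that preceded the extraction behavior described above. The window size and ratio are illustrative assumptions, not values from the research.

```python
# Sketch: stop streaming tokens once output collapses into repetition.
from collections import Counter

def is_degenerate(tokens: list[str], window: int = 50, max_ratio: float = 0.9) -> bool:
    """True if the most recent window of tokens is dominated by a single token."""
    if len(tokens) < window:
        return False
    recent = tokens[-window:]
    most_common_count = Counter(recent).most_common(1)[0][1]
    return most_common_count / window >= max_ratio

def stream_with_guardrail(token_stream) -> list[str]:
    """Consume tokens from a model's stream, halting if the output degenerates."""
    tokens: list[str] = []
    for token in token_stream:
        tokens.append(token)
        if is_degenerate(tokens):
            tokens.append("[generation halted: repetition guardrail]")
            break
    return tokens

# e.g. stream_with_guardrail(iter(["poem"] * 200)) halts after ~50 tokens
```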

What is the one big takeaway for this room when it comes to security and AI – what must we do to ensure that organizations everywhere realize the value and promise of this massive technological leap?

Nicolai B: Data remains key.

Denis B: You should give organizations full control over how their data is used to train models, and over how those models are accessed.
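
A hedged sketch of what that control could look like in code: a per-organization policy that gates both whether an organization's data may enter a training run and which users may query the resulting model. All field and function names here are illustrative assumptions, not a real product API.

```python
# Sketch: explicit, org-owned switches for training use and model access.
from dataclasses import dataclass, field

@dataclass
class DataPolicy:
    allow_training: bool = False                            # may this org's data train models?
    allowed_model_users: set[str] = field(default_factory=set)  # who may query the model

def can_train(policy: DataPolicy) -> bool:
    """Gate a training pipeline on the owning organization's consent."""
    return policy.allow_training

def can_query(policy: DataPolicy, user: str) -> bool:
    """Gate model access on an explicit allow-list."""
    return user in policy.allowed_model_users

acme = DataPolicy(allow_training=True,
                  allowed_model_users={"analyst@acme.example"})
assert can_train(acme)
assert not can_query(acme, "stranger@elsewhere.example")
```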

Don B: The area where I am most concerned about AI is not a big bang where all of a sudden our lives are subsumed by an all-knowing, all-powerful piece of software; it's the use of AI to subtly influence public opinion over time. That time is already here.
