Another busy week has passed in Las Vegas, and Black Hat 2023 was full of remarkable sessions and intriguing discussions. I really like the way this amazing community comes together to share experiences and knowledge.
A lot has happened in the industry over the past year. One of the biggest looming topics was the advancement of AI and what it means for security, so anticipation was high as Jeff Moss (aka Dark Tangent) introduced the keynote from Maria Markstedter, founder of Azeria Labs.
Before Maria took the stage, Jeff shared his perspective on AI, and it was one of the best explanations I have heard to date: AI is all about turning analysis problems into prediction problems, accelerating analysis and quickly generating predictions with possible outcomes. It becomes critical that AI accuracy is measured and made transparent so security teams can make decisions with high confidence.
Jeff also mentioned a recent example where Zoom had changed its Terms of Service to allow online meetings to be used to train its AI algorithms, with no option for users to opt in or out. This raises questions about control, about whether companies want to be part of AI training models, and about privacy.
Maria's keynote highlighted many important topics that we must address as AI adoption accelerates. She gave a brief history of information processing and AI. As we have learned all too often, adding security too late can spell disaster, and it appears we might repeat that same mistake with AI. The industry has frequently talked about security by design and security by default, and now the question is whether we are applying those lessons to AI models and algorithms or whether we will bolt on guardrails later.
It's always a challenge to balance moving fast with proceeding cautiously, as the two conflict with each other, but it appears that the AI race has started and is in full motion. It's possible that we'll leave security behind as a result. Moving forward, we must approach AI with responsibility, and it's up to the security industry to save the world when AI goes rogue.
The keynote made me think about training models and how AI training could become a privacy nightmare. Privacy in recent years has become a Digital Rights Management issue; with AI, it could become a digital DNA rights management issue. It could become a question of who owns the rights to my online persona, and what can stop AI models from cloning it and creating multiple versions of me in a digital world.
Another important topic Maria raised was the critical need for identity and access management of AI agents, as ultimately they become just another form of identity that needs access to neural networks, large language models (LLMs), data and algorithms. How much access should we allow? Does this mean it's important to apply the principle of least privilege to AI agents, and how can we ensure they do not become overprivileged and get out of control? AI privileged access will become an important security control, ensuring that an AI agent has access only to the minimum dataset needed to make the best predictions.
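To make that concrete, here is a minimal sketch of what least privilege could look like when an AI agent is treated as just another identity. All names and structures here are hypothetical illustrations, not a real IAM product or API: the agent carries an explicit set of dataset grants, and every data request is denied by default unless a grant exists.

```python
# Hypothetical sketch: least-privilege data access for an AI agent identity.
# None of these names reflect a real product API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """An AI agent modeled as just another identity with explicit grants."""
    name: str
    allowed_datasets: frozenset = field(default_factory=frozenset)

def fetch_dataset(agent: AgentIdentity, dataset: str) -> str:
    """Gate every data request: deny by default, allow only explicit grants."""
    if dataset not in agent.allowed_datasets:
        raise PermissionError(f"{agent.name} has no grant for {dataset!r}")
    return f"records from {dataset}"

# The agent is granted only the minimum dataset it needs for its predictions.
summarizer = AgentIdentity("meeting-summarizer",
                           frozenset({"meeting-transcripts"}))
```

The design choice worth noting is deny-by-default: an overprivileged agent is impossible to create by omission, because access exists only where a grant was deliberately added.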
Maria ended the keynote with the big question of whether and when AI will replace human cybersecurity personnel. She believes it at least hasn't happened yet; what matters now is that cybersecurity professionals become skilled with AI.
AI has arrived. There’s no question about that. We must protect it from being abused and we’ll need clear algorithm “explainability” to build trust and acceptance. Today, we are no longer starting with a blank slate. AI will accelerate decisions and automation, and cybersecurity will be vital to its success and safety.
Joseph Carson, chief security scientist and Advisory CISO, Delinea