AI has been kind of a joke for a while. Where it worked well, it was invisible (e.g. smartphone soft keyboards), and where it didn't, people had a field day (look up videos of Scottish people trying to use voice assistants like the Amazon Echo). Then DALL-E 2 was released to the public. And then Midjourney. Suddenly, there are multiple paid services that use AI to generate children's stories, both the text and the corresponding images, from your prompts. It's all happening more quickly than most people anticipated, I think.
In security, anti-virus had a big win with machine learning. So much so that it unseated the industry's largest pure-play vendors (Symantec, McAfee), who didn't respond quickly enough to the trend to survive the massive customer exodus. Beyond next-gen AV, the marketing copy suggests the impact of AI/ML should be massive, but in reality it seems entirely overblown.
I've tested several products claiming to use AI/ML to better detect attacks, and the failure of these models has been complete, even in the most controlled and prepped circumstances. AI-generated images didn't offer much to security teams, but the moment OpenAI made ChatGPT available to the public, security folks started exploring what it could do.
The quality of results I've seen has been astonishing. Ask it "why should I be a CISO" and it gives a response that, published as a blog post, no one would ever guess was written by AI. It can effortlessly give remediation guidance for vulnerabilities and help reverse engineer software alongside IDA Pro. I think it might be a stretch to say that it could help with security's alleged talent shortage, but folks are definitely going to explore the limits of what it can do, and I wouldn't be surprised to see it embedded in commercial products before long.
Perhaps AI/ML will revolutionize security products after all; we just needed better AI/ML tech from outside our industry to make it happen.