Bug bounty programs already struggle with low-quality reports and submissions taken directly from scanners with no follow-up analysis by the submitter. Here's a case where a researcher apparently used ChatGPT to create a vuln report out of thin air. ChatGPT asserted there was a vuln in a smart contract, generated a writeup, and requested a reward for identifying an authorization bypass.
But the vuln didn't exist, either in theory or in practice. It just sounded plausible to a non-expert. To quote from the article, "I was most surprised by the mixture of clear writing and logic, but built off an ignorance of 101-level coding basics."
DARPA's Cyber Grand Challenge demonstrated the potential of automated vulnerability detection and exploitation. ChatGPT and its ilk may eventually improve on that, but it doesn't seem like something to worry about anytime soon.