

Most of this is just marketing crap from Anthropic.
Finding vulnerabilities in code and generating complex, multistep exploits with publicly available models is possible now. The biggest hurdles are setting the correct context and actually knowing what to look for. Any "guardrails" against this behavior are easily bypassed by framing the detection and exploit generation as a legitimate dev-style question, even in the trickiest cases.
They likely just trained a model without guardrails in this case.
What they are doing here is over-hyping a problem and framing it like they are the only ones with a solution. LLM security issues are more in focus now that companies have dumped a ton of resources into building AI systems they don't really understand.
I am quite literally an expert in security and know the community quite well. Of course, there were some raised eyebrows at this, but that was about it. It's the big company execs who are calling this the next big thing, because money. (The article basically reflects my opinion. There were some reputable people quoted, but then there was Jeetu Patel (Cisco) going all weird with this. Frickin idiot.)
TBH, I have written off all the AI shills I knew in the industry. Sure, make a buck where you need to, but goddamn, don't turn full fucking evangelist.
Disclaimer: I am paid to be an expert in security for a day job, but I still think I can be an idiot with this stuff. Meh. It’s paid the bills for the last 20 years.