Can someone who knows more about AI agents explain this to me?
Axios quotes an Anthropic spokesperson re: the Claude leaks, who says that this new capability is coming: "A 'persistent assistant' running in background mode that lets Claude Code keep working even when a user is idle."
Isn’t that what they claim “agents” are doing already? Or am I missing something? I would hate to learn that “agents” are just a marketing term and they don’t actually do the thing these companies claim… ;)
Currently the AI only runs for a short time after you provide a prompt. So say you ask it to ‘draft a letter to my congressman demanding an end to the war’, the AI will read what you wrote and output its interpretation of what you want, then it will stop.
What they’re talking about here is something very different, something which can continue processing inputs all of the time. It would be ‘aware’ of (depending on what you give it access to) emails coming in, what you’re working on in other programs, calendar events, etc. The idea is that it could potentially interrupt you with suggestions, maybe even anticipate what you will want and do it for you.
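In code terms, the difference is roughly this. A toy sketch (every name here is made up, and `llm_complete` is a stand-in for a real model call): the one-shot version runs once per prompt, while the persistent version sits in a loop consuming incoming events.

```python
import queue
import threading

def llm_complete(prompt: str) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"response to: {prompt}"

# One-shot: runs once when the user prompts it, then stops.
def one_shot(prompt: str) -> str:
    return llm_complete(prompt)

# Persistent: keeps consuming events (emails, calendar items, file
# changes) from a queue and can act without a fresh user prompt.
def persistent(events: "queue.Queue[str]", stop: threading.Event) -> list[str]:
    actions = []
    while not stop.is_set():
        try:
            event = events.get(timeout=0.1)
        except queue.Empty:
            continue  # user is idle; keep waiting for the next event
        actions.append(llm_complete(f"new event: {event}"))
    return actions
```

The key point is that the persistent loop's "input" is a stream of events rather than a single prompt, so it keeps running while you're away.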
Obviously this is going to be risky at first. We’ve already seen stories of AIs deleting entire projects, what could they get up to if they’re allowed to be your online stand-in with access to everything on your device?
> What they’re talking about here is something very different, something which can continue processing inputs all of the time. It would be ‘aware’ of (depending on what you give it access to) emails coming in, what you’re working on in other programs, calendar events, etc. The idea is that it could potentially interrupt you with suggestions, maybe even anticipate what you will want and do it for you.
I really thought that was the pitch for these already-existing agents.
Isn’t that exactly what’s described in this story from 2 months ago, where an AI agent executes a series of tasks that end with this Meta exec getting her email deleted? That sounds way more complicated than “draft a letter to my congressman.” She even writes “I had to RUN to my Mac mini like I was defusing a bomb”, implying that she was away from her desk. So what’s the difference between this already-existing tech and the new “persistent assistant” that can work while the user is idle?
The AI in the article you linked was Open Claw, which is an open-source version of one of these persistent AIs, so you’re right. It links to LLMs like Claude, but Anthropic haven’t actually released their own version yet, which is why it was showing up in the original files as ‘built but not yet shipped’.
Ah this makes sense. Thank you!!
The term “agent” is very broad. It encompasses all sorts of levels of function. Claude Code runs when you give it instructions, and it will keep running until it finishes those instructions, but then it stops.
A persistent agent would find a new task on its own, even if the user didn’t explicitly tell it to work on that.
I sincerely hope that when it comes to code this only applies to changes that were discussed and user-approved, with a Plan step in front. I’d hate to come back to find it had refactored my entire app, with the usual schema drift, and tried to run pwsh commands even though my app runs on a different system.
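The approval gate being asked for here is easy to picture in code. A hypothetical sketch (none of this reflects how Claude Code actually implements Plan mode): the agent can only propose a plan, and nothing executes until the user has explicitly approved it.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list[str]
    approved: bool = False  # flipped only by an explicit user action

def propose(task: str) -> Plan:
    # In a real agent, the model would generate these steps.
    return Plan(steps=[f"step for: {task}"])

def execute(plan: Plan) -> list[str]:
    # Hard gate: refuse to touch anything without approval.
    if not plan.approved:
        raise PermissionError("plan not approved by user")
    return [f"ran {step}" for step in plan.steps]
```

The point of the hard `PermissionError` is that a persistent agent can keep generating plans in the background while idle, but it can never act on one until you come back and sign off.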
See, an agent is more than one model; a model is just the parrot. It’s one component in the agent. You still need systems around it to manage data, run and serve it, handle interactions, etc. The agent can theoretically do all of that, but it depends on how you build it.
So Claude-X is the model, Claude Code is the agent infrastructure.
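A toy sketch of that split, with everything hypothetical: the "model" is a pure text-in, text-out function, and the "agent" is the surrounding loop that parses tool requests, runs the tools, and feeds results back.

```python
# The model: pure text in, text out (the "parrot"). A real one would
# be an LLM API call; this fake just asks for a clock tool once.
def model(prompt: str) -> str:
    if "TOOL_RESULT" in prompt:
        return "DONE: it is 3pm"
    return "CALL_TOOL: clock"

# The agent infrastructure: tools, plus the loop that drives the model.
TOOLS = {"clock": lambda: "3pm"}

def agent(task: str, max_steps: int = 5) -> str:
    prompt = task
    for _ in range(max_steps):
        reply = model(prompt)
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        tool_name = reply.removeprefix("CALL_TOOL:").strip()
        # Run the requested tool and feed the result back to the model.
        prompt = f"{task}\nTOOL_RESULT: {TOOLS[tool_name]()}"
    return "gave up"
```

Swap the fake `model` for Claude-X and grow the loop (permissions, memory, scheduling) and you get something shaped like Claude Code.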