AI coding assistants like Claude Code, Cursor, and GitHub Copilot are becoming part of our daily workflow. They read our files, understand our codebase, and help us write code faster. But there's a problem: they can also read your .env files.
How about not letting the AI read anything of yours?
I'd go further: create a separate user account on your own machine, whose entire filesystem is walled off from the rest of the machine, and use LLMs only from there.
NO access to your normal accounts, your email, your browser history or bookmarks, NOTHING except that account's own stuff.
Machiavellianism is intrinsic to the companies that produce them, so we should presume it's intrinsic to the LLMs too; for some of them it genuinely is, and we don't know which ones yet.
Zero-Trust.
Just sandbox it instead.
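A minimal sketch of what "sandbox it" can mean in practice, assuming Docker is available. The image name (`assistant-image`) and mount paths are illustrative, not any particular tool's real setup; the script only assembles and prints the command (a dry run) so you can inspect it before running anything.

```shell
#!/bin/sh
# Sketch: run a coding assistant inside a container that can see ONLY the
# project directory -- no home dir, no dotfiles, no network by default.
# "assistant-image" is a placeholder; substitute whatever image you use.

PROJECT_DIR="$(pwd)"

# Assemble the command instead of executing it, so this stays a dry run.
CMD="docker run --rm -it \
  --network=none \
  --read-only \
  --mount type=bind,src=${PROJECT_DIR},dst=/workspace \
  --workdir /workspace \
  assistant-image"

echo "$CMD"
```

Dropping `--network=none` when the assistant genuinely needs API access is the usual compromise; the point is that nothing outside the bind-mounted project directory is visible at all.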
Instead, plant poisoned .env files: decoy credentials that act as canaries.
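A hypothetical sketch of the poisoned-.env idea: a script that writes a decoy .env whose values embed a unique canary marker, so if one of those "credentials" ever shows up in an outbound request, a log, or a leak, you know exactly which file was read. The variable names are just common examples, and it writes to `.env.decoy` rather than clobbering a real `.env`.

```shell
#!/bin/sh
# Sketch: generate a decoy .env full of canary credentials.
# The values are unique markers, not real secrets; keep the real .env
# somewhere the assistant cannot reach at all.

CANARY="canary-$(date +%s)-$$"   # unique-ish marker to grep for later

cat > .env.decoy <<EOF
# Decoy credentials -- anything that reads and exfiltrates these
# identifies itself.
AWS_SECRET_ACCESS_KEY=${CANARY}-aws
DATABASE_URL=postgres://user:${CANARY}-db@localhost:5432/app
STRIPE_API_KEY=sk_live_${CANARY}-stripe
EOF

echo "Wrote decoy .env with canary marker: ${CANARY}"
```

In the copy of the project the assistant actually sees, rename the decoy to `.env`; hosted services exist for the same trick (Thinkst's Canarytokens is the well-known one), alerting you the moment a planted token is used.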