

We’re not really prepared for the democratization of knowledge, or at least of applications of knowledge, that LLMs might enable. Imagine a powerful jailbroken LLM. You ask it how to make an effective remotely operated bomb. You then direct it not only to prepare instructions, but to create an augmented reality overlay that you can view through a pair of smart glasses. It projects images onto your environment and literally guides your hands through the process of making a powerful bomb. No thought required; just move your hands along with the projection. There’s a reason we have mass shooters but not many mass bombings. It’s not as easy as one might think, and it carries a high risk of the would-be bomber blowing themselves up instead. But this? It eliminates all the guesswork; all you have to do is align your hands with what the goggles tell you.
On the less evil side, imagine doing the same thing for medical care. Imagine you could put on a pair of AR goggles and be guided through performing a surgery. Imagine a world where, even though it’s illegal, untrained people in increasing numbers are performing major surgeries on each other. An extreme response to the cost of medical care.
Sure, LLMs are deeply flawed on many axes. But they do get it right often enough to matter. Even if the bomber’s LLM results in a dud, or a bomb that goes off during assembly, one time in twenty, that would still dramatically increase the accessibility of home-built explosive devices. And that could be the case across many disciplines and applications.

I’m hoping he bungles the negotiations so badly that it’s like the plot of a sitcom. In the end, the only way to make it work is for JD Vance to personally convert to Shia Islam.