Florida officials are opening an investigation into OpenAI and ChatGPT, its popular chatbot product, in part over its alleged role in helping plan a mass shooting at Florida State University last year. James Uthmeier, the state’s attorney general, announced the probe Thursday morning in a video statement on X. “We’ve also learned that ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” Uthmeier said.

  • WoodScientist@lemmy.world · 2 days ago

    We’re not really prepared for the democratization of knowledge, or at least applications of knowledge, that LLMs might enable. Imagine a powerful jailbroken LLM. You ask it how to make an effective remotely operated bomb. You then direct it to not only prepare instructions, but create an augmented reality overview that you can view through a pair of smart glasses. It projects images onto your environment and literally guides your hands through the process of making a powerful bomb. No thought required; just move your hands along with the projection. There’s a reason we have mass shooters but not many mass bombings. It’s not as easy as one might think, and it carries a high risk of the would-be bomber exploding themselves instead. But this? It eliminates all the guesswork; all you have to do is align your hands with what the goggles tell you.

    On the less evil side, imagine doing the same thing for medical care. Imagine you could put on a pair of AR goggles and be guided through the process of performing a surgery. Imagine a world where even though it’s illegal, untrained people in increasing numbers are performing major surgeries on each other. An extreme response to the cost of medical care.

    Sure, LLMs are deeply flawed on many axes. But they do get it right often enough to matter. Even if the bomber’s LLM produces a dud, or a bomb that goes off during assembly, one time in twenty, that would still dramatically increase the accessibility of home-built explosive devices. And that could be the case across many disciplines and applications.

    • brianpeiris@lemmy.caOP · 2 days ago

      I understand you’re trying to consider both sides of this for the sake of argument, but the issue I have is that it justifies current, real-world harm in the name of a hypothetical (and arguably unlikely) future benefit.