• 6 Posts
  • 613 Comments
Joined 3 years ago
Cake day: June 14th, 2023

  • The categories they used for “sabotage” (entering proprietary information into a different AI, using unapproved chatbots, and using low-quality AI responses as-is) seem like they were put together so the failure of the AI rollout can be blamed on employee sabotage, rather than on employers wedging AI into a bad use case or not rolling it out properly.

    The first two just seem like the company having issues with people going straight to ChatGPT and using its output as-is, and the third seems more like people who don’t really care passing along the AI output because they’re required to use it.

    None of that comes across as outright sabotage the way the organisation or the article seem to imply. All three seem like reasonable end-points of telling people to use AI and giving them metrics to meet, or giving them a not-great interface, so they just go off and use a different AI tool, because it’s all AI and basically the same thing, right?


  • If the entire world had access to free healthcare, chances are research and development would grind to a halt unless they also funded research and development. Taxpayers would need to be willing to pay a company hundreds of millions of dollars if they discovered a useful product.

    I don’t see why it would. A company would still invest in research if it thought it had a chance to sell the result to the healthcare system, for example. It wouldn’t be the first or last time something like that happened, and the latter case isn’t too different from how it works already.

    Consider insulin, for example. Research into it, and into drugs for treating diabetes, doesn’t happen exclusively in the US.