return2ozma@lemmy.world to Technology@lemmy.world, English · 21 days ago
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns (futurism.com)
cross-posted to: mentalhealth@lemmy.world
ageedizzle@piefed.ca, English · edited 16 hours ago
deleted by creator
affenlehrer@feddit.org, English · 21 days ago
Also, the LLM is just predicting the next token, not selecting it. And it isn't limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
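The point above can be sketched with a toy generation loop. This is a minimal illustration with a fake "model" and made-up role tokens (`<|user|>`, `<|assistant|>`, `<|end|>` are assumptions, not any real tokenizer's markers): the sampling loop has no concept of roles at all, so nothing in it prevents the model from emitting a user-role token next. In a real inference engine, only the chat template and stop-token configuration keep the model speaking as the assistant.

```python
import itertools

def make_toy_model():
    # Stand-in for a real LLM forward pass: yields a canned continuation
    # one token at a time. A real model would compute logits over the
    # whole vocabulary from the context instead.
    canned = itertools.cycle(["<|user|>", "What", "about", "me?", "<|end|>"])
    def next_token(context):
        return next(canned)
    return next_token

def generate(next_token, prompt_tokens, max_new=5):
    # The loop just appends whatever token the model predicts next.
    # Role markers are ordinary tokens to it -- it never checks them.
    context = list(prompt_tokens)
    for _ in range(max_new):
        context.append(next_token(context))
    return context

# Prompt ends after a completed assistant turn; the "model" then happily
# continues by opening a user turn of its own.
prompt = ["<|user|>", "Hi", "<|end|>", "<|assistant|>", "Hello", "<|end|>"]
out = generate(make_toy_model(), prompt)
```

Production stacks prevent this by templating the prompt so generation starts inside an assistant turn and by treating the end-of-turn token as a stop sequence; strip either piece of configuration and the "roles" evaporate.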