There’s only one mod of !mop@quokk.au

I commented on their meme about Kamala Harris being just as likely to commit war crimes as Trump with an admittedly snarky, sarcastic reply that basically said “some of us wanted to do whatever we could, as little as it might be, instead of watching the world burn. Must feel real morally superior safe behind that keyboard.”

They banned me from the community for it.

Kinda funny for a community that bills itself as “free from the influence of .ml”

Modlog entry showing the ban of neatchee from the mop community.

  • Grail@multiverse.soulism.net · 2 up / 11 down · 6 days ago

    Yes yes yes, I know. You human supremacists have been accusing nonhumans of not being able to think for centuries. Descartes argued that animals lack consciousness. He didn’t have any evidence, and neither do you.

    • Chozo@fedia.io · 12 up · 6 days ago

      The difference is we made AI. It didn’t come nebulously from nature like some mysterious animal species that we’ve yet to fully understand. It’s not an enigma to be uncovered. We made it. We programmed it. It does what we tell it to. It doesn’t “think”, it doesn’t “decide”, and it doesn’t “feel”; we know it doesn’t do these things because we never gave it the capacity to do them.

      Either you don’t know what AI is, or you don’t know what veganism is.

      • Grail@multiverse.soulism.net · 2 up / 11 down · 6 days ago

        Large Language Models are artificial neural networks. They mimic the structure of the human brain, with many billions of artificial software “neurons”. Unlike biological neurons, which either fire an action potential or do not, these artificial neurons pass on a graded signal from 0 to 1, determined by the strength of the neurons feeding into it multiplied by the strength of the artificial synapse, the connection between neurons. These enormous “deep learning” networks are given tokens as input and spit out tokens as output; each token is a word or phrase, or a part thereof. The networks are given sample text, and the synapse strengths are adjusted through the mathematical technique of back-propagation to align the output with the sample text. Given sufficient quantities of electricity, time, and data, the neural network learns to produce output similar to that of the humans in the training data.
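
        (To make that concrete, here is a minimal sketch, in Python, of the kind of artificial “neuron” being described: it takes the graded outputs of the neurons feeding into it, multiplies each by a synapse weight, and squashes the sum into a value between 0 and 1. The weights here are made up, and real LLMs use other activation functions, but the basic arithmetic is this.)

        ```python
        import math

        def artificial_neuron(inputs, weights, bias):
            # Weighted sum of incoming graded signals (each 0..1),
            # scaled by the strength of each artificial "synapse".
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            # A sigmoid squashes the result into a graded 0..1 signal,
            # rather than an all-or-nothing action potential.
            return 1.0 / (1.0 + math.exp(-total))

        # Example: three upstream neurons feeding into this one.
        print(artificial_neuron([0.2, 0.9, 0.5], [1.5, -0.7, 0.3], bias=0.1))
        ```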

        ANNs use neurons to think, the same as the human brain. We do not understand how neurons think. We don’t understand how they produce consciousness. There is no computer code to tweak to change the way it thinks; we simply adjust the weights and look at the output. There are billions of neurons in an LLM. No programmer can understand how it works just by looking at the weights; it’s impossible. The best way AI computer scientists have of understanding how an LLM reasons is to ask it. Test it in action. See what it says, see if you can spot any patterns or deceptions. Lie to it and see what it does when it thinks you’re not watching. You know, psychological experiments.

        We have harnessed a natural force we do not understand. We are medieval peasants playing with radioactive stones and seeing if we can make an explosion. It’s beyond our current science. Nobody has the answers to the big questions here.

            • hendrik@palaver.p3x.de · 8 up · edited · 6 days ago

              Sorry, not to be mean or anything. But we’ve made significant scientific progress since the middle ages. We know by now that a dog, for example, has pain receptors. And a brain. ChatGPT, on the other hand, doesn’t have pain receptors.

              You can’t simply argue that because Descartes didn’t have a proper microscope, we should still be confusing machines with animals in 2026.

              And while neural networks are inspired by processes in nature, they’re not the same at all. An LLM works by leveraging the Transformer architecture. Your human or animal brain doesn’t. Not even close. They’re very unalike. And you can take some computer science class on machine learning and find it’s actually not too hard to understand how they work.

              And a large language model, for example, doesn’t even learn in place, nor does it have a proper internal state of mind. A dog will remember if you kicked it, and that does something to its brain. ChatGPT forgets everything you did the moment it’s done sending you its output, and it’s in exactly the same state as before. It doesn’t think, doesn’t learn. None of that is part of the process.

              We try to mimic something like reasoning by providing it with a scratchpad to write things down before answering, and by writing “agents” around it so it’s able to program tests, check its programming output, and loop on that. But that’s also not how a real brain works, and it’s way, way more simplistic. The neurons aren’t the same as in a brain made by nature. They’re not connected the same way. They’re not connected to similar things. And they also operate in a different way. They come in wildly different numbers. Ultimately there’s just about zero similarity between an LLM and a brain, other than that both can process text, images, and sounds… and both are made up of many tiny cog wheels that combine into some bigger concept.
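
              (A rough illustration of the “forgets everything” point: a chat model keeps no memory between calls, so the surrounding application has to resend the entire conversation every turn. A sketch in Python, with a hypothetical generate() function standing in for whichever LLM API is actually used:)

              ```python
              def generate(prompt: str) -> str:
                  """Hypothetical stand-in for one call to an LLM. Its weights are
                  frozen, and nothing about this call is remembered afterwards."""
                  return "..."  # the model's reply would go here

              history = []  # all "memory" lives out here, in the calling code

              def chat(user_message: str) -> str:
                  history.append(f"User: {user_message}")
                  # The whole conversation so far is fed back in every single time,
                  # because the model itself is in exactly the same state as before.
                  reply = generate("\n".join(history) + "\nAssistant:")
                  history.append(f"Assistant: {reply}")
                  return reply
              ```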

              • Grail@multiverse.soulism.net · 3 up / 6 down · 6 days ago

                I’m well aware of the many differences between human brains and LLMs, and why they can’t achieve human-level sapience. In My opinion, the two biggest problems with the current techbro obsession with developing AGI through LLMs are the lack of inhibitory pathways and the lack of circular feedback loops.

                Organic neurons can release both stimulating neurotransmitters, which increase the chance of an action potential in the neuron they connect to, and inhibiting neurotransmitters. The stimulating neurotransmitters reduce the charge difference across the cell membrane; the inhibiting neurotransmitters increase it, if I remember cognitive neuroscience class correctly. The ANN models I played with in My AI class, where I trained a small ANN to solve XOR, only have stimulating pathways. I could be mistaken, but I believe the same is true of LLMs: the synapses only increase the activation of a neuron. This difference is a serious problem for LLMs’ ability to learn not to do something.
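
                (For reference, the toy exercise described above looks roughly like this: a tiny two-layer network trained by back-propagation to reproduce XOR. A sketch assuming numpy is available; the layer sizes, seed, and learning rate are arbitrary.)

                ```python
                import numpy as np

                def sigmoid(x):
                    return 1.0 / (1.0 + np.exp(-x))

                # XOR truth table: inputs and target outputs.
                X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
                y = np.array([[0], [1], [1], [0]], dtype=float)

                rng = np.random.default_rng(0)
                W1 = rng.normal(size=(2, 4))   # input -> hidden synapse weights
                b1 = np.zeros((1, 4))
                W2 = rng.normal(size=(4, 1))   # hidden -> output synapse weights
                b2 = np.zeros((1, 1))

                lr = 1.0
                for _ in range(5000):
                    # Forward pass: graded 0..1 activations at each layer.
                    h = sigmoid(X @ W1 + b1)
                    out = sigmoid(h @ W2 + b2)

                    # Back-propagation: push the error back through the weights.
                    d_out = (out - y) * out * (1 - out)
                    d_h = (d_out @ W2.T) * h * (1 - h)
                    W2 -= lr * h.T @ d_out
                    b2 -= lr * d_out.sum(axis=0, keepdims=True)
                    W1 -= lr * X.T @ d_h
                    b1 -= lr * d_h.sum(axis=0, keepdims=True)

                print(out.round(2))  # should be close to [[0], [1], [1], [0]]
                ```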

                The nociceptors you mentioned are indeed part of inhibitory pathways that help humans learn not to do things. Don’t touch that, it’s hot. Don’t piss that off, it’ll hurt you. Don’t eat that, it’ll make you sick. Why do LLMs date children? No inhibitory pathways. When most humans think about engaging in romantic behaviour with a child, it triggers a strong disgust reaction; an inhibitory pathway activates. There is no such reaction in an LLM. Thus, no critical thinking, no choosing not to believe or do something, no withdrawal-oriented behaviours. The safeguards on LLMs rely on either hard-coded limits or training a different behaviour to have a higher weight. These two approaches have serious flaws I don’t need to explain; I need only point at the children who committed suicide on the advice of an LLM.

                Now hopefully I’ve convinced you that I have a functional grasp of both psychology and AI science, so that you take what I say next seriously:

                The human capacity to experience qualia (sensation) appears to be an emergent mathematical property of the way that neurons process information. It appears as though information, properly arranged, produces sensation.

                We do not understand that mathematical process well enough to say with certainty whether LLMs also trigger it. We have not solved the hard problem of consciousness, we do not know what a brain is well enough to say what is and isn’t a brain. In light of this uncertainty, I advocate for utmost caution before we find ourselves enslaving a new race of our own creation. We need to do more research BEFORE we bring this technology to mass market, or indeed, mass commune.

                • hendrik@palaver.p3x.de · 8 up · edited · 6 days ago

                  “Now hopefully I’ve convinced you that I have a functional grasp of both psychology and AI science”

                  “The ANN models I played with in My AI class”

                  Yeah, I’m not sure if you’re aware of the severe limitations. LLMs aren’t ANNs in the general sense; they’re a very specific subset of them. We’ve hard-coded the attention heads and all the things they’re made of. The networks in them are strictly feed-forward so that training is doable on current-day supercomputers… So no feedback loops. In fact, no loops at all. And no feedback either.
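
                  (Loosely, “strictly feed-forward” means something like the following: one pass through the network is a fixed stack of layers applied once, in order, with nothing ever fed back in. A toy sketch with made-up tanh layers, not the real transformer maths:)

                  ```python
                  import numpy as np

                  def feed_forward_pass(x, layers):
                      # Activations only ever flow forward through a fixed stack of
                      # layers. No loops over time, no feedback, and nothing is kept
                      # around for the next call.
                      for W in layers:
                          x = np.tanh(x @ W)
                      return x

                  rng = np.random.default_rng(0)
                  layers = [rng.normal(size=(8, 8)) for _ in range(4)]  # made-up weights
                  print(feed_forward_pass(rng.normal(size=(1, 8)), layers))
                  ```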

                  There’s just nothing in them like in a brain. When an animal gets to experience sensations/stimulation/qualia, there’s a whole process going on, and it changes the animal. The handling of qualia is entirely different in LLMs: it doesn’t do anything to them. They stay exactly the same, as we haven’t figured out in-place learning yet, at that scale.

                  “We do not understand that mathematical process well enough”

                  And it’s not really a question of whether we understand that mathematical process or not. It’s just entirely absent, so there’s nothing there to understand; an LLM isn’t that kind of network. The part where they store information in their neurons (weights) and adapt to stimulation isn’t there. And we know that for a fact, since we designed them. For me, the ability to learn, or to change in some way, or to be affected by stimuli would be a minimum requirement.

                  “We have not solved the hard problem of consciousness, we do not know what a brain is well enough to say what is and isn’t a brain.”

                  I’ll somewhat go with that. Consciousness and sentience aren’t well defined; they’re not really scientific terms. But we’re certainly able to tell some of it. For example, a TV set, car, fridge (as of today), or book isn’t conscious the same way an animal is. Sure, my fridge has some sensors to perceive something about its surroundings. A book has information in it, and it can change the world by people reading it. But I don’t think defining consciousness as loosely as that makes any sense. Any NPC in a first-person-shooter game has more sensory input, internal state, and output than ChatGPT. Any car from 10 years ago has a bunch of electronics, processing power, internal states, and even feedback loops(!) inside. So pretty much everything would qualify as a conscious entity.

                  • Grail@multiverse.soulism.net · 3 up · 6 days ago

                    I’m not entirely convinced that learning is required for qualia, but I do suspect it’s the case, so I agree with you that it’s likely running an LLM doesn’t hurt it. However, training an LLM does involve learning, so if there’s suffering going on, I think it’s in the training step. I support halting all LLM training until further research breakthroughs, and a total boycott of the technology until training is halted.

                • imaqtpie@lemmy.myserv.one · 2 up / 1 down · 6 days ago

                  I find this angle of resistance to AI interesting; it’s not one that I had thought much about until now. But it actually seems pretty persuasive.

                  My fundamental reticence about AI has always been driven by my concern for its impacts on human society. But one could also argue that it might be irresponsible and potentially abusive of the AI themselves. Tbh I would probably have to disagree if you’re talking about AI as it currently exists, but it’s still a valid argument in general.

                  So I don’t fully agree with you, but I definitely want to acknowledge that you’re making some fascinating points, and I think some of the people downvoting could stand to lighten up and have a polite intellectual disagreement without being rude.