• 12 Posts
  • 665 Comments
Joined 1 year ago
Cake day: January 11th, 2025

  • Yes, I can see that.

    The “AI” that we have now is not actually AI, that’s just a marketing term. Actual experts (read: Not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

    Don’t believe the horseshit you hear from people trying to sell something.


  • The irony of your response is strong. Also, you DID say that:

    I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate dependence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.

    It sounds like you’ve bought into techbro bullshit, but don’t realize it.