• 11 Posts
  • 116 Comments
Joined 8 months ago
Cake day: August 28th, 2025





  • That was unironically my take when AI bots started taking over internet forums.

    I even said to a friend: people will start going back to small, closed communities or groups where everybody knows each other, and everybody knows no AI crap will be shared there. Local art will become more valuable because we know for sure it’s human-made and that someone put real effort into it.

    Mainstream media will always be prone to AI poisoning.


  • I commented on this post.

    I said it doesn’t make sense; anyone can scrape Reddit or Lemmy just fine to train an LLM.

    They’re just making it official.

    If you don’t want LLMs trained on your data, just stop interacting on the internet.

    I’m not very into this subject, but I think they can use proxies to bypass per-IP rate limiting while scraping a lot of Reddit per second. Even Lemmy can be scraped.
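    A minimal sketch of the rotation idea, with hypothetical proxy URLs (the real endpoints would come from a paid proxy pool, and a real scraper would hand the chosen proxy to an HTTP client such as `requests` via its `proxies` parameter):

    ```python
    from itertools import cycle

    # Hypothetical proxy pool; each request goes out from a different IP,
    # so no single IP trips the site's rate limit.
    PROXIES = [
        "http://proxy-a.example:8080",
        "http://proxy-b.example:8080",
        "http://proxy-c.example:8080",
    ]

    proxy_pool = cycle(PROXIES)  # round-robin, repeats forever

    def fetch(url, get):
        """Fetch `url` through the next proxy in round-robin order.

        `get` is an injected HTTP callable so the rotation logic is testable
        without a network; with requests you could pass
        lambda u, p: requests.get(u, proxies={"http": p, "https": p}).
        """
        proxy = next(proxy_pool)
        return get(url, proxy)
    ```

    The point is only that the rate limiter sees three "clients" instead of one; everything else about the scraper stays the same.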

    Even your views might help LLM training, because your views signal which content is worth focusing on and scraping.






  • They’re annoying, to be honest.

    I used Qwen 3.5 for some research a few weeks ago. At first it seemed good: every sentence was referenced with a link from the internet, so I naturally thought, “well, it’s actually researching for me, so no hallucination, good.” Then I looked into the linked URLs and found it was hallucinating the text AND attaching random URLs to it. Nothing the AI output actually appeared in the linked pages. The subject matched between the output and the URLs, but it wasn’t extracting real text from the pages; it was linking a random URL and hallucinating the rest.

    Related to code (that’s my area, I’m a programmer), I tried to use Qwen Code 3.5 to vibe-code a personal project that was already initialized and basically working. It just struggles to keep consistency: I spent many hours prompting the LLM, and in the end it produced a messy code base that was hard to maintain. I also asked it to write tests, and when I checked them manually they were just bizarre. They passed, but they didn’t cover the use cases properly; there was a lot of hallucination just to make the tests pass. A programmer doing it manually could write better code, keep it maintainable, and write tests that cover actual use cases and edge cases.
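    A toy illustration of that failure mode (function and test names are hypothetical, not from the actual project): a test that runs green while asserting nothing about the behaviour it claims to cover, next to one that pins down the real contract.

    ```python
    def parse_price(text):
        """Parse a price string like '$1,234.50' into a float."""
        return float(text.replace("$", "").replace(",", ""))

    # LLM-style test: passes, but only checks that *something* comes back,
    # so a parser that returned the wrong number would still pass.
    def test_parse_price_vacuous():
        assert parse_price("$10.00") is not None

    # Human-style test: checks the actual value, including the
    # thousands-separator edge case.
    def test_parse_price_real():
        assert parse_price("$1,234.50") == 1234.5
        assert parse_price("0.99") == 0.99
    ```

    Both tests pass today, but only the second one would catch a regression; the first is exactly the kind of green-but-empty test I kept finding.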

    Related to images, I can spot from very far most of the AI generated art, there’s something on it that I can’t put my finger on but I somehow know it’s AI made.

    In conclusion, they’re not sustainable: they produce half-working things and generate more costs than income, on top of the natural resources they consume.

    This is very concerning in my opinion, given humanity’s history: if we rely on half-done things, it might lead us into very problematic situations. I’m just saying, the next Chernobyl-scale disaster might have some AI work behind it.