• 0 Posts
  • 353 Comments
Joined 3 years ago
Cake day: June 18th, 2023

  • I find that it really depends. Some contracted work mostly takes specialised equipment, and the necessary skill is something you can pick up. The cost of the equipment at entry level (but more than good enough for the task at hand) is often less than the call-out fee plus a couple of hours of labour.

    Maybe you spent the whole weekend on it. But you learned something new. Got some nice tools. And more often than not, did a better job because you didn’t rush anything.


  • I’m genuinely curious as to what the fuck identifying on the OS level has to do with social media, and then what the fuck that has to do with protecting kids. If you’re a parent who engages with your child, and… hear me out here… takes care of your child, restricting access is done the same way you make sure they don’t get access to detergents and the like.

    For media consumption, there are tools that let parents manage and control the type of content their kids can access. Similar to how you can child-proof cabinets.

    And, back to my original question. What the fuck does this have to do with identifying on the fucking operating system level?

    I’m genuinely curious whether anyone pushing this has been asked to justify it. Surely you’d expect some reasoning to be behind it, no?

    Edit: not to mention, corporations have shown themselves to be reliably and consistently bereft of any and all ethics and morals. One can more easily argue that identifying children is likely to be harmful, as they’ll be tracked and targeted in whatever way can be pitched to private equity groups (or similarly condensed evils) as generating “value”. “Want to run behavioural experiments on kids? We can now do this insanely cheaply, as we track the effect on a per-child basis.”









  • There is no law anywhere else that governs Linux development in this regard. There is only a law in CA that requires this functionality (which would break any and all software infrastructure). Why would any maintainer of any Linux distribution, not actively dependent on following a legally untested law, even consider implementing it? This got a lot of headlines because it’s absurd and stupid.

    If maintainers wanted to comply, what the fuck would it actually entail? 99% of operating system installs don’t have any specific human user to identify. The only reasonable approach is to ignore it. And if data centers in CA for Azure, AWS, GCP, or any other provider wanted to comply with this (which is impossible), they could instead spend some of that tax-free revenue to combat Meta’s suspected 2 billion USD effort to get these online ID laws pushed through.






  • I saw somebody in work upload a firewall config xml and start querying if stuff was blocked. I actually thought it was a pretty clever use of it.

    I would find it somewhere between worrisome and you-should-lose-your-job, depending on how important that firewall is. This might seem exaggerated, but if your colleague had shown that config to a child and then asked them yes-or-no questions, a game the child happily participated in, I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Imagine someone doing this for an important firewall config… and taking the child’s answers at face value. It should be fair to think that this person is grossly unqualified, and showing a dangerous lack of judgment.

    And that’s just the issues I’d have with using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons. (A deterministic alternative is sketched below.)
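    For contrast, a minimal sketch of what querying a firewall config locally and deterministically could look like: no third party, no guessing. The XML schema here is invented purely for illustration; real firewalls each have their own formats.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical config format, made up for this example.
    SAMPLE_CONFIG = """
    <firewall>
      <rule action="block" port="23"  proto="tcp"/>
      <rule action="allow" port="443" proto="tcp"/>
    </firewall>
    """

    def is_blocked(config_xml: str, port: int, proto: str = "tcp") -> bool:
        """First matching rule wins; default policy is not modelled here."""
        root = ET.fromstring(config_xml)
        for rule in root.iter("rule"):
            if rule.get("port") == str(port) and rule.get("proto") == proto:
                return rule.get("action") == "block"
        return False  # no matching rule

    print(is_blocked(SAMPLE_CONFIG, 23))   # True: telnet is blocked
    print(is_blocked(SAMPLE_CONFIG, 443))  # False: HTTPS is allowed
    ```

    Twenty lines of boring parsing gives you an answer you can actually trust, which is the whole point.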




  • I hear you. I’m very much the same, both in trying not to pay too much attention, for the same reasons, and in being in the trade, though perhaps not all that specialised.

    Once the economic side of this reaches the conclusion we already know (it isn’t sustainable), I think we might start to see a more sensible approach to LLM usage.

    The current state of things is as if people are asking LLMs whether a mushroom they picked is safe to eat, and then serving it to the whole family. A more sensible approach would be to get a name suggestion from the LLM, then use that as an entry point to manually verify it.

    The LLM user should always be the expert. I.e., don’t serve something potentially poisonous. Let it come up with suggestions, by all means. But if you don’t know enough to verify the correctness of what it says, then you’ve already lost. Unfortunately, this is how most people use it now, followed by shock that “it lied”. (The saner workflow is sketched below.)
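    A runnable sketch of that “LLM suggests, human verifies” pattern. Everything here is invented for illustration: the reference table stands in for an authoritative field guide, and llm_suggest() stubs out a real model call.

    ```python
    # Stand-in for a trusted, authoritative reference (a field guide).
    REFERENCE_GUIDE = {
        "Boletus edulis": "edible",
        "Amanita phalloides": "deadly poisonous",
    }

    def llm_suggest(observation: str) -> str:
        """Stub for the LLM call; its output is only an entry point."""
        return "Boletus edulis"

    def identify(observation: str) -> str | None:
        candidate = llm_suggest(observation)
        # Verify against the trusted reference, never against the LLM itself.
        # An unknown name means "do not eat", not "probably fine".
        return REFERENCE_GUIDE.get(candidate)

    print(identify("brown cap, white pores, near oak"))  # 'edible', per the guide
    ```

    The key design choice is that the LLM’s answer never reaches the decision directly; it only narrows down where the expert should look.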



  • Indeed.

    Here is the article that led me to it: https://acko.net/blog/the-l-in-llm-stands-for-lying/

    When I listen to apocalyptic predictions about AI (transformer-based generative LLMs, to be specific), they’re all based on the assumption that it “adds value, but at a high energy cost”.

    They don’t consider the destruction of human knowledge, where bullshit generators are “informing” decisions and “curating” insights. Just as all steel made after the first nuclear detonations is contaminated with radionuclides and therefore useless for certain applications (hence the hunt for low-background steel), so I find books written after the rise of LLMs.

    If only it also didn’t come at the low cost of destroying the ability to reason (as numerous studies have shown). The silver lining is that it’s also absurdly energy-demanding, further pushing the climate past the point of no return. At the very least, we’re in for a hefty and long recession when the bubble pops. What’s not to like?