• 193 Posts
  • 218 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • I’m not so sure that power usage should be dismissed so easily just because it is distributed instead of centralized. The slop per watt rate may even be worse than at a datacenter. Fundamentally, we should care more about efficiency.

    Imagine a panel of 20 standard LED light bulbs. That’s about 180 watts, roughly what a GPU draws while a local LLM is doing any work. If you keep that in mind, you have to ask yourself whether the benefit you’re getting out of your local LLM is really worth that energy cost (a rough back-of-envelope estimate is sketched below). Monetarily speaking, it’s not a lot of money, because electricity is cheap, but would you flip that switch for the duration of the task you’re performing? What if you could use conventional non-LLM methods to do it instead? Would that be more efficient? And where is your electricity coming from? Is it a solar farm, or a coal plant?

    How was your local LLM trained? Was there copyrighted material in its training data set? Were low-wage workers asked to sift through horrendous content to clean up the data?

    We need to consider the externalities, even when using local LLMs. We moved so quickly from the initial release of ChatGPT to now that we never stopped to ask those questions. They will remain unanswered until someone cares enough to think about them.
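
    To put rough numbers on that light-bulb comparison, here’s a back-of-envelope sketch in Python. The 180 W draw, the 30-minute task, and the $0.15/kWh price are all assumptions for illustration, not measurements:

        # Back-of-envelope estimate of local LLM energy use and cost.
        # Every number here is an assumption, not a measurement.
        GPU_DRAW_WATTS = 180    # assumed GPU draw under load (~20 x 9 W LED bulbs)
        TASK_HOURS = 0.5        # assumed length of the task
        PRICE_PER_KWH = 0.15    # assumed electricity price in dollars

        energy_kwh = GPU_DRAW_WATTS * TASK_HOURS / 1000   # 0.090 kWh
        cost = energy_kwh * PRICE_PER_KWH                 # about $0.0135

        print(f"Energy: {energy_kwh:.3f} kWh, cost: ${cost:.4f}")

    A couple of cents per task looks negligible, which is exactly why the question has to be whether the task was worth the energy at all, and whether a conventional non-LLM method would have done it for less.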

  • I hope the rest of Canada isn’t simply annoyed at having to hear about Doug all the time. This might be more of an Ontario (and Toronto) problem right now, but it shouldn’t be ignored just because it’s contained within that province.

    Speaking as an ethnic Sri Lankan (but Canadian national), Doug Ford and his ilk remind me very much of the political dynasty that Sri Lanka fell victim to. Sri Lanka saw rampant corruption for decades, followed by a devastating economic crisis and government overthrow. It will take them years to recover, and the political family responsible might still get away with it.

    Keep an eye on the Ford family. Aside from the Rob Ford dumpster fire, Doug’s nephew Michael Ford was previously the Ontario minister of citizenship and multiculturalism, briefly attempted to run in the Toronto mayoral race, and is now a registered lobbyist at Toronto City Hall.

    I think Doug is a bit smarter, though, and his political cronies and business pals are probably going to benefit much more than his family, but that doesn’t make it any better for the people of Ontario or Canada.


    Archive link: https://archive.is/zXQRP (Yes, I’m aware of the problems with archive.is, but I think it’s important to let people bypass the paywall in this case)


  • I suspect the problem is that there are many developers nowadays who don’t care about code quality, actual engineering, or maintenance. So the people who are complaining are right to be concerned that a ton of slop code is going to be produced by AI-bro developers, and the developers who actually care will be left to deal with the aftermath. I’d be very happy if lead developers were prepared to try things with AI and, importantly, to throw the output away when it doesn’t meet coding standards. Instead, I think even lead developers and CTOs are chasing “productivity” metrics, which just translates to a ton of sloppy code.



  • Here you go. It’s very short:

    In their recent Comment article, Eddy Keming Chen et al. argue that current large language models (LLMs) already display human-level intelligence, based on behavioural evidence (see Nature 650, 36–40; 2026). I suggest that this framing obscures a fundamental asymmetry.

    The authors treat human minds and LLMs as two comparable systems: effectively, two black boxes that are evaluated by their outputs. But this symmetry is fictitious. Human intelligence is a natural phenomenon, from which the very concept of intelligence is reconstructed. The generative mechanisms of the human mind are not yet fully understood. By contrast, LLMs are systems that are designed and built. Their operating principles — statistical optimization of token prediction — are known, even if internal complexity makes it difficult to retrace the steps that produce the outputs. LLMs are complex, but they are not inherently mysterious black boxes.

    When we attribute intelligence to humans, no alternative explanation for their cognitive behaviour is available, nor is it needed. But there is a sufficient explanation for the behaviour of LLMs, which does not infer understanding or intelligence: the known generative mechanism itself.

    This does not mean that artificial general intelligence is impossible in principle. But establishing it would require evidence that the cognitive behaviour of a system cannot be fully accounted for by its known generative mechanism alone.

  • The lesson I took away from both Chernobyl and Fukushima is that human mismanagement caused both of those disasters, and that the aftermath of Fukushima was managed very well once competent (non-corporate) agencies were brought in. So please don’t take those cases as examples of typical nuclear power operation. They are the exception. Nuclear is very safe when managed properly. People have to screw up really badly for it to go wrong.

    I’d be happy with increased solar, wind, and hydro, but those are very dependent on geography. If the choice is between fossil fuels and nuclear, I’d go with nuclear 100% of the time. Canada has been operating reactors since 1968, and we have around 15 in operation at the moment. They are safe because we are good at operating them.

    I’d recommend watching a documentary called “Pandora’s Promise”. It talks about older generations of environmentalists who were very anti-nuclear but then reconsidered their views when they realized that their stance simply led to significantly increased fossil fuel use, which translates to far more harm for both us and the planet.