

Wheee, polycrisis is such a fun way to describe the modern world. Let’s just have every possible crisis all at once.


Absolutely. There are tons of open-licensed, open-weight (the equivalent of open-source for AI models) models capable of what is called “tool usage”. The key thing to understand is that they’re never quite perfect, and they don’t all “use tools” quite as effectively or in the same way as each other. This is common to LLMs, and it is critical to understand that at the end of the day they are just text generators; they do not “use tools” themselves. They generate specific structured text that triggers some other piece of software, typically called a harness (but it could also be called a client or frontend), to actually call those tools on your system.
OpenClaw is an example of such a harness (and not a great or particularly safe one in my opinion, but if you want to be a lunatic and give an AI model free rein it seems to be the best choice). You can use commercial harnesses too by configuring or tricking them into connecting to a local model instead of their commercial one, although I don’t recommend this for a variety of reasons. If you really want to use claude code itself, people have done it, but I don’t find it works very well, since all of its prompts and tool calling are optimized for Claude models. Besides OpenClaw, other popular harnesses for local models include OpenCode (as close as you’re going to get to Claude for local models) or Cursor; even Ollama has their own CLI harness now. Personally I use OpenCode a lot, but I’m starting to lean towards pi-mono (it’s just called pi, but that’s ungoogleable). It’s very minimal and modular, making it intentionally easy to customize with plugins and skills you can automatically install to make it exactly as safe or capable or visual as you wish it to be.
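To make that concrete, here is a tiny, entirely hypothetical sketch of the mechanism. The tool name, JSON shape, and command are made up (every model and harness pair has its own format); the point is that the model only emits text, and the harness is what actually touches your system.

```python
import json
import subprocess

# Hypothetical tool-call text emitted by the model. The real format varies
# wildly between models and harnesses; this shape is purely illustrative.
model_output = '{"tool": "run_command", "arguments": {"command": "ls -la /tmp"}}'

# The harness, not the model, parses the structured text...
call = json.loads(model_output)

if call["tool"] == "run_command":
    # ...and actually executes the tool. A careful harness would confirm with
    # the user or sandbox this step; running model-generated commands blindly
    # is exactly the unsafe part.
    result = subprocess.run(
        call["arguments"]["command"], shell=True, capture_output=True, text=True
    )
    # The command's output is fed back to the model as more text, and the loop continues.
    print(result.stdout)
```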
As a minor diversion, we should also discuss what a “tool” is. In this context there are some common basic tools that most tool-use models will understand some variation of out of the box. Things like editing files, running command-line tools, opening documents, and searching the web are common built-in skills that pretty much any model advertising itself as capable of “tool use” or “tool calling” will support, although some agents will be able to use these skills more capably and effectively than others. Just like some people know the Linux command line fluently and can operate their system entirely with it, while others only know basic commands like ls or cat and need a GUI or guidance for anything more complex, AI models are similar: some (the latest models in particular) are incredibly capable with even just their basic built-in tools.
However, they’re not limited to what’s built in because, like I said, they can accept guidance on what to use and how to use it. You can guide them explicitly if you happen to be fluent in their tools, but there are roughly two competing models for how to give them that guidance automatically. The first is MCP (Model Context Protocol), a separate server they can access that provides structured listings of different kinds of tools they can learn to use and how they work, basically allowing them to connect to a huge variety of APIs in almost any software or service. Some harnesses have MCP support built in. The other approach is called “skills”, and it seems (to me) a more sensible and flexible approach to giving the AI model enough understanding to become more capable and expand the tools it can use. Again, providing skills is usually something handled by the harness you’re using.
To make this a little less abstract, you can put it in the perspective of Claude: Anthropic provides several different Claude models like Haiku, Sonnet, and Opus. These are the text-generation models, and they have been trained to produce a particular tool usage format, but Opus tends to have more built-in capability than something like Haiku, for example. Regardless of which model you choose (and you can switch at any time), you’ll be using a harness, typically “claude code”, which is the CLI tool most people use to interact with Claude in an agentic, tool-calling capacity.
On the open and local side of the landscape, we unfortunately don’t have anything quite as fast or capable as claude code, but we can do surprisingly okay considering we’re running small local models on consumer hardware, not massive data center farms being enticingly given away or rented out for pennies on the dollar of what they actually cost these companies, in the hope that market-share capture and vendor lock-in will lead to future profits.
Here are some pretty capable tool-use models I would recommend (most should be available for download through Ollama and other sources like Hugging Face):


Not enough, obviously, because they’re baaaack.


This is probably just someone’s effort to pick a color that looks similar to a green screen in film, since it serves the same technical purpose.


Do you have any examples? I’m not a chemist but I don’t believe you can have “chloride” alone as an ingredient. If it were alone it would be elemental chlorine, which is an entirely different animal and I sincerely doubt any drink maker would be putting free chlorine into their drinks.
A “chloride”, on the other hand, is a compound of chlorine already combined with some other element, which is presumably not sodium or you wouldn’t have listed the sodium and chloride separately. So you could have “potassium chloride”, for example, but this would not turn into “sodium chloride” simply by existing in the same liquid as the sodium, because it’s perfectly happy sticking with the potassium and being potassium chloride.


Pugilism and strength training are not things I usually associate with progressive politics, but maybe he’s practicing punching nazis, which is always acceptable, and unfortunately badly needed nowadays.


And so many of these “common men” still seem to really believe that no matter what he actually says or does, all that matters is that he talks like the person they imagine him to be, which they believe means he unequivocally understands and cares about them and can do no wrong. He really does love the poorly educated, and you can see why.
The reality distortion field Trump supporters seem to be trapped in is rapidly approaching the strength of a black hole. I’m not sure what happens when it all collapses and they all fall into the event horizon but I’ll certainly be glad if they can’t escape and we never have to hear from most of them ever again.


Basically, that’s not where the farmland is (or, when it was first being settled, the fur, which provided the main economic incentive for settling that area in the first place). You also have to think about how the land was settled. Settlers from the east used mountain valleys to get around, and the mountain valleys in that circled area aren’t easily traversable and don’t lead anywhere useful. Settlers from the southwest used ships and followed shipping routes up the coast. When you consider both of these settlement methods together (and they were in fact used almost simultaneously), you’ll come to the conclusion that these are some of the most remote areas to be settled in the continental US, and that relative remoteness has a lot to do with why they were settled the way they were.
Meanwhile, from the perspective of a ship sailing up the coast, there are few good protected anchorages directly along that stretch to use as a sheltered waystation or safe harbor in case of inclement weather, but if you go just a little further you reach good port lands (it’s literally called “Portland”) or Seattle, and you might as well journey that little bit further and stop there instead if you possibly can. When you consider people taking a long and perilous journey around the Horn of South America (there was no Panama Canal), you’re almost at the end of the line; you aren’t going to want to stop 99% of the way there when you’re so close, so you push on to the end, and that’s why Portland, Seattle, and Vancouver developed where they did. The farmland got worse and increasingly unsustainable the further north you went, so nobody went much further until the gold rush provided yet another economic incentive to draw people there, but that’s a different story.


Great news! Also thanks for providing the follow-up, hopefully it helps people who use Unraid in the future.


As a professional software developer, I truly hope that is the case (and I plan to charge at least 10x my current rate after the AI bubble pops when I’m looking for my next job, as I expect there to be a massive shortage of people skilled enough to actually deal with the nightmare spaghetti AI code bases).
Fun times ahead.


No, I think you do get it. That’s exactly right. Everything you described is absolutely valid.
Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways” turns out to actually be more than good enough for many people and many purposes. You’re describing the “success” state.
/s but also not /s because this is the unfortunate reality we live in now. We’re all going to eat slop and sooner or later we’re going to be forced to like it.
You can do all those things with proper routing, and it makes no difference for mobile devices (as long as they use DHCP, and what mobile device wouldn’t?). What I’m suggesting does not change anything on the public side. You still authenticate publicly to renew your certificates. You still serve the same certificates on both public and local networks. They’re still valid. Nothing changes.
The only difference is that when you’re local, your DNS gives you the correct local IP address where that service is hosted, say, 192.168.12.34, instead of you using public DNS, getting an external IP that’s on the wrong side of the router, and having to go outside your own network and come back in. Hairpinning is like that Simpsons episode where Abe goes in the revolving door, takes off his hat, puts his hat back on, and goes back out the same revolving door in the span of 2 seconds. It’s pointless. Why are you doing that? If you didn’t want to be on the outside of the network, why are you going to the outside of the network first? Just stay inside the network. Get the right IP. No hairpin routing needed. No certificate madness needed. Everything just works the way it’s supposed to (because this is in fact the way it’s supposed to work).
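If you want to check which answer your devices are actually getting, a quick sketch like this (the hostname is just a placeholder for one of your own services) shows whether the resolver you’re currently using hands back the local address or the public one:

```python
import socket
import ipaddress

# Placeholder hostname; substitute one of your own published services.
hostname = "photos.mydomain.com"

# Ask whatever resolver this machine is currently configured to use.
resolved = socket.gethostbyname(hostname)

if ipaddress.ip_address(resolved).is_private:
    print(f"{hostname} -> {resolved}: local answer, traffic stays inside your network")
else:
    print(f"{hostname} -> {resolved}: public answer, you'd be hairpinning through the router")
```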
Convenience is great until it becomes inconvenient. But that’s a journey we all make :)
I’m not too familiar with Unraid, but from a little research I just did it seems like you’re right. That does seem like a really unfortunate design decision on their part, although the Unraid fans apparently defend it. Obviously, I guess I cannot be an Unraid fan, and I probably can’t help you in that case. If it were me, I would try to move Unraid to its own port (like all the other services) and install a proxy I control onto port 443 in its place, and treat it like any other service. But I have no idea if that is possible or practical in Unraid. I do make opinionated choices, and my opinion is that Unraid is wrong here. Oh well.
I’d argue that your internally hosted sites should not be published on ports other than 80/443. Published is the key word here, because the sites themselves can run on whatever port you want, and if you want to access them directly on that port you can, but when you’re publishing them and exposing them to the public you don’t want to be dealing with dozens of different services each implementing their own TLS stack and certificate authorities and using god-knows-what rules for security and authentication. You use a proxy server to publish them properly, and there’s no reason you can’t or shouldn’t use that same interface internally too. Even though you technically might be able to directly access the actual ports the services are running on from your local network, you really probably shouldn’t, for a lot of reasons, and if you can, consider locking that down and making those services ONLY listen on 127.0.0.1 or isolated docker networks so nothing outside the proxy host itself can reach them.
If you don’t want your services to listen on 80/443 themselves, that’s reasonable and good practice, but something should be, and that something should handle those ports responsibly and authoritatively to direct incoming traffic where it needs to go, no matter the source. Even if (or especially if) you need to share those ports among various services for some reason, you need something operating them as a proxy (caddy, nginx, even Apache can all do this easily). 443 is the https port, and in the https-only world we should all be living in, all https traffic should be using that port, at least in public, and the TLS connection should be properly terminated at that port by a service designed to do so. This simplifies all sorts of things, including domain name management and certificate management.
tl;dr You should have a proxy that publishes all your services on port 443 according to their domain name. When https://photos.mydomain.com/ comes in, it hits port 443 and the service on port 443 sees it’s looking for “photos”, handles the certificates for photos, and then decides that immich is where it is going and proxies it there, which is none of anyone else’s business. Everyone, internal or external, goes through the same, consistent, and secure port 443 entrance to your actual web of services.
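For illustration, that tl;dr in Caddyfile form might look something like the sketch below. The backend addresses and ports are assumptions, not anything immich or your other services necessarily use; point them at wherever things actually listen, ideally bound to 127.0.0.1 or an internal docker network. Caddy then terminates TLS on 443, handles the certificate per domain, and routes by hostname.

```
# Hypothetical Caddyfile: one 443 entrance, routed by domain name.
photos.mydomain.com {
    # none of anyone else's business that this happens to be immich
    reverse_proxy 127.0.0.1:2283
}

notes.mydomain.com {
    reverse_proxy 127.0.0.1:8080
}
```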


Nextcloud, CalDAV, Thunderbird.


That seems like the kind of problem that a radio and a spike belt were designed to solve.


This is typically called “continuous deployment” or “CD”, a close neighbor to “continuous integration” or “CI”, and you will find that this is a very deep rabbit hole.
It’s intentionally roundabout, because it has security implications when you make that process too direct and automated. You don’t really want to just give your Forgejo repository root command-line access to the machine it’s running on (and it doesn’t want you to do that either). Good software like Forgejo doesn’t trust itself, never mind its users, and sets things like this up so that they have to pass through various gates in the process that control what’s happening a little more carefully and explicitly. At the end of the day, of course, it’s always potentially dangerous to be running automatic code deployments like this, but adding the extra hoops to jump through is one way of putting extra barriers in front of someone trying to profoundly violate your machine. There’s a Swiss cheese model of security going on here: yes, there are holes in each of the slices, but unless all the holes of all the different slices line up exactly, an attacker can’t get through.
With that said, there are tons of CD options out there, and it’s totally possible to roll your own, especially for a simple use-case like this, but Forgejo runners are absolutely the easiest and most native way of handling it. They follow the GitHub Actions configuration almost perfectly (for better or worse, it’s become the standard now, god save us all). The initial setup is a bit front-loaded, but once you’ve got your runner connected, you’re laughing. Smooth as silk. Don’t worry too much about the “risks” side of the setup: if this is truly a single-user Forgejo where you’re not letting other people create repos, blindly copying other people’s repos, or accepting dangerous PRs, the risks are minimal. You’re the only one running actions on it, so you can give it access to the same machine Forgejo is on without too much worry. You’re poking a few holes in the Swiss-cheese security model, but us self-hosters have gotta do what we’ve gotta do with our limited resources.
Once the runner’s connected, just pretend you’re dealing with GitHub Actions from that point forward. Set the “runs-on” attribute to point at whatever you tagged your runner with, and then either use native GitHub actions directly from GitHub, use the copies mirrored on forgejo.org (for example https://code.forgejo.org/actions/setup-go), mirror them yourself, or avoid pre-packaged actions entirely and just script your heart out with straight bash commands. It’ll run and do whatever you tell it to: it’ll pull down the latest copy of the repo and deploy it wherever you want, however you want, and you can have it run deployment scripts you’ve saved inside the repo itself, whatever you need to do to get it deployed.
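To give a flavor of it, a deploy-on-push workflow sketch might look something like this. The file path follows the usual Forgejo convention, but the runner label, branch, and deploy script here are placeholders for whatever your setup actually uses, not anything from your repo:

```yaml
# .forgejo/workflows/deploy.yaml -- minimal sketch, placeholders throughout
on:
  push:
    branches: [main]

jobs:
  deploy:
    # point this at whatever label you tagged your runner with
    runs-on: my-runner-label
    steps:
      # mirrored copy of the standard checkout action
      - uses: actions/checkout@v4
      - name: Deploy
        run: |
          # or rsync, docker compose up -d, a script saved in the repo, etc.
          ./deploy.sh
```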


Uranium 233 is NOT the same as Uranium 235. That’s … sort of the whole point?
That said, they can both be used as nuclear fuels. They aren’t the same, as I said, so they have to be handled and operated somewhat differently. Uranium 233 is significantly less ideal as a nuclear fuel for a lot of complex and varied reasons, but some of the things that make it somewhat less ideal as a fuel also make it MUCH less ideal for (clandestine) nuclear weapons. So on the whole, even though it is sort of worse than U235 at BOTH things in some ways, the fact that it’s SO much worse for nuclear weapons makes it somewhat more attractive for power generation in exchange, because nobody’s going to be able to secretly start a nuclear weapons program with it.
Note this is all pretty debatable: as far as I am aware there are no actual examples of commercial U233 nuclear reactors in existence, and nobody has really had an opportunity to try using it in a clandestine weapons program either, so this is all hypothetical at this point. We will not know the real performance and viability until somebody actually does it at scale, within real constraints. For now it’s strictly theoretical, which means it’s very much subject to debate, and the real-world implementation may have nothing to do with any of these principles or projections. We just don’t know what a U233 power plant would actually be able to accomplish because there aren’t any. Some people claim it’s a superfuel. Maybe they’re right, I don’t know, but I believe that if it really was as great as its proponents say, somebody would’ve found a way to make it work commercially by now. They are invited to prove me wrong at any time. I’ll wait.
I think that’s a reasonable rule of thumb to start from, but like most things in physics it’s not guaranteed and is rarely exactly that simple.