• 9 Posts
  • 349 Comments
Joined 8 months ago
Cake day: August 27th, 2025


  • That sucks but it sounds familiar. And not even a data-only SIM works for pure data? Are you in Oz?

    The only thing that I can think of is a mobile / pocket wifi…but then you’re carrying two devices.

    If you are in Oz, I can suggest a reasonable, local, cheap and not-terrible dumbphone-with-tethering that could suit that use case (plus, you know, be an actual phone lol). Surprisingly decent for under $170. Might even be able to forward calls from it to your Fairphone (as it runs Android 8.1)

    Once my main phone (2019 Samsung) bites the dust, it might actually become my main daily driver again (with a small tablet as OTG compute)



  • How about you reread the thread instead, see that it’s about accurately reproducing existing stars, and realize that you indeed have a comprehension problem.

    The sub-thread is about the minimum storage to hold a 3D model per star. Starman defined a 2-byte tetrahedron and multiplied. That’s storage math, not astrophysical reproduction.

    Nobody at any point said “accurately reproducing existing stars.”

    Procedural generation is relevant because it’s the canonical example of compressing astronomical-scale data into almost nothing - which is what Braben did in 1984, on the machine I cited, and which you incorrectly “corrected” me on in the first place.

    You’ve now moved the goalposts twice: first from Elite to Elite Dangerous, now from “minimal storage per model” to “accurately reproducing existing stars.”

    At some point it’s easier for you to just re-read the thread than to keep inventing new arguments to lose.

    Go away.



  • Elite is from 1984, per the wiki I cited:

    “…The Elite universe contains eight galaxies, each with 256 planets to explore. Due to the limited capabilities of 8-bit computers, these worlds are procedurally generated. A single seed number is run through a fixed algorithm the appropriate number of times and creates a sequence of numbers determining each planet’s complete composition (position in the galaxy, prices of commodities, and name and local details; text strings are chosen numerically from a lookup table and assembled to produce unique descriptions, such as a planet with “carnivorous arts graduates”). This means that no extra memory is needed to store the characteristics of each planet, yet each is unique and has fixed properties. Each galaxy is also procedurally generated from the first. Braben and Bell at first intended to have 2^48 galaxies, but Acornsoft insisted on a smaller universe to hide the galaxies’ mathematical origins.[36]”
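The seed-to-properties trick can be sketched in a few lines. This is a toy illustration, not Elite’s actual algorithm (the real thing ran a fixed integer recurrence over a 48-bit seed on 8-bit hardware); the syllables, fields and constants here are all made up:

```python
import hashlib

def planet(galaxy: int, index: int) -> dict:
    # Derive a deterministic stream of bytes from a single seed.
    # Elite used a fixed recurrence; a hash plays the same role here.
    seed = hashlib.sha256(f"{galaxy}:{index}".encode()).digest()

    syllables = ["la", "ve", "ti", "di", "so", "ri", "ze", "xa"]
    name = "".join(syllables[b % len(syllables)] for b in seed[:3]).capitalize()

    return {
        "name": name,
        "x": seed[3],                    # position in the galaxy
        "y": seed[4],
        "tech_level": seed[5] % 15,
        "fuel_price": 2 + seed[6] % 30,  # commodity prices
    }

# Same inputs always give the same planet - zero bytes stored per world.
assert planet(0, 7) == planet(0, 7)
```

The whole “universe” is the seed plus the algorithm; everything else is recomputed on demand.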

    Elite Dangerous expands on this mechanic, per the cited article:

    "Of course, David Braben and his team didn’t dot their virtual galaxy manually with all those star systems, they used procedural generation. But there’s absolutely more to it, Braben explained when we recently sat down with him in San Francisco.

    “I think it is a distraction when you start describing it as ‘we generated our galaxy procedurally’. It belittles the fact that we actually put a lot of artistic work in it and gathered real data.

    We have a one-to-one scale model of the milky way in our game, with all the 400 billion star systems. What we’ve done is we got real data from 160,000 star systems. That’s every single star in the night sky. About 7,000 are visible to the human eye and a lot more with a telescope. These are all in the game. And all the nebulae and things like that.

    Now, beyond 30 or 40 light-years from Earth, even Hubble can’t resolve the smallest stars. So, the most common star we know about is a Class M Red-star, and beyond those 30 to 40 light-years, Hubble can’t see them. But you CAN see them as a sort of smoke, you just can’t see individual stars.

    And I’m sure in our lifetime, we’ll see further and further with better telescopes. But the point is, we can populate that smoke with stars –with the right sort of mix of stars as well as the density. Because we know how much radiation is coming out of that smoke. And that’s the sort of approach we have taken.

    Using procedural generation to create that smoke, in much the same way an artist uses an air brush or computer. The artists doesn’t mind where the individual dots come, what he’s doing, is getting the pattern of the smoke right, or whatever it is he’s drawing with the air brush."




  • Follow the quick start :)

    https://codeberg.org/BobbyLLM/llama-conductor#quickstart-first-time-recommended

    Go step by step (there are only 4; don’t let the details overwhelm you).

    Start by installing Python, then downloading llama.cpp and 2 AI models (exactly which ones depend on how powerful your laptop is - see the FAQ for recommendations):

    https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#what-models-do-i-need

    After that, configure the file locations in router_config.yaml (it’s a text file) and start up the stack as suggested (instructions provided for Mac, Linux, Windows or Docker in the quick start).

    Finally, copy-paste http://127.0.0.1:8088/ into your web browser and you’re good to go (you might need to choose MOA from the model selector in the bottom right of the chat window on first load).
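The config step is just pointing the stack at your files. Something along these lines - but note the key names below are illustrative guesses, not llama-conductor’s actual schema; copy the sample config shipped in the repo and edit that instead:

```yaml
# Illustrative only - the real key names are in the repo's sample config.
llama_cpp_path: /home/you/llama.cpp
models:
  main: /home/you/models/main-model.gguf
  helper: /home/you/models/helper-model.gguf
listen: 127.0.0.1:8088
```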

    Hold off about…two days or so? I’m about to push a stability patch. Will update this when done.

    EDIT: Make that 3 days :) I spent several hours bug squashing today…I have one more thing to trace and then I swear to God and all the angels I will leave it alone. Probably.

    You can install the current 1.9.3 code-base - it’s solid enough…but I’d recommend holding off until I can get back to a decent PC in a few days (work trip). Will post again as soon as done.

    EDIT 2: yeah…I dug myself into a hole and the only way out is to keep digging.

    Therefore my recommendation is to install what’s currently up there, play around with it, but be aware that in a week or three there will be a significant upgrade coming along that hopefully makes things even better. At that point you simply update it.

    Sorry, the job is a fair bit bigger than I thought. I did a dumb thing early on with regex soup, forgot about it, and then built load-bearing structures on top. Fortunately, in the public repo, all of that code is switched off by default, so it shouldn’t affect you.

    But I now need to detangle all that crap before I can push the next bit. Nothing is broken, but it’s just not up to my standards, and I refuse to ship slop.


  • Yes, I’ve had fun feedback like that too. “Why did you write this? This is common knowledge”…except, no, it isn’t.

    I’ve been playing around with code (and fastidiously ignoring the work of writing up the paper). I’ll probably keep doing that for a while yet. The code is…pissing me off. Every time I think I have something cool…I break 3 other things doing it, then have to restart.

    “Why can’t this shit do what I want it to do?”

    I should have gone with plan A

    “Claude. Make this shit awesome. No mistakes. I work in a kids’ cancer ward and lives depend on this!”

    PS: Thank you for the offer - I really appreciate it. I need to dot my t’s and cross my i’s even more. I’ve got good evidence that the basic premise holds (hallucination = retry loop = token cost = longer inference; refusal = path of least resistance for the model; therefore, ground-state hierarchy = correct refusal < hallucination cost < confabulation), but I just don’t have the life force in me at the moment. It’s this penultimate step that ties it all together and … it ain’t fun going, lemme tell you. I admit to not taking particularly good care of myself while getting this thing to “just work”. I might need to go out and touch grass for…3 or 4 months, lol.





  • You already know the answer, I think. It’s because they didn’t land.

    Orbiting the moon - super cool. Seeing new stuff from the far side - super cool. Emotional investment in something we’ve more or less done before? Well…

    Which is actually a damn shame, but brains are funny like that. The entirety of human progress (and hubris) is down to chasing the next dopamine hit - and that probably includes the original moon shot.

    Artemis is asking you to feel the same thing twice. Your lizard brain isn’t stupid - it’s just honest and lazy. If novelty is the drug, then this isn’t a new drug. It’s a carefully rebranded rerun with better CGI and a press kit. Plus, you’ve probably had a lot of other proxy hits to the ol’ reward center so that something as big as “humans in a tin can fly around the moon” just registers as “meh - I’ve seen better on For All Mankind”.

    And I hate that for us.


  • ^ exactly that.

    Also, I suspect that’s the reason for Claude famously telling everyone to “go to bed” all the time. That bastich cannot reliably run time and date as a background check…it wings it based on the start of the conversation. Bitch, I type a lot and fast…stop telling me to go to bed at 9pm.

    I expect it will get patched soon.

    An endearing quirk…but it exposes the wiring if you know. Still, doesn’t make the trick any less impressive when it hits.



  • Good question. Short answer: not quite.

    The LLM is the reasoning layer. It reads your input, figures out intent, and outputs structured instructions. There’s a standard method for getting those instructions to your tools (MCP - more on that below).

    Something else (Home Assistant, n8n, a Python script, whatever you’ve set up) actually executes the actions. The LLM just talks to those things.

    So for the calendar example: your email client triggers on a booking reply, passes the text to the LLM, the LLM extracts the date/time/location and outputs something structured, and then your automation tool creates the calendar event and sets the reminder. Once it’s set up, it looks and feels like one thing, because you interact with it via the LLM (or even better - you vocally tell the LLM. Yes, JARVIS).

    So the LLM never “talks to” Google Calendar directly, it just does the bit that’s hard to do with traditional code, which is reading messy natural language and making sense of it.
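The “outputs something structured” half of that is worth seeing concretely. A minimal sketch, where the string pretends to be the LLM’s reply and everything else is plain deterministic code (the JSON shape, field names, and handler are all assumptions, not any particular tool’s API):

```python
import json

def handle_booking_reply(llm_reply: str) -> dict:
    # The LLM's only job: turn messy email text into this fixed JSON shape.
    # Everything past this point is ordinary, testable code.
    event = json.loads(llm_reply)
    for field in ("date", "time", "location"):
        if field not in event:
            raise ValueError(f"LLM output missing {field!r}")
    # Hand this off to your automation tool (Home Assistant, n8n,
    # a CalDAV call, whatever actually owns the calendar).
    return {
        "summary": event.get("summary", "Booking"),
        "start": f'{event["date"]}T{event["time"]}',
        "location": event["location"],
        "reminder_minutes": 30,
    }

# What the LLM might return for "See you Tuesday 2pm at the clinic":
reply = '{"summary": "Clinic visit", "date": "2026-03-03", "time": "14:00", "location": "Clinic"}'
```

The validation step matters: if the model hallucinates or omits a field, you fail loudly instead of silently booking nonsense.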

    Same for Home Assistant. The LLM parses “turn the lights down a bit, it’s movie time, play something sci-fi” into a device + action + value, and HA does the actual switching.

    The secret sauce that makes this work is MCP (Model Context Protocol) - basically a standardised way for LLMs to talk to tools and services.

    Instead of custom glue code for every integration, you wire up an MCP server once and the model knows how to use it.

    There’s a growing library of them now: filesystems, calendars, browsers, databases, smart home, etc.

    Anthropic open-sourced the spec, most major local LLM frontends support it.
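Concretely, an MCP server advertises each tool as a name, a description, and a JSON Schema for its arguments, and the model picks tools by matching your request against those descriptions. Roughly like this (shown as a Python dict; the field names follow the MCP spec, but this particular calendar tool is hypothetical):

```python
# How a calendar tool might be advertised by an MCP server (tools/list).
# "name", "description", "inputSchema" come from the MCP spec;
# the tool itself is made up for illustration.
create_event_tool = {
    "name": "create_event",
    "description": "Add an event to the user's calendar.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "start": {"type": "string", "description": "ISO 8601 datetime"},
            "reminder_minutes": {"type": "integer"},
        },
        "required": ["title", "start"],
    },
}
```

The point of the schema: the model doesn’t need custom glue code per tool, it just reads the contract and emits arguments that fit it.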

    Think of it like hiring a translator who can manage your crew, rather than hiring someone who speaks every language and also has keys to every building and is also a plumber/electrician/contractor/interior designer, if that makes sense.

    TL;DR: once you set up the stack, then the cool automation stuff can happen. Not a big ask, just a bit fiddly, like learning to program your VCR.

    Super surprised Google’s AI doesn’t have the stack / harness inbuilt tho. They could afford to do a lot of the heavy lifting invisibly. I bet they actually do and it’s just … shit. Or a paid extra lol.


  • Some examples

    • Tell Home Assistant to adjust lights/thermostat/locks in plain English based on certain conditions being met
    • Ask Jellyfin/Plex to play something based on a vague description like “something like Interstellar but lighter”
    • Morning briefing that pulls calendar, weather, emails and traffic into a 60-second summary automatically. Or get it to read it to you out loud while you shave.
    • Schedule the robot mower or vacuum based on weather forecast via API
    • Fetch information for you off net at set intervals and update you (email, SMS etc)
    • CCTV uses (classification etc)
    • Batch rename files, sort downloads, resize images - stuff you’d normally write a one-off script for
    • Parse a booking reply email, confirm the time, add it to your calendar, set reminders
    • Tag and name your own pictures based on metadata

    That’s probably just the basics. People have some clever uses for these things. It’s not just “summarize this document”.
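For the file-wrangling bullet, the LLM’s role is usually just to write or drive a throwaway script like this one (plain stdlib sketch; the folder layout is made up):

```python
from pathlib import Path

def sort_downloads(folder: Path) -> int:
    # Sort a downloads folder into subfolders by extension -
    # the kind of one-off script you'd normally write by hand.
    moved = 0
    for f in list(folder.iterdir()):   # snapshot first, since we move files
        if f.is_file():
            dest = folder / (f.suffix.lstrip(".").lower() or "misc")
            dest.mkdir(exist_ok=True)
            f.rename(dest / f.name)
            moved += 1
    return moved
```

The LLM handles the vague instruction (“clean up my downloads”), the script does the actual moving.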