

I haven’t dug my teeth into it as much as I should have yet, so I can’t tell you which parts of your requirements are and aren’t fulfilled. But have you had a look at Kenshi? There’s also a sequel on the horizon.


Even within Swiss German itself, the people in the Canton of Valais speak such a strong dialect (actually a group of dialects) that most of the rest of Swiss German people don’t understand them.
Part of the problem here is that AI is mostly done by companies with billions in investment, and in turn they NEEEEEDDDDD engagement, so they all made their AI as agreeable as possible just so people would like it and stay, with results like these becoming much more “normal” than they should or could be.
I wonder how much of that is intentional vs a byproduct of their training pipeline. I didn’t keep up with everything (and those companies became more and more secretive as time went on), but iirc for GPT-3.5 and 4 they used human judges to rate responses. Then they trained a judge model that learns to sort a list of possible answers to a question the same way the human judges would.
If that model learned that agreeing answers were, on average, rated more highly by the human judges, then that would be reflected in its orderings. This then makes the LLM more and more likely to go along with whatever the user throws at it as this training/fine-tuning goes on. Instead of the judges liking agreeing answers more on average, it could even be a training set balance issue, where there simply were more agreeing than disagreeing possible answers. A dataset imbalanced that way has a good chance of introducing a bias towards agreeing answers into the judge model. The judge model would then pass that bias on to the GPT model it is used to train.
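For context on how such a judge (reward) model typically picks up that bias: it’s usually trained with a pairwise ranking loss over human preference pairs. Here’s a minimal plain-Python sketch of that loss; the scores are hypothetical, not from any real model:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_loss(preferred_score: float, other_score: float) -> float:
    # Bradley-Terry-style objective commonly used for reward models:
    # it's minimized when the judge scores the human-preferred answer higher.
    return -math.log(sigmoid(preferred_score - other_score))

# If "agreeing" answers sit on the preferred side in most training pairs,
# minimizing this loss pushes the judge to score agreement higher in
# general -- the dataset-imbalance bias described above.
print(pairwise_loss(2.0, 0.0))  # small loss: judge already ranks like the human
print(pairwise_loss(0.0, 2.0))  # large loss: judge disagrees with the human
```

The point is that the judge never sees “agree” or “disagree” as a label; it just absorbs whatever regularities separate preferred from non-preferred answers.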
Pure speculation time: since ChatGPT often produces two answers and asks which one the user prefers, I can only assume the user in that case is taking up the mantle of those human judges. It’s unsurprising that the average GenAI user prefers to be agreed with, so that’s also a very plausible source for that bias.
The nomination phase has a few suggestions for you, based on what you played. But if you don’t like them/want something else, there’s a button for that. Now you’ll know for next year.


My uni had one. Sadly I couldn’t fit it into my schedule because of overlaps and other requirements.

In the OOP’s example, that is solved by magecraft (which is distinct from “true” magic btw) losing potency by becoming general knowledge, thus forcing mages into working their magecraft in secrecy.
Both the mages and the Church work hard at keeping it a secret, albeit with different motives and methods.
*With AI review :)


While it doesn’t say anything about IIV specifically, they sure got creative enough to sometimes subtract more than one of the smaller units from a larger one.


I kept up with the drama until about a week ago, so what I’m saying here is the status from back then. Someone please chime in if I’m missing any new developments:
From what it appeared, view counts dropped but ad revenue stayed the same. Even before this whole thing, YouTube paid out for ads watched (and clicked); payout hasn’t depended on raw view count for a long time, if ever.
This suspicious combination of view counts dropping while ad revenue stayed the same is actually what tipped people off that the issue was adblock-related. The fact that channels focused on younger audiences saw less of a drop also helped.
Now those view counts dropping could still have an indirect, negative effect on ad revenue, if it, e.g. automatically leads to YouTube recommending their videos less prominently.
Another European here to chime in that I also learned to write capital As like that in cursive.
The rs, fs and ts don’t look like how we were taught though.


I’ve been to multiple museums in Japan (which is somewhat relevant because Nintendo is Japanese) that either flat out ban all photography (e.g. Ghibli Museum, Aomori Museum of Modern Art) or have some exhibits that you’re not allowed to take pictures of (e.g. Tokyo National Museum). One exhibit I wanted to take a picture of had a “no photography” sticker on it, but it was on the opposite side from where I approached, so I didn’t see it; staff ran up to point out the sign when I pulled out my phone.
I’ve also heard from other tourists that “no photos” seems to be rather common there.
Btw, I’m not saying that they’re justified at all, just that there are indeed places that forbid photos for copyright reasons. In my opinion, no photo could ever match seeing the exhibits in person, so banning them is entirely pointless. Even professional, official scans of pieces don’t come close.

My friends and I all call them “Handys”.
And for people not familiar with English loanwords in German: yes, the correct German pluralization ends in “ys”, not “ies” as it would in English. The same goes for “Hobbys” and “Babys”; not sure if there are more.
Most of the time the LLM version isn’t the one shown there; it’s “It’s not only XXXX, it’s YYYY.”
Also I noticed I almost wrote exactly the same pattern as the one OP pointed out.
To showcase it, I prompted ChatGPT to write me a few paragraphs on the importance of radio astronomy.
At first I thought it had somehow stopped doing that, but then, in the conclusion, it wrote:
In short, radio astronomy doesn’t just fill in the gaps of our cosmic knowledge—it opens entirely new windows into the universe.
Which follows the same pattern.


On the second part: that is only half true. Yes, there are LLMs out there that search the internet and summarize and reference some websites they find.
However, it is not rare for them to add their own “info” that isn’t in the cited source at all. If you use them to find sources and then read those yourself, sure. But the LLM’s own output should still be taken with a HUGE grain of salt and not relied on for anything critical, even when it comes with a nice citation.
Yes, but even they had some use beyond just “0 mana: do nothing”.
Doing stuff with Darksteel Ingot, while it can work, was always a meme and never meta.
In addition to what other people already said, without looking at the actual percentages, this could also just be random fluctuation.
Mostly Positive is 70-79%, Mixed is 40-69%. If a game teeters around the 70% mark, it can easily cross the threshold separating the two due to pure chance, in either direction.
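To illustrate how easily that threshold-crossing can happen, here’s a quick plain-Python simulation of a hypothetical game whose “true” positive rate sits exactly on the 70% boundary (all numbers made up):

```python
import random

random.seed(42)  # fixed seed so the example is repeatable

def observed_positive_pct(true_rate: float, n_reviews: int) -> float:
    # Each review is an independent coin flip at the game's "true" rate.
    positives = sum(random.random() < true_rate for _ in range(n_reviews))
    return 100.0 * positives / n_reviews

# Hypothetical game sitting right at the 70% boundary, with 300 reviews.
samples = [observed_positive_pct(0.70, 300) for _ in range(1000)]
below = sum(pct < 70.0 for pct in samples) / len(samples)
print(f"runs landing in 'Mixed' (<70%): {below:.0%}")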
Exactly, only twice as common. To put it in other words: for every two times someone says “free as a bird”, one person says “happy as a clam”.
That is much narrower than the gap between something commonly said and something rarely said.


I can believe it insofar as they might not have explicitly programmed it to do that. I’d imagine they put in something like “Make sure your output aligns with Elon Musk’s opinions.”, “Elon Musk is always objectively correct.”, etc. From there, this would be emergent, but quite predictable behavior.


What’s also kinda wild is how those plans often have a 0% interest rate as long as you’re able to pay the installments on time. Which means in theory you MAKE money by using them, because you can earn interest on that money in the meantime.
It ALSO means they know the people using those services are so bad with money that the companies can sustain themselves (and make a nice profit) purely from clients failing to pay on time, after which they sell the debt to debt collectors. It’s absolutely disgusting how predatory this is: they make most of their money on the people who need such a system the most (and, to a smaller extent, on people who don’t care).
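Rough toy math on the “you make money” part, with entirely made-up numbers (a $1200 purchase split into 12 monthly installments at 0% interest, while the cash sits in a hypothetical savings account at 4% APY):

```python
# All numbers are made up for illustration.
purchase = 1200.0
months = 12
monthly_payment = purchase / months   # 100.00/month at 0% interest
savings_apy = 0.04                    # hypothetical savings account rate

balance = purchase   # park the full amount in a savings account up front
interest_earned = 0.0
for _ in range(months):
    gain = balance * savings_apy / 12   # simple monthly interest
    interest_earned += gain
    balance += gain - monthly_payment   # pay each installment from savings

print(round(interest_earned, 2))  # pocket change, but strictly positive
```

So the upside for a disciplined customer is real but tiny; the business model clearly isn’t built around them.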
I cannot agree with this at all. If you’re guaranteed a piece of candy, but on top of that you have a 0.0001% chance of winning a million dollars, then buying that candy for $100 is absolutely gambling, not a purchase.
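Running the numbers on that example (with a made-up dollar value for the candy) shows why: almost the entire price pays for the lottery-style chance, not the guaranteed item.

```python
# Toy expected-value check for the candy example above.
# candy_value is an assumption; the other numbers are from the comment.
price = 100.0
candy_value = 1.0               # what the guaranteed candy is plausibly worth
jackpot = 1_000_000.0
p_jackpot = 0.0001 / 100.0      # 0.0001% expressed as a probability (1e-6)

expected_value = candy_value + p_jackpot * jackpot  # $1 candy + $1 of lottery
expected_loss = price - expected_value              # ~$98 per "purchase"
print(expected_value, expected_loss)
```

You’re paying $100 for about $2 of expected value, and $99 of the price buys nothing but the chance itself.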