Shinji_Ikari [he/him]

  • 9 Posts
  • 295 Comments
Joined 6 years ago
Cake day: July 29th, 2020

  • Outside of our lefty bubble, the propaganda is effective.

    If you haven’t already built up distrust for sources like the NYT or the Washington Post, you will see articles from (what you believe are) trustworthy sources downplaying crimes around the world. Remember, the bulk of the violence during the GWOT happened pre-social media. You mostly saw what you were shown. You didn’t look deeper, or read sources with counter opinions, because they were ‘probably propaganda’.

    Stepping into 2026, nearly everyone has seen what Israel has done to Gaza. But people still have the walls up in their minds: it’s Israel’s doing, Israel is the rogue state, the US wouldn’t do that, etc., etc.

    If you’re in the armed forces, chances are you signed up because you were kinda dumb. Not really the kind to read foreign newspapers, or any newspaper really. Maybe you grew up poor and didn’t really have the bandwidth to care about the greater world when you were struggling to eat. A few years in, you’ve seen the inner workings, the inefficiencies, the general disdain for humanity, but you understand that’s how the military is: it’s tough to make you an efficient fighter, or whatever. But you’ve still got a few years left on your contract, and “they own you” is the mantra repeated to every boot in the country. Your frontal lobe is developing, and you’re starting to think a little more about your place in the system. At least we’re not in a hot war, right?

    Well, here comes Donnie, ole president of peace, starting a new forever war. You’ve seen the shit from Gaza on your phone; it probably made you rub a couple brain cells together, but you can’t worry about it much. Don’t forget, the government owns you. They’ve changed your shift schedule six times in the last two weeks. You’re operating on three hours of sleep, but it’s okay, you’re tough, you have energy drinks.

    Oh shit, news came in. A girls’ school, they say? Must have been an accident, must be fake news, it’s propaganda. [Those brain cells are starting to vibrate; the rubbing is creating heat.] The US admits it. The US claims it was a mistake. This flies in the face of all the propaganda you’ve been fed: “We make sure we hit the right target, that’s why we have all this tech. We’re not like those terrorists who kill kids. We can stick a missile up a terrorist’s ass from half the world away.” You start thinking about it. You learn it was a cluster munition, that it was a double tap. You still have two years left on your contract before college is paid for and you get a nice deal on a home loan with the GI Bill. You can make it through to provide for your family.

    You get the call: it’s boots-on-the-ground time. The dots start connecting; those two brain cells get an ember burning. You realize this is real, and the mind games you and everyone around you have been playing stop hitting.

    You call the hotline.


  • Back in college I took a couple of machine learning classes. After the second, I understood where the market would eventually end up. It’s a pattern-matching machine: if you were to provide infinite data and infinite compute, you could have the machine do enough regression to match the presented surface of whatever the data represented.

    I sat back and was like, “oh, this is sorta cool but sorta dumb.” You can’t create a novel thought process from this; you’re limited by what data you can collect and label, and labeling data is an extremely time-consuming process because it relies on humans. But you don’t really need labeled data if you don’t care about correctness. You can get away with feeding the regression machine a load of data and labeling it more generally based on how close certain points are in a vector space. That’s how “sentiment analysis” works. You can take IMDB’s database of reviews, each of which has some words and a star rating, use the star rating to categorize reviews as “good” or “bad”, then average out the distances between certain words and the frequencies with which they appear across the “good”, “mid”, and “bad” spectrum.

    Suddenly, relationships show up: “lawyer” is close to “criminal” in the 3D space.
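    The star-rating trick described above can be sketched in a few lines. This is a toy illustration with made-up reviews (not real IMDB data), and it uses simple word log-odds between "good" and "bad" buckets rather than a real vector space, but the idea is the same: the ratings label the data for free.

```python
from collections import Counter
import math

# Made-up stand-ins for IMDB reviews: (text, star rating).
reviews = [
    ("a thrilling brilliant film", 5),
    ("brilliant acting great fun", 4),
    ("dull boring waste of time", 1),
    ("boring plot terrible acting", 2),
]

# Bucket word counts by rating: 4+ stars is "good", the rest "bad".
good, bad = Counter(), Counter()
for text, stars in reviews:
    (good if stars >= 4 else bad).update(text.split())

def score(word):
    # Log-odds of a word appearing in "good" vs "bad" reviews,
    # with add-one smoothing so unseen words don't blow up.
    return math.log((good[word] + 1) / (bad[word] + 1))

def classify(text):
    return "good" if sum(score(w) for w in text.split()) > 0 else "bad"

print(classify("brilliant fun"))    # prints "good"
print(classify("boring terrible"))  # prints "bad"
```

    Real sentiment systems use dense embeddings instead of raw counts, but the principle is unchanged: “meaning” falls out of co-occurrence statistics and cheap labels.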

    What modern LLMs do is layer this same system with a few short-circuits called context windows. The model basically maintains a space of relevance within the broader context of what it was trained on. For the IMDB example, let’s say you’re asking a machine about action movies with x, y, z characteristics; the context maintains those to short-circuit the larger model, retaining ‘focus’ and giving you nearby relationships.
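    As a very loose analogy (this is not how transformers are actually implemented), the “context keeps the model focused on nearby relationships” idea can be pictured as ranking candidate words by similarity to the words already in play. The vectors here are hand-made toys, not learned embeddings:

```python
import math

# Hand-made 3-component "embeddings" for a few movie-related words.
vecs = {
    "action":    (1.0, 0.1, 0.0),
    "explosion": (0.9, 0.2, 0.1),
    "romance":   (0.0, 1.0, 0.1),
    "kiss":      (0.1, 0.9, 0.0),
    "thriller":  (0.8, 0.3, 0.2),
}

def cos(a, b):
    # Cosine similarity: angle between two vectors, ignoring magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearby(context_words, k=2):
    # Rank candidates by average similarity to the running context,
    # mimicking "stay focused on what was asked".
    ctx = [vecs[w] for w in context_words]
    cands = [w for w in vecs if w not in context_words]
    scored = [(sum(cos(vecs[w], c) for c in ctx) / len(ctx), w) for w in cands]
    return [w for _, w in sorted(scored, reverse=True)[:k]]

print(nearby(["action"]))  # prints ['explosion', 'thriller']
```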

    With enough data, you can recreate language based on distance markers and frequency. But back to my original point: it’s the surface level of what it was trained on, a plaster mask. The mask doesn’t have the complexity of the muscles and skin it was formed on; it’s shallow.
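    The “recreate language from frequency alone” point can be made concrete with the smallest possible version: a bigram model that generates text purely from observed word-pair counts. The corpus here is a made-up toy, but scale the same idea up far enough and you get fluent surface without anything underneath:

```python
import random
from collections import Counter, defaultdict

# Tiny toy corpus; a real model would train on billions of words.
corpus = "the mask is shallow . the mask has no muscles . the skin is real .".split()

# Count which word follows which: pure frequency, nothing else.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def generate(start, n=6, seed=0):
    # Walk the frequency table, sampling each next word in proportion
    # to how often it followed the previous one.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = bigrams[out[-1]]
        if not nxt:
            break
        words, counts = zip(*nxt.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```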

    That all being said, the ability to make a shallow mask is useful for cross-referencing large amounts of data. The disaster strikes when it’s treated as an all-knowing god and used to direct military strikes.


  • I’m a little OCD about keeping track of the time, not sure why. One thing I enjoy is wearing a watch. I’ll look at it half a dozen times in a row to make sure I read it right, but I also enjoy wearing it because I think it looks neat, and I like seeing it and watching it tick.

    Maybe a cheapish watch that the kid likes, and would want to wear rather than needing to remember to wear it, could help? A little dose of “that’s neat” while also seeing the time.



  • For this task in particular, the value would be somewhat foundational to a design, and a believable-but-incorrect value could incur thousands of dollars in mistakes and time later on, some far harder to debug than others. It’s essentially an age-old battle between my brain and interacting with spreadsheets that I just need to get over. It would be cool if you could use LLMs in adversarial forms, where they look to prove another LLM wrong or verify output to some 3-4 nines of accuracy, but I have a brain and can do that too.

    I’ve worked on various hard problems that hit the limits of the LLMs pretty quickly. It’s frustrating, because so much of the information that used to be on the Internet is gone now, and what’s left can’t be found due to how bad search engines have gotten; even using the LLM as a search engine just pops up the same webpages I’ve already deemed unhelpful.