Mniot

  • 0 Posts
  • 117 Comments
Joined 1 year ago
Cake day: March 10th, 2025

  • Mniot@programming.dev to Lemmy Shitpost@lemmy.world: “Yeeeesh, tough choice.” (2 up, 1 down · 5 days ago)

    Yes, you are correct that the numbers for the Green Party aren’t usually enough to make a difference in any election. I’m just frustrated that they’re not even providing a legitimate alternative to the Democratic Party.

    How I’d want the Greens to work:

    1. Run real candidates in local elections who care about the environment and get shit done. Be able to point to things like town X electing a Green as mayor and not having their water poisoned. Or 6 state reps from the Greens passing air pollution controls and getting more business into the state instead of less. Pro-bike laws. Etc.
    2. Ask both national parties what they’re going to promise to do for the environment and either endorse the better one or release a statement that both parties are unwilling to do anything for the environment, encouraging people to vote local.

    How the Greens actually work:

    1. Take money from private interests
    2. Abandon all local races
    3. Fuck around at the national level, drawing attention and money away from groups that would try to get anything done




  • “Exactly as horrible” is unfair to you. But it sounds like you’re advocating to give up on rule of law and just have the strongest, most violent people be the ones to decide what’s right. And I’d argue that you’d just get back to where we are right now: wealthy people would control the system, they’d employ strong, violent people to enforce their personal whims as “law”, and you’d be complaining that nobody is willing to beat up the pedophile (because his friends would hire goons to kill anyone who tried).

    I mean, presumably that’s what’s stopping you personally from implementing your own recommendation, right? Because if you showed up and kicked this guy’s ass you’d be beaten and arrested by the police.






  • Many on the cultural right are forgetting something critical: same-sex marriage doesn’t infringe upon anyone else’s rights. A crucial argument against gender ideology was the infringement on women’s rights. But unlike trans edge cases such as women’s sports or prisons, marriage isn’t a zero-sum issue. There isn’t a finite number of spots on the “marriage team.” My getting married takes nothing away from straight couples.

    It’s too bad they’re both too incurious to think for themselves and so media-illiterate that they haven’t read The Handmaid’s Tale… Obviously lesbian couples can be broken up and forced into miserable straight marriages and this is precisely what the right would want to do! (It’s even part of the American past that MAGA wants to return to!)






  • Cars in general are the problem, and even if they all went electric they’d still be bad. (But cities would be much quieter, and EVs are hella fun to drive.)

    If you’re able to use a bicycle for some of your trips instead of a car, that’s a good change. (And if you’re not, you might not be able to use an EV even if you could afford one: it takes way longer to charge a battery than to fill a gas tank.)






  • The “agents” and “agentic” stuff works by wrapping the core innovation (the LLM) in layers of simple code and other LLMs. Let’s try to imagine building a system that can handle a request like “find where I can buy a video card today. Make a table of the sites, the available cards, their prices, and how they compare on a benchmark.” We could solve this if we had some code like

    import json  # assumes helpers llm(), google_search(), and fetch_url() are defined elsewhere

    search_prompt = llm(f"make a list of google web search terms that will help answer this user's question. present the result in a json list with one item per search. <request>{user_prompt}</request>")
    results_index = []
    for s in json.loads(search_prompt):
        results_index.extend(google_search(s))
    results = [fetch_url(url) for url in results_index]
    summarized_results = [llm(f"summarize this webpage, fetching info on card prices and benchmark comparisons <page>{r}</page>") for r in results]

    return llm(f"answer the user's original prompt using the following context: <context>{summarized_results}</context> <request>{user_prompt}</request>")
    

    It’s pretty simple code, and LLMs can write that, so we can even have our LLM write the code that will tell the system what to do! (I’ve omitted all the work of sandboxing and sanitizing the output from the various internal LLMs.)

    The important thing we’ve done here is instead of one LLM that gets too much context and stops working well, we’re making a bunch of discrete LLM calls where each one has a limited context. That’s the innovation of all the “agent” stuff. There’s an old Computer Science truism that any problem can be solved by adding another layer of indirection and this is yet another instance of that.

    Trying to define a “limit” for this is not something I have a good grasp on. I guess I’d say that the limit here is the same: max tokens in the context. It’s just that we can use sub-tasks to help manage context, because everything that happens inside a sub-task doesn’t impact the calling context. To trivialize things: imagine that the max context is 1 paragraph. We could try to summarize my post by summarizing each paragraph into one sentence and then summarizing the paragraph made out of those sentences. It won’t be as good as if we could stick everything into the context, but it will be much better than if we tried to stick the whole post into a window that was too small and truncated it.
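    A minimal sketch of that recursive trick. Everything here is made up for illustration: `llm()` is a stub that just keeps the first sentence of the text (standing in for a real model call that would return an actual summary), and `summarize()`, `max_chars`, and the prompt tags are invented names.

    ```python
    # Hierarchical summarization under a tiny "context window".
    # llm() is a stand-in: it keeps only the first sentence of the text,
    # playing the role of a real model call that returns a summary.
    def llm(prompt: str) -> str:
        text = prompt.split("<text>")[1].split("</text>")[0]
        return text.strip().split(". ")[0].rstrip(".") + "."

    def summarize(paragraphs: list[str], max_chars: int = 200) -> str:
        joined = "\n".join(paragraphs)
        if len(joined) <= max_chars:
            # Small enough to fit everything in one context window.
            return llm(f"summarize: <text>{joined}</text>")
        # Too big: boil each paragraph down in its own sub-call (limited
        # context per call), then summarize the summaries.
        sentences = [llm(f"one sentence: <text>{p}</text>") for p in paragraphs]
        return summarize([" ".join(sentences)], max_chars)
    ```

    The point isn’t the stub’s quality; it’s that no single call ever sees more than one paragraph’s worth of text.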

    Some tasks will work impressively well with this framework: web pages tend to be a TON of tokens but maybe we’re looking for very limited info in that stack, so spawning a sub-LLM to find the needle and bring it back is extremely effective. OTOH tasks that actually need a ton of context (maybe writing a book/movie/play) will perform poorly because the sub-agent for chapter 1 may describe a loaded gun but not include it in its output summary for the next agent. (But maybe there are more ways of slicing up the task that would allow this to work.)