• 0 Posts
  • 71 Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • Most Linux distributions are free (free as in beer, and free as in the freedom to modify and redistribute). Some are backed by big corporations with questionable practices (e.g., Ubuntu, developed by Canonical, which has added ads and data tracking by default).

    Federation is a different concept (it relates to interconnecting content platforms, such as email or Lemmy).

    Linux itself is the kernel: the underlying code that programs talk to, acting as a mediator between software and hardware. Each Linux distribution is basically a software suite built on top of it.
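    As a tiny illustration of that mediation (a Python sketch, assuming a Linux machine; any language would do, since every program ends up making the same syscalls):

    ```python
    import os

    # os.uname() asks the kernel to describe itself (the uname syscall).
    info = os.uname()
    print(info.sysname, info.release)  # e.g., "Linux 6.1.0-..."

    # Opening and reading a file are thin wrappers over kernel syscalls;
    # the program never touches the hardware directly.
    fd = os.open("/proc/version", os.O_RDONLY)
    print(os.read(fd, 200).decode())
    os.close(fd)
    ```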

    Arch is specifically notable for having a very fast software update cycle.

    In contrast, Debian is a distribution with a “slow and stable” mantra. Officially supported and packaged software only receives major updates every couple of years, after extensive stability testing. The goal is that a random update never breaks anything. This also means Debian is slow to support new hardware unless you install that support manually. You can often run newer software, but it won’t be nicely managed by the OS, and you’ll be doing manual work to maintain it. The consequence? I have a new graphics card, and booting into Debian just gives me a black screen. I had to use the terminal to download and install Nvidia’s driver myself.

    Arch isn’t so concerned with stability. It’s still tested, but the goal is to make new hardware and software advances usable right away. Think weeks instead of years. This means it supports newer hardware, and any new Linux advancement will be on your machine before long. It also means that sometimes things slip through the cracks: one piece of software might break, or break another. You might need to check the Arch news for incompatibilities before updating.

    Other distributions build on top of these. Arch itself must be installed from scratch, a tricky process, though it has derivatives that make installation easier. Debian is more streamlined. Ubuntu is built on Debian, keeping much of its stability, but maintains its own software repositories to keep things a bit more up to date.

    Arch gives you more flexibility in what you install and more control over your system. Debian has lots of flexibility as well; Ubuntu has a bit less. Mint is a popular choice built on Ubuntu that removes some of the “chaff” people complain about Ubuntu adding.

    Linux distributions can run on basically anything; a smart toaster might run Linux. If a machine can run Windows, it will probably run a Linux distribution with a quarter of the memory usage at double the speed, because Windows hogs resources with unnecessary, unkillable background software.


  • PixelProf@lemmy.ca to memes@lemmy.world · “ha… wait, yes! Haha!” · 9 months ago

    It’s tough as a computer science professor from a related perspective. Lots of students arbitrarily hate anything AI-related because of this, including all of the traditional techniques from the 60 years prior to the rise of LLMs and diffusion models, while others misconstrue or discount any AI class that isn’t LLM- or diffusion-related.

    I never like to say technology is inevitable, as the inevitability argument is one of the best marketing tools major companies have to justify their poor ethics and business models (see: the gig economy founders, the “Momentum” mindset). It’s clear, though, that quite a paradigm shift is occurring.



  • Can’t say I’m deep in this space, but I think there’s a lot of sentiment toward Mozilla going leaner with operations and offering direct donations toward Firefox development (which I don’t believe is presently an option). If Mozilla narrowed to its core (Firefox, MDN), the community would likely show heavy support. I have my doubts that would fully cover the bill in a sustainable way, but I think that’s one of the main sentiments.



  • Interesting points; maybe a book I’ll have to give a read. I’ve long thought that information overload on its own leads to a kind of subjective compression, and that we’re seeing the consequences of this, plus late-stage capitalism.

    Basically, if we only know about 100 people and 10 events and 20 things, we have much more capacity to form nuanced opinions, like a vector with lots of values. We don’t just have an opinion about a person; our opinion toward them is the sum of opinions about what we know about them and how those things relate to us.

    Without enough information, you think in very concrete ways. You don’t build up much nuance, and you have clear, at least self-evident logic for your opinions that you can point at.

    Hit a sweet spot, and you can form nuanced opinions based on varied experiences.

    Hit too much, and now you have to compress the nuances to make room for more coarse comparisons. Now you aren’t looking at the many nuances and merits, you’re abstracting things. Necessary simulacrum.
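    To make the compression idea concrete, a toy sketch (all names and numbers invented for illustration):

    ```python
    # A "nuanced" opinion keeps one score per trait we actually know about;
    # an overloaded mind compresses it all down to a single coarse number.
    opinion = {
        "kind_to_strangers": 0.9,
        "reliable_at_work": 0.6,
        "shares_my_politics": -0.4,
        "good_taste_in_music": 0.7,
    }

    # Lossy compression: average everything into one judgement.
    compressed = sum(opinion.values()) / len(opinion)
    print(opinion)               # the full vector of nuances
    print(f"{compressed:+.2f}")  # one number; the nuance is gone
    ```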

    I’ve wondered if this is why we’ve seen so much social regression, or at least people being more public about it. There are so many things to care about, to know, to attend to, that the only way to cope is to apply a compression, and everyone’s worldview is their compression algorithm. What features does a person classify on?

    I feel like we just aren’t equipped to handle the global information age yet, and we need specific ways of being to handle it. It really is a brand new thing for our species.

    Do we need to see enough of the world to learn the nuances, then transition to tighter community focus? Do we need strong family ties early with lower outside influence, then melting pot? Are there times in our development when social bubbling is more ideal or more harmful than otherwise? I’m really curious.

    Anecdotally, I feel like I benefited a lot from tight-knit, largely anonymous online communities growing up: learning from groups of people from all over the world, of different ages and beliefs, engaging in shared hobbies, and learning about different ways of life. But eventually the neurons aren’t as flexible for breadth, and depth becomes the drive.


  • Any good options recommended for self-hosting something similarly functional that doesn’t take too much effort to get up, audit, and maintain? Discovery isn’t really important for me, so federation isn’t really necessary, just a cool extra. I’d love to host something, or contribute to hosting, for my gaming groups, for one or more classes at my school, or otherwise. Voice, chat, screen share, and camera would all be great if possible, but a range of options would be good. I’m still using Mumble for gaming…

    I haven’t tinkered much with Matrix, nor do I know much about Revolt, but before I look into it deeper I’m curious whether anyone in the community has experience hosting communication platforms for small, invitation-only groups.


  • Oh yeah, the 365 version is terrible. And most of the time, it could have been a Python Gradio interface or a similarly simple implementation, without having to fight so much to make basic things work. Most of what I want Excel to do, it just isn’t efficient enough for. With LETs and LAMBDAs it’s gotten quite powerful as a programming paradigm, where you can visualize and manipulate your data spatially in a kind of Logo / NetLogo style, which is really interesting; but the second you reference a few thousand cells a few times, even a solid CPU starts screaming.
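    For reference, the kind of minimal Gradio sketch I mean (the task and labels are just placeholders):

    ```python
    import gradio as gr

    # Hypothetical example: paste in a table, get summary statistics back,
    # instead of wrestling a spreadsheet into doing the same thing.
    def summarize(df):
        # Gradio hands the pasted table to us as a pandas DataFrame.
        return df.describe().reset_index()

    demo = gr.Interface(
        fn=summarize,
        inputs=gr.Dataframe(label="Paste your data"),
        outputs=gr.Dataframe(label="Summary statistics"),
    )

    demo.launch()  # serves a small local web UI
    ```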

    I use Excel for a decent number of tasks and can do some magic with it, but only ever really for work where it’s easier to share a weird Excel sheet than to pass around a Python script (which, given that I teach Python, isn’t actually as often as it is for most people).


  • But what about those of us in R1C1 mode using LAMBDAs to do recursive cell operations across data pulled from multiple sheets? Am I anywhere near the kinds of Eldritch horrors discussed? I’ve also written INDIRECT references based on sheet name to populate filters from web-scraped tables. I just don’t know how deep the pit goes at this point.


  • Yeah, I wasn’t a fan of the visual scripting, but I do consider composing nodes in the editor, connecting signals, modifying field values with sliders, having global variables in a separate editor, visual curve editors, file managers, etc. to be a form of visual scripting by a different name, and I do quite like that.

    I’ve been curious how this sort of editor would work for non-game code, like making a CLI in C, C++, Kotlin, etc., where you primarily interact with nodes and inspectors for data organization, and with scripts for behaviour implementation. I need to go back to Smalltalk to see some of its ideas for alternative code organization structures.


  • Maybe I’m an old fogey, but I usually hear more pushback against visual languages as being too finicky to actually create anything with and I usually advocate for a blending of them, like working in Godot and having nodes to organize behaviour but written scripts to implement it.

    I really appreciate the talks from Bret Victor, like Inventing on Principle (https://youtu.be/PUv66718DII), where he makes some great points about what sorts of things our tooling, in addition to the language, could do to offload some of the cognitive load while coding. I think it’s a great direction to be thinking, where it’s feasible anyways.

    Also, one reason folks new to programming struggle with text code, at least, is that they don’t have the patterns built up. When you’re experienced and look at a block of code, you usually don’t see each keyword; you see the concept. You see a list comprehension in Python and instantly go “Oh, it’s a filter,” or you see a nested loop and go “Oh, it’s doing a row/column traversal of a 2D matrix.” A newbie just sees symbols and keywords and pieces each one together individually.
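    For instance (trivial Python, but it shows both patterns):

    ```python
    numbers = [1, 2, 3, 4, 5, 6]
    matrix = [[1, 2, 3], [4, 5, 6]]

    # An experienced reader sees "a filter", not five separate tokens:
    evens = [n for n in numbers if n % 2 == 0]

    # ...and sees "a row/column traversal of a 2D matrix", not two loops:
    for row in matrix:
        for value in row:
            print(value)
    ```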


  • PixelProf@lemmy.ca to Microblog Memes@lemmy.world · “Raw dawing” · 1 year ago

    Yeah, my guess is that this post is implying the typical case: it wasn’t disrupting grades specifically, so it wasn’t diagnosed. You may have gotten those grades by staying up until 3am as a child, lying to get out of forgotten homework, getting injured more often, pushing through work by building up a healthy reserve of depression and anxiety, struggling socially because you couldn’t prioritize both school and social life or couldn’t connect with most other people because of your way of talking, being horribly forgetful, etc. But because the grades stayed high, nothing was wrong. It’s easy for people to see grades as the metric for mental wellness, which is wild.


  • Oh absolutely, I’m pretty sure I’m on the same page with this. I only pose that, for someone who believes they’ve found people who respect them, and particularly someone who has long felt their voice didn’t matter, it is counterproductive to approach them and their group with outward hostility.

    Telling them that the people who took them in and listened to them are vile, abusive, disgusting people, exactly the problem everyone already accuses them of being, just reinforces their views.

    Consider the comment originally replied to (paraphrasing, because mobile is hard): “those loudest about being victimized are the most eager to take their pound of flesh.” This can easily sound like:

    1. (Man) I’ve been victimized, and nobody lets me voice this except this gang/cult/militia. The cult says I should be allowed to “get support,” and that they know the way (it’s bad).
    2. (Outsider) Claiming to be a victim usually means you are a terrible person.
    3. (Man) So according to outsiders, if I seek help, I’m a bad person. According to my cult (etc.), if I tell them, they will offer a form of support. I can stay with these people and get something like support, or I can leave them, be ostracized, and have any attempt to voice my feelings get me labeled someone eager to take a pound of flesh.

    They need to be shown that those on the outside understand them and are better people than those who took them in. They are with people whose form of empathy and respect is deeply distorted and toxic, but it’s the only model of that experience they know.

    Your comment, on my read, would leave anyone in that position feeling justified when their gang tells them that everyone on the outside is out to get them. If they already think everyone else is a predator, what is attacking their friends, their family, and their opinions going to do?

    They will only leave when they know they will arrive somewhere with the respect they craved without those toxic feelings they repressed during their time with a hateful group.

    So I guess it’s less about the content of the comment and more about the way it represented the ideas, the timing, and the perceived intention.


  • I agree that much of the problem is men against men and the patriarchy (men who do not want to uphold patriarchal values can often be ostracized and demonized by those who do), but I believe OP was specifically noting that the men who get abused and ostracized then cannot speak out or seek help, because many people will simply snap back that they are part of the problem and that resources need to go elsewhere. They cannot endure the abuse, their own cohort becomes abusive, and the only way to avoid abuse from all sides (in their view) becomes joining the “social excrement” they wanted to escape in the first place.

    Angry screams tend to mask sad and lonely tears. Hatred does not end hatred; hatred ends through non-hate alone. Non-hate is not inaction, though. If we do not look at them, and ourselves, with empathy and kindness and understanding and patience, they will continue living in a world devoid of, and therefore ignorant of, empathy and kindness and understanding and patience.


  • I think centralization played a big role in this, at least for software. When messaging meant IRC, AIM, Yahoo, MSN, Xfire, Ventrilo, TeamSpeak, or any number of PHP forums, you had to be able to pick up new software quickly and conceptualize the thing being done separately from the application it was accomplished with. When everything had to be installed from different places in different ways, you came to conceptualize the file system and what an executable is, to an extent. When every game needed a bit of debugging to get working, and a bit of savvy to know when certain computer parts were incompatible, you needed some knowledge just to do the thing you wanted to do.

    That said, fewer people did it. I was in high school when Facebook took off, and the number of people who went from never online to perpetually online skyrocketed.

    I teach computer science, so I know it isn’t wholly generational, but I’ve watched the decline in the basics over the past decade. High school students were raised on Chromebooks, tablets, and phones, and a homogeneous software scene. Concepts like files, installations, computer components, local storage, compression, settings, keyboard proficiency, toolbars, and context menus are all barriers for incoming students.

    The big difference, I think, is that way more people (nearly everyone) now have some technical proficiency, whereas before it was a popular enough hobby but most people were completely inept. Still, most students nowadays are not proficient past a cursory level. That said, the ones who are technically inclined are extremely technically inclined compared to my era, and in larger numbers, at least.

    Higher minimum and maximum thresholds, but maybe lower on average.


  • Yeah, that’s definitely the way to see it, and seen that way, I think it’s great. I think it might overload the term “dark patterns” a bit too much, and (as a game design academic) I would have liked to see a different name used, but otherwise I absolutely agree with and appreciate the approach.

    Edit to include why I have that hesitation, with an example: I couldn’t link this in a class I’m teaching without loads of caveats, because suddenly 80% of the curriculum would be seen as abusive when it’s really just experience design with grey areas to explain (which we do, so this is quite helpful for that particular purpose), and I would need to caveat that when they see the term out in the wild, it will be used differently.


  • All I’m commenting on, as a game design researcher and professor, is that it’s an established term in a discipline, and it means something else to those actually within the discipline. These are still patterns, and they can absolutely be harmful patterns, but the terminology is being overloaded, and there is some interesting nuance within it.

    Also, just to comment on the last quip there: yes, those I’ve spoken to are okay with those patterns, because (being actively involved in the industry) they know more than most people about how to educate, supervise, and ensure that playing games with these patterns doesn’t turn into harmful behaviour. They also call the patterns out for what they are: often, very bad design.

    I guess that’s really the line they drew: these patterns are more grey than the examples they presented. Most are good sometimes and terrible other times, depending on how they’re used. The term “dark patterns,” as used professionally, refers to patterns that are always bad, always deceptive, always harmful. I do like having that line, even if it means the dark side is a much smaller subset of the greater space, because then you can easily say: “If this uses a single dark pattern, it’s out. If it uses a lot of ‘grey’ patterns, be cautious. If it’s nothing but grey patterns, it’s purely abusive trash.”


  • Interesting. I was chatting with a lot of big-name AAA designers and indie designers discussing dark patterns, and they have a very different opinion on what constitutes one. To them, largely, it needs to be more technical deception: a fake “X” button, an ad immediately popping up over where a button just was to trick you into clicking it, or pricing bait-and-switched before the user notices.

    I tried to raise these kinds of patterns as problematic, and it was a mixed bag. The general vibe from them was that they’d only call it a dark pattern if it deceives the player to get more money than they were prepared to spend (or similar for ads). If the player knows what they’re getting into, and they are presented with a choice to stop or continue, it’s on them.

    And I’ll admit, while I don’t go that far (and there were designers in both camps), I can at least understand the view that all game design is manipulation, in the same way that teaching and storytelling are manipulation, and that drawing the lines can be very hard. Your job is to convince the player that they are having fun and want to keep playing. Resources in a game have no real value; they are valued only for their scarcity and utility, which the designer intentionally assigns to convince the player they are worth more or less.

    Curiously, the examples listed in the OP were exactly the patterns I see designers discuss, but they don’t seem to be the patterns on the website (like “illusion of control” and artificial scarcity, which is, like, game design’s whole thing).

    Either way, it’s nice to have this as a resource, because honestly a lot of these elements are what I’d put in the “bad / abusive design” category rather than purely dark patterns; still great to highlight, but I can agree that we should probably be careful about blanket-calling them dark patterns. Examples: under illusion of control, the site mentions separating players into leaderboard shards so you can be in the top 500 of a shard rather than at 200,000 in a world ranking, or claw machines choosing whether you successfully grab an item rather than relying on skill. How does this compare to Uncharted not letting enemies successfully shoot you in the first few seconds of an action sequence, to give you time to ground yourself, or Resident Evil spawning different loot and enemies based on how well or poorly you play?

    I’d say the line is whether it exists to extract money from you in the short term, but it’s more grey than a non-designer might read into lists like these.