• underscores@lemmy.zip · 7 hours ago

    Worst features Valve has

    1. VAC (uses AI to achieve nothing)

    2. This

    thank you for reading

  • CosmoNova@lemmy.world · 14 hours ago

    You know this is totally on brand for Valve if you've ever had the displeasure of trying to reach their support.

  • sp3ctr4l@lemmy.dbzer0.com · edited · 20 hours ago

    Valve’s customer service responses have always been mostly a canned series of bot messages.

    Their in-house support has always been 99% automated.

    It's very obvious if you've ever interacted with them at more than an occasional, superficial level.

    You have to be quite persistent to get a message from an actual human being.

    Yep, the automated messages often have an ostensibly human name attached to them.

    So do all kinds of other bots, since way before ChatGPT and LLMs took off.

    What, did you think a human being actually read every single complaint report about a hacker or cheater in a video game with any kind of massively used anti-cheat system?

    No! You have bots and analytics systems screen that shit, just the same as our resumes on Indeed, or our activity and profiles on dating apps, have been analyzed and evaluated by bots, again since way before LLMs got as prevalent as they are today.

    Then you filter. Humans only see the odd ones that defy categorization, basically, or trigger a certain set of flags that are designated as ‘probably needs an actual human to handle this one’.


    This has been a tech industry standard for almost two decades.

    Valve is just now overhauling that system to use an LLM, because those are actually better than a very complex series of chained regex searches.
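    The "chained regex searches" approach described above can be sketched very roughly. This is a toy illustration, not Valve's actual system; every pattern, label, and routing rule here is invented:

```python
import re

# Hypothetical keyword patterns a rules-based report screener might chain
# together; a real system would use far more signals than report text alone.
AUTO_CLOSE = [re.compile(p, re.I) for p in (r"\blagg?ing\b", r"\bskill issue\b")]
ESCALATE = [re.compile(p, re.I) for p in (r"\baim ?bot\b", r"\bwall ?hack\b", r"\bspeed ?hack\b")]

def screen_report(text: str) -> str:
    """Return 'close', 'escalate', or 'human' for one cheat report."""
    if any(p.search(text) for p in AUTO_CLOSE):
        return "close"       # noise: drop without human review
    if any(p.search(text) for p in ESCALATE):
        return "escalate"    # feed into automated match analysis
    return "human"           # defies categorization: route to a person
```

    The point of swapping an LLM in for this is exactly the last branch: fewer reports "defy categorization," so fewer need a human.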

    The alternative would be to do what Meta or Google or Amazon do: Hire armies of tens to hundreds of thousands of offshore contractors and give them all PTSD for pitiful wages, manually evaluating everything.

    Apparently this is not widely known by people who've never worked at an enterprise-level tech company?


    Using LLMs to evaluate and assist a massive anti-cheat system is actually a great way to be able to run an anti-cheat system… without hooking directly into your kernel.

    These things are very good at pattern recognition, and if you tune them to specifically only work with inputs from the server from gaming sessions, you can significantly improve server-side/backend detection of players/clients doing things that are highly suspicious or outright impossible given the actual rules of the game.

  • Quetzalcutlass@lemmy.world · 23 hours ago

    I know Valve wants to remain a small-ish company, but automating in-house support has literally never improved things for the customer. It’s even worse if it’s tied into their anti-cheat - a false positive can lock you and your entire family out of multiplayer, and good luck getting a human to overturn it after the former support staff is moved to other teams.

    I’d say it’s weird they didn’t focus on using this to help fix their nearly nonexistent community moderation, but I’ve been told their hands-off approach is deliberate due to a libertarian bent among the higher ups.

    • Godort@lemmy.ca · 23 hours ago

      One thing Valve is known for is testing things. They typically make sure technology works before rolling it out everywhere.

      I’m willing to bet that they have either solved most of the problems a tool like this has by massively limiting its scope, or it never actually gets past a beta test phase.

      • warmaster@lemmy.world · 22 hours ago

        This. They have explicitly said that they are testing AI applications throughout the company and that it is not a concerted effort. It’s a few devs wanting to try it to see if it actually adds real value or not. That’s it.

        • lordbritishbusiness@lemmy.world · 12 hours ago

          It’s the best way: if it’s useful it’ll be used, and if not then you’re not wasting time or money. Suits Valve’s methodology.

          • sp3ctr4l@lemmy.dbzer0.com · edited · 19 hours ago

            Well you got me there.

            https://github.com/SteamTracking/SteamTracking/tree/master/ProtobufsWebui

            There’s the directory with the file from the screenshots, service_steamgpt.proto, updated 4 days ago along with a number of others; it seems like it and a whole batch of related files did not exist before then.

            I am uncertain if this … basically scraping operation is tracking the main Steam client or the Beta or what.

            There is not a very helpful description of what exactly is being pulled here, in the readme/project description.

            EDIT:

            Perhaps if you are more familiar with Protobufs, you can take a gander at these and come away with a guesstimate as to what these are doing?

            • vulpivia@lemmy.dbzer0.com · 18 hours ago

              My guess is that this is part of some kind of machine learning pipeline, where users label edge cases to help train the model. Since it operates on account data, match data, and logs (see CSteamGPT_GetTask_Response), an anti-cheat use case would make sense, but it’s hard to say for sure.

              This looks like data exchanged between the Steam client and server, and doesn’t contain any logic on its own.
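              For what it's worth, a human-in-the-loop labeling pipeline like the one guessed at above usually boils down to: fetch a task, show it to a reviewer, record whether the reviewer agrees with the model. Purely hypothetical sketch; the real fields and semantics of CSteamGPT_GetTask_Response are unknown:

```python
from dataclasses import dataclass

# Hypothetical shape of one labeling task; field names are invented,
# loosely inspired by the proto message names, not taken from it.
@dataclass
class Task:
    task_id: str
    match_data: dict      # server-recorded events for one session
    model_verdict: str    # e.g. "suspected_cheat"

def label_task(task: Task, reviewer_verdict: str) -> dict:
    """Package a human label as training feedback for the model."""
    return {
        "task_id": task.task_id,
        "model_verdict": task.model_verdict,
        "human_label": reviewer_verdict,
        "agreement": task.model_verdict == reviewer_verdict,
    }
```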

    • False@lemmy.world · 21 hours ago

      This is an incorrect assertion. Making common actions self service without needing a human is almost always a customer win. For example automatic refunds on request if your request meets the correct criteria, instead of needing a human to look at it and make an arbitrary decision. Or having a knowledge base of common issues that can help people fix problems on their own without needing to talk to a person. Both are much faster and more repeatable.

      • Agent_Karyo@piefed.world · 17 hours ago

        But this is not viable for every use case. If there is a major issue with my bank account, I want to speak to a person, period.

        Having automated workflows for specific actions is of course a good thing.

        Documentation is also good, but it often doesn’t account for edge cases or your unique situation. Not to mention, the majority of the public is not going to have the desire to deal with documentation.

    • ampersandrew@lemmy.world · 22 hours ago

      They improved their support ticket throughput by orders of magnitude by automating a lot of it already. There are lots of versions of automation, too, like collecting information about the user’s problem before you even get to a human.

      • Quetzalcutlass@lemmy.world · 22 hours ago

        Right, but there’s a difference between automating a refund if they can detect the purchase happened in the last two weeks and has less than two hours of playtime, versus complex support problems being handled by an LLM that can be misled or hallucinate.
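        That kind of deterministic refund rule is trivially simple, which is the point. A minimal sketch; the 14-day/2-hour thresholds mirror Steam's published refund policy, but the function and field names are invented:

```python
from datetime import datetime, timedelta

def refund_eligible(purchased_at: datetime, playtime_minutes: int,
                    now: datetime) -> bool:
    """Deterministic refund check: bought within 14 days, under 2 hours played."""
    return (now - purchased_at) <= timedelta(days=14) and playtime_minutes < 120
```

        There is nothing here for a model to be misled about, which is exactly why it works well automated.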

        I suppose it’s fine if it’s limited to giving advice on solving the problem and has to escalate to a human if any server side action is required, but it being tied to anti-cheat has me worried that’s not the case.

    • Squizzy@lemmy.world · 17 hours ago

      Their support staff are always being commended; this seems odd to me.

      At the same time, they allow Russian war crime simulators.

    • ColeSloth@discuss.tchncs.de · 16 hours ago

      The nonexistent community moderation is by design and on purpose. Valve wants it that way, and refuses to be any sort of gatekeeper there.

    • cybervseas@lemmy.world · 23 hours ago

      I think it could have been an interesting use case to chat with a Steam bot to get game recommendations.

      • sp3ctr4l@lemmy.dbzer0.com · edited · 20 hours ago

        This is not meant to be a chatbot.

        It is meant to evaluate gaming sessions of CS2 (and/or potentially any VAC enabled game, maybe).

        It’s an experimental prototype for improving VAC’s server-side, backend analysis capabilities, to better detect cheaters and hackers.

        You don’t need kernel-level access into everyone’s PCs.

        You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.

        LLMs are … the entire thing that they do is handle massive inputs of data, and then evaluate that data.

        The part of an LLM that generates a response, in text form, to that data, is a whole other thing.

        They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.

        Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.

        Then you have another, differently tuned LLM ingest the data the first LLM produces, and turn it into something else.

        You see the ModelEvaluation and then MetaModelEvaluation?

        That looks like what they’re doing to me.

        Detailed Server Logs -> Model Evaluation -> MetaModelEvaluation.

        If you’ve ever run a dedicated multiplayer server and had to deal with hackers… you’re gonna be looking through server logs to sniff out nonsense.

        Server-side cheat detection, very oversimplified, is having automatic systems do that.
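        Very roughly, that automated log-sniffing amounts to checking recorded events against the game's physical limits. A toy sketch of the idea; the event format, field names, and thresholds are all invented for illustration:

```python
# Toy server-side anomaly check: flag events the game rules make impossible.
MAX_SPEED = 250.0     # invented: max units/sec the movement code allows
MIN_SHOT_GAP = 0.09   # invented: seconds between shots at max fire rate

def flag_session(events: list[dict]) -> list[str]:
    """Scan one session's server-recorded events and return suspicion flags."""
    flags = []
    last_shot = None
    for e in events:
        if e["type"] == "move" and e["speed"] > MAX_SPEED:
            flags.append(f"{e['t']:.2f}s: impossible speed {e['speed']:.0f}")
        elif e["type"] == "shot":
            if last_shot is not None and e["t"] - last_shot < MIN_SHOT_GAP:
                flags.append(f"{e['t']:.2f}s: fire rate above weapon cap")
            last_shot = e["t"]
    return flags
```

        A model-based evaluator would replace the hand-written thresholds with learned patterns, but the input (server logs) and output (flagged timestamps) stay the same shape.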

          • sp3ctr4l@lemmy.dbzer0.com · 14 hours ago

            I can still hardly believe that the tech industry at large just decided to broadly roll out LLM integration into essentially every element of their businesses, having just no idea what they actually do.

            Like 2 years ago now, I was figuratively pulling my hair out, reading the discussion panel schedule for Microsoft led conferences on LLMs and cybersecurity.

            Literally every topic was a different kind of way that smashing an LLM into a complex business system… increases potential failure points, broadens attack surfaces… because networked LLMs literally are security vulnerabilities.

            Not a single topic about how to use LLMs defensively, how to use them to turbocharge malware signature recognition, nothing like that.

            All just a bunch of ‘make sure you don’t do this!’ warnings, and then everyone did them anyway.

      • Quetzalcutlass@lemmy.world · edited · 22 hours ago

        Their current recommendation engine is already a marvel and the only one I’ve ever come across that actually directs me to niche stuff I might be interested in.

        • With the amount of information they collect on their customers, it had better be damn good. Honestly, the only reason it’s not a huge privacy problem is that they zealously guard that data to protect their near monopoly on PC gaming.

          Gabe has been pandering to gamers and mostly giving us what we want, but when he dies, we had better hope the next guy in charge isn’t some corporate suit who only cares about maximizing profits in every way he can, or the enshittification of Steam is going to really fucking hurt. Imagine if Valve were run like Microsoft. For example, the next guy might cut a deal with Microsoft to stop supporting Proton.

  • littleomid@feddit.org · 22 hours ago

    All this talk of Valve being a “good” monopoly is such horse shit. Valve WILL enshittify; maybe not today, maybe not tomorrow, but it’s coming, and people are acting like it won’t happen.

    • givesomefucks@lemmy.world · 22 hours ago

      At the same time, people drastically overestimate how big of a deal it would be…

      Someone could always just stop buying games on there.

      And if it did “go away” and people lost their games, they think of how much they’ve spent on games over decades, not how much it would cost to replace what they still play.

      Out of all the real life horrible shit going on, very few people have the longevity of Valve as a priority.

      It’s not 100% safe but at this point it’s more likely to be here in 20 years than the country it’s based in.

      But besides all that, I don’t think you know what the word “monopoly” means if you think Steam is one.