

Which is a container, not an encoding.


Oh yes, the ISP-provided routers and gateways that most people have, which may not actually support OpenVPN or WireGuard.
Those ones?
Also, putting a VPN in someone else’s house so that all their network traffic goes through your gateway is pretty damn extreme.


Nor will the VPN work on things like their TV, Roku, or game console. You know, the things that people typically sit down and watch media on…


Which doesn’t work for the vast majority of devices that would be used to watch said media.
TVs, game consoles, Rokus, and so forth typically don’t support VPN clients.
The Jellyfin clients for these devices also typically don’t support alternative authentication methods, which would let you put Jellyfin behind a proxy and expose the proxy to the internet, gating all access to the Jellyfin APIs behind a primary authentication layer and thus mitigating effectively all of the security vulnerabilities that are currently open.
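For what it’s worth, the gating idea itself is simple. A minimal sketch, assuming nginx as the proxy and HTTP Basic Auth as the primary layer (the hostname and file paths are placeholders):

```nginx
# Illustrative only: reverse proxy in front of Jellyfin, so nothing
# reaches the Jellyfin APIs without first passing the proxy's own auth.
server {
    listen 443 ssl;
    server_name media.example.com;              # placeholder hostname

    auth_basic "Media";                         # primary auth layer
    auth_basic_user_file /etc/nginx/htpasswd;   # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:8096;       # Jellyfin's default port
        proxy_set_header Host $host;
    }
}
```

And that’s exactly where the TV/console clients fall down: they have nowhere to enter the proxy’s credentials, so this setup locks them out.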


deleted by creator


That hasn’t stopped agencies from enforcing them, states from following them, or law enforcement from adhering to them.
He is given the power that the other branches of government and the states give him, and they all give him everything.
Lots of the executive orders have no legal standing; he has no authority over what they purport to change. Yet here we are.
Yeah, that method addresses that.
Often the catatonic ADHD “can’t move” mode is related to resistance to starting a task.
Giving yourself an easy out that you know you can use after 3 minutes is an easy way to trick your brain into deciding that starting the task isn’t such a big deal.
Therefore allowing you to move.
If you can’t move because of depression, then that’s different.


That’s an engineering culture problem. Not a PR problem.


Like I said, plenty of products call this tab completion, and it’s context-aware completion, or predictive completion. I used an overloaded term, but I would have thought that after my explanation you would have understood what I meant by this point. Your continued explanation of classic tab completion shows otherwise.
and way predates whatever VS Code may have been doing
Also I said Visual Studio, not VS Code. 🤦
Secondly, even if you want to move the goalposts by talking about some specific implementation of ML-based indexing, ML is not LLM.
I very specifically said that it was ML-based. The word “was” indicates past tense. 🤦
“Modern versions of it are almost entirely LLM based.”
I don’t know how you managed to completely skip reading that last line?
Here we are though arguing over reading comprehension issues. Which honestly is pretty classic for the internet.


I mean, fundamentally, yeah.
But we live in a corporate-controlled, corrupt world, and none of these larger companies can be trusted with this process.
Some smaller communities and platforms DO get this right sometimes, as they build in-house processes that respect privacy. But governments worldwide are making this impossible through increasingly strict compliance requirements that actually increase data privacy risks and funnel these needs to third-party services who just lie about what they do with the data.
===========
I’m not kidding when I say this is a REAL BIG PROBLEM.
Bot-based traffic and astroturfing will supplement and replace human communication on platforms like Lemmy, driving the narrative and how we engage according to the whims of a few rich people. Bots are relatively cheap and easy to deploy at scale across many platforms.
There will be no open corner of the internet safe from manipulation and forced division. More people will be forced into walled gardens run by corps that implement human verification, as they are the only ones with the resources to do something (while also being the source of the problem; see how that works?).
How do you carve out spaces that are protected from that? Well, you need to determine who’s a bot and who’s an actual person.
But we can’t do that, so the alternative is that we are run over by bots and astroturfing till we’re at each other’s throats like good culture-war puppets.
The future is bleak…


It’s largely considered ineffective these days. Detecting elements that don’t affect layout is trivial, as is detecting elements that are occluded, transparent, etc.
CAPTCHAs are one of the best options. But even then, LLM users bypass those relatively easily, and LLM users are one of the biggest risk areas for astroturfing.
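To illustrate why hidden-field honeypots are trivial to defeat: a bot driving a headless browser can read each field’s computed style and skip anything that never renders. This is a sketch; the heuristic and names are illustrative, not from any particular bot.

```typescript
// Minimal model of a form field's computed style.
interface FieldStyle {
  display: string;
  visibility: string;
  opacity: number;
  width: number;   // rendered size in px
  height: number;
}

// A field that is in the DOM but can never be seen by a human
// is almost certainly a honeypot, so a bot just skips it.
function looksLikeHoneypot(style: FieldStyle): boolean {
  return (
    style.display === "none" ||
    style.visibility === "hidden" ||
    style.opacity === 0 ||
    style.width === 0 ||
    style.height === 0
  );
}

// Example form: "website" is a hidden trap field.
const fields: Record<string, FieldStyle> = {
  email:   { display: "block", visibility: "visible", opacity: 1, width: 200, height: 30 },
  website: { display: "none",  visibility: "visible", opacity: 1, width: 200, height: 30 },
};

// The bot only fills in the fields a human could actually see.
const visibleFields = Object.keys(fields).filter(
  (name) => !looksLikeHoneypot(fields[name])
);
console.log(visibleFields); // → ["email"]
```

In a real browser the bot would feed `getComputedStyle(el)` into the same check, which is why styling a trap field offscreen buys you almost nothing.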


Apparently meaning from usage cannot be inferred here? Or are you just being intentionally obtuse?
A not-insignificant number of products literally just call it tab completion these days, because tab completion in many products and IDEs is by default predictive completion, which is ML-based. And these days, LLM-based.


I used the wrong term, but I guess y’all were unable to infer meaning from usage?
Auto tab or w/e it’s called (some products literally call it tab completion). Visual Studio was doing it around 2018 IIRC; it was ML-based, always has been. Modern versions of it are almost entirely LLM-based.


Plenty of places already have open multi-gender bathrooms with individual stalls with real doors and a shared hand-washing area.


Literally every game that’s made today is using AI as part of the development process.
Damn near every dev has tab completion on in their IDE, which is AI-based.
==========
Edit:
I used a term, but I guess y’all were unable to infer meaning from usage?
Auto tab or w/e it’s called (some products literally call it tab completion). Visual Studio was doing it around 2018 IIRC; it’s ML-based, always has been. Modern versions of it are almost entirely LLM-based.


Ironically, I am one of those people building GitHub Actions CI pipelines that automate large multi-service, multi-team deployments across different cloud vendors and regions.
It’s all a bit of a pain in the ass, and Actions don’t really provide many of the nice controls and safety guarantees we would want.
However, GitHub Actions is relatively straightforward to implement and relatively easy to operate.
It’s just very easy to foot-gun yourself.
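For instance, two of the guardrails you end up bolting on yourself, serialized deploys and approval gates, look roughly like this (the workflow name, environment name, and deploy script are placeholders):

```yaml
name: deploy
on:
  push:
    branches: [main]

# Without a concurrency group, two quick merges can deploy at the
# same time and race each other: a classic Actions foot-gun.
concurrency:
  group: deploy-prod
  cancel-in-progress: false

jobs:
  deploy:
    runs-on: ubuntu-latest
    # "environment" gates the job on reviewers configured in the repo's
    # settings; it's one of the few built-in approval mechanisms.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh  # placeholder deploy step
```

Everything past that (staged rollouts, automatic rollback, cross-region ordering) you largely have to build by hand.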


Training is constant. None of these models by any of these providers are static. You’ll notice that they are releasing new models and new model versions regularly.
This means that training is happening constantly. It never stops. There’s always new shit being trained.


You’re talking to non-tech nerds about something that usually only tech nerds are familiar with.
Just like on Reddit you’re going to get downvoted because people don’t understand. Lemmy is effectively the same in that regard.
Just having data is cheap. Actually serving that data up in a meaningful way is expensive as fuck.
Or because you’re not using a Chromium-based browser.
Just some classic anti-competitive practices.