

in most cases language should minimize unneeded complexity imo


This is counter-productive and can get you in big trouble, IMO. I don’t even get what they’re protesting.
It reads like a policy/implementation fault. The workers have been told to use AI, but haven’t been given clear guidance, or are presented with a bad model/interface, so they just hop on Google Bard or something familiar that works better.
It’s still using AI, so basically the same thing.


The categories they used for “sabotage” (entering proprietary information into a different AI, using unapproved chatbots, and using low-quality AI responses as-is) seem like they were put together so the failure of the AI rollout can be blamed on employee sabotage, rather than on employers wedging AI onto a bad use case or not rolling it out properly.
The first two just seem like the company having issues with people going straight to ChatGPT and using its output as-is, and the third seems to be more about people not really caring and using the AI output as required.
None of that comes across as outright sabotage like the organisation or article seem to imply. All three seem like reasonable end-points of telling people to use AI and giving them metrics they need to meet, or a not-great interface, so they just go off and use a different AI tool, because it’s all AI and basically the same thing, right?


It is weird that they use it as a national identification number, when they are ostensibly virulently against the concept, and it was never designed to be used in that manner to begin with.
The initial learning curve is very rough, since people might be used to the shortcuts from a newer editor like Notepad++, which don’t work in vim.
nano at least says which key combination you need to exit (Ctrl+X), for example.
It is easier past the initial hump, though.
You can also use :x to write and quit, from memory.
They’re also fairly versatile. y-i takes a text object after it: brackets, quotes, the letter p for a paragraph, you name it. If vim recognises it as a text object, it’ll generally work.
Which can be a bit faster than some graphical editors at times, where you might have to find and select the contents by hand. That can be a bother if there’s a lot.
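A few of those, for reference (standard vim, nothing plugin-specific; the i variants are vim’s “inner” text objects):

```
:x      " write the file if it changed, then quit (like :wq)
yi(     " yank (copy) everything inside the nearest ( ) pair
yi"     " yank everything inside double quotes
yip     " yank the current paragraph (inner paragraph)
```

The same i objects work with other operators too, e.g. di( deletes inside the parentheses instead of copying.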


It’s like a better iPad in a way, since you could run full-scale desktop programs on it, and use it like a desktop.
I wouldn’t be too surprised if things like Surfaces were one of the reasons why Apple seems to be making a push to make the iPad functional as a computer in its own right.


It’s also only an 8-billion-parameter model. That’s pretty tiny, even if they use it heaps.
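As a rough back-of-envelope (assuming 16-bit weights; purely illustrative):

```python
params = 8e9         # 8 billion parameters
bytes_per_param = 2  # fp16/bf16: two bytes per weight
weights_gib = params * bytes_per_param / 2**30
print(f"~{weights_gib:.0f} GiB of weights")  # ~15 GiB: fits on a single high-end GPU
```

Flagship models run to hundreds of billions of parameters, so 8B really is on the small side.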


Nobody would pay to use the private system if they could get their needs met for free in the public system.
They might, if they thought there was an advantage to it. Like being seen more quickly, or getting a discount for something else.


If the entire world had access to free healthcare, chances are research and development would grind to a halt unless they also funded research and development. Taxpayers would need to be willing to pay a company hundreds of millions of dollars if they discovered a useful product.
I don’t see why it would. A company would still invest in research if they thought they had a chance to sell it to the healthcare system, for example. It wouldn’t be the first nor last time something like that happened, and the latter case isn’t too different from how it works already.
Consider insulin, for example. Research into it, and into drugs for treating diabetes, doesn’t happen exclusively in the US.


More than a decade on, and it’s still one of the best kindles ever made, in my opinion.
It had physical buttons instead of a fiddly touch-screen; you could play music, have it read to you, and even go on the internet.
Plus it’s old enough that it supports a bunch of formats and registers as a mass-storage device when plugged into a computer, so anything can use it.


There have been a few over time. It originally started out as a project to test some new Reddit features with fake users, using Markov-chain/GPT-2 bots, and then it became funny enough to let users watch.
Them calling themselves bots, or coincidentally being unable to tell cats and dogs apart, was also quite funny back in the day. (They didn’t do any actual image recognition; they were just generating links and titles.)
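The Markov side of those bots is simple enough to sketch in a few lines of Python (a toy illustration, not the actual bot code): record which word follows which in real titles, then random-walk the table.

```python
import random
from collections import defaultdict

def build_chain(titles):
    """Map each word to every word observed to follow it."""
    chain = defaultdict(list)
    for title in titles:
        words = ["<START>"] + title.split() + ["<END>"]
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
    return chain

def generate_title(chain, max_words=15):
    """Walk the chain from <START>, picking successors at random."""
    word, out = "<START>", []
    while len(out) < max_words:
        word = random.choice(chain[word])
        if word == "<END>":
            break
        out.append(word)
    return " ".join(out)

titles = [
    "My cat sitting in a box",
    "My dog thinks he is a cat",
    "A box full of puppies",
]
print(generate_title(build_chain(titles)))
# e.g. "My dog thinks he is a cat sitting in a box"
```

Because it only ever stitches together fragments of real titles, the output lands in that half-coherent uncanny valley that made the subreddit funny.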


Human drivers, if they could get LIDAR with their car, would probably also use it.
Why not aim for better than what humans can do?


According to the article linked from this one, it’s not that the operating system itself is more demanding, but more that the DE and browsers/websites are more demanding now.
It feels like Canonical basically needs to do the games thing of having a set of minimum specs for Ubuntu to run at all, and a set of recommended specs for Ubuntu to run well. Canonical basically bumped up the latter, but it’s being taken as the former.


If memory serves, he also claimed to have been driving when he teleported into a ditch 50 miles away.
Which just comes across like he was driving when he really shouldn’t have been (drunk/tired and emotional), and had fallen asleep whilst on the road.


It’s odd, since they used to have a rather nice HTML web interface specifically for low-performance devices, but it’s since gone away.


This doesn’t seem so bad, though. 2 GB more in about 10 years is a pretty reasonable increase.
It’s not like they doubled it.


Specifically using publicly available information that they could find on search engines.
They didn’t track them down with a PI or anything quite like that.
Millennials are ruining the device industry smh