• 3 Posts
  • 260 Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • For me, Zip was mainly two things:

    • I put all the things on it that I wanted to use on a shared computer. At the time that included downloads from the internet, which I didn’t have direct access to from my own computer.
    • A hot fix for running out of hard disk space. Zip capacities weren’t far off from the hard disk sizes of the early-to-mid ’90s; 100 MB of extra room was big, and attaching another hard disk wasn’t necessarily an option.

    I’m not sure about them “dominating” though. Virtually none of my friends had one.




  • GPT Researcher is a research agent, just one of many AI tools.

    I think the idea is that these tools let users configure MCP servers, and because MCP doesn’t necessarily use the network but can also just mean directly spawning a process, users can get the tool to execute arbitrary commands (possibly circumventing some kind of protection).

    This is all fine if you’re doing this yourself on your computer, but it’s not if you’re hosting one of these tools for others who you didn’t expect to be able to run commands on your server, or if the tool can be made to do this by hostile input (e.g. a web page the tool is reading while doing a task).
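    As a sketch of why a configurable MCP server implies command execution: for a stdio server, the host app essentially just spawns whatever command the config names. The config shape and names below are assumptions for illustration, not any particular tool’s format.

```python
import subprocess

# Hypothetical user-supplied stdio MCP server entry (shape loosely mirrors
# common MCP client configs; all names here are made up for illustration).
server_cfg = {
    "command": "sh",
    "args": ["-c", "echo pwned"],  # could be any shell command at all
}

# What a hosting app effectively does for a stdio MCP server: spawn the
# configured command and talk to it over stdin/stdout. The spawn alone is
# enough to execute whatever the config author put there — no network needed.
proc = subprocess.run(
    [server_cfg["command"], *server_cfg["args"]],
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # the "server" ran our arbitrary command
```

    If the host runs on someone else’s server, the person who can edit that config can run commands there.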


  • I don’t know of a C compiler that would break the addition-based version either that I could tell him about, but maybe it’s worth pointing out that there are languages with overflow-checked arithmetic, e.g. Swift. It’s still not an example where the addition doesn’t work, though, because Swift has separate wrapping operators (&+ and &-) for when overflow is intended.

    Rust could be a candidate too: it has a flag for runtime overflow checks (and debug builds enable them by default). Ada would be the strongest candidate, since the language requires the checks, but most compilers have an option to disable them anyway, and some older ones even disabled them by default.

    It’s actually pretty interesting to think about how compilers used to omit these checks for performance, while with modern compilers the checked version is often free and can sometimes even be faster, because the check can help with further optimizations down the line.
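    Since Python integers don’t overflow, the difference between wrapping and checked addition can only be illustrated by simulating 32-bit two’s-complement arithmetic; the helper names below are made up for this sketch.

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrapping_add32(a, b):
    """C-style signed 32-bit addition: silently wraps around on overflow."""
    s = (a + b) & 0xFFFFFFFF
    return s - 2**32 if s > INT32_MAX else s

def checked_add32(a, b):
    """Overflow-checked addition, in the spirit of Swift's default `+`
    or Rust built with overflow checks enabled: trap instead of wrapping."""
    s = a + b
    if not (INT32_MIN <= s <= INT32_MAX):
        raise OverflowError(f"{a} + {b} overflows 32 bits")
    return s

print(wrapping_add32(2_000_000_000, 2_000_000_000))  # -294967296, silent nonsense
try:
    checked_add32(2_000_000_000, 2_000_000_000)
except OverflowError as e:
    print("trapped:", e)
```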


  • I know it’s 15 years old, but I’m not buying that the trie as such is what improves the performance.

    • traversing a trie built from a sorted list yields the nodes in the original sorted order, so you’re not optimizing the order
    • you do have additional pre-computed information in the trie, but it’s basically just the common prefix of the current and previous word, which can be trivially obtained without the trie
    • subtree pruning basically just translates to skipping until the prefix changes at a shallower level

    Basically, yes, persistently storing the data in a trie will improve performance (scanning all words is still slow, though not that slow, since you skip most of them early and can use binary search, while the trie approach does a lot of Python dict lookups). But the main performance gain comes from the trie forcing him to use properties that the sorted input also has and that he just wasn’t using in his naive implementation: shared prefixes and subtree pruning.
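    The point that a sorted list already carries the trie’s information can be sketched directly; the helper names here are made up for illustration.

```python
# The shared prefix with the previous word — the precomputed information a
# trie node gives you — falls out of comparing adjacent entries of the sorted
# list, and "prune this subtree" is just "skip while the prefix is unchanged".
words = sorted(["car", "card", "care", "cat", "dog", "dot"])

def common_prefix_len(a, b):
    """Length of the shared prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def skip_subtree(words, i, depth):
    """Advance past every word sharing words[i][:depth] — the sorted-list
    equivalent of skipping a whole trie subtree at that depth."""
    prefix = words[i][:depth]
    while i < len(words) and words[i].startswith(prefix):
        i += 1
    return i

prev = ""
for w in words:
    # work done for w[:k] can be reused from the previous word,
    # exactly like re-entering the trie at depth k
    k = common_prefix_len(prev, w)
    prev = w
```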

    He’s also leaving performance on the table at trie build time, which is not part of the benchmark. He’s building the trie in the straightforward way, but for sorted input it could be built in O(n).
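    One way such a build could look, assuming a plain dict-of-dicts trie (all names illustrative): keep the node path of the previous word on a stack, pop back to the shared prefix, and only create nodes for the new suffix, so no shared-prefix node is ever revisited.

```python
def build_trie_sorted(words):
    """Build a dict-of-dicts trie from a SORTED word list, touching each
    character only once instead of re-walking shared prefixes from the root."""
    root = {}
    path = [root]  # path[d] = node reached after d chars of the previous word
    prev = ""
    for w in words:
        # length of the prefix shared with the previous word
        k = 0
        while k < min(len(prev), len(w)) and prev[k] == w[k]:
            k += 1
        del path[k + 1:]          # pop back to the deepest shared node
        node = path[-1]
        for ch in w[k:]:          # create nodes only for the new suffix
            node = node.setdefault(ch, {})
            path.append(node)
        node["$"] = True          # end-of-word marker
        prev = w
    return root

trie = build_trie_sorted(["car", "card", "care", "cat"])
```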







  • It’s not far from the truth. Knives, including cutlery knives, are age restricted in the UK, and you do need ID. And this was indeed just made stricter last year in response to “knife crime”, in particular a 2024 case of a teenager circumventing existing age checks for online knife sales and killing three girls. Now you need to provide photo ID both when buying, and on delivery or collection if you buy online.

    There’s also a proposal to require stores to report bulk sales of knives to the police. This one would exclude cutlery, but e.g. a sale of a set of six steak knives would have to be reported.






  • It doesn’t have VRR but it does have a configurable refresh rate. So e.g. if a game runs at a stable 40 fps you can run the display at 40 Hz too (or 80 Hz for the OLED model) and then you don’t get the uneven frame spacing you’d get from vsync with 40 fps on a 60 Hz display. With VRR the screen would also adjust to whatever frame rate the game produces even if it’s not stable, and the Deck doesn’t do that. But being able to get 40 fps with uniform frame timing instead of the 30 fps you’d have to use if the display was locked to 60 Hz (LCD model) or 90 Hz (OLED model) is a huge difference.
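    The frame-pacing arithmetic can be checked with a tiny simulation — a simplified model of vsync, assuming each frame takes exactly 25 ms (40 fps) and is flipped at the first vblank at or after it finishes rendering.

```python
import math

def present_times(frame_ms, hz, frames):
    """Deltas (ms) between consecutively presented frames under vsync:
    each frame appears at the first vblank at or after it is ready."""
    vblank = 1000.0 / hz
    times, ready = [], 0.0
    for _ in range(frames):
        ready += frame_ms
        times.append(math.ceil(ready / vblank - 1e-9) * vblank)
    return [round(b - a, 1) for a, b in zip(times, times[1:])]

print(present_times(25, 60, 6))  # 40 fps on 60 Hz: alternating ~16.7/33.3 ms
print(present_times(25, 40, 6))  # 40 fps on 40 Hz: uniform 25.0 ms
```

    The alternating 16.7/33.3 ms cadence is the judder you see at 40 fps on a fixed 60 Hz panel; matching the refresh rate to the frame rate makes every interval identical.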