• 8 Posts
  • 263 Comments
Joined 3 years ago
Cake day: June 14th, 2023

  • By the way, I really hope that you consider synthesizing concepts. As an exercise, Carroll concludes from his premises that:

    There is no life after death, as the information in a person’s mind is encoded in the physical configuration of atoms in their body, and there is no physical mechanism for that information to be carried away after death.

    But consider the following quote from I Am a Strange Loop, at the end of Chapter 18, “The Blurry Glow of Human Identity”. Remember, Hofstadter trained as a physicist, arguably as influential as Carroll in quantum theory, and no less of an anti-dualist or materialist. So, as an exercise, synthesize for yourself an understanding of why Hofstadter says:

    In the wake of a human being’s death, what survives is a set of afterglows, some brighter and some dimmer, in the collective brains of all those who were dearest to them. And when those people in turn pass on, the afterglow becomes extremely faint. And when that outer layer in turn passes into oblivion, then the afterglow is feebler still, and after a while there is nothing left. This slow process of extinction I’ve just described, though gloomy, is a little less gloomy than the standard view. Because bodily death is so clear, so sharp, and so dramatic, and because we tend to cling to the caged-bird view, death strikes us as instantaneous and absolute, as sharp as a guillotine blade. Our instinct is to believe that the light has all at once gone out altogether. I suggest that this is not the case for human souls, because the essence of a human being — truly unlike the essence of a mosquito or a snake or a bird or a pig — is distributed over many a brain. It takes a couple of generations for a soul to subside, for the flickering to cease, for all the embers to burn out. Although “ashes to ashes, dust to dust” may in the end be true, the transition it describes is not so sharp as we tend to think.


  • Well, the burden of proof doesn’t lie with Carroll. Instead, the entire point is that the non-materialist has the burden of evidence:

    Given a quantum state of the relevant fields, it accurately predicts how that state will evolve. Skeptics of the claim defended here have the burden of specifying precisely how that equation is to be modified. This would necessarily raise a host of tricky issues, such as conservation of energy and unitary evolution of the wave function.

    Otherwise I can rely upon Newton’s flaming laser sword; every time you ask about the possibility of non-materialism, I can ask you for the corresponding experiment which opens that possibility. Note that sometimes this is scientifically fruitful, as in the discovery of infrasound leading to many debunkings of hauntings as well as unlocking the secrets of elephant communication. (The more radical position of anti-materialism was conclusively refuted during the colonial era, so we cannot assume that the material world is only hypothetical.)

    This is all made stark in Figure 4, p15, which shows that any possible physical force not in the Standard Model would be so weak and subtle as to be undetectable by humans; when a human claims that they are sensitive to such a force, they have implicitly (and incorrectly) assumed that their body is physically capable of interacting with such a force in a perceptible way. The argument goes much like the argument against electrosensitivity: if you really could sense such a weak hypothetical force, then you would be constantly sensing the much stronger ambient forces from the outside environment, which we can’t mute.

    A common retort is that quantum states merely encode our epistemic knowledge, as humans, about a fundamentally-unknowable micro-reality below our scale of perception. However, the PBR theorem rules that out: under mild assumptions, any ontological model must treat the quantum wavefunction as ontic, not merely epistemic. Leifer spent about two years struggling against this result in vain and eventually published Leifer 2014, which serves both as a great overview of the no-go theorems for ontological models and as an example of how difficult it can be to unlearn previously-accepted beliefs.


  • Materialism and QM have no conflict. What QM shows, via the Kochen–Specker (KS) theorem, is that measurement outcomes cannot all be pre-existing and context-independent; reality must be contextual, arising from the participatory interactions between objects and subjects. Carroll 2021 is a fairly hard metaphysical barrier which prevents spurious anti-materialist claims by fully shifting the burden of proof to claimants: if you genuinely think that there are irreconcilable problems with materialism, then you must give a physics experiment which violates the Standard Model.

    The reviewers for Hofstadter don’t understand the book, which I have on my shelf and highly recommend. The point of Strange Loop is that the caged-bird metaphor, that there is exactly one mind per one brain, is wrong in both directions: sometimes there’s more than one mind in a brain, and sometimes a mind is not wholly contained in a single brain. The only reason to skip Strange Loop is if you’re a computer scientist or mathematician, in which case you should definitely read GEB first and Strange Loop second.


  • Get in the habit of running jj desc midway through a commit. Have you just discovered an insight in an old module? Do you see how you will write the next few hunks of code? Did you have a moment of clarity where you zoomed out and saw what the next week will look like? Document it! Let your commit messages have multiple paragraphs, each written at a different time. Fill your commits with annotations about what you were doing so that future-you will understand past-you.

    The reasoning here is that, aside from the jj file subcommands, which directly alter the contents of the working copy, almost any jj subcommand will work; jj snapshots your changes before it runs. You might as well pick a subcommand which advances your goals.





  • You need SRE concepts. First, if you break it then you fix it; in a system where anybody can make a change, it’s the changer’s responsibility to meet service objectives. Second, if your boss doesn’t find that acceptable then they need to appoint a service owner and ensure that only the owner can make changes; if the owner breaks it then the owner fixes it. Third, no more than half of your time should ever be spent fixing things; if something is constantly broken then call a Code Yellow or Code Red, tell your service users that you cannot meet your service levels, and stop working on new features until the service is stable again.
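    A minimal sketch of that third rule (the numbers here are invented for illustration, not from any real SLO): if firefighting exceeds half of the available time, feature work stops.

```rust
// Toy version of the 50% rule: more than half of team time spent on
// breakage means declaring a Code Yellow and freezing feature work.
fn over_toil_budget(toil_hours: f64, total_hours: f64) -> bool {
    toil_hours / total_hours > 0.5
}

fn main() {
    // 22 of 40 weekly hours spent firefighting: over budget.
    println!("{}", over_toil_budget(22.0, 40.0));
    // 10 of 40 hours is within budget.
    println!("{}", over_toil_budget(10.0, 40.0));
}
```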

    Under no circumstances, ever, should anybody stay late. There should only be normal business hours, which are best-effort, and an on-call rotation which is planned two months in advance. Also, everybody on call should be paid hourly minimum wage on top of salary for their time.





  • You’ve reinvented one of the two reasons that Project Xanadu failed: micropayments have very high overhead relative to the price of the content being paid for. (The other reason is that no known data structures efficiently implement Xanadu’s data model.)
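    To make the overhead concrete, here is a sketch under a hypothetical card-style fee schedule ($0.30 flat plus 2.9% per charge; the numbers are mine, not Xanadu’s):

```rust
// Fraction of a payment consumed by processing fees, assuming a
// hypothetical schedule of $0.30 flat plus 2.9% of the transaction.
fn fee_overhead(price: f64) -> f64 {
    (0.30 + 0.029 * price) / price
}

fn main() {
    // A 5-cent micropayment: fees are roughly six times the price itself.
    println!("{:.0}%", fee_overhead(0.05) * 100.0);
    // A $50 purchase: fees are a rounding error.
    println!("{:.1}%", fee_overhead(50.0) * 100.0);
}
```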

    Further, where does money come from? You’re sketching a system where money has relatively high velocity, but it’s all paying for content, which has marginal cost to distribute; how does money get into this system in the first place? This is why Bitcoin’s currently on a trend to zero; once everybody realizes this problem, the system collapses from lack of faith.

    I hope that thinking about this for a bit will radicalize you further towards the understanding that a universal income and artists’ stipend is the economically-sustainable way to compensate artists, rather than forcing folks to swap scraps of digital coinage.


  • From the perspective of somebody who’s actually hacked on Linux: Most Linux maintainers, like most programmers in general, are full of machismo stemming from the inherent difficulty of writing C. It is extremely difficult to write correct C and nobody can do it consistently, so those maintainers are heavily invested in the perception that they are skilled with C. Rust is much easier to write and democratizes kernel hacking, which is uncomfortable for older maintainers due to the standard teenagers-vs-parents social dynamics. Worse, adapting various kernel interfaces so that they are Rust-friendly has revealed that the pre- and postconditions of interface methods were not known before; there is existing sloppiness in the kernel’s internals which is only visible because of Rust-related cleanups.
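    A sketch of that last point (not actual kernel code; the function is invented): in C, a buffer parameter “may be NULL” only by convention, but a Rust port of the same interface moves the precondition into the signature, where every caller is forced to handle it.

```rust
// The C convention "buf may be NULL" becomes a type the compiler checks.
fn bytes_written(buf: Option<&[u8]>) -> usize {
    match buf {
        Some(bytes) => bytes.len(), // the non-null case, now explicit
        None => 0,                  // the NULL case can no longer be forgotten
    }
}

fn main() {
    println!("{}", bytes_written(Some(&b"hello"[..])));
    println!("{}", bytes_written(None));
}
```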

    Note that Linux is not a GNU project. GNU’s kernel project is GNU Hurd. “GNU/Linux” refers to a Linux kernel with a userland populated by GNU packages. It’s important not to be distracted by this; the kernel is agnostic towards userland and generally is compatible with any executable that uses Linux’s public syscall interface, so the entire discussion of Rust in the kernel is separate from anything going on in userland.

    Most siblings are wrong! PRs written in Rust can be rejected. There are already multiple non-C languages in the kernel. Rust is sufficiently available on the platforms where it will be required for building the kernel. Maintainers are only added after they have shown themselves to be socially reliable, and they can be removed by other maintainers if they are unresponsive. The only correct sibling points out that Rust is different.



  • If you want to know how Google specifically does things, search for “TeraGoogle”; it’s not a secret name, although I don’t think it has a whitepaper. The core insight is that there are tiers of search results. When you search for something popular that many other people are searching for, your search is handled by a pop-culture tier which is optimized for responding to those popular topics. The first and second pages of Google results are served by different tiers; on YouTube, the first few results are served from a personalized tier which (I expect) has cached your login and knows what you like, and the rest of the results come from a generalist tier. This all works because searches, video views, etc. are Pareto-distributed; most searches are for a tiny amount of cacheable content.
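    A toy sketch of the tiering idea (all names and data invented; the real systems are far more elaborate): popular queries hit a small hot cache, and only the long tail pays for the expensive generalist path.

```rust
use std::collections::HashMap;

// Two-tier lookup: a hot cache serves the popular head of the
// distribution; a slow generalist fallback handles everything else.
fn serve(query: &str, hot_tier: &HashMap<&str, &str>) -> String {
    match hot_tier.get(query) {
        Some(cached) => format!("hot tier: {cached}"),
        None => format!("generalist tier: full scan for '{query}'"),
    }
}

fn main() {
    let hot = HashMap::from([("cat videos", "top cached results")]);
    println!("{}", serve("cat videos", &hot));
    println!("{}", serve("obscure googlewhack", &hot));
}
```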

    There’s also a UX component. Suppose that you dial Alice’s server and Alice responds with a Web app that also fetches resources from Bob’s server. This can only be faster for you in the case where Bob is so close to you (and so responsive) that you can dial Bob and get a reply faster than Alice finishes sending her app. But Alice and Bob are usually colocated in a datacenter, so Alice will always be closer to Bob than you. This suggests that if Alice wants to incorporate content from Bob then Alice might as well dial Bob herself and not tell you about Bob at all. This is where microservices shine. When you send a search to Google, Youtube, Amazon, or other big front pages, you’re receiving a composite result which has queries from many different services mixed in. For the specific case of Google, when you connect to google.com, you’re connecting to a machine running GWS, and GWS connects to multiple search backends on your behalf.
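    A sketch of that composition pattern (function names invented; real backends would be separate in-datacenter services): the front end dials the backends itself, and the client pays for exactly one round trip.

```rust
// Stand-ins for backend services reachable over fast in-datacenter links.
fn web_backend(q: &str) -> String { format!("web: results for '{q}'") }
fn video_backend(q: &str) -> String { format!("video: results for '{q}'") }

// The front end fans out to the backends and returns one assembled page,
// instead of telling the client to fetch each piece over a slower link.
fn front_page(q: &str) -> String {
    format!("{}\n{}", web_backend(q), video_backend(q))
}

fn main() {
    println!("{}", front_page("tiered caching"));
}
```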

    Finally, how typical of a person are you? You might not realize how often your queries are handled by pop-culture tiers. I personally have frequent experiences where my search turns up zero documents on DDG or Google, where there are no matching videos on Youtube, etc. and those searches take multiple seconds to come up empty. If you’re a weird person who constantly finds googlewhacks then you’re not going to perceive these services as optimized for you, because they cannot optimize for the weird.





  • Hi! You are still bullshitting us. To understand your own incorrectness, please consider what a chatbot should give as an answer to the following questions which I gave previously, on Lobsters:

    • Is the continuum hypothesis true?
    • Is the Goldbach conjecture true?
    • Is NP contained in P?
    • Which of Impagliazzo’s Five Worlds do we inhabit?

    The biggest questions in mathematics do not fit nicely into the chatbot paradigm and demonstrate that LLMs lack intelligence (whatever that is). I wrote about Somebody Else’s Paper, but it applies to you too:

    This attempt doesn’t quite get over the epistemological issue that something can be true or false, determined and decided, prior to human society learning about it and incorporating it into training data.

    Also, on a personal note, I recommend taking a writing course and organizing your thoughts prior to writing long posts for other people. Your writing voice is not really yours, but borrowed from chatbots; I suspect that you’re about halfway down the path that I described previously, on Lobsters. This is reversible but you have to care about yourself.

    Last time, when I tried to explain this to you, you decided to use personal insults. Mate, I’m not the one who has eaten your brains. I’m not the one who told you that LLMs can be turned into genies or oracles via system prompts. I’m certainly not the one who told you that RAG solves confabulation. You may have to stop worshipping the chatbot for a moment in order to understand this but I assure you that it is worthwhile.



  • I think that there are a few pieces to it. There’s tradition, of course, but I don’t think that that’s a motive. Also, some folks will argue that not taking your hands off the keyboard to reach for a mouse is an advantage; I’m genuinely not sure about that. Finally, I happen to have decent touch typing; this test tells me 87 WPM @ 96% accuracy.

    First, I don’t spend that much time at the text editor. Most of my time is either at a whiteboard, synchronizing designs and communicating with coworkers, or reading docs. I’d estimate that maybe 10-20% of my time is editing text. Moreover, when I’m writing docs or prose, I don’t need IDE features at all; at those times, I enable vim’s spell check and punch the keys, and I’d like my text editor to not get in the way. In general, I think of programming as Naur’s theory-building process, and I value my understanding of the system (or my user’s understanding, etc.) over any computer-rendered view of the system.

    Second, when I am editing text, I have a planned series of changes that I want to make. Both Emacs and vim descend from lineages of editors (TECO and ed respectively) which are built out of primitive operations on text buffers. Both editors allow macro-instructions, today called macros, which are programmable sequences of primitive operations. In vim, actions like reflowing a paragraph (gqap) or deleting everything up to the next semicolon and switching to insert mode (ct;) are actually sentences of a vim grammar which has its own verbs and nouns.

    As a concrete example, I’m currently hacking on the Linux kernel because I have some old patches that I am forward-porting. From the outside, my workflow looks like staring out the window for several minutes, opening vim and editing less than one line over the course of about twenty seconds, and restarting a kernel build. From the inside, I read the error message from the previous kernel build, jump to the indicated line in vim with G, and edit it to not have an error. Most of my time is spent legitimately slacking, er, multitasking. This is how we bring up hardware for the initial boot and driver development too.

    Third! This isn’t universal for Linux hackers. I make programming languages. Right now, I’m working with a Smalltalk-like syntax which compiles to execline. There’s no IDE for execline and Smalltalks famously invented self-hosted IDEs, so there’s no existing IDE which magically can assist me; I’d have to create my own IDE. With vim, I can easily reuse existing execline and Smalltalk syntax highlighting, which is all I really want for code legibility. This lets me put most of my time where it should go: thinking about possibilities and what could be done next.