• 0 Posts
  • 353 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • NixOS. Started with Yellow Dog Linux in 1998.

    I don’t do everything through nix’s derivation system.

    Many of my configs are just an outOfStoreSymlink pointing into the same dotfiles repo. I don’t need every change wrapped in a derivation. Neovim is probably the largest. A few Node projects for automations I’m fine managing with pnpm. Nix still places everything, but I can tweak those configs and commit them in the same repo without a full-blown activation.
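
    For anyone curious, the pattern looks roughly like this with Home Manager’s mkOutOfStoreSymlink (the ~/dotfiles path is just an assumption for illustration):

```nix
# Sketch, assuming the dotfiles repo is checked out at ~/dotfiles.
# mkOutOfStoreSymlink links to the live files instead of a store copy,
# so edits take effect without a full rebuild/activation.
{ config, ... }:
{
  xdg.configFile."nvim".source =
    config.lib.file.mkOutOfStoreSymlink
      "${config.home.homeDirectory}/dotfiles/nvim";
}
```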




    With these sorts of tasks, models really seem to suffer from not knowing which packages or conventions have been deprecated. This is especially obvious with an immature ecosystem like Nix.

    This is where custom setups will start to shine.

    https://github.com/upstash/context7 - Pull version specific package documentation.

    https://github.com/utensils/mcp-nixos - Similar to above but for nix (including version specific queries) with more sources.

    https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking - Break down problems into multiple steps instead of trying to solve it all at once. Helps isolate important information per step so “the bigger picture” of the entire prompt doesn’t pollute the results. Sort of simulates reasoning. Instead of finding the best match for all keywords, it breaks the queries down to find the best matches per step and then assembles the final response.

    https://github.com/CaviraOSS/OpenMemory - Long conversations tend to suffer as the working memory (context) fills up: it gets compressed and details are lost. With this (and many other similar tools) you can have it remember and recall things, with or without a human in the loop to validate what’s stored. Great for complex planning or recalling details. I essentially have a loop set up with global instructions to periodically emit reinforced, codified instructions to a file (e.g., AGENTS.md) with human review. Combined with sequential thinking, it will identify contradictions and prompt me to resolve any ambiguity.

    The quality of the output is like going from 80% to damn near 100% as your knowledge base grows from external memory and codified instructions in files. I’m still lazy sometimes and will use something like Kagi assistant for a quick question or web search, but they have a pretty good baseline setup with sequential thinking in their online tooling.



  • It’s really not that different from a traditional web search under the hood. It’s basically a giant index and my input navigates the results based on probability of relevance. It’s not “thinking” about me or deciding what I should see. When I say a good assistant setup, I mean I don’t use Gemini or ChatGPT or any of the prepackaged stuff that tries to build a profile on you. I run my own setup, pick my own models, and control what context they get. If you check my post history I’m heavily privacy conscious, I’m not handing that over to Google or OpenAI.

    The summary helps me evaluate whether my input was good and the results are actually relevant to what I’m after, without wading through 20 minutes of SEO garbage to get there. For me it’s like getting the quality results you used to get before search got enshittified. It actually surfaces stuff that doesn’t even show up on the front page of a traditional search anymore.


  • I’m in software development and land on both sides of this argument.

    Having to review or maintain AI slop is infuriating.

    That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info while cutting through the blog spam and ads, run follow-up searches for additional info if needed, and summarize the results in seconds with references if I want to validate its output.

    There was a post a couple days ago about it solving a hard math problem with guidance from a mathematician. Sparked a discussion about AI being a powerful tool in the right hands.



  • Totally agree with your overall point.

    That said, I have to come to the defense of my terminal UI (TUI) comrades with some anecdotal experience.

    I’ve got all the same tools in Neovim as my VSCode/Cursor colleagues, with a deeper understanding of how it all works under the hood.

    They have no idea what an LSP is. They just know the marketing buzzword “IntelliSense.” As we build out our AI toolchains, it doesn’t even occur to them that an agent can talk to an LSP to improve code generation because all they know are VSCode extensions. I had to pick and evaluate my MCP servers from day one as opposed to just accepting the defaults, and the quality of my results shows it. The same can be done in GUI editors, but since you’re never forced to configure these things yourself, the exposure is just lower. I’ve had to run numerous trainings explaining that MCPs are traditionally meant to be run locally, because folks haven’t built the mental model that comes with wiring it all up yourself.

    Again, totally agree with your overall point. This is more of a PSA for any aspiring engineers: TUIs are still alive and well.



  • sloppy_diffuser@sh.itjust.works to Privacy@lemmy.ml · A few question from a noob · 1 month ago

    Firefox Nightly + arkenfox userjs + uBlock Origin + Bitwarden as my daily driver.

    It’s been a couple of years since I checked whether arkenfox is still in good shape. I get flagged as a bot all the time and constantly get popups about WebGL (GPU fingerprinting), so I assume it’s working as intended for my threat model.

    Tails when I really care.

    Mullvad VPN as my regular VPN with ProtonVPN for torrents.

    GrapheneOS / NixOS as my OS.

    Proton Visionary for most cloud services, except passwords, and I don’t really use Proton Drive. I do use Proton Pass for unique email aliases for every provider.

    Kagi for searches / AI.

    Etesync for contacts because Proton didn’t sync with the OS last I checked.

    Backblaze B2 for cloud storage with my own encryption via rclone (Round Sync on GrapheneOS).

    KeePass for a few things like my XMR wallets and master passwords I don’t even trust in Bitwarden.

    https://jmp.chat/ for my mobile provider.

    Pihole with encrypted DNS to Quad9.

    https://onlykey.io/ for the second half of my sensitive passwords (Bitwarden, LUKS, KeePass, OS login). First half memorized.

    It’s a lot. I burned myself out a couple of years ago keeping up with optimizing privacy, and this setup has served me well for 2 years without really changing anything. The cloud services are grey areas in terms of privacy, but the few ads that leak through uBlock have zero relevance to anything about me.


  • BSP Tree (with custom nodes).

    A vanilla BSP tree can produce your diagram: simply reorder your splits so the footer and main content areas are created first. A better approach is to support splits on non-leaf nodes. In the example below, you would split the root so that all of its children move under a new top node while a new bottom-bar leaf node is created.

    Root (split: vertical, ratio: 0.6)
    ├── Left child (Window A)
    └── Right child (split: horizontal, ratio: 0.5)
        ├── Top child (Window B)
        └── Bottom child (Window C)
    

    To access neighbors you’ll need your nodes to track their parents (doubly linked). Then you can traverse until all edges are found. Worst case it’s O(height + number of neighbors that exist), if I’m remembering correctly.
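
    A rough TypeScript sketch of the doubly linked nodes and the neighbor search, shown here only for left/right edges (all names are illustrative, not from any library):

```typescript
// Doubly linked BSP node: leaves carry a window label, split nodes
// carry an orientation/ratio and exactly two children.
type SplitType = "vertical" | "horizontal";

interface BspNode {
  parent: BspNode | null;
  split?: { type: SplitType; ratio: number };
  children?: [BspNode, BspNode];
  label?: string;
}

function leaf(label: string): BspNode {
  return { parent: null, label };
}

function splitNode(type: SplitType, ratio: number, a: BspNode, b: BspNode): BspNode {
  const n: BspNode = { parent: null, split: { type, ratio }, children: [a, b] };
  a.parent = n;
  b.parent = n;
  return n;
}

// Walk up the parent links until we hit a vertical split where we came
// from the near side, then walk down the sibling collecting the leaves
// that touch the shared edge. O(height + number of neighbors).
function neighbors(node: BspNode, dir: "left" | "right"): string[] {
  for (let cur: BspNode = node; cur.parent; cur = cur.parent) {
    const p = cur.parent;
    const idx = p.children![0] === cur ? 0 : 1;
    const fromNearSide =
      p.split!.type === "vertical" &&
      ((dir === "right" && idx === 0) || (dir === "left" && idx === 1));
    if (!fromNearSide) continue;

    const out: string[] = [];
    const stack: BspNode[] = [p.children![dir === "right" ? 1 : 0]];
    while (stack.length > 0) {
      const n = stack.pop()!;
      if (!n.children) {
        out.push(n.label!);
      } else if (n.split!.type === "vertical") {
        // only the child touching the shared edge can be a neighbor
        stack.push(n.children[dir === "right" ? 0 : 1]);
      } else {
        // horizontal split: both children touch the shared edge
        stack.push(n.children[0], n.children[1]);
      }
    }
    return out;
  }
  return []; // node sits on the outer edge of the root
}
```

    Building the example tree from the diagram (A | (B / C)), the right-edge neighbors of Window A are B and C, and the left-edge neighbor of B is A.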

    Depending on how efficient you want it to be, there are speed-ups. It has been a while, but I do remember that keeping the tree as balanced as possible makes the search around O(log n). Having each split node keep an index of all children in its subtree also reduces how much traversal is needed when you need all children after a split.

    It can get a little complicated, but it is doable. That said, how many splits will a TUI have? This may be premature optimization.

    Custom nodes are where you can support patterns that allow further optimization. Tables that will always be a grid, or tab bars that are a 1×n grid, could be further specialized nodes.

    This is all about layout. Fixed/dynamic width/height windows, padding, margins, and borders are all render processing and don’t affect the layout (unless you want reactivity). By that I mean windows that split differently when the viewport is portrait versus landscape and dynamically adjust to the window size, sometimes with different “steps”: a square viewport may be treated differently from both portrait and landscape, or 4:3 differently from 16:9.

    TUIs are not my day job but I’ve made a few in my day. Above are just my opinions from experiences. There is no “right” answer but hopefully some of this helps your journey.


    TypeScript is my day job, and using a custom JSX factory makes it pretty easy to define HTML-like interfaces for devs that support mixing layout, render attributes, content, and app logic.

    Explicit BSP splits:

    <Split type="vertical" ratio={0.6}>
      <WidgetA />
      <Split type="horizontal" ratio={0.5}>
        <WidgetB />
        <WidgetC />
      </Split>
    </Split>
    

    Custom nodes:

    <Container>
      <TabBar>
        <Tab>Tab 1</Tab>
        <Tab>Tab 2</Tab>
      </TabBar>
      <StatusBar />
    </Container>
    

    Not sure what your stack is, but I’m throwing it out there as something I’ve used successfully.








  • If you knew how to start, you wouldn’t be learning anything. So, you are learning. What do you want to learn about as you learn how to architect and develop some software? Web, databases, 3d rendering, gaming, etc.

    Chances are you are going to need a framework, library, or many of both. You’ll be learning those too.

    Back to architecture. Once you have an idea of what you want to build, we want to get something running fast. Because what’s going to happen is you are going to make bad decisions. Lots of them, and that’s good! You want to fail so many times that you learn what not to do, how to debug to keep things moving forward, and so on.

    So start with a hello world. Serve a webpage. Connect to a database. Draw a square. Then add another and another.

    There is a quote out there that the difference between a beginner and a master is that the master spent X,000 hours failing. At the end of the day it’s just time spent learning.

    You may start over. You may switch tech stacks. You may give up when something more interesting comes along.

    90% of my personal projects never get completed. I’m usually learning a tool. If I’m reasonably able to use a new tool I’ve learned something. Become really good at learning. Learn to read code. Learn to read type signatures. Learn how the tools you learn work. Learn how to make them do things they were not meant to do. Learn your build system. Learn to setup linting, document your project, setup CI/CD, and so on.

    For reference, I’m in my 40s. Started coding at 13. I’ve worked R&D and greenfield projects for the same Fortune 100 company for almost 2 decades. Done everything from web, data pipelines, and network code to integrated firmware. As you skill up it gets easier. My team usually picks up a new stack every project as I level them up, just to expose them to different things.


  • Been using Nix for just over a year.

    Seconding the advice to go with flakes. No idea wtf channels are or whatever the previous system was.

    Documentation can be confusing due to changes in paradigms. The bare “nix <scope>” form seems to be the most modern, as opposed to “nix-<scope>” (e.g., nix store vs nix-store). That said, not every feature can be found in the newer variants.

    This can make following tutorials difficult if they aren’t using the same paradigm.

    Getting comfortable with the Nix language will be helpful. It’s a functional programming language, which is very different from languages like bash.

    Not everything has to be done the nix-way. My nvim files are in the same repo, but I just outOfStoreSymlink them instead of wrapping them in a derivation.

    Some useful packages I like that haven’t already been shared:

    Disk partitioning: https://github.com/nix-community/disko

    Immutable: https://github.com/nix-community/impermanence - Pretty much resets the system to a fresh install on every boot. Discourages manual tweaks via sudo, as they get wiped out. You can still mark certain directories as persistent (logs, personal documents, Steam games, etc.).
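
    As a rough sketch of what the persistence config looks like (the /persist mount point, the username, and the directory choices are assumptions for illustration):

```nix
# Sketch: assumes the root filesystem is wiped on boot and a separate
# "/persist" volume survives reboots.
{
  environment.persistence."/persist" = {
    directories = [
      "/var/log"         # keep system logs across boots
      "/var/lib/nixos"   # uid/gid mappings
    ];
    users.alice.directories = [
      "Documents"
      ".local/share/Steam"  # steam games
    ];
  };
}
```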

    Nvfetcher: https://github.com/berberman/nvfetcher - Nix has a concept of overlays. You can pretty much override anything with .override (module args or inputs) and .overrideAttrs (module attribute set or outputs). Nvfetcher helps with tracking different sources so you can override a package’s src attribute. Why is this useful? So you can install any version you want and are not bound to nixpkgs. That doesn’t mean the install script in nixpkgs will always work on newer versions, but those parts can be overridden as well if needed.
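
    A hypothetical overlay using an nvfetcher-generated sources file might look like this (somepkg is a made-up package name; nvfetcher writes _sources/generated.nix by default):

```nix
# Hypothetical example: pin somepkg to whatever version nvfetcher is
# tracking by overriding the version and src from nixpkgs.
final: prev:
let
  sources = prev.callPackage ./_sources/generated.nix { };
in
{
  somepkg = prev.somepkg.overrideAttrs (old: {
    inherit (sources.somepkg) version src;
  });
}
```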

    Note that disko, impermanence, and nvfetcher all have a flake.nix in the root of the repo. Those provide ready-to-go overlays so you don’t have to deal with writing your own, which is really nice if you want the latest version without much work when it’s available.