

I’d love to understand the costs associated with a project like this. Do you have a breakdown anywhere of where the money goes each month?


Inference is dirt cheap in comparison: hundreds to thousands of concurrent users can be served by hardware costing in the high thousands to low tens of thousands of dollars.
Training those same foundational models takes weeks to months of time on tens to hundreds of millions of dollars’ worth of hardware.


Played through the game after his video; no regrets.


The simpler the arbitrary string/blob parsing logic, the less this happens:
https://app.opencve.io/cve/?product=grub2&vendor=gnu
I agree with you that it’d be nice if the cuts were a little shallower and allowed for an encrypted boot partition. That said, you can still keep the system reasonably secure by encrypting the data partitions and signing the entire boot process, aborting decryption if the boot partition doesn’t match its signatures. You already have to do this with the EFI partition if you’re particularly paranoid about that attack vector, so this really isn’t a new one.
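A minimal sketch of the "sign and verify before unlocking" idea, using plain openssl. Everything here is illustrative: the file names, the stand-in scratch file for the boot partition (swap in the real device path, e.g. /dev/sda2, on an actual system), and the choice of RSA; a real setup would run the verify step from the initramfs.

```shell
BOOT_DEV=$(mktemp)                 # stand-in for the real boot partition device
echo "kernel + grub contents" > "$BOOT_DEV"

# one-time: generate a signing key pair you keep off the machine
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out boot.key 2>/dev/null
openssl pkey -in boot.key -pubout -out boot.pub

# after every kernel/bootloader update: sign the partition contents
dd if="$BOOT_DEV" bs=1M status=none | openssl dgst -sha256 -sign boot.key -out boot.sig

# early in boot: verify, and refuse to unlock the data partitions on mismatch
if dd if="$BOOT_DEV" bs=1M status=none | openssl dgst -sha256 -verify boot.pub -signature boot.sig; then
  echo "boot partition verified"
else
  echo "signature mismatch; refusing to unlock" >&2
  exit 1
fi
```

Tampering with the partition contents between the sign and verify steps makes the verify fail, which is the abort condition described above.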
If Caddy is acting as a proxy for anything, you should not need to forward that port externally; a local host firewall allowing traffic from your local network is sufficient.
Depending on your physical host layout, you may be looking at an issue with NAT reflection.
You have not given us enough about your topology to assist in troubleshooting.
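For reference, a minimal sketch of that layout in a Caddyfile (the hostname and backend port are made up for illustration): only Caddy's 80/443 ever need firewall rules, and the backend port stays bound to localhost.

```
# Caddy terminates TLS and proxies to a backend that only listens locally.
app.example.lan {
    reverse_proxy 127.0.0.1:8080
}
```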


What is the context here? What was the original inquiry?


The tl;dw is that a retrospective look at history often allows us to draw parallels to the present and predict the future.


There are a few devs who seem to do it right. Slay the Spire 2 is stable, complete, and reasonably balanced in its current state.
If the game didn’t get another update after today, I’d still feel like I got my money’s worth.


She’s old. She doesn’t want to read a book. As best as I can tell she wants to have something slightly brain engaging during commercials when watching shows, but is far too stubborn to learn a new game she doesn’t already know.
Who are we to tell someone what they should want to do with their twilight years?


The unfortunate reality is they cannot be avoided in some cases. There is no paid alternative to Facebook, nor are there paid alternatives to a lot of f2p mobile games.
My grandma had a tablet about a decade ago, and I loaded it up with tons of paid $1-$3 casino games for her (it’s what she wanted). A decade later, when going to reinstall them on a new tablet, none of them existed on the Play Store anymore, and seemingly 100% of current games are either ad-supported or require IAP to refill your virtual currency.
She literally did what you asked, and today she still has no options. What should my 86-year-old grandma do in this case?


Chinese, specifically so I can exclusively curse in it like they did on Firefly.


There are server chips like the E7-8891 v3 which lived in a weird middle ground of supporting both ddr3 and ddr4. On paper, it’s about on par with a Ryzen 5 5500, and they’re about $20 on US eBay. I’ve been toying with the idea of buying an aftermarket/used server board to see if it holds up the way it appears to on paper. $20 for a CPU (could even slot 2), $80 for a board, $40 for 32gb of ddr3 in quad channel. ~$160 for a set of core components doesn’t seem that bad in modern times, especially if you can use quad/oct channel to offset the bandwidth difference between ddr3 and ddr4.
I think finding a cooler and a case would be the hardest part
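Rough numbers on the channel-count point. Assuming DDR3-1600 in the quad-channel build and a typical DDR4-3200 dual-channel desktop for comparison (both assumptions mine; peak theoretical figures only), the extra channels fully close the gap:

```python
def peak_bandwidth_gb_s(mt_per_s, channels, bus_bytes=8):
    # transfers/sec * 8 bytes per 64-bit transfer * number of channels
    return mt_per_s * bus_bytes * channels / 1000

ddr3_quad = peak_bandwidth_gb_s(1600, 4)  # -> 51.2 GB/s
ddr4_dual = peak_bandwidth_gb_s(3200, 2)  # -> 51.2 GB/s
print(ddr3_quad, ddr4_dual)
```

Same peak bandwidth on paper, though real-world latency and per-channel efficiency will differ.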


+1 for Niagara. It takes a few days to get used to, but it’s the launcher every power user didn’t know they wanted. Lifetime purchase options and a very responsive/passionate dev.


https://en.wikipedia.org/wiki/Markov_chain
Before the advent of AI, I wrote a Slack bot called slackbutt that built Markov chains, with prefix lengths chosen randomly between 2 and 4, out of the chat history of the channel. It was surprisingly coherent. Making an “llm” like that would be trivial.
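The core of a bot like that fits in a few lines. A minimal sketch, assuming a fixed prefix length of 2 (the 2-4 part would just mean picking that length per response) and made-up sample text:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words that followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=15, seed=None):
    """Random-walk the chain from a random (or given) prefix."""
    prefix = seed or random.choice(list(chain))
    out = list(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:  # dead end: this prefix only appeared at the corpus end
            break
        out.append(random.choice(followers))
    return " ".join(out)

history = "the bot read the chat history and the bot wrote the chat back"
chain = build_chain(history, order=2)
print(generate(chain))
```

Feed it a real channel history instead of the toy string and the output gets eerily on-brand for the channel, which is where the coherence comes from.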


It definitely can be disabled post-install, but it’s much simpler to leave it out at install time, with the added benefit of not pulling in 2-5GB of other things that won’t be relevant to your use case. It’s not that the disk waste is that big of a deal, but any issues you run into will be that much easier to troubleshoot with fewer moving parts.


That wasn’t quite the takeaway I was going for. You can get a lot done on 8GB of RAM. I was just trying to point out that it would probably be your first bottleneck as you started to scale out, and that you should consider running the server headless to make the RAM you have go that much further.


All of those would be perfectly cromulent nodes for small containers. The first issue you’ll run into is the low RAM. Some homelab projects would cause you to exceed 8GB, but the good news is that if you’re using an external storage backend via NFS, you can always scale out (more nodes) or up (more compute per node) later with minimal headache.
If you’re going to be memory constrained, don’t waste 1-2GB on a GUI; install Ubuntu/Debian/whatever headless.


Compsci labs or everywhere?


Not OP, but I think this guy is remembering a scene from Silicon Valley, not from reality. That said, it’s probably not that far off: Amazon smart devices absolutely have this “feature” in production today, and it’s opt-out, not opt-in.
Depending on when you were born: Class of '09.
I’m not sure how it would hit for non-millennials, but if you went to school in the early aughts, the entire series is great.