

Two rack rails bolted together with a power strip and a tray holding my server mini PC. My router is bolted on as well to act as a switch for everything while also providing Wifi to my phone and laptop






I kind of railroaded myself into using calibre unfortunately.
I have a very specific file-naming scheme that I originally came up with back when I only used folders to organise my books, in order to group together books that belong to a series where the series itself is part of a larger universe.
Basically my folder structure is {World}/{Reading Order}; {Series} #{Series_Index} - {Title} - {Author}
On my Kobo I have the autoshelf plugin installed, which automatically parses this information when I add books and groups them together by world while filling out the series information.
To properly make use of this system I need Calibre's custom columns and the ability to export the books I want with this specific name format. I have yet to find a program other than Calibre that supports this.
It would probably be smarter for me to reorganize my books at some point, but I really like being able to drop a ton of books at once onto my reader using SFTP. As far as I can tell, all common options rely on manually downloading the books, sending them directly to the reader, or pulling them from their internal file storage in whatever form the application stores them…
I do like Audiobookshelf for the ability to add a book to multiple series, but the missing mass-export function stops me from switching.
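For the curious, the scheme is regular enough to parse with a few lines of Python. This is just an illustrative sketch; the pattern and the example path below are mine, not anything Calibre or the plugin actually produces:

```python
import re

# Parse paths following the scheme:
# {World}/{Reading Order}; {Series} #{Series_Index} - {Title} - {Author}
PATTERN = re.compile(
    r"(?P<world>[^/]+)/"
    r"(?P<reading_order>[^;]+); "
    r"(?P<series>.+) #(?P<series_index>[\d.]+) - "
    r"(?P<title>.+) - "
    r"(?P<author>.+)$"
)

def parse_book_path(path: str) -> dict:
    """Return the named fields of a path matching the scheme above."""
    m = PATTERN.match(path)
    if m is None:
        raise ValueError(f"Path does not match scheme: {path}")
    return m.groupdict()

info = parse_book_path(
    "Cosmere/3; Mistborn #1 - The Final Empire - Brandon Sanderson"
)
print(info["world"], "/", info["series"], info["series_index"])
```

The greedy groups rely on ` #` and the last ` - ` as separators, so titles containing those exact substrings would need a stricter pattern.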


I name mine after greek and roman gods.
My NAS is named Hestia, the goddess of the hearth and home.
My docker server is called Poseidon due to the sea iconography of Docker. The second iteration of my docker server, where I tried playing around with Podman, I called Neptune.
I briefly had a Raspberry Pi for experimenting with some stuff which was called Eileithyia, the goddess of childbirth.
My Proxmox machine, on which pretty much all my other servers run as VMs, is called Atlas, as the Titan holding up my personal network.
I also have a TrueNAS VM which I boringly called truenas…


How high is the power bill? I considered getting some more smaller drives but I figured it is more power efficient in the long term to buy bigger HDDs, not to mention that I only have 4 disk slots
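My back-of-the-envelope reasoning, with assumed per-drive wattages and an assumed electricity price (none of these are measured values):

```python
# Rough comparison of yearly running cost: many small drives vs. one
# big drive. All numbers here are assumptions for illustration only.
PRICE_PER_KWH_EUR = 0.30   # assumed electricity price
HOURS_PER_YEAR = 24 * 365

def yearly_cost_eur(num_drives: int, watts_per_drive: float) -> float:
    """Yearly electricity cost for drives spinning 24/7."""
    kwh = num_drives * watts_per_drive * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH_EUR

# e.g. four 4 TB drives at ~5 W each vs. one 16 TB drive at ~6 W
small = yearly_cost_eur(4, 5.0)
big = yearly_cost_eur(1, 6.0)
print(f"4x4TB: {small:.2f} EUR/year, 1x16TB: {big:.2f} EUR/year")
```

With those assumptions the single large drive is roughly a third of the cost per year, on top of using only one of the four slots.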


I wish I could afford an SSD NAS since my main server is located in my bedroom. For now I have to be content with shutting down anything overnight that triggers HDD activity.
I used to have a 4TB IronWolf HDD but ran out of space on that too. As I already use a 2x16TB NAS server as a backup destination, I looked to get another 16TB drive that I might repurpose at some point in the future.
I had to settle for a WD Elements HDD at about 310 euros. My IronWolf was really quiet, which might be because it is a 5400 RPM drive. The Elements drive almost drives me mad because the drive head clicks very loudly.
That's the same reason I don't use my actual Synology NAS with Toshiba MG08 drives as more than a backup server, but at least those are actual server HDDs and so usually aren't expected to be quiet.
I also just wanted to rant a bit. Don’t mind me


Quick question, the way you say server/agent architecture, does this mean that the server manages the backup schedule and pulls the backups from the systems or does the connected computer initiate the backups?
I'm currently using Synology Active Backup for my server and used to also use it for my desktop. Linux support is not ideal though, and I would like to move to something with similar capabilities that is also not vendor-locked.
My personal usecase would be backing up a single server, a desktop and a laptop.


Good questions. I'd like to know that too.
I have a bare minimum of documentation as markdown files, which I take care to keep in an accessible location, aka not on my server.
If my server does ever go down, I might really want to access the (admittedly limited) documentation for it.
I read the title and this was literally the first thing that popped in my head


Don't ask me how it's named, but I believe there was a fork of the project relatively recently in reaction to some AI stuff they did.
The fork has all AI features scrubbed out.
Professionally or hobbywise?
Hobbywise I'm pretty dead lately because I left all my embedded gear at my parents' when I moved.
Professionally, I am trying to optimize software on a microcontroller to minimize power consumption for my master's thesis. Currently I'm sitting at an average current draw of 70 µA at 3 V. If all goes well I might get it even lower.


Yeah, that would be the ideal scenario I guess.
It should technically be possible by mapping the compose files into the opt folder via docker mounts, but I think that's an unreasonable way to go about it, since every compose file would need its own mounting point.


Proxmox to manage my VMs, SSH for anything on the command line, and Portainer for managing my docker containers.
One day I will probably switch to Dockge so my docker-compose files are stored as plain files on the hard drive, but for now Portainer works flawlessly.


After reading through some of the comments, here is my opinion.
C would be a good language IF you know your students plan to get into IT, specifically a sector where low-level knowledge is useful. Beyond that, I assume your students probably use Windows, and I personally always find it a pain to work with C on Windows outside of full IDEs like JetBrains' offerings and Visual Studio. It's also a lot more work until you get results that you are happy with. Unless you start with an Arduino, which I find is a pretty nice way to get students interested in embedded stuff.
I don't like JavaScript because I find it a mess, although it is very useful for anything web-related.
Given you said in another comment that this is meant to be a general-purpose skill for your students, I would strongly recommend Python. While I dislike the dynamic type system, it is a very powerful language for getting stuff done. You can quickly get results that feel rewarding instead of running into hard-to-fix issues that turn your students off programming in general. It's also very useful outside of IT as a scripting language for analyzing data in basically any field or for generating nice plots for a document.
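To illustrate the "get stuff done" point: summarizing some measurements takes nothing but the standard library (the numbers below are made up):

```python
# Quick data summary using only the standard library's statistics
# module. The sample values are invented for demonstration.
import statistics

measurements = [3.1, 2.9, 3.4, 3.0, 3.2]
print("mean: ", statistics.mean(measurements))
print("stdev:", statistics.stdev(measurements))
```

That's the kind of immediately useful result that would take considerably more ceremony in C.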


I remember building something vaguely related in a university course on AI, back before ChatGPT was released and the whole LLM thing took off.
The user had the option to enter a couple of movies (so long as they were present in the weird semantic database our professor told us to use), and we calculated a similarity matrix between them and all other movies in the database, based on their tags and on putting each description through a natural language processing pipeline.
The result was the user getting a couple surprisingly accurate recommendations.
Considering we had to calculate this similarity score for every movie in the database, it was obviously not very efficient, but I wonder how it would scale up against current LLMs, both in terms of accuracy and energy efficiency.
One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don't think an LLM is as predictable in that regard.
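For reference, the tag-based part of that scoring fits in a few lines. The movies and tags below are made up for illustration; for binary tag sets, cosine similarity reduces to |A ∩ B| / sqrt(|A|·|B|):

```python
# Toy tag-based recommender: represent each movie as a set of tags
# and rank the others by cosine similarity. Data is invented.
import math

movies = {
    "Alien":       {"scifi", "horror", "space"},
    "The Martian": {"scifi", "space", "survival"},
    "The Shining": {"horror", "hotel"},
}

def cosine(a: set, b: set) -> float:
    """Cosine similarity of two binary tag vectors."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def recommend(liked: str, top_n: int = 2) -> list:
    """Rank every other movie by similarity to the liked one."""
    scores = {
        title: cosine(movies[liked], tags)
        for title, tags in movies.items()
        if title != liked
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("Alien"))  # deterministic: same input, same ranking
```

The determinism is exactly the property mentioned above: the ranking is a pure function of the tag sets.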
I used to use enums for my return codes.
Then I got pissed that I had to add my enum definition to every project I worked on.
I now return integers based on errno.


Thanks. I’ll keep this in mind in case my new stack causes issues again


Hey, just wanted to let you know that my updated stack has been running perfectly since I changed it based on your setup. Thanks


I know that the port forwarding command can be simplified. In my case it's this complex because the way it is listed in the gluetun wiki did not work, even though I disabled authentication for my local network. The largest part of the script is authenticating with the username and password before actually sending the port forwarding command.
I’ll definitely try adjusting my stack to your variant though. I’ve also tried the healthcheck option before but I must have configured it wrong because that caused my gluetun container to get stuck.
One question regarding your stack though, is there a specific reason for binding /dev/net/tun to gluetun?
I doubt you've heard of it, honestly. It's an ADuCM355 from Analog Devices. Internally it uses a Cortex-M3.