

my HDD with 80k power on hours is gonna have to keep kicking for now I guess


just installed it. to be honest I didn’t mind the r2modman ui but now I’ve seen the light…
thanks haha


very cool! do you know if gale supports exporting and importing profiles as hash codes? that’s how me and my friends tend to share modpacks.
either way though I’ll check it out, thanks.


I’ve used atlauncher for minecraft without issues. r2modman works perfectly via AppImage for a lot of steam games as well.
worst comes to worst you can manually drop files in, but the tools do exist. it just depends on what you’re modding


agreed. even the internal lcds are starting to go with age. I’ve replaced a few in the past month that had no physical damage at all; they just decided their time had come, I guess.


they’re all containerization programs, yes. I believe they differ in some minor details, but thanks to the OCI standards an image built with docker will run in podman or vice versa.
distrobox is a little more feature rich for development: its containers are meant for exposing services and are interactive by default, vs docker’s run-and-forget methodology.
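to make the interop point concrete, here’s a minimal sketch (image name and tarball path are just placeholders):

```bash
# build an image with docker, then run the exact same OCI artifact with podman
docker build -t localhost/myapp:latest .
docker save localhost/myapp:latest -o myapp.tar

podman load -i myapp.tar
podman run --rm localhost/myapp:latest
```

(pushing to a registry and pulling from the other tool works just as well; the save/load route just keeps it local.)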


podman works well; docker is a little finicky due to some systemd weirdness and the whole immutability of it all.
it mainly tries to get you to use distroboxes, which are awesome. you can even install something in a distrobox and expose it to the host.
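rough sketch of what that looks like (box, image, and package names are just placeholders; double check the flags against the distrobox docs):

```bash
# create a box and hop into it
distrobox create --name dev --image archlinux:latest
distrobox enter dev

# inside the box: install something, then expose it to the host
sudo pacman -S htop
distrobox-export --bin /usr/bin/htop --export-path ~/.local/bin   # expose a CLI binary
distrobox-export --app firefox                                    # expose a GUI app launcher
```

the exported wrapper on the host just re-enters the box behind the scenes, so it feels native.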


they’ll try to get bailed out, but you’d have to bail out so many companies that it’s not feasible. you can’t just bail out one of these companies; they all propped their stock value up on each other, so unless you bail out every company in the tech sector, trillions of market cap will still get wiped out.
this is a good thing though. it will mostly only affect those who are overly invested in ultimately unprofitable tech, and the rest of us will be able to buy cheap stock in affected companies like Google and Amazon, which will be hit massively but obviously aren’t just gonna go out of business. it’s similar to the covid drop: it sucked for rich people, but for the average person it wasn’t a massive issue and even had money-making opportunities attached to it as these big companies scrambled.


massgrave.dev has you covered if windows throws a fit
is that a big pdu he’s holding?


I think modern compilers do actually compile recursion to be equivalent to an iterative model.
edit: yes, when possible they will compile it to be iterative (tail calls especially), but if it can’t be expressed iteratively it stays a series of calls and returns. depends on the specific type of recursion, it looks like.
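a minimal C sketch of the tail-call case (function names made up; gcc/clang at -O2 will typically turn this into a plain loop, which you can confirm by reading the -S output):

```c
#include <stdio.h>

/* tail-recursive sum: the recursive call is the last thing the function
 * does, so an optimizing compiler can reuse the stack frame and emit a
 * jump instead of a call. */
static long long sum_to(long long n, long long acc) {
    if (n == 0)
        return acc;
    return sum_to(n - 1, acc + n);  /* tail call: eligible for elimination */
}

int main(void) {
    /* without the optimization this depth would likely blow the stack;
     * built with -O2 it usually runs as a loop and finishes fine. */
    printf("%lld\n", sum_to(1000000, 0));
    return 0;
}
```

something like a naive tree traversal or non-tail fibonacci, on the other hand, really does stay a series of calls and returns.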


close, but it would actually be missing the codec in proton. bazzite ships a very comprehensive ffmpeg build and all of its dependencies.
steam just can’t include those proprietary libs in proton. luckily, Proton GE exists for this exact reason.


I haven’t looked at that GitHub but I’m familiar with most of the terms so here goes (verify them if you wish, I can’t promise full accuracy).
portable file server with accelerated resumable uploads: portable most likely means it’s easy to move from one server to another should you ever upgrade servers or anything else. resumable means an interrupted or paused upload can pick up where it left off instead of starting over.
dedup: it will automatically deduplicate files. so if you upload the same file twice it will just reuse the one you previously uploaded, saving space (rough sketch of the idea after this list).
webdav: Web Distributed Authoring and Versioning, an extension of HTTP. I don’t know a crazy amount about it, but it basically lets clients open, edit, and reupload files on the server directly, so it aids with that send-a-file, work-on-it, reupload-it kind of collaboration.
ftp: file transfer protocol.
tftp: trivial file transfer protocol. good for small things but iirc it’s not inherently secure
zeroconf: plug and play. the server announces itself on the local network (mDNS and friends) so clients can discover it, no messing with configs needed.
media indexer/all in one file: most likely indexes media uploaded and stores the generated thumbnails in one big file. most likely this is so it’ll be easier to transfer the install to another server if needed (you can move one big file containing all the thumbnails instead of a bunch of tiny ones).
no deps: no dependencies, everything you need is self contained in that repo.
again, double check things you’re curious about, but that’s my interpretation of what most would agree is kind of just a keyword-filled description lol
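to make the dedup point a bit more concrete, here’s a rough sketch of content-hash dedup in general (nothing specific to that repo; paths and the function name are made up):

```bash
# store each upload under the hash of its contents; identical files
# collapse onto a single blob, and names become cheap links to it.
store_file() {
    local src=$1 hash blob
    hash=$(sha256sum "$src" | cut -d' ' -f1)
    blob="store/$hash"
    mkdir -p store files
    if [ -e "$blob" ]; then
        echo "dedup hit: $src already stored"
    else
        cp "$src" "$blob"
    fi
    ln -sf "../$blob" "files/$(basename "$src")"
}
```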


thanks, I’ll check it out!


interesting. what do you use as the model, and how is that config set up? I’m not opposed to trying it, I just don’t know much about using it for workflows. is there an article you’d recommend?


I’m glad you found something that works for you, but giving ai control over a git workflow sounds like a catastrophe waiting to happen. how do you ensure it doesn’t do something stupid?


according to that page the issue stemmed from an underlying system responsible for health checks in load balancing servers.
how the hell do you fuck up a health check config that bad? that’s like messing up smartd.conf and taking your system offline somehow


almost done setting everything back up after a catastrophic failure (ended up replacing multiple drives, the CPU, the motherboard, the PSU, and the RAM).
now I’m just running long command after long command: waiting for drives to zero, making sure extended smart checks pass on the new drives, cloning to my backup drives…
this thing’s been down for a few weeks and I’m so excited to have it back up soon!
anyways, moral of the story is, the 3-2-1 strategy is a good one for a lot of reasons. just do it, it may save your ass down the line.
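for anyone curious, the drive-prep commands in question look roughly like this (device names are placeholders; the dd line wipes the target drive, so triple check before running it):

```bash
# zero a brand new drive (destructive!) and watch progress
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress

# kick off an extended SMART self-test, then check the result later
sudo smartctl -t long /dev/sdX
sudo smartctl -a /dev/sdX
```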


in the same vein (storing more data in fewer bits) you should check out tagged pointers as well!
I don’t think that’s a useless implementation at all. code looks relatively clean, and it definitely has its uses in the embedded systems world.
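for anyone unfamiliar, here’s a minimal C sketch of the tagged pointer idea (names made up; it relies on malloc’d pointers being at least 8-byte aligned, which holds on most platforms):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* aligned pointers have their low 3 bits clear, so we can borrow them
 * to smuggle a small tag (a type id, a flag, etc.) inside the pointer. */
#define TAG_MASK ((uintptr_t)0x7)

static void *tag_ptr(void *p, unsigned tag) {
    assert(((uintptr_t)p & TAG_MASK) == 0 && tag <= TAG_MASK);
    return (void *)((uintptr_t)p | tag);
}

static void *untag_ptr(void *p)  { return (void *)((uintptr_t)p & ~TAG_MASK); }
static unsigned get_tag(void *p) { return (unsigned)((uintptr_t)p & TAG_MASK); }

int main(void) {
    int *data = malloc(sizeof *data);
    *data = 42;

    void *tagged = tag_ptr(data, 3);  /* stash a 3-bit tag alongside the address */
    printf("tag=%u value=%d\n", get_tag(tagged), *(int *)untag_ptr(tagged));

    free(data);
    return 0;
}
```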


I’m a big enjoyer of pushd and popd
so if you’re in a working dir and need to go work in a different dir, you can pushd ./, cd to the new dir and do your thing, then popd to go back to the old dir without typing the path again
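roughly (paths are just placeholders):

```bash
pushd .          # save the current directory on the directory stack
cd /etc/nginx    # go do your thing somewhere else
# ...work...
popd             # pop the saved entry and jump straight back

# or in one step: pushd /etc/nginx saves the old dir and cds there at the same time
```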