

Thing is, even using this new app it’s still ST underneath, using their Discovery servers. That’s as big an issue as anything.


To your point, the developer of an iOS Syncthing client (Möbius) has financially supported ST development for at least 3 years that I know of. I don’t know how much, but they use it for their own clients, so it’s important to them.
My only concern would be the Discovery servers.


I have no intention of upgrading from my current version (1.27.9), as it’s been fine for years now.
It works, does what I need. There was an update a few months ago, but it offers nothing I need, and would only cause me a ton of work and re-testing to ensure it works as it currently does.
Apps don’t need continuous updating if they work.


I have an ancient Drobo.
Believe it or not, its only sound is the fan, which I can’t hear even when it’s running.
SSDs will still generate heat, so it will still need a fan.


Part of why I root is to block them using AdAway.


Seriously, WTFuck?
I’d be so pissed.
I think the nerd/tinkerer space today is stuff like self-hosting, local storage, and moving back from the cloud.
Apps that don’t phone home to someone else’s server, keeping your contacts, calendar, shopping list, etc. on your own hardware.


We could, if someone cared to put in the effort to make that happen.


Come on baby…
Light. My. Fire.


I’d look at getting a used SFF (Small Form Factor) desktop for a LOT less than that Ugreen. I paid less than $50 for mine - at that price I can run a second one when I’m ready.
I’m currently running an old Dell SFF as my server; I’ve had Proxmox on it with 5 internal 2.5" drives and the OS on the NVMe.
Initially it had 4GB of RAM and ran Proxmox with ZFS just fine (and those drives were various ages and sizes).
It idles at 18W, not much more than the 12W my Pi Zero W idled at, but it’s way more powerful and capable.


One drive failure means an array is degraded until resilvering finishes (unless you have a hot spare, in which case the array isn’t degraded for as long and resilvering onto a new drive isn’t as risky).
Resilvering is an intensive process that can push other drives to fail.
I have a ZFS system that takes the better part of a day (24 hours) to resilver a 4TB drive in an 8TB five-drive array (single parity) that’s about 70% full. While it’s resilvering I have to be confident my other data stores won’t fail (I have the data locally on 2 other drives and a cloud backup).
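If you want to keep an eye on how far along a resilver is without babysitting a terminal, something like this minimal sketch works; the pool name and polling interval are made up, and it only greps the human-readable `zpool status` output, which can vary between ZFS versions:

```python
#!/usr/bin/env python3
# Sketch: poll `zpool status` until the resilver finishes.
# "tank" and the interval are assumptions, not anyone's real setup.
import subprocess
import time

POOL = "tank"      # hypothetical pool name
INTERVAL = 600     # seconds between checks

while True:
    status = subprocess.run(
        ["zpool", "status", POOL],
        capture_output=True, text=True, check=True,
    ).stdout
    if "resilver in progress" in status:
        # Print the scan/progress lines (e.g. "... resilvered, 42% done ... to go")
        for line in status.splitlines():
            if "resilvered" in line or "in progress" in line:
                print(line.strip())
        time.sleep(INTERVAL)
    else:
        print("No resilver in progress; current pool status:")
        print(status)
        break
```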


“Two in RAID” only means 2 when the arrays are on different systems and the replication isn’t instant. Otherwise it only protects against hardware failures and not against you fucking up (ask me how I know…).
If the arrays are on 2 separate systems in the same place, they’ll protect against independent hardware failures without a common cause (a drive dies, etc.), but not against common threats like fire or electrical spikes.
Also, how long does it take to return one of those systems to fully functional with all the data from the other? That’s a risk all of us seem to overlook at times.


If you’re storing “critical data”, you want to look at redundancy (i.e. backups) rather than expecting a single store to never have issues. Drives will fail, and if they fail in a RAID the entire store is at risk until the array is restored. If you don’t have hot spares it’s at even more risk while it’s rebuilding. ZFS is less sensitive to this than traditional RAID, but even it can’t magically restore data from thin air.
The link above discusses the 3-2-1-1-0 rule, which I think is worth understanding because the 0 refers to zero errors on verified backups. Unverified backups are no backups at all. It’s not unusual in the SMB space to do a test restore of a percentage of files monthly (Enterprise has entire teams and automation around testing).
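As a rough illustration of that kind of spot-check, here’s a minimal sketch that hashes a random sample of files from a backup against the originals; the paths and sample percentage are placeholders, and a real job should log results somewhere you’ll actually read them:

```python
#!/usr/bin/env python3
# Sketch of a monthly spot-check: hash a random sample of backed-up files
# and compare against the live copies. SOURCE/BACKUP are hypothetical.
import hashlib
import random
from pathlib import Path

SOURCE = Path("/data")             # hypothetical live copy
BACKUP = Path("/mnt/backup/data")  # hypothetical backup copy
SAMPLE_PCT = 2                     # verify roughly 2% of files

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

files = [p for p in BACKUP.rglob("*") if p.is_file()]
if not files:
    raise SystemExit("Backup looks empty - that is a finding in itself")

sample = random.sample(files, max(1, len(files) * SAMPLE_PCT // 100))
mismatches = 0
for backed_up in sample:
    original = SOURCE / backed_up.relative_to(BACKUP)
    if not original.exists() or sha256(original) != sha256(backed_up):
        print(f"MISMATCH: {backed_up}")
        mismatches += 1

print(f"Checked {len(sample)} of {len(files)} files, {mismatches} mismatches")
```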


I’d be surprised if it could do even a gig.


If you ever feel like you can’t find the right screws or it just doesn’t hold together well when you put it back, just Goop the bastard back together.
So much stuff in my life is now Gooped together - I even Gooped some drives into a desktop that lacked enough mount points.
That stuff is magic in a tube.
The only concern I see here is the external drive. My experience has been that powered-off drives fail more often than constantly-on drives. So my external drives are always powered on, and I just run a replication script to them on a schedule.
But you do have good coverage, so that’s a small risk.
For stuff like movies I simply use replication as my backup.
Since I share media with friends/family, I act as the central repository and replicate to them on a schedule (Mom on Monday, Friend 1 on Tuesday, etc.; rough sketch below), so I have a few days to catch an error. It’s not perfect, but I check those replication logs weekly.
I also have 2 local replicas of media, so I’m pretty safe.
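Something along these lines, meant to run daily from cron; the hostnames, paths, and log location are made up, and the point of the staggering is that a mistake only reaches one replica before the weekly log check:

```python
#!/usr/bin/env python3
# Sketch of staggered replication: one rsync destination per weekday.
# All names here are hypothetical examples.
import datetime
import subprocess

MEDIA = "/srv/media/"  # hypothetical library; trailing slash matters to rsync

# date.weekday(): 0 = Monday ... 6 = Sunday
SCHEDULE = {
    0: "mom@mom-nas:/volume1/media/",        # Monday
    1: "friend1@friend1-box:/data/media/",   # Tuesday
}

dest = SCHEDULE.get(datetime.date.today().weekday())
if dest is None:
    raise SystemExit(0)  # nothing scheduled today

# -a preserves metadata, --delete mirrors removals, --log-file keeps the
# record to skim once a week.
result = subprocess.run(
    ["rsync", "-a", "--delete",
     "--log-file=/var/log/media-replication.log",
     MEDIA, dest],
)
raise SystemExit(result.returncode)
```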


In defense of end users, they’ve got stuff to do and can’t be bothered to spend time on something that will make no obvious difference to what they need to do.
The average person can’t even describe how a toaster works, let alone anything even slightly more complicated.
And these users have skill sets in other areas - I don’t expect an accountant to know how a computer works, any more than they expect me to understand accountancy or finance.


You’re missing the point - he’s elevating the CLI above all else, which you don’t have on a TV or on mobile.
Yes, I know there are media clients, I’ve used them all. And that screenshot is hideous - compare it to Jellyfin on mobile, which looks just like Netflix used to.
Besides, he’s not doing anything different from running a “server stack” (which isn’t accurate anyway - he’s still running a server, the device hosting the media services, even if they’re native to the OS).
Xerox PARC didn’t invest millions in the ’70s because the CLI was so great.
We don’t use a CLI on our microwaves, toaster ovens, TVs, clocks, lights, etc., for a reason.
Thanks for the update
I’ve used Fork for so long I forget it’s a separate app