

Yep.
I run Tailscale on every device that can run it, and have a TS router in one device at home for devices that can’t run it.
It’s my fallback for whenever Syncthing has a Discovery server failure.
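If anyone wants to replicate the subnet-router piece, here’s a minimal sketch in Python that just shells out to the real `tailscale` CLI. The `192.168.1.0/24` range and the Linux sysctl step are assumptions - adjust for your own LAN and OS, and note the advertised route still needs approval in the Tailscale admin console.

```python
import subprocess

# Assumed LAN range for illustration; substitute your own.
LAN_SUBNET = "192.168.1.0/24"

def enable_subnet_router(subnet: str) -> None:
    """Advertise a LAN subnet over Tailscale so devices that
    can't run the client are reachable through this machine."""
    # Linux: let the kernel forward packets between interfaces.
    subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)
    # --advertise-routes is a real tailscale flag; the route still
    # has to be approved in the Tailscale admin console.
    subprocess.run(["tailscale", "up", f"--advertise-routes={subnet}"], check=True)

if __name__ == "__main__":
    enable_subnet_router(LAN_SUBNET)
```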


It’s a fantastic app, but it doesn’t do sync like Syncthing or Resilio Sync.
It can do something similar if you work at configuring it, but it can never monitor a remote and sync based on file changes there. That’s not a criticism, it’s a function of the file-system approach it takes - it can sync with many different file systems, but it doesn’t have a client at the other end - it simply interfaces with that file system. Fantastic, actually.
I’ve used it since about 2010; it was my solution for moving files back and forth for a long time. I still use it for specific things, but I’ve put more effort into ST and Resilio Sync configuration and management because they’re full-on sync suites.


Instant sync only works for local folders it can monitor. Since it doesn’t have a client on the other end, there’s no way to make this happen for a remote (it would have to monitor the destination).
That would require keeping a connection open between devices, which is a high cost from a network (and especially battery) perspective.
It’s a great app - I’ve used it for 10+ years and paid for it 2 or 3 times because it’s worth it.
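To illustrate the point about only being able to monitor the local side, here’s a minimal polling sketch in Python. `WATCH_DIR` and the `run_sync` body are hypothetical placeholders - wire in whatever transfer command you actually use. The remote end can’t be watched this way, which is exactly the limitation above.

```python
import os
import time
import subprocess

WATCH_DIR = "/home/me/docs"   # hypothetical local folder
POLL_SECONDS = 30

def snapshot(root: str) -> dict:
    """Map every file under root to its modification time."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished mid-walk
    return state

def run_sync() -> None:
    # Placeholder: substitute your real transfer command here.
    subprocess.run(["echo", "sync triggered"], check=True)

def main() -> None:
    last = snapshot(WATCH_DIR)
    while True:
        time.sleep(POLL_SECONDS)
        current = snapshot(WATCH_DIR)
        if current != last:   # something added, changed, or removed
            run_sync()
            last = current

if __name__ == "__main__":
    main()
```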


I’ve been using Fork for years. Möbius on iOS has financial support from a 3rd party that uses Syncthing in their own processes, so I suspect it will stay around.
That said, Resilio Sync is the other most-viable option I know (and use).
It’s a little less kind to battery with larger folder pairs, and uses more memory since it stores the index in RAM. But it’s robust.


Nothing official, sorry - I wish I did!
Mostly personal experience. But that experience is also shared among a group of peers and friends in the SMB space, whose clients think they can keep stuff on externals in an office safe, only to find the drives have gone tits up nearly every time they pull them out a couple of years later. And it’s not the enclosures, it’s the drives themselves - they all have external drive readers for just these kinds of circumstances.
In the enterprise you’d get laughed out of a datacenter for even suggesting cold drives for anything. Of course, that’s based around simple bit-rot concerns, and it’s why file systems like ZFS use a methodology to test/verify bits on a regular basis.
If nothing else, that bit rot should be enough of a reason not to store data on cold drives. It’s not what drives were designed (or tested) to do.
Edit: Everything I’ve read over the years suggests failures happen as much from things like lubricants hardening from sitting as from bit rot. I’ve experienced both: drives that spin up after ten years but have numerous data errors, and drives that just won’t spin up, while their counterparts that have run nearly continuously are fine (well, their bit rot was caught by the OS and mitigated). With a running drive you have monitoring, so you know its state.
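ZFS does that with block checksums and scheduled scrubs. For a plain file system on a cold drive, a rough approximation (my sketch, not a ZFS feature) is a hash manifest you build when you write the archive and re-verify every time the drive comes off the shelf:

```python
import hashlib
import json
import os
import sys

MANIFEST = "manifest.json"  # stored alongside the data

def sha256(path: str) -> str:
    """Stream a file through SHA-256 so big files don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build(root: str) -> None:
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name == MANIFEST:
                continue  # don't hash the manifest itself
            p = os.path.join(dirpath, name)
            manifest[os.path.relpath(p, root)] = sha256(p)
    with open(os.path.join(root, MANIFEST), "w") as f:
        json.dump(manifest, f, indent=2)

def verify(root: str) -> None:
    with open(os.path.join(root, MANIFEST)) as f:
        manifest = json.load(f)
    for rel, want in manifest.items():
        p = os.path.join(root, rel)
        got = sha256(p) if os.path.exists(p) else "MISSING"
        if got != want:
            print(f"ROT OR LOSS: {rel}")

if __name__ == "__main__":
    # usage: python scrub.py build|verify /mnt/archive
    {"build": build, "verify": verify}[sys.argv[1]](sys.argv[2])
```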


Meh, you got a spare kidney…


Fine - I wrote an extensive bit of help with links to QNAP docs and a few other things, and you downvote.
Fine, how about I just delete it, and y’all can go figure it out without my help.


I would definitely keep them warm, as in a running machine.
Drives on a shelf die more often than always-on drives.


I use a similar Dell Optiplex 7000 series.
It boots from the NVMe, with an 8TB 3.5" disk for data and a 500GB SSD for my VMs. (Since spinning disks can idle much lower than SSDs, getting my always-on VMs off the big drive lets it idle, with the SSD’s peak power being lower than the spinning disk’s peak. Adding the SSD increased net power slightly.)
I use a splitter on the 12v power line for both of the drives. It’s fine.
This box only has an 80w power supply, and with both those drives hooked up it draws 20w at idle, and peaks at 70w when converting multiple videos simultaneously.
The manual tells you what you can do without voiding the warranty.
Edit: Given its age, I’d pull the CPU cooler and replace the paste. It’s likely hardened by now. Mine was randomly rebooting because the CPU would overheat. Replaced the thermal paste and it’s been rock solid since.


I self host on a 5 year old Dell Optiplex Small Form Factor desktop.
I also have a Raspberry Pi, which has about 1/16 the performance of the desktop - the Pi can be used for all sorts of stuff.


Yep.
My Pi is about 8 watts. Really hard to beat.
The SFF started at 12w, but swapping out the data drive for a much larger one pushed it up 5w. And now with 2 VMs always running (PiHole and a Windows VM), it hovers at 20w.
The ancient NAS (Drobo) sits at about 15w.


The number one thing you can do, by orders of magnitude, is to start with power-friendly hardware.
For example, my previous server was an old gaming machine. Its lowest idle power consumption was 80 watts - and that was while running an OS that permitted heavy power-reduction control, with every power-saving feature enabled in the BIOS.
Compare that to the 2019 Dell Optiplex Small-Form-Factor desktop I’m running as a server. The power supply is rated for 80 watts, MAX. It idles at 20w and peaks at about 70w when converting multiple videos simultaneously. This is with an 8TB enterprise drive for data.
So 1/4 the power draw when idle, where it spends perhaps 90%+ of its time. Even things like Resilio Sync and Syncthing don’t significantly raise CPU time.
Streaming with Jellyfin or MediaMonkey has nearly no CPU impact.
There’s nothing in heavier hardware you could tune to get down to 20w.
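Some quick back-of-the-envelope math on what that idle difference costs over a year (the $0.15/kWh rate is an assumption - plug in your own):

```python
# Rough annual cost comparison for idle power draw.
IDLE_WATTS_OLD = 80      # old gaming machine, fully tuned
IDLE_WATTS_SFF = 20      # 2019 Optiplex SFF
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15     # assumed electricity rate, USD

def annual_cost(watts: float) -> float:
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

old, sff = annual_cost(IDLE_WATTS_OLD), annual_cost(IDLE_WATTS_SFF)
print(f"old: ${old:.0f}/yr  sff: ${sff:.0f}/yr  saved: ${old - sff:.0f}/yr")
# old: $105/yr  sff: $26/yr  saved: $79/yr
```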


That’s not data redundancy - there’s still only one copy of your data.
Those are mitigations against loss of data due to loss of parity.
There’s still only ONE copy of your data.


Fine.
Pull 1 drive and see how redundant your data is while it’s resilvering.
RAID is NOT data redundancy. You still have a single copy of your data.
Tell me again how RAID is data redundancy?
https://www2.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.html


RAID isn’t data redundancy, it’s an array of drives combined to form a single logical storage pool. It solves the problem of needing a single storage pool larger than the available drives. As such, it’s very sensitive to loss of a single drive.
At your storage size requirements (2 TB), RAID is unnecessary today.
Edit: Let me say it again for you downvoters - RAID is NOT data redundancy.
There is only ONE copy of your data in RAID (excepting mirroring). It’s why RAID now has double parity and hot-spare drive capability.
RAID is for creating a single pool that’s larger than available drive size.
Go ahead and downvote in ignorance, and learn about data redundancy when your RAID fails.
RAID is NOT data redundancy - it’s DRIVE redundancy.
Take it from the source https://www2.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.html
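If you want numbers, here’s a rough sketch of why copies beat parity. The 3% annual failure rate is an illustrative assumption, and it treats failures as independent, which real arrays often aren’t:

```python
# With striped RAID (no mirror), your data is ONE copy spread
# across N drives; independent backups are actual extra copies.
AFR = 0.03        # assumed annual failure rate per drive
N_DRIVES = 5      # drives in the array

# Chance at least one array drive fails this year
# (every failure opens a risky rebuild window):
p_any_failure = 1 - (1 - AFR) ** N_DRIVES

# Chance two *independent* full copies are both lost this year
# (treating each copy as one drive, failures independent):
p_both_copies = AFR ** 2

print(f"P(>=1 of {N_DRIVES} array drives fails): {p_any_failure:.1%}")
print(f"P(both independent copies fail): {p_both_copies:.4%}")
# ~14.1% vs ~0.09% under these assumptions
```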


Unless you need the super-compactness of a mini PC, the Small Form Factor is a significantly greater value.
You get more horsepower, more space, and better cooling.
And they tend to be very quiet. Mine only makes some fan noise when converting video, and it’s always running 2-5 VMs (mostly Windows).
I disagree with cold backup drives.
In my experience, cold drives fail more often than warm drives. This is why all my data replication is always warm.
All backup solutions should be regularly tested, otherwise you don’t know if you have a backup.
Others have mentioned backup, I’m going to reiterate that.
I have an (old) NAS that, frankly, I don’t trust not to die. Then again, anything can die, so it’s just one component of my data duplication.
I also have my server which is authoritative for all data, which is then duplicated (on schedules) to the NAS and 2 external drives, so I have 4 local copies.
All mobile devices sync important data to my server.
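And since backups should be regularly tested, here’s a minimal spot-check sketch, with hypothetical paths, that compares random files between the authoritative copy and a duplicate:

```python
import filecmp
import os
import random

SOURCE = "/srv/data"          # hypothetical authoritative copy
BACKUP = "/mnt/backup/data"   # hypothetical duplicated copy
SAMPLE = 25                   # files to spot-check per run

def all_files(root: str) -> list:
    """Collect paths relative to root so both trees can be compared."""
    out = []
    for dirpath, _, names in os.walk(root):
        out += [os.path.relpath(os.path.join(dirpath, n), root) for n in names]
    return out

def spot_check() -> None:
    files = all_files(SOURCE)
    for rel in random.sample(files, min(SAMPLE, len(files))):
        src, dst = os.path.join(SOURCE, rel), os.path.join(BACKUP, rel)
        # shallow=False compares actual contents, not just stat info
        if not (os.path.exists(dst) and filecmp.cmp(src, dst, shallow=False)):
            print(f"MISMATCH OR MISSING: {rel}")

if __name__ == "__main__":
    spot_check()
```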
Power
My NAS idles at about 15w. It’s 5 drives, so honestly that’s quite low, and it tells us the NAS spins down drives.
My server idles at 20w, using NVMe as the boot drive, a large data drive, and an SSD for virtual machines. Its power supply maxes at 80w (which it approaches when I’m converting videos with HandBrake).
Before this my server was an old gaming desktop that idled around 100w.
So my server today is a 5 year old Small-Form-Factor Desktop that I picked up for $50. I paid more for the RAM I added. It has enough room internally for one 3.5" drive and the 2.5" SSD…
It’s also quiet - the CPU and power supply fans double as case fans.


Hahaha, really? How, pray tell?
VPN can be run (and is run) such that it can’t be detected (more accurately, is incredibly difficult to detect).