Thanks to this community I’ve learned a lot and I’m feeling inspired. I’ve loved having a NAS for the last few years, but it’s woefully underpowered for what I’m using it for these days.
So I’ve ordered some basic PC parts; gonna build a basic setup using an old CPU I’ve got lying around and try the NAS OS I saw talked about on here recently.
TrueNAS looks like a good option, with only slight fears it’ll go down the well-known path to the dark side like so many free options before it.
In any event, I’m looking forward to adding Nextcloud and Jellyfin, to trying out Docker and generally having more control over things.
Thanks again to you all for informing and inspiring.
I’ll be back if I get questions!
When you end up with a mini homelab, look into komo.do for container orchestration instead of overkill options like Kubernetes or Portainer.
I prefer Dockge for putting all of my compose stacks in one place.
And being able to manage multiple hosts in one UI is the absolute tits. There are a few features I miss from portainer but none strong enough to pull me back. And no bs SaaS licensing and costs…
Portainer is way too bloated for personal use. I liked it initially, but the licensing shit was, well, shit, and the way it managed compose files was garbage. Dockge is way better for my use case, since it works alongside Docker, instead of fucking off to do its own thing.
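For context on the “works alongside Docker” point: Dockge manages plain compose files on disk (by default under /opt/stacks), so a stack is just a directory. The jellyfin stack below is a made-up example to illustrate the layout:

```shell
# Dockge's default layout: one directory per stack under /opt/stacks,
# each holding an ordinary compose file that Docker can use directly.
#
#   /opt/stacks/
#   └── jellyfin/            <- example stack name, not prescriptive
#       └── compose.yaml
#
# Because it's a normal compose file, you can bypass Dockge entirely:
cd /opt/stacks/jellyfin && docker compose up -d
```

That’s the difference from Portainer: if Dockge disappears tomorrow, your stacks are still just compose files you can drive by hand.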
So what’s the threshold for ‘mini’ vs ‘you need to stop’…? Number of hosts, or number of containers, or number of public services, or…
Not sure; I currently have 8 nodes and 40 apps running.
When my QNAP finally died on me, I decided to build a DIY NAS and did consider some of the NAS OSes, but I ultimately decided that I really just wanted a regular Linux server. I always find the built-in app stores limiting and end up manually running Docker commands anyways so I don’t feel like I ever take advantage of the OS features.
I just have an Arch box and several docker-compose files for my various self-hosting needs, and it’s all stored on top of a ZFS RAIDZ1 array. The array does monthly scrubs and sends me an email with the results. Sometimes keeping it simple is the best option, but YMMV.
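For anyone who wants the same scrub-and-report setup, here’s a rough cron sketch. The pool name `tank`, the recipient address, and a working local mail setup are all assumptions; ZFS’s own `zed` daemon can also do this natively:

```shell
# /etc/cron.d/zfs-scrub -- assumes a pool called "tank" and a
# configured mail transport (e.g. msmtp); adjust for your system.

# Kick off a scrub at 02:00 on the first of every month.
0 2 1 * *  root  zpool scrub tank

# A day later, mail the pool status so the scrub results land in your inbox.
0 2 2 * *  root  zpool status tank | mail -s "ZFS scrub report" admin@example.com
```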
My NASes are purely NAS; I prefer a Debian server for… pretty much everything. But my storage only does storage; I keep those separate (even for an old PC acting as a NAS).
No matter what goes down, I can bring it back up, even with a hardware failure.
I used to do that. I had a QNAP NAS and a small Intel NUC running Arch that would host all my services. I would just mount the NAS folders via Samba into the NUC. Problem is that services can’t watch the filesystem for changes. If I add a video to my Jellyfin directory, Jellyfin won’t automatically initiate a scan.
Nowadays, I just combine them into one. Just seems simpler that way.
I just have my downloader trigger a scan at completion.
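For reference, Jellyfin exposes a library-refresh endpoint, so a downloader’s post-processing hook can be as small as this (the URL and API key are placeholders; generate a real key under Dashboard → API Keys):

```shell
#!/bin/sh
# Post-download hook: ask Jellyfin to rescan its libraries.
# Both values below are placeholders -- substitute your own.
JELLYFIN_URL="http://jellyfin.local:8096"
API_KEY="your-api-key-here"

curl -fsS -X POST \
  -H "Authorization: MediaBrowser Token=${API_KEY}" \
  "${JELLYFIN_URL}/Library/Refresh"
```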
I have a few Proxmox clusters going; combining it all wouldn’t be practical. This way my servers (tiny/mini/micros I’ve repurposed) stay small with decent-sized SSDs, big storage lives in two NASes, and a third handles backups.
I would just mount the NAS folders via Samba into the NUC. Problem is that services can’t watch the filesystem for changes. If I add a video to my Jellyfin directory, Jellyfin won’t automatically initiate a scan.
That sounds like a config issue. I use NFS shares in a similar way, and Plex/*arr/etc has zero issues watching for changes.
I think it’s a samba limitation. Maybe NFS works well for that case.
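If you want to try NFS for this, the setup is just an export on the NAS and a mount on the app box. The hostnames and paths below are made up:

```shell
# On the NAS, export the media directory (/etc/exports):
#   /srv/media  192.168.1.0/24(ro,no_subtree_check)
# then reload the export table:
sudo exportfs -ra

# On the app server, add a line to /etc/fstab:
#   nas.local:/srv/media  /mnt/media  nfs  ro,soft,timeo=30,_netdev  0  0
# and mount it:
sudo mount /mnt/media
```

Worth noting: inotify generally doesn’t fire for changes made on the remote side of a network mount either, so apps like Plex and the *arrs typically fall back to periodic polling there.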
Out of curiosity, as an owner of a QNAP NAS: how did it go out? Any signs it was on its last legs? Now that I’ve used one, the form factor is the only thing that beats most of the options that were out there when I got it.
Nowadays QNAP, Synology, and the other NAS vendors supposedly offer a lot of extra value with their cloud options, but I find those a sure way to get hacked, given the average company’s investment in security (I work in IT; it is a sad affair sometimes) combined with all the ransomware specifically targeting these boxes because of the old packages they rely on. So I’ll build my next system from the ground up, even if the initial cost is higher and the result is uglier.
It was this nasty Intel clock drift bug: https://forum.qnap.com/viewtopic.php?t=157459
Support was completely unresponsive and refused to do anything. Didn’t even acknowledge the issue, AFAIK. I tried the resistor fix, but my unit didn’t expose the right pins, so I couldn’t have soldered it on even if I wanted to. Then I tried mounting my drives in another Linux machine, at which point I realized they were using some custom version of LVM that didn’t work with standard Linux. I ended up having to buy a new QNAP NAS just to retrieve my data, and then I returned it.
After that, I swore off proprietary NASes. If I can’t easily retrieve data from perfectly good drives, it is an absolute no go.
If I can’t easily retrieve data from perfectly good drives, it is an absolute no go.
I’ve run the same md-raid array in three different machines (OK, I’ve added and swapped a couple of drives, but still). I love that about md-raid. Pull the drives out of one system, stick them into another system with mdadm installed, and it recognizes the array immediately.
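On the new machine, the whole migration really is about this short. A sketch, with the device name and mount point assumed (check `/proc/mdstat` for yours):

```shell
# Scan attached drives for md superblocks and assemble any arrays found.
sudo mdadm --assemble --scan

# Confirm the array came up (members, RAID level, sync state).
cat /proc/mdstat
sudo mdadm --detail /dev/md0   # /dev/md0 is an assumption; see mdstat

# Mount it wherever you like.
sudo mount /dev/md0 /mnt/array
```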
I have a feeling I may find myself here in time, as I develop this setup more.
If you’re familiar with Linux, I highly recommend it. The flexibility is just great, and you can set up whatever dashboards / management tools you need. No need to tie yourself to a specific solution IMHO.
If you’re going with Docker containers, a lot of the NAS OSes just hold you back because they don’t support all the options that Docker offers. You’ll be fighting the system if you need to do any advanced Docker configuration.
Thank you!
I’m not familiar, yet. My background is Microsoft OSes, though it goes back as far as the CLI days, so I’m confident I’ll learn fast.
If you want reliability, keep your NAS as a NAS; don’t run applications on the same system. If you screw something up, you’ll have to rebuild the whole thing. Run your applications in a VM at the minimum, that way you can just blow it away and start over if it gets fucked, without touching the NAS.
I feel like containers work just as well for the “blow it away” usecase though and it doesn’t have the VM overhead.
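Agreed; the container version of “blow it away and start over” is a couple of commands run from the stack’s compose directory:

```shell
# Nuke and recreate a compose stack -- the container equivalent of
# rebuilding a VM from scratch.
# WARNING: --volumes also deletes the stack's named volumes (its state).
docker compose down --volumes
docker compose pull        # fetch fresh images
docker compose up -d       # recreate everything from the compose file
```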
I went with OMV on older but plenty capable hardware (Intel 4th-7th gen) because 1. I’m cheap, and 2. I could configure it how I wanted.
Glad I went that way, because I was considering “upgrading” to a Synology for a while.
I now have my OMV NAS (currently running on a very-unstressed 2014 Mac mini and a 4-bay drive enclosure), and a separate Proxmox cluster with multiple VMs that use the NAS through NFS shares. Docker-focused VMs are managed by local Dockge instances, which is incredibly handy for visualizing the stacks. Dockge instances can also link to each other, so I can log into any Dockge instance and have everything available.
I can do command line stuff just fine, but I am a visual person, so having all that info right in front of me on one page is very, very helpful.
Oh yeah. I bet you’re feeling lucky you didn’t switch to Synology given the recent drama where they’re locking features down to their branded hard drives, which we all know are just up-charged drives from regular vendors.
What drive bay enclosure are you using btw and how does it connect to your Mac mini?
Never heard of dockge. I’ll have to check it out! I’ve just been using podman and docker-compose scripts.
The drive enclosure I’m using is a Sabrent DS-SC4B, connected via USB 3. I’m currently collecting parts for an actual tower build based on a G4560T.
Interesting! I am assuming each drive shows up as an independent drive that you can raid up however you want in software? Man I was looking for something like this, but at the time I was building my NAS, I couldn’t find something similar so I just decided to build a whole new machine with enough space to contain the drives themselves. Had I known, I might have gone with this and a NUC or something. How’s the performance?
I am assuming each drive shows up as an independent drive that you can raid up however you want in software?
Yeah, each drive shows up as if it were individually attached to the machine. RAID them however you want (or don’t). I’ve got three 4TB drives in an 8TB RAID5, one 4TB drive holding data from my gaming PC that I’m working on moving to the RAID, and then a separate 8TB external drive that everything on the RAID array is rsynced to for backup (not ideal, but it’s something).
Man I was looking for something like this, but at the time I was building my NAS, I couldn’t find something similar so I just decided to build a whole new machine with enough space to contain the drives themselves. Had I known, I might have gone with this and a NUC or something.
I’m actually going the other way and building a proper server out of an ancient HP ProLiant ML110 G2 that my dad gave me. Shockingly, it’s fully ATX compatible and has 8+ drive bays. I’m just reusing the case, though, and stuffing it with more modern components; it was originally equipped with a Pentium 4 😂 I’m not a fan of the single USB connection for all that data.
How’s the performance?
Sufficient I suppose. Limited by the single USB 3 connection. The Mac mini isn’t stressed at all, but the RJ45 connector has some fucky Apple weirdness about it that causes it to go to sleep periodically. There’s a workaround for it that I applied a while ago, but it still drops out occasionally. But, that’s an Apple-specific problem, not the enclosure. The enclosure works fine.
I’m actually going the other way and building a proper server out of an ancient HP Proliant ML110 G2 that my dad gave me.
Haha, one of my top concerns at the beginning was form factor. I really could not find a decent 4-bay case at the time that wasn’t either super hard to build in or full-blown ATX. I think the closest I found was a Jonsbo N2, but it doesn’t leave enough room for a decent cooler. What I ended up going with was total overkill: an NZXT H1 with a PCIe NVMe expansion card that gave me 3 extra NVMe slots. So now I have a RAIDZ1 array made up of 4x 4TB SSDs. The overall form factor is nice, but the performance is ridiculously overspecced. My rationale, though, is that the SSDs were cheap enough, and I think they’ll outlast regular HDDs. I was annoyed at how my WD Reds died within 3-4 years back when I was still using my QNAP.
Now that locally hosting AI models is becoming a thing, I am kinda regretting going small form factor because I can’t cram GPUs in there. So now I am thinking maybe getting one of those 4-foot high small server cabinets and getting a few Sliger CX4170a’s and just building full PCs. I would probably move my main PC into that rack as well. But this is all just thoughts. Budget wise it’s a bit ridiculous, but one can dream!
Sufficient I suppose. Limited by the single USB 3 connection.
Dang, if they made an updated one with USB 4, that’d be sick. Heck, I wouldn’t even mind if they had multiple USB connections coming out of the thing, I just like the form factor.
Consider that a new, power-efficient CPU may end up cheaper overall by consuming less electricity over a few years!
I hadn’t considered that! Thank you.
I’m hoping the OS, as it’s designed for this, is going to be helpful in getting the right balance with power usage.
This is why I’m using a refurbished mini PC as my home server. Lower wattage for constant uptime at home. Also very quiet.
Bingo! I’ve got 4 mini-PCs (does a 2014 Mac mini count?), and one SFF. The average power draw of this cluster is barely ~90W.
Screenshot from my HASS dashboard:
Yeah, with the Win10 EOL there are loads of refurbs out there for cheap.
Only one of them is compatible with Windows 11 lmao - HP Elite G4 mini with an i7-8700T. Everything else is 7th gen or 4th gen.
You can calculate it! Take your power usage and compute the cost over a year.
I will soon add an SSD: since I finally moved from RAID 1 to RAID 5 (so more HDDs), the array consumes more electricity.
I can measure how much power it draws because the server is on a smart plug. I calculated an additional 20-30€ a year of electricity; adding an SSD for read/write cache would let the HDDs stop spinning, make things faster, and be cost-effective over a few years.
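The back-of-envelope math is simple enough to script. A sketch using the ~90 W cluster figure mentioned upthread and an assumed €0.30/kWh (substitute your own tariff):

```shell
#!/bin/sh
# Yearly electricity cost: watts * 24 h * 365 d / 1000 = kWh, then * price.
# Both inputs are placeholder assumptions -- plug in your measured draw.
WATTS=90
PRICE_PER_KWH=0.30

KWH_PER_YEAR=$(awk -v w="$WATTS" 'BEGIN { printf "%.0f", w * 24 * 365 / 1000 }')
COST=$(awk -v k="$KWH_PER_YEAR" -v p="$PRICE_PER_KWH" 'BEGIN { printf "%.2f", k * p }')

echo "${KWH_PER_YEAR} kWh/year, about ${COST} EUR/year"
```

At those assumed numbers that works out to roughly 788 kWh and €236 a year, which shows why shaving even 20-30 W off an always-on box adds up.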
I like TrueNAS! After trying it out on bare metal for a year or two, I now run it as a VM under Proxmox.
so awesome
You’re the second person to suggest that approach. I’ll check it out before I do the setup next week. Thanks!
You have plenty of options. I use Unraid because I bought it before it became a subscription. But I have a friend running Fedora Server with Cockpit and running everything from Docker containers. The options are endless. Proxmox is a great choice.
Expect to be ostracised here, but if your drives are “junk” (some have SMR), I got better parity performance with Windows Storage Spaces (WSS) than with Unraid. Recoverability and compatibility with old junk hardware were very good too, whereas the bits I had lying around gave me Linux driver conflicts. Trying to install ZFS on Linux gave me a headache, and I then realised I couldn’t easily expand the array when I found other cheap, crappy drives to add. WSS doesn’t care; it just keeps trucking.
As for a licence, the old “upgrade from the windows 7 enterprise key that got leaked” trick did it for me. Never paid for it.
I found that I needed to spend more on components with better driver support to have a working NAS on Linux. Windows isn’t open source, but for me it was the cheapest total cost option, and you can still run your containers in it.
I reckon write performance is maybe worse with WSS? I paid for a PrimoCache licence to work around that, though; now my SSD takes initial writes and slowly spools them over to the array as my 10-year-old CPU manages to calculate parity and write.
Don’t forget to donate to your favorite OSS projects.