I haven't used TrueNAS since it was still called FreeNAS.
I liked FreeNAS for a while, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
Nowadays, my "NAS" is one of those little "mini gaming PCs" you can buy on Amazon for around $400, and I have three 8-bay USB hard drive enclosures, each filled with 16TB drives, all with ZFS. I lose six drives to the RAID, so total storage is about 288TB, but even though it's USB it's actually pretty fast; fast enough for what I need it for anyway, which is to watch videos off Jellyfin or host a Minecraft server.
I am not 100% sure who TrueNAS is really for, at least in the "install it yourself" sense; if you know enough about how to install something like TrueNAS, you probably don't really need it...
I was like this in the "I love to spend a lot of time mucking about with my server and want to squeeze everything out of it that I can" phase.
In the last few years I've transitioned to "My family just wants Plex to work and I could give a shit about the details". I think I'm more of the target audience. When I had my non-TrueNAS ZFS setup I just didn't pay a lot of attention, and when something broke it was like re-learning the whole system over again.
My way of dealing with this is to ensure everything is provisioned and managed via gitops. I have a homelab repo with a combination of Ansible, Terraform (Tofu), and FluxCD. I don't have to remember how to do anything manually, except for provisioning a new bare metal machine (I have a readme file and a couple of scripts for that).
I accidentally gave myself the opportunity to test out my automations when I decided I wanted to rename my k8s nodes (FQDN rather than just hostname). When I did that, everything broke, and I decided it would be easier to simply re-provision than to troubleshoot. I was up and running with completely rebuilt nodes in around an hour.
But configuring a FreeBSD system with zfs and samba is dead easy.
In my experience, a vanilla install and some daemons sprinkled on top works better than these GUI flavours.
Less breakage, fewer quirks, more secure.
YMMV and I’m not saying you’re wrong - just my experience
I agree; I tried Free/TrueNAS and some other flavors of various things and always ran into annoying limitations and handholding I didn’t want; now I just use Gentoo with ZFS and do my own thing.
I can set up Samba, NFS, and ZFS manually myself, but why would I want to? Configuring Samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
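For anyone who hasn't done it lately, the manual dance is roughly this (a sketch; exact paths and service names vary by distro):

    # create a user, give it a separate Samba password, carve out a share directory
    sudo useradd -M -s /usr/sbin/nologin media
    sudo smbpasswd -a media                     # Samba keeps its own password database
    sudo mkdir -p /tank/media && sudo chown media: /tank/media
    # then hand-edit /etc/samba/smb.conf to add a [media] share block,
    # sanity-check it, and bounce the daemon
    testparm -s
    sudo systemctl restart smbd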
Similarly while docker's CLI interface is relatively nice, it's even nicer to just take my phone, open a browser, and push "update" or "restart" in a little gui to quickly get things back up & going. Or to add new services. Or whatever else I want. Sure I could SSH in from my phone, but that's awful. I could go get a laptop whenever I need to do something, but if Jellyfin or Plex or whatever is cranky and I'm already sitting on the couch, I don't want to have to get up and go find a laptop. I want to just hit "restart service" without moving.
And that's the point of things like TrueNAS or Unraid or whatever. It makes things nicer to use from more interfaces in more places.
Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.
But for probably 90% of users, they just want a UI they can click through and mostly use the defaults, then log into now and then to make sure things are good.
The UI is also especially helpful for focusing on things that matter, like setting up scrubs, notifications, etc. (though even there I think TrueNAS could do better).
It's why Synology persists, despite growing more and more hostile to their NAS owners.
> Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.
I think if you've gone through the effort of setting up Ansible scripts for setting up & maintaining a NAS, you probably are not actually making a NAS anymore. Like maybe you're running a Ceph or Gluster cluster or something, which can be fun to play with. Heck, I did that with a bunch of ODROID-HC2's as well. It was fun to set up a cluster storage system.
It also wasn't practical at all, and at no point was it ever a serious replacement for my "real" NAS (which currently is Unraid, but I'd absolutely consider switching to TrueNAS in a future upgrade), since the main feature a NAS needs to provide is uptime.
That's fair enough; I certainly understand why you might do this if you're just buying something pre-made and letting it sit in a network closet or something; having something that is just pre-made that you can use has advantages.
I guess the thing is that I've never done that with [Free|True]NAS :). I've always used some kind of hardware (either retired rack mount servers or thin clients) and installed FreeNAS on there, and then it never really felt like it saved me a lot of time or effort compared to just doing it all manually.
Of course, I'm being a typical "Hacker News Poster" here; I will (partly) acknowledge that I am the weird one, but at the same time, as I said, if you're in the market to install TrueNAS on your own hardware, you are also probably someone who could relatively easily Google your way through doing it manually in roughly the same amount of time, at least with NixOS.
For me, this is about personal priorities. I’ve been a professional sysadmin and definitely have the skills to build a NAS from scratch - I’ve done it more than once.
But this is one aspect of my life that I look at as core infrastructure that needs to just work, and I don’t really gain anything from rolling my own in this particular category.
I still run a mini home lab where I do all kinds of tinkering. Just not with storage.
But I also completely understand wanting to do this all manually. I’ve been there. That’s just not where I am today.
> but at the same time, as I said, if you're in the market to install TrueNAS on your own hardware, you are also probably someone who could relatively easily Google your way through doing it manually in roughly the same amount of time, at least with NixOS.
Sure, but both are free options; one just takes strictly less work to do the task of being a NAS. Why would I pick the one that's more work for the same result? If I want a NAS, why would I roll my own with NixOS instead of just picking a distro that focuses on being a NAS out of the box? What are the benefits of doing it manually?
If I want to just play around with stuff in a homelab setting, that's what proxmox clustering is for :) But storage / NAS is boring. It just needs to sit there doing basic storage stuff. I want it to do the least amount of things possible because everything else depends on storage being there.
This reminds me of the famous HN Dropbox comment [0]. It was perfectly correct and yet so wrong at the same time. TrueNAS is probably for the people who want the power and flexibility with almost none of the hassle. Ironically, the people who have to deal with this professionally every day probably want to leave the work at work.
Having a playground/homelab at home is one thing, but playing with your family's data and access to it can get annoying really fast.
[0] https://news.ycombinator.com/item?id=9224
I think most people who don’t want to tinker would prefer a Synology or similar NAS solution.
The problem with TrueNAS is it fills the niche where it’s targeting people who want to tinker, but don’t want to learn how to tinker. Which is likely a smaller demographic than those who are willing to roll their own and those who just want a fully off-the-shelf experience.
I also think Synology would be closer to the Dropbox experience than TrueNAS.
Yeah, that's what I was referring to when I said "Hacker News Poster", and it's why I said that I know I'm the weird one. I'm not completely out of touch.
It's a little different though; TrueNAS still requires a fairly high level of technical competence to install on your own hardware; you still need to understand how to manage partitions and roughly what a Jail is and basic settings for Samba and the like. It's not completely analogous to Dropbox because Dropbox is trivial for pretty much anyone.
I rolled my own with freebsd and ZFS. And set up some media server apps in a freebsd jail. It's not as polished and I'm missing out on some features, but I'm definitely learning a lot.
I actually can’t recall the last time I setup a share on SMB, has to be years if not tending towards decades.
A few big shares is all I really need; I no longer create a share for every single idea/thing I can think of.
> I can setup Samba, NFS, and ZFS manually myself, but why would I want to? Configuring samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
I agree for the most part, though even vanilla Ubuntu has Cockpit if you need a GUI.
Personally I find that getting it set up with NixOS is pretty straightforward and it's "set and forget", and generally it's not too hard to find configurations you can just copy-paste that are done "correctly". And of course, if you break something you can just reboot and choose a previous generation. Of course, restarting a service still requires SSHing in and running `systemctl restart myapp`, so YMMV.
I want to configure it myself because now I know exactly how it works. The configuration options I’ve chosen won’t change unless I change them. Disaster recovery will be easy because when I move the disks to a new machine, LVM will just start working.
I’ll take Configuration Management for $100, Alex.
What do you mean by USB hard drive enclosures? Are you limiting the RAID (8 bay) throughput by a single USB line?! That's like towing a ferrari with a bicycle.
Nope!
I have one enclosure plugged into a USB 3.0 line, another plugged into a "super speed" line, and one plugged into a Thunderbolt line (shared with my 10GbE Thunderbolt card with a 2x20 gigabit splitter).
This was deliberate, each is actually on a separate USB controller. Assuming I'm bottlenecked by the slowest, I'm limited to 5 gigabits per RAID, but for spinners that's really not that bad.
ETA: It's just a soft RAID with ZFS, I set up each 8-bay enclosure with its own RAIDZ2, and then glued all three together into one big pool mounted at `/tank`. I had to do a bit of systemd chicanery with NixOS to make sure it mounted after the USB stuff started, but that was like ten lines of config I do exactly once, so it wasn't a big deal.
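Conceptually the pool is just three RAIDZ2 vdevs in one pool, something like this (device names illustrative, obviously):

    # one raidz2 vdev per 8-bay enclosure, all striped together in a single pool
    zpool create tank \
      raidz2 /dev/disk/by-id/usb-ENCLOSURE1-DISK{1..8} \
      raidz2 /dev/disk/by-id/usb-ENCLOSURE2-DISK{1..8} \
      raidz2 /dev/disk/by-id/usb-ENCLOSURE3-DISK{1..8}
    # (or zpool add the later vdevs as the enclosures arrived)
    zpool status tank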
Have you researched the USB-SATA bridge chips in the enclosures? Reliability of those chips/drivers on Linux used to be very questionable a few years ago when I looked around. Not sure if the situation has improved recently given the popularity of NAS devices.
From my research a couple years ago it seemed like most issues involved feeding a bridge into a port multiplier, so I got a multi drive enclosure with no multipliers. I've had no problems so far even with a disk dying in it.
Though even flaky adapters just tend to lock up, I think.
It seems to work, and has been running for years without issue. `zpool scrubs` generally come back without issue.
That's a benefit of ZFS: it doesn't trust that the drives actually wrote the data (the so-called RAID write hole), since most RAID doesn't actually do that checking, and drives haven't had trustworthy per-block checksums in a long time. ZFS checksums everything to make sure.
288TB spread over 24 drives on soft RAIDZ2 over USB?! You did check the projected rebuild time in the event of a disc failure, right?
Didn't have to do the projection, I've had to replace a drive that broke. It took about 20 hours.
ETA: It would certainly take longer with more data. I've not gotten anywhere close to the 288TB
> fast enough for what I need to for anyway, which is to watch videos off Jellyfin or host a Minecraft server.
4k Blu-ray rips peak at over 100 Mbps, but usually average around 80 Mbps. I don't know how much disk I/O a Minecraft server does ... I wouldn't think it would do all that much. USB2 (high-speed) bandwidth should be plenty for that; although filling the array and scrubbing/resilvering would be painful.
Even though I have over four hundred Blu-rays, I would of course NEVER condone breaking the DRM and putting them on Jellyfin no matter how easy it is or how stupid I think that law is because that would be a crime according to the DMCA and I'm a good boy who would never ever break the law.
That said, I have lots of home movies that just so happen to be at the exact same bitrates as Blu-rays and after the initial setup, I've never really had any issues with them choking or any bandwidth weirdness. Minecraft doesn't use a ton of disk IO, especially since it is rare that anyone plays on my server other than me.
I do occasionally do stuff that requires decent bandwidth though, enough to saturate a WiFi connection at the very least, and the USB3 + USB SS + Thunderbolt never seems to have much of an issue getting to Wifi speeds.
>so total storage is about ~288TB
?!?
How do you fill 288 TB? Is it mostly media?
>I liked FreeNAS for awhile, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
I've been a mostly happy TrueNAS user for about four years, but I'm starting to feel this way.
I recently wrote about expanding my 4-disk raidz1 pool to a 6-disk raidz2 pool.[1] I did everything using ZFS command-line tools because what I wanted wasn't possible through the TrueNAS UI.
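(For anyone curious, the generic command-line route for this kind of reshape is roughly the below: send the data to a freshly created raidz2 pool and swap the names over. Not necessarily the exact steps from the post, which has the real details.)

    # assumes the six disks for the new pool are available while the old pool is live
    zpool create tank2 raidz2 disk1 disk2 disk3 disk4 disk5 disk6
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -Fdu tank2
    # verify the copy, then retire the old pool and take over its name
    zpool destroy tank
    zpool export tank2
    zpool import tank2 tank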
A developer from iXsystems (the company that maintains TrueNAS) read my post and told me that creating a ZFS pool with the ZFS command-line tools is not supported, and so I may hit bugs when I use the pool in TrueNAS.
I was really surprised that TrueNAS can't just accept whatever the state of the ZFS pool is. It feels like an overreach that TrueNAS expects to manage all ZFS interactions.
I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
[1] https://mtlynch.io/raidz1-to-raidz2/
[2] https://www.reddit.com/r/truenas/comments/1m7b5e0/migrating_...
> How do you fill 288 TB? Is it mostly media?
I kind of purposefully don't fill it up :).
This has been a project of tech hoarding over the last ~8 years, but basically I wanted "infinite storage". I wanted to be able to do pretty much any project and know that no matter how crazy I am, I'll have enough space for it. Thus far, even with all my media and AI models and stock data and whatnot, I'm sitting around ~45TB.
On the off chance that I do start running low on space, there's plenty of stuff I can delete if I need to, but of course I probably won't need to for the foreseeable future.
> I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
Yeah, that's what I do; I have NixOS run Samba on my RAID, and it works fine. It was like fifteen lines of config, version controlled, and I haven't thought about it in months.
I'm in a similar position on my storage... though mostly because I bought a new NAS with the intent of storing Chia plots, but after 3 days I decided it was a total waste as I'd never catch up to the bigger miners... So I repurposed it for long-term storage and retired my older NAS.
The older NAS was 4x 4TB drives when I retired it and passed it to a friend.
The current NAS is a 6-bay Synology with RAM and NVMe upgraded, along with a 5-bay expansion, all with 12TB drives in 6-drive 2-parity and 5-drive 2-parity arrays. I'm just under 40TB used, mostly media.
Well you got it, I just wonder if half that storage would still be effectively infinite for you
I bought the drives used on Ebay, and the prices weren't that different for the 8TB vs 16TB when I bought them. Like on the order of ~$15 higher, so I figured I'd just eat that cost and future proof it even more.
288TB might be enough to store a complete set of laser disk arcade games. /s
I don't think that /s is needed, that's probably true. The entirety of Gamecube games is only a few terabytes.
Same for me, after a while you just want to do something the "managed" software doesn't support.
Now I just run Ubuntu/Samba and use KVM and docker for anything that doesn't need access to the underlying hardware.
Yeah, same. Almost all of the NAS packages sacrifice something - they're great places to start, but just getting Samba going with Ubuntu is easy enough.
Do you think the power consumption matters on your box here? Should you care about the "USB bottleneck"? How do you organize this thing so it's not a mess of USB cables? I kinda wanna make it look esthetically nice compared to something like a proper nas box.
> Do you think the power consumption matters on your box here?
It's actually not too bad; the main "server" idles at around 14W and the power supply for it only goes to 100W under load. The drive bays go up to 100W (I think) but generally idle around 20W each. All together it idles at around ~70-80W.
Not that impressive BUT it replaced a big rack mount server that idled at about 250W and would go up to a kilowatt under load.
> Should you care about the "USB bottleneck"?
Not really, at least not for what I'm doing. I generally can get pretty decent speeds and I think network is often the bottleneck more than the drives themselves.
> How do you organize this thing so it's not a mess of USB cables?
I don't :). It's a big mess of USB cables that's hidden in a closet. It doesn't look pretty at all.
I have a similar setup (a Dell Wyse 5070 connected to an 8-bay enclosure), though I do not use RAID; I simply have some simple rsync scripts between a few of the drives. I collect old 1-2TB hard drives as cold storage and leave them on a bookshelf. The rsync scripts only run once a week for the non-critical stuff.
Not to jinx it, but I have never had a hard drive failure since around 2008!
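The weekly jobs are nothing fancy, roughly this kind of thing (paths made up; a `--dry-run` first is always a good habit):

    #!/bin/sh
    # one-way weekly sync of the non-critical stuff to a second drive
    rsync -a --delete --info=stats1 /mnt/primary/media/ /mnt/backup1/media/
    # crontab entry: 3am every Sunday
    # 0 3 * * 0  /usr/local/bin/sync-media.sh >> /var/log/sync-media.log 2>&1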
I think that's a good idea. It's a more manual setup but also more space efficient (if you skip backing up eg linux isos) and causes less wear than RAID.
At least for now, until VPNs become illegal, I haven't worried too much about _all_ my linux ISOs being backed up.
TrueNAS is just web-based configuration management. As long as you only use the web UI, your system state can be distilled down to the config file it generates.
If you do a vanilla FreeBSD+samba+NFS+ZFS setup, you'll need to edit several files around the file system, which are easy to forget months down the line in case of adjustment or disaster recovery.
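On a stock FreeBSD box that's usually at least rc.conf, the Samba config, and the exports file (from memory, so treat this as a rough map rather than a checklist):

    # enable the services in /etc/rc.conf
    sysrc zfs_enable=YES samba_server_enable=YES nfs_server_enable=YES
    # shares and Samba users live here
    ${EDITOR:-vi} /usr/local/etc/smb4.conf
    # NFS exports here (or use per-dataset sharenfs properties instead)
    ${EDITOR:-vi} /etc/exports
    # plus /etc/periodic.conf for scrub scheduling and /boot/loader.conf for ARC tuning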
The difference between what you've built and TrueNAS may well only become evident if your ZFS becomes corrupted in the future. That isn't to say YOU won't be able to fix it in the future, but I wouldn't assume that the average TrueNAS user could.
You... swap the drive, run a ZFS replace command, and that's it. I know I'm coming at this from a particular perspective, but what am I missing?
i have 4x 4TB drives that are in my dead QNAP NAS.
i've wanted to get a NAS running again, but while the QNAP form factor is great, the QNAP OS was overkill – difficult to manage (too many knobs and whistles) – and ultimately not reliable.
so, i'm at a junction: 1) no NAS (current state), 2) custom NAS (form factor dominates this discussion – i don't want a gaming tower), or 3) back to an off-the-shelf brand (poor experience previously).
maybe the ideal would be a Mac Mini that i could plug 4 HDDs into, but that setup would be cost-inefficient. so, it's probably a custom build w/ NixOS or an off-the-shelf, but i'm lacking the motivation to get back into the game.
I do recommend those little Beelink computers with an AMD CPU.
They can be had for a bit less than the Mac mini, and I had no issues getting headless Linux working on there. I even have hardware transcoding in Jellyfin working with VAAPI. I think it cost me about $400.
I tried a Mac Mini for a while, but they're just not designed to run headless and I ultimately abandoned it because I wanted it to either work or be fixable remotely. Issues I had:
- External enclosure disconnected frequently (this is more of an issue with the enclosure and its chipset, but I bought a reputable one)
- Many services can't start without being logged in
- If you want to use FileVault, you'll have to input your password when you reboot
Little things went wrong too frequently that needed an attended fix.
If you go off the shelf, I recommend Synology, but make sure you get an Intel model with QSV if you plan to transcode video. You can also install Synology's OS on your own hardware using Xpenology - it's surprisingly stable, more so than the Mac mini was for me.
> Get a Synology Intel model with QSV if you plan to transcode video
How about if you _don't_ plan to transcode video? For example, among this year's models, the DS425+ uses the (six-year-old) Intel Celeron J4125, while the DS925+ uses the (seven-year-old) AMD Ryzen Embedded V1500B. Why choose one over the other?
I use a QNAP 8-bay JBOD enclosure (TL-D800S) connected via SFF-8088 to a mini-itx PC - I find the form factor pretty good and don't have to deal with QNAP OS.
I was in the same boat - QNAP OS was a complete mess. Ended up nuking it and throwing Ubuntu on there instead. Nothing fancy, just basic config, but it actually works now. The other option is to pay for Unraid.
What are your requirements for a NAS? What do you want to use it for? Is it just to experiment with what a NAS is, or do you have a specific need like multiple computers needing common, fast storage?
I guess you could run a Linux VM and pass the disks through? I considered something similar for an ARM64 NixOS build server - though that application doesn’t need the disks.
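If the hypervisor happens to be Proxmox (just as an example), whole-disk passthrough without PCIe passthrough is one line per disk, something like:

    # attach a physical disk to VM 100 as a virtual SCSI device;
    # use /dev/disk/by-id paths so the mapping survives device reordering
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL_NUMBER

(With the usual caveat, echoed elsewhere in this thread, that ZFS prefers to see the real disks.)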
TrueNAS on a good bit of hardware - in my case the latest truegreen NAS is fantastic. You build it, it runs, it's bulletproof. Putting Jellyfin and/or plex on top of it is fantastic.
I do both. The primary server runs Proxmox and I have a physical TrueNAS box as backup server, so I have to do it by hand on Proxmox.
“Have to”, since I no longer suggest virtualizing TrueNAS even with PCI passthrough. I will say the same about ZFS-over-USB, but you do you. I’ve had too many bad experiences with both (for those not in the weeds here: both are officially very much not supported or recommended, but they _do_ work).
I really like the TrueNAS value prop - it makes something I’m clearly capable of doing by hand much easier and less tedious. I back up both my primary ZFS tank as well as my PBS storage to it, plus cold backups. It does scheduling, alerts, configuration, and shares, and nothing else. I never got the weird K8s mini cluster they ship - it seems like a weird thing that clashes with the core philosophy of just offering a NAS OS.
I'm starting a rebuild of my now ancient home server, which has been running Windows with WSL2 Docker containers.
At first, I thought I might just go with TrueNAS. It can manage my containers and my storage. But it's got proprietary bits, and I don't necessarily want to be locked into their way of managing containers.
Then my plan was to run Proxmox with a TrueNAS VM managing a ZFS raidz volume, so I could use whatever I want for container management (I'm going with Podman)
But the more I've researched and planned out this migration, the more I realize that it's pretty easy to do all the stuff I want from TrueNAS, by myself. Setting up ZFS scrubbing and SMART checks, and email alerts when something fishy happens, is pretty easy.
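For reference, that boils down to a handful of lines (a sketch; the smartd mail setup assumes a working MTA, and device names are examples):

    # monthly scrub via cron (or a systemd timer / your distro's zfs-scrub service)
    # 0 3 1 * *  /sbin/zpool scrub tank
    # smartd: weekly short self-test, monthly long test, mail on trouble
    # in /etc/smartd.conf:
    #   DEVICESCAN -a -o on -S on -s (S/../.././02|L/../01/./03) -m you@example.com
    # quick health checks that are easy to wire into any alerting
    zpool status -x        # prints "all pools are healthy" unless something is wrong
    smartctl -H /dev/sda   # overall SMART health verdict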
I'm beginning to really understand the UNIX "do one thing and do it well" philosophy.
Can you give a little more detail on how the enclosures work? Do you see each drive individually, or does each enclosure show up as one "drive" which you then put into a ZFS pool?
I'm looking to retire my power hungry 4790k server and go with a mini pc + external storage.
Replied to another comment with my setup - I use a QNAP JBOD enclosure (TL-D800S) connected to a mini-itx PC. (You do need at least one PCIe slot.) Shows up as 8 drives in the OS, the enclosure has no smarts.
I wouldn't do USB-connected enclosures for NAS drives - either SATA via SFF, or Thunderbolt.
I have actually made a Raspberry Pi based NAS and found it was a pain.
The SATA controller isn't terrible, but it and other parts of the hardware have had many strange behaviors over the years, to the point of needing to compile the kernel to fiddle with settings just to get a device to do what it's supposed to.
Even if you're using a power supply that is well supported, eventually you seem to hit internal limits and get problems. That's when you see people underclocking the chip to move some of this phantom power budget to other chips. Likewise you have to power most everything from a separate source, which pushes me even closer to a "regular PC" anyhow.
I just grab an old PC from Facebook for under $100. The current one is a leftover from the DDR3 + Nvidia 1060 gaming era. It's a quad core with HT so I get 8 threads. Granted, using most of those threads pushes the system to 90% usage even when running jobs with only 2 threads, probably because the real hardware being used there is something like AVX and it can't be shared between all of the cores at the same time.
The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
When my Raspberry Pi fails I need to start looking at configurations and hacks to get the firmware/software stack to work.
When my $100 random PC fails I look at the logs to find out what hardware component failed and replace it.
> The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
If your build allows it, the extra money for an LSI or a real RAID controller is well worth it. The no-name PCIe SATA cards are flaky and very slow. Putting an LSI in my NAS was a literal 10x performance boost, particularly with ZFS, which tends to have all of the drives active at once.
I am curious about the slow part. I use these crappy SATA cards and I am sure they are crappy, but the drives are only going to give 100MB/s in bursts and they have an LVM cache (or ZFS stuff) on them to sustain more short-term writes.
I get if I was wiring up NVME drives that are going to go 500MB/s and higher all the time.
What I really care about with the SATA, and what I mean by flaky, is when I have to physically reboot a system every day because the controller stays in a bad state even after a soft `reboot`, and then Linux fills up with I/O timeouts because the controller seems to stop working after X amount of time.
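Depending on how it wedges, a software-level PCI remove/rescan is sometimes enough to kick the controller back without hitting the power button - no promises, but it's cheap to try:

    # detach the SATA controller and re-probe the bus
    # (substitute the address reported by lspci for your card)
    echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove
    sleep 2
    echo 1 > /sys/bus/pci/rescan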
> probably because the real hardware being used there is something like AVX and it can't be shared between all of the cores at the same time.
That's not the right explanation; each physical core has its own vector ALUs for handling SSE and AVX instructions. The chip's power budget is shared between cores, but not the physical transistors doing the vector operations.
I don't know about TrueNAS, but with Proxmox the two random $10 SATA cards I tried only gave me issues. With the first one the OS wouldn't boot; the second seemed to work fine, but connected drives disappeared as soon as I wrote to them.
Used server-grade LSI cards seem to be the way to go. Too bad they're power hungry based on what I've read.
I do too. For many use cases it's awesome to use an ESP32, Raspberry Pi or Arduino when I want to stash some little widget that can sip a small battery over the next week. It's equally awesome that in many scenarios you can be net positive in your consumption with a super simple solar panel that provides a few watts here and there into a battery.
But at home things are different. While I want to use as little power as possible, the realistic plan for being sustainable at home is to use solar with batteries. That's a plan I actually think can matter and that I am able to participate in for relatively low cost ($10k)
Messing around with a system to save a few watts for me in this context isn't very valuable.
Yeah this is what keeps me from considering old PCs for NAS.
Maybe stating the blindingly obvious but seems like there is a gap in the market for a board or full kit with a high efficiency ~1-10W CPU and a bunch of SATA and PCIe ports.
Minisforum sort of working on it, I'd imagine the AMD "AI" processors are pretty low power at idle as they're mobile chips. Obviously has the downsides of other minipcs tho (high cost, low expandability)
Then you've got to consider what are you optimizing for. Is the power bill going to be offset by the cost of a Pi plus any extras you need, or a cheap second hand PC someone wants to clear out, or free if you can put an old serviceable PC you have already back into use. Is it heat? Noise? Space that it needs to hide away in? Airflow for the location?
Probably not very good. I selected large spinning hard drives because I could get them at a good price for 2TB each, I wanted to set up a RAID5-like system in ZFS and btrfs (lesson learned: btrfs doesn't actually support this correctly), and I wanted to get at least 10TB with redundancy.
I don't know how much power each of those SATA disks draws, but it's probably more than a single Raspberry Pi does.
Likewise it has a few case fans in it that may be pointless. I would prefer it never has a heating issue versus saving a few "whurrr" sounds off in a closet somewhere that nobody cares about.
It's also powering that Nvidia 1060 that I do almost nothing with on the NAS. I don't even bother to enable the Jellyfin GPU transcoding configuration because I prefer to keep my library encoded in h264 right now anyhow as I have not yet made the leap to a newer codec because the different smart TVs have varying support. And sometimes my daughter has a friend come over that has a weird Amazon tablet thing that only does a subset of things correctly.
The 1060 isn't an amazing card really, but it could do some basic Ollama inference if I wanted. I think it has 6GB of memory, which is pretty low, but usable for some small LLMs.
Shouldn't it be more of a "why" to install TrueNAS on a RPi?
The only reason I can see is "I have one that I don't use". Because otherwise...
Idle power isn't all that much better than a low power Intel N100 or something similar. And it's all downhill from there. Network transfer speeds and disk transfers will all be kneecapped by the (lack of) available PCIe lanes. Available RAM or CPU speeds are even worse...
That's addressed in the second section of the article:
> I've found numerous times, running modern applications on slower hardware is an excellent way to expose little configuration flaws and misconceptions that lead to learning how to run the applications much better on more capable machines.
It's less about the why, and more about the 'why not?' :)
I explicitly don't recommend running TrueNAS on a Pi currently, at the end (though I don't see a problem with anyone doing it for the fun, or if they need an absolutely tiny build and want to try Arm):
> Because of the current UEFI limitations, I would still recommend running TrueNAS on higher-end Arm hardware (like Ampere servers).
On a somewhat related note, would you trust a Pi-based NAS long term? I've not tried building one since the Pi 4, which understandably left a lot to be desired because of its hardware limitations, but that aside I still found the Pi as a piece of hardware somewhat quirky and unpredictable - power especially; I can't count the number of times simply unplugging a USB keyboard would cause it to reboot.
I've run a Pi NAS as my 2nd onsite replica for over a year without a hiccup, it's using a Radxa Penta SATA HAT with 4x SATA SSDs, and a 2.5 Gbps USB dongle for faster Ethernet[1].
I have been using a Raspberry Pi 4 (8 GB RAM) as my NAS for nearly 5 years. It is incredibly reliable. I run the following software on it: Ubuntu 64-bit, Samba, Jenkins, Postgres and MariaDB. I have attached external hard drives through a USB hub (because Pi does not necessarily have enough power for the external hard drive). I git push to public Samba folders on the Pi, and trigger Jenkins, which builds and installs my server using docker in the Pi.
On the one hand it is good to discover that someone is tackling getting TianoCore working on the Raspberry Pi 5.
On the other hand, they still have the destructive backspace behaviour, and inefficient recursive implementation, that breaks the boot loader spinners that the NetBSD and other boot loaders display. It's a tiny thing, but if one is used to the boot sequence the absence of a spinner makes the experience ever so slightly jarring.
I should add, by the way, that this nicely demonstrates M. Geerling's point here about catching bugs by running things on a Pi.
The TianoCore's unnecessarily recursive implementation of a destructive BS is slow enough, on a Pi 4, and in combination with how the boot loaders themselves emitted their spinners, that I could just, very occasionally, see parts of spinner characters flashing very briefly on the screen when the frame refresh timing was just right; which led me to look into what was going on.
Wasn't there a post somewhere on HN yesterday about how slowing down your programs can help you catch problems? Using low end hardware is an automatic way of forcing that :)
It was a fun project and looked cool but never really worked that well. It was quite unstable and drives seemed to disconnect and reconnect a lot. There are probably better quality connectors out there but I think for a NAS you really want proper SATA connections.
I eventually built my own box and went with OMV again. I like it because it's just userland software you install on Debian. Some of the commenters here who think TrueNAS is overkill might want to check out OMV if they haven't already.
To be honest I still only have a few TB of storage on it, probably not really enough to be worth all the hassle of building and configuring a PC, but it was more about the journey which was fun.
This is fun for learning purposes, but even with the PCIe 3 bus the Pi just isn't that great a server when compared to an Intel N-series machine.
I have two "normal" NAS devices, but I would like to find a compact N100 board with multiple SATA ports, (like some of the stackable HATs for the Pi, some of which will take 4 disks directly on the PCB) to put some older discarded drives to good use.
My go-to solution software-wise is actually to install Proxmox, set up ZFS on it and then drop in a lightweight LXC that exposes the local filesystem via SMB, because I like to tweak the "recycle bin" option and some Mac-specific flags--I've been using that setup for a while, also off Proxmox: https://taoofmac.com/space/notes/2024/11/09/1940#setting-up-...
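The tweaks in question are just a few smb.conf lines inside the container - the option names below are the standard Samba vfs modules, the values are illustrative:

    # appended to the share definition in /etc/samba/smb.conf inside the LXC
    cat >> /etc/samba/smb.conf <<'EOF'
    [shared]
       path = /srv/shared
       read only = no
       vfs objects = catia fruit streams_xattr recycle
       recycle:repository = .recycle/%U
       recycle:keeptree = yes
       fruit:metadata = stream
    EOF
    testparm -s && systemctl reload smbd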
The article states "I currently run an Ampere Arm server in my rack with Linux and ZFS as my primary storage server" and this is just explaining how to try it out on the Pi, which I found surprisingly interesting. I am glad people like the N100s and wish they would find more relevant articles to talk about them.
The warning about death of three SSDs doesn't inspire too much confidence to be honest... do you think it was due to the usage patterns of Proxmox default settings for ZFS?
Over time and lots of reading random information sources, I got notes about disabling several settings that are cool for datacenter-levels of storage, but useless for the kinds of "Raspberry-pi tied with a couple USB disks" that I was interested in.
The Odroid H4+ might be what you're looking for. It's a N97 SBC from a South Korean manufacturer that's been around a while. The "+" variant has 4 SATA ports. With an adapter board, 2-4 NVMe drives can be attached as well.
Odroid themselves sell simple cases for the H4+ like the Type 3 that can hold several drives if you don't mind the homebrew look. They also sell a mini-ITX kit that makes the board compatible with a mini-ITX case of your own choosing.
There are tons of mini-itx N100 boards with onboard 2.5G or 10G ethernet and 6-8 sata ports for a few hundred bucks available (on Amazon for example). search “n100 mini itx nas”.
There are also some decent 6 and 8 bay mini-itx cases.
I went looking for completed systems recently and couldn’t find any integrators that make them. Surprised nobody will take a few hundred bucks to plug in all the motherboard power/reset pin headers for me.
QNAP TS-435XeU is a $600 1U short-depth (11") case with quad hotswap SATA, dual NVME, dual 10GbE copper, dual 2.5GbE, 4-32GB DDR4 SODIMM Arm NAS that would benefit from OSS community attention. Includes hardware support for ZFS encryption.
Based on a Marvell/Armada CN9130 SoC which supports ECC, it has mainline Linux support, and public-but-non-upstream code for uboot. With local serial console and a bit of effort, the QNAP OS can be replaced by Arm Debian/Devuan with ZFS.
Rare combo of low power, small size, fast network, ECC memory and upstream-friendly Linux. QNAP also sell a 10GbE router based on the same SoC, which is a successor to the Armada 388 in Helios4 NAS (RIP), https://kobol.io/helios4/
No UEFI support, so TrueNAS for Arm won't work out of the box.
If you can build your own U-Boot you should be able to enable UEFI, no? It’s not as full-featured as EDK2 etc., but it works to boot Linux and I think the BSDs.
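Right - as far as I know U-Boot's EFI_LOADER is on by default for most arm64 boards these days, so it's mostly a matter of building it (the defconfig name below is a placeholder, not the real one for this SoC):

    git clone https://github.com/u-boot/u-boot && cd u-boot
    make CROSS_COMPILE=aarch64-linux-gnu- placeholder_cn9130_defconfig   # placeholder name
    grep CONFIG_EFI_LOADER .config    # expect =y; flip it in menuconfig if not
    make CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)"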
Wouldn't straight ZFS with a vanilla OS make more sense for low power devices? TrueNAS, esp the kubernetes flavour seems to have a decent bit of overhead last I looked at it
That TrueNAS ARM fork is great. I've set it up in VMWare Fusion on Mac mini M4 and it runs!
With Thunderbolt 4 M.2 NVMe enclosure, you can plug in M.2 SATA adapter to connect 8 SATA III drives. Little Pico PSU to power the HDDs and it makes a really low powered NAS.
My plan is to give TrueNAS a spin with two drives and if it is stable, move everything into it.
Probably the closest thing that already exists is just running Cockpit[1]. 45Drives even maintains some helpful storage and file sharing plugins for it[2], though some of those are only compatible with x86 for now.
Cockpit hasn't really improved in a while, though, and although I greatly appreciate 45Drives' commitment to it, last time I tried to install their stuff I had a lot of issues with deprecated dependencies...
So I just went "raw" smbd and never looked back, but then again I've been running Samba for almost two decades now and configuring it is almost second nature to me (I started doing it as an alternative to Novell/IPX, and later to replace AppleTalk...)
In practice, I've found that worked well because I very seldom had to do any changes to a file server once it was set up (adding shares was mostly a matter of cloning configs, and what minimal user setup there needed to be done in the enterprise was deferred to SAMBA/CIFS glue).
Quite true; raw configuration isn't as flashy so you can't make glitzy videos or blog posts about it (well, outside of the HNsphere at least).
But that's how 99% of the services I run are set up, just a bunch of configuration files managed by Ansible.
The only servers I run in production with a UI are two NASes at home, and the only reason for that is I want to run off the shelf units so I never have to think about them, since my wife and kids rely on them for Jellyfin and file storage.
I would imagine it's because it makes for a lot more fun support possibilities when all the underlying stuff in the stack (kernel, ZFS, Docker, Python, etc. etc.) is subject to the whims of the end user. When you ship the entire OS yourself you can be more certain about the versions of all the kernel+userland stuff and therefore the interactions between them all.
I'm not convinced that TrueNAS Scale is an improvement. I won't make the case that stability and maturity hamper Scale as a whole, but there are definitely discrepancies in SMB performance and other limitations, and the use of Kubernetes really overcomplicates things for home users (not saying that TrueNAS Scale Apps were great). I had a version upgrade of Scale catastrophically fail on one system because the Kubernetes implementation couldn't be reconciled during the upgrade (I had no enterprise support, and community support had no input, so I was forced to restart from scratch).
You also need a dedicated OS Drive for TrueNAS, which is reasonable in principle for critical systems, but doesn't always really meet the needs of home users with limited drive bays.
Secure boot isn’t mandatory, and if you want secure boot you don’t have to use Microsoft’s keys, you can enroll your own. Lanzaboote for NixOS for example doesn’t use shim - https://github.com/nix-community/lanzaboote .
Sometimes certain product lines act like they consider customers on rare occasion...
But most manufacturers try to lock the firmware down, and users only get a small subset of configuration menus. For example, the Gigabyte rtx based laptops require patching a machine specific bios to even gain access to the oem firmware areas.
Mostly the modern builds just created a bunch of problems nobody wanted, and didn't improve anything, as Asus, Gigabyte, and Razer recently showed.
If you are running signed code on many machines it might be a different story, but YMMV... Raspberry Pi avoided the signed-code features built into most Broadcom ARM chips for good reasons. =3
Amazing that you can build a career out of making useless things with RPi. I don't mean it as a negative thing about the author, but rather this kind of content.
It's a rare case where the jack of all trades (the RPi) is not at all better than a master of one, or even than other jacks of all trades. Running anything but the official distro is a pain. Managing the official distro is a pain. And even if it weren't, it doesn't have enough raw power or I/O to do anything really useful.
It's an amazing "temporary" solution because it's so awful that you will actually replace it with a proper one.
I haven't used TrueNAS since it was still called FreeNAS.
I liked FreeNAS for awhile, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
Nowadays, my "NAS" is one of those little "mini gaming PCs" you an buy on Amazon for around ~$400, and I have three 8-bay USB hard drive enclosures, each filled with 16TB drives all with ZFS. I lose six drives to the RAID, so total storage is about ~288TB, but even though it's USB it's actually pretty fast; fast enough for what I need to for anyway, which is to watch videos off Jellyfin or host a Minecraft server.
I am not 100% sure who TrueNAS is really for, at least in the "install it yourself" sense; if you know enough about how to install something like TrueNAS, you probably don't really need it...
I was like this in the "I love to spend a lot of time mucking about with my server and want to squeeze everything out of it that I can" phase.
In the last few years I've transitioned to "My family just wants plex to work and I could give a shit about the details". I think I'm more of the target audience. When I had my non-truenas zfs set up I just didn't pay a lot of attention, and when something broke it was like re-learning the whole system over again.
My way of dealing with this is to ensure everything is provisioned and managed via gitops. I have a homelab repo with a combination of Ansible, Terraform (Tofu), and FluxCD. I don't have to remember how to do anything manually, except for provisioning a new bare metal machine (I have a readme file and a couple of scripts for that).
I accidentally gave myself the opportunity to test out my automations when I decided I wanted to rename my k8s nodes (FQDN rather than just hostname). When I did that, everything broke, and I decided it would be easier to simply re-provision than to troubleshoot. I was up and running with completely rebuilt nodes in around an hour.
But configuring a FreeBSD system with zfs and samba is dead easy.
In my experience, a vanilla install and some daemons sprinkled on top works better than these GUI flavours.
Less breakage, fewer quirks, more secure.
YMMV and I’m not saying you’re wrong - just my experience
I agree; I tried Free/TrueNAS and some other flavors of various things and always ran into annoying limitations and handholding I didn’t want; now I just use Gentoo with ZFS and do my own thing.
I can setup Samba, NFS, and ZFS manually myself, but why would I want to? Configuring samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
Similarly while docker's CLI interface is relatively nice, it's even nicer to just take my phone, open a browser, and push "update" or "restart" in a little gui to quickly get things back up & going. Or to add new services. Or whatever else I want. Sure I could SSH in from my phone, but that's awful. I could go get a laptop whenever I need to do something, but if Jellyfin or Plex or whatever is cranky and I'm already sitting on the couch, I don't want to have to get up and go find a laptop. I want to just hit "restart service" without moving.
And that's the point of things like TrueNAS or Unraid or whatever. It makes things nicer to use from more interfaces in more places.
Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.
But for probably 90% of users, they just want a UI they can click through and mostly use the defaults, then log into now and then to make sure things are good.
The UI is also especially helpful for focusing on things that matter, like setting up scrubs, notifications, etc. (though even there I think TrueNAS could do better).
It's why Synology persists, despite growing more and more hostile to their NAS owners.
> Yeah; once you get deep enough, you realize you can just install things yourself, configure them with Ansible, or Nix, or whatever, and have full control.
I think if you've gone through the effort of setting up Ansible scripts for setting up & maintaining a NAS, you probably are not actually making a NAS anymore. Like maybe you're doing Ceph or Gluster cluster or something, which can be fun to play with. Heck, I did that with a bunch of ODROID-HC2's as well. It was fun to setup a cluster storage system.
It also wasn't practical at all and at no point did it ever seriously compete with replacing my "real" NAS (which currently is Unraid, but I'd absolutely consider switching to TrueNAS in a future upgrade), since the main feature that a NAS needs to provide is uptime.
That's fair enough; I certainly understand why you might do this if you're just buying something pre-made and letting it sit in a network closet or something; having something that is just pre-made that you can use has advantages.
I guess the thing is that I've never done that with [Free|True]NAS :). I've always used some kind of hardware (either retired rack mount servers or thin clients) and installed FreeNAS on there, and then it never really felt like it saved me a lot of time or effort compared to just doing it all manually.
Of course, I'm being a typical "Hacker News Poster" here; I will (partly) acknowledge that I am the weird one, but at the same time, as I said, if you're in the market to install TrueNAS on your own hardware, you are also probably someone who could relatively easily Google your way through doing it manually in roughly the same amount of time, at least with NixOS.
For me, this is about personal priorities. I’ve been a professional sysadmin and definitely have the skills to build a NAS from scratch - I’ve done it more than once.
But this is one aspect of my life that I look at as core infrastructure that needs to just work, and I don’t really gain anything from rolling my own in this particular category.
I still run a mini home lab where I do all kinds of tinkering. Just not with storage.
But I also completely understand wanting to do this all manually. I’ve been there. That’s just not where I am today.
> but at the same time, as I said, if you're in the market to install TrueNAS on your own hardware, you are also probably someone who could relatively easily Google your way through doing it manually in roughly the same amount of time, at least with NixOS.
Sure, but both are free options, one just takes strictly less work to do the task of being a NAS. Why would I pick the one that's just more work for the same result? If I want a NAS, why would I roll my own with NixOS instead of just picking a distro that focuses on being a NAS out of the box? What's the benefits of doing it manually?
If I want to just play around with stuff in a homelab setting, that's what proxmox clustering is for :) But storage / NAS is boring. It just needs to sit there doing basic storage stuff. I want it to do the least amount of things possible because everything else depends on storage being there.
This reminds me of the famous HN Dropbox comment [0]. It was perfectly correct and yet so wrong at the same time. TrueNAS is probably for the people who want the power and flexibility with almost none of the hassle. Ironically, the people who have to deal with this professionally every day probably want to leave the work at work.
Having a playground/homelab at home is one thing, but playing with your family's data and access to it can get annoying really fast.
[0] https://news.ycombinator.com/item?id=9224
I think most people who don’t want to tinker would prefer a Synology or similar NAS solution.
The problem with TrueNAS is it fills the niche where it’s targeting people who want to tinker, but don’t want to learn how to tinker. Which is likely a smaller demographic than those who are willing to roll their own and those who just want a fully off-the-shelf experience.
I also think Synology would be closer to the Dropbox experience than TrueNAS.
Yeah, that's what I was referring to when I said "Hacker News Poster", and it's why I said that I know I'm the weird one. I'm not completely out of touch.
It's a little different though; TrueNAS still requires a fairly high level of technical competence to install on your own hardware; you still need to understand how to manage partitions and roughly what a Jail is and basic settings for Samba and the like. It's not completely analogous to Dropbox because Dropbox is trivial for pretty much anyone.
I rolled my own with freebsd and ZFS. And set up some media server apps in a freebsd jail. It's not as polished and I'm missing out on some features, but I'm definitely learning a lot.
I actually can’t recall the last time I setup a share on SMB, has to be years if not tending towards decades.
A few big shares is all I really need; I no longer create a share for every single idea/thing I can think of.
> I can setup Samba, NFS, and ZFS manually myself, but why would I want to? Configuring samba users & shares via SSH sucks. It's tedious. It's error prone. It's boring.
I agree for the most part, though even vanilla Ubuntu has Cockpit if you need a GUI.
Personally I find that getting it set up with NixOS is pretty straightforward and it's "set and forget", and generally it's not too hard to find configurations you can just copypaste done "correctly". And of course, if you break something you can just reboot and choose a previous generation. Of course restarting still requires SSHing in and `systemctl restart myapp`, so YMMV.
I want to configure it myself because now I know exactly how it works. The configuration options I’ve chosen won’t change unless I change them. Disaster recovery will be easy because when I move the disks to a new machine, LVM will just start working.
I’ll take Configuration Management for $100, Alex.
What do you mean by USB hard drive enclosures? Are you limiting the RAID (8 bay) throughput by a single USB line?! That's like towing a ferrari with a bicycle.
Nope!
I have one enclosure plugged into a USB 3.0 line, another plugged into a "super speed" line, and one plugged into a Thunderbolt line (shared with my 10GbE Thunderbolt card with a 2x20 gigabit splitter).
This was deliberate, each is actually on a separate USB controller. Assuming I'm bottlenecked by the slowest, I'm limited to 5 gigabits per RAID, but for spinners that's really not that bad.
ETA: It's just a soft RAID with ZFS, I set up each 8-bay enclosure with its own RAIDZ2, and then glued all three together into one big pool mounted at `/tank`. I had to do a bit of systemd chicanery with NixOS to make sure it mounted after the USB stuff started, but that was like ten lines of config I do exactly once, so it wasn't a big deal.
Have you researched the USB-SATA bridge chips in the enclosures? Reliability of those chips/drivers on Linux used to be very questionable a few years ago when I looked around. Not sure if the situation has improved recently given the popularity of NAS devices.
From my research a couple years ago it seemed like most issues involved feeding a bridge into a port multiplier, so I got a multi drive enclosure with no multipliers. I've had no problems so far even with a disk dying in it.
Though even flaky adapters just tend to lock up, I think.
It seems to work, and has been running for years without issue. `zpool scrubs` generally come back without issue.
that's a benefit of zfs, it doesn't trust the drives actually wrote the data to the drives, the so called RAID write hole, since most RAID doesn't actually do that checking and drives don't have the per block checksums in a long time. It checksums to ensure.
288TB spread over 24 drives on soft RAIDZ2 over USB?! You did check the projected rebuild time in the event of a disc failure, right?
Didn't have to do the projection, I've had to replace a drive that broke. It took about 20 hours.
ETA: It would certainly take longer with more data. I've not gotten anywhere close to the 288TB
> fast enough for what I need to for anyway, which is to watch videos off Jellyfin or host a Minecraft server.
4k Blu-ray rips peak at over 100 Mbps, but usually average around 80 Mbps. I don't know how much disk I/O a Minecraft server does ... I wouldn't think it would do all that much. USB2 (high-speed) bandwidth should be plenty for that; although filling the array and scrubbing/resilvering would be painful.
Even though I have over four hundred Blu-rays, I would of course NEVER condone breaking the DRM and putting them on Jellyfin no matter how easy it is or how stupid I think that law is because that would be a crime according to the DMCA and I'm a good boy who would never ever break the law.
That said, I have lots of home movies that just so happen to be at the exact same bitrates as Blu-rays and after the initial setup, I've never really had any issues with them choking or any bandwidth weirdness. Minecraft doesn't use a ton of disk IO, especially since it is rare that anyone plays on my server other than me.
I do occasionally do stuff that requires decent bandwidth though, enough to saturate a WiFi connection at the very least, and the USB3 + USB SS + Thunderbolt never seems to have much of an issue getting to Wifi speeds.
>so total storage is about ~288TB
?!?
How do you fill 288 TB? Is it mostly media?
>I liked FreeNAS for awhile, but after a certain point I kind of just learned how to properly use Samba and NFS and ZFS, and after that I kind of felt like it was just getting in the way.
I've been a mostly happy TrueNAS user for about four years, but I'm starting to feel this way.
I recently wrote about expanding my 4-disk raidz1 pool to a 6-disk raidz2 pool.[1] I did everything using ZFS command-line tools because what I wanted wasn't possible through the TrueNAS UI.
A developer from iXsystems (the company that maintains TrueNAS) read my post and told me that creating a ZFS pool from the zfs command-line utility is not supported, and so I may hit bugs when I use the pool in TrueNAS.
I was really surprised that TrueNAS can't just accept whatever the state of the ZFS pool is. It feels like an overreach that TrueNAS expects to manage all ZFS interactions.
I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
[1] https://mtlynch.io/raidz1-to-raidz2/
[2] https://www.reddit.com/r/truenas/comments/1m7b5e0/migrating_...
> How do you fill 288 TB? Is it mostly media?
I kind of purposefully don't fill it up :).
This has been a project of tech hoarding over the last ~8 years, but basically I wanted "infinite storage". I wanted to be able to do pretty much any project and know that no matter how crazy I am, I'll have enough space for it. Thus far, even with all my media and AI models and stock data and whatnot, I'm sitting around ~45TB.
On the off chance that I do start running low on space, there's plenty of stuff I can delete if I need to, but of course I probably won't need to for the foreseeable future.
> I'm converting more of my infrastructure to NixOS, and I know a lot of people just manage their NAS with NixOS, which is sounding more and more appealing to me.
Yeah, that's what I do, I have my NixOS run a Samba on my RAID, and it works fine. It was like fifteen lines of config, version controlled, and I haven't thought about it in months.
I'm in a similar position on my storage... though mostly that I bought a new nas with the intent of storing for Chia, but after 3 days, I decided it was a total waste as I'd never catch up to the bigger miners... So I repurposed it for long term and retired my older nas.
older nas was 4x 4tb drives when I retired it and passed it to a friend.
Current nas is a 6-bay synology with ram and nvme upgraded, along with a 5-bay expansion. All with 12tb drives in a 6drive 2-parity and 5-drive 2-parity arrays. I'm just under 40tb used mostly media.
Well you got it, I just wonder if half that storage would still be effectively infinite for you
I bought the drives used on Ebay, and the prices weren't that different for the 8TB vs 16TB when I bought them. Like on the order of ~$15 higher, so I figured I'd just eat that cost and future proof it even more.
288TB might be enough to store a complete set of laser disk arcade games. /s
I don't think that /s is needed, that's probably true. The entirety of Gamecube games is only a few terabytes.
Same for me, after a while you just want to do something the "managed" software doesn't support.
Now I just run Ubuntu/Samba and use KVM and docker for anything that doesn't need access to the underlying hardware.
Yeah, same. Almost all of the NAS packages sacrifice something - they're great places to start, but just getting Samba going with Ubuntu is easy enough.
Do you think the power consumption matters on your box here? Should you care about the "USB bottleneck"? How do you organize this thing so it's not a mess of USB cables? I kinda wanna make it look esthetically nice compared to something like a proper nas box.
> Do you think the power consumption matters on your box here?
It's actually not too bad; the main "server" idles at around 14W and the power supply for it only goes to 100W under load. The drive bays go up to 100W (I think) but generally idle around 20W each. All together it idles at around ~70-80W.
Not that impressive BUT it replaced a big rack mount server that idled at about 250W and would go up to a kilowatt under load.
> Should you care about the "USB bottleneck"?
Not really, at least not for what I'm doing. I generally can get pretty decent speeds and I think network is often the bottleneck more than the drives themselves.
> How do you organize this thing so it's not a mess of USB cables?
I don't :). It's a big mess of USB cables that's hidden in a closet. It doesn't look pretty at all.
I have a similar setup (a Dell Wyse 5070 connected to an 8-bay enclosure), though I do not use RAID; I simply have some simple rsync scripts between a few of the drives. I collect old 1-2TB hard drives as cold storage and leave them on a bookshelf. The rsync scripts only run once a week for the non-critical stuff.
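For flavour, here's a minimal sketch of what one of those weekly jobs can look like (the mount points are made up; adjust to your own drives):

    #!/bin/sh
    # weekly-sync.sh - mirror the important stuff from one drive to another
    set -eu

    SRC="/mnt/primary/important"   # drive holding the live copy (hypothetical path)
    DST="/mnt/backup1/important"   # second drive acting as the weekly copy

    # -a preserves permissions and timestamps; --delete mirrors removals too
    rsync -a --delete "$SRC/" "$DST/"

Dropped into a weekly cron entry (something like `0 3 * * 0 /usr/local/bin/weekly-sync.sh`), that's basically the whole setup.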
Not to jinx it, but I have never had a hard drive failure since around 2008!
I think that's a good idea. It's a more manual setup but also more space efficient (if you skip backing up, e.g., Linux ISOs) and causes less wear than RAID.
At least for now, until VPNs become illegal, I haven't worried too much about _all_ my linux ISOs being backed up.
TrueNAS is just web-based configuration management. As long as you only use the web UI, your system state can be distilled down to the config file it generates.
If you do a vanilla FreeBSD+samba+NFS+ZFS setup, you'll need to edit several files around the file system, which are easy to forget months down the line in case of adjustment or disaster recovery.
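For reference, on FreeBSD that's roughly this handful of scattered pieces (paths and rc variables are from memory, so treat it as a sketch rather than a checklist):

    # enable the services at boot (writes to /etc/rc.conf)
    sysrc zfs_enable=YES
    sysrc samba_server_enable=YES
    sysrc nfs_server_enable=YES rpcbind_enable=YES mountd_enable=YES

    # the actual share definitions live in yet more files
    vi /usr/local/etc/smb4.conf     # Samba shares and users
    vi /etc/exports                 # NFS exports
    zfs set sharenfs=on tank/media  # or per-dataset ZFS properties instead

Each step is trivial on its own; the failure mode is forgetting which of these you touched when you come back two years later.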
The difference between what you've built and TrueNAS may well only become evident if your ZFS becomes corrupted in the future. That isn't to say YOU won't be able to fix it in the future, but I wouldn't assume that the average TrueNAS user could.
You... swap the drive, run a ZFS replace command, and that's it. I know I'm coming at this from a particular perspective, but what am I missing?
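For anyone who hasn't done it, the whole procedure is roughly this (pool and device names are placeholders):

    zpool status tank                          # identify the failed disk
    # ...physically swap the drive, then:
    zpool replace tank /dev/old-disk /dev/new-disk
    zpool status tank                          # watch the resilver progress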
I have 4x 4TB drives that are in my dead QNAP NAS.
I've wanted to get a NAS running again, but while the QNAP form factor is great, the QNAP OS was overkill – difficult to manage (too many knobs and whistles) – and ultimately not reliable.
So I'm at a junction: 1) no NAS (current state), 2) custom NAS (form factor dominates this discussion – I don't want a gaming tower), or 3) back to an off-the-shelf brand (poor experience previously).
Maybe the ideal would be a Mac Mini that I could plug 4 HDDs into, but that setup would be cost-inefficient. So it's probably a custom build w/ NixOS or something off the shelf, but I'm lacking the motivation to get back into the game.
I do recommend those little Beelink computers with an AMD CPU.
They can be had for a bit less than the Mac mini, and I had no issues getting headless Linux working on there. I even have hardware transcoding in Jellyfin working with VAAPI. I think it cost me about $400.
I tried a Mac Mini for a while, but they're just not designed to run headless and I ultimately abandoned it because I wanted it to either work or be fixable remotely. Issues I had:
- External enclosure disconnected frequently (this is more of an issue with the enclosure and its chipset, but I bought a reputable one)
- Many services can't start without being logged in
- If you want to use FileVault, you'll have to input your password when you reboot
Little things went wrong too frequently that needed an attended fix.
If you go off the shelf, I recommend Synology, but make sure you get an Intel model with QSV if you plan to transcode video. You can also install Synology's OS on your own hardware using Xpenology - it's surprisingly stable, more so than the Mac Mini was for me.
> Get a Synology Intel model with QSV if you plan to transcode video
How about if you _don't_ plan to transcode video? For example, this year's DS425+ uses the (six-year-old) Intel Celeron J4125, while the DS925+ uses the (seven-year-old) AMD Ryzen Embedded V1500B. Why choose one over the other?
I use a QNAP 8-bay JBOD enclosure (TL-D800S) connected via SFF-8088 to a mini-itx PC - I find the form factor pretty good and don't have to deal with QNAP OS.
I was in the same boat - QNAP OS was a complete mess. Ended up nuking it and throwing Ubuntu on there instead. Nothing fancy, just basic config, but it actually works now. Other option is pay Unraid.
What are your requirements for a NAS? What do you want to use it for? Is it just to experiment with what a NAS is, or do you have a specific need like multiple computers needing common, fast storage?
The fact that Asahi doesn’t run on M4 cpus (ie the current Mac Mini) is also a consideration.
ZFS on macOS sucks really bad, too, so that rules out the obvious alternative.
I guess you could run a Linux VM and pass the disks through? I considered something similar for an ARM64 NixOS build server - though that application doesn’t need the disks.
TrueNAS on a good bit of hardware - in my case the latest truegreen NAS is fantastic. You build it, it runs, it's bulletproof. Putting Jellyfin and/or plex on top of it is fantastic.
I do both. The primary server runs Proxmox and I have a physical TrueNAS box as backup server, so I have to do it by hand on Proxmox.
“Have to”, since I no longer suggest virtualizing TrueNAS even with PCI passthrough. I will say the same about ZFS-over-USB, but you do you. I’ve had too many bad experiences with both (for those not in the weeds here, both are officially very much not supported or recommended, but they _do_ work).
I really like the TrueNAS value prop - it makes something I’m clearly capable of doing by hand much easier and less tedious. I back up both my primary ZFS tank as well as my PBS storage to it, plus cold backups. It does scheduling, alerts, configuration, and shares, and nothing else. I never got the weird K8s mini cluster they ship - it seems like a weird thing that clashes with the core philosophy of just offering a NAS OS.
My Raspberry Pi 3B+ used to get corrupted every now and then.
I'm starting a rebuild of my now ancient home server, which has been running Windows with WSL2 Docker containers.
At first, I thought I might just go with TrueNAS. It can manage my containers and my storage. But it's got proprietary bits, and I don't necessarily want to be locked into their way of managing containers.
Then my plan was to run Proxmox with a TrueNAS VM managing a ZFS raidz volume, so I could use whatever I want for container management (I'm going with Podman)
But the more I've researched and planned out this migration, the more I realize that it's pretty easy to do all the stuff I want from TrueNAS, by myself. Setting up ZFS scrubbing and SMART checks, and email alerts when something fishy happens, is pretty easy.
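As a rough sketch of what "pretty easy" means here (pool name, device names, and the email address are all placeholders):

    # /etc/cron.d/storage-health
    # monthly scrub of the pool
    0 3 1 * *  root  /usr/sbin/zpool scrub tank
    # weekly long SMART self-test on a data disk
    0 4 * * 0  root  /usr/sbin/smartctl -t long /dev/sda

    # and in /etc/smartd.conf, have smartd email you when a disk starts failing:
    # DEVICESCAN -a -m admin@example.com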
I'm beginning to really understand the UNIX "do one thing and do it well" philosophy.
What enclosure do you use? I had trouble finding a good one.
I've had mixed luck too. The one I'm using now has been mostly ok. https://a.co/d/4AiF1Zp
It actually has been considerably more reliable than the MediaSonics that it replaced.
Can you give a little more detail on how the enclosures work? Do you see each drive individually, or does each enclosure show up as one "drive" which you then put into a ZFS pool?
I'm looking to retire my power hungry 4790k server and go with a mini pc + external storage.
Replied to another comment with my setup - I use a QNAP JBOD enclosure (TL-D800S) connected to a mini-itx PC. (You do need at least one PCIe slot.) Shows up as 8 drives in the OS, the enclosure has no smarts.
I wouldn't do USB-connected enclosures for NAS drives - either SATA via SFF, or Thunderbolt.
In my case, each drive shows up individually. I create a new vdev encompassing all of them.
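In other words, once the enclosure passes the raw disks through, pool creation looks the same as with internal drives; a rough example with made-up device IDs:

    # each enclosure slot shows up as its own disk; group them into one raidz2 vdev
    zpool create tank raidz2 \
      /dev/disk/by-id/ata-EXAMPLE-1 /dev/disk/by-id/ata-EXAMPLE-2 \
      /dev/disk/by-id/ata-EXAMPLE-3 /dev/disk/by-id/ata-EXAMPLE-4 \
      /dev/disk/by-id/ata-EXAMPLE-5 /dev/disk/by-id/ata-EXAMPLE-6

Using /dev/disk/by-id names rather than /dev/sdX keeps the pool stable if the enclosure enumerates the drives in a different order after a reboot.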
I've seen a lot of people linking Beelink mini PCs lately: https://www.bee-link.com/products/beelink-me-mini-n150
The cost is very low, but it would have to be an NVMe-only build.
I have actually made a Raspberry Pi based NAS and found it was a pain.
The SATA controller isn't terrible, but it and other hardware areas have had many strange behaviors over the years, to the point where compiling the kernel was needed to fiddle with some settings and get a hardware device to do what it's supposed to.
Even if you're using a power supply that is well supported, eventually you seem to hit internal limits and get problems. That's when you see people underclocking the chip to move some of this phantom power budget to other chips. Likewise, you have to power most everything from a separate source, which pushes me even closer to a "regular PC" anyhow.
I just grab an old PC from Facebook for under $100. The current one is a leftover from the DDR3 + Nvidia 1060 gaming era. It's a quad core with HT so I get 8 threads. Granted, even jobs running on only 2 threads push the system to around 90% usage, probably because the real hardware being used there is something like AVX and it can't be shared between all of the cores at the same time.
The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
When my Raspberry Pi fails I need to start looking at configurations and hacks to get the firmware/software stack to work.
When my $100 random PC fails I look at the logs to find out what hardware component failed and replace it.
> The SATA controller has been a bit flaky, but you can pick up 4-port SATA cards for about $10 each.
If your build allows it, the extra money for an LSI or a real RAID controller is well worth it. The no-name PCIe SATA cards are flaky and very slow. Putting an LSI in my NAS was a literal 10x performance boost, particularly with ZFS, which tends to have all of the drives active at once.
I am curious about the slow part. I use these crappy SATA cards and I am sure they are crappy, but the drives are only going to give 100MB/s in bursts and they have an LVM cache (or ZFS stuff) on them to sustain more short-term writes.
I'd get it if I were wiring up NVMe drives that are going to do 500MB/s and higher all the time.
What I really care about with SATA, and what I mean by flaky, is when I have to physically reboot a system every day because the controller stays in a broken state even after a soft `reboot`, and then Linux fills up with IO timeouts because the controller seems to stop working after a certain amount of time.
> probably because the real hardware being used there is something like AVX and it can't be shared between all of the cores at the same time.
That's not the right explanation; each physical core has its own vector ALUs for handling SSE and AVX instructions. The chip's power budget is shared between cores, but not the physical transistors doing the vector operations.
Thank you for the correction.
I don't know about TrueNAS, but with Proxmox the two random $10 SATA cards I tried only gave me issues. With the first one the OS wouldn't boot; the second seemed to work fine, but connected drives disappeared as soon as I wrote to them.
Used server-grade LSI cards seem to be the way to go. Too bad they're power hungry based on what I've read.
I have had a random $10 SATA card that has worked fine over the last 5 years.
I like the power efficiency of the Raspberry Pi. Prior to it, I used a MacBook that sipped power at around 11 W.
I do too. For many use cases it's awesome to use an ESP32, Raspberry Pi or Arduino when I want to stash some little widget that can sip from a small battery over the next week. It's equally awesome that in many scenarios you can be net positive in your consumption with a super simple solar panel that feeds a few watts here and there into a battery.
But at home things are different. While I want to use as little power as possible, the realistic plan for being sustainable at home is to use solar with batteries. That's a plan I actually think can matter and that I am able to participate in for relatively low cost ($10k)
Messing around with a system to save a few watts for me in this context isn't very valuable.
What's the power usage like?
Yeah this is what keeps me from considering old PCs for NAS.
Maybe stating the blindingly obvious but seems like there is a gap in the market for a board or full kit with a high efficiency ~1-10W CPU and a bunch of SATA and PCIe ports.
https://www.minisforum.com/pages/n5_pro
Minisforum is sort of working on it; I'd imagine the AMD "AI" processors are pretty low power at idle as they're mobile chips. It obviously has the downsides of other mini PCs though (high cost, low expandability).
Then you've got to consider what you're optimizing for. Is the power bill going to be offset by the cost of a Pi plus any extras you need, or a cheap second-hand PC someone wants to clear out, or free if you can put an old serviceable PC you already have back into use? Is it heat? Noise? Space that it needs to hide away in? Airflow for the location?
I've been eyeing off the Radxa ROCK 5 ITX with the Rockchip RK3588. There are two variants, one gives you 4x SATA, the other gives you 1x PCIe.
There’s also the Orion O6 if you need more IO/perf - https://radxa.com/products/orion/o6#techspec
I'm using a retired 4790k build for mine, idles at 60W and I really need to do something about that.
Probably not very good. I selected large spinning hard drives because I could get them at a good price for 2TB each, and I wanted to set up a RAID5-like system in ZFS and btrfs (lesson learned: btrfs doesn't actually support this correctly), and I wanted to get at least 10TB with redundancy.
I don't know how much power each of those SATA disks draws, but probably more than a single Raspberry Pi does.
Likewise it has a few case fans in it that may be pointless. I would prefer it never has a heating issue versus saving a few "whurrr" sounds off in a closet somewhere that nobody cares about.
It's also powering that Nvidia 1060 that I do almost nothing with on the NAS. I don't even bother to enable the Jellyfin GPU transcoding configuration because I prefer to keep my library encoded in h264 right now anyhow as I have not yet made the leap to a newer codec because the different smart TVs have varying support. And sometimes my daughter has a friend come over that has a weird Amazon tablet thing that only does a subset of things correctly.
The 1060 isn't an amazing card really, but it could do some basic Ollama inference if I wanted. I think it has 6GB of memory, which is pretty low, but usable for some small LLMs.
Shouldn't it be more of a "why" to install TrueNAS on a RPi?
The only reason I can see is "I have one that I don't use". Because otherwise...
Idle power isn't all that much better than a low power Intel N100 or something similar. And it's all downhill from there. Network transfer speeds and disk transfers will all be kneecapped by the (lack of) available PCIe lanes. Available RAM or CPU speeds are even worse...
That's addressed in the second section of the article:
> I've found numerous times, running modern applications on slower hardware is an excellent way to expose little configuration flaws and misconceptions that lead to learning how to run the applications much better on more capable machines.
It's less about the why, and more about the 'why not?' :)
I explicitly don't recommend running TrueNAS on a Pi currently, at the end (though I don't see a problem with anyone doing it for the fun, or if they need an absolutely tiny build and want to try Arm):
> Because of the current UEFI limitations, I would still recommend running TrueNAS on higher-end Arm hardware (like Ampere servers).
Totally agree. It becomes extremely obvious when applications are poorly optimized or have outdated build systems that don't support ARM.
IMO, if the software doesn't work without issue on my Pi, it isn't good enough for prod.
On a somewhat related note, would you trust a Pi-based NAS long term? I've not tried building one since the Pi 4, which understandably, given its hardware limitations, left a lot to be desired. But that aside, I still found the Pi as a piece of hardware somewhat quirky and unpredictable - power especially; I can't count the number of times simply unplugging a USB keyboard would cause it to reboot.
I've run a Pi NAS as my 2nd onsite replica for over a year without a hiccup, it's using a Radxa Penta SATA HAT with 4x SATA SSDs, and a 2.5 Gbps USB dongle for faster Ethernet[1].
[1] https://www.jeffgeerling.com/blog/2024/radxas-sata-hat-makes...
I have been using a Raspberry Pi 4 (8 GB RAM) as my NAS for nearly 5 years. It is incredibly reliable. I run the following software on it: Ubuntu 64-bit, Samba, Jenkins, Postgres and MariaDB. I have attached external hard drives through a USB hub (because the Pi does not necessarily have enough power for the external hard drives). I git push to public Samba folders on the Pi, which triggers Jenkins, which builds and installs my server using Docker on the Pi.
On the one hand it is good to discover that someone is tackling getting TianoCore working on the Raspberry Pi 5.
On the other hand, they still have the destructive backspace behaviour, and inefficient recursive implementation, that breaks the boot loader spinners that the NetBSD and other boot loaders display. It's a tiny thing, but if one is used to the boot sequence the absence of a spinner makes the experience ever so slightly jarring.
* https://github.com/NumberOneGit/edk2/blob/master/MdeModulePk...
* https://github.com/tianocore/edk2/blob/master/MdeModulePkg/U...
* https://tty0.social/@JdeBP/114658278210981731
* https://tty0.social/@JdeBP/114659884938990579
I should add, by the way, that this nicely demonstrates M. Geerling's point here about catching bugs by running things on a Pi.
TianoCore's unnecessarily recursive implementation of a destructive BS is slow enough on a Pi 4, in combination with how the boot loaders themselves emitted their spinners, that I could just very occasionally see parts of spinner characters flashing briefly on the screen when the frame refresh timing was just right; which led me to look into what was going on.
Wasn't there a post somewhere on HN yesterday about how slowing down your programs can help you catch problems? Using low end hardware is an automatic way of forcing that :)
I did something like this a while ago, using https://wiki.radxa.com/Dual_Quad_SATA_HAT (though I installed OpenMediaVault rather than TrueNAS).
It was a fun project and looked cool but never really worked that well. It was quite unstable and drives seemed to disconnect and reconnect a lot. There are probably better quality connectors out there but I think for a NAS you really want proper SATA connections.
I eventually built my own box and went with OMV again. I like it because it's just userland software you install on Debian. Some of the commenters here who think TrueNAS is overkill might want to check out OMV if they haven't already.
To be honest I still only have a few TB of storage on it, probably not really enough to be worth all the hassle of building and configuring a PC, but it was more about the journey which was fun.
This is fun for learning purposes, but even with the PCIe 3 bus the Pi just isn't that great a server when compared to an Intel N-series machine.
I have two "normal" NAS devices, but I would like to find a compact N100 board with multiple SATA ports, (like some of the stackable HATs for the Pi, some of which will take 4 disks directly on the PCB) to put some older discarded drives to good use.
My go-to solution software-wise is actually to install Proxmox, set up ZFS on it and then drop in a lightweight LXC that exposes the local filesystem via SMB, because I like to tweak the "recycle bin" option and some Mac-specific flags--I've been using that setup for a while, also off Proxmox: https://taoofmac.com/space/notes/2024/11/09/1940#setting-up-...
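The share-side tweaks are only a few lines in the container's smb.conf; roughly something like this (share name and path are placeholders, and the options are from memory, so check them against the Samba docs):

    [media]
      path = /tank/media
      read only = no
      # macOS niceties: Finder metadata and resource forks stored as streams
      vfs objects = catia fruit streams_xattr recycle
      fruit:metadata = stream
      # "recycle bin": deleted files land in a hidden folder instead of vanishing
      recycle:repository = .recycle
      recycle:keeptree = yes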
The article states "I currently run an Ampere Arm server in my rack with Linux and ZFS as my primary storage server" and this is just explaining how to try it out on the Pi, which I found surprisingly interesting. I am glad people like the N100s and wish they would find more relevant articles to talk about them.
Well, I am curious about compact SATA options, and have a peer response to yours that is eminently useful, so… I’d say it’s still on topic.
The warning about the death of three SSDs doesn't inspire too much confidence, to be honest... do you think it was due to the usage patterns of Proxmox's default settings for ZFS?
Over time, and after lots of reading of random information sources, I collected notes about disabling several settings that are cool for datacenter levels of storage but useless for the kind of "Raspberry Pi tied to a couple of USB disks" setup I was interested in.
I don't know how you got that idea when I explicitly say it was a hardware issue.
Ouch, my fault... that's what happens when you read Hacker News during a work break while the code compiles >_<
Obligatory XKCD reference
The Odroid H4+ might be what you're looking for. It's a N97 SBC from a South Korean manufacturer that's been around a while. The "+" variant has 4 SATA ports. With an adapter board, 2-4 NVMe drives can be attached as well.
Any good cases for that? I’d be afraid of ending up with a lump of duct-taped SATA SSDs around the PCB…
Odroid themselves sell simple cases for the H4+ like the Type 3 that can hold several drives if you don't mind the homebrew look. They also sell a mini-ITX kit that makes the board compatible with a mini-ITX case of your own choosing.
Also, the Odroid-H4+ supports IBECC (in-band ECC), meaning ECC parity checking done by the CPU's memory controller using normal non-ECC RAM modules. Very suitable for ZFS/TrueNAS.
I do wish they'd spec-bump to Twin Lake CPUs soon.
Thanks! I’ve used ODROID devices in the past, but wasn’t aware of that variant.
There are tons of mini-ITX N100 boards with onboard 2.5G or 10G Ethernet and 6-8 SATA ports available for a few hundred bucks (on Amazon, for example). Search “n100 mini itx nas”.
There are also some decent 6 and 8 bay mini-itx cases.
I went looking for complete systems recently and couldn’t find any integrators that make them. Surprised nobody will take a few hundred bucks to plug in all the motherboard power/reset pin headers for me.
Yeah, that’s why I was asking. Most of the mini-PC market is either doing “the cheapest possible desktop” or “the cheapest possible gaming box”.
QNAP TS-435XeU is a $600 1U short-depth (11") case with quad hotswap SATA, dual NVME, dual 10GbE copper, dual 2.5GbE, 4-32GB DDR4 SODIMM Arm NAS that would benefit from OSS community attention. Includes hardware support for ZFS encryption.
Based on a Marvell/Armada CN9130 SoC which supports ECC, it has mainline Linux support, and public-but-non-upstream code for uboot. With local serial console and a bit of effort, the QNAP OS can be replaced by Arm Debian/Devuan with ZFS.
Rare combo of low power, small size, fast network, ECC memory and upstream-friendly Linux. QNAP also sell a 10GbE router based on the same SoC, which is a successor to the Armada 388 in Helios4 NAS (RIP), https://kobol.io/helios4/
No UEFI support, so TrueNAS for Arm won't work out of the box.
If you can build your own Uboot you should be able to enable UEFI no? It’s not as full featured as EDK2 etc, but it works to boot Linux and I think the BSDs.
Indeed, it should be doable.
Wouldn't straight ZFS with a vanilla OS make more sense for low-power devices? TrueNAS, especially the Kubernetes flavour, seems to have a decent bit of overhead, last I looked at it.
I think at that point you get close to the "dropbox problem".
The audience for TrueNAS is probably looking to not have to do much beyond either turning it on or plugging in an update stick.
That TrueNAS ARM fork is great. I've set it up in VMware Fusion on a Mac mini M4 and it runs!
With a Thunderbolt 4 M.2 NVMe enclosure, you can plug in an M.2-to-SATA adapter to connect 8 SATA III drives. A little Pico PSU to power the HDDs and it makes a really low-powered NAS.
My plan is to give TrueNAS a spin with two drives and if it is stable, move everything into it.
A great exercise, and one that stretches the platform in a way that will inevitably help. That's incredible value and a great public service by Jeff.
Probably not a great idea given how ZFS is architected for memory utilization and ECC. :-)
I use a Pi 5 w/ an M.2 SSD for piracy and it just crashes all the time. It randomly locks up; I haven’t been able to fix it.
I feel like that article took longer than it should have done to say that your storage probably won't work
Why don’t people just use like, their computer? I just turn mine on if I want to watch something
Doesn't scale to n+1 users.
I’d really like the TrueNAS UI only, separated completely from an OS or its virtualisation setup.
Probably the closest thing that already exists is just running Cockpit[1]. 45Drives even maintains some helpful storage and file sharing plugins for it[2], though some of those are only compatible with x86 for now.
[1] https://cockpit-project.org
[2] https://github.com/45Drives?q=cockpit
Cockpit hasn't really improved in a while, though, and although I greatly appreciate 45Drives' commitment to it, last time I tried to install their stuff I had a lot of issues with deprecated dependencies...
So I just went "raw" smbd and never looked back, but then again I've been running Samba for almost two decades now and configuring it is almost second nature to me (I started doing it as an alternative to Novell/IPX, and later to replace AppleTalk...)
In practice, I've found that worked well because I very seldom had to make changes to a file server once it was set up (adding shares was mostly a matter of cloning configs, and what minimal user setup needed to be done in the enterprise was deferred to SAMBA/CIFS glue).
Quite true; raw configuration isn't as flashy so you can't make glitzy videos or blog posts about it (well, outside of the HNsphere at least).
But that's how 99% of the services I run are set up, just a bunch of configuration files managed by Ansible.
The only servers I run in production with a UI are two NASes at home, and the only reason for that is I want to run off the shelf units so I never have to think about them, since my wife and kids rely on them for Jellyfin and file storage.
That’s the plight of the content creator - keep things shiny and interesting enough even if it’s not really what people actually use :)
I wonder why TrueNAS wants to run as an OS. Surely most of the work of being a NAS happens in userspace?
I would imagine it's because it makes for a lot more fun support possibilities when all the underlying stuff in the stack (kernel, ZFS, Docker, Python, etc. etc.) is subject to the whims of the end user. When you ship the entire OS yourself you can be more certain about the versions of all the kernel+userland stuff and therefore the interactions between them all.
I've used Open Media Vault (OMV) as a NAS on a Raspberry Pi, which served as storage for my Jellyfin server, for a few years.
TrueNAS has been so annoying to use for me.
I really wish I just used something else or raw Ubuntu server.
The Time Machine backup feature corrupts itself.
You can’t have Home Assistant and Time Machine backups on at the same time. On top of that, it just feels like a janky UI with no polish.
I've been thinking of setting up a TrueNAS system to do Time Machine and home assistant... Can you elaborate on these issues?
Well, there is TrueNAS Scale, which is Debian-based. Which one are you running, Scale?
I'm not convinced that TrueNAS Scale is an improvement. I won't make the case that stability and maturity problems hamper Scale as a whole, but there are definitely discrepancies in SMB performance and limitations, and the use of Kubernetes really overcomplicates things for home users (not that the TrueNAS Scale Apps were great either). I had a version upgrade of Scale catastrophically fail on one system because the Kubernetes implementation couldn't be reconciled during the upgrade (I had no enterprise support, community support had no input, so I was forced to restart from scratch).
You also need a dedicated OS Drive for TrueNAS, which is reasonable in principle for critical systems, but doesn't always really meet the needs of home users with limited drive bays.
Interesting. I've only run Core for that reason. I wanted something rock solid. I've never tried Scale but was thinking of switching.
TrueNAS is confusing and difficult to setup. I went with Ubuntu and ZFS.
What's the most cost-effective NAS hardware/software combo lately?
"One glaring problem with the Raspberry Pi is no official support for UEFI"
GTFO, as you re-key your installations this fall with Microsoft's permission. =3
Secure boot isn’t mandatory, and if you want secure boot you don’t have to use Microsoft’s keys, you can enroll your own. Lanzaboote for NixOS for example doesn’t use shim - https://github.com/nix-community/lanzaboote .
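For example, with sbctl (which is roughly what the Lanzaboote setup docs walk you through; commands from memory, so verify before relying on them):

    # generate your own Secure Boot keys and enroll them while the firmware is
    # in setup mode; --microsoft keeps Microsoft's certs alongside yours,
    # which some option ROMs still need
    sbctl create-keys
    sbctl enroll-keys --microsoft

    # sign whatever the firmware has to load at boot
    sbctl sign -s /boot/EFI/BOOT/BOOTX64.EFI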
Sometimes certain product lines act like they consider customers on rare occasion...
But most manufacturers try to lock the firmware down, and users only get a small subset of configuration menus. For example, the Gigabyte RTX-based laptops require patching a machine-specific BIOS to even gain access to the OEM firmware areas.
Mostly the modern builds just created a bunch of problems nobody wanted, and didn't improve anything, as Asus, Gigabyte, and Razer recently showed.
https://www.youtube.com/watch?v=4er6kD-pxZs
https://www.youtube.com/watch?v=KhfqhCxqpQ8
If you are running signed code on many machines, YMMV... Raspberry Pi avoided the signed-code features built into most Broadcom ARM chips for good reasons. =3
Jeez, even fewer PCIe lanes than an N100.
Man, I hate to ruin a celebration but we have copyparty[0] now. Such wow, much amazing.
[0] https://github.com/9001/copyparty/
Those aren’t remotely the same class of thing, though.
Amazing that you can build a career out of making useless things with RPi. I don't mean it as a negative thing about the author, but rather this kind of content.
It's a rare case where the jack of all trades (the RPi) is not at all better than a master of one, or even than other jacks of all trades. Running anything but the official distro is a pain. Managing the official distro is a pain. And even if it weren't, it doesn't have enough raw power or I/O to do anything really useful.
It's an amazing "temporary" solution because it's so awful that you will actually replace it with a proper one.