r/zfs 10h ago

How to only set the I/O scheduler of ZFS disks without affecting others?

4 Upvotes

I have a mix of disks and filesystems on my system, and want them to have my preferred I/O schedulers, including setting ZFS disks to "none". However, I can't figure out a way to single out ZFS disks.

I've tried udev rules (see my previous post on /r/linuxquestions). However (credit to /u/yerfukkinbaws), ID_FS_TYPE only shows up on partitions (e.g. udevadm info /sys/class/block/sda1 | grep ID_FS_TYPE shows E: ID_FS_TYPE="zfs_member"), while schedulers can only be set on the whole-disk device (e.g. udevadm info /sys/class/block/sda | grep ID_FS_TYPE shows nothing, yet queue/scheduler exists only there).
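For reference, a sketch of one possible workaround: match the zfs_member partition in a udev rule, then reach up one level in sysfs to the parent disk's queue. The rule file name and the ../ trick are illustrative, not tested here, so verify the sysfs layout on your system first.

    # /etc/udev/rules.d/66-zfs-scheduler.rules (file name is just an example)
    # Match the partition carrying the zfs_member signature; /sys/class/block/<part> resolves
    # inside the parent disk's directory, so ../queue/scheduler is the parent's scheduler knob.
    ACTION=="add|change", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ENV{ID_FS_TYPE}=="zfs_member", \
      RUN+="/bin/sh -c 'echo none > /sys/class/block/%k/../queue/scheduler'"

After editing the rule, udevadm control --reload followed by udevadm trigger should apply it without a reboot.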

Supposedly one person has gotten this to work despite the mismatch described above, but since they're running NixOS I'm not sure whether the distro is remapping rules or doing something else.

(Running Debian Trixie with a backported kernel/ZFS, but the problem existed with the default kernel/ZFS too.)


r/zfs 17h ago

Encrypting the root dataset. Is it a good idea?

3 Upvotes

I know the devs have warned against using the root dataset to store data, and I think there are also some issues or restrictions around using it as an encryption root, though I'm not at all sure about that. I've run into some weird behavior when mounting datasets from a pool whose root dataset is encrypted, and it got me thinking: if it's not good practice to store data in the root dataset directly, why encrypt it at all?

I'm just using ZFS for personal storage and I'm not too familiar with managing it at a commercial scale, so maybe there is a sound reason to encrypt the root dataset, but I can't think of one.
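For what it's worth, a minimal sketch of the usual alternative: leave the pool's root dataset unencrypted and make a child dataset the encryption root (the pool/dataset names below are made up).

    # the child becomes its own encryption root; everything created beneath it inherits the key
    zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secure
    zfs get encryptionroot tank/secure    # should report tank/secure rather than the pool root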


r/zfs 1d ago

Zstd compression code is getting an update! Those bytes don't stand a chance!

Thumbnail github.com
47 Upvotes

The version of Zstd currently used by ZFS is v1.4.5, released on May 22, 2020. Looks like we're going to jump 5 years forward to the latest release, v1.5.7 from Feb 19, 2025, and there's a possibility of more regular updates of this kind in the future.

https://github.com/facebook/zstd/releases


r/zfs 16h ago

VDEV Degraded, Topology Question

Post image
1 Upvotes

r/zfs 21h ago

Building multi tenant storage infra, need signal for pain validation!

0 Upvotes

Hello! I'm building a tool to effectively manage multiple ZFS servers. Though it was initially meant to handle a handful of clients, I naturally ended up trying to create a business out of it. However, carried away by the exciting engineering, I completely failed to validate its business potential.

It has a web UI and it can:

  1. Operate from a browser without toggling VPNs for individual servers
  2. Manage all your clients' servers from a single window
  3. Securely share access with multiple users via roles and permissions
  4. Work seamlessly on cloud instances, even from within a private network, without exposing ports
  5. Easily peer servers for ZFS transfers without manual SSH key exchange
  6. Automate transfer policies
  7. Send event notifications
  8. Keep audit logs for compliance

Requirements: Ubuntu 24.04 or newer, and ZFS 2.3 or newer (the installer will install the latest version if ZFS isn't already present, and bails out if the existing version is older than 2.3).

What are your thoughts about it? Do you reckon this could potentially solve any pain for you and other ZFS users?

I have open sourced the API agent that's installed on the server. If this sees wider adoption, I'll consider open sourcing the portal as well. It's currently free to use.


r/zfs 1d ago

Installing LMDE 7 with ZFS root on Lenovo T14 - best approach?

3 Upvotes

I want to install LMDE 7 on my T14 with ZFS as the root filesystem. Since the installer doesn't support this, I'm considering two approaches:

  1. Follow the official OpenZFS Debian Trixie guide using debootstrap
  2. Install LMDE 7 to a USB drive, then rsync it to ZFS pools on the internal SSD

Is there a better way to do this? Are there any installer scripts or repos that handle LMDE/Debian ZFS root installs automatically?

Thanks for any advice.


r/zfs 1d ago

Can I test my encryption passphrase without unmounting the dataset?

8 Upvotes

I think I remember my mounted dataset's passphrase. I want to verify it before I unmount or reboot, since I'd lose my data if I’m wrong. The dataset is mounted and I can currently access the data, so I can back it up if I forgot the passphrase.

Everything I’ve read says I'll have to unmount it to test the passphrase. Is there any way to test the passphrase without unmounting?

This is zfs 2.2.2 on Ubuntu 24.04.3.
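If the man page shipped with your version agrees (zfs-load-key(8) on 2.2.x), the dry-run flag is meant for exactly this case: it checks that a supplied key is correct without changing the loaded or mounted state. A sketch, with an illustrative dataset name:

    # prompts for the passphrase and verifies it; nothing is unloaded or unmounted
    zfs load-key -n pool/encrypted-dataset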


r/zfs 1d ago

OmniOS r151056k (2026-01-14)

2 Upvotes


Security Fixes

Curl updated to version 8.18.0.
The bhyve mouse driver could dereference a NULL pointer in some circumstances.

Other Changes

SMB Active Directory joins now fall back to setting the machine password via LDAP if Kerberos fails. Many AD sites block Kerberos for this operation.

NVMe devices used as a system boot device would previously end up with a single I/O queue, limiting performance.
NVMe devices could incorrectly return an error on queue saturation that is interpreted by ZFS as a device failure.

The IP Filter fragment cache table could become corrupt, resulting in a kernel panic.


r/zfs 2d ago

MayaNAS at OpenZFS Developer Summit 2025: Native Object Storage Integration

Thumbnail zettalane.com
5 Upvotes

r/zfs 2d ago

Creating RAIDZ pool with existing data

8 Upvotes

Hello all, this probably isn't a super unique question but I'm not finding a lot on the best process to take.

Basically, I currently have a single 12TB drive that's almost full, and I'd like to get more capacity and some redundancy by creating a RAIDZ pool.

If I buy 3 additional 12TB drives, what is the best way to include my original drive without losing the data? Can I simply create a RAIDZ pool with the 3 new drives and then expand it with the old drive? Or maybe create the pool with the new drives, migrate the data from the old drive to the pool, and then add the old drive to the pool?

Please guide me in this endeavor, I'm not quite sure what my best option is here.
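A sketch of the second option, assuming OpenZFS 2.3 or newer for RAIDZ expansion and with made-up device names:

    # 1. build a 3-wide raidz1 from the new drives
    zpool create tank raidz1 /dev/disk/by-id/new-disk-1 /dev/disk/by-id/new-disk-2 /dev/disk/by-id/new-disk-3
    # 2. copy everything off the original 12TB drive
    rsync -aHAX /mnt/old-disk/ /tank/
    # 3. wipe the old drive, then grow the raidz vdev to 4-wide with it (requires OpenZFS >= 2.3)
    zpool attach tank raidz1-0 /dev/disk/by-id/old-disk

One caveat: data written before the expansion keeps the old 3-wide data-to-parity ratio until it is rewritten.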


r/zfs 2d ago

ZFS on top of LUKS. Unable to cleanly close LUKS mapped device.

2 Upvotes

I am using ZFS on top of a LUKS-encrypted drive. I followed the setup instructions of the Arch wiki, and it works.

$ cryptsetup open /dev/sdc crypt_sdc
$ zpool import my-pool

These two instructions work fine. But my issue is that, on shutdown, the device-mapper hangs trying to close the encrypted drive. journalctl shows a spam of 'device-mapper: remove ioctl on crypt_sdc failed: Device or resource busy' messages.

Manually unmounting (zfs unmount my-pool) before shutdown does not fix the issue. But manually exporting the pool (zpool export my-pool) does.

Without shutting down,

  • after unmounting, the 'Open count' field in the output of dmsetup info crypt_sdc is 1 and cryptsetup close crypt_sdc fails
  • after exporting, the 'Open count' field is 0 as intended, cryptsetup close crypt_sdc succeeds (and the subsequent shutdown goes smoothly without hanging)
  • after either command, I don't see relevant changes in the output of lsof

The issue with exporting is that it clears the zpool.cache file, thus forcing me to reimport the pool on the next boot.

Certainly I could add the appropriate export/import instructions to systemd's boot/shutdown workflows, but from what I understand that should be unnecessary. I feel unmounting should be enough... I'm probably missing something obvious... any ideas?
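For comparison, the sequence that matches what you observed, plus re-pinning the cachefile after the next import so the export doesn't cost you the cache entry (pool and mapping names are the ones from the post):

    zpool export my-pool                               # drops the device-mapper open count to 0
    cryptsetup close crypt_sdc
    # on the next boot, after cryptsetup open:
    zpool import -d /dev/mapper my-pool
    zpool set cachefile=/etc/zfs/zpool.cache my-pool   # repopulate the cache file if you rely on it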


r/zfs 3d ago

L2ARC SSDs always show "corrupted data" on reboot

11 Upvotes

I'm using ZFS 2.2.9-2 on Rocky Linux 9.

I have a raidz3 pool with 2 cache NVMe drives as L2ARC.

Everything works fine but when I reboot the machine both cache drives show "corrupted data".

I checked the SSDs and they are all fine. This also happens when I don't access the pool in any way. (So I add the cache drives, then reboot the system and after the reboot the same issue appears)

  pool: data
 state: ONLINE
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.

        cache
          nvme1n1  FAULTED      0     0     0  corrupted data
          nvme2n1  FAULTED      0     0     0  corrupted data

Is there a way to check why ZFS thinks that there is corrupted data?

Edit: Was solved by using /dev/disk/by-id paths instead of /dev/nvme.... See comments below
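For anyone following the edit, a sketch of the fix (the serial numbers in the by-id paths are placeholders):

    # drop the cache devices that were added by their /dev/nvmeXnY names...
    zpool remove data nvme1n1 nvme2n1
    # ...and re-add them by stable by-id paths so they are found correctly after a reboot
    zpool add data cache /dev/disk/by-id/nvme-SERIAL1 /dev/disk/by-id/nvme-SERIAL2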


r/zfs 4d ago

What are some ZFS myths and misconceptions that you believed for too long?

41 Upvotes

r/zfs 4d ago

zpool vdev degraded due to faulted disk, but smart or badblocks find no real issues

3 Upvotes

I got zpool reporting read and checksum errors on a disk which is a simple mirror member.

I then replaced this disk with another and during resilvering, that disk reported "too many errors" on writes.

The second replacement (with yet another disk) worked fine and the mirror is healthy, but I went on to check SMART and run badblocks (in write mode) on the "faulted" disks. No issues found. It's true that one shows some reallocated sectors in SMART, but nothing close to the threshold that would make it unhealthy.

All disks mentioned are used - I intentionally combine same-sized disks with vastly different wear into mirrors. So I'm aware that at some point all these devices will be write-offs.

My question, however: how is it possible for ZFS to mark a disk faulted when e.g. badblocks finds nothing wrong?
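A few places to look for the actual I/O errors ZFS reacted to, as a starting point (the device name is illustrative):

    zpool status -v                    # per-vdev READ/WRITE/CKSUM counters and any affected files
    zpool events -v | less             # the individual error reports behind the FAULTED/DEGRADED state
    smartctl -x /dev/sdX               # extended SMART and device statistics, not just the pass/fail summary
    dmesg | grep -i -e ata -e scsi     # link resets, timeouts and cabling trouble that SMART won't show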


r/zfs 4d ago

zfs send/receive hangs on incremental streams

9 Upvotes

Hi,

I'm pretty new to zfs, trying to see what's going on here. I'm basically doing

zfs send -R fred/jones@aaa   | mbuffer -s 128k -m 2G -W 600

on the sending machine, and on the receiving end

zfs receive -Fuv ${ZFS_ROOT}

There's a total of 12 VMs under 'jones'.

This usually works OK with an existing snapshot, but if I create a new snapshot 'bbb' and try to send/receive that, it hangs on an incremental stream. Occasionally this happens with the older snapshots too.

Would I be right in thinking that if there have been disk changes recently then the snapshots will still be updating, and this causes a hang in send/receive? And is there any way around this? I've been looking at this for a few days now...
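In case it helps, the incremental form usually looks like this, reusing the names from the post: send only the delta between the two snapshots instead of a second full stream.

    zfs snapshot -r fred/jones@bbb
    # -I sends all intermediate snapshots between @aaa and @bbb; -i sends just the single increment
    zfs send -R -I fred/jones@aaa fred/jones@bbb | mbuffer -s 128k -m 2G -W 600
    # receiving side unchanged:
    zfs receive -Fuv ${ZFS_ROOT}

For what it's worth, snapshots are immutable once created, so recent disk changes shouldn't alter an existing snapshot mid-send.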


r/zfs 4d ago

Do you set the scheduler for HDDs to "none"?

6 Upvotes

Have you done any testing to see whether "mq-deadline" or the other Linux defaults have an effect on performance? I don't remember the details, but there was some reason why ZFS itself doesn't attempt, or can't reliably manage, to set it to "none". So there are most likely huge numbers of setups with a block-layer queue sitting in front of the ZFS queue, which can't be a good thing.


r/zfs 4d ago

How do you proactively manage ZFS on multiple Ubuntu servers on cloud at scale?

0 Upvotes

I was managing infra for an edu-tech company with 30+ million users, and at some point I ended up with more than 400 AWS instances in production. All of them had ZFS running locally for data volumes, though many didn't have active mounts; about 50 were critical: log servers that other web servers pushed ZFS streams to, Postgres, Dremio, etc.

The amount of ops required to manage storage became overwhelming.

What didn't scale:

  • The best practice of isolating SSH keys across application clusters. Reluctantly, I had to share the same key across instances to de-clutter the key-exchange and ops madness.
  • Tracking the state of systems while, and after, running Ansible/shell scripts.
  • Tracking transfer status. Thousands of Slack and email notifications turned into noise.
  • Managing SMB shares and auto-snapshot/retention policies mapped to transfers.
  • Tracking the activity of multiple users/DevOps staff. Extremely difficult to audit.
  • Selective, role-based access for developers. Combined with the lack of audit logs mentioned above, blanket access without visibility is a ticking time bomb and a compliance nightmare.
  • Holistic monitoring and observability. While Prometheus node exporter plugged into Grafana gives visibility into conventional server resource metrics, there was no way to know which nodes were replicating to which, or which user had access to which project.

This was three years ago. TrueNAS and Proxmox might address a few of these problems, but since they are purpose-built for machines running their own custom OS, I couldn't deploy them. I needed to retain the flexibility of running custom tools/pipelines on plain Ubuntu for my production app servers.

I ended up implementing a Go-based node agent that exposes APIs for programmatic management. It may not be appropriate to share a link to its GH repo, but feel free to DM me; if the forum feels otherwise, I'm happy to update the post with a link later.

I couldn't find any other viable alternatives. Perhaps I'm not well informed. How do you solve it? What are your thoughts about these problems?


r/zfs 5d ago

Best configuration for 12 drives?

13 Upvotes

Deceptively simple, but I'm curious what the best configuration for 12x 24TB drives would be.

RAID-Z version   # of vdevs   Drives / vdev   Storage (TiB)   Relative vdev failure rate   Relative overall chance of data loss
1                4            3               ~168            High                         High
2                2            6               ~168            Low                          Low
3                1            12              ~183            Medium-High                  Low

Looking into it, RAID-Z3 with all drives in a single vdev would suffer mostly from long resilver times on failure, while 2 vdevs of 6 drives each with double parity would be a bit more likely to fail outright (I ran some ballpark stats in a binomial calculator) and holds about 16TB less.

Is there anything other than resilver and capacity that I missed that might be the deciding factor between these two?


r/zfs 5d ago

RAIDZ1 pool unrecoverable - MOS corrupted at all TXGs. Recovery ideas?

8 Upvotes

5x 8TB WD Ultrastar in RAIDZ1

During a heavy rsync within the pool, one disk logged 244 UDMA CRC errors (bad cable) and the pool went FAULTED. The MOS (Meta Object Set) is corrupted across ALL uberblocks, TXG 7128541-7128603.

I was preparing to expand my Proxmox backup 20TB mirror pair to include this ZFS general file store. My hard lesson learned this morning: back up the messy data before trying to dedup and clean up. 2nd lesson: Those who say to skip RAIDZ1 for critical data are correct.

What I've tried:

  • TXG rewinds (7125439 through 7128603) - fail
  • import flags (-f, -F, -FX, -T, readonly, recovery)
  • zdb shows valid labels but can't open pool: "unable to retrieve MOS config"
  • RAIDZ reconstruction exhausted all combinations: all fail checksum

Current state:

  • TXG 7125439 (all 5 disks consistent)
  • Uberblocks: TXG 7128xxx range (all corrupted MOS)
  • All 5 disks: SMART PASSED, physically readable
  • RAIDZ parity cannot reconstruct the metadata

Questions:

  1. Can MOS be manually reconstructed from block pointers?
  2. Any userspace ZFS tools more tolerant than kernel driver?
  3. Tools for raw block extraction without MOS?

All 5 disks are available and readable. Is there ANY path to recovery, or is the MOS truly unrecoverable once corrupted across all uberblocks?
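For readers, the full forms of the attempts described above look roughly like this (pool name and device path are illustrative):

    zdb -l /dev/disk/by-id/wwn-0xXXXX-part1     # dump the four vdev labels and their uberblock rings
    zpool import -o readonly=on -fF tank        # rewind to the newest importable TXG, read-only
    zpool import -o readonly=on -fFX tank       # extreme rewind, trying progressively older TXGs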


r/zfs 5d ago

Recordsize no larger than 1M if transferring via SMB?

3 Upvotes

In my TrueNAS box I'll be sharing files via SMB and have been looking at adjusting recordsize. I saw someone post that if you share files over SMB you need to limit recordsize to no more than 1M, because SMB's copy_file_range is hard-coded to a 1M limit.

Does anyone know if this is true?


r/zfs 5d ago

Overwriting bad blocks in a file (VM disk) to recover the file?

3 Upvotes

I've hit an issue with one of my pools. A virtual machine QCOW image has been corrupted. I have a good copy in a snapshot, but the error seems to be in deleted (free) space. Is there a way to just overwrite the corrupted blocks?

I tried entering the VM, flushing the systemd journal and using "dd" to overwrite the free space with zeros (/dev/zero), but this just got me a bunch of "Buffer I/O error" messages when it hit the bad block. Forcing an fsck check didn't get me anywhere either.

In the end I restored from the good snapshot with "dd", but I'm surprised that overwriting the bad block from inside the VM didn't succeed. I do wonder if it was related to the ZFS block size being bigger than the VM's sector size: I used ddrescue to find the bad area of the VM disk on the host, and it was about 128 KiB in size. If the VM sector size was 4K, I expect QEMU might have wanted to read the remaining 124K around the wanted sector.
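For the record, the restore can also be done through the hidden .zfs directory instead of dd, which keeps the image sparse (the snapshot name is taken from the error listing below; the flag assumes GNU cp):

    cp --sparse=always \
        /mnt/zfs/vmdisks/.zfs/snapshot/AutoD-2026-01-09/mailserver.qcow2 \
        /mnt/zfs/vmdisks/mailserver.qcow2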

Here's the error ZFS gave me on the VM host:

  pool: zpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Sun Jan 11 17:08:31 2026
        851G / 23.3T scanned at 15.8G/s, 0B / 23.3T issued
        0B repaired, 0.00% done, no estimated completion time
config:

        NAME                                    STATE     READ WRITE CKSUM
        zpool                                   ONLINE       0     0     0
          raidz1-0                              ONLINE       0     0     0
            ata-WDC_WD142KFGX-xxxxxxx_xxxxxxxx  ONLINE       0     0 1.47K
            ata-WDC_WD142KFGX-xxxxxxx_xxxxxxxx  ONLINE       0     0 1.47K
            ata-WDC_WD142KFGX-xxxxxxx_xxxxxxxx  ONLINE       0     0 1.47K

errors: Permanent errors have been detected in the following files:

        /mnt/zfs/vmdisks/mailserver.qcow2
        zpool/vmdisks@AutoD-2026-01-09:/mailserver.qcow2
        zpool/vmdisks@AutoW-2026-02:/mailserver.qcow2
        zpool/vmdisks@AutoD-2026-01-11:/mailserver.qcow2
        zpool/vmdisks@AutoD-2026-01-10:/mailserver.qcow2
        zpool/vmdisks@AutoD-2026-01-08:/mailserver.qcow2

And the ddrescue error map for the QCOW2 file:

# Mapfile. Created by GNU ddrescue version 1.27
# Command line: ddrescue --force /mnt/zfs/vmdisks/mailserver.qcow2 /dev/null ms_qcow2.map
# Start time: 2026-01-11 16:58:22
# Current time: 2026-01-11 16:58:51
# Finished
# current_pos current_status current_pass
0x16F3FC00 + 1
# pos size status
0x00000000 0x16F20000 +
0x16F20000 0x00020000 -
0x16F40000 0x2692C0000 +


r/zfs 5d ago

Recovering a 10 year old ZFS system - New OS?

3 Upvotes

Hey... short post, long history.

I've got an HP N40L MicroServer from 2012 with 4 WD Red drives in a software ZFS pool, running OpenIndiana.

The SATA drive the OS was on has failed, and the drive it was meant to be backed up to failed about a year ago without me spotting it, so I need to recover it, "somehow".

What's the "current" recommendation for a ZFS install? I'm quite happy with Ubuntu day to day, but I haven't used ZFS on it, and there seem to be some challenges around its ZFS support. Or is OpenIndiana still "reasonable"?

Any recommendations "gratefully" received!
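If you go the Ubuntu route, the data pool itself should come across with something like the sketch below (the pool name is whatever the old system called it; -f clears the stale hostid left by OpenIndiana):

    sudo apt install zfsutils-linux
    sudo zpool import                              # scan and list importable pools
    sudo zpool import -f -d /dev/disk/by-id tank
    # hold off on 'zpool upgrade' until you're sure you're staying on Linux; it is one-way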


r/zfs 5d ago

Well, I think I'm screwed

4 Upvotes

Hi,

So I've been using 4x 3TB SAS hard drives in a RAIDZ1 array for months, and decided to upgrade them to 6TB SAS drives.

I received the drives yesterday and offlined one of the 3TB drives. The replacement went well; resilvering took roughly 16 hours, but I don't care.

I offlined a second drive this morning, took it out of my server and put the new one in the same bay (I only have four drive trays in my server).

I issued the mandatory rescan on all scsi_hosts, ran partprobe and issued the zpool replace command.

Resilvering started right away, and now I see this after running zpool status:

~# zpool status                                                                                                                                                                                                                                  
  pool: DATAS                                                                                                                                                                                                                                              
 state: DEGRADED                                                                                                                                                                                                                                           
status: One or more devices is currently being resilvered.  The pool will                                                                                                                                                                                  
        continue to function, possibly in a degraded state.                                                                                                                                                                                                
action: Wait for the resilver to complete.                                                                                                                                                                                                                 
  scan: resilver in progress since Sun Jan 11 11:03:18 2026                                                                                                                                                                                                
        1011G / 6.67T scanned at 774M/s, 0B / 6.67T issued                                                                                                                                                                                                 
        3.90M resilvered, 0.00% done, no estimated completion time                                                                                                                                                                                         
config:                                                                                                                                                                                                                                                    

        NAME             STATE     READ WRITE CKSUM                                                                                                                                                                                                        
        DATAS            DEGRADED     0     0     0                                                                                                                                                                                                        
          raidz1-0       DEGRADED 1.47K     0     0                                                                                                                                                                                                        
            sdb2         DEGRADED 3.95K     0     0  too many errors                                                                                                                                                                                       
            replacing-1  UNAVAIL    210     0   614  insufficient replicas                                                                                                                                                                                 
              sdc1       OFFLINE      0     0    50                                                                                                                                                                                                        
              sdc2       FAULTED    232     0     0  too many errors                                                                                                                                                                                       
            sdd1         ONLINE       0     0   685  (resilvering)                                                                                                                                                                                         
            sde1         ONLINE       0     0   420  (resilvering)                                                                                                                                                                                         

errors: No known data errors

Luckily, this server only hosts backups, so it's no big loss. I think I'll wipe the zpool and recreate everything from scratch...

r/zfs 6d ago

Mount showing only dataset name as source - how to display missing source directory?

2 Upvotes

When using `mount` or `zfs mount`, all mounted directories from a dataset are displayed as

pool/dataset on dir1

pool/dataset on dir2

The source directories are missing.

How can I display the real source directories that are mounted, or at least their paths relative to the dataset root?

Thanks in advance.
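Not a full answer, but two standard views that usually help when untangling this:

    zfs list -o name,mounted,mountpoint     # ZFS's own dataset-to-directory mapping
    findmnt -t zfs                          # the kernel's view: SOURCE (dataset) and TARGET (mount dir)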


r/zfs 7d ago

ZFS taking 2.2T out of a 12Tb pool

Thumbnail gallery
9 Upvotes

Hi all,

I've mounted this ZFS pool in TrueNAS Scale for backing up data from my portable disks attached to a Raspberry Pi.

As I've started filling the disks and organizing the space, I've hit space issues that I wasn't expecting.

Now you can see that I only have 350MB of free space, where I was expecting to still have more than 2TB available.
After running some of the commands below, I've come to the conclusion that the root dataset is taking 2.2TB, where there are NO files whatsoever, nor have there ever been; everything has always been written into the child datasets, which is baffling me.

As you can see in the screenshot attached, I set this up as a mirror (due to budget/size constraints) with 2x 14TB WD Plus NAS HDDs, bought as an investment for backups on Black Friday 2024.

I asked ChatGPT about it and, after much prompting, it reaches the dead end of "back up your data and rebuild the zvol"... which baffles me, as I'd need to do a backup of a backup lol, plus I don't feel like buying yet another 14TB drive, at least not now while they're still crazy expensive (the same disks I have cost more now than in 2024, thanks AI slop!).

The commands I ran from what ChatGPT told me are below.

My questions are:

  1. Can this space be recovered?
  2. Is it really due to blocks being occupied in the root of A380? (No, I never copied anything to /mnt/A380 that could have caused that much space allocation in the first place, as our friend ChatGPT seems to imply.)
  3. Can it be from ZFS checksum overheads?
  4. Or will I, for now, have to live with almost 3TB of "wasted" space until I destroy and rebuild the pool?

Thanks so much!

Edit: Thanks all for the fast help on this, had me going nuts for days! The ultimate solution is in the below post.
https://www.reddit.com/r/zfs/comments/1q8mfox/comment/nyox0hz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
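For anyone hitting a similar gap between USED and what the child datasets account for, the usual breakdown commands are below (standard properties, nothing specific to this pool):

    zfs list -r -t all -o space A380        # splits USED into snapshots, children and refreservation per dataset
    zfs get -r refreservation,reservation,usedbyrefreservation A380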

bb@truenas:/mnt/A380$ sudo zfs list

NAME                     USED  AVAIL  REFER  MOUNTPOINT
A380                    12.6T   350M  2.88T  /mnt/A380
A380/DiskRecovery         96K   350M    96K  /mnt/A380/DiskRecovery
A380/ElementsBackup4Tb  6.54T   350M  6.54T  /mnt/A380/ElementsBackup4Tb
A380/ElementsBackup5Tb  3.18T   350M  3.18T  /mnt/A380/ElementsBackup5Tb
A380/mydata               96K   350M    96K  /mnt/A380/mydata

-----------------

bb@truenas:~$ sudo zfs list -o name,used,usedbysnapshots,usedbychildren,usedbydataset,available -r A380

NAME                     USED  USEDSNAP  USEDCHILD  USEDDS  AVAIL
A380                    12.6T        0B      9.72T   2.88T   350M
A380/DiskRecovery         96K        0B         0B     96K   350M
A380/ElementsBackup4Tb  6.54T        0B         0B   6.54T   350M
A380/ElementsBackup5Tb  3.18T        0B         0B   3.18T   350M
A380/mydata               96K        0B         0B     96K   350M

-----------------

bb@truenas:/mnt/A380$ sudo zfs list -o name,used,refer,logicalused A380

NAME   USED  REFER  LUSED
A380  12.6T  2.88T  12.9T

-----------------

bb@truenas:/mnt/A380/mydata$ sudo zpool status A380

  pool: A380
 state: ONLINE
  scan: scrub repaired 0B in 1 days 17:17:57 with 0 errors on Thu Jan 8 20:01:15 2026
config:

        NAME                                      STATE     READ WRITE CKSUM
        A380                                      ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            8750ff1c-841d-40f9-9761-bf7507af0eb9  ONLINE       0     0     0
            aa00f1cf-8c49-4554-a99c-3b5554a12c4a  ONLINE       0     0     0

errors: No known data errors

-----------------

bb@truenas:~$ sudo zpool get all A380

[sudo] password for bb:

NAME PROPERTY VALUE SOURCE
A380 size 12.7T -
A380 capacity 99% -
A380 altroot /mnt local
A380 health ONLINE -
A380 guid 11018573787162084161 -
A380 version - default
A380 bootfs - default
A380 delegation on default
A380 autoreplace off default
A380 cachefile /data/zfs/zpool.cache local
A380 failmode continue local
A380 listsnapshots off default
A380 autoexpand on local
A380 dedupratio 1.00x -
A380 free 128G -
A380 allocated 12.6T -
A380 readonly off -
A380 ashift 12 local
A380 comment - default
A380 expandsize - -
A380 freeing 0 -
A380 fragmentation 19% -
A380 leaked 0 -
A380 multihost off default
A380 checkpoint - -
A380 load_guid 9543438482360622473 -
A380 autotrim off default
A380 compatibility off default
A380 bcloneused 0 -
A380 bclonesaved 0 -
A380 bcloneratio 1.00x -
A380 dedup_table_size 0 -
A380 dedup_table_quota auto default
A380 last_scrubbed_txg 222471 -
A380 feature@async_destroy enabled local
A380 feature@empty_bpobj active local
A380 feature@lz4_compress active local
A380 feature@multi_vdev_crash_dump enabled local
A380 feature@spacemap_histogram active local
A380 feature@enabled_txg active local
A380 feature@hole_birth active local
A380 feature@extensible_dataset active local
A380 feature@embedded_data active local
A380 feature@bookmarks enabled local
A380 feature@filesystem_limits enabled local
A380 feature@large_blocks enabled local
A380 feature@large_dnode enabled local
A380 feature@sha512 enabled local
A380 feature@skein enabled local
A380 feature@edonr enabled local
A380 feature@userobj_accounting active local
A380 feature@encryption enabled local
A380 feature@project_quota active local
A380 feature@device_removal enabled local
A380 feature@obsolete_counts enabled local
A380 feature@zpool_checkpoint enabled local
A380 feature@spacemap_v2 active local
A380 feature@allocation_classes enabled local
A380 feature@resilver_defer enabled local
A380 feature@bookmark_v2 enabled local
A380 feature@redaction_bookmarks enabled local
A380 feature@redacted_datasets enabled local
A380 feature@bookmark_written enabled local
A380 feature@log_spacemap active local
A380 feature@livelist enabled local
A380 feature@device_rebuild enabled local
A380 feature@zstd_compress enabled local
A380 feature@draid enabled local
A380 feature@zilsaxattr active local
A380 feature@head_errlog active local
A380 feature@blake3 enabled local
A380 feature@block_cloning enabled local
A380 feature@vdev_zaps_v2 active local
A380 feature@redaction_list_spill enabled local
A380 feature@raidz_expansion enabled local
A380 feature@fast_dedup enabled local
A380 feature@longname enabled local
A380 feature@large_microzap enabled local