ZFS
ZFS is a combined file system and logical volume manager designed by Sun Microsystems (now owned by Oracle). It is licensed as open-source software under the Common Development and Distribution License (CDDL) and was released as part of the OpenSolaris project in November 2005. OpenZFS, announced in September 2013 as the truly open-source successor to the ZFS project, brings together developers and users of the various open-source forks of the original ZFS on different platforms.
Described as "the last word in filesystems", ZFS is scalable and includes extensive protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of file system and volume management, snapshots and copy-on-write clones, continuous integrity checking with automatic repair, RAID-Z, and native NFSv4 ACLs, and it can be very precisely configured.
Contents
- Status
- Features
- Installation
- Creating the Pool
- Provisioning file systems or volumes
- Snapshots
- File Sharing
- Encryption
- Interoperability
- Advanced Topics
- See Also
Status
Debian GNU/kFreeBSD users have been able to use ZFS since the release of Squeeze; for those who use the Linux kernel it has been available from the contrib archive area in the form of DKMS source since the release of Stretch. There is also a deprecated userspace implementation that uses the FUSE framework. This page demonstrates ZFS on Linux (ZoL) unless the kFreeBSD or FUSE implementation is specifically pointed out.
Due to potential legal incompatibilities between the CDDL and the GPL, even though both are OSI-approved free software licenses that comply with the DFSG, ZFS development is not merged into the Linux kernel. ZoL is a project funded by the Lawrence Livermore National Laboratory to develop a native Linux kernel module for its massive storage requirements and supercomputers.
Features
- Pool based storage
- Copy-on-Write
- Snapshots
- Data integrity against silent data corruption
- Software Volume Manager
- Software RAID
Installation
ZFS on Linux is provided in the form of DKMS source for Debian users; you need to add the contrib section to your apt sources configuration to be able to get the packages. The Debian ZFS on Linux Team also recommends installing the ZFS related packages from the Backports archive, where upstream stable patches are tracked and compatibility is always maintained. Once configured, use commands such as the following to install the packages:
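A minimal sketch, run as root, assuming an amd64 system and the stretch-backports suite (adjust the suite and package names to your release; newer releases no longer ship a separate spl-dkms package):
  apt update
  apt install linux-headers-amd64
  apt install -t stretch-backports spl-dkms
  apt install -t stretch-backports zfs-dkms zfsutils-linux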
The example above separates the steps of installing the Linux headers, spl and zfs. It is fine to combine everything into one command, but being explicit avoids any chance of mixing up versions; future updates will be taken care of by apt.
Creating the Pool
Many disks can be added to a storage pool, and ZFS allocates space from it, so the first step of using ZFS is creating a pool. It is recommended to use more than one whole disk to take advantage of the full benefits, but you can still proceed with only one device or even a single partition.
In the world of ZFS, device names with path/id are usually used to identify a disk, because /dev/sdX names are subject to change by the operating system. These names can be retrieved with ls -l /dev/disk/by-id/ or ls -l /dev/disk/by-path/.
Basic Configuration
The most common pool configurations are mirror, raidz and raidz2. Choose one from the following (example commands are sketched after the list):
mirror pool (similar to raid-1, ≥ 2 disks, 1:1 redundancy)
raidz1 pool (similar to raid-5, ≥ 3 disks, 1 disk redundancy)
raidz2 pool (similar to raid-6, ≥ 4 disks, 2 disks redundancy)
stripe pool (similar to raid-0, no redundancy)
single disk stripe pool
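A minimal sketch of the corresponding commands, assuming a pool named tank; the /dev/disk/by-id/ names are placeholders for your own disks:
  zpool create tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB                      # mirror
  zpool create tank raidz1 /dev/disk/by-id/diskA /dev/disk/by-id/diskB /dev/disk/by-id/diskC    # raidz1
  zpool create tank raidz2 /dev/disk/by-id/diskA /dev/disk/by-id/diskB /dev/disk/by-id/diskC /dev/disk/by-id/diskD    # raidz2
  zpool create tank /dev/disk/by-id/diskA /dev/disk/by-id/diskB                             # stripe
  zpool create tank /dev/disk/by-id/diskA                                                   # single disk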
Advanced Configuration
If building a pool with a larger number of disks, you are encouraged to configure them into more than one group (vdev) and construct a stripe pool across these vdevs (see the sketch after the list). This allows a more flexible pool design that trades off space, redundancy and efficiency.
Different configurations can have different IO characteristics under certain workload patterns; please refer to the See Also section at the end of this page for more information.
- 5 × 2-disk mirror vdevs (similar to raid-10, 1:1 redundancy)

- 2 × raidz1 vdevs (similar to raid-50, 2 disks of redundancy in total)
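A minimal sketch of both layouts, again with a pool named tank and placeholder disk names d1..d10:
  zpool create tank mirror d1 d2 mirror d3 d4 mirror d5 d6 mirror d7 d8 mirror d9 d10    # 5 mirror vdevs
  zpool create tank raidz1 d1 d2 d3 d4 d5 raidz1 d6 d7 d8 d9 d10                         # 2 raidz1 vdevs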
ZFS can make use of a fast SSD as a second level cache (L2ARC) behind RAM (ARC), which can improve the cache hit rate and thus overall performance. Because cache devices can be read and written very frequently when the pool is busy, consider using more durable SSDs (SLC/MLC over TLC/QLC), preferably with the NVMe protocol. This cache is used only for reads: data is written to the cache device in response to read activity, and it is not involved in write operations at all.
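Adding an L2ARC device to an existing pool is a single command; the device name below is a placeholder:
  zpool add tank cache /dev/disk/by-id/nvme-fast-ssd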
ZFS can also make use of NVRAM/Optane/SSD as a SLOG (Separate ZFS Intent Log) device. It is sometimes described as a kind of write cache, but that is far from the truth. SLOG devices speed up synchronous writes: transactions are sent to the SLOG in parallel with the slower disks, and as soon as a transaction is safe on the SLOG the operation is marked as completed, so the synchronous operation is unblocked sooner without compromising resistance against power loss. A mirrored set of SLOG devices is obviously recommended. Please also note that asynchronous writes are not sent to the SLOG by default; you could try setting the sync=always property on the working dataset and see whether performance improves.
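Attaching a mirrored SLOG could look like the following sketch (device names are placeholders):
  zpool add tank log mirror /dev/disk/by-id/nvme-slogA /dev/disk/by-id/nvme-slogB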
Provisioning file systems or volumes
After creating the zpool, we are able to provision file systems or volumes (ZVOLs). A ZVOL is a kind of block device whose space is allocated from the zpool; you can create another file system on it like on any other block device.
Provision a file system named data under pool tank, and have it mounted on /data:
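A sketch, using the names from the description above:
  zfs create -o mountpoint=/data tank/data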
Thin provision a ZVOL of 4GB named vol under pool tank, format it as ext4, and mount it on /mnt temporarily:
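A sketch; -s makes the volume sparse (thin provisioned):
  zfs create -s -V 4G tank/vol
  mkfs.ext4 /dev/zvol/tank/vol
  mount /dev/zvol/tank/vol /mnt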
Destroy the previously created file system and ZVOL:
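A sketch (unmount the ZVOL first if it is still mounted):
  umount /mnt
  zfs destroy tank/vol
  zfs destroy tank/data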
Snapshots
Snapshots are one of the most wanted features of a modern file system, and ZFS definitely supports them.
Creating and Managing Snapshots
Making a snapshot of tank/data:
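A sketch; the snapshot name after @ is arbitrary:
  zfs snapshot tank/data@2024-01-01
Existing snapshots can be listed with zfs list -t snapshot.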
Removing a snapshot:
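A sketch, matching the snapshot created above:
  zfs destroy tank/data@2024-01-01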
Backup and Restore (with remote)
It is possible to back up a ZFS dataset to another pool with the zfs send/recv commands, even when the other pool is located at the far end of a network connection.
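A minimal sketch; the pool backup, the host backuphost and the snapshot name are placeholders:
  zfs send tank/data@2024-01-01 | zfs recv backup/data                     # to another local pool
  zfs send tank/data@2024-01-01 | ssh backuphost zfs recv backup/data      # over the network
Incremental streams (zfs send -i) can be used for subsequent snapshots.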
File Sharing
ZFS integrates with the operating system's NFS, CIFS and iSCSI servers; it does not implement its own server but reuses existing software. However, iSCSI integration is not yet available on Linux. It is recommended to enable xattr=sa and dnodesize=auto for these use cases.
NFS shares
To share a dataset through NFS, the nfs-kernel-server package needs to be installed:
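For example, as root:
  apt install nfs-kernel-server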
Set the recommended properties on the target ZFS file system:
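A sketch, assuming the dataset tank/data from the earlier sections:
  zfs set xattr=sa tank/data
  zfs set dnodesize=auto tank/data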
Configure a very simple NFS share (read/write for 192.168.0.0/24, read only for 10.0.0.0/8):
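A sketch using the sharenfs property, whose value is passed to the NFS server as export options:
  zfs set sharenfs='rw=@192.168.0.0/24,ro=@10.0.0.0/8' tank/data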
Verify the share is exported successfully:
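Either of the following should list the export:
  showmount -e 127.0.0.1
  exportfs -v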
Stop the NFS share:
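A sketch:
  zfs set sharenfs=off tank/data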
CIFS shares
CIFS is a dialect of the Server Message Block (SMB) protocol and can be used on Windows, VMS, several versions of Unix, and other operating systems.
To share a dataset through CIFS, the samba package needs to be installed:
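For example, as root:
  apt install samba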
Because Microsoft Windows is not case sensitive, it is recommended to set casesensitivity=mixed on the dataset to be shared; this property can only be set at creation time:
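A sketch, creating a new dataset for sharing (the name tank/share is a placeholder):
  zfs create -o casesensitivity=mixed -o xattr=sa -o dnodesize=auto tank/share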
Configure a very simple CIFS share (read/write for 192.168.0.0/24, read only for 10.0.0.0/8):
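A sketch: the sharesmb property publishes the dataset through Samba, while per-network restrictions such as those above are normally configured on the Samba side (for example with hosts allow in smb.conf) rather than in the property itself:
  zfs set sharesmb=on tank/share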
Verify the share is exported successfully:
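For example, listing the shares anonymously:
  smbclient -L localhost -N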
Stop the CIFS share:
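A sketch:
  zfs set sharesmb=off tank/share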
Encryption
ZFS native encryption has been available since the ZoL 0.8.0 release. For any older version the alternative solution is to wrap ZFS in LUKS (see cryptsetup). Creating an encrypted dataset is straightforward, for example:
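A sketch, creating a passphrase-encrypted dataset (the name tank/secret matches the example further below):
  zfs create -o encryption=on -o keyformat=passphrase tank/secret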
ZFS will prompt you to enter the passphrase. Alternatively, the key location can be specified with the 'keylocation' property.
ZFS can also encrypt a dataset during 'recv':
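A sketch; because stdin is occupied by the replication stream, the key is read from a file here (the key file path and target dataset are placeholders):
  zfs send tank/data@2024-01-01 | zfs recv -o encryption=on -o keyformat=passphrase -o keylocation=file:///etc/zfs/keys/backup.key backup/data-encrypted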
Before mounting an encrypted dataset, its key has to be loaded first (zfs load-key tank/secret). 'zfs mount' provides a shortcut for the two steps:
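A sketch:
  zfs load-key tank/secret && zfs mount tank/secret    # explicit two steps
  zfs mount -l tank/secret                             # -l loads the key and mounts in one go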
Interoperability
The last version of ZFS released from OpenSolaris is zpool v28. After that, Oracle decided not to publish further updates, so version 28 has the best interoperability across all implementations. This is also the last pool version zfs-fuse supports.

Later it was decided that the open source implementations would stick to zpool v5000 and track any future changes with feature flags. This is an incompatible change from the closed source successor, and v28 will remain the last interoperable pool version.
By default new pools are created with all supported features enabled (use the -d option to disable them); if you want a pool of version 28:
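A sketch (disk names are placeholders):
  zpool create -o version=28 tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB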
All known OpenZFS implementations support zpool v5000 and feature flags in their major stable versions; this includes illumos, FreeBSD, ZFS on Linux and OpenZFS on OS X. There are differences in the supported features among these implementations: for example, support for the large_dnode feature flag was first introduced on Linux, while spacemap_v2 was not supported on Linux until ZoL 0.8.x. There are more features with differing status beyond feature flags; for example, xattr=sa is only available on Linux and OS X, whereas TRIM was not supported on Linux until ZoL 0.8.x.
Advanced Topics
These are not really advanced topics like the internals of ZFS and storage, but rather some topics that are not relevant to everyone.
- 64-bit hardware and kernel is recommended. ZFS wants a lot of memory (as well as address space) to work best, and it was developed with the assumption of being 64-bit only from the beginning. It is possible to use ZFS in 32-bit environments, but a lot of care must be taken by the user.
- Use ashift=12 or ashift=13 when creating the pool if applicable (although ZFS can detect this correctly in most cases). The value of ashift is an exponent of 2 and should be aligned to the physical block size of the disks, for example 2^9=512, 2^12=4096, 2^13=8192. Some disks report a logical block size of 512 bytes while having a 4KiB physical block size (so-called 512e drives), and some SSDs have an 8KiB physical block size. See the sketch after this list.
- Enable compression unless you are absolutely paranoid, because ZFS skips compressing blocks that it finds do not compress well, and compressed blocks can improve IO efficiency.
- Install as much RAM as financially feasible. ZFS has an advanced caching design which can take advantage of a lot of memory to improve performance. This cache is called the Adaptive Replacement Cache (ARC).
- Block level deduplication is scary when RAM is expensive and limited, but the feature is increasingly promoted in professional storage solutions, since it can perform impressively in scenarios like storing VM disks that share common ancestors. Because the deduplication table is part of the ARC, it is possible to use a fast L2ARC (NVMe SSD) to mitigate a lack of RAM. A typical space requirement is 2-5 GB of ARC/L2ARC per 1TB of disk, so if you are building a storage system of 1PB raw capacity, at least 1TB of L2ARC space should be planned for deduplication (minimum size, assuming the pool is mirrored).
- ECC RAM is always preferred. ZFS uses checksums to ensure data integrity, which depends on system memory being correct. This does not mean you should turn to other file systems when ECC memory is not possible, but it opens the door to failing to detect silent data corruption when the RAM generates random errors unexpectedly. If you are building a serious storage solution, ECC RAM is required.
- Store extended attributes as system attributes (Linux only). With xattr=on (the default), ZFS stores extended attributes in hidden sub-directories, which can hurt performance; xattr=sa stores them in the dnode instead.
- Set dnodesize=auto for non-root datasets. This allows ZFS to automatically determine the dnode size, which is useful if the dataset uses the xattr=sa property setting and the workload makes heavy use of extended attributes (SELinux-enabled systems, Lustre servers, and Samba/NFS servers). This setting relies on the large_dnode feature flag, which may not be supported on all OpenZFS platforms; please also note that GRUB does not yet support this feature.
- Thin provisioning allows a volume to use space up to its limit without reserving any resources until they are actually demanded, making over-provisioning possible, at the risk of being unable to allocate space when the pool gets full. It is usually considered a way of allowing flexible management and improving the space efficiency of the backing storage.
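A small sketch tying several of the points above together; the pool and dataset names are placeholders:
  lsblk -o NAME,PHY-SEC,LOG-SEC                         # check the physical sector size of the disks
  zpool create -o ashift=12 tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB
  zfs set compression=lz4 tank                          # enable compression pool-wide
  zfs create -o xattr=sa -o dnodesize=auto tank/data    # xattr=sa and dnodesize=auto on a non-root dataset
  zfs create -s -V 100G tank/thinvol                    # a thin provisioned (sparse) volume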
See Also
Aaron Toponce's ZFS on Linux User Guide
The Z File System (ZFS) from FreeBSD handbook
FAQ and Debian section by ZFS on Linux Wiki
ZFS article on Archlinux Wiki
ZFS article on Gentoo Wiki
Oracle Solaris ZFS Administration Guide (HTML, PDF)
zpool(8), zfs(8), zfs-module-parameters(5), zpool-features(5), zdb(8), zfs-events(5), zfs-fuse(8)
Contents
- Using ZFS Storage Plugin (via Proxmox VE GUI or shell)
- Misc
- Example configurations for running Proxmox VE with ZFS
- Troubleshooting and known issues
Using ZFS Storage Plugin (via Proxmox VE GUI or shell)
After the ZFS pool has been created, you can add it with the Proxmox VE GUI or CLI.
Adding a ZFS storage via CLI
To create it via the CLI use:
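A sketch using the Proxmox VE storage manager CLI; the storage ID and pool name are placeholders:
  pvesm add zfspool local-zfs -pool tank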
Adding a ZFS storage via GUI
To add it with the GUI: go to the datacenter, add storage, and select ZFS.
Misc
QEMU disk cache mode
If you get the warning:
or a warning that the filesystem does not support O_DIRECT, set the disk cache type of your VM from none to writeback.
LXC with ACL on ZFS

By default, ZFS stores ACLs as hidden files on the filesystem. This reduces performance enormously, and with several thousand files a system can feel unresponsive. Storing the xattrs in the dnode (as system attributes) resolves this performance issue.
Modifications to make:
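A sketch of the property changes, applied to the dataset backing the container (the dataset name is a placeholder; mind the warning about rpool below):
  zfs set acltype=posixacl tank/subvol-100-disk-0
  zfs set xattr=sa tank/subvol-100-disk-0
  zfs set dnodesize=auto tank/subvol-100-disk-0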
Warning: Do not set dnodesize on rpool, because GRUB is not able to handle a different size. See the bug entry https://savannah.gnu.org/bugs/?func=detailitem&item_id=48885
Example configurations for running Proxmox VE with ZFS
Install on a high performance system
As of 2013 and later, high performance servers have 16-64 cores, 256GB-1TB RAM and potentially many 2.5" disks and/or a PCIe based SSD with half a million IOPS. High performance systems benefit from a number of custom settings; for example, enabling compression typically improves performance.
- If you have a good number of disks, keep them organized by using aliases. Edit /etc/zfs/vdev_id.conf to prepare aliases for the disk devices found in /dev/disk/by-id/ :
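A sketch of what the file might look like; the by-id names are placeholders for your own devices:
  alias a1 /dev/disk/by-id/ata-DISK_SERIAL_1
  alias a2 /dev/disk/by-id/ata-DISK_SERIAL_2
  alias a3 /dev/disk/by-id/ata-DISK_SERIAL_3
  alias a4 /dev/disk/by-id/ata-DISK_SERIAL_4
Afterwards run udevadm trigger so the aliases appear under /dev/disk/by-vdev/.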
- Use flash for caching/logs. If you have only one SSD, use parted or gdisk to create a small partition for the ZIL (ZFS intent log) and a larger one for the L2ARC (ZFS read cache on disk). Make sure that the ZIL is on the first partition. In our case we have an Express Flash PCIe SSD with 175GB capacity and set up a 25GB ZIL partition and a 150GB L2ARC cache partition.
- edit /etc/modprobe.d/zfs.conf to apply several tuning options for high performance servers:
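A sketch of the kind of options that go into this file; the ARC limit below (8 GiB) is an arbitrary example, tune it to your RAM:
  # /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=8589934592
Run update-initramfs -u afterwards so the setting is also applied in the initramfs.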
- create a zpool of striped mirrors (equivalent to RAID10) with log device and cache and always enable compression:
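A sketch, using the aliases defined in vdev_id.conf and the two SSD partitions mentioned above (device paths are placeholders):
  zpool create -o ashift=12 tank mirror /dev/disk/by-vdev/a1 /dev/disk/by-vdev/a2 mirror /dev/disk/by-vdev/a3 /dev/disk/by-vdev/a4 log /dev/nvme0n1p1 cache /dev/nvme0n1p2
  zfs set compression=lz4 tank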
- check the status of the newly created pool:
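For example:
  zpool status tank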
Using PVE 2.3 on a 2013 high performance system with ZFS you can install Windows Server 2012 Datacenter Edition with GUI in just under 4 minutes.
Troubleshooting and known issues
ZFS packages are not installed
If you upgraded to 3.4 or later, the zfsutils package is not installed by default. You can install it with apt:
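For example (on newer releases the package is named zfsutils-linux):
  apt-get update
  apt-get install zfsutils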
Grub boot ZFS problem
- Symptoms: stuck at boot with a blinking prompt.
- Reason: with ZFS RAID it can happen that your mainboard does not initialize all your disks correctly, and GRUB will wait for all RAID disk members - and fail. This can happen with more than 2 disks in a ZFS RAID configuration - we saw this on some boards with ZFS RAID-0/RAID-10.
Boot fails and goes into busybox
If booting fails with something like
it is because zfs is invoked too soon (this has happened sometimes when connecting an SSD for a future ZIL configuration). To prevent it there have been some suggestions in the forum. Try to boot following the suggestions of busybox or searching the forum, and try ONE of the following:
a) edit /etc/default/grub and add 'rootdelay=10' at GRUB_CMDLINE_LINUX_DEFAULT (i.e. GRUB_CMDLINE_LINUX_DEFAULT='rootdelay=10 quiet') and then issue a # update-grub
b) edit /etc/default/zfs, set ZFS_INITRD_PRE_MOUNTROOT_SLEEP='4', and then issue a 'update-initramfs -k 4.2.6-1-pve -u'
Snapshot of LXC on ZFS
If you can't create a snapshot of an LXC container on ZFS and you get the following message:
you can run the following commands:
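A sketch, creating a dedicated dataset for the dump temporary files (the pool name rpool is an assumption, adjust to your setup):
  zfs create -o mountpoint=/mnt/vztmp rpool/vztmp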
Now set /mnt/vztmp as the tmpdir in your /etc/vzdump.conf.
Replacing a failed disk in the root pool
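A generic sketch of the usual approach (device names are placeholders; on a bootable rpool the bootloader must also be reinstalled on the new disk):
  zpool replace rpool /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
  zpool status rpool    # wait for the resilver to finish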
Glossary
- ZPool is the logical unit built from the underlying disks; it is what zfs allocates from.
- ZVol is an emulated block device provided by ZFS.
- ZIL is the ZFS Intent Log; a small, fast, dedicated block device for it lets ZFS complete synchronous writes faster.
- ARC is the Adaptive Replacement Cache and is located in RAM; it is the level 1 cache.
- L2ARC is the Layer 2 Adaptive Replacement Cache and should be on a fast device (like an SSD).
Further readings about ZFS
- https://www.freebsd.org/doc/handbook/zfs.html (even if written for FreeBSD, of course, I found this doc extremely clear even for less 'techie' admins [note by m.ardito])
- https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/ (and all other pages linked there)
and this has some very important information to know before implementing zfs on a production system.
Very well written manual pages
