Saturday, February 29, 2020
CentOS NFS Firewalld rules
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
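A quick way to confirm the services are active after the reload:
firewall-cmd --list-services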
Thursday, February 13, 2020
Iozone & dd on the Gigabyte AORUS NVMe Gen4 1TB SSD
The test system is an ASUS TUF Gaming X570-Plus motherboard (https://www.asus.com/us/Motherboards/TUF-GAMING-X570-PLUS/) with an AMD Ryzen 5 3600 (https://www.amd.com/en/products/cpu/amd-ryzen-5-3600) and 64 GB of G.Skill RAM (F4-3200C16-32GTZN).
The drive under test is the 1TB Gigabyte AORUS NVMe Gen4 SSD: https://www.gigabyte.com/us/Solid-State-Drive/AORUS-NVMe-Gen4-SSD-1TB
Iozone: Performance Test of File I/O
Version $Revision: 3.489 $
Compiled for 64 bit mode.
Build: linux-AMD64
Most importantly, the drive is unmounted and re-mounted for each test, which clears the buffer cache. Running watch -n 0.2 free -h confirms the desired effect.
The exact command used to run the tests is:
./iozone -e -r 4 -r 8 -r 16 -r 32 -r 64 -r 128 -r 512 -r 1024 -r 2048 -r 4096 -r 8192 -r 16384 -s 4g -i 0 -i 1 -i 2 -i 8 -f /apps/tfile -U /apps
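What the options mean (per the iozone documentation):
# -e          include flush (fsync/fflush) in the timing calculations
# -r N        record size in kB; one pass per listed size
# -s 4g       use a 4 GB test file
# -i 0/1/2/8  select tests: write/rewrite, read/reread, random read/write, mixed workload
# -f          path of the temporary test file
# -U /apps    unmount and remount this mount point before each test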
CentOS Linux release 8.1.1911 (Core)
4.18.0-147.5.1.el8_1.x86_64
mkfs.ext4 /dev/nvme0n1
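For -U to be able to remount between runs, the mount point has to be resolvable by name, i.e. listed in /etc/fstab (a minimal sketch; the original entry was not shown):
echo '/dev/nvme0n1 /apps ext4 defaults 0 0' >> /etc/fstab
mkdir -p /apps
mount /apps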
All throughput values below appear to be in MB/s (consistent with the ~2 GB/s dd results further down).
testfile MB | reclen kB | write | rewrite | read | reread | random read | random write |
4096 | 4 | 1609 | 1892 | 2391 | 2291 | 98 | 1672 |
4096 | 8 | 1669 | 1972 | 2386 | 2289 | 167 | 1831 |
4096 | 16 | 1700 | 2049 | 2398 | 2299 | 256 | 1947 |
4096 | 32 | 1720 | 2075 | 2395 | 2302 | 314 | 1102 |
4096 | 64 | 1733 | 2079 | 2301 | 2303 | 503 | 2048 |
4096 | 128 | 1723 | 2061 | 2303 | 2298 | 728 | 2055 |
4096 | 512 | 1659 | 1953 | 2406 | 2309 | 1747 | 1969 |
4096 | 1024 | 1657 | 1975 | 3458 | 3405 | 2176 | 1968 |
4096 | 2048 | 1661 | 1972 | 2304 | 2307 | 2796 | 1961 |
4096 | 4096 | 1669 | 1964 | 3879 | 3876 | 3255 | 1955 |
4096 | 8192 | 1668 | 1952 | 4143 | 3866 | 3732 | 1948 |
4096 | 16384 | 1605 | 1898 | 4082 | 4079 | 3824 | 1901 |
mkfs.ext4 /dev/nvme0n1
testfile MB | reclen kB | write | rewrite | read | reread | random read | random write |
4096 | 4 | 1674 | 1999 | 2300 | 2298 | 97 | 1734 |
4096 | 8 | 1746 | 2086 | 2302 | 2302 | 167 | 1911 |
4096 | 16 | 1787 | 2124 | 2302 | 2302 | 254 | 2018 |
4096 | 32 | 1808 | 2157 | 2302 | 2305 | 314 | 2105 |
4096 | 64 | 1812 | 2182 | 2304 | 2298 | 502 | 2144 |
4096 | 128 | 1801 | 2153 | 2305 | 2299 | 808 | 2122 |
4096 | 512 | 1735 | 2037 | 2315 | 2378 | 1819 | 2028 |
4096 | 1024 | 1735 | 2046 | 3500 | 3456 | 2157 | 2034 |
4096 | 2048 | 1748 | 2036 | 2310 | 2307 | 2778 | 2042 |
4096 | 4096 | 1745 | 2031 | 3885 | 3903 | 3249 | 2030 |
4096 | 8192 | 1731 | 2016 | 4008 | 3909 | 3741 | 2028 |
4096 | 16384 | 1674 | 1977 | 4072 | 4059 | 3997 | 1979 |
testfile MB | reclen kB | write | rewrite | read | reread | random read | random write |
4096 | 4 | 1659 | 1980 | 2394 | 2261 | 96 | 1735 |
4096 | 8 | 1737 | 2082 | 2342 | 2291 | 117 | 1922 |
4096 | 16 | 1724 | 2133 | 2398 | 2393 | 252 | 2028 |
4096 | 32 | 1805 | 2157 | 2398 | 2298 | 310 | 2100 |
4096 | 64 | 1815 | 2175 | 2302 | 2297 | 493 | 2133 |
4096 | 128 | 1807 | 2087 | 2388 | 2295 | 838 | 2142 |
4096 | 512 | 1736 | 2048 | 2439 | 2313 | 1756 | 2038 |
4096 | 1024 | 1736 | 2035 | 3471 | 3325 | 2115 | 2047 |
4096 | 2048 | 1732 | 2043 | 2403 | 2404 | 2680 | 2050 |
4096 | 4096 | 1745 | 2046 | 3879 | 3896 | 3260 | 2045 |
4096 | 8192 | 1739 | 2027 | 3975 | 3884 | 3727 | 2032 |
4096 | 16384 | 1675 | 1978 | 4071 | 4055 | 3983 | 1985 |
testfile MB | reclen kB | write | rewrite | read | reread | random read | random write |
4096 | 4 | 1583 | 1857 | 2397 | 2297 | 97 | 1647 |
4096 | 8 | 1639 | 1957 | 2399 | 2258 | 167 | 1819 |
4096 | 16 | 1687 | 1981 | 2386 | 2385 | 248 | 1920 |
4096 | 32 | 1697 | 2021 | 2393 | 2384 | 316 | 2000 |
4096 | 64 | 1722 | 2048 | 2395 | 2397 | 509 | 2031 |
4096 | 128 | 1711 | 2010 | 2397 | 2298 | 841 | 2016 |
4096 | 512 | 1644 | 1889 | 2713 | 2585 | 1934 | 1939 |
4096 | 1024 | 1659 | 1919 | 3490 | 3468 | 2258 | 1942 |
4096 | 2048 | 1649 | 1936 | 2402 | 2404 | 2766 | 1930 |
4096 | 4096 | 1649 | 1880 | 3920 | 3932 | 3320 | 1935 |
4096 | 8192 | 1642 | 1867 | 4035 | 3986 | 3727 | 1914 |
4096 | 16384 | 1598 | 1871 | 4124 | 4118 | 4064 | 1870 |
mkfs.ext4 /dev/nvme0n1
testfile MB | reclen kB | write | rewrite | read | reread | random read | random write |
4096 | 4 | 1660 | 1938 | 2395 | 2392 | 99 | 1716 |
4096 | 8 | 1723 | 2025 | 2397 | 2389 | 166 | 1887 |
4096 | 16 | 1770 | 2028 | 2305 | 2393 | 255 | 1997 |
4096 | 32 | 1781 | 2067 | 2302 | 2390 | 313 | 2074 |
4096 | 64 | 1800 | 2143 | 2396 | 2395 | 518 | 2122 |
4096 | 128 | 1792 | 2123 | 2398 | 2393 | 800 | 2122 |
4096 | 512 | 1724 | 1974 | 2423 | 2399 | 1821 | 2009 |
4096 | 1024 | 1725 | 2019 | 3483 | 3465 | 2210 | 2012 |
4096 | 2048 | 1727 | 1964 | 2401 | 2404 | 2818 | 2006 |
4096 | 4096 | 1729 | 2023 | 3909 | 3931 | 3307 | 1997 |
4096 | 8192 | 1718 | 2005 | 4038 | 4040 | 3743 | 1971 |
4096 | 16384 | 1660 | 1912 | 4136 | 4111 | 4071 | 1938 |
./iozone -r 4 -r 8 -r 16 -r 32 -r 64 -r 128 -r 512 -r 1024 -r 2048 -r 4096 -r 8192 -r 16384 -s 4g -i 0 -i 1 -i 2 -i 8 -f /apps/tfile -U /apps
The biggest difference here is that the -e option is not specified, so flush times (fsync, fflush) are excluded from the timing calculations.
testfile MB | reclen kB | write | rewrite | read | reread | random read | random write |
4096 | 4 | 2782 | 3831 | 2397 | 2389 | 99 | 2939 |
4096 | 8 | 2997 | 4164 | 2397 | 2390 | 168 | 3507 |
4096 | 16 | 3124 | 4427 | 2398 | 2395 | 259 | 3974 |
4096 | 32 | 3211 | 4523 | 2400 | 2393 | 309 | 4314 |
4096 | 64 | 3296 | 4589 | 2398 | 2395 | 507 | 4467 |
4096 | 128 | 3217 | 4574 | 2398 | 2392 | 799 | 4537 |
4096 | 512 | 3037 | 4079 | 2487 | 2402 | 1808 | 4032 |
4096 | 1024 | 3022 | 4093 | 3499 | 3464 | 2292 | 4088 |
4096 | 2048 | 3025 | 4063 | 2406 | 2404 | 2857 | 4068 |
4096 | 4096 | 3023 | 4065 | 3917 | 3934 | 3282 | 4067 |
4096 | 8192 | 3003 | 4032 | 4029 | 4024 | 3708 | 4061 |
4096 | 16384 | 2836 | 3804 | 4120 | 4110 | 3885 | 3817 |
dd if=/dev/zero of=/apps/file1.txt count=50240 bs=1M conv=fsync
50240+0 records in
50240+0 records out
52680458240 bytes (53 GB, 49 GiB) copied, 26.8013 s, 2.0 GB/s
dd if=/dev/zero of=/apps/file2.txt count=50240 bs=1M
50240+0 records in
50240+0 records out
52680458240 bytes (53 GB, 49 GiB) copied, 26.3247 s, 2.0 GB/s
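A corresponding read test would drop the page cache first (a minimal sketch, reusing the file written above):
echo 3 > /proc/sys/vm/drop_caches
dd if=/apps/file1.txt of=/dev/null bs=1M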
Monday, February 10, 2020
CentOS 8 & Bonding 2 x 10GbE ports
more /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="downdelay=0 miimon=100 mode=balance-alb updelay=0"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6_DISABLED=yes
IPV6INIT=no
NAME=bond0
UUID=1a59e993-a523-40b2-92f3-a59aef54d932
DEVICE=bond0
ONBOOT=yes
MTU=9000
MACADDR=00:51:82:11:22:02
more /etc/sysconfig/network-scripts/ifcfg-enp4s0f0
TYPE=Ethernet
NAME=enp4s0f0
UUID=e4d0c588-1938-4c5a-b8ec-f4c2af13991c
DEVICE=enp4s0f0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
more /etc/sysconfig/network-scripts/ifcfg-enp4s0f1
TYPE=Ethernet
NAME=enp4s0f1
UUID=a1422007-0895-4817-97b5-2b07d960c7c1
DEVICE=enp4s0f1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Then activate the connections:
ifup bond0
ifup enp4s0f0
ifup enp4s0f1
Also configure a LAG group on the switch for the ports in scope.
Run the following to check the status of the connection:
[root@node-2 ~]# nmcli
bond0: connected to bond0
"bond0"
bond, 00:51:82:11:22:02, sw, mtu 9000
ip4 default
inet4 192.168.1.110/16
route4 0.0.0.0/0
route4 192.168.0.0/16
enp4s0f0: connected to enp4s0f0
"Intel X550T"
ethernet (ixgbe), 00:51:82:11:22:02, hw, mtu 9000
master bond0
enp4s0f1: connected to enp4s0f1
"Intel X550T"
ethernet (ixgbe), A0:36:9F:27:B5:52, hw, mtu 9000
master bond0
enp6s0: disconnected
"Realtek RTL8111/8168/8411"
1 connection available
ethernet (r8169), A8:5E:45:E2:F1:04, hw, mtu 1500
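The kernel's own view of the bond is also worth checking:
cat /proc/net/bonding/bond0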
Tuning the TCP stack for data transfer on 10GbE or faster networks
Some background information: https://community.rti.com/static/documentation/perftest/3.0/tuning_os.html
10GbE, Centos 7: /etc/sysctl.d/99-sysctl.conf
net.core.rmem_default = 524287
net.core.rmem_max = 16777216
net.core.wmem_default = 524287
net.core.wmem_max = 16777216
https://community.rti.com/kb/how-can-i-improve-my-throughput-performance-linux
echo "8388608" > /proc/sys/net/ipv4/ipfrag_high_threshold
Some more background: https://mapr.com/docs/51/AdministratorGuide/Configure-NFS-Write-Perfo-Thekerneltunablevalu-d3e72.html
echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries
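To verify the running value (the proc entry appears once the sunrpc module is loaded):
cat /proc/sys/sunrpc/tcp_slot_table_entries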
Some tuning ideas for Windows: https://www.drastic.tv/support-59/supporttipstechnical/320-optimizing-windows-networking
https://www.cyberciti.biz/faq/linux-tcp-tuning/
https://www.ibm.com/support/knowledgecenter/linuxonibm/liaag/wkvm/wkvm_c_tune_tcpip.htm
/etc/sysctl.d/10-network.conf
#https://www.cyberciti.biz/faq/linux-tcp-tuning/
net.core.wmem_max=33554432
net.core.rmem_max=33554432
net.ipv4.tcp_rmem=2097152 16777216 33554432
net.ipv4.tcp_wmem=2097152 16777216 33554432
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_timestamps=1
net.ipv4.tcp_sack=1
net.ipv4.tcp_no_metrics_save=1
net.core.netdev_max_backlog=5000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_low_latency=1
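Apply the file without a reboot:
sysctl -p /etc/sysctl.d/10-network.conf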
Thursday, February 6, 2020
Install mpi4py
export CC=/usr/lib64/openmpi/bin/mpicc
pip install mpi4py
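A quick smoke test, assuming mpirun from the same OpenMPI build:
/usr/lib64/openmpi/bin/mpirun -np 4 python -c "from mpi4py import MPI; c = MPI.COMM_WORLD; print('rank', c.Get_rank(), 'of', c.Get_size())"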
Thursday, January 23, 2020
Counting lines of code in shell scripts
> find . -name '*.sh' | xargs wc -l | sort -nr
Or, without sorting:
> find . -name '*.sh' | xargs wc -l
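If any script paths contain spaces, a null-delimited variant is safer:
> find . -name '*.sh' -print0 | xargs -0 wc -l | sort -nr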
Friday, January 17, 2020
When it becomes handy to disable NFS attribute caching
If changes made by one client need to be reflected on other clients with finer granularity, the attribute cache lifetime can be reduced to one second using the actimeo option, which sets both the regular file and directory minimum and maximum lifetimes to the same value:
> mount -t nfs -o actimeo=1 server:/export /mnt
This has the same effect as:
> mount -t nfs -o acregmin=1,acregmax=1,acdirmin=1,acdirmax=1 server:/export /mnt
To disable the caching completely:
> mount -t nfs -o actimeo=0 server:/export /mnt
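The effective attribute-cache settings can be verified on the client with:
> nfsstat -m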
Thursday, January 16, 2020
Docker on CentOS 8
> dnf list docker-ce --showduplicates | sort -r
> sudo dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
> sudo dnf install docker-ce
> systemctl enable docker
> systemctl start docker
> systemctl status docker
> cd ~
> mkdir docker-centos
> cd docker-centos
> echo "FROM centos" > Dockerfile
> docker build .
> docker run centos uname -a
> docker run centos cat "/etc/redhat-release"
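Note that docker build . leaves the image untagged, so the runs above use the plain centos base image. To run the image that was just built, give it a name (mycentos is an arbitrary tag):
> docker build -t mycentos .
> docker run mycentos cat /etc/redhat-release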
Sunday, November 17, 2019
Linux ZFS and disk configuration: a collection of notes
zpool iostat -v 1
#mounting an encrypted filesystem
zfs mount -l -a
#adding slog to a pool
zpool add zfs log /dev/nvme0n1
#adding l2arc to a pool
zpool add zfs cache /dev/nvme0n2
#checking all zfs module parameters
modinfo zfs
#check atime enabled?
zfs get all |grep atime
#disable atime
zfs set atime=off zfs
zfs set atime=off zfs_sata
#check serial number of disk
[root@node-2 ~]# sginfo -s /dev/sda
Serial Number 'PDNLH0BRH9F2DJ'
#check physical and logical block size of disks
lsblk -o NAME,PHY-SEC,LOG-SEC,SIZE,TYPE,ROTA
#physical vs. logical block size
- physical sector size -> the unit the drive actually reads and writes internally
- logical sector size -> the smallest unit the drive can address for reads and writes
ashift is the exponent to the base of 2 of the sector size: a 512-byte physical sector is 2^9 -> ashift=9, and 4k is 2^12 -> ashift=12
#Example for 4x2TB rotational disks
[root@node-2 ~]# lsblk -o NAME,PHY-SEC,LOG-SEC,SIZE,TYPE,ROTA |grep 'disk 1'
sdk 4096 512 1.8T disk 1
sdi 4096 512 1.8T disk 1
sdl 4096 512 1.8T disk 1
sdj 4096 512 1.8T disk 1
[root@node-2 ~]# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx. 1 root root 9 Nov 17 23:14 ata-ST2000LM015-2E8174_WDZ3WZFN -> ../../sdj
lrwxrwxrwx. 1 root root 9 Nov 17 23:15 ata-ST2000LM015-2E8174_WDZAAC2H -> ../../sdi
lrwxrwxrwx. 1 root root 9 Nov 17 23:15 ata-ST2000LX001-1RG174_WDZASXRK -> ../../sdk
lrwxrwxrwx. 1 root root 9 Nov 17 23:15 ata-ST2000LX001-1RG174_ZDZ4TJK2 -> ../../sdl
#creating the pool
zpool create zfs_sata -o ashift=12 mirror ata-ST2000LM015-2E8174_WDZAAC2H ata-ST2000LM015-2E8174_WDZ3WZFN mirror ata-ST2000LX001-1RG174_WDZASXRK ata-ST2000LX001-1RG174_ZDZ4TJK2
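#confirm the pool picked up the intended ashift (common zdb idiom)
zdb -C zfs_sata | grep ashift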
#creating two volumes with different record sizes
zfs create -o recordsize=16k zfs_sata/mfdatabase
zfs create -o recordsize=1024k zfs_sata/mfjournal
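#verify the record sizes
zfs get recordsize zfs_sata/mfdatabase zfs_sata/mfjournal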
#some sequential testing
dd if=/dev/zero of=/zfs_sata/mfdatabase/tempfile bs=1M count=1024; sync
dd if=/dev/zero of=/zfs_sata/mfdatabase/tempfile2 bs=16k count=65536; sync
/sbin/sysctl -w vm.drop_caches=3
dd if=/zfs_sata/mfdatabase/tempfile of=/dev/null bs=1M count=1024
dd if=/zfs_sata/mfdatabase/tempfile2 of=/dev/null bs=16k count=65536
rm -rf /zfs_sata/mfdatabase/*
writes
bs=1M:  16.5013 s, 65.1 MB/s (expected slower)
bs=16k: 7.26762 s, 148 MB/s (expected faster)
reads
bs=1M:  54.6486 s, 19.6 MB/s (expected slower)
bs=16k: 59.2402 s, 18.1 MB/s (expected slower)
dd if=/dev/zero of=/zfs_sata/mfjournal/tempfile bs=1M count=1024; sync
dd if=/dev/zero of=/zfs_sata/mfjournal/tempfile2 bs=16k count=65536; sync
/sbin/sysctl -w vm.drop_caches=3
dd if=/zfs_sata/mfjournal/tempfile of=/dev/null bs=1M count=1024
dd if=/zfs_sata/mfjournal/tempfile2 of=/dev/null bs=16k count=65536
rm -rf /zfs_sata/mfjournal/*
writes
bs=1M:  12.1631 s, 88.3 MB/s (expected faster)
bs=16k: 8.75189 s, 123 MB/s (expected slower)
reads
bs=1M:  43.0267 s, 25.0 MB/s (expected faster)
bs=16k: 23.1101 s, 46.5 MB/s (expected faster)
#change zfs options for good
/etc/modprobe.d/zfs.conf
e.g. options zfs PARAMETER=VALUE
#change a zfs option on the fly, e.g. zfs_arc_max
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_arc_max
https://forums.freebsd.org/threads/howto-tuning-l2arc-in-zfs.29907/
l2arc_write_max: 8388608 # maximum number of bytes written to l2arc per feed
l2arc_write_boost: 8388608 # mostly only relevant in the first few hours after boot
l2arc_headroom: 2 # not sure
l2arc_feed_secs: 1 # l2arc feeding period
l2arc_feed_min_ms: 200 # minimum l2arc feeding period
l2arc_noprefetch: 1 # controls whether streaming data is cached or not
l2arc_feed_again: 1 # controls whether feed_min_ms is used or not
l2arc_norw: 1 # no reads and writes at the same time
/etc/modprobe.d/zfs.conf
#log
options zfs zfs_txg_timeout=30
#cache
options zfs zfs_arc_max=34359738368
options zfs l2arc_noprefetch=0
options zfs l2arc_write_max=1073741824
options zfs l2arc_write_boost=2147483648
options zfs zil_slog_limit=1073741824
#for an all-SSD pool, logbias could be changed, e.g.
zfs set logbias=throughput zfs/mydata
#look at performance
zpool iostat -v 1
#resources I read
- http://open-zfs.org/wiki/Performance_tuning
- https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters
- https://martin.heiland.io/2018/02/23/zfs-tuning
- https://www.svennd.be/tuning-of-zfs-module
- https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZIL
- http://www.nanowolk.nl/ext/2013_02_zfs_sequential_read_write_performance
- http://www.nanowolk.nl/ext/2013_02_zfs_random_iops_read_write_performance
- https://sites.google.com/site/ryanbabchishin/home/publications/changing-a-zvol-block-size-while-making-it-sparse-and-compressed
- https://utcc.utoronto.ca/~cks/space/blog/tech/AdvancedFormatDrives
- https://docs.oracle.com/cd/E23823_01/html/819-5461/gazss.html#indexterm-425
- https://zfs.datto.com/2017_slides/pinchuk.pdf
Thursday, November 14, 2019
Need to know drive properties on Windows
Open the command line as Administrator
> fsutil fsinfo ntfsinfo [drive letter]
Friday, September 13, 2019
Sort content of file and remove duplicates
> sort -u list.txt > list-sorted-unique.txt
And then I found that the first column still has duplicates
> cut -d ':' -f1 list-sorted-unique.txt | sort -u | wc -l
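To also collapse duplicates in the first column, a classic awk idiom keeps only the first occurrence of each key (output file name arbitrary):
> awk -F: '!seen[$1]++' list-sorted-unique.txt > list-first-col-unique.txt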
Saturday, September 7, 2019
Reporting with lsblk and specific columns
lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,SIZE,MODEL,SERIAL
NAME FSTYPE LABEL MOUNTPOINT SIZE MODEL SERIAL
sdf 465.8G WDC WDBNCE5000P 190476803028
├─sdf9 8M
└─sdf1 zfs_member zfs 465.8G
nvme0n1 931.5G Samsung SSD 970 EVO 1TB S467NX0KB02478V
├─nvme0n1p1 zfs_member 128G
└─nvme0n1p2 zfs_member 803.5G
sdo btrfs 931.5G ST1000LM048-2E71 WDEQTNE0
├─sdo1 zfs_member zfs_sata 931.5G
└─sdo9 btrfs 8M
sdd 465.8G WDC WDBNCE5000P 190476801481
├─sdd9 8M
└─sdd1 zfs_member zfs 465.8G
sdm btrfs 931.5G ST1000LM048-2E71 WDEN067Y
├─sdm1 zfs_member zfs_sata 931.5G
└─sdm9 btrfs 8M
sdb isw_raid_member 465.8G CT500MX500SSD1 1906E1E8FBC3
└─md126 465.8G
├─md126p2 LVM2_member 464.8G
│ ├─centos-swap swap 15.7G
│ ├─centos-home xfs /home 399.1G
│ └─centos-root xfs / 50G
└─md126p1 xfs /boot 1G
sdk 1.8T ST2000LX001-1RG1 WDZASXRK
├─sdk9 8M
└─sdk1 zfs_member zfs_sata 1.8T
sdi btrfs 1.8T ST2000LM015-2E81 WDZAAC2H
├─sdi9 btrfs 8M
└─sdi1 zfs_member zfs_sata 1.8T
sdq 465.8G Samsung SSD 850 S24CNXAGC07791V
├─sdq9 8M
└─sdq1 zfs_member zfs 465.8G
sdg 465.8G WDC WDBNCE5000P 190476800512
├─sdg9 8M
└─sdg1 zfs_member zfs 465.8G
sde 465.8G WDC WDBNCE5000P 190476800250
├─sde9 8M
└─sde1 zfs_member zfs 465.8G
sdn btrfs 931.5G ST1000LM048-2E71 WDEMXXPP
├─sdn1 zfs_member zfs_sata 931.5G
└─sdn9 btrfs 8M
sdc 465.8G WDC WDBNCE5000P 190476802105
├─sdc9 8M
└─sdc1 zfs_member zfs 465.8G
sdl 1.8T ST2000LX001-1RG1 ZDZ4TJK2
├─sdl1 zfs_member zfs_sata 1.8T
└─sdl9 8M
nvme1n1 477G PCIe SSD 19012351200132
├─nvme1n1p2 zfs_member zfs 221G
└─nvme1n1p1 swap [SWAP] 256G
sda isw_raid_member 465.8G CT500MX500SSD1 1906E1E8FE6C
└─md126 465.8G
├─md126p2 LVM2_member 464.8G
│ ├─centos-swap swap 15.7G
│ ├─centos-home xfs /home 399.1G
│ └─centos-root xfs / 50G
└─md126p1 xfs /boot 1G
sdj btrfs 1.8T ST2000LM015-2E81 WDZ3WZFN
├─sdj9 btrfs 8M
└─sdj1 zfs_member zfs_sata 1.8T
sdr 465.8G CT500MX500SSD4 1909E1ED9C29
├─sdr9 8M
└─sdr1 zfs_member zfs 465.8G
sdh 465.8G WDC WDBNCE5000P 190476800028
├─sdh9 8M
└─sdh1 zfs_member zfs 465.8G
sdp btrfs 931.5G ST1000LM048-2E71 WDEPK5PK
├─sdp9 btrfs 8M
└─sdp1 zfs_member zfs_sata 931.5G