I have written a RAID configuration analysis tool to help me configure filers. It reports the mean time to data loss (MTTDL), the amount of storage available in a JBOD, the number of possible spares, and other useful information. I am releasing it in the hope that other folks who are trying to figure out how many disks to bundle in a RAID configuration (vdev) will find it useful.
Download
File | Size | Checksum | Version | Type |
---|---|---|---|---|
raid.py | 23K | 60577 27 | 1.2 | Python source code |
Sample Output
This report shows the trade-offs of having between 3 and 8 disks in a vdev (RAID configuration) for a number of different RAID options. It was created by running the following command.
```sh
% python raid.py -n 3-8 -s 2 \
    -d 'Seagate Barracuda ST2000DM001' \
    --mtbf 750000 --mttr 24
```
Here is the report.
```
MTTDL RAID Configuration Report
Seagate Barracuda ST2000DM001
   MTBF: 750,000 (85.6)
   MTTR: 24
   Disk Size: 2.000TB
   JBOD Capacity: 24

       MTTDL    MTTDL
 N  P  (hrs)    (yrs)      AFR    FT  DC %    DC Min  C BD  B  S   JDC Types
== == ======== ======== ========= == ====== ==== === == == == == ===== ============
 3  0  2.5e+05     28.5      3.5%  0 100.0%  6.0   1 24 21  7  3  42.0 RAID-0
 3  1 3.91e+09 4.46e+05 0.000224%  1  50.0%  3.0   2 24 21  7  3  21.0 RAID-1/10/01
 3  1 3.91e+09 4.46e+05 0.000224%  1  66.7%  4.0   2 24 21  7  3  28.0 RAID-5/Z1
 3  2 1.22e+14 1.39e+10 7.18e-09%  2  33.3%  2.0   3 24 21  7  3  14.0 RAID-6/Z2
 3  2 1.22e+14 1.39e+10 7.18e-09%  2  33.3%  2.0   3 24 21  7  3  14.0 RAID-10(M=2)
 4  0 1.88e+05     21.4     4.67%  0 100.0%  8.0   1 24 20  5  4  40.0 RAID-0
 4  1 1.95e+09 2.23e+05 0.000449%  1  50.0%  4.0   2 24 20  5  4  20.0 RAID-1/10/01
 4  1 1.95e+09 2.23e+05 0.000449%  1  75.0%  6.0   2 24 20  5  4  30.0 RAID-5/Z1
 4  2 3.05e+13 3.48e+09 2.87e-08%  2  50.0%  4.0   3 24 20  5  4  20.0 RAID-6/Z2
 4  3 9.54e+17 1.09e+14 9.19e-13%  3  25.0%  2.0   4 24 20  5  4  10.0 RAID-Z3
 4  3 9.54e+17 1.09e+14 9.19e-13%  3  25.0%  2.0   4 24 20  5  4  10.0 RAID-10(M=3)
 5  0  1.5e+05     17.1     5.84%  0 100.0% 10.0   1 24 20  4  4  40.0 RAID-0
 5  1 1.17e+09 1.34e+05 0.000748%  1  50.0%  5.0   2 24 20  4  4  20.0 RAID-1/10/01
 5  1 1.17e+09 1.34e+05 0.000748%  1  80.0%  8.0   2 24 20  4  4  32.0 RAID-5/Z1
 5  2 1.22e+13 1.39e+09 7.18e-08%  2  60.0%  6.0   3 24 20  4  4  24.0 RAID-6/Z2
 5  3 1.91e+17 2.18e+13 4.59e-12%  3  40.0%  4.0   4 24 20  4  4  16.0 RAID-Z3
 5  4 5.96e+21  6.8e+17 1.47e-16%  4  20.0%  2.0   5 24 20  4  4   8.0 RAID-10(M=4)
 6  0 1.25e+05     14.3     7.01%  0 100.0% 12.0   1 24 18  3  6  36.0 RAID-0
 6  1 7.81e+08 8.92e+04  0.00112%  1  50.0%  6.0   2 24 18  3  6  18.0 RAID-1/10/01
 6  1 7.81e+08 8.92e+04  0.00112%  1  83.3% 10.0   2 24 18  3  6  30.0 RAID-5/Z1
 6  2  6.1e+12 6.97e+08 1.44e-07%  2  66.7%  8.0   3 24 18  3  6  24.0 RAID-6/Z2
 6  3 6.36e+16 7.26e+12 1.38e-11%  3  50.0%  6.0   4 24 18  3  6  18.0 RAID-Z3
 6  5  3.1e+25 3.54e+21 2.82e-20%  5  16.7%  2.0   6 24 18  3  6   6.0 RAID-10(M=5)
 7  0 1.07e+05     12.2     8.18%  0 100.0% 14.0   1 24 21  3  3  42.0 RAID-0
 7  1 5.58e+08 6.37e+04  0.00157%  1  50.0%  7.0   2 24 21  3  3  21.0 RAID-1/10/01
 7  1 5.58e+08 6.37e+04  0.00157%  1  85.7% 12.0   2 24 21  3  3  36.0 RAID-5/Z1
 7  2 3.49e+12 3.98e+08 2.51e-07%  2  71.4% 10.0   3 24 21  3  3  30.0 RAID-6/Z2
 7  3 2.72e+16 3.11e+12 3.21e-11%  3  57.1%  8.0   4 24 21  3  3  24.0 RAID-Z3
 7  6 1.39e+29 1.58e+25 6.32e-24%  6  14.3%  2.0   7 24 21  3  3   6.0 RAID-10(M=6)
 8  0 9.38e+04     10.7     9.34%  0 100.0% 16.0   1 24 16  2  8  32.0 RAID-0
 8  1 4.19e+08 4.78e+04  0.00209%  1  50.0%  8.0   2 24 16  2  8  16.0 RAID-1/10/01
 8  1 4.19e+08 4.78e+04  0.00209%  1  87.5% 14.0   2 24 16  2  8  28.0 RAID-5/Z1
 8  2 2.18e+12 2.49e+08 4.02e-07%  2  75.0% 12.0   3 24 16  2  8  24.0 RAID-6/Z2
 8  3 1.36e+16 1.56e+12 6.43e-11%  3  62.5% 10.0   4 24 16  2  8  20.0 RAID-Z3
 8  7 5.41e+32 6.18e+28 1.62e-27%  7  12.5%  2.0   8 24 16  2  8   4.0 RAID-10(M=7)

Term    Definition
======= ================================================
AFR     Annualized Failure Rate in years
B       Number of RAID blocks
BD      Number of RAID block disks
C       JBOD capacity (number of disks)
DC      RAID data capacity (TB)
FT      Fault Tolerance: max bad disks with no data loss
JDC     JBOD data capacity in TB
Min     Minimum number of disks allowed
MTBF    Mean Time Between Failures in hours
MTTDL   Mean Time To Data Loss in hours
MTTR    Mean Time To Recover (repair) in hours
N       Number of disks in each RAID array
P       Parity
S       Spares, must be greater than zero
```
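The MTTDL columns are what the standard Markov-model approximation predicts for independent, exponentially distributed failures and repairs. Here is a minimal sketch of that formula (my own reconstruction for illustration, not code from raid.py) that reproduces the 3-disk rows above; the 8,766 hours-per-year constant matches the report's MTBF-in-years conversion.

```python
# Closed-form MTTDL approximation for an n-disk vdev with p parities:
#
#   MTTDL = MTBF^(p+1) / (n * (n-1) * ... * (n-p) * MTTR^p)
#
# This is a sketch of the model, not the tool's actual source.
HOURS_PER_YEAR = 24 * 365.25  # 8766, matching 750,000 hrs = 85.6 yrs

def mttdl_hours(n, p, mtbf, mttr):
    """Mean time to data loss, in hours, for an n-disk, p-parity vdev."""
    ways = 1
    for i in range(p + 1):
        ways *= n - i  # ordered ways to lose the fatal p+1 disks
    return mtbf ** (p + 1) / (ways * mttr ** p)

def afr_percent(mttdl_hrs):
    """Annualized failure rate (small-probability approximation)."""
    return 100.0 * HOURS_PER_YEAR / mttdl_hrs

# Reproduce the 3-disk rows: RAID-0 -> 2.5e+05 hrs,
# RAID-5/Z1 -> 3.91e+09 hrs, RAID-6/Z2 -> 1.22e+14 hrs.
for parity in (0, 1, 2):
    hrs = mttdl_hours(3, parity, 750_000, 24)
    print(f"p={parity}: {hrs:.3g} hrs, AFR {afr_percent(hrs):.3g}%")
```

Note how each extra parity disk multiplies MTTDL by roughly MTBF/MTTR, which is why the RAID-6/Z2 numbers look absurdly large; the model ignores correlated failures and unrecoverable read errors, so treat them as relative rankings rather than predictions.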
Sample CSV Output
This report shows the trade-offs of having between 3 and 8 disks in a vdev (RAID configuration) for a RAID-Z2 configuration, in CSV format for use in a spreadsheet. It was created by running the following command.
```sh
% python raid.py -n 3-8 -s 2 \
    -d 'Seagate Barracuda ST2000DM001' \
    --mtbf 750000 --mttr 24 \
    -f 'Z2' --csv
```
Here is the generated CSV data.
```
Disk,Seagate Barracuda ST2000DM001
Size,2.000
MTBF,750000
MTTR,24
,,N,Parity,MTTDL (hrs),MTTDL (yrs),AFR,FT,DC %,DC,Min,C,BD,B,S,JDC,Types
,,3,2,1.22e+14,1.39e+10,7.1762e-11,2,0.333,2.0,3,24,21,7,3,14.0,RAID-6/Z2
,,4,2,3.05e+13,3.48e+09,2.8705e-10,2,0.500,4.0,3,24,20,5,4,20.0,RAID-6/Z2
,,5,2,1.22e+13,1.39e+09,7.1762e-10,2,0.600,6.0,3,24,20,4,4,24.0,RAID-6/Z2
,,6,2,6.1e+12,6.97e+08,1.4352e-09,2,0.667,8.0,3,24,18,3,6,24.0,RAID-6/Z2
,,7,2,3.49e+12,3.98e+08,2.5117e-09,2,0.714,10.0,3,24,21,3,3,30.0,RAID-6/Z2
,,8,2,2.18e+12,2.49e+08,4.0187e-09,2,0.750,12.0,3,24,16,2,8,24.0,RAID-6/Z2
```
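If you would rather post-process this output in a script than paste it into a spreadsheet, a small sketch (mine, assuming only the layout shown above: key/value metadata lines of two fields, then a header row whose third field is "N", then data rows):

```python
import csv
import io

def read_report(text):
    """Split the CSV report into a metadata dict and a list of row dicts."""
    rows = list(csv.reader(io.StringIO(text)))
    meta = {r[0]: r[1] for r in rows if len(r) == 2}          # Disk, Size, ...
    header = next(r for r in rows if len(r) > 2 and r[2] == "N")
    data = [dict(zip(header, r)) for r in rows
            if len(r) > 2 and r[2] != "N"]                     # skip the header
    return meta, data

# A two-row slice of the report above, for demonstration.
sample = """Disk,Seagate Barracuda ST2000DM001
Size,2.000
MTBF,750000
MTTR,24
,,N,Parity,MTTDL (hrs),MTTDL (yrs),AFR,FT,DC %,DC,Min,C,BD,B,S,JDC,Types
,,3,2,1.22e+14,1.39e+10,7.1762e-11,2,0.333,2.0,3,24,21,7,3,14.0,RAID-6/Z2
,,8,2,2.18e+12,2.49e+08,4.0187e-09,2,0.750,12.0,3,24,16,2,8,24.0,RAID-6/Z2"""

meta, data = read_report(sample)
print(meta["MTBF"], data[0]["Types"], data[-1]["JDC"])  # 750000 RAID-6/Z2 24.0
```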
Report Comparing Storage for Z1, Z2, Z3 for n [3-8]
This report shows how Z1, Z2 and Z3 storage varies for different values of n.
```sh
#!/bin/bash
python raid.py -n 3-8 -s 2 \
    -d 'Seagate Barracuda ST2000DM001' \
    --mtbf 750000 --mttr 24 -f 'Z1' \
    --no-key
python raid.py -n 3-8 -s 2 \
    -d 'Seagate Barracuda ST2000DM001' \
    --mtbf 750000 --mttr 24 -f 'Z2' \
    --no-title --no-header --no-key
python raid.py -n 3-8 -s 2 \
    -d 'Seagate Barracuda ST2000DM001' \
    --mtbf 750000 --mttr 24 -f 'Z3' \
    --no-title --no-header --no-key
```
Here is the report.
```
MTTDL RAID Configuration Report
Seagate Barracuda ST2000DM001
   MTBF: 750,000 (85.6)
   MTTR: 24
   Disk Size: 2.000TB
   JBOD Capacity: 24

       MTTDL    MTTDL
 N  P  (hrs)    (yrs)      AFR    FT  DC %    DC Min  C BD  B  S   JDC Types
== == ======== ======== ========= == ====== ==== === == == == == ===== ============
 3  1 3.91e+09 4.46e+05 0.000224%  1  66.7%  4.0   2 24 21  7  3  28.0 RAID-5/Z1
 4  1 1.95e+09 2.23e+05 0.000449%  1  75.0%  6.0   2 24 20  5  4  30.0 RAID-5/Z1
 5  1 1.17e+09 1.34e+05 0.000748%  1  80.0%  8.0   2 24 20  4  4  32.0 RAID-5/Z1
 6  1 7.81e+08 8.92e+04  0.00112%  1  83.3% 10.0   2 24 18  3  6  30.0 RAID-5/Z1
 7  1 5.58e+08 6.37e+04  0.00157%  1  85.7% 12.0   2 24 21  3  3  36.0 RAID-5/Z1
 8  1 4.19e+08 4.78e+04  0.00209%  1  87.5% 14.0   2 24 16  2  8  28.0 RAID-5/Z1
 3  2 1.22e+14 1.39e+10 7.18e-09%  2  33.3%  2.0   3 24 21  7  3  14.0 RAID-6/Z2
 4  2 3.05e+13 3.48e+09 2.87e-08%  2  50.0%  4.0   3 24 20  5  4  20.0 RAID-6/Z2
 5  2 1.22e+13 1.39e+09 7.18e-08%  2  60.0%  6.0   3 24 20  4  4  24.0 RAID-6/Z2
 6  2  6.1e+12 6.97e+08 1.44e-07%  2  66.7%  8.0   3 24 18  3  6  24.0 RAID-6/Z2
 7  2 3.49e+12 3.98e+08 2.51e-07%  2  71.4% 10.0   3 24 21  3  3  30.0 RAID-6/Z2
 8  2 2.18e+12 2.49e+08 4.02e-07%  2  75.0% 12.0   3 24 16  2  8  24.0 RAID-6/Z2
 4  3 9.54e+17 1.09e+14 9.19e-13%  3  25.0%  2.0   4 24 20  5  4  10.0 RAID-Z3
 5  3 1.91e+17 2.18e+13 4.59e-12%  3  40.0%  4.0   4 24 20  4  4  16.0 RAID-Z3
 6  3 6.36e+16 7.26e+12 1.38e-11%  3  50.0%  6.0   4 24 18  3  6  18.0 RAID-Z3
 7  3 2.72e+16 3.11e+12 3.21e-11%  3  57.1%  8.0   4 24 21  3  3  24.0 RAID-Z3
 8  3 1.36e+16 1.56e+12 6.43e-11%  3  62.5% 10.0   4 24 16  2  8  20.0 RAID-Z3
```
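The B, BD, S and JDC columns in these reports are consistent with a simple packing rule: fit as many whole N-disk vdevs into the JBOD as possible while holding back the -s minimum spares, and let every leftover disk become a spare. Here is a sketch of that rule as I infer it from the numbers (not code from raid.py):

```python
# Infer the JBOD layout columns: B (blocks), BD (block disks),
# S (spares), and JDC (JBOD data capacity in TB). This is my own
# reconstruction of the packing rule the report appears to use.
def jbod_layout(capacity, n, parity, disk_tb, min_spares):
    blocks = (capacity - min_spares) // n        # B: whole vdevs that fit
    block_disks = blocks * n                     # BD: disks used by vdevs
    spares = capacity - block_disks              # S: leftovers become spares
    jdc = blocks * (n - parity) * disk_tb        # JDC: usable TB in the JBOD
    return blocks, block_disks, spares, jdc

# RAID-Z2, N=6 row: B=3, BD=18, S=6, JDC=24.0 for a 24-disk JBOD
# of 2 TB disks with a minimum of 2 spares (the -s 2 flag).
print(jbod_layout(24, 6, 2, 2.0, 2))  # (3, 18, 6, 24.0)
```

This explains the non-monotonic JDC column: 7 disks pack into the 24-disk JBOD better than 6 or 8 do (three 7-disk vdevs use 21 disks, leaving only 3 spares), so RAID-Z2 at N=7 yields more total storage than either neighbor.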
Enjoy!