IOZONE
A test of six 1.5TB Seagate ST31500341AS CC1H disks in ZFS, both directly and over NFS. The gross capacity of the RAIDZ1 pool is 9TB. The client is connected over a 1GbE interface to a FreeBSD 8.0 NFS server. The board has two Intel(R) Xeon(R) E5405 CPUs @ 2.00GHz and an onboard Intel 63XXESB2 SATA300 controller. File sizes were 128MB, 512MB, 1GB and 2GB.
Measurements on RAIDZ1
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank 8,12T 6,98T 1,14T 85% ONLINE -
iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -r 64 -Rb local.xls
KB reclen write rewrite read reread
131072 64 159279 82114 313862 268426
524288 64 125009 79300 221548 230180
1048576 64 110125 64727 253959 250068
2097152 64 111738 67656 225682 234333
NFS on the RAIDZ1 pool, synchronous
KB reclen write rewrite read reread
131072 64 32598 28838 81102 2310744
524288 64 29060 26338 67903 1007856
1048576 64 29349 22134 75539 1006377
2097152 64 26115 20008 68196 851873
NFS on the RAIDZ1 pool, asynchronous, with sysctl vfs.nfsrv.async=1
Asynchronous mode improves write performance somewhat.
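The async setting can be applied at runtime and persisted across reboots; a minimal sketch for the FreeBSD server (note the trade-off: the server acknowledges writes before they reach stable storage):

```shell
# enable asynchronous NFS writes immediately (runtime only)
sysctl vfs.nfsrv.async=1
# persist the setting so it survives a reboot
echo 'vfs.nfsrv.async=1' >> /etc/sysctl.conf
```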
KB reclen write rewrite read reread
131072 64 52319 47669 82849 2280266
524288 64 45468 39673 73202 1004585
1048576 64 41955 39308 75171 1005348
2097152 64 44255 45257 73147 1005467
NFS on a UFS disk
Six 320GB SCSI disks in CCD RAID0.
KB reclen write rewrite read reread
131072 64 130104 135266 1780650 1794401
524288 64 133297 105866 1474741 1518910
1048576 64 114409 126695 110038 108436
2097152 64 123005 116133 92767 92537
4194304 64 125183 118607 110751 109125
8388608 64 121518 119501 105037 77166
Linux NFSv3 client, asynchronous
The large amount of memory on the client (12GB) appears to have an effect.
Write speed here is higher than with ZFS and reaches 3/4 of the theoretical maximum.
KB reclen write rewrite read reread
131072 64 1422745 1861050 4347758 4170952
524288 64 1553690 1899441 3717355 4190181
1048576 64 1238507 1492025 3976624 3969894
2097152 64 164641 45789 4135545 4181265
4194304 64 76264 36880 4693247 4024055
8388608 64 75341 22259 2856531 4604778
12582912 64 49744 20970 37413 35422
ZFS JBOD NFS, local
iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -s 4g -s 8g -r 64 -Rb nfs-jb-sync.xls
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
tank 6,12T 5,90T 222G 96% ONLINE -
KB reclen write rewrite read reread
131072 64 265099 572784 2018787 2019380
524288 64 179531 361111 1940541 1989141
1048576 64 81915 38170 241334 75945
2097152 64 63996 30866 89855 113389
4194304 64 57328 31850 77156 92197
8388608 64 58109 31960 72362 80447
ZFS JBOD tank over NFS, synchronous
iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -s 4g -s 8g -r 64 -Rb nfs-jb-sync.xls
KB reclen write rewrite read reread
131072 64 2915 36491 86100 2522465
524288 64 28046 19070 84281 2043028
1048576 64 24996 15210 70147 1255287
2097152 64 22762 15157 42553 1180350
4194304 64 22672 15256 40374 1219546
8388608 64 22029 15333 41542 43220
ZFS JBOD tank over NFS, asynchronous, with sysctl vfs.nfsrv.async=1
KB reclen write rewrite read reread
131072 64 86012 113832 2497949 2075593
524288 64 23374 31611 2017826 1822565
1048576 64 34685 23232 1238086 1241179
2097152 64 32580 21174 1130921 1142132
4194304 64 36785 21697 38965 1210808
8388608 64 30812 22414 40845 43793
NFS on a SunFire 4540
Linux client with 32GB of memory
KB reclen write rewrite read reread
131072 64 1367841 1443907 1981074 2029601
524288 64 1365162 1252635 112634 2100579
1048576 64 1344537 1444160 112790 1956746
2097152 64 1358127 1447318 103656 1929460
4194304 64 782433 1012271 106510 1752600
8388608 64 895280 974399 107337 1927381
16777216 64 86047 76457 105429 1703674
33554432 64 25877 23966 105585 572271
Creating the disk array with:
ds2# sh
# DISKI=`awk 'BEGIN{for(s=1;s<48;++s) print "da" s}'`
# echo $DISKI
da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 da24 \
da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 da40 da41 da42 da43 da44 da45 da46 da47
# zpool create tank raidz2 $DISKI spare da48
# df -h tank
Filesystem Size Used Avail Capacity Mounted on
tank 20T 52K 20T 0% /tank
Write and read speed with disk dump
The FreeBSD server has 6GB of memory. The test file must be deleted before each write. Example of writing and reading a 10GB test file:
# dd if=/dev/random of=/tank/testfile.dat bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 274.870995 secs (38147932 bytes/sec)
# dd if=/tank/testfile.dat of=/dev/null bs=1M
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 26.309143 secs (398559542 bytes/sec)
$ zpool iostat
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
tank 45.3G 21.3T 128 301 15.7M 36.2M
Disk dump write/read speed on the ZFS disks behind the HP SmartArray P812 controller, single raidz2 group:
File size        1GB   10GB  100GB
write (MiB/s)     43     38     49
read (MiB/s)     325    398    363
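The dd measurements in the table can be scripted; a sketch (the helper name is an assumption, /dev/urandom stands in for /dev/random, and the file is deleted before each write as noted earlier):

```shell
#!/bin/sh
# bench_dd DIR COUNT: write a COUNT-MiB test file in DIR, read it back,
# and print dd's throughput summary lines; the old file is removed first.
bench_dd() {
    dir=$1; count=$2
    rm -f "$dir/testfile.dat"
    dd if=/dev/urandom of="$dir/testfile.dat" bs=1M count="$count" 2>&1 | tail -n1
    dd if="$dir/testfile.dat" of=/dev/null bs=1M 2>&1 | tail -n1
    rm -f "$dir/testfile.dat"
}
# on the server this was run as e.g.:
#   bench_dd /tank 1000; bench_dd /tank 10000; bench_dd /tank 100000
```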
Speed test with:
# /tmp/iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -s 4g -s 8g -s 10g -s 100g -s 1000g -r 64 -Rb nfs-jb-sync.xls
KB reclen write rewrite read reread
131072 64 822831 3689046 4505869 4549685
524288 64 160688 176362 4237628 4243499
1048576 64 168036 32638 438791 453390
2097152 64 213226 30056 407862 585865
4194304 64 218165 32331 415721 493245
8388608 64 221097 31885 426789 480108
10485760 64 171688 30643 376904 176616
104857600 64 184643 31547 399757 417586
For better resilience with a larger number of disks, smaller RAIDZ groups are recommended, so we reconfigure the array into 8 RAIDZ2 groups of 6 disks each with the following commands:
# DISKI=`awk 'BEGIN{for(s=1;s<49;++s){if((s-1) % 6 == 0) print "raidz2"; print "da" s}}'`
# echo $DISKI
raidz2 da1 da2 da3 da4 da5 da6 raidz2 da7 da8 da9 da10 da11 da12 raidz2 da13 da14 da15 da16 da17 da18 \
raidz2 da19 da20 da21 da22 da23 da24 raidz2 da25 da26 da27 da28 da29 da30 \
raidz2 da31 da32 da33 da34 da35 da36 raidz2 da37 da38 da39 da40 da41 da42 raidz2 da43 da44 da45 da46 da47 da48
# zpool create tank $DISKI
# df -h tank
Filesystem Size Used Avail Capacity Mounted on
tank 14T 36K 14T 0% /tank
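The drop from 20T to 14T follows from the parity overhead: a single 47-disk raidz2 keeps 45 data disks, while 8 groups of 6 keep only 8 × 4 = 32. A quick sanity check (the per-disk size is an assumption inferred from the earlier df output):

```python
disk_tb = 20 / 45                 # ~0.44 TB per disk, from the 47-disk pool's 20T
single  = (47 - 2) * disk_tb      # one raidz2 over 47 disks: 45 data disks
grouped = 8 * (6 - 2) * disk_tb   # 8 x raidz2 over 6 disks: 32 data disks
print(round(single), round(grouped))   # -> 20 14
```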
Speed test with IOZONE
# /tmp/iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -s 4g -s 8g -s 10g -s 20g -s30g -s 40g -s 100g -r 64 -Rb radidz2-groups.xls
Output is in Kbytes/sec
KB reclen write rewrite read reread
131072 64 820133 3484389 4552058 4549196
524288 64 809952 3699177 4550320 4556127
1048576 64 431827 895943 4344376 4362739
2097152 64 349137 118444 1029376 1060582
4194304 64 332152 118384 1111439 1037976
8388608 64 339660 119920 1009807 1009100
10485760 64 321090 122320 1014529 1010512
20971520 64 324476 121458 974272 974296
31457280 64 321771 122120 940798 940999
41943040 64 321564 123014 943356 946306
104857600 64 317099 117262 913083 943086
Disk dump write/read speed on the ZFS disks behind the HP SmartArray P812 controller, 8 raidz2 groups of 6 disks each:
File size        1GB   10GB  100GB
write (MiB/s)     54     52     52
read (MiB/s)    4812    879    868
Hardware RAID with 8 logical RAID6 ADG disks
The 50 disks are divided as follows:
- Bay 1 on the DL180G and Bay 1 on the StorageWorks are in RAID 0+1 and hold the operating system
- Bays 2-7, 8-13, 14-19, 20-25 in both enclosures form eight RAID6 logical disks, in order from the DL180G to the StorageWorks. The RAID6 stripe size is set to 64KB instead of 16KB.
The drawback of this configuration is that ZFS cannot repair a file-integrity error when it detects one.
forge# sh
# zpool create tank da1 da2 da3 da4 da5 da6 da7 da8
# zpool status
pool: tank
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da7 ONLINE 0 0 0
da8 ONLINE 0 0 0
errors: No known data errors
# df -h tank
Filesystem Size Used Avail Capacity Mounted on
tank 14T 18K 14T 0% /tank
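Because ZFS here sits on top of the hardware RAID6 logical disks, a scrub can still detect checksum errors but has no redundancy of its own to repair them from; a periodic scrub at least makes corruption visible early. A hypothetical /etc/crontab fragment (schedule and paths are assumptions):

```shell
# sketch: scrub the pool weekly, then report; 'zpool status -x' lists only unhealthy pools
0 3 * * 0   root   /sbin/zpool scrub tank
0 3 * * 1   root   /sbin/zpool status -x
```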
UFS2 speed test on the RAID 0+1 system disks with IOZONE:
KB reclen write rewrite read reread
131072 64 660192 2040828 4864974 4854321
524288 64 178736 92000 4859024 4816527
1048576 64 80624 89533 2098667 2087043
2097152 64 83464 81694 2089523 2078392
4194304 64 75592 82535 2074097 2067524
8388608 64 79790 74521 76896 82906
10485760 64 77076 73632 75783 80393
20971520 64 74165 71184 73484 76210
ZFS speed test on the 8 logical RAID6 disks
/tmp/iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -s 4g -s 8g -s 10g -s 20g -s30g -s 40g -s 100g -r 64 -Rb raid6.xls
KB reclen write rewrite read reread
131072 64 509280 1065945 4311985 4342744
524288 64 314361 61529 1857076 2096062
1048576 64 362029 61790 1131068 1315906
2097152 64 416198 67532 790424 1216252
4194304 64 454973 85806 992800 1127157
8388608 64 445109 84692 1034820 1090557
10485760 64 458674 85417 1042117 1040583
20971520 64 449161 84629 1053396 1042228
31457280 64 441081 83483 1053525 1021891
41943040 64 441446 83194 1052692 1041444
104857600 64 437923 82995 1046279 1027647
Disk dump write/read speed on the ZFS disks behind the HP SmartArray P812 controller, 8 HW RAID6 groups of 6 disks each:
File size        1GB   10GB  100GB
write (MiB/s)     57     55     55
read (MiB/s)    4925   1137   1047
24 logical disks in 4 RAIDZ2 groups of 6 disks
The logical disks are combined pairwise into RAID0 and four RAIDZ2 groups are then created with:
# DISKI=`awk 'BEGIN{for(s=1;s<25;++s){if((s-1) % 6 == 0) print "raidz2"; print "da" s}}'`
# echo $DISKI
raidz2 da1 da2 da3 da4 da5 da6 raidz2 da7 da8 da9 da10 da11 da12 raidz2 da13 da14 da15 da16 da17 da18 raidz2 da19 da20 da21 da22 da23 da24
# zpool create tank $DISKI
# df -h tank
Filesystem Size Used Avail Capacity Mounted on
tank 14T 36K 14T 0% /tank
Speed test with IOZONE
# cd /tank
# /tmp/iozone -i0 -i1 -s 128m -s 512m -s 1g -s 2g -s 4g \
-s 8g -s 10g -s 20g -s 30g -s 40g -s 100g -r 64 -Rb radidz2-groups.xls
KB reclen write rewrite read reread
131072 64 907379 3765017 4612285 4630232
524288 64 813944 3795366 4634754 4642749
1048576 64 407316 823833 4366772 4370083
2097152 64 379474 215505 1258112 1277242
4194304 64 344936 196284 1116616 1106021
8388608 64 322321 179807 995130 937519
10485760 64 324512 187260 935301 952578
20971520 64 320905 193779 959166 943469
31457280 64 319725 182185 911931 938123
41943040 64 320693 175936 883126 971885
104857600 64 318103 169401 908441 914209
Disk dump write/read speed on the ZFS disks behind the HP SmartArray P812 controller, 4 HW RAID0 groups of 6 disks each:
File size        1GB   10GB  100GB
write (MiB/s)     54     52     52
read (MiB/s)    4920    931    892
LUSTRE
A speed test on four OSTs built from 300GB /dev/sda4 partitions. The clients are on a different switch and mount the filesystem with:
mkdir /scratch
mount -t lustre 10.0.2.50@tcp0:/datafs /scratch
# df -h /scratch
Filesystem Size Used Avail Use% Mounted on
10.0.2.50@tcp0:/datafs
1.2T 1.9G 1.2T 1% /scratch
# lfs df -h
UUID bytes Used Available Use% Mounted on
datafs-MDT0000_UUID 203.8G 459.8M 191.7G 0% /mnt/datafs[MDT:0]
datafs-OST0000_UUID 303.1G 466.9M 287.2G 0% /mnt/datafs[OST:0]
datafs-OST0001_UUID 303.1G 466.9M 287.2G 0% /mnt/datafs[OST:1]
datafs-OST0002_UUID 303.1G 466.9M 287.2G 0% /mnt/datafs[OST:2]
datafs-OST0003_UUID 303.1G 490.4M 287.2G 0% /mnt/datafs[OST:3]
filesystem summary: 1.2T 1.8G 1.1T 0% /mnt/datafs
Write/read speed from a single client with a 1GbE link:
Command line used: iozone -i0 -i1 -s 128m -s 512m -s 1g -s 10g -s 100g -r 64 -Rb lustre.xls
KB reclen write rewrite read reread
131072 64 104221 110374 2946633 2983040
524288 64 89318 90983 2966706 2982451
1048576 64 84107 85849 2946407 2972567
10485760 64 81946 83689 2955741 2961087
104857600 64 77116 77799 83542 84404
Testing LUSTRE aggregate throughput
Four nodes with one disk each and 11 clients. During the following test a loadavg of 10 to 18 was measured on the OSS; on the clients it stayed below 0.1.
Each client writes a 50GB file to the parallel filesystem. Writes are striped across all OSSes. The block size is 1M.
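The klienti.txt file passed with -+m lists one client per line: hostname, working directory on that client, and the path to the iozone binary there. A sketch with hypothetical hostnames:

```text
# <hostname> <working directory> <path to iozone executable>
wn01 /scratch /root/current/iozone
wn02 /scratch /root/current/iozone
wn03 /scratch /root/current/iozone
```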
[root@prelog current]# ./iozone -M -t 11 -s 50g -r 1M -i0 -i1 -+m klienti.txt
Iozone: Performance Test of File I/O
Version $Revision: 3.279 $
Compiled for 64 bit mode.
Build: linux
Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
Erik Habbinga, Kris Strecker, Walter Wong.
Run began: Thu Oct 21 17:41:21 2010
Machine = Linux prelog 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64
File size set to 52428800 KB
Record Size 1024 KB
Network distribution mode enabled.
Command line used: ./iozone -M -t 11 -s 50g -r 1M -i0 -i1 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 11 processes
Each process writes a 52428800 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 11 initial writers = 288943.99 KB/sec
Min throughput per process = 20178.96 KB/sec
Max throughput per process = 36860.29 KB/sec
Avg throughput per process = 26267.64 KB/sec
Min xfer = 28704768.00 KB
Test running:
Children see throughput for 11 rewriters = 278721.19 KB/sec
Min throughput per process = 17302.08 KB/sec
Max throughput per process = 37080.87 KB/sec
Avg throughput per process = 25338.29 KB/sec
Min xfer = 24466432.00 KB
Test running:
Children see throughput for 11 readers = 751049.96 KB/sec
Min throughput per process = 6618.46 KB/sec
Max throughput per process = 156022.19 KB/sec
Avg throughput per process = 68277.27 KB/sec
Min xfer = 2224128.00 KB
Test running:
Children see throughput for 11 re-readers = 964074.49 KB/sec
Min throughput per process = 77288.98 KB/sec
Max throughput per process = 107587.23 KB/sec
Avg throughput per process = 87643.14 KB/sec
Min xfer = 37677056.00 KB
Test cleanup:
iozone test complete.
In the next test we reduced the number of clients to 6, lowering the load on the servers; loadavg stayed below 10. Aggregate speeds were consequently somewhat higher.
[root@prelog current]# ./iozone -M -t 6 -s 10g -r 1M -i0 -i1 -+m klienti.txt
Run began: Thu Oct 21 19:18:50 2010
Machine = Linux prelog 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64
File size set to 10485760 KB
Record Size 1024 KB
Network distribution mode enabled.
Command line used: ./iozone -M -t 6 -s 10g -r 1M -i0 -i1 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 6 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 6 initial writers = 298239.63 KB/sec
Min throughput per process = 37517.05 KB/sec
Max throughput per process = 81445.19 KB/sec
Avg throughput per process = 49706.61 KB/sec
Min xfer = 4831232.00 KB
Test running:
Children see throughput for 6 rewriters = 299440.88 KB/sec
Min throughput per process = 35417.12 KB/sec
Max throughput per process = 83380.05 KB/sec
Avg throughput per process = 49906.81 KB/sec
Min xfer = 4457472.00 KB
Test running:
Children see throughput for 6 readers = 27122258.50 KB/sec
Min throughput per process = 4137672.50 KB/sec
Max throughput per process = 5225607.00 KB/sec
Avg throughput per process = 4520376.42 KB/sec
Min xfer = 8304640.00 KB
Test running:
Children see throughput for 6 re-readers = 31579072.00 KB/sec
Min throughput per process = 5149842.00 KB/sec
Max throughput per process = 5320063.50 KB/sec
Avg throughput per process = 5263178.67 KB/sec
Min xfer = 10152960.00 KB
Test cleanup:
iozone test complete.
Testing LUSTRE aggregate throughput on six OSS
Six nodes as OSS with one disk each and 11 clients. Two more OSSes were added to the previous four.
# lfs df -h
UUID bytes Used Available Use% Mounted on
datafs-MDT0000_UUID 203.8G 459.9M 191.7G 0% /mnt/datafs[MDT:0]
datafs-OST0000_UUID 303.1G 60.0G 227.6G 19% /mnt/datafs[OST:0]
datafs-OST0001_UUID 303.1G 76.5G 211.1G 25% /mnt/datafs[OST:1]
datafs-OST0002_UUID 303.1G 76.5G 211.1G 25% /mnt/datafs[OST:2]
datafs-OST0003_UUID 303.1G 50.5G 237.1G 16% /mnt/datafs[OST:3]
datafs-OST0004_UUID 303.1G 82.6G 205.0G 27% /mnt/datafs[OST:4]
datafs-OST0005_UUID 303.1G 84.0G 203.6G 27% /mnt/datafs[OST:5]
filesystem summary: 1.8T 430.1G 1.3T 23% /mnt/datafs
During the following test a loadavg of 10 was measured on the OSS; on the clients it stayed below 0.1.
[root@prelog current]# ./iozone -M -t 11 -s 50g -r 1M -i0 -i1 -+m klienti.txt
Run began: Thu Oct 21 19:36:36 2010
Machine = Linux prelog 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64
File size set to 52428800 KB
Record Size 1024 KB
Network distribution mode enabled.
Command line used: ./iozone -M -t 11 -s 50g -r 1M -i0 -i1 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 11 processes
Each process writes a 52428800 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 11 initial writers = 447891.52 KB/sec
Min throughput per process = 29084.98 KB/sec
Max throughput per process = 77920.95 KB/sec
Avg throughput per process = 40717.41 KB/sec
Min xfer = 19571712.00 KB
Test running:
Children see throughput for 11 rewriters = 445862.31 KB/sec
Min throughput per process = 29778.18 KB/sec
Max throughput per process = 79588.55 KB/sec
Avg throughput per process = 40532.94 KB/sec
Min xfer = 19616768.00 KB
Test running:
Children see throughput for 11 readers = 1091218.25 KB/sec
Min throughput per process = 76603.67 KB/sec
Max throughput per process = 125198.80 KB/sec
Avg throughput per process = 99201.66 KB/sec
Min xfer = 32112640.00 KB
Test running:
Children see throughput for 11 re-readers = 1812684.16 KB/sec
Min throughput per process = 123533.34 KB/sec
Max throughput per process = 189891.80 KB/sec
Avg throughput per process = 164789.47 KB/sec
Min xfer = 34107392.00 KB
Test cleanup:
iozone test complete.
If we reduce the load on the disks from many clients and create a one-to-one load, i.e. 6 OSS and 6 clients, the aggregate speed increases.
[root@prelog current]# ./iozone -M -t 6 -s 10g -r 1M -i0 -+m klienti.txt
Run began: Thu Oct 21 20:37:44 2010
Machine = Linux prelog 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64
File size set to 10485760 KB
Record Size 1024 KB
Network distribution mode enabled.
Command line used: ./iozone -M -t 6 -s 10g -r 1M -i0 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 6 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 6 initial writers = 484999.86 KB/sec
Min throughput per process = 72106.22 KB/sec
Max throughput per process = 86916.07 KB/sec
Avg throughput per process = 80833.31 KB/sec
Min xfer = 8701952.00 KB
Test running:
Children see throughput for 6 rewriters = 497186.64 KB/sec
Min throughput per process = 74130.01 KB/sec
Max throughput per process = 89213.27 KB/sec
Avg throughput per process = 82864.44 KB/sec
Min xfer = 8715264.00 KB
Test cleanup:
iozone test complete
Testing aggregate throughput on 6 OSS with four RAID5 disks each, over Ethernet
The load on the OSTs was 0.9.
[root@prelog current]# ./iozone -M -t 6 -s 10g -r 1M -i0 -+m klienti.txt
Run began: Wed Nov 24 03:23:09 2010
Machine = Linux prelog 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64
File size set to 10485760 KB
Record Size 1024 KB
Network distribution mode enabled.
Command line used: ./iozone -M -t 6 -s 10g -r 1M -i0 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 6 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 6 initial writers = 687015.67 KB/sec
Min throughput per process = 110736.41 KB/sec
Max throughput per process = 116098.21 KB/sec
Avg throughput per process = 114502.61 KB/sec
Min xfer = 10004480.00 KB
Test running:
Children see throughput for 6 rewriters = 692070.91 KB/sec
Min throughput per process = 113561.17 KB/sec
Max throughput per process = 116098.62 KB/sec
Avg throughput per process = 115345.15 KB/sec
Min xfer = 10259456.00 KB
Test cleanup:
iozone test complete.
With 18 clients on two switches the load ranges from 0.6 to 12.
Command line used: ./iozone -M -t 18 -s 10g -r 1M -i0 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 18 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 18 initial writers = 654899.18 KB/sec
Min throughput per process = 33035.31 KB/sec
Max throughput per process = 38454.23 KB/sec
Avg throughput per process = 36383.29 KB/sec
Min xfer = 9018368.00 KB
Test running:
Children see throughput for 18 rewriters = 560558.85 KB/sec
Min throughput per process = 27425.26 KB/sec
Max throughput per process = 33380.26 KB/sec
Avg throughput per process = 31142.16 KB/sec
Min xfer = 8619008.00 KB
With 32 clients, i.e. half the cluster, the average load on the OSS is around 20.
Command line used: ./iozone -M -t 32 -s 10g -r 1M -i0 -+m klienti.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 32 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 32 initial writers = 665272.74 KB/sec
Min throughput per process = 16511.76 KB/sec
Max throughput per process = 23534.63 KB/sec
Avg throughput per process = 20789.77 KB/sec
Min xfer = 7363584.00 KB
Test running:
Children see throughput for 32 rewriters = 491270.41 KB/sec
Min throughput per process = 11935.04 KB/sec
Max throughput per process = 17934.66 KB/sec
Avg throughput per process = 15352.20 KB/sec
Min xfer = 6980608.00 KB
NFS server performance
With the cluster fully loaded, with nodes swapping 30%, parallel write throughput to the disks on the NFS/ZFS server forge is 100MB/s to 300MB/s.
Run began: Fri Dec 17 08:56:04 2010
Machine = Linux prelog 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64
File size set to 10485760 KB
Record Size 1024 KB
Network distribution mode enabled.
Command line used: /home/leon/bench/iozone3_279/src/current/iozone -M -t 6 -s 10g -r 1M -i0 -+m /home/leon/bench/iozone3_279/src/current/klienti-home.txt
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 6 processes
Each process writes a 10485760 Kbyte file in 1024 Kbyte records
Test running:
Children see throughput for 6 initial writers = 96448.97 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 49105.15 KB/sec
Avg throughput per process = 16074.83 KB/sec
Min xfer = 0.00 KB
Test running:
Children see throughput for 6 rewriters = 278038.36 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 190440.55 KB/sec
Avg throughput per process = 46339.73 KB/sec
Min xfer = 0.00 KB