Mailing List Archive

FAS8040 cpu pegged for over 1 month 24/7
I have an 8040 that is at 100% CPU all the time on all cores. It has been
like this for at least a month; my stats do not go back further than that. I
suspect the CPU was not this high before upgrading to 9.3P2. I feel like I
should be able to get more than 10k ops out of a 150TB hybrid aggregate
with 222 disks in it. Any help or feedback on performance expectations
would be appreciated. Let me know if any stats would be useful; I stripped
my email down as my first try was rejected for being too big.
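As a sanity check on that 10k-ops expectation, here is a rough back-of-envelope spindle estimate. The per-disk IOPS figure and RAID overhead factor are assumptions (10k-RPM SAS class, no credit for the SSD tier of the hybrid aggregate), not measurements from this system:

```python
# Crude ceiling on backend random IOPS from spindle count alone.
# iops_per_disk and raid_overhead are assumptions, not measured values;
# SSDs in the hybrid tier would raise the real ceiling substantially.
def estimated_backend_iops(disk_count, iops_per_disk=140, raid_overhead=0.75):
    """Return a rough backend IOPS estimate after parity/RAID overhead."""
    return int(disk_count * iops_per_disk * raid_overhead)

print(estimated_backend_iops(222))  # 222 disks, as in the aggregate described
```

Even under these conservative assumptions the disks alone should comfortably exceed 10k backend IOPS, which supports the suspicion that the bottleneck here is CPU, not spindles.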



HOSTNAME::> node run -node HOSTNAME-0

HOSTNAME-01 HOSTNAME-02

HOSTNAME::> node run -node HOSTNAME-02

Type 'exit' or 'Ctrl-D' to return to the CLI



HOSTNAME-02> sysstat prif set diag priv set diag



Warning: These diagnostic commands are for use by NetApp
personnel only.



HOSTNAME-02*> sysstat -M 1



ANY1+ ANY2+ ANY3+ ANY4+ ANY5+ ANY6+ ANY7+ ANY8+ AVG CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 Nwk_Excl Nwk_Lg Nwk_Exmpt Protocol Storage Raid Raid_Ex Xor_Ex Target Kahuna WAFL_Ex(Kahu) WAFL_MPClean SM_Exempt Exempt SSAN_Ex Intr Host Ops/s CP

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 32% 0% 0% 0% 30% 10% 0% 0% 253%( 36%) 0% 0% 99% 18% 3% 353% 3748 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 33% 0% 0% 0% 29% 10% 0% 0% 235%( 33%) 0% 0% 70% 19% 3% 398% 3959 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 34% 0% 0% 0% 30% 9% 0% 0% 245%( 35%) 0% 0% 74% 19% 3% 385% 3802 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 30% 0% 1% 0% 28% 8% 0% 0% 236%( 33%) 0% 0% 76% 17% 3% 399% 3367 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 30% 0% 2% 0% 29% 8% 0% 0% 217%( 31%) 0% 0% 71% 16% 3% 422% 3427 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 36% 0% 1% 1% 33% 9% 0% 0% 242%( 34%) 0% 0% 91% 19% 3% 362% 3754 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 33% 0% 0% 0% 26% 4% 0% 0% 272%( 38%) 18% 0% 102% 18% 4% 320% 3496 60%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 34% 0% 1% 0% 24% 4% 0% 0% 258%( 36%) 3% 0% 87% 20% 3% 364% 4001 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 41% 0% 0% 0% 32% 5% 0% 0% 276%( 39%) 2% 0% 101% 24% 4% 312% 4752 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 44% 0% 0% 0% 27% 4% 0% 0% 268%( 38%) 1% 0% 101% 25% 4% 323% 5077 28%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 36% 0% 0% 0% 29% 2% 0% 0% 256%( 36%) 3% 0% 96% 21% 4% 352% 4105 2%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 40% 0% 2% 0% 22% 1% 0% 0% 252%( 36%) 2% 0% 114% 23% 4% 339% 4801 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 33% 0% 3% 0% 17% 1% 0% 0% 236%( 33%) 2% 0% 73% 20% 3% 410% 4280 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 33% 0% 1% 0% 16% 2% 0% 0% 232%( 33%) 1% 0% 67% 20% 3% 424% 4219 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 29% 0% 0% 0% 15% 1% 0% 0% 230%( 32%) 2% 0% 71% 18% 3% 429% 3719 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 31% 0% 0% 0% 24% 2% 0% 0% 241%( 34%) 3% 0% 75% 19% 3% 400% 4206 0%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 1% 28% 0% 0% 0% 46% 11% 0% 0% 263%( 37%) 67% 0% 82% 17% 3% 280% 3496 59%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 25% 0% 0% 1% 38% 7% 0% 0% 324%( 46%) 52% 0% 107% 16% 3% 226% 3127 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 0% 0% 30% 0% 0% 1% 35% 7% 0% 1% 285%( 40%) 49% 0% 86% 18% 3% 285% 3633 100%
0% 0% 94% 20% 4% 341% 4300 100%
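To summarize intervals like the ones above, a small parsing sketch; the assumption (based on the header in this capture) is that the last two columns of each row are Ops/s and CP%:

```python
def parse_tail(row):
    """Extract (ops_per_sec, cp_percent) from the end of one sysstat -M row."""
    f = row.split()
    return int(f[-2]), int(f[-1].rstrip('%'))

# Tail fragments of three rows from the capture above
tails = ["3% 353% 3748 100%", "4% 312% 4752 0%", "3% 226% 3127 100%"]
parsed = [parse_tail(t) for t in tails]
avg_ops = sum(ops for ops, _ in parsed) / len(parsed)
print(round(avg_ops))  # ~3876 ops/s across these three intervals
```

Averaged over the full capture, the node is holding roughly 3–5k ops/s with every core pegged, which is the pattern the rest of the thread tries to explain.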






HOSTNAME-02*> sysstat -x 1



CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP_Ty CP_Ph Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s
in out read write read write age hit time [T--H--F--N--B--O--#--:] [n--v--p--f] util in out in out

100% 6 0 0 3907 100046 124797 278121 20862 0 0 3s 94% 0% 0--0--0--0--0--0--0--0 0--0--0--0 25% 3 0 3898 0 0 99115 127259
100% 3 0 0 3869 60123 134198 411571 20704 0 0 4s 94% 0% 0--0--0--0--0--0--0--0 0--0--0--0 33% 5 0 3861 0 0 59346 120058
100% 0 0 0 4089 27375 116197 225213 18145 0 0 4s 93% 0% 0--0--0--0--0--0--0--0 0--0--0--0 23% 0 0 4089 0 0 26622 115872
100% 13 0 0 4175 35817 111147 260996 28849 0 0 2s 94% 0% 0--0--0--0--0--0--0--0 0--0--0--0 25% 0 0 4162 0 0 34874 125888
100% 2 0 0 4143 56386 99541 401056 23882 0 0 0s 93% 0% 0--0--0--0--0--0--0--0 0--0--0--0 37% 17 0 4124 0 0 55471 105104
100% 11 0 0 3750 25706 149389 364912 154655 0 0 2s 93% 43% 1--0--0--0--0--0--0--0 0--1--0--0 29% 0 0 3739 0 0 24904 119412
100% 4 0 0 3395 36225 45390 280507 211869 0 0 3s 92% 100% 0--0--0--0--0--0--0--1 0--0--1--0 40% 0 0 3391 0 0 35744 81843
100% 25 0 0 4713 46662 149078 307272 93644 0 0 4s 95% 100% 0--0--0--0--0--0--0--1 0--0--0--1 42% 184 0 4504 0 0 45774 118418
100% 2 0 0 3930 51697 110975 221814 105300 0 0 3s 93% 100% 0--0--0--0--0--0--0--1 0--0--0--1 26% 0 0 3928 0 0 50821 108484
100% 5 0 0 4148 54985 137949 267436 208711 0 0 3s 93% 100% 1--0--0--0--0--0--0--1 1--0--0--1 31% 9 0 4134 0 0 53645 142356
100% 16 0 0 4883 81959 108174 364367 666466 0 0 4s 94% 100% 0--0--0--0--0--0--0--2 0--0--0--2 40% 1 0 4866 0 0 81554 95623
100% 5 0 0 6111 74315 122018 268918 79372 0 0 3s 95% 100% 0--0--0--0--0--0--0--1 0--0--0--1 26% 4 0 6102 0 0 72956 126959
100% 3 0 0 4658 50514 119345 218663 157863 0 0 3s 94% 100% 0--0--0--0--0--0--0--1 0--0--0--1 22% 0 0 4655 0 0 50021 112484
100% 2 0 0 4125 61813 137044 244725 223996 0 0 3s 93% 100% 0--0--0--0--0--0--0--1 0--0--0--1 23% 0 0 4123 0 0 59479 135345
100% 6 0 0 4005 48098 158865 283920 54318 0 0 4s 95% 15% 0--0--0--0--0--0--0--0 0--0--0--0 26% 14 0 3985 0 0 47165 162826
100% 24 0 0 4490 45946 155717 266290 96976 0 0 2s 93% 0% 0--0--0--0--0--0--0--0 0--0--0--0 28% 0 0 4466 0 0 44575 146650
100% 7 0 0 4935 52081 155766 331777 11753 0 0 4s 93% 0% 0--0--0--0--0--0--0--0 0--0--0--0 32% 2 0 4926 0 0 51171 165626
100% 0 0 0 6518 39526 179763 350349 24130 0 0 1s 92% 0% 0--0--0--0--0--0--0--0 0--0--0--0 29% 4 0 6514 0 0 38678 177139


HOSTNAME-02*> qos exit



logout





HOSTNAME::> qos statistics characteristics show -iterations 0 -rows 10

Policy Group         IOPS   Throughput    Request size  Read  Concurrency  Is Adaptive?
-------------------- ------ ------------- ------------- ----- ------------ ------------
-total-              8094   321.31MB/s    41627B        41%   17           -
data02_PROD_2        1626   101.69MB/s    65578B        80%   7            false
DEV_DATA_DEV_2       1528   147.53MB/s    101219B       57%   5            false
_System-Work         1243   4.40KB/s      3B            1%    0            false
DEV_DATA_DEV         749    6.51MB/s      9121B         93%   1            false
data02_PROD          667    9.48MB/s      14911B        0%    1            false
DEV_OS_DEV_2         470    28.78MB/s     64214B        35%   2            false
data04_PROD          256    4.35MB/s      17812B        8%    0            false
data03_PROD          191    3.64MB/s      19971B        0%    0            false
os03_PROD_2          191    1.55MB/s      8496B         0%    0            false
shares02_PROD        183    1382.41KB/s   7721B         84%   0            false
-total-              9542   347.25MB/s    38158B        42%   21           -
DEV_DATA_DEV_2       1675   83.11MB/s     52028B        71%   4            false
data02_PROD_2        1547   103.42MB/s    70112B        86%   7            false
_System-Work         1191   6.58KB/s      5B            0%    0            false
DEV_OS_DEV_2         860    30.42MB/s     37095B        37%   2            false
DEV_DATA_DEV         667    5.22MB/s      8208B         96%   0            false
data03_PROD_2        627    11.51MB/s     19255B        2%    3            false
data02_PROD          528    7.24MB/s      14370B        0%    0            false
shares02_PROD        526    80.62MB/s     160604B       66%   3            false
data01_PROD          411    4.87MB/s      12431B        0%    0            false
data03_PROD          377    4.95MB/s      13746B        0%    0            false
Policy Group         IOPS   Throughput    Request size  Read  Concurrency  Is Adaptive?
-------------------- ------ ------------- ------------- ----- ------------ ------------
-total-              11004  218.56MB/s    20827B        36%   15           -
_System-Work         3750   24.82KB/s     6B            0%    0            false
DEV_DATA_DEV_2       1572   25.67MB/s     17129B        78%   2            false
data02_PROD_2        1568   91.24MB/s     61028B        99%   7            false
DEV_DATA_DEV         793    6.19MB/s      8185B         96%   0            false
os03_PROD            652    5.18MB/s      8322B         4%    1            false
DEV_OS_DEV_2         549    31.44MB/s     60082B        44%   2            false
data01_PROD          411    6.96MB/s      17753B        0%    0            false
shares02_PROD        304    34.42MB/s     118717B       42%   2            false
data02_PROD          261    4.56MB/s      18303B        0%    0            false
data03_PROD          237    3.59MB/s      15904B        0%    0            false
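The Request size column in this output is internally consistent with Throughput / IOPS, which is a quick way to sanity-check captures like this. A sketch, assuming the MB/s figures are binary megabytes (that unit interpretation is an assumption, not something stated in the output):

```python
def implied_request_size(throughput_mb_per_s, iops):
    """Average request size in bytes implied by throughput and IOPS."""
    # Assumes MB/s means binary megabytes (1 MB = 1048576 bytes).
    return (throughput_mb_per_s * 1024 * 1024) / iops

# data02_PROD_2 from the first iteration above: 101.69MB/s at 1626 IOPS
print(round(implied_request_size(101.69, 1626)))  # 65578, matching the reported 65578B
```

The match confirms the columns are mutually consistent, so the low total of roughly 8–11k IOPS across all policy groups is real, not a reporting artifact.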
RE: FAS8040 cpu pegged for over 1 month 24/7 [ In reply to ]
Not sure if this bug applies to you, but it’s fixed in 9.3P5 and there appears to be a workaround as well:

https://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=1144006

At least worth checking …

Anthony Bar
tbar@berkcom.com
Berkeley Communications
www.berkcom.com

From: toasters-bounces@teaparty.net <toasters-bounces@teaparty.net> On Behalf Of jordan slingerland
Sent: Thursday, June 14, 2018 7:49 AM
To: Toasters <toasters@teaparty.net>
Subject: FAS8040 cpu pegged for over 1 month 24/7

Re: FAS8040 cpu pegged for over 1 month 24/7 [ In reply to ]
Thank you Tony, I appreciate that. In systemshell, ntp does not show up on
the list, so I do not think that is the issue.

mcached is the highest consumer, using up to 400% of CPU.
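Spotting the top consumer in output like the systemshell `top` listings later in this thread can be scripted; a small sketch, assuming the second-to-last whitespace-separated field is WCPU and the last is the command name (true for the captures quoted here, but column positions are an assumption):

```python
def busiest(lines):
    """Return (wcpu_percent, command) for the highest-WCPU top(1) line."""
    best = None
    for line in lines:
        f = line.split()
        wcpu = float(f[-2].rstrip('%'))  # e.g. "489.70%" -> 489.7
        if best is None or wcpu > best[0]:
            best = (wcpu, f[-1])
    return best

# Two lines from the node-2 top output quoted later in this thread
lines = [
    "7918 root 33 40 0 169M 16500K uwait 0 2935.7 489.70% mcached",
    "2164 root 187 40 0 563M 106M uwait 1 35.9H 0.00% mgwd",
]
print(busiest(lines))  # (489.7, 'mcached')
```

On a multi-core box, WCPU above 100% means the process is burning more than one core, so mcached at ~400–490% is consuming roughly five cores by itself.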

On Thu, Jun 14, 2018 at 11:20 AM, Tony Bar <tbar@berkcom.com> wrote:

> Not sure if this bug applies to you, but it’s fixed in 9.3P5 and there
> appears to be a workaround as well:
>
>
>
> https://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=1144006
>
>
>
> At least worth checking …
Re: FAS8040 cpu pegged for over 1 month 24/7 [ In reply to ]
Anyone have any idea why /usr/sbin/mcached would be so busy on node 2 and
not even on the radar on node 1? Could this be a bug or a stuck
thread? I am tempted to reboot.




last pid: 82685;  load averages: 3.51, 4.69, 4.95    up 54+14:37:15  13:18:25
69 processes: 1 running, 66 sleeping, 2 zombie
CPU: 1.4% user, 0.0% nice, 51.6% system, 0.0% interrupt, 47.0% idle
Mem: 188M Active, 1714M Inact, 3189M Wired, 39M Cache, 63M Buf, 754M Free
Swap: 8192M Total, 577M Used, 7615M Free, 7% Inuse

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
 1936 root       12  40    0 36136K  3600K uwait  2 323:33  6.30% spmd
82357 diag        1  40    0 25564K  8052K CPU1   1   0:05  1.90% top
 2163 root      189  57    0   567M   111M select 3  86.1H  1.86% mgwd
 8194 root       10  40    0   170M 20616K uwait  0 338:48  1.46% raid_lm
 8540 root       49  40    0   202M 29060K uwait  0 400:24  0.05% vifmgr
 8555 root       42  40    0   188M 26560K uwait  7  51:41  0.05% bcomd
 8550 root       71  40    0   190M 28044K uwait  7 500:09  0.00% vldb
 1698 root       34  40    0   192M 28768K uwait  6 164:23  0.00% notifyd
 9682 www        98   4    0 85604K 18568K kqread 5 118:38  0.00% httpd
xxxxxxxxx% top

xxxxxxxxx% exit

logout



xxxxxxxxx::*> systemshell -node AME001-NACLP-02

(system node systemshell)

diag@169.254.217.17's password:



Warning: The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly. Use this environment
only when directed to do so by support personnel.

xxxxxxxxx% top

last pid: 81502;  load averages: 24.46, 23.38, 22.80    up 54+15:01:23  13:18:40
61 processes: 1 running, 59 sleeping, 1 zombie
CPU: 54.1% user, 0.0% nice, 45.9% system, 0.0% interrupt, 0.0% idle
Mem: 173M Active, 2034M Inact, 3624M Wired, 35M Cache, 63M Buf, 19M Free
Swap: 8192M Total, 651M Used, 7541M Free, 7% Inuse

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME     WCPU COMMAND
 7918 root       33  40    0   169M 16500K uwait  0 2935.7  489.70% mcached
81490 diag        1  40    0 25564K  8040K CPU2   2   0:00    2.87% top
 2164 root      187  40    0   563M   106M uwait  1  35.9H    0.00% mgwd
 7771 root       50  40    0   203M 29504K uwait  2 420:06    0.00% vifmgr
 1938 root       13  40    0 36272K  4040K uwait  3 334:26    0.00% spmd
 8003 root       10  40    0   168M 21096K uwait  1 307:46    0.00% raid_lm
 1700 root       33  40    0   192M 29904K uwait  1 238:47    0.00% notifyd
13124 root       31  40    0   173M 21196K uwait  1 183:03    0.00% cshmd
 7785 root       63  40    0   189M 27184K uwait  6 113:23    0.00% vldb
 7821 root       89  40    0   197M 31568K uwait  1  89:16    0.00% cmd
 5135 root       38  40    0   177M 24948K uwait  2  85:06    0.00% nchmd
 5089 root        1   8    0  7356K  2368K wait   7  54:04    0.00% bash
 7793 root       43  40    0   190M 26016K uwait  1  45:56    0.00% bcomd
 7844 root       25  40    0   164M 14664K uwait  3  32:56    0.00% pipd
 8845 www        98   4    0 71268K 12720K kqread 0  26:38    0.00% httpd
 5140 root       38  40    0   172M 17612K uwait  4  24:59    0.00% nphmd
 7799 root       51  40    0   185M 23572K uwait  2  23:45    0.00% crs
13121 root       34  40    0   169M 18224K uwait  6  21:58    0.00% schmd
13122 root       23  40    0   167M 16992K uwait  0  17:21    0.00% cphmd
13123 root       19  40    0   167M 15560K uwait  2  15:14    0.00% shmd
 7892 root       12  40    0   160M 14896K uwait  0   9:45    0.00% vserverdr
 1979 root        2  40    0 41896K  5436K usem   6   9:21    0.00% env_mgr
 1284 root        1  40    0 12844K  1868K select 1   8:34    0.00% rpcbind
 8302 root        1   8    0 10852K  2100K nanslp 0   7:50    0.00% qldump
 2022 root       23  40    0   125M 11924K uwait  6   7:45    0.00% fpolicy
 8757 root        1  40    0 15016K  1804K ttyin  0   7:39    0.00% login
 7828 root       11  40    0   162M 13944K uwait  2   6:32    0.00% upgrademgr


On Thu, Jun 14, 2018 at 10:48 AM, jordan slingerland <
jordan.slingerland@gmail.com> wrote:

> [... original message with sysstat and qos output quoted in full; trimmed, it appears unquoted earlier in the thread ...]
>
Re: FAS8040 cpu pegged for over 1 month 24/7
Maybe open a case and even force a core for analysis?
--tmac

*Tim McCarthy, **Principal Consultant*

*Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>*

*I Blog at TMACsRack <https://tmacsrack.wordpress.com/>*


On Fri, Jun 15, 2018 at 1:39 PM jordan slingerland <
jordan.slingerland@gmail.com> wrote:

>
> Does anyone have any idea why /usr/sbin/mcached would be so busy on node 2
> and not even on the radar on node 1? Could this be a bug or a stuck
> thread? I am tempted to reboot.
>
>
>
> xxxxxxxxx> top
>
> last pid: 82685;  load averages: 3.51, 4.69, 4.95    up 54+14:37:15  13:18:25
> 69 processes: 1 running, 66 sleeping, 2 zombie
> CPU: 1.4% user, 0.0% nice, 51.6% system, 0.0% interrupt, 47.0% idle
> Mem: 188M Active, 1714M Inact, 3189M Wired, 39M Cache, 63M Buf, 754M Free
> Swap: 8192M Total, 577M Used, 7615M Free, 7% Inuse
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>  1936 root       12  40    0 36136K  3600K uwait  2 323:33  6.30% spmd
> 82357 diag        1  40    0 25564K  8052K CPU1   1   0:05  1.90% top
>  2163 root      189  57    0   567M   111M select 3  86.1H  1.86% mgwd
>  8194 root       10  40    0   170M 20616K uwait  0 338:48  1.46% raid_lm
>  8540 root       49  40    0   202M 29060K uwait  0 400:24  0.05% vifmgr
>  8555 root       42  40    0   188M 26560K uwait  7  51:41  0.05% bcomd
>  8550 root       71  40    0   190M 28044K uwait  7 500:09  0.00% vldb
>  1698 root       34  40    0   192M 28768K uwait  6 164:23  0.00% notifyd
>  9682 www        98   4    0 85604K 18568K kqread 5 118:38  0.00% httpd
>
> xxxxxxxxx% top
>
> xxxxxxxxx% exit
>
> logout
>
>
>
> xxxxxxxxx::*> systemshell -node AME001-NACLP-02
>
> (system node systemshell)
>
> diag@169.254.217.17's password:
>
>
>
> Warning: The system shell provides access to low-level
> diagnostic tools that can cause irreparable damage to
> the system if not used properly. Use this environment
> only when directed to do so by support personnel.
>
> xxxxxxxxx% top
>
> last pid: 81502;  load averages: 24.46, 23.38, 22.80    up 54+15:01:23  13:18:40
> 61 processes: 1 running, 59 sleeping, 1 zombie
> CPU: 54.1% user, 0.0% nice, 45.9% system, 0.0% interrupt, 0.0% idle
> Mem: 173M Active, 2034M Inact, 3624M Wired, 35M Cache, 63M Buf, 19M Free
> Swap: 8192M Total, 651M Used, 7541M Free, 7% Inuse
>
>   PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME    WCPU COMMAND
>  7918 root       33  40    0   169M 16500K uwait  0 2935.7 489.70% mcached
> 81490 diag        1  40    0 25564K  8040K CPU2   2   0:00   2.87% top
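For scale, a quick back-of-the-envelope check (my own arithmetic, not anything from NetApp tooling): top's WCPU sums across a process's threads, so 489.70% on an 8-core FAS8040 node means mcached alone is consuming roughly five cores:

```python
# Rough interpretation of the WCPU figure from the top capture above.
wcpu_percent = 489.70        # mcached's WCPU as reported by top
cores = 8                    # CPUs on a FAS8040 node, per the sysstat -M header
cores_consumed = wcpu_percent / 100.0
share_of_node = cores_consumed / cores
print(f"mcached ~ {cores_consumed:.1f} cores, {share_of_node:.0%} of the node")
```

That lines up with the CPU line above: 54.1% user + 45.9% system and 0.0% idle, with mcached dominating the user share.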
>
> [... rest of the quoted top output and the original message trimmed; both appear in full earlier in the thread ...]
>
> _______________________________________________
> Toasters mailing list
> Toasters@teaparty.net
> http://www.teaparty.net/mailman/listinfo/toasters
>