Mailing List Archive

Flash Cache vs Flash Pool
Hello Toasters
I'm thinking about if/how to implement Flash Pool in our new cDOT
clusters, and was wondering if anyone could provide a bit of real world
guidance for me.

We have two SAS aggregates, aggr_sas_600g_c1n1 with 8x24 600g and
aggr_sas_1200g_c1n1 with 4x24 1.2t.

I've been doing a bunch of reading about Flash Pool vs Flash Cache and am
trying to better understand their strengths and weaknesses. Flash Pool
accelerates writes as well as reads (Flash Cache is read caching only).
However, with Flash Pool there seems to be the potential for slower cache
access/throughput, since the data has to travel the SAS path, whereas Flash
Cache is presumably DMA over PCIe. Maybe that's not a concern at all, I
don't know. Additionally, it appears that using Flash Pool disables the
Flash Cache functionality for the aggregates that are in hybrid mode (which
makes sense), but then we have expensive add-in cards doing nothing.

Our theoretical Flash Pool would be 2x24 200g, giving us about 5.5t of
usable caching space to sprinkle into these aggregates.
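
If anyone wants to redo that capacity arithmetic for a different layout, here's a rough Python sketch. The right-sized capacity per "200g" SSD, the RAID group split, and the spare count below are illustrative assumptions, not values pulled from our system, so the result is only as good as what you plug in:

# Back-of-the-envelope Flash Pool capacity for 2 shelves x 24 "200 GB" SSDs.
# All inputs are illustrative assumptions; actual right-sizing, RAID layout,
# spares, and WAFL reserve will change the answer.

def usable_cache_tib(ssd_count, right_sized_gib, raid_groups, parity_per_rg, spares):
    """Approximate usable SSD cache after parity and spares, in TiB."""
    data_ssds = ssd_count - raid_groups * parity_per_rg - spares
    return data_ssds * right_sized_gib / 1024

# Hypothetical layout: two RAID-DP groups (one per shelf), two spares,
# and an assumed ~167 GiB right-sized capacity per "200 GB" SSD.
print(f"~{usable_cache_tib(48, 167, 2, 2, 2):.1f} TiB usable")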

I've been running AWA on the cluster against those two SAS aggregates for
~24 hours and have come up with these stats:

### FP AWA Stats ###

Host mrk_c1n1 Memory 61054 MB
ONTAP Version NetApp Release 8.3.2P5: Tue Aug 23 01:27:00 PDT
2016

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Tue Nov 22 17:06:53 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 101918
Interval length (sec) 600
Total intervals 157
In-core Intervals 1024

Summary of the past 157 intervals
max
------------
Read Throughput (MB/s): 339.039
Write Throughput (MB/s): 123.536
Cacheable Read (%): 56
Cacheable Write (%): 66
Max Projected Cache Size (GiB): 787.463

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 714.650
Referenced Interval: ID 132 starting at Tue Nov 22 12:33:05 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 25 30 30 30 33
Write Hit (%) 1 2 2 2 2

The entire results and output of Automated Workload Analyzer (AWA) are
estimates. The format, syntax, CLI, results and output of AWA may
change in future Data ONTAP releases. AWA reports the projected cache
size in capacity. It does not make recommendations regarding the
number of data SSDs required. Please follow the guidelines for
configuring and deploying Flash Pool; that are provided in tools and
collateral documents. These include verifying the platform cache size
maximums and minimum number and maximum number of data SSDs.


Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Tue Nov 22 17:06:53 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 102357
Interval length (sec) 600
Total intervals 158
In-core Intervals 1024

Summary of the past 158 intervals
max
------------
Read Throughput (MB/s): 914.247
Write Throughput (MB/s): 257.318
Cacheable Read (%): 41
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 2412.178

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 2142.380
Referenced Interval: ID 113 starting at Tue Nov 22 09:04:41 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 34 38 38 38 41
Write Hit (%) 7 7 7 7 9

The entire results and output of Automated Workload Analyzer (AWA) are
estimates. The format, syntax, CLI, results and output of AWA may
change in future Data ONTAP releases. AWA reports the projected cache
size in capacity. It does not make recommendations regarding the
number of data SSDs required. Please follow the guidelines for
configuring and deploying Flash Pool; that are provided in tools and
collateral documents. These include verifying the platform cache size
maximums and minimum number and maximum number of data SSDs.



### FP AWA Stats End ###



Aggregate aggr_sas_600g_c1n1 has a lot of random overwrites (66% of writes
were cacheable) that could have been cached. The volumes in that aggregate
are pretty much exclusively Oracle databases. The other aggregate,
aggr_sas_1200g_c1n1, doesn't seem to be hit as hard.


Given those statistics, what would you do if your options were ~5.5t of
Flash Pool vs buying another 2t Flash Cache card per node in this HA pair?
I seem to be missing the 'Projected Read Offload' and 'Projected Write
Offload' statistics, which would have been very useful; they're mentioned
in the Flash Pool documentation at
https://library.netapp.com/ecmdocs/ECMP1368404/html/GUID-2C3EC0DF-FEFE-4871-A161-4A3BAC87DB69.html
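
In case it helps anyone reading along, the missing offload figures can be roughly approximated from what AWA does print. Here's a quick Python sketch; this is just arithmetic on the summary above, not whatever formula AWA actually uses, and I'm not sure whether the hit percentages are relative to all I/O or only to the cacheable portion, so treat it strictly as a ballpark:

# Rough stand-in for the missing 'Projected Read Offload' / 'Projected
# Write Offload' numbers. NOT the AWA formula - just the projected hit
# rate applied to the measured peak throughput from the summary above.

def approx_offload_mbs(peak_throughput_mbs, hit_pct):
    """Ballpark MB/s an SSD tier might absorb at a given projected hit rate."""
    return peak_throughput_mbs * hit_pct / 100.0

# aggr_sas_600g_c1n1 at 100% of the projected cache size
print(f"reads:  ~{approx_offload_mbs(339.039, 33):.0f} MB/s")
print(f"writes: ~{approx_offload_mbs(123.536, 2):.0f} MB/s")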

Thanks for any insight you all can provide.



--
Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
RE: Flash Cache vs Flash Pool [ In reply to ]
As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.
In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system that is being absolutely crushed with random IO, where the lower overall overhead of FlashCache helps a bit. It's rare, though.

Here are my main thoughts:

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.

2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.

4) The fact that some IO was cacheable doesn't mean anyone cares that it was cacheable. AWA does a pretty good job, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD, but it runs at midnight and nobody cares how fast it runs. Why waste the SSD?

I'd probably just take it slow. Add a few SSDs to each aggregate and dole them out slowly. Reevaluate every so often. Remember - once an SSD is added to an aggregate, you can't get it back out.

-----Original Message-----
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Ehrenwald, Ian
Sent: Tuesday, November 22, 2016 11:16 PM
To: toasters@teaparty.net
Subject: Flash Cache vs Flash Pool

Hello Toasters
I'm thinking about if/how to implement Flash Pool in our new cDOT clusters, and was wondering if anyone could provide a bit of real world guidance for me.

We have two SAS aggregates, aggr_sas_600g_c1n1 with 8x24 600g and
aggr_sas_1200g_c1n1 with 4x24 1.2t.

I've been doing a bunch of reading about Flash Pool vs Flash Cache and am trying to better understand their strengths and weaknesses. Flash Pool accelerates writes as well as reads (Flash Cache is reads only), however with Flash Pool there seems to be the potential for slower cache access/throughput vs Flash Cache since the data needs to travel the SAS path vs Flash Cache which is probably DMA through PCIe. Maybe that's not a concern at all, I don't know. Additionally, it appears that using Flash Pool disables the Flash Cache functionality for the aggregates which are in hybrid mode (makes sense), but then we have expensive add-in cards doing nothing.

Our theoretical Flash Pool would be 2x24 200g, giving us about 5.5t of usable caching space to sprinkle into these aggregates.

I've been running AWA on the cluster against those two SAS aggregates for
~24 hours and have come up with these stats:

### FP AWA Stats ###

Host mrk_c1n1 Memory 61054 MB
ONTAP Version NetApp Release 8.3.2P5: Tue Aug 23 01:27:00 PDT
2016

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Tue Nov 22 17:06:53 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 101918
Interval length (sec) 600
Total intervals 157
In-core Intervals 1024

Summary of the past 157 intervals
max
------------
Read Throughput (MB/s): 339.039
Write Throughput (MB/s): 123.536
Cacheable Read (%): 56
Cacheable Write (%): 66
Max Projected Cache Size (GiB): 787.463

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 714.650
Referenced Interval: ID 132 starting at Tue Nov 22 12:33:05 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 25 30 30 30 33
Write Hit (%) 1 2 2 2 2

The entire results and output of Automated Workload Analyzer (AWA) are estimates. The format, syntax, CLI, results and output of AWA may change in future Data ONTAP releases. AWA reports the projected cache size in capacity. It does not make recommendations regarding the number of data SSDs required. Please follow the guidelines for configuring and deploying Flash Pool; that are provided in tools and collateral documents. These include verifying the platform cache size maximums and minimum number and maximum number of data SSDs.


Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Tue Nov 22 17:06:53 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 102357
Interval length (sec) 600
Total intervals 158
In-core Intervals 1024

Summary of the past 158 intervals
max
------------
Read Throughput (MB/s): 914.247
Write Throughput (MB/s): 257.318
Cacheable Read (%): 41
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 2412.178

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 2142.380
Referenced Interval: ID 113 starting at Tue Nov 22 09:04:41 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 34 38 38 38 41
Write Hit (%) 7 7 7 7 9

The entire results and output of Automated Workload Analyzer (AWA) are estimates. The format, syntax, CLI, results and output of AWA may change in future Data ONTAP releases. AWA reports the projected cache size in capacity. It does not make recommendations regarding the number of data SSDs required. Please follow the guidelines for configuring and deploying Flash Pool; that are provided in tools and collateral documents. These include verifying the platform cache size maximums and minimum number and maximum number of data SSDs.



### FP AWA Stats End ###



Aggregate aggr_sas_600g_c1n1 has a lot of random overwrites (66%) that could have been cached. The volumes in that aggregate are pretty much exclusively Oracle databases. The other aggregate, aggr_sas_1200g_c1n1, doesn't seem hit as hard.


Given those statistics, what would you do if your options were ~5.5t of Flash Pool vs buying another 2t Flash Cache card per node in this HA pair?
I seem to be missing the 'Projected Read Offload' and 'Projected Write Offload' statistics which would have been very useful, mentioned at the Flash Pool documentation in
https://library.netapp.com/ecmdocs/ECMP1368404/html/GUID-2C3EC0DF-FEFE-4871
-A161-4A3BAC87DB69.html

Thanks for any insight you all can provide.



--
Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
Re: Flash Cache vs Flash Pool [ In reply to ]
As I've heard it, Flash Cache and Flash Pool work as a hierarchical cache in
ONTAP 9.1. New FAS systems have embedded Flash Cache, and you can't buy a
new system without it.

You can have way more Flash Pool than Flash Cache in a system.

1) FlashPool will never go cold due to a power failure or similar. That's
> my #1 reason for preferring it to FlashCache.

And in case of takeover, the surviving head uses all of the available Flash
Pool. With Flash Cache, half of the cache will be inaccessible.

3) FlashCache can be shared among multiple aggregates according to their
> needs, whereas FlashPool is fixed to one. Sometimes that helps address
> unknown or dynamic caching needs

You can use Flash Pool with several aggregates. A storage pool lets you
share Flash Pool SSDs among up to four aggregates.
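
To picture the arithmetic, here is a small Python sketch. The pool size and the 3/1 split are made-up numbers, and the four-allocation-unit behaviour is just how I understand SSD storage pools to work, so check the documentation for your release:

# Sketch of how SSD storage pool capacity divides across aggregates.
# A storage pool (as I understand it) exposes four allocation units that
# are assigned to aggregates whole. The pool size and the split below are
# hypothetical, purely for illustration.

POOL_USABLE_GIB = 2800          # hypothetical usable capacity of one pool
ALLOCATION_UNITS = 4            # allocation units per storage pool

unit_gib = POOL_USABLE_GIB / ALLOCATION_UNITS

plan = {"aggr_sas_600g_c1n1": 3, "aggr_sas_1200g_c1n1": 1}   # hypothetical split
for aggr, units in plan.items():
    print(f"{aggr}: {units} allocation unit(s) ~= {units * unit_gib:.0f} GiB of cache")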




On Wed, Nov 23, 2016 at 1:36 PM, Steiner, Jeffrey <
Jeffrey.Steiner@netapp.com> wrote:

> As you wrote, FlashCache will not be used by data residing on an SSD or
> FlashPool aggregate.
> '
> In my experience, the difference between FlashCache and FlashPool is
> almost never described in terms of performance. It can happen, but it
> usually seems to come up only with really obscure workloads, such as a
> system that being absolutely crushed with a random IO where the lower
> overall overhead of FlashCache helps a bit. It's rare, though.
>
> Here's my main thoughts:
>
> 1) FlashPool will never go cold due to a power failure or similar. That's
> my #1 reason for preferring it to FlashCache.
>
> 2) FlashPool can capture random overwrites, which can be really, really
> helpful with certain database workloads that have a lot of such IO.
>
> 3) FlashCache can be shared among multiple aggregates according to their
> needs, whereas FlashPool is fixed to one. Sometimes that helps address
> unknown or dynamic caching needs.
>
> 4) The fact some IO was cacheable doesn't mean anyone cares it was
> cacheable. AWA does pretty good, but it's not definitive. For example,
> let's say you have a workload that could be 2X faster with 1TB of FlashPool
> SSD but it runs at midnight and nobody cares about how fast it runs. Why
> waste the SSD?
>
> I'd probably just take it slow. Add a few SSD's to each aggregate and dole
> them out slowly. Reevaulate every so often. Remember - once an SSD is added
> into, you can't get rid of it.
>
> -----Original Message-----
> From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net]
> On Behalf Of Ehrenwald, Ian
> Sent: Tuesday, November 22, 2016 11:16 PM
> To: toasters@teaparty.net
> Subject: Flash Cache vs Flash Pool
>
> Hello Toasters
> I'm thinking about if/how to implement Flash Pool in our new cDOT
> clusters, and was wondering if anyone could provide a bit of real world
> guidance for me.
>
> We have two SAS aggregates, aggr_sas_600g_c1n1 with 8x24 600g and
> aggr_sas_1200g_c1n1 with 4x24 1.2t.
>
> I've been doing a bunch of reading about Flash Pool vs Flash Cache and am
> trying to better understand their strengths and weaknesses. Flash Pool
> accelerates writes as well as reads (Flash Cache is reads only), however
> with Flash Pool there seems to be the potential for slower cache
> access/throughput vs Flash Cache since the data needs to travel the SAS
> path vs Flash Cache which is probably DMA through PCIe. Maybe that's not a
> concern at all, I don't know. Additionally, it appears that using Flash
> Pool disables the Flash Cache functionality for the aggregates which are in
> hybrid mode (makes sense), but then we have expensive add-in cards doing
> nothing.
>
> Our theoretical Flash Pool would be 2x24 200g, giving us about 5.5t of
> usable caching space to sprinkle into these aggregates.
>
> I've been running AWA on the cluster against those two SAS aggregates for
> ~24 hours and have come up with these stats:
>
> ### FP AWA Stats ###
>
> Host mrk_c1n1 Memory 61054 MB
> ONTAP Version NetApp Release 8.3.2P5: Tue Aug 23 01:27:00 PDT
> 2016
>
> Basic Information
> Aggregate aggr_sas_600g_c1n1
> Current-time Tue Nov 22 17:06:53 EST 2016
> Start-time Mon Nov 21 12:48:17 EST 2016
> Total runtime (sec) 101918
> Interval length (sec) 600
> Total intervals 157
> In-core Intervals 1024
>
> Summary of the past 157 intervals
> max
> ------------
> Read Throughput (MB/s): 339.039
> Write Throughput (MB/s): 123.536
> Cacheable Read (%): 56
> Cacheable Write (%): 66
> Max Projected Cache Size (GiB): 787.463
>
> Summary Cache Hit Rate vs. Cache Size
> Referenced Cache Size (GiB): 714.650
> Referenced Interval: ID 132 starting at Tue Nov 22 12:33:05 EST 2016
> Size 20% 40% 60% 80% 100%
> Read Hit (%) 25 30 30 30 33
> Write Hit (%) 1 2 2 2 2
>
> The entire results and output of Automated Workload Analyzer (AWA) are
> estimates. The format, syntax, CLI, results and output of AWA may change in
> future Data ONTAP releases. AWA reports the projected cache size in
> capacity. It does not make recommendations regarding the number of data
> SSDs required. Please follow the guidelines for configuring and deploying
> Flash Pool; that are provided in tools and collateral documents. These
> include verifying the platform cache size maximums and minimum number and
> maximum number of data SSDs.
>
>
> Basic Information
> Aggregate aggr_sas_1200g_c1n1
> Current-time Tue Nov 22 17:06:53 EST 2016
> Start-time Mon Nov 21 12:40:57 EST 2016
> Total runtime (sec) 102357
> Interval length (sec) 600
> Total intervals 158
> In-core Intervals 1024
>
> Summary of the past 158 intervals
> max
> ------------
> Read Throughput (MB/s): 914.247
> Write Throughput (MB/s): 257.318
> Cacheable Read (%): 41
> Cacheable Write (%): 26
> Max Projected Cache Size (GiB): 2412.178
>
> Summary Cache Hit Rate vs. Cache Size
> Referenced Cache Size (GiB): 2142.380
> Referenced Interval: ID 113 starting at Tue Nov 22 09:04:41 EST 2016
> Size 20% 40% 60% 80% 100%
> Read Hit (%) 34 38 38 38 41
> Write Hit (%) 7 7 7 7 9
>
> The entire results and output of Automated Workload Analyzer (AWA) are
> estimates. The format, syntax, CLI, results and output of AWA may change in
> future Data ONTAP releases. AWA reports the projected cache size in
> capacity. It does not make recommendations regarding the number of data
> SSDs required. Please follow the guidelines for configuring and deploying
> Flash Pool; that are provided in tools and collateral documents. These
> include verifying the platform cache size maximums and minimum number and
> maximum number of data SSDs.
>
>
>
> ### FP AWA Stats End ###
>
>
>
> Aggregate aggr_sas_600g_c1n1 has a lot of random overwrites (66%) that
> could have been cached. The volumes in that aggregate are pretty much
> exclusively Oracle databases. The other aggregate, aggr_sas_1200g_c1n1,
> doesn't seem hit as hard.
>
>
> Given those statistics, what would you do if your options were ~5.5t of
> Flash Pool vs buying another 2t Flash Cache card per node in this HA pair?
> I seem to be missing the 'Projected Read Offload' and 'Projected Write
> Offload' statistics which would have been very useful, mentioned at the
> Flash Pool documentation in
> https://library.netapp.com/ecmdocs/ECMP1368404/html/GUID-
> 2C3EC0DF-FEFE-4871
> -A161-4A3BAC87DB69.html
>
> Thanks for any insight you all can provide.
>
>
>
> --
> Ian Ehrenwald
> Senior Infrastructure Engineer
> Hachette Book Group, Inc.
> 1.617.263.1948 / ian.ehrenwald@hbgusa.com
>
> This may contain confidential material. If you are not an intended
> recipient, please notify the sender, delete immediately, and understand
> that no disclosure or reliance on the information herein is permitted.
> Hachette Book Group may monitor email to and from our network.
>
> _______________________________________________
> Toasters mailing list
> Toasters@teaparty.net
> http://www.teaparty.net/mailman/listinfo/toasters
>
> _______________________________________________
> Toasters mailing list
> Toasters@teaparty.net
> http://www.teaparty.net/mailman/listinfo/toasters
>
RE: Flash Cache vs Flash Pool [ In reply to ]
Correct.

We cover this in episode 62 of the Tech ONTAP podcast:

https://soundcloud.com/techontap_podcast/episode-62-ontap-91-hardware-refresh

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Arthur Alikulov
Sent: Wednesday, November 23, 2016 7:08 AM
To: Steiner, Jeffrey
Cc: toasters@teaparty.net
Subject: Re: Flash Cache vs Flash Pool

As I heard Flash Cache and Flash Pool work as hierarchal cache in ONTAP 9.1. New FAS systems have embedded Flash Cache and you cann't buy new system without it.

You can have way more Flash Pool than Flash Cache in system.

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.
And in case of takeover survived head uses all available Flash Pool. With Flash Cache half of the cache will be unaccessible.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs
You can use Flash Pool with several aggregates. Storage pool let you share Flash Pool among 4 aggregates.




On Wed, Nov 23, 2016 at 1:36 PM, Steiner, Jeffrey <Jeffrey.Steiner@netapp.com> wrote:
As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.
'
In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system that being absolutely crushed with a random IO where the lower overall overhead of FlashCache helps a bit. It's rare, though.

Here's my main thoughts:

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.

2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.

4) The fact some IO was cacheable doesn't mean anyone cares it was cacheable. AWA does pretty good, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD but it runs at midnight and nobody cares about how fast it runs. Why waste the SSD?

I'd probably just take it slow. Add a few SSD's to each aggregate and dole them out slowly. Reevaulate every so often. Remember - once an SSD is added into, you can't get rid of it.

-----Original Message-----
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Ehrenwald, Ian
Sent: Tuesday, November 22, 2016 11:16 PM
To: toasters@teaparty.net
Subject: Flash Cache vs Flash Pool

Hello Toasters
I'm thinking about if/how to implement Flash Pool in our new cDOT clusters, and was wondering if anyone could provide a bit of real world guidance for me.

We have two SAS aggregates, aggr_sas_600g_c1n1 with 8x24 600g and
aggr_sas_1200g_c1n1 with 4x24 1.2t.

I've been doing a bunch of reading about Flash Pool vs Flash Cache and am trying to better understand their strengths and weaknesses. Flash Pool accelerates writes as well as reads (Flash Cache is reads only), however with Flash Pool there seems to be the potential for slower cache access/throughput vs Flash Cache since the data needs to travel the SAS path vs Flash Cache which is probably DMA through PCIe. Maybe that's not a concern at all, I don't know. Additionally, it appears that using Flash Pool disables the Flash Cache functionality for the aggregates which are in hybrid mode (makes sense), but then we have expensive add-in cards doing nothing.

Our theoretical Flash Pool would be 2x24 200g, giving us about 5.5t of usable caching space to sprinkle into these aggregates.

I've been running AWA on the cluster against those two SAS aggregates for
~24 hours and have come up with these stats:

### FP AWA Stats ###

Host mrk_c1n1 Memory 61054 MB
ONTAP Version NetApp Release 8.3.2P5: Tue Aug 23 01:27:00 PDT
2016

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Tue Nov 22 17:06:53 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 101918
Interval length (sec) 600
Total intervals 157
In-core Intervals 1024

Summary of the past 157 intervals
max
------------
Read Throughput (MB/s): 339.039
Write Throughput (MB/s): 123.536
Cacheable Read (%): 56
Cacheable Write (%): 66
Max Projected Cache Size (GiB): 787.463

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 714.650
Referenced Interval: ID 132 starting at Tue Nov 22 12:33:05 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 25 30 30 30 33
Write Hit (%) 1 2 2 2 2

The entire results and output of Automated Workload Analyzer (AWA) are estimates. The format, syntax, CLI, results and output of AWA may change in future Data ONTAP releases. AWA reports the projected cache size in capacity. It does not make recommendations regarding the number of data SSDs required. Please follow the guidelines for configuring and deploying Flash Pool; that are provided in tools and collateral documents. These include verifying the platform cache size maximums and minimum number and maximum number of data SSDs.


Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Tue Nov 22 17:06:53 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 102357
Interval length (sec) 600
Total intervals 158
In-core Intervals 1024

Summary of the past 158 intervals
max
------------
Read Throughput (MB/s): 914.247
Write Throughput (MB/s): 257.318
Cacheable Read (%): 41
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 2412.178

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 2142.380
Referenced Interval: ID 113 starting at Tue Nov 22 09:04:41 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 34 38 38 38 41
Write Hit (%) 7 7 7 7 9

The entire results and output of Automated Workload Analyzer (AWA) are estimates. The format, syntax, CLI, results and output of AWA may change in future Data ONTAP releases. AWA reports the projected cache size in capacity. It does not make recommendations regarding the number of data SSDs required. Please follow the guidelines for configuring and deploying Flash Pool; that are provided in tools and collateral documents. These include verifying the platform cache size maximums and minimum number and maximum number of data SSDs.



### FP AWA Stats End ###



Aggregate aggr_sas_600g_c1n1 has a lot of random overwrites (66%) that could have been cached. The volumes in that aggregate are pretty much exclusively Oracle databases. The other aggregate, aggr_sas_1200g_c1n1, doesn't seem hit as hard.


Given those statistics, what would you do if your options were ~5.5t of Flash Pool vs buying another 2t Flash Cache card per node in this HA pair?
I seem to be missing the 'Projected Read Offload' and 'Projected Write Offload' statistics which would have been very useful, mentioned at the Flash Pool documentation in
https://library.netapp.com/ecmdocs/ECMP1368404/html/GUID-2C3EC0DF-FEFE-4871
-A161-4A3BAC87DB69.html

Thanks for any insight you all can provide.



--
Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
Re: Flash Cache vs Flash Pool [ In reply to ]
Hello Jeffrey
Thanks for the recommendations. Many of our workloads run in the evening hours and others are almost 24/7, so the time-of-day consideration isn't a factor for us (I think). Here's what AWA is reporting after running for over 5 days against both of the aggregates I previously mentioned:

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 448414
Interval length (sec) 600
Total intervals 672
In-core Intervals 1024

Summary of the past 672 intervals
max
------------
Read Throughput (MB/s): 352.553
Write Throughput (MB/s): 142.209
Cacheable Read (%): 56
Cacheable Write (%): 79
Max Projected Cache Size (GiB): 1284.788

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 1270.275
Referenced Interval: ID 647 starting at Sat Nov 26 12:33:15 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 54 63 64 64 64
Write Hit (%) 1 1 1 1 1




Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 448853
Interval length (sec) 600
Total intervals 673
In-core Intervals 1024

Summary of the past 673 intervals
max
------------
Read Throughput (MB/s): 933.115
Write Throughput (MB/s): 257.318
Cacheable Read (%): 62
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 5080.340

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 4715.367
Referenced Interval: ID 640 starting at Sat Nov 26 10:55:47 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 35 41 41 42 42
Write Hit (%) 11 11 11 11 14



The first aggregate listed (aggr_sas_600g_c1n1) is where our database volumes live, the vast majority of them Oracle. Based on what I see in this report, that aggregate could really benefit from the write caching provided by FlashPool, since 79% of the write operations were cacheable. At the very least, our latency should go down markedly?

Almost inversely, the other aggregate (aggr_sas_1200g_c1n1) is much heavier on the read side, with 62% of reads cacheable. This aggregate contains our VMware datastores, application binaries, etc., and it might benefit more from staying on FlashCache.

Am I understanding and interpreting the data from AWA correctly?


Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

________________________________________
From: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>
Sent: Wednesday, November 23, 2016 6:36 AM
To: Ehrenwald, Ian; toasters@teaparty.net
Subject: RE: Flash Cache vs Flash Pool

As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.
'
In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system that being absolutely crushed with a random IO where the lower overall overhead of FlashCache helps a bit. It's rare, though.

Here's my main thoughts:

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.

2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.

4) The fact some IO was cacheable doesn't mean anyone cares it was cacheable. AWA does pretty good, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD but it runs at midnight and nobody cares about how fast it runs. Why waste the SSD?

I'd probably just take it slow. Add a few SSD's to each aggregate and dole them out slowly. Reevaulate every so often. Remember - once an SSD is added into, you can't get rid of it.
This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
RE: Flash Cache vs Flash Pool [ In reply to ]
I'm not an expert on AWA, but I'll ask some of the team I work with for thoughts.

I am, however, an Oracle expert. If you really want an opinion on an Oracle workload, send me an AWR report offline. Specifically, use 'awrrpt.sql' to generate the report and make sure it targets a period where someone was concerned or complaining about performance. The elapsed time should be no more than one hour. That's the best way to see the bottleneck.

Direct improvements in real-world performance will come from read caching. The improvements from write caching are less clear. When you write to ONTAP, you're actually writing to the NVRAM. That's all the application cares about, and that means you're already writing to pure solid state storage. It's also a lot faster than Flash technology.

When AWA detects an opportunity for write caching, what it's really saying is that it sees repeated overwrites of the same blocks. If you cache that data on SSD, you will reduce the load on the spinning media. That means the spinning media read response times will be better. The improvement depends on just how busy those disks were. Some workloads, such as DataGuard standby databases with very low amounts of RAM, tend to do a huge amount of overwrites of a small number of blocks. FlashPool write caching can be hugely helpful there.
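
To put rough numbers on that "reduce the load on the spinning media" point, here's a small Python sketch. The IOPS figures and the hit rate are placeholders, not measurements from your system:

# Back-of-the-envelope: spindle load removed by Flash Pool write caching.
# All inputs are placeholder assumptions, not measured values.

hdd_read_iops = 8000        # hypothetical random reads landing on the HDDs
hdd_write_iops = 3000       # hypothetical random overwrites landing on the HDDs
write_hit_pct = 60          # hypothetical share of those overwrites absorbed by SSD

absorbed = hdd_write_iops * write_hit_pct / 100
before = hdd_read_iops + hdd_write_iops
after = before - absorbed
print(f"HDD ops/sec: {before:.0f} -> {after:.0f} "
      f"({100 * absorbed / before:.0f}% less spindle work, so reads queue less)")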

-----Original Message-----
From: Ehrenwald, Ian [mailto:Ian.Ehrenwald@hbgusa.com]
Sent: Sunday, November 27, 2016 4:07 AM
To: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>; toasters@teaparty.net
Subject: Re: Flash Cache vs Flash Pool

Hello Jeffrey
Thanks for the recommendations. Many of our workloads run in evening hours and others are almost 24/7, so time of day vs SSD count isn't a factor for us (I think). Here's what AWA is reporting after running for over 5 days against both of the aggregates I previously mentioned:

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 448414
Interval length (sec) 600
Total intervals 672
In-core Intervals 1024

Summary of the past 672 intervals
max
------------
Read Throughput (MB/s): 352.553
Write Throughput (MB/s): 142.209
Cacheable Read (%): 56
Cacheable Write (%): 79
Max Projected Cache Size (GiB): 1284.788

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 1270.275
Referenced Interval: ID 647 starting at Sat Nov 26 12:33:15 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 54 63 64 64 64
Write Hit (%) 1 1 1 1 1




Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 448853
Interval length (sec) 600
Total intervals 673
In-core Intervals 1024

Summary of the past 673 intervals
max
------------
Read Throughput (MB/s): 933.115
Write Throughput (MB/s): 257.318
Cacheable Read (%): 62
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 5080.340

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 4715.367
Referenced Interval: ID 640 starting at Sat Nov 26 10:55:47 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 35 41 41 42 42
Write Hit (%) 11 11 11 11 14



The first aggregate listed (aggr_sas_600g_c1n1) is where our database volumes live, vast majority of them being Oracle. Based on what I see from this report, that aggregate could really benefit from write caching provided by FlashPool since 79% of the operations were cacheable. At the very least, our latency should go down markedly?

Almost inversely, the other aggregate (aggr_sas_1200g_c1n1) is much heavier on the read side with 62% read cacheable. This aggregate contains our VMware datastores, application binaries, etc and it might benefit more from staying in FlashCache.

Am I understanding and interpreting the data from AWA correctly?


Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

________________________________________
From: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>
Sent: Wednesday, November 23, 2016 6:36 AM
To: Ehrenwald, Ian; toasters@teaparty.net
Subject: RE: Flash Cache vs Flash Pool

As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.
'
In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system that being absolutely crushed with a random IO where the lower overall overhead of FlashCache helps a bit. It's rare, though.

Here's my main thoughts:

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.

2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.

4) The fact some IO was cacheable doesn't mean anyone cares it was cacheable. AWA does pretty good, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD but it runs at midnight and nobody cares about how fast it runs. Why waste the SSD?

I'd probably just take it slow. Add a few SSD's to each aggregate and dole them out slowly. Reevaulate every so often. Remember - once an SSD is added into, you can't get rid of it.
This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
RE: Flash Cache vs Flash Pool [ In reply to ]
It looks like the AWA output shows that you'll hit the optimum caching point with just 60% of the simulated cache you specified. That's pretty typical. The workload is concentrated on a fairly small amount of disk. Just 700GB or so will deliver the maximum value you should expect.

There are some exceptions. For example, there might be a database workload that really REALLY doesn't like even a few extra spinning disk hits, and a little extra cacheability would improve performance a lot. Unlikely, but possible. We'd need to look at AWR data to say more.
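
For what it's worth, that optimum point can be read straight off the hit-rate row. Here's a quick Python sketch against the 5-day numbers for your aggr_sas_600g_c1n1 report; the helper itself is just illustrative:

# Find the smallest simulated cache size whose read hit rate already
# matches the maximum, using the "Cache Hit Rate vs. Cache Size" row
# from the 5-day AWA report for aggr_sas_600g_c1n1.

def smallest_size_at_max(sizes_pct, read_hits_pct):
    best = max(read_hits_pct)
    for size, hit in zip(sizes_pct, read_hits_pct):
        if hit >= best:
            return size, hit

sizes = [20, 40, 60, 80, 100]
read_hits = [54, 63, 64, 64, 64]        # values from the AWA report
size, hit = smallest_size_at_max(sizes, read_hits)
print(f"{size}% of the projected cache already reaches the {hit}% read hit ceiling")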

-----Original Message-----
From: Steiner, Jeffrey
Sent: Sunday, November 27, 2016 1:49 PM
To: 'Ehrenwald, Ian' <Ian.Ehrenwald@hbgusa.com>; toasters@teaparty.net
Subject: RE: Flash Cache vs Flash Pool

I'm not an expert on AWA, but I'll ask some of the team I work with for thoughts.

I am, however, and Oracle expert. If you really want an opinion on an Oracle workload, send me an AWR report offline. Specifically, use 'awrrpt.sql' to make the report and make sure it targets a period where someone was concerned or complaining about performance. The elapsed time should be no more than one hour. That's the best way to see the bottleneck.

Direct improvements in real-world performance will come from read caching. The improvements from write caching are less clear. When you write to ONTAP, you're actually writing to the NVRAM. That's all the application cares about, and that means you're already writing to pure solid state storage. It's also a lot faster than Flash technology.

When AWA detects an opportunity for write caching, what it's really saying is that it sees repeated overwrites of the same blocks. If you cache that data on SSD, you will reduce the load on the spinning media. That means the spinning media read response times will be better. The improvement depends on just how busy those disks were. Some workloads, such as DataGuard standby databases with very low amounts of RAM tend to do a huge amount of overwrites of a small number of blocks. FlashPool write caching can be hugely helpful there.

-----Original Message-----
From: Ehrenwald, Ian [mailto:Ian.Ehrenwald@hbgusa.com]
Sent: Sunday, November 27, 2016 4:07 AM
To: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>; toasters@teaparty.net
Subject: Re: Flash Cache vs Flash Pool

Hello Jeffrey
Thanks for the recommendations. Many of our workloads run in evening hours and others are almost 24/7, so time of day vs SSD count isn't a factor for us (I think). Here's what AWA is reporting after running for over 5 days against both of the aggregates I previously mentioned:

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 448414
Interval length (sec) 600
Total intervals 672
In-core Intervals 1024

Summary of the past 672 intervals
max
------------
Read Throughput (MB/s): 352.553
Write Throughput (MB/s): 142.209
Cacheable Read (%): 56
Cacheable Write (%): 79
Max Projected Cache Size (GiB): 1284.788

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 1270.275
Referenced Interval: ID 647 starting at Sat Nov 26 12:33:15 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 54 63 64 64 64
Write Hit (%) 1 1 1 1 1




Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 448853
Interval length (sec) 600
Total intervals 673
In-core Intervals 1024

Summary of the past 673 intervals
max
------------
Read Throughput (MB/s): 933.115
Write Throughput (MB/s): 257.318
Cacheable Read (%): 62
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 5080.340

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 4715.367
Referenced Interval: ID 640 starting at Sat Nov 26 10:55:47 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 35 41 41 42 42
Write Hit (%) 11 11 11 11 14



The first aggregate listed (aggr_sas_600g_c1n1) is where our database volumes live, vast majority of them being Oracle. Based on what I see from this report, that aggregate could really benefit from write caching provided by FlashPool since 79% of the operations were cacheable. At the very least, our latency should go down markedly?

Almost inversely, the other aggregate (aggr_sas_1200g_c1n1) is much heavier on the read side with 62% read cacheable. This aggregate contains our VMware datastores, application binaries, etc and it might benefit more from staying in FlashCache.

Am I understanding and interpreting the data from AWA correctly?


Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

________________________________________
From: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>
Sent: Wednesday, November 23, 2016 6:36 AM
To: Ehrenwald, Ian; toasters@teaparty.net
Subject: RE: Flash Cache vs Flash Pool

As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.
'
In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system that being absolutely crushed with a random IO where the lower overall overhead of FlashCache helps a bit. It's rare, though.

Here's my main thoughts:

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.

2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.

4) The fact some IO was cacheable doesn't mean anyone cares it was cacheable. AWA does pretty good, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD but it runs at midnight and nobody cares about how fast it runs. Why waste the SSD?

I'd probably just take it slow. Add a few SSD's to each aggregate and dole them out slowly. Reevaulate every so often. Remember - once an SSD is added into, you can't get rid of it.
This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
Re: Flash Cache vs Flash Pool [ In reply to ]
Hi Jeffrey
Thanks for the interesting follow-up to my statistics, and I apologize for not replying to your earlier email sooner. I am working with our DBA team to see if they're OK with sending out AWRs for external analysis. I don't see any reason why we couldn't do it, but DB stuff isn't in my "sphere of influence" ;)


Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

________________________________________
From: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>
Sent: Wednesday, November 30, 2016 6:23:10 AM
To: Ehrenwald, Ian; toasters@teaparty.net
Cc: Smith, Christopher
Subject: RE: Flash Cache vs Flash Pool

It looks like the AWA output shows that you'll hit the optimum caching point with just 60% of the simulated cache you specified. That's pretty typical. The workload is concentrated on a fairly small amount of disk. Just 700GB or so will deliver the maximum value you should expect.

There are some exceptions. For example, there might be a database workload that really REALLY doesn't like even a few extra spinning disk hits, and a little extra cacheability would improve performance a lot. Unlikely, but possible. We'd need to look at AWR data to say more.

-----Original Message-----
From: Steiner, Jeffrey
Sent: Sunday, November 27, 2016 1:49 PM
To: 'Ehrenwald, Ian' <Ian.Ehrenwald@hbgusa.com>; toasters@teaparty.net
Subject: RE: Flash Cache vs Flash Pool

I'm not an expert on AWA, but I'll ask some of the team I work with for thoughts.

I am, however, and Oracle expert. If you really want an opinion on an Oracle workload, send me an AWR report offline. Specifically, use 'awrrpt.sql' to make the report and make sure it targets a period where someone was concerned or complaining about performance. The elapsed time should be no more than one hour. That's the best way to see the bottleneck.

Direct improvements in real-world performance will come from read caching. The improvements from write caching are less clear. When you write to ONTAP, you're actually writing to the NVRAM. That's all the application cares about, and that means you're already writing to pure solid state storage. It's also a lot faster than Flash technology.

When AWA detects an opportunity for write caching, what it's really saying is that it sees repeated overwrites of the same blocks. If you cache that data on SSD, you will reduce the load on the spinning media. That means the spinning media read response times will be better. The improvement depends on just how busy those disks were. Some workloads, such as DataGuard standby databases with very low amounts of RAM tend to do a huge amount of overwrites of a small number of blocks. FlashPool write caching can be hugely helpful there.

-----Original Message-----
From: Ehrenwald, Ian [mailto:Ian.Ehrenwald@hbgusa.com]
Sent: Sunday, November 27, 2016 4:07 AM
To: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>; toasters@teaparty.net
Subject: Re: Flash Cache vs Flash Pool

Hello Jeffrey
Thanks for the recommendations. Many of our workloads run in evening hours and others are almost 24/7, so time of day vs SSD count isn't a factor for us (I think). Here's what AWA is reporting after running for over 5 days against both of the aggregates I previously mentioned:

Basic Information
Aggregate aggr_sas_600g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:48:17 EST 2016
Total runtime (sec) 448414
Interval length (sec) 600
Total intervals 672
In-core Intervals 1024

Summary of the past 672 intervals
max
------------
Read Throughput (MB/s): 352.553
Write Throughput (MB/s): 142.209
Cacheable Read (%): 56
Cacheable Write (%): 79
Max Projected Cache Size (GiB): 1284.788

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 1270.275
Referenced Interval: ID 647 starting at Sat Nov 26 12:33:15 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 54 63 64 64 64
Write Hit (%) 1 1 1 1 1




Basic Information
Aggregate aggr_sas_1200g_c1n1
Current-time Sat Nov 26 17:21:42 EST 2016
Start-time Mon Nov 21 12:40:57 EST 2016
Total runtime (sec) 448853
Interval length (sec) 600
Total intervals 673
In-core Intervals 1024

Summary of the past 673 intervals
max
------------
Read Throughput (MB/s): 933.115
Write Throughput (MB/s): 257.318
Cacheable Read (%): 62
Cacheable Write (%): 26
Max Projected Cache Size (GiB): 5080.340

Summary Cache Hit Rate vs. Cache Size
Referenced Cache Size (GiB): 4715.367
Referenced Interval: ID 640 starting at Sat Nov 26 10:55:47 EST 2016
Size 20% 40% 60% 80% 100%
Read Hit (%) 35 41 41 42 42
Write Hit (%) 11 11 11 11 14



The first aggregate listed (aggr_sas_600g_c1n1) is where our database volumes live, vast majority of them being Oracle. Based on what I see from this report, that aggregate could really benefit from write caching provided by FlashPool since 79% of the operations were cacheable. At the very least, our latency should go down markedly?

Almost inversely, the other aggregate (aggr_sas_1200g_c1n1) is much heavier on the read side with 62% read cacheable. This aggregate contains our VMware datastores, application binaries, etc and it might benefit more from staying in FlashCache.

Am I understanding and interpreting the data from AWA correctly?


Ian Ehrenwald
Senior Infrastructure Engineer
Hachette Book Group, Inc.
1.617.263.1948 / ian.ehrenwald@hbgusa.com

________________________________________
From: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com>
Sent: Wednesday, November 23, 2016 6:36 AM
To: Ehrenwald, Ian; toasters@teaparty.net
Subject: RE: Flash Cache vs Flash Pool

As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.
'
In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system that being absolutely crushed with a random IO where the lower overall overhead of FlashCache helps a bit. It's rare, though.

Here's my main thoughts:

1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.

2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.

3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.

4) The fact some IO was cacheable doesn't mean anyone cares it was cacheable. AWA does pretty good, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD but it runs at midnight and nobody cares about how fast it runs. Why waste the SSD?

I'd probably just take it slow. Add a few SSD's to each aggregate and dole them out slowly. Reevaulate every so often. Remember - once an SSD is added into, you can't get rid of it.
This may contain confidential material. If you are not an intended recipient, please notify the sender, delete immediately, and understand that no disclosure or reliance on the information herein is permitted. Hachette Book Group may monitor email to and from our network.

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters
RE: Flash Cache vs Flash Pool [ In reply to ]
Just to close this out - we took this offline for a little data collection. Here's an example of what you can learn about storage performance from Oracle AWR reports. They really are the best starting place. Sometimes we need to look closer at the storage, but I always prefer to start at the database layer.



There are signatures of various problems in those AWR reports. For example, if someone says "the database is too slow" and they're seeing latency of 30ms, I agree, that's bad and it's probably related to the storage system. If latency is 300ms, that's not storage; that's probably network congestion. 3000ms is catastrophic. That's what you get when LACP is misconfigured and such.



Here's what we found in the AWR. I've changed system names for the sake of security. It's a minor thing, but no need to advertise that.



First, the DBAs asked about putting redo logs on SSD. Check out TR-3633, in particular section 10.2. Putting redo logs on an SSD aggregate is virtually guaranteed to accomplish nothing, because from a write point of view they're already on solid state - NVRAM. Section 10.2 explains this in more detail.



I don't see a compelling reason to change anything about the storage system based on this data. There is no evidence at all that redo logging is a concern.



The key thing to look at is the load profile and the foreground events. That usually tells the whole story.



For example, DW1 shows this:
Load Profile
                         Per Second   Per Transaction   Per Exec   Per Call
Redo size:              3,472,750.3         349,451.7
Logical reads:            191,971.8          19,317.5
Physical reads:             8,514.8             856.8
Physical writes:              800.6              80.6

So, we're reading about 200K blocks/sec from the SGA cache. That means the database is definitely in use. The physical reads are nothing special. Only about 8500 blocks/sec. That doesn't mean 8500 IOPS, it just means 8500 blocks/sec. It could be multiblock reads. Now we drop down to the foreground events.
Top 5 Timed Foreground Events
Event                              Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU                                      21,119                       14.01
db file sequential read       12,147,000    17,413               1       11.55   User I/O
enq: TX - row lock contention      1,289     6,751            5237        4.48   Application
SQL*Net more data from dblink     11,274     6,739             598        4.47   Network
resmgr:cpu quantum             2,511,892     4,478               2        2.97   Scheduler

Most of the time waiting is spent on CPU work, and then you have random reads in the form of db_file_sequential_read. Yes, that's a random IO operation. It's already averaging 1ms latency, so this isn't going to get much better, even with 100% SSD. If someone was complaining about performance on December 4th between noon and 3pm, it wasn't because of storage.



We have another sample from 8am to 11am on December 8 for DW1. It looks like this:
Load Profile
                         Per Second   Per Transaction   Per Exec   Per Call
Redo size:                858,861.0       1,485,848.1
Logical reads:            219,310.0         379,411.0
Block changes:              2,775.4           4,801.5
Physical reads:            11,216.3          19,404.4
Physical writes:              286.1             495.0

200K logical reads again, which is respectable. Storage IO is about 11K read IOPS. Again, nothing even noticeable on a typical storage array. How are the latencies...
Top 5 Timed Foreground Events
Event                              Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU                                      42,440                       29.55
resmgr:cpu quantum            11,902,019    25,021               2       17.42   Scheduler
db file sequential read        5,432,705    15,915               3       11.08   User I/O
SQL*Net more data from dblink      9,309    14,324            1539        9.97   Network
enq: TX - row lock contention      1,036    10,682           10310        7.44   Application

Once again, the main barrier to performance is CPU computation work. This doesn't mean you need more CPUs; it's probably just a single thread on a single core doing some work. It's pretty normal. You also have a lot of CPU resource throttling happening here. If there are performance complaints at this time, check that out.



Storage latency is a bit higher at 3ms. Sure, we can improve that, but it only represents about 11% of the total waiting. If you were running ONTAP 10K with the infinitely fast holographic storage crystals that yield zero latency, you'd only get an 11% performance improvement.
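That ceiling is just arithmetic. Here's the back-of-the-envelope version using the 11.08% figure from the table above; the 0.5ms SSD target is an assumption for illustration only:

    # Upper bound on overall improvement from speeding up a single wait class.
    fraction_of_db_time = 0.1108   # db file sequential read share of DB time
    old_latency_ms = 3.0           # observed average wait
    new_latency_ms = 0.5           # assumed all-SSD target latency

    db_time_saved = fraction_of_db_time * (1 - new_latency_ms / old_latency_ms)
    print(f"Best case: about {db_time_saved:.1%} less DB time")
    # Even with zero-latency storage, the ceiling is the ~11% spent on this wait.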



The sample from December 4th from 2am to 6am shows a little more latency:
Top 5 Timed Foreground Events
Event                               Waits    Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU                                        30,300                       33.96
db file sequential read         3,371,761     15,422               5       17.28   User I/O
SQL*Net more data from dblink      41,717     14,691             352       16.46   Network
direct path read                  404,790      9,249              23       10.37   User I/O
db file scattered read            905,819      8,808              10        9.87   User I/O




We're at 5ms. Now *that's* a workload where Flash helps. We could bring down the latency on the db_file_sequential_read and db_file_scattered_read events and noticeably improve performance. FlashPool normally does the job here, but it depends on just how random the IO is; there's a point where you just have to go all-SSD to improve things. I checked the aggregate, and it's certainly big enough that it shouldn't be saturating. Were there complaints from 2am to 6am on December 4th?
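For a feel of what read caching could do in this window, this is the kind of rough estimate I'd run. Only the 5ms HDD latency comes from the report above; the hit rates and the 0.5ms SSD read-hit latency are assumptions for illustration:

    # Blended random-read latency once a flash cache sits in front of spinning disks.
    hdd_latency_ms = 5.0   # observed db file sequential read average in this window
    ssd_latency_ms = 0.5   # assumed flash read-hit latency

    for hit_rate in (0.3, 0.5, 0.7):   # assumed cache hit rates
        blended = hit_rate * ssd_latency_ms + (1 - hit_rate) * hdd_latency_ms
        print(f"hit rate {hit_rate:.0%}: roughly {blended:.1f} ms average read latency")

The better the hit rate FlashPool achieves on that random-read working set, the closer you get to SSD-like latency without going all-flash.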



PROD1 has too little redo logging to conclude anything. It's averaging about 100KB of redo per second. The latencies are a little higher than you'd normally see, but when logging falls that low the AWR stats get skewed, and the OS itself can introduce delays via the IO scheduler. It sometimes waits a little to see if more redo operations are about to be issued.



The AWR reports for PROD2 are from a time of basically zero activity. The IOPS are around 2. That's actually two IO's per second. The microSD chip in my phone could handle those workloads.







-----Original Message-----
From: Steiner, Jeffrey
Sent: Wednesday, November 30, 2016 12:23 PM
To: 'Ehrenwald, Ian' <Ian.Ehrenwald@hbgusa.com>; 'toasters@teaparty.net' <toasters@teaparty.net>
Cc: Smith, Christopher <Christopher.Smith3@netapp.com>
Subject: RE: Flash Cache vs Flash Pool



It looks like the AWA output shows that you'll hit the optimum caching point with just 60% of the simulated cache you specified. That's pretty typical. The workload is concentrated on a fairly small amount of disk. Just 700GB or so will deliver the maximum value you should expect.
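For what it's worth, here's how I read the AWA "Read Hit (%) vs. cache size" row, using the numbers from the 5-day run on aggr_sas_600g_c1n1 quoted further down in this thread. The "smallest size that already reaches the maximum hit rate" rule is just my heuristic, not something AWA reports itself:

    # Find the smallest simulated cache size that already delivers the maximum
    # projected read hit rate. Data: 5-day AWA run on aggr_sas_600g_c1n1.
    referenced_size_gib = 1270.275
    read_hit_by_fraction = {0.2: 54, 0.4: 63, 0.6: 64, 0.8: 64, 1.0: 64}

    best_hit = max(read_hit_by_fraction.values())
    for fraction in sorted(read_hit_by_fraction):
        if read_hit_by_fraction[fraction] == best_hit:
            print(f"~{fraction:.0%} of the simulated cache "
                  f"(about {fraction * referenced_size_gib:.0f} GiB) already hits "
                  f"the maximum {best_hit}% read hit rate")
            break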



There are some exceptions. For example, there might be a database workload that really REALLY doesn't like even a few extra spinning disk hits, and a little extra cacheability would improve performance a lot. Unlikely, but possible. We'd need to look at AWR data to say more.



-----Original Message-----

From: Steiner, Jeffrey

Sent: Sunday, November 27, 2016 1:49 PM

To: 'Ehrenwald, Ian' <Ian.Ehrenwald@hbgusa.com<mailto:Ian.Ehrenwald@hbgusa.com>>; toasters@teaparty.net<mailto:toasters@teaparty.net>

Subject: RE: Flash Cache vs Flash Pool



I'm not an expert on AWA, but I'll ask some of the team I work with for thoughts.



I am, however, an Oracle expert. If you really want an opinion on an Oracle workload, send me an AWR report offline. Specifically, use 'awrrpt.sql' to generate the report, and make sure it targets a period where someone was concerned or complaining about performance. The elapsed time should be no more than one hour. That's the best way to see the bottleneck.



Direct improvements in real-world performance will come from read caching. The improvements from write caching are less clear. When you write to ONTAP, you're actually writing to the NVRAM. That's all the application cares about, and that means you're already writing to pure solid state storage. It's also a lot faster than Flash technology.



When AWA detects an opportunity for write caching, what it's really saying is that it sees repeated overwrites of the same blocks. If you cache that data on SSD, you will reduce the load on the spinning media. That means the spinning media read response times will be better. The improvement depends on just how busy those disks were. Some workloads, such as DataGuard standby databases with very low amounts of RAM tend to do a huge amount of overwrites of a small number of blocks. FlashPool write caching can be hugely helpful there.



-----Original Message-----

From: Ehrenwald, Ian [mailto:Ian.Ehrenwald@hbgusa.com]

Sent: Sunday, November 27, 2016 4:07 AM

To: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com<mailto:Jeffrey.Steiner@netapp.com>>; toasters@teaparty.net<mailto:toasters@teaparty.net>

Subject: Re: Flash Cache vs Flash Pool



Hello Jeffrey

Thanks for the recommendations. Many of our workloads run in evening hours and others are almost 24/7, so time of day vs SSD count isn't a factor for us (I think). Here's what AWA is reporting after running for over 5 days against both of the aggregates I previously mentioned:



Basic Information

Aggregate aggr_sas_600g_c1n1

Current-time Sat Nov 26 17:21:42 EST 2016

Start-time Mon Nov 21 12:48:17 EST 2016

Total runtime (sec) 448414

Interval length (sec) 600

Total intervals 672

In-core Intervals 1024



Summary of the past 672 intervals

max

------------

Read Throughput (MB/s): 352.553

Write Throughput (MB/s): 142.209

Cacheable Read (%): 56

Cacheable Write (%): 79

Max Projected Cache Size (GiB): 1284.788



Summary Cache Hit Rate vs. Cache Size

Referenced Cache Size (GiB): 1270.275

Referenced Interval: ID 647 starting at Sat Nov 26 12:33:15 EST 2016

Size 20% 40% 60% 80% 100%

Read Hit (%) 54 63 64 64 64

Write Hit (%) 1 1 1 1 1









Basic Information

Aggregate aggr_sas_1200g_c1n1

Current-time Sat Nov 26 17:21:42 EST 2016

Start-time Mon Nov 21 12:40:57 EST 2016

Total runtime (sec) 448853

Interval length (sec) 600

Total intervals 673

In-core Intervals 1024



Summary of the past 673 intervals

max

------------

Read Throughput (MB/s): 933.115

Write Throughput (MB/s): 257.318

Cacheable Read (%): 62

Cacheable Write (%): 26

Max Projected Cache Size (GiB): 5080.340



Summary Cache Hit Rate vs. Cache Size

Referenced Cache Size (GiB): 4715.367

Referenced Interval: ID 640 starting at Sat Nov 26 10:55:47 EST 2016

Size 20% 40% 60% 80% 100%

Read Hit (%) 35 41 41 42 42

Write Hit (%) 11 11 11 11 14







The first aggregate listed (aggr_sas_600g_c1n1) is where our database volumes live, the vast majority of them being Oracle. Based on what I see from this report, that aggregate could really benefit from the write caching provided by FlashPool, since 79% of the write operations were cacheable. At the very least, our latency should go down markedly?



Almost inversely, the other aggregate (aggr_sas_1200g_c1n1) is much heavier on the read side, with 62% of reads cacheable. This aggregate contains our VMware datastores, application binaries, etc., and it might benefit more from staying with FlashCache.



Am I understanding and interpreting the data from AWA correctly?





Ian Ehrenwald

Senior Infrastructure Engineer

Hachette Book Group, Inc.

1.617.263.1948 / ian.ehrenwald@hbgusa.com<mailto:ian.ehrenwald@hbgusa.com>



________________________________________

From: Steiner, Jeffrey <Jeffrey.Steiner@netapp.com<mailto:Jeffrey.Steiner@netapp.com>>

Sent: Wednesday, November 23, 2016 6:36 AM

To: Ehrenwald, Ian; toasters@teaparty.net<mailto:toasters@teaparty.net>

Subject: RE: Flash Cache vs Flash Pool



As you wrote, FlashCache will not be used by data residing on an SSD or FlashPool aggregate.


In my experience, the difference between FlashCache and FlashPool is almost never described in terms of performance. It can happen, but it usually seems to come up only with really obscure workloads, such as a system being absolutely crushed with random IO, where the lower overall overhead of FlashCache helps a bit. It's rare, though.



Here are my main thoughts:



1) FlashPool will never go cold due to a power failure or similar. That's my #1 reason for preferring it to FlashCache.



2) FlashPool can capture random overwrites, which can be really, really helpful with certain database workloads that have a lot of such IO.



3) FlashCache can be shared among multiple aggregates according to their needs, whereas FlashPool is fixed to one. Sometimes that helps address unknown or dynamic caching needs.



4) The fact that some IO was cacheable doesn't mean anyone cares that it was cacheable. AWA does a pretty good job, but it's not definitive. For example, let's say you have a workload that could be 2X faster with 1TB of FlashPool SSD, but it runs at midnight and nobody cares how fast it runs. Why waste the SSD?



I'd probably just take it slow. Add a few SSDs to each aggregate and dole them out slowly. Reevaluate every so often. Remember: once an SSD is added to an aggregate, you can't get rid of it.
