Mailing List Archive

Re: Question about flash pool maximum SSD size and local tiering
On 2023-10-17 14:54, Florian Schmid via Toasters wrote:
> Hi Johan,
> thank you very much for your help.
>
> No, we don't have the disks yet for which the flash-pool should be used.
> Not all SSDs will be used for flash-pool, only some for cache and the rest
> for fast SSD storage.

So you're thinking of having several different "physical tiers" with different
characteristics (performance, inherent latency) for different workloads --
in the same HA pair? Several different aggregates with differing performance
and behaviour in the same node, a FAS8300? (It's a fairly powerful machine, so
it can handle this adequately in many smaller workload cases.)
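For reference, a Flash Pool cache is added to an existing HDD aggregate by marking it hybrid and then adding spare SSDs. A minimal sketch in the ONTAP CLI; the aggregate name and SSD count below are hypothetical, and exact options vary by ONTAP version:

```
# Assumption: aggr_sas1 is an existing HDD (SAS/NL-SAS) aggregate
# with at least 6 spare SSDs available on the node.

# Mark the aggregate as eligible for a Flash Pool (hybrid) cache
storage aggregate modify -aggregate aggr_sas1 -hybrid-enabled true

# Add the SSDs; they form the cache RAID group, not usable capacity
storage aggregate add-disks -aggregate aggr_sas1 -disktype SSD -diskcount 6
```

Note that SSDs added this way become cache only -- they don't add to the aggregate's usable capacity, which is part of why mixing "some SSDs for cache, the rest for an SSD tier" needs careful planning up front.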

Or do you mean on different FAS8300 nodes in an X-node cluster (what's X)?

This idea is much harder to make successful than you probably think. It
requires you to know a great deal about your workloads and applications --
what they actually do -- so that you can place the right data in the right
place, and you have to keep doing this over time as data volumes grow.
Assuming they do... It's very hard indeed to automate, so you need people who
can babysit this continuously and move data around. Yes, that's mostly
non-disruptive, but it's still quite a lot of work.

For it to be successful in the longer run, it also pretty much assumes that
your applications do not change their workload patterns and/or pressure more
than very slowly.
Is that the case?

All in all, FabricPool is much, much more automatic. It just does the job
itself, pretty much without fuss, once you've tuned it a bit with respect to
cool-down period(s) and the like. It "just works". You do need an S3 target
system, but as has already been pointed out it can be ONTAP with NL-SAS
drives; if you already have a bunch of these lying about, you can repurpose
them and instead use new Cx00 (or Ax00) nodes in the "front end".
The challenge with FabricPool is the network: the connection between the
front end and the S3 back end needs to be very good and solid. You have to
understand it fully and know every detail of how it's built, so you know you
can trust its capacity and latency; traffic can be quite bursty.
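The cool-down tuning mentioned above can be illustrated with a toy model. This is plain Python, not ONTAP's actual per-block heat map (which is internal to WAFL); the block names and day counts are invented, and the real knob is the volume's `-tiering-minimum-cooling-days` setting:

```python
from datetime import datetime, timedelta

def blocks_to_tier(last_read, now, cooling_days):
    """Return names of blocks whose last read is older than the cooling
    window and are therefore candidates for the S3 cold tier.
    Toy model only -- FabricPool tracks heat per block inside WAFL."""
    cutoff = now - timedelta(days=cooling_days)
    return sorted(name for name, ts in last_read.items() if ts < cutoff)

now = datetime(2023, 10, 17)
last_read = {
    "blk_a": now - timedelta(days=40),  # untouched for 40 days
    "blk_b": now - timedelta(days=3),   # read recently, stays hot
    "blk_c": now - timedelta(days=15),  # just past a 14-day window
}

# With a 14-day cooling period, both older blocks tier out.
print(blocks_to_tier(last_read, now, cooling_days=14))  # → ['blk_a', 'blk_c']
# Lengthening the window keeps more data on the hot (SSD) tier.
print(blocks_to_tier(last_read, now, cooling_days=31))  # → ['blk_a']
```

The trade-off this sketches is exactly the tuning work: a short window offloads more to S3 (and risks re-reads over the network), a long window keeps the SSD tier fuller.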

I'm not very positive about your idea here, I'm afraid:

"Not all SSDs will be used for flash-pool, only some for cache and the rest
for fast SSD storage."

It's just my (long) experience that this is not very productive in
practice, and it costs a lot in operations (manual work, skilled personnel).
It also tends to cause various problems when you need to do hardware
lifecycle management (upgrading your controllers and disk back ends). It
inevitably leads to stranded capacity in more than one dimension as time
passes.

/M


--
Sr Human ;-) Alt: r.m.bergman@gmail-DEL_THIS-.com
--
"Qui vicit non est victor nisi victus fatetur." - Ennius
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters