Mailing List Archive

experience with standard PD layer using CVO?
>>>>> "Jeff" == Jeff Mohler via Toasters <toasters@teaparty.net> writes:

Jeff> From: Jeff Mohler <jmohler@yahooinc.com>
Jeff> To: Toasters <toasters@teaparty.net>
Jeff> Date: Fri, 5 Aug 2022 10:15:13 -0700
Jeff> Subject: CVO on Standard/SATA PD

Jeff> Does anyone here have experience on a standard PD layer using CVO?

Jeff, I know you're deep in the weeds, but for those of us who
aren't... can you spell out those acronyms please?

PD = Purple Dinosaur?
CVO = Cheery Victory Orange?

*grin*


Jeff> If so, I am looking for -throughput- maximums, limitations,
Jeff> lessons learned, on per-volume write workloads.

Jeff> We are in a mode where we believe we were told there are
Jeff> 100-200MB/sec write "edges" of throughput to CVO on standard PD
Jeff> storage...just we're not quite in shape here to test that yet
Jeff> ourselves.

Yeah, you need to translate this section as well please. Or at least
give a little more context so people have a hope of understanding. Or
better yet, learning from your experiences!

Cheers,
John

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters
Re: [EXTERNAL] experience with standard PD layer using CVO?
CVO == Cloud Volumes ONTAP, the software version of ONTAP that you can
run in the various CSPs (Cloud Service Providers), i.e. GCP, AWS, Azure.

https://cloud.netapp.com/ontap-cloud

PD == Persistent Disk, what GCP (Google Cloud Platform) calls its, uh,
persistent disks.

https://cloud.google.com/compute/docs/disks

And to answer Jeff's question, we've not done any CVO testing in GCP, so
I can't answer the throughput questions.

We have found that, in general, the VM instance type that hosts the CVO
instances matters, since the instance type also determines networking
connectivity.

Disk type (in this case GCP PDs) matters, too; I'd test with Local SSD
to see where that gets you.
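
For a quick baseline outside of CVO itself, something along these lines
spins up a throwaway VM with a Local SSD attached (the name, zone,
machine type, and image here are just placeholders, not a CVO deployment
recipe):

    gcloud compute instances create localssd-test \
        --zone=us-central1-a \
        --machine-type=n2-standard-16 \
        --image-family=debian-11 --image-project=debian-cloud \
        --local-ssd=interface=NVME

Format and mount the Local SSD on that VM and you get a rough upper
bound to compare the PD numbers against.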

Lastly, throughput is dependent on workload, which determines the IO
pattern. Maybe create an fio config that generates traffic close to what
you expect, and use that to drive various CVO configurations.

https://github.com/axboe/fio
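
As a starting point, a minimal sequential-write job file might look like
this (mount point, sizes, and runtime are placeholders; tune block size,
job count, and queue depth toward your real workload):

    [global]
    ioengine=libaio
    direct=1
    time_based
    runtime=120
    group_reporting

    [seq-write]
    rw=write
    bs=1M
    size=10g
    numjobs=4
    iodepth=32
    ; NFS or iSCSI mount of the CVO volume under test
    directory=/mnt/cvo_vol

Run it with "fio seq-write.fio", note the aggregate bandwidth, and
repeat across PD types, instance sizes, and volume counts to see where
those 100-200MB/sec "edges" really are.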

-Skottie




On Fri, Aug 5, 2022 at 1:26 PM John Stoffel <john@stoffel.org> wrote:

> >>>>> "Jeff" == Jeff Mohler via Toasters <toasters@teaparty.net> writes:
>
> Jeff> From: Jeff Mohler <jmohler@yahooinc.com>
> Jeff> To: Toasters <toasters@teaparty.net>
> Jeff> Date: Fri, 5 Aug 2022 10:15:13 -0700
> Jeff> Subject: CVO on Standard/SATA PD
>
> Jeff> Does anyone here have experience on a standard PD layer using CVO?
>
> Jeff, I know you're deep in the weeds, but for those of us who
> aren't... can you spell out those acronyms please?
>
> PD = Purple Dinosaur?
> CVO = Cheery Victory Orange?
>
> *grin*
>
>
> Jeff> If so, I am looking for -throughput- maximums, limitations,
> Jeff> lessons learned, on per-volume write workloads.
>
> Jeff> We are in a mode where we believe we were told there are
> Jeff> 100-200MB/sec write "edges" of throughput to CVO on standard PD
> Jeff> storage...just we're not quite in shape here to test that yet
> Jeff> ourselves.
>
> Yeah, you need to translate this section as well please. Or at least
> give a little more context so people have a hope of understanding. Or
> better yet, learning from your experiences!
>
> Cheers,
> John
>
> _______________________________________________
> Toasters mailing list
> Toasters@teaparty.net
> https://www.teaparty.net/mailman/listinfo/toasters
>