Mailing List Archive

[PATCH] perf test: Make metric testing more robust.
When testing metric expressions we fake counter values from 1 going
upward. For some metrics this can yield negative values that are clipped
to zero, and then cause divide by zero failures. A workaround for this
case is to try a second time with counter values going in the opposite
direction.

This case was seen in a metric like:
event1 / max(event2 - event3, 0)
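
For illustration, assuming the ids happen to get their fake values in the
order event1, event2, event3: the ascending pass assigns event2 = 2 and
event3 = 3, so max(2 - 3, 0) clips to 0 and the division fails; the
descending retry (starting at 1024) assigns event2 = 1023 and event3 = 1022,
leaving a denominator of 1.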

Signed-off-by: Ian Rogers <irogers@google.com>
---
tools/perf/tests/pmu-events.c | 32 ++++++++++++++++++++++++++------
1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/tools/perf/tests/pmu-events.c b/tools/perf/tests/pmu-events.c
index b8aff8fb50d8..6c1cd58605c1 100644
--- a/tools/perf/tests/pmu-events.c
+++ b/tools/perf/tests/pmu-events.c
@@ -600,8 +600,18 @@ static int test_parsing(void)
}

if (expr__parse(&result, &ctx, pe->metric_expr, 0)) {
- expr_failure("Parse failed", map, pe);
- ret++;
+ /*
+ * Parsing failed, make numbers go from large to
+ * small which can resolve divide by zero
+ * issues.
+ */
+ k = 1024;
+ hashmap__for_each_entry((&ctx.ids), cur, bkt)
+ expr__add_id_val(&ctx, strdup(cur->key), k--);
+ if (expr__parse(&result, &ctx, pe->metric_expr, 0)) {
+ expr_failure("Parse failed", map, pe);
+ ret++;
+ }
}
expr__ctx_clear(&ctx);
}
@@ -656,10 +666,20 @@ static int metric_parse_fake(const char *str)
}
}

- if (expr__parse(&result, &ctx, str, 0))
- pr_err("expr__parse failed\n");
- else
- ret = 0;
+ ret = 0;
+ if (expr__parse(&result, &ctx, str, 0)) {
+ /*
+ * Parsing failed, make numbers go from large to small which can
+ * resolve divide by zero issues.
+ */
+ i = 1024;
+ hashmap__for_each_entry((&ctx.ids), cur, bkt)
+ expr__add_id_val(&ctx, strdup(cur->key), i--);
+ if (expr__parse(&result, &ctx, str, 0)) {
+ pr_err("expr__parse failed\n");
+ ret = -1;
+ }
+ }

out:
expr__ctx_clear(&ctx);
--
2.32.0.554.ge1b32706d8-goog
Re: [PATCH] perf test: Make metric testing more robust.
On 04/08/2021 08:25, Ian Rogers wrote:
> When testing metric expressions we fake counter values from 1 going
> upward. For some metrics this can yield negative values that are clipped
> to zero, and then cause divide by zero failures. A workaround for this
> case is to try a second time with counter values going in the opposite
> direction.
>
> This case was seen in a metric like:
> event1 / max(event2 - event3, 0)

is this the standard method to make the metric evaluation fail when
results are not as expected? In this example, event2 should be greater
than event3 always. Dividing by max(x, 0) would seem a bit silly.

thanks,
John

Re: [PATCH] perf test: Make metric testing more robust.
On 04/08/2021 15:55, Ian Rogers wrote:
>
>
> On Wed, Aug 4, 2021, 2:11 AM John Garry <john.garry@huawei.com> wrote:
>
> On 04/08/2021 08:25, Ian Rogers wrote:
> > When testing metric expressions we fake counter values from 1 going
> > upward. For some metrics this can yield negative values that are
> clipped
> > to zero, and then cause divide by zero failures. A workaround for
> this
> > case is to try a second time with counter values going in the
> opposite
> > direction.
> >
> > This case was seen in a metric like:
> >    event1 / max(event2 - event3, 0)
>
> is this the standard method to make the metric evaluation fail when
> results are not as expected? In this example, event2 should be greater
> than event3 always. Dividing by max(x, 0) would seem a bit silly.
>
>
> I wouldn't say it was standard but it is in a metric a third party gave
> us.

I agree that making it more robust is a good thing. But masking bogus
expressions isn't great. After all, we're here to find them :)

> It would be possible to get the same test failure on more standard
> expressions, so it would be nice if these tests were more robust.

so something like this would fail also:
event1 / (event2 + event3 - 1 - event4)

assuming ascending values are assigned from 1 starting with event1. And this
would seem a valid expression.
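
(With ascending values assigned in that order, i.e. event1 = 1, event2 = 2,
event3 = 3 and event4 = 4, the denominator works out to 2 + 3 - 1 - 4 = 0.)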

Anyway, it would be nice if we could reject max(0, x) and any divide by
negative numbers, apart from your change.
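
For illustration, a crude textual check along these lines could be a
starting point (an untested sketch, not part of the patch; a real vetting
tool would presumably walk the parsed expression tree rather than match
strings):

/*
 * Untested sketch: flag metric expressions that appear to divide by a
 * max(..., 0) term. The string matching is only a rough heuristic.
 */
#include <stdio.h>
#include <string.h>

static int divides_by_clipped_max(const char *expr)
{
	const char *p = strstr(expr, "/ max(");

	if (!p)
		p = strstr(expr, "/max(");
	if (!p)
		return 0;
	/* crude: look for a ", 0)" argument somewhere after the max( */
	return strstr(p, ", 0)") != NULL || strstr(p, ",0)") != NULL;
}

int main(void)
{
	const char *metric = "event1 / max(event2 - event3, 0)";

	if (divides_by_clipped_max(metric))
		printf("suspicious denominator: %s\n", metric);
	return 0;
}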

Cheers,
john


Re: [PATCH] perf test: Make metric testing more robust.
On Wed, Aug 4, 2021 at 10:19 AM John Garry <john.garry@huawei.com> wrote:
>
> On 04/08/2021 15:55, Ian Rogers wrote:
> >
> >
> > On Wed, Aug 4, 2021, 2:11 AM John Garry <john.garry@huawei.com> wrote:
> >
> > On 04/08/2021 08:25, Ian Rogers wrote:
> > > When testing metric expressions we fake counter values from 1 going
> > > upward. For some metrics this can yield negative values that are
> > clipped
> > > to zero, and then cause divide by zero failures. A workaround for
> > this
> > > case is to try a second time with counter values going in the
> > opposite
> > > direction.
> > >
> > > This case was seen in a metric like:
> > > event1 / max(event2 - event3, 0)
> >
> > is this the standard method to make the metric evaluation fail when
> > results are not as expected? In this example, event2 should be greater
> > than event3 always. Dividing by max(x, 0) would seem a bit silly.
> >
> >
> > I wouldn't say it was standard but it is in a metric a third party gave
> > us.
>
> I agree that making it more robust is a good thing. But masking bogus
> expressions isn't great. After all, we're here to find them :)
>
> > It would be possible to get the same test failure on more standard
> > expressions, so it would be nice if these tests were more robust.
>
> so something like this would fail also:
> event1 / (event2 + event3 - 1 - event4)
>
> assuming we have ascending values from 1 for event1. And this would seem
> a valid expression.
>
> Anyway, it would be nice if we could reject max(0, x) and any divide by
> negative numbers, apart from your change.

Thanks John, it'd be nice to have a tool to vet for bogus expressions.
I think that's out of scope for this change, but we should bear it in
mind. It would be nice to land this change if someone has time to
review it.

Thanks,
Ian
