Mailing List Archive

luceneutil
Hi Lucene Community,
When using luceneutil to run a benchmark, its output shows several results comparing baseline and my_modified_version. It seems to run the benchmark many times (iterations).

So my questions: 1) Is there any relationship between the results of different iterations? 2) Is the last iteration's result the final benchmark result?

Thanks~
Re: luceneutil
My understanding is that, 1) there isn't any specific relationship
between the iterations, and 2) the final output is a summary over all
iterations. The idea is that randomness might affect results on any
particular iteration, but by running multiple times (20 I think?) and
then aggregating the statistics over the repeated trials, hopefully
the noise gets smoothed out and only the real impact of the change
being tested shows.
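
To make the idea concrete, here's a toy sketch in plain Python (not
luceneutil's actual reporting code, and the QPS numbers are made up) of
how averaging over repeated iterations smooths out per-iteration noise:

import statistics

def summarize(baseline_qps, candidate_qps):
    # QPS measurements per iteration for one task, baseline vs. candidate
    base_mean = statistics.mean(baseline_qps)
    cand_mean = statistics.mean(candidate_qps)
    pct_diff = 100.0 * (cand_mean - base_mean) / base_mean
    return base_mean, cand_mean, pct_diff

# e.g. repeated noisy iterations per side (hypothetical numbers):
baseline = [101.2, 98.7, 103.4, 99.9, 100.5] * 4
candidate = [109.8, 112.1, 108.3, 111.0, 110.4] * 4
print("QPS %.1f -> %.1f (%+.1f%%)" % summarize(baseline, candidate))

Any single iteration can be skewed by GC, OS scheduling, etc., but the
mean over many trials is much more stable.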

Cheers,
-Greg

Re: luceneutil
This is basically my understanding as well, with the addition that, iirc,
by default the final output is a summary over the (default 3?) "best"
iterations of the baseline and candidate, respectively. The idea is to
allow each version to "put its best foot forward".
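If that's right, the selection would look something like this (again
just an illustration, not luceneutil's code; whether N really defaults
to 3 is an assumption worth checking in your local settings):

def best_n(qps_per_iteration, n=3):
    # keep only the n fastest iterations for one side
    return sorted(qps_per_iteration, reverse=True)[:n]

# hypothetical per-iteration QPS for one task:
print(best_n([101.2, 98.7, 103.4, 99.9, 100.5]))   # baseline
print(best_n([109.8, 112.1, 108.3, 111.0, 110.4])) # candidate
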
Michael
