Mailing List Archive

Wiki/Documentation for luceneutil
Hi,

Please help me find the wiki/documentation for luceneutil that explains how to
interpret the stats it presents. Is there a way to obtain a summary table
of all the runs, like the Lucene benchmark stats table for index/search?

Thanks,
Praveen
Re: Wiki/Documentation for luceneutil
Hi Praveen,

Unfortunately we do not have good docs for luceneutil. Maybe start a page
on Confluence?

Here's a quick explanation.

After you set up your performance script (e.g. localrun.py or perf.py) and
run it (something like python3 localrun.py -source wikimedium10m), it will
first index all documents (which takes a while, depending on how concurrent
your hardware is), then run 20 separate JVMs, where each JVM runs multiple
concurrent "search tasks" (simple TermQuery, various kinds of BooleanQuery,
primary-key lookup, etc.) and records latency.

After each JVM finishes, it prints a table of the running results so far, one
row per task. Each row shows the mean and std-dev QPS for the baseline, the
same for the competitor, the percent change (baseline mean versus competitor
mean), and a confidence measure (p-value).
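To make those last two columns concrete: the percent change is the relative
difference of the two means, and the p-value comes from comparing the per-JVM
QPS samples of the two runs. The snippet below is not luceneutil's actual
code, just a hedged illustration of the arithmetic with made-up sample numbers
(luceneutil's exact statistical test may differ):

    # Illustrative only: the arithmetic behind the "pct change" and p-value
    # columns; luceneutil's real implementation/test may differ.
    from statistics import mean, stdev
    from scipy import stats  # assumes SciPy is available

    # Hypothetical per-JVM QPS samples for one task (one number per JVM round).
    baseline_qps = [102.1, 99.8, 101.5, 100.7, 98.9]
    competitor_qps = [108.3, 110.2, 107.5, 109.1, 106.8]

    pct_change = 100.0 * (mean(competitor_qps) - mean(baseline_qps)) / mean(baseline_qps)

    # Two-sample t-test: a small p-value means the QPS difference is unlikely
    # to be explained by run-to-run noise alone.
    t_stat, p_value = stats.ttest_ind(baseline_qps, competitor_qps)

    print(f'baseline {mean(baseline_qps):.1f} +/- {stdev(baseline_qps):.1f} QPS, '
          f'competitor {mean(competitor_qps):.1f} +/- {stdev(competitor_qps):.1f} QPS, '
          f'{pct_change:+.1f}% (p={p_value:.3f})')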

Nightly benchmarks run these same tools and collect results over time, making
charts showing the changes, here:
https://home.apache.org/~mikemccand/lucenebench
It's been "running" for more than a decade now!

Mike McCandless

http://blog.mikemccandless.com

