Mailing List Archive

Need your perspective on Garbage Collection
Hi,
The issue is that my garbage collection runs quite often. I configured my
JVM as recommended (I have gone through several articles and blogs on
Lucene) and provided enough RAM (not so large as to trigger excessive GC).
My main concern is that a GC run takes more than 10 minutes (sometimes
even 15). This makes the whole server hang and search stops responding.
To work around it I am currently restarting my server (a very bad
approach). Can you please help me manage this, and share your insight on
which steps or configuration I should prefer to optimize it?
My index size is 700 GB.

What configuration do you suggest for it, e.g. JVM, RAM, CPU cores, heap
size, young and old generation?
I hope to hear from you soon.

-
Re: Need your perspective on Garbage Collection
Hi Satnam,

Can you please share some details about which application on top of
Lucene you are using? For Solr and Elasticsearch there are
recommendations and default startup scripts. If it is your own Lucene
application, we would also need more details.

Basically, Lucene itself needs very little heap to execute queries and
index documents. With an index of 700 gigabytes you should still be able
to use a small heap (a few gigabytes). Problems are mostly located
outside of Lucene, e.g., code trying to fetch all results of a large
query using TopDocs paging (the "deep paging" problem). So please share
more details so we can give you some answers. Maybe also the source code
where it hangs.
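To illustrate the "deep paging" trap: offset-style paging has to collect
offset + pageSize top hits on every request, so heap use grows with page
depth, while cursor-style paging (what Lucene exposes as
IndexSearcher.searchAfter(ScoreDoc, Query, int)) keeps only one page
alive. A minimal stand-alone sketch of the two patterns, assuming a
hypothetical Hit record as a stand-in for Lucene's ScoreDoc (plain Java,
no Lucene on the classpath):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: offset paging vs. cursor ("search after") paging.
// Hit is a stand-in for Lucene's ScoreDoc; the real API is
// IndexSearcher.searchAfter(ScoreDoc after, Query q, int n),
// which also tie-breaks on doc id, omitted here for brevity.
public class SearchAfterSketch {
    record Hit(int doc, float score) {}

    // Offset paging: must sort and keep offset + pageSize hits, then
    // discard most of them -- heap use grows with page depth.
    static List<Hit> offsetPage(List<Hit> index, int offset, int pageSize) {
        List<Hit> top = new ArrayList<>(index);
        top.sort(Comparator.comparingDouble((Hit h) -> -h.score()));
        return top.subList(Math.min(offset, top.size()),
                           Math.min(offset + pageSize, top.size()));
    }

    // Cursor paging: only hits strictly after the cursor compete, so
    // heap use is bounded by pageSize regardless of depth.
    static List<Hit> searchAfter(List<Hit> index, Hit after, int pageSize) {
        return index.stream()
                .filter(h -> after == null || h.score() < after.score())
                .sorted(Comparator.comparingDouble((Hit h) -> -h.score()))
                .limit(pageSize)
                .toList();
    }

    public static void main(String[] args) {
        List<Hit> index = new ArrayList<>();
        for (int i = 0; i < 100; i++) index.add(new Hit(i, i / 100f));

        List<Hit> page1 = searchAfter(index, null, 10);
        // The cursor is simply the last hit of the previous page.
        List<Hit> page2 = searchAfter(index, page1.get(page1.size() - 1), 10);
        System.out.println(page1.get(0).score());  // highest score overall
        System.out.println(page2.get(0).score());  // highest remaining score
    }
}
```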

Uwe

On 03.01.2023 at 13:49, _ SATNAM wrote:
> Hi,
> The issue is that my garbage collection runs quite often. I configured my
> JVM as recommended (I have gone through several articles and blogs on
> Lucene) and provided enough RAM (not so large as to trigger excessive GC).
> My main concern is that a GC run takes more than 10 minutes (sometimes
> even 15). This makes the whole server hang and search stops responding.
> To work around it I am currently restarting my server (a very bad
> approach). Can you please help me manage this, and share your insight on
> which steps or configuration I should prefer to optimize it?
> My index size is 700 GB.
>
> What configuration do you suggest for it, e.g. JVM, RAM, CPU cores, heap
> size, young and old generation?
> I hope to hear from you soon.
>
> -
>
--
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: uwe@thetaphi.de


---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org
Re: Need your perspective on Garbage Collection
> On 03.01.2023 at 13:49, _ SATNAM wrote:
> > Hi,
> > The issue is that my garbage collection runs quite often. I configured my
> > JVM as recommended (I have gone through several articles and blogs on
> > Lucene) and provided enough RAM (not so large as to trigger excessive GC).
> > My main concern is that a GC run takes more than 10 minutes (sometimes
> > even 15).

While we haven't suffered such long pauses, I have found profiling tools
invaluable for finding the root cause of problems like this.

If you haven't already, consider tracking object allocation and
lifecycle using a profiler like JFR (free on modern JDK):
https://docs.oracle.com/javase/10/troubleshoot/troubleshoot-performance-issues-using-jfr.htm#JSTGD299
or YourKit (paid).
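As a concrete starting point, a flight recording plus GC logging can
usually be enabled without code changes. A sketch of the relevant flags
(JDK 11+ unified-logging syntax; the jar name and file paths are
placeholders for your application):

```shell
# Start the JVM with a 10-minute flight recording and unified GC logging.
java \
  -XX:StartFlightRecording=duration=10m,filename=/tmp/rec.jfr \
  -Xlog:gc*:file=/tmp/gc.log:time,uptime \
  -jar your-search-app.jar   # placeholder jar name

# Afterwards, summarize the GC events captured in the recording:
jfr print --events jdk.GarbageCollection /tmp/rec.jfr
```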

Given that you mention that you must restart the service to clear the
problem, that sounds like a memory leak to me. Tuning GC and JVM
parameters is extremely unlikely to fix a leak; it will only prolong
the eventual crash.
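If a leak is suspected, two class histograms taken a few minutes apart
are often enough to spot it before reaching for a full profiler: classes
whose instance counts only ever grow are the suspects. A sketch using the
stock jcmd tool (replace <pid> with your server's process id):

```shell
# Top heap consumers by class, live JVM, no restart needed.
jcmd <pid> GC.class_histogram | head -n 20

# Or capture a full heap dump for offline analysis (e.g. Eclipse MAT):
jcmd <pid> GC.heap_dump /tmp/heap.hprof
```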

On Tue, Jan 3, 2023 at 9:14 AM Uwe Schindler <uwe@thetaphi.de> wrote:
>
> Hi Satnam,
>
> Can you please share some details about which application on top of
> Lucene you are using? For Solr and Elasticsearch there are
> recommendations and default startup scripts. If it is your own Lucene
> application, we would also need more details.
>
> Basically, Lucene itself needs very little heap to execute queries and
> index documents. With an index of 700 gigabytes you should still be able
> to use a small heap (a few gigabytes). Problems are mostly located
> outside of Lucene, e.g., code trying to fetch all results of a large
> query using TopDocs paging (the "deep paging" problem). So please share
> more details so we can give you some answers. Maybe also the source code
> where it hangs.
>
> Uwe
>
> > This makes the whole server hang and search stops responding.
> > To work around it I am currently restarting my server (a very bad
> > approach). Can you please help me manage this, and share your insight on
> > which steps or configuration I should prefer to optimize it?
> > My index size is 700 GB.
> >
> > What configuration do you suggest for it, e.g. JVM, RAM, CPU cores, heap
> > size, young and old generation?
> > I hope to hear from you soon.
> >
> > -
> >
> --
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: uwe@thetaphi.de
>
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org