Mailing List Archive

auto-tune ws / headroom poc
Hi,

following up on an attempt to discuss this on IRC:


The following discussion is, for the time being, about workspace_backend
only, but the question is relevant for the other workspaces as well:

Relevant portions of workspace_backend are not available for use in VCL, but are
consumed in core code. Setting workspace_backend too low will trigger an
immediate assertion failure in VBO_GetBusyObj().
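
Roughly speaking, the core code carves the mandatory pieces out of the
workspace up front and asserts that they fit. A simplified sketch - not the
actual VBO_GetBusyObj() source; carve() and the per-header estimate are made
up for illustration:

#include <assert.h>
#include <stddef.h>

static char *
carve(char *cursor, const char *end, size_t len)
{
        assert(cursor + len <= end);  /* what a too-small workspace trips */
        return (cursor + len);
}

static void
sketch_get_busyobj(char *ws, size_t workspace_backend,
    size_t http_max_hdr, size_t vsl_buffer)
{
        const char *end = ws + workspace_backend;
        size_t hdr_estimate = http_max_hdr * 64;  /* rough per-header guess */

        ws = carve(ws, end, 2 * hdr_estimate);    /* bereq + beresp headers */
        ws = carve(ws, end, vsl_buffer);          /* per-task VSL buffer */
        assert(ws < end);  /* room left for the initial read from the backend */
}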

IMHO, it doesn't make sense to handle these assertions any differently: space
for the HTTP headers, the VSL buffer and the initial read from the backend is
mandatory for Varnish to issue backend requests at all.

Instead of trying to handle nonsense workspace_backend sizes more gracefully,
I'd suggest making it impossible to tune it stupidly in the first place - and
giving admins a tunable which is easier to understand.

The attached patch contains a PoC (this is _not_ a finished patch, see below)
suggesting the following changes:

- make workspace_backend a protected (read-only) parameter

- add headroom_backend as a new admin-tunable parameter which roughly
  corresponds to "workspace available to VCL"

- calculate the size of workspace_backend based on all other relevant parameters
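
The last point would, roughly, amount to something like the sketch below (not
the patch itself; the 64-byte per-header estimate and the function name are
placeholders):

#include <stddef.h>

static size_t
sketch_workspace_backend(size_t http_max_hdr, size_t vsl_buffer,
    size_t headroom_backend)
{
        size_t overhead;

        overhead = 2 * http_max_hdr * 64;  /* bereq + beresp, rough estimate */
        overhead += vsl_buffer;            /* per-task VSL buffer */
        /* ... plus any other mandatory core consumers ... */

        return (overhead + headroom_backend);
}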

Notes on the patch:

- I've kept the workspace parameter as protected so the mempool code can stay
as is (pointing to a single parameter for sizing)

- in VBO_Init() I've taken a lightweight approach to the consistency issue
  arising from parameter changes; I'd be curious to hear whether others agree
  or think that we need to make this safer.

- what is the best way to cross the border between the cache and mgt code?

Originally I would have preferred to have the sizing code in cache_busyobj.c,
but then we'd need to make mgt_param visible there - which I didn't like.

But what I have in the patch now doesn't look clean at all; I am really
unsure at this point.
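
For illustration only, one hypothetical variant of that preference (name and
signature made up): a sizing helper in cache_busyobj.c that takes the raw
values as arguments instead of reading mgt_param, sidestepping the visibility
issue:

#include <stddef.h>

/* hypothetical helper in cache_busyobj.c - name and signature made up */
size_t
VBO_WorkspaceSize(size_t http_max_hdr, size_t vsl_buffer, size_t headroom)
{
        size_t overhead = 2 * http_max_hdr * 64 + vsl_buffer;

        return (overhead + headroom);
}

/*
 * mgt side, sketch only: feed the current parameter values in and store
 * the result into the protected workspace_backend parameter, e.g.
 *
 *     workspace_backend = VBO_WorkspaceSize(http_max_hdr, vsl_buffer,
 *         headroom_backend);
 */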

Thx, Nils

--

** * * UPLEX - Nils Goroll Systemoptimierung

Scheffelstraße 32
22301 Hamburg

tel +49 40 28805731
mob +49 170 2723133
fax +49 40 42949753

xmpp://slink@jabber.ccc.de/

http://uplex.de/
Re: auto-tune ws / headroom poc
Hi,

this just surfaced again. More than before, I think we should not let the
admin size the workspaces themselves, but only the space available to VCL.

Nils


_______________________________________________
varnish-dev mailing list
varnish-dev@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-dev