Mailing List Archive

Varnish hit + pass + miss reaches less than 50% of all reqs
Hello everybody,

I was trying to get some stats from my Varnish server using varnishstat.
Looking at its output, I see that:
- MAIN.client_req should account for 100% of my requests
- MAIN.cache_hit represents 10% of MAIN.client_req
- MAIN.cache_hitpass represents 7% of MAIN.client_req
- MAIN.cache_miss represents 24% of MAIN.client_req
- MAIN.cache_hit_grace is very low

So all these categories summed together represent less than 50% of
client_req; I think I'm missing something. The configuration is not
maintained by me, but here is the list of return statements it contains, in
case that helps:
-------------------------------------
cat /etc/varnish/production.vcl | grep return
return(synth(900, "Moved Permanently"));
return(synth(901, "Moved Permanently"));
return(synth(902, "Moved Permanently"));
return(synth(903, "Moved Permanently"));
return(pipe);
return(pipe);
return(pass);
return(pass);
return(pass);
return(synth(410, "Gone"));
return(pass);
return(synth(850, "Moved Permanently"));
return(hash);
return(hash);
return(pass);
return(hash);
return(lookup);
return(retry);
return(deliver);
-------------------------------------
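As a side note, a quick way to pull just the counters above, assuming the -f
field-inclusion option of varnishstat is available in this version, would be
something along these lines:

# only a sketch: print the counters mentioned above (field names may need adjusting)
varnishstat -1 -f MAIN.client_req -f MAIN.cache_hit -f MAIN.cache_hit_grace \
    -f MAIN.cache_hitpass -f MAIN.cache_miss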

Thanks very much for your help,
Regards,
Alex
Re: Varnish hit + pass + miss reaches less than 50% of all reqs
Hi Guillaume,

Thanks for helping!

Here it is:
MAIN.uptime 1821762 1.00 Child process uptime
MAIN.sess_conn 152864621 83.91 Sessions accepted
MAIN.sess_drop 0 0.00 Sessions dropped
MAIN.sess_fail 0 0.00 Session accept failures
MAIN.client_req_400 0 0.00 Client requests received,
subject to 400 errors
MAIN.client_req_417 0 0.00 Client requests received,
subject to 417 errors
MAIN.client_req 290152364 159.27 Good client requests received
MAIN.cache_hit 7433491 4.08 Cache hits
MAIN.cache_hit_grace 36319 0.02 Cache grace hits
MAIN.cache_hitpass 16003020 8.78 Cache hits for pass
MAIN.cache_miss 89526521 49.14 Cache misses
MAIN.backend_conn 5078542 2.79 Backend conn. success
MAIN.backend_unhealthy 2 0.00 Backend conn. not attempted
MAIN.backend_busy 0 0.00 Backend conn. too many
MAIN.backend_fail 6245 0.00 Backend conn. failures
MAIN.backend_reuse 266259369 146.15 Backend conn. reuses
MAIN.backend_recycle 267274817 146.71 Backend conn. recycles
MAIN.backend_retry 17429 0.01 Backend conn. retry
MAIN.fetch_head 1623 0.00 Fetch no body (HEAD)
MAIN.fetch_length 77119616 42.33 Fetch with Length
MAIN.fetch_chunked 188334324 103.38 Fetch chunked
MAIN.fetch_eof 295751 0.16 Fetch EOF
MAIN.fetch_bad 0 0.00 Fetch bad T-E
MAIN.fetch_none 18415 0.01 Fetch no body
MAIN.fetch_1xx 0 0.00 Fetch no body (1xx)
MAIN.fetch_204 5427973 2.98 Fetch no body (204)
MAIN.fetch_304 130058 0.07 Fetch no body (304)
MAIN.fetch_failed 8591 0.00 Fetch failed (all causes)
MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread)
MAIN.pools 2 . Number of thread pools
MAIN.threads 400 . Total number of threads
MAIN.threads_limited 15 0.00 Threads hit max
MAIN.threads_created 35825 0.02 Threads created
MAIN.threads_destroyed 35425 0.02 Threads destroyed
MAIN.threads_failed 0 0.00 Thread creation failed
MAIN.thread_queue_len 0 . Length of session queue
MAIN.busy_sleep 160177 0.09 Number of requests sent to
sleep on busy objhdr
MAIN.busy_wakeup 160177 0.09 Number of requests woken
after sleep on busy objhdr
MAIN.busy_killed 0 0.00 Number of requests killed
after sleep on busy objhdr
MAIN.sess_queued 35718 0.02 Sessions queued for thread
MAIN.sess_dropped 0 0.00 Sessions dropped for thread
MAIN.n_object 370310 . object structs made
MAIN.n_vampireobject 0 . unresurrected objects
MAIN.n_objectcore 370485 . objectcore structs made
MAIN.n_objecthead 376770 . objecthead structs made
MAIN.n_waitinglist 388 . waitinglist structs made
MAIN.n_backend 363 . Number of backends
MAIN.n_expired 68570078 . Number of expired objects
MAIN.n_lru_nuked 20474008 . Number of LRU nuked objects
MAIN.n_lru_moved 6156256 . Number of LRU moved objects
MAIN.n_lru_limited 3 0.00 Reached nuke_limit
MAIN.losthdr 0 0.00 HTTP header overflows
MAIN.s_sess 152864621 83.91 Total sessions seen
MAIN.s_req 290152364 159.27 Total requests seen
MAIN.s_pipe 216 0.00 Total pipe sessions seen
MAIN.s_pass 181773529 99.78 Total pass-ed requests seen
MAIN.s_fetch 271300050 148.92 Total backend fetches
initiated
MAIN.s_synth 11418599 6.27 Total synthethic responses
made
MAIN.s_req_hdrbytes 490884503465 269455.89 Request header bytes
MAIN.s_req_bodybytes 6627058200 3637.72 Request body bytes
MAIN.s_resp_hdrbytes 253365794945 139077.33 Response header bytes
MAIN.s_resp_bodybytes 2259371209416 1240212.06 Response body bytes
MAIN.s_pipe_hdrbytes 317932 0.17 Pipe request header bytes
MAIN.s_pipe_in 0 0.00 Piped bytes from client
MAIN.s_pipe_out 650171 0.36 Piped bytes to client
MAIN.sess_closed 8450 0.00 Session Closed
MAIN.sess_closed_err 51028839 28.01 Session Closed with error
MAIN.sess_readahead 0 0.00 Session Read Ahead
MAIN.sess_herd 208419766 114.41 Session herd
MAIN.sc_rem_close 101825384 55.89 Session OK REM_CLOSE
MAIN.sc_req_close 103 0.00 Session OK REQ_CLOSE
MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10
MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD
MAIN.sc_rx_body 0 0.00 Session Err RX_BODY
MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK
MAIN.sc_rx_overflow 0 0.00 Session Err RX_OVERFLOW
MAIN.sc_rx_timeout 51028883 28.01 Session Err RX_TIMEOUT
MAIN.sc_tx_pipe 216 0.00 Session OK TX_PIPE
MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR
MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF
MAIN.sc_resp_close 6107 0.00 Session OK RESP_CLOSE
MAIN.sc_overload 0 0.00 Session Err OVERLOAD
MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW
MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT
MAIN.shm_records 48096929466 26401.32 SHM records
MAIN.shm_writes 1753278706 962.41 SHM writes
MAIN.shm_flushes 111020519 60.94 SHM flushes due to overflow
MAIN.shm_cont 6287333 3.45 SHM MTX contention
MAIN.shm_cycles 24363 0.01 SHM cycles through buffer
MAIN.backend_req 271344507 148.95 Backend requests made
MAIN.n_vcl 10 . Number of loaded VCLs in
total
MAIN.n_vcl_avail 10 . Number of VCLs available
MAIN.n_vcl_discard 0 . Number of discarded VCLs
MAIN.bans 1 . Count of bans
MAIN.bans_completed 0 . Number of bans marked
'completed'
MAIN.bans_obj 0 . Number of bans using obj.*
MAIN.bans_req 1 . Number of bans using req.*
MAIN.bans_added 128 0.00 Bans added
MAIN.bans_deleted 127 0.00 Bans deleted
MAIN.bans_tested 630175 0.35 Bans tested against
objects (lookup)
MAIN.bans_obj_killed 112189 0.06 Objects killed by bans
(lookup)
MAIN.bans_lurker_tested 0 0.00 Bans tested against
objects (lurker)
MAIN.bans_tests_tested 1382078 0.76 Ban tests tested against
objects (lookup)
MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested
against objects (lurker)
MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by
bans (lurker)
MAIN.bans_dups 14 0.00 Bans superseded by
other bans
MAIN.bans_lurker_contention 0 0.00 Lurker gave way for
lookup
MAIN.bans_persisted_bytes 34768 . Bytes used by the
persisted ban lists
MAIN.bans_persisted_fragmentation 34468 . Extra bytes in
persisted ban lists due to fragmentation
MAIN.n_purges 0 . Number of purge
operations executed
MAIN.n_obj_purged 0 . Number of
purged objects
MAIN.exp_mailed 110185218 60.48 Number of
objects mailed to expiry thread
MAIN.exp_received 110185218 60.48 Number of
objects received by expiry thread
MAIN.hcb_nolock 112963041 62.01 HCB Lookups
without lock
MAIN.hcb_lock 86256017 47.35 HCB Lookups
with lock
MAIN.hcb_insert 86255995 47.35 HCB Inserts
MAIN.esi_errors 0 0.00 ESI parse
errors (unlock)
MAIN.esi_warnings 0 0.00 ESI parse
warnings (unlock)
MAIN.vmods 2 . Loaded VMODs
MAIN.n_gzip 129789432 71.24 Gzip operations
MAIN.n_gunzip 210646709 115.63 Gunzip
operations
MAIN.vsm_free 965520 . Free VSM space
MAIN.vsm_used 83969088 . Used VSM space
MAIN.vsm_cooling 0 . Cooling VSM
space
MAIN.vsm_overflow 0 . Overflow VSM
space
MAIN.vsm_overflowed 0 0.00 Overflowed VSM
space
MGT.uptime 1821761 1.00 Management
process uptime
MGT.child_start 1 0.00 Child process
started
MGT.child_exit 0 0.00 Child process
normal exit
MGT.child_stop 0 0.00 Child process
unexpected exit
MGT.child_died 0 0.00 Child process
died (signal)
MGT.child_dump 0 0.00 Child process
core dumped
MGT.child_panic 0 0.00 Child process
panic
MEMPOOL.busyobj.live 34 . In use
MEMPOOL.busyobj.pool 32 . In Pool
MEMPOOL.busyobj.sz_wanted 65536 . Size requested
MEMPOOL.busyobj.sz_actual 65504 . Size allocated
MEMPOOL.busyobj.allocs 271336588 148.94 Allocations
MEMPOOL.busyobj.frees 271336554 148.94 Frees
MEMPOOL.busyobj.recycle 270933647 148.72 Recycled from
pool
MEMPOOL.busyobj.timeout 988916 0.54 Timed out from
pool
MEMPOOL.busyobj.toosmall 0 0.00 Too small to
recycle
MEMPOOL.busyobj.surplus 88997 0.05 Too many for
pool
MEMPOOL.busyobj.randry 402941 0.22 Pool ran dry
MEMPOOL.req0.live 23 . In use
MEMPOOL.req0.pool 24 . In Pool
MEMPOOL.req0.sz_wanted 131072 . Size requested
MEMPOOL.req0.sz_actual 131040 . Size allocated
MEMPOOL.req0.allocs 155090050 85.13 Allocations
MEMPOOL.req0.frees 155090027 85.13 Frees
MEMPOOL.req0.recycle 154983843 85.07 Recycled from
pool
MEMPOOL.req0.timeout 1025602 0.56 Timed out from
pool
MEMPOOL.req0.toosmall 0 0.00 Too small to
recycle
MEMPOOL.req0.surplus 2995 0.00 Too many for
pool
MEMPOOL.req0.randry 106207 0.06 Pool ran dry
MEMPOOL.sess0.live 333 . In use
MEMPOOL.sess0.pool 20 . In Pool
MEMPOOL.sess0.sz_wanted 512 . Size requested
MEMPOOL.sess0.sz_actual 480 . Size allocated
MEMPOOL.sess0.allocs 76433149 41.96 Allocations
MEMPOOL.sess0.frees 76432816 41.96 Frees
MEMPOOL.sess0.recycle 76232233 41.85 Recycled from
pool
MEMPOOL.sess0.timeout 1809683 0.99 Timed out from
pool
MEMPOOL.sess0.toosmall 0 0.00 Too small to
recycle
MEMPOOL.sess0.surplus 18419 0.01 Too many for
pool
MEMPOOL.sess0.randry 200916 0.11 Pool ran dry
MEMPOOL.req1.live 23 . In use
MEMPOOL.req1.pool 25 . In Pool
MEMPOOL.req1.sz_wanted 131072 . Size requested
MEMPOOL.req1.sz_actual 131040 . Size allocated
MEMPOOL.req1.allocs 155148876 85.16 Allocations
MEMPOOL.req1.frees 155148853 85.16 Frees
MEMPOOL.req1.recycle 155041566 85.11 Recycled from
pool
MEMPOOL.req1.timeout 1025704 0.56 Timed out from
pool
MEMPOOL.req1.toosmall 0 0.00 Too small to
recycle
MEMPOOL.req1.surplus 2749 0.00 Too many for
pool
MEMPOOL.req1.randry 107310 0.06 Pool ran dry
MEMPOOL.sess1.live 314 . In use
MEMPOOL.sess1.pool 36 . In Pool
MEMPOOL.sess1.sz_wanted 512 . Size requested
MEMPOOL.sess1.sz_actual 480 . Size allocated
MEMPOOL.sess1.allocs 76431491 41.95 Allocations
MEMPOOL.sess1.frees 76431177 41.95 Frees
MEMPOOL.sess1.recycle 76229789 41.84 Recycled from
pool
MEMPOOL.sess1.timeout 1811749 0.99 Timed out from
pool
MEMPOOL.sess1.toosmall 0 0.00 Too small to
recycle
MEMPOOL.sess1.surplus 18312 0.01 Too many for
pool
MEMPOOL.sess1.randry 201702 0.11 Pool ran dry
SMA.default.c_req 104863620 57.56 Allocator
requests
SMA.default.c_fail 20841440 11.44 Allocator
failures
SMA.default.c_bytes 1035542691744 568429.19 Bytes allocated
SMA.default.c_freed 1018362838340 558998.84 Bytes freed
SMA.default.g_alloc 1393168 . Allocations
outstanding
SMA.default.g_bytes 17179853404 . Bytes
outstanding
SMA.default.g_space 15780 . Bytes available
SMA.Transient.c_req 570410809 313.11 Allocator
requests
SMA.Transient.c_fail 0 0.00 Allocator
failures
SMA.Transient.c_bytes 4076783572265 2237824.46 Bytes allocated
SMA.Transient.c_freed 4076757189033 2237809.98 Bytes freed
SMA.Transient.g_alloc 19133 . Allocations
outstanding
SMA.Transient.g_bytes 26383232 . Bytes
outstanding
SMA.Transient.g_space 0 . Bytes available
...
LCK.backend.creat 365 0.00 Created locks
LCK.backend.destroy 0 0.00 Destroyed locks
LCK.backend.locks 555822161 305.10 Lock Operations
LCK.backend_tcp.creat 37 0.00 Created locks
LCK.backend_tcp.destroy 0 0.00 Destroyed locks
LCK.backend_tcp.locks 1076235922 590.77 Lock Operations
LCK.ban.creat 1 0.00 Created locks
LCK.ban.destroy 0 0.00 Destroyed locks
LCK.ban.locks 373355072 204.94 Lock Operations
LCK.busyobj.creat 271334966 148.94 Created locks
LCK.busyobj.destroy 271336345 148.94 Destroyed locks
LCK.busyobj.locks 2741594616 1504.91 Lock Operations
LCK.cli.creat 1 0.00 Created locks
LCK.cli.destroy 0 0.00 Destroyed locks
LCK.cli.locks 607411 0.33 Lock Operations
LCK.exp.creat 1 0.00 Created locks
LCK.exp.destroy 0 0.00 Destroyed locks
LCK.exp.locks 563288168 309.20 Lock Operations
LCK.hcb.creat 1 0.00 Created locks
LCK.hcb.destroy 0 0.00 Destroyed locks
LCK.hcb.locks 172158514 94.50 Lock Operations
LCK.lru.creat 2 0.00 Created locks
LCK.lru.destroy 0 0.00 Destroyed locks
LCK.lru.locks 296448067 162.73 Lock Operations
LCK.mempool.creat 5 0.00 Created locks
LCK.mempool.destroy 0 0.00 Destroyed locks
LCK.mempool.locks 1487652815 816.60 Lock Operations
LCK.objhdr.creat 86265523 47.35 Created locks
LCK.objhdr.destroy 85889681 47.15 Destroyed locks
LCK.objhdr.locks 3588940017 1970.04 Lock Operations
LCK.pipestat.creat 1 0.00 Created locks
LCK.pipestat.destroy 0 0.00 Destroyed locks
LCK.pipestat.locks 216 0.00 Lock Operations
LCK.sess.creat 152861317 83.91 Created locks
LCK.sess.destroy 152862891 83.91 Destroyed locks
LCK.sess.locks 37421700 20.54 Lock Operations
LCK.smp.creat 0 0.00 Created locks
LCK.smp.destroy 0 0.00 Destroyed locks
LCK.smp.locks 0 0.00 Lock Operations
LCK.vbe.creat 1 0.00 Created locks
LCK.vbe.destroy 0 0.00 Destroyed locks
LCK.vbe.locks 607832 0.33 Lock Operations
LCK.vcapace.creat 1 0.00 Created locks
LCK.vcapace.destroy 0 0.00 Destroyed locks
LCK.vcapace.locks 0 0.00 Lock Operations
LCK.vcl.creat 1 0.00 Created locks
LCK.vcl.destroy 0 0.00 Destroyed locks
LCK.vcl.locks 543937089 298.58 Lock Operations
LCK.vxid.creat 1 0.00 Created locks
LCK.vxid.destroy 0 0.00 Destroyed locks
LCK.vxid.locks 45733 0.03 Lock Operations
LCK.waiter.creat 2 0.00 Created locks
LCK.waiter.destroy 0 0.00 Destroyed locks
LCK.waiter.locks 1420757858 779.88 Lock Operations
LCK.wq.creat 3 0.00 Created locks
LCK.wq.destroy 0 0.00 Destroyed locks
LCK.wq.locks 1773506747 973.51 Lock Operations
LCK.wstat.creat 1 0.00 Created locks
LCK.wstat.destroy 0 0.00 Destroyed locks
LCK.wstat.locks 759559125 416.94 Lock Operations
LCK.sma.creat 2 0.00 Created locks
LCK.sma.destroy 0 0.00 Destroyed locks
LCK.sma.locks 1328287853 729.12 Lock Operations

I removed the vcl_root sections; I hope you won't need them.

Thanks again,

On Wed, Apr 17, 2019 at 5:28 PM Guillaume Quintard <guillaume@varnish-software.com> wrote:

> Can you share the "varnishstat -1" output?
>
> I'm pretty sure the answer is in the passes and synth responses you omitted
Re: Varnish hit + pass + miss reaches less than 50% of all reqs
Hi there,

So:

MAIN.client_req 290152364 (aaaaaaaaaaall the requests)

vs

MAIN.cache_hit 7433491
MAIN.cache_hit_grace 36319 (exclude these as they are already accounted for in MAIN.cache_hit)
MAIN.cache_hitpass 16003020 (exclude these as they are already accounted for in MAIN.s_pass)
MAIN.cache_miss 89526521
MAIN.s_synth 11418599
MAIN.s_pipe 216
MAIN.s_pass 181773529

The difference is now 8 requests, which is fairly reasonable (some requests
may be in flight, and threads don't necessarily push their stats after
every request).
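
For anyone who wants to double-check the arithmetic on their own box, here is
a rough sketch that just sums the name/value columns of "varnishstat -1"
(counter names as above; hit_grace and hitpass deliberately left out):

varnishstat -1 | awk '
    # add up the per-outcome counters...
    $1 == "MAIN.cache_hit" || $1 == "MAIN.cache_miss" || $1 == "MAIN.s_synth" ||
    $1 == "MAIN.s_pipe" || $1 == "MAIN.s_pass" { sum += $2 }
    # ...and remember the total number of client requests
    $1 == "MAIN.client_req" { total = $2 }
    END { printf "accounted for: %d of %d (%.2f%%)\n", sum, total, 100 * sum / total }'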

does this clarify things a bit?

--
Guillaume Quintard


Re: Varnish hit + pass + miss reaches less than 50% of all reqs

Well, you can also return(synth) from almost anywhere, including after
a lookup where we bump one of the outcomes. This can create a bit of
double-accounting from the point of view of "summing the rest".
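
If you want to see how often that happens in practice, one way (only a
sketch, using varnishlog's query syntax, so adjust the tags to your VCL) is
to group by request and look for transactions that go through a miss but
still end up returning synth:

# hypothetical check: transactions counted as a miss that also produced a synthetic response
varnishlog -g request -q 'VCL_call eq "MISS" and VCL_return eq "synth"'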

Dridi
_______________________________________________
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
Re: Varnish hit + pass + miss reaches less than 50% of all reqs
Ok, thanks very much for your help, my Prometheus graph will be much more
useful with this information :)

Best regards,
Alex

>