Mailing List Archive

Issue with passing Cache-Control: no-cache header to Tomcat during cache misses
Hello,

When a user refreshes (F5) or performs a hard refresh (Ctrl+F5) in their
browser, the browser includes the *Cache-Control: no-cache* header in the
request.
However, in our *production Varnish setup*, we have implemented a check
that treats *requests with Cache-Control: no-cache as cache misses*,
meaning Varnish bypasses the cached object and goes directly to the backend
server (Tomcat) to fetch the content.

*Example:*
In the vcl_recv subroutine of default.vcl:

sub vcl_recv {
    # other code
    # Serve fresh data from the backend on F5 and Ctrl+F5 from the user
    if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
        set req.hash_always_miss = true;
    }
    # other code
}


However, we've noticed that the *Cache-Control: no-cache header is not
being passed* to Tomcat even when there is a cache miss.
We're unsure why this is happening and would appreciate your assistance in
understanding the cause.

*Expected Functionality:*
If the request contains the *Cache-Control: no-cache* header, it should be
*passed to Tomcat* at the backend.

Thanks & Regards
Uday Kumar
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
Hi Uday,

Can you provide us with a log of the transaction please? You can run this
on the Varnish server:

varnishlog -g request -q 'ReqHeader:Cache-Control'

And you should see something as soon as you send a request with that header
to Varnish. Note that we need the backend part of the transaction, so
please don't truncate the block.
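
If the output is noisy, you can narrow it down to requests whose header
actually contains no-cache (this uses the regular VSL query syntax, so
adjust the expression as needed):

varnishlog -g request -q 'ReqHeader:Cache-Control ~ "no-cache"'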

Kind regards,

--
Guillaume Quintard


On Mon, Jun 12, 2023 at 10:33 PM Uday Kumar <uday.polu@indiamart.com> wrote:

> Hello,
>
> When a user refreshes(F5) or performs a hard refresh(ctrl+F5) in their
> browser, the browser includes the *Cache-Control: no-cache* header in the
> request.
> However, in our* production Varnish setup*, we have implemented a check
> that treats* requests with Cache-Control: no-cache as cache misses*,
> meaning it bypasses the cache and goes directly to the backend server
> (Tomcat) to fetch the content.
>
> *Example:*
> in vcl_recv subroutine of default.vcl:
>
> sub vcl_recv{
> #other Code
> # Serve fresh data from backend while F5 and CTRL+F5 from user
> if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
> set req.hash_always_miss = true;
> }
> #other Code
> }
>
>
> However, we've noticed that the *Cache-Control: no-cache header is not
> being passed* to Tomcat even when there is a cache miss.
> We're unsure why this is happening and would appreciate your assistance in
> understanding the cause.
>
> *Expected Functionality:*
> If the request contains *Cache-Control: no-cache header then it should be
> passed to Tomcat* at Backend.
>
> Thanks & Regards
> Uday Kumar
> _______________________________________________
> varnish-misc mailing list
> varnish-misc@varnish-cache.org
> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
Hi Guillaume,

Thanks for the response.

> Can you provide us with a log of the transaction please?

I have sent a *request* to Varnish which contains the *Cache-Control:
no-cache* header. We have made sure that a request with the *cache-control*
header is a MISS via the check in the *vcl_recv* subroutine, so it is a
*MISS* as expected.

*The problem, as mentioned before:*
*The Cache-Control: no-cache header is not being passed to the backend even
though it is a MISS.*


*Please find below the transaction log of Varnish.*

* << Request >> 2293779
- Begin req 2293778 rxreq
- Timestamp Start: 1686730406.463326 0.000000 0.000000
- Timestamp Req: 1686730406.463326 0.000000 0.000000
- ReqStart IPAddress 61101
- ReqMethod GET
- ReqURL someURL
- ReqProtocol HTTP/1.1
- ReqHeader Host: IP:Port
- ReqHeader Connection: keep-alive
- ReqHeader Pragma: no-cache
- *ReqHeader Cache-Control: no-cache*
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
- ReqHeader Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
- ReqHeader Accept-Encoding: gzip, deflate
- ReqHeader Accept-Language: en-US,en;q=0.9
- ReqHeader X-Forwarded-For: IPAddress
- VCL_call RECV
- VCL_Log URL:someURL
- ReqURL someURL
- ReqHeader X-contentencode: gzip, deflate
- VCL_Log HTTP_X_Compression:gzip, deflate
- VCL_return hash
- ReqUnset Accept-Encoding: gzip, deflate
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- ReqHeader hash-url: someURL
- ReqUnset hash-url: someURL
- ReqHeader hash-url: someURL
- VCL_Log hash-url: someURL
- ReqUnset hash-url: someURL
- VCL_return lookup
- VCL_call MISS
- VCL_return fetch
*- Link bereq 2293780 fetch*
- Timestamp Fetch: 1686730406.515526 0.052200 0.052200
- RespProtocol HTTP/1.1
- RespStatus 200
- RespReason OK
- RespHeader add_in_varnish_logs:
ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
- RespHeader Content-Type: text/html;charset=UTF-8
- RespHeader Content-Encoding: gzip
- RespHeader Vary: Accept-Encoding
- RespHeader Date: Wed, 14 Jun 2023 08:13:25 GMT
- RespHeader Server: Intermesh Caching Servers/2.0.1
- RespHeader X-Varnish: 2293779
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish (Varnish/5.2)
- VCL_call DELIVER
- RespHeader X-Edge: MISS
- VCL_Log
addvg:ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
- RespUnset add_in_varnish_logs:
ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
- VCL_return deliver
- Timestamp Process: 1686730406.515554 0.052228 0.000029
- RespHeader Accept-Ranges: bytes
- RespHeader Transfer-Encoding: chunked
- RespHeader Connection: keep-alive
- Timestamp Resp: 1686730406.518064 0.054738 0.002510
- ReqAcct 569 0 569 331 36932 37263
- End
*** << BeReq >> 2293780*
-- Begin bereq 2293779 fetch
-- Timestamp Start: 1686730406.463456 0.000000 0.000000
-- BereqMethod GET
-- BereqURL someURL
-- BereqProtocol HTTP/1.1
-- BereqHeader Host: IP:Port
-- BereqHeader Pragma: no-cache
-- BereqHeader Upgrade-Insecure-Requests: 1
-- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
-- BereqHeader Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
-- BereqHeader Accept-Language: en-US,en;q=0.9
-- BereqHeader X-Forwarded-For: IPAddress
-- BereqHeader X-contentencode: gzip, deflate
-- BereqHeader Accept-Encoding: gzip
-- BereqHeader X-Varnish: 2293780
-- VCL_call BACKEND_FETCH
-- BereqUnset Accept-Encoding: gzip
-- BereqHeader Accept-Encoding: gzip, deflate
-- BereqUnset X-contentencode: gzip, deflate
-- VCL_return fetch
-- BackendOpen 27 reload_2023-06-07T091359.node66 127.0.0.1 8984
127.0.0.1 39154
-- BackendStart 127.0.0.1 8984
-- Timestamp Bereq: 1686730406.463621 0.000165 0.000165
-- Timestamp Beresp: 1686730406.515400 0.051944 0.051779
-- BerespProtocol HTTP/1.1
-- BerespStatus 200
-- BerespReason OK
-- BerespHeader Server: Apache-Coyote/1.1
-- BerespHeader add_in_varnish_logs:
ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-- BerespHeader Content-Type: text/html;charset=UTF-8
-- BerespHeader Transfer-Encoding: chunked
-- BerespHeader Content-Encoding: gzip
-- BerespHeader Vary: Accept-Encoding
-- BerespHeader Date: Wed, 14 Jun 2023 08:13:25 GMT
-- TTL RFC 120 10 0 1686730407 1686730407 1686730405 0 0
-- VCL_call BACKEND_RESPONSE
-- BerespUnset Server: Apache-Coyote/1.1
-- BerespHeader Server: Caching Servers/2.0.1
-- TTL VCL 120 604800 0 1686730407
-- TTL VCL 86400 604800 0 1686730407
-- VCL_return deliver
-- Storage malloc s0
-- ObjProtocol HTTP/1.1
-- ObjStatus 200
-- ObjReason OK
-- ObjHeader add_in_varnish_logs:
ResultCount:66|McatCount:10|traceId:96z3uIgBXHUiXRNoegNA
-- ObjHeader Content-Type: text/html;charset=UTF-8
-- ObjHeader Content-Encoding: gzip
-- ObjHeader Vary: Accept-Encoding
-- ObjHeader Date: Wed, 14 Jun 2023 08:13:25 GMT
-- ObjHeader Server: Caching Servers/2.0.1
-- Fetch_Body 2 chunked stream
-- Gzip u F - 36932 291926 80 211394 295386
-- BackendReuse 27 reload_2023-06-07T091359.node66
-- Timestamp BerespBody: 1686730406.518050 0.054594 0.002650
-- Length 36932
-- BereqAcct 574 0 574 276 36932 37208
-- End


Thanks & Regards
Uday Kumar


On Tue, Jun 13, 2023 at 2:13 AM Guillaume Quintard <
guillaume.quintard@gmail.com> wrote:

> Hi Uday,
>
> Can you provide us with a log of the transaction please? You can run this
> on the Varnish server:
>
> varnishlog -g request -q 'ReqHeader:Cache-Control'
>
> And you should see something as soon as you send a request with that
> header to Varnish. Note that we need the backend part of the transaction,
> so please don't truncate the block.
>
> Kind regards,
>
> --
> Guillaume Quintard
>
>
> On Mon, Jun 12, 2023 at 10:33 PM Uday Kumar <uday.polu@indiamart.com>
> wrote:
>
>> Hello,
>>
>> When a user refreshes(F5) or performs a hard refresh(ctrl+F5) in their
>> browser, the browser includes the *Cache-Control: no-cache* header in
>> the request.
>> However, in our* production Varnish setup*, we have implemented a check
>> that treats* requests with Cache-Control: no-cache as cache misses*,
>> meaning it bypasses the cache and goes directly to the backend server
>> (Tomcat) to fetch the content.
>>
>> *Example:*
>> in vcl_recv subroutine of default.vcl:
>>
>> sub vcl_recv{
>> #other Code
>> # Serve fresh data from backend while F5 and CTRL+F5 from user
>> if (req.http.Cache-Control ~ "(no-cache|max-age=0)") {
>> set req.hash_always_miss = true;
>> }
>> #other Code
>> }
>>
>>
>> However, we've noticed that the *Cache-Control: no-cache header is not
>> being passed* to Tomcat even when there is a cache miss.
>> We're unsure why this is happening and would appreciate your assistance
>> in understanding the cause.
>>
>> *Expected Functionality:*
>> If the request contains *Cache-Control: no-cache header then it should
>> be passed to Tomcat* at Backend.
>>
>> Thanks & Regards
>> Uday Kumar
>> _______________________________________________
>> varnish-misc mailing list
>> varnish-misc@varnish-cache.org
>> https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>>
>
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
On Wed, Jun 14, 2023 at 9:02 AM Uday Kumar <uday.polu@indiamart.com> wrote:
>
> Hi Guillaume,
>
> Thanks for the response.
>
> Can you provide us with a log of the transaction please?
>
> I have sent a Request to VARNISH which Contains Cache-Control: no-cache header, we have made sure the request with cache-control header is a MISS with a check in vcl_recv subroutine, so it's a MISS as expected.
>
> The problem as mentioned before:
> Cache-Control: no-cache header is not being passed to the Backend even though its a MISS.

There is this in the code:

> H("Cache-Control", H_Cache_Control, F ) // 2616 14.9

We remove this header when we create a normal fetch task, hence
the F flag. There's a reference to RFC 2616 section 14.9, but this RFC
has been updated by newer documents. Also, that section is fairly long
and I don't have time to dissect it, but I suspect the RFC reference
is only there to point to the Cache-Control definition, not the F flag.

I suspect the rationale for the F flag is that on cache misses we act
as a generic client, not just on behalf of the client that triggered
the cache miss. If you want pass-like behavior on a cache miss, you
need to implement it in VCL:

- store cache-control in a different header in vcl_recv
- restore cache-control in vcl_backend_fetch if applicable
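
Something like this (an untested sketch; "X-Orig-Cache-Control" is just a
placeholder name, pick whatever suits you):

sub vcl_recv {
    # keep a copy of the client's Cache-Control so it survives the req -> bereq copy
    if (req.http.Cache-Control) {
        set req.http.X-Orig-Cache-Control = req.http.Cache-Control;
    }
}

sub vcl_backend_fetch {
    # put the original value back on the backend request and drop the stash
    if (bereq.http.X-Orig-Cache-Control) {
        set bereq.http.Cache-Control = bereq.http.X-Orig-Cache-Control;
        unset bereq.http.X-Orig-Cache-Control;
    }
}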

Please note that you open yourself to malicious clients forcing
no-cache on your origin server upon cache misses.

Come to think of it, we should probably give Pragma both P and F flags.

Dridi
_______________________________________________
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
> There is this in the code:
>
> * > H("Cache-Control", H_Cache_Control, F ) * //
> 2616 14.9
>
> We remove the this header when we create a normal fetch task, hence
> the F flag. There's a reference to RFC2616 section 14.9, but this RFC
> has been updated by newer documents.
>

Where can I find details about the above code? I could not find it in RFC
2616 section 14.9.


Thanks & Regards,
Uday Kumar
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
On Thu, Jun 15, 2023 at 9:33 AM Uday Kumar <uday.polu@indiamart.com> wrote:
>
>
>> There is this in the code:
>>
>> > H("Cache-Control", H_Cache_Control, F ) // 2616 14.9
>>
>> We remove the this header when we create a normal fetch task, hence
>> the F flag. There's a reference to RFC2616 section 14.9, but this RFC
>> has been updated by newer documents.
>
>
> Where can I find details about the above code, could not find it in RFC 2616 14.9!

This is from include/tbl/http_headers.h in the Varnish code base.

I'm not going to break it down in detail, but that's basically where
we declare well-known headers and when to strip them when we perform a
req->bereq or beresp->resp transition.

In this case, we strip the cache-control header from the initial
bereq when it is a cache miss.

Dridi
_______________________________________________
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
Adding to what Dridi said, and just to be clear: the "cleaning" of those
well-known headers only occurs when the req object is copied into a bereq,
so there's nothing preventing you from stashing the "cache-control" header
into "x-cache-control" during vcl_recv and then copying it back to
"cache-control" during vcl_backend_fetch.
Re: Issue with passing Cache-Control: no-cache header to Tomcat during cache misses [ In reply to ]
Thanks, Dridi and Guillaume, for the clarification!

On Thu, Jun 15, 2023, 18:30 Guillaume Quintard <guillaume.quintard@gmail.com>
wrote:

> Adding to what Dridi said, and just to be clear: the "cleaning" of those
> well-known headers only occurs when the req object is copied into a bereq,
> so there's nothing preventing you from stashing the "cache-control" header
> into "x-cache-control" during vcl_recv and then copying it back to
> "cache-control" during vcl_backend_fetch.
>