Mailing List Archive

Deliver on HIT, otherwise redirect using "503; Location: ..."
Hello.

I switched from Nginx to Varnish for the additional functionality and finer control over request handling.
But I still can't implement what I want, which is a simple behaviour: "redirect on MISS/PASS".
I want to use Varnish Cache to deploy fast "CDN" servers in front of our MP4 video servers (used for HTML5 players), without needing to store all the files on these fast machines (SSD, up to 2x480 GB of space; the full library is about 6 TB).

Currently we have 6 servers with SATA HDDs, and they're hitting iowait hard :)

Examples:
- Request -> Varnish -> HIT: serve it from Varnish.
- Request -> Varnish -> MISS: start caching the data from the backend, and instantly reply to the client with `Location: http://backend/$req.url`.
- Request -> Varnish -> UPDATE: same as the MISS behaviour.

From my perspective, I should do this "detach & reply with a redirect" somewhere in `vcl_miss` or `vcl_backend_fetch`, because if I understood https://www.varnish-cache.org/docs/4.1/reference/states.html correctly, I need `vcl_backend_response` to keep running in the background (as a separate thread) while I do `return (synth(...))` to redirect the user.
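The redirect half seems easy enough to express; it's the detached background fetch that I can't find a return for. A minimal sketch of just the redirect part (the backend hostname is a placeholder), which as far as I can tell aborts the fetch instead of detaching it:

    sub vcl_miss {
        # Don't wait for the fetch: send the client a redirect instead.
        # Problem: this aborts the backend fetch, so nothing gets cached --
        # the "detach" part is exactly what's missing.
        return (synth(750));
    }

    sub vcl_synth {
        if (resp.status == 750) {
            # Placeholder origin host, matching the
            # `Location: http://backend/$req.url` idea above.
            set resp.http.Location = "http://backend.example.com" + req.url;
            set resp.status = 503;
            set resp.reason = "Moved Temporarily";
            return (deliver);
        }
    }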

A similar thing is "serving stale content while the object is updating"; in my case it would be "replying with a redirect while the object is updating".
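For reference, the documented Varnish 4 grace pattern for the stale case looks roughly like this (a sketch; in my case the "deliver stale" branch would become the redirect):

    sub vcl_hit {
        if (obj.ttl >= 0s) {
            # Fresh object: a plain hit.
            return (deliver);
        }
        if (obj.ttl + obj.grace > 0s) {
            # Stale but within grace: deliver the stale copy now;
            # Varnish refreshes the object with a background fetch.
            return (deliver);
        }
        # Expired beyond grace: fetch synchronously.
        return (miss);
    }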

Also, I'd really like to implement this without writing additional scripts. Why? I could build an external PHP/Ruby checker/cache-pusher with Nginx etc., but I'm scared of the performance hit :(
_______________________________________________
varnish-misc mailing list
varnish-misc@varnish-cache.org
https://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
Hi Anton,

Have you looked into the "do_stream" feature of Varnish? It begins serving the content to the visitor without waiting for the entire object to be downloaded and stored in cache. It's set in vcl_backend_response.

https://github.com/mattiasgeniar/varnish-4.0-configuration-templates/blob/master/default.vcl
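For completeness, it's a one-line setting (and streaming is already the default in 4.x, so this mostly makes the behaviour explicit):

    sub vcl_backend_response {
        # Start sending the body to waiting clients while it is still
        # being fetched, instead of buffering the whole object first.
        set beresp.do_stream = true;
    }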

Cheers,
Mark

Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
Hello. Yes, I already tried that and noticed a change, but it's not what I'm looking for.

Since that option doesn't provide "instantly reply to the client with Location: ...", I'm forced to return (deliver).

I'll repeat:
- If MISS:
  - Thread #1: start caching the data into storage
  - Main thread: reply to the client with synth(503, "Moved Temporarily")

- If UPDATING:
  - Main thread: reply to the client with synth(503, "Moved Temporarily")

- If HIT:
  - Main thread: serve the cached data

And beresp.do_stream is just non-blocking I/O, while I'm looking for "detach the backend-to-cache routine into a separate thread, while the main thread responds to the client with `Location: http://backend/$req.url`".

Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
This is how I semi-implemented it: http://pastebin.com/drDP8JxP
Now I need a script which will run `curl -I -X PUT <url-to-put-into-cache>`.
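In short, the idea is something like this (a sketch; the hostname is a placeholder): a PUT is the priming signal and is allowed to fetch and cache, while an ordinary GET miss is answered with the redirect:

    vcl 4.0;

    backend origin {
        .host = "backend.example.com";   # placeholder: HDD node that has every file
        .port = "80";
    }

    sub vcl_recv {
        # "curl -I -X PUT <url>" primes the cache: rewrite it to a
        # normal GET, but mark it as allowed to fetch.
        if (req.method == "PUT") {
            set req.method = "GET";
            set req.http.X-Prime = "1";
        }
    }

    sub vcl_miss {
        if (req.http.X-Prime) {
            return (fetch);     # priming request: actually fill the cache
        }
        return (synth(750));    # ordinary miss: redirect to the origin
    }

    sub vcl_synth {
        if (resp.status == 750) {
            set resp.http.Location = "http://backend.example.com" + req.url;
            set resp.status = 503;
            set resp.reason = "Moved Temporarily";
            return (deliver);
        }
    }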


Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
It would be possible to do this with varnish... but I have to ask... why
bother?

If the purpose is to offload the IO load, then varnish is good, but you
need to prime the cache... TBH, what I'd do first is put one or a pair of
varnish boxes really close to the overloaded box, and force all traffic to
that server through the close varnish boxes... using the do_stream feature,
you'll get stuff out there fairly quickly.

After that is working nicely, I'd layer in the further out varnish boxes
which interact with the near-varnish boxes to get their data.

This works well at scale since the local caches offer whatever's useful
local to them, and the 'near-varnish' boxes handle the 'global caching'
world.
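Wiring the tiers together is just a matter of pointing each layer's backend at the next layer in; e.g. on an outer box (hostname hypothetical):

    # Outer/edge Varnish: its backend is the near-varnish tier,
    # not the origin itself.
    backend near_varnish {
        .host = "near-varnish.example.com";
        .port = "80";
        .first_byte_timeout = 60s;   # tolerate slow first-miss fetches
    }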

This was how I arranged it at $PreviousGig: the outer CDN was getting an 85-90% cache hit ratio, and the inner tier was seeing 60% cache hit ratios. (The inner tier's ratio will depend heavily on how many outer tiers there are...)

Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
I think Jason is right in asking "why?". What do you want to achieve
specifically with this behavior?

Varnish has streaming and request coalescing, meaning a request can be
served as soon as data starts being available AND the backend doesn't
suffer from simultaneous misses on the same object. I feel that should
cover almost all your needs, so I'm curious about the use-case.

Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
We have 6 servers with 6 TB each (the same video files on all of them). Currently they're hitting the iowait limit of the SATA disks (240 ops total). At the same time, each server can provide 500 Mbit of guaranteed bandwidth.

With the HDD restriction, each server provides about 320 Mbit. There is also a problem with fragmentation, caused by the nature of HTML5 video players and HTTP (which allow requesting partial data via the Range header).

Until now, we have been scaling horizontally by duplicating these servers.

There is also the option to get the same server with 2x480 GB SSDs. As I found by researching the Nginx logs, 98% of daily traffic lies in ≈800 GB of files.

What I want to achieve: a Varnish server with 2x480 GB SSDs (no RAID) and about 800 GB of Varnish storage, which would reliably fill all the available bandwidth of the server.
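(For the storage I'm assuming something like two file stevedores, one per SSD; the paths and sizes below are just examples:)

    varnishd -a :80 -f /etc/varnish/default.vcl \
        -s file,/ssd1/varnish_cache.bin,400G \
        -s file,/ssd2/varnish_cache.bin,400G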

Also, I built a simple load balancer which monitors each server's current eth0 load (in Mbps) and decides which one to redirect to (using the HTTP Location header).

Request for video -> LB server: find the lowest-loaded node (1 of 6) and redirect to it -> LBN: serve the request

To add a new HDD-LBN, I need to set up the server, sync the videos, and set up some additional software.

My wish: add a new SSD-LBN, set up and sync the Varnish config, and have it build the cached pool by itself.

Why do I need the redirect?
1. It offloads the bandwidth of the SSD-LBN; passing the content through would consume bandwidth on both servers and still cause iowait problems on the HDD-LBN.
2. It guarantees that an uncached video will be taken from an HDD-LBN, which always has all the videos.

Currently all LBN servers are hosted at OVH and we're happy with them, especially because of the low price :)

If you have any suggestions, I'll be glad to hear them :)

Re: Deliver on HIT, otherwise redirect using "503; Location: ..."
Thanks for the context. So, if I get what you're writing, the goal is to redirect to a node that has the object in cache?

The question is, what's the time needed to:
- send a request to the server on the LAN and receive the object
- send the redirect across the web, and wait for the client to send a new
request, again across the web.

If the former is not at least an order of magnitude larger than the latter,
I wouldn't bother.

The issues I have with your redirection scheme are that:
- IIUC, you are basically explaining to people where the backend is,
instead of shielding it with Varnish
- it doesn't lower the backend traffic
- as said, I'm not even sure the user experience is better/faster

--
Guillaume Quintard
