NFS export using sync is slow when exported partition is a DRBD device
Hi,

I have used DRBD and Heartbeat to successfully build a 2-node high-availability NFS server. It all works fine: fail-over takes approx. 5-10 seconds, and read/write runs at 7 MB/s over 100 Mbps Ethernet.

There is one problem. When exporting the file system over NFS with the sync and no_wdelay options set (typically in /etc/exports), the response from the server is REALLY slow. Using an interface monitor I can see that there is little communication between the server and the client, but after 5-8 seconds there is a burst using 80% of the link's total bandwidth when the file is actually transferred. There is no problem when using a non-DRBD device with the same export table! Why this annoying delay? The setup is not usable like this!
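For reference, an export entry of the kind described might look like the fragment below; the path and client network are made up for illustration.

```text
# /etc/exports -- hypothetical example; adjust path and client address
# sync: reply to writes only after they hit stable storage
# no_wdelay: do not delay writes hoping to batch adjacent ones
/export/drbd0   192.168.1.0/24(rw,sync,no_wdelay)
```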

First I thought there was trouble with the "virtual cluster interface" provided by Heartbeat, since the client talks to the server via this "virtual" IP but the server's replies come from its "real" IP. I tried a non-DRBD NFS server with a second virtual interface tied to it, and there were no problems, so it must be something wrong with DRBD... or?

Has anyone else seen the same problem?



Regards,
Jonas Johansson, Ericsson UAB - Core Core Network Development
Re: NFS export using sync is slow when exported partition is a DRBD device
Two questions ...

Which DRBD protocol is used?
Which DRBD release?

-Philipp

* Jonas Johansson (UAB) <Jonas.Johansson@example.com> [011220 14:05]:
> [original message quoted in full; trimmed]
> _______________________________________________
> DRBD-devel mailing list
> DRBD-devel@example.com
> https://lists.sourceforge.net/lists/listinfo/drbd-devel
--
: Dipl-Ing Philipp Reisner Tel +43-1-8974897-750 :
: LINBIT Information Technologies GmbH http://www.linbit.com :
: Sechshauserstr 48, 1150 Wien :
Re: NFS export using sync is slow when exported partition is a DRBD device
Please try protocol B instead of C.

( Alternatively, change line 632 of drbd_receiver.c from

#define NUMBER 24

to

#define NUMBER 1

)
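For anyone unsure where the protocol is chosen: DRBD's protocols differ in when a write is reported complete to the upper layer (A: as soon as it hit the local disk; B: once the peer has received the data; C: once the peer has also written it to disk), which is why B can hide the remote-disk latency that a sync NFS export otherwise exposes on every write. In later DRBD releases the setting lives in /etc/drbd.conf roughly as sketched below; the resource name is made up and the 0.6-era config syntax may differ.

```text
# /etc/drbd.conf -- illustrative fragment, not a complete configuration
resource r0 {
    protocol B;   # was C; B acks once the peer has received the data
}
```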

-Philipp

* Philipp Reisner <philipp.reisner@example.com> [011220 14:49]:
> Two questions ...
>
> Which DRBD protocol is used ?
> Which DRBD release ?
>
> -Philipp
>
> * Jonas Johansson (UAB) <Jonas.Johansson@example.com> [011220 14:05]:
> > [original message trimmed; quoted signatures and list footers removed]
--
: Dipl-Ing Philipp Reisner Tel +43-1-8974897-750 :
: LINBIT Information Technologies GmbH http://www.linbit.com :
: Sechshauserstr 48, 1150 Wien :