Mailing List Archive

smbfs caching
Hello All,
I have an application that does a lot of file system I/O, and it can
be run in a parallel mode where different machines share data via a
networked file system. Getting good performance and scaling in parallel
mode requires good performance from both the filesystem clients and servers.
Since the Linux NFS server is not that great I decided to try using
samba and smbfs instead. To my surprise, NFS performed significantly better
than the SMB setup. Here are the results:
penguin1> grep 'Total check time' *.sum
LOCAL.sum: Total check time = 3:42:33 User=11771.93 Sys=1063.53 Mem=206.735
NFS.sum: Total check time = 4:44:43 User=11836.13 Sys=1441.40 Mem=206.750
SAMBA.sum: Total check time = 6:11:53 User=11974.19 Sys=2272.34 Mem=206.750
For the NFS and SMB tests the program ran on one machine, connected via
100BaseT ethernet to an identical machine which ran the server. The results
from running on a local file system are also included for comparison.
One reason these results surprise me so much is that I thought the raw
speed of samba was better than what you could get from NFS (I have not
verified that). This leaves me wondering whether Linux's smbfs does much
caching.
Would anyone like to comment on this?
Thanks,
Jim
--
----------------------------------------------------------------------------
Jim Nance Avant! Corporation
(919) 941-6655 Do you have sweet iced tea? jim_nance@avanticorp.com
No, but there's sugar on the table.
Re: smbfs caching
On Fri, Jan 22, 1999 at 04:36:46PM -0500, Jim Nance wrote:
> Would anyone like to comment on this?
I don't know why you expected smbfs to be faster than NFS... NFS is a
simplistic design, SMB is not (not even close).
We use both here... NFS is pretty good most of the time, even over
slow wan links. SMB is pretty hideous on circuits less than 256K or
so, and some operations (like expanding the Network Neighborhood tree
under Win98) are damned painful -- even over a LAN.
For copying large files (100MB+) about, both seem on a par with each
other...
-cw
Re: smbfs caching
Chris Wedgwood wrote:
>
> On Fri, Jan 22, 1999 at 04:36:46PM -0500, Jim Nance wrote:
>
> > Would anyone like to comment on this?
>
> I don't know why you expected smbfs to be faster than NFS... NFS is a
> simplistic design, SMB is not (not even close).
>
> We use both here... NFS is pretty good most of the time, even over
> slow wan links. SMB is pretty hideous on circuits less than 256K or
> so, and some operations (like expanding the Network Neighborhood tree
> under Win98) are damned painful -- even over a LAN.
>
> For copying large files (100MB+) about, both seem on a par with each
> other...
However, if you are using a Microsoft client to copy something large
over a slow WAN via SMB, you can kiss off using your machine for
anything else until the copy completes (I've observed this using Win95,
Win NT and Win98). If you are using a Samba client, you can at least
continue to use your machine for other work...
-Tom
--
Tom Eastep
Compaq Computer Corporation
Enterprise Computing Group
Tandem Division
tom.eastep@compaq.com
Re: smbfs caching
On Sat, Jan 23, 1999 at 05:44:48AM +0000, Tom Eastep wrote:
> However, if you are using a Microsoft client to copy something
> large over a slow WAN via SMB, you can kiss off using your machine
> for anything else until the copy completes (I've observed this
> using Win95, Win NT and Win98). If you are using a Samba client,
> you can at least continue to use your machine for other work...
Copying from an NT server is really fun too -- it tries to bring the
entire file into memory. For example, pulling a 400MB (database dump) file
off an NT box via SMB sucks rocks -- the machine starts swapping (it only
has 256MB of RAM) and thrashes really badly; eventually the file will copy,
but it takes a _long_ time.
There are fixes for this -- but it's silly that I have to code a custom
Win32 app to get around this. Grrr... bad M$.
-cw
Re: smbfs caching
On Fri, Jan 22, 1999 at 04:36:46PM -0500, Jim Nance wrote:
> I have an application that does a lot of file system I/O, and it can
> be run in a parallel mode where different machines share data via a
> networked file system. Getting good performance and scaling in parallel
> mode requires good performance from both the filesystem clients and servers.
> Since the Linux NFS server is not that great I decided to try using
> samba and smbfs instead. To my surprise, NFS performed significantly better
> than the SMB setup. Here are the results:
Maybe the problem you are seeing is something I've stumbled over lately. It
goes like this:
Unix utilities typically use 'stat' to get information on each and every
file they are interested in. For example, 'ls -l' would query every
file like that.
Windows utilities typically use 'findfirst/findnext' to get the same
information, for whole directories.
SMB's implementation of findfirst/findnext is nice. It sends the information
for all of the queried files at once, in one large packet (or a few packets).
That means that querying a directory with 700 files takes only a handful of
round trips -- a few seconds, even over a slow link.
stat, however, is called for each file in turn (this is the way Unix
works...), and SMB can't do much about it. For 700 files, it would send
700 tiny packets, and get a response for each of them. A quick calculation
shows how scary this is: assume a network latency of 500ms (which is my
situation here, at times); it would take *at least* 350 seconds for the
stat information on 700 files to get back to you. Ouch.
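To make the pattern concrete, here is a tiny userspace sketch of what an
'ls -l'-style tool does (just an illustration I wrote for this mail, not
code from ls or smbfs):

/* Illustration only: the classic readdir()-then-stat() pattern.
 * Over smbfs each lstat() below turns into its own request/response,
 * so a directory with N entries costs roughly N * round-trip time. */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

static void long_listing(const char *dir)
{
        DIR *d = opendir(dir);
        struct dirent *de;
        struct stat st;
        char path[1024];

        if (!d)
                return;
        while ((de = readdir(d)) != NULL) {  /* one findfirst/findnext sweep */
                snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
                if (lstat(path, &st) == 0)   /* one SMB round trip per entry */
                        printf("%10ld  %s\n", (long)st.st_size, de->d_name);
        }
        closedir(d);
}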
If your test involved stat()ing a lot of files, this just might be the reason
for the speed difference.
A solution? Nothing that I can think of. A work-around? Yes. Have the smbfs
code cache the information returned from findfirst/findnext on whole
directories. Why would that help? Most utilities that stat a lot of files
first use 'readdir' to see which files they have to query, and then, seconds
later, stat each and every one of them -- so caching that information would
be a win (smbfs already uses findfirst/findnext to "emulate" readdir).
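Roughly, the idea would look something like this -- a pure sketch with
made-up names, not anything resembling the real smbfs internals:

/* Sketch only.  When a findfirst/findnext reply for a directory comes
 * back, remember the attributes it already carried; later stat requests
 * for the same entries are answered from this cache until it expires. */
#include <string.h>
#include <time.h>

#define CACHE_SLOTS  256
#define CACHE_TTL    5          /* seconds; pick something conservative */

struct attr_entry {
        char    name[256];
        long    size;
        time_t  mtime;
        time_t  cached_at;
};

static struct attr_entry dir_cache[CACHE_SLOTS];
static int dir_cache_used;

/* Fill a slot while walking a findfirst/findnext reply. */
void cache_store(const char *name, long size, time_t mtime)
{
        struct attr_entry *e;

        if (dir_cache_used >= CACHE_SLOTS)
                return;
        e = &dir_cache[dir_cache_used++];
        strncpy(e->name, name, sizeof(e->name) - 1);
        e->name[sizeof(e->name) - 1] = '\0';
        e->size = size;
        e->mtime = mtime;
        e->cached_at = time(NULL);
}

/* Called from the stat path; returns 1 on a usable hit. */
int cache_lookup(const char *name, long *size, time_t *mtime)
{
        time_t now = time(NULL);
        int i;

        for (i = 0; i < dir_cache_used; i++) {
                struct attr_entry *e = &dir_cache[i];

                if (now - e->cached_at <= CACHE_TTL &&
                    strcmp(e->name, name) == 0) {
                        *size = e->size;
                        *mtime = e->mtime;
                        return 1;
                }
        }
        return 0;       /* miss - fall back to a real SMB query */
}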
I've not yet started real work on that, and I don't know if I will. My usage
of SMB has been rather limited. If I continue using it, I'll probably try to hack
this into my kernel sooner or later, as the slowness really annoyed me. You
can't entirely blame Linux for that - this is just the way Unix works, and
it is an incompatibility between the two operating systems - but I'm certain
people would blame Linux (compared to a Windows client, Unix here performs
badly).
And maybe this isn't your problem, after all. ;-)
Nimrod
Re: [PATCH] some Alpha cleanups
> In ide_disk.c: forcing an int pointer to a location known
> to be unaligned looks ugly to me (and to the Alpha too :) and
> causes an unaligned trap. I suggest the following patch (or use the
> macros from asm/unaligned.h instead)
Can I suggest something different for all these cases? You can tell
egcs, via an attribute, that something is packed and may not be suitably
aligned. Use that instead. Let the compiler do the brainwork.
> on alpha (it was already done in the tulip driver) until we
> find a better solution (maybe use get/put_unaligned in the
> ip_rcv?). Here is the patch for eepro100, which I currently
It's too slow doing this. Don't inflict damage on the major platforms
because the Alpha has some problems with alignment. Again, this is where
the alignment hints of egcs will help.
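Something along these lines -- struct and field names invented purely for
the example:

/* 1) Mark the structure packed so egcs knows the fields may be
 *    misaligned and emits safe access code by itself: */
struct ether_frame_example {
        unsigned char   dest[6];
        unsigned char   src[6];
        unsigned short  proto;
        unsigned int    payload_word;   /* offset 14: not 4-byte aligned */
} __attribute__((packed));

/* 2) Or, where you really must poke at a raw misaligned address,
 *    use the helpers from <asm/unaligned.h>: */
#include <asm/unaligned.h>

static inline unsigned int read_payload_word(void *p)
{
        /* safe on Alpha; compiles to an ordinary load on x86 */
        return get_unaligned((unsigned int *)p);
}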
Alan