Mailing List Archive

Database benchmarks
Now that I have the test suite working and installation is quick,
I set up the software on a freshly-installed machine on my home
network, ran the suite, reinstalled using InnoDB tables instead of
MyISAM, ran again, installed MySQL 4.0.12, and ran again.

The semi-bad news: there didn't seem to be any difference in
performance with any of these changes. The variance in timing
among setups wasn't much more than the variance from one run to
the next. The actual numbers are below. Probably the most
important numbers are the "sec per fetch" and "sec per search"
at the end--those are the timings of regular page fetches and
searches done by background threads that run during the
conformance tests and best simulate actual use.

The semi-good news is that MySQL 4.0.12 installed easily, worked
out of the box with no problems, seems as reliable as its now
"production" status would indicate, and showed no performance
problems. So there appears to be no downside to upgrading if we
decide to take advantage of its features.

MyISAM:

Test "Links" Succeeded (120.817 secs)
Test "HTML" Succeeded (321.443 secs)
Test "Editing" Succeeded (229.574 secs)
Test "Parsing" Succeeded (23.135 secs)
Test "Special" Succeeded (124.010 secs)
Test "Search" Succeeded (33.702 secs)
Test "Math" Succeeded (49.452 secs)
Stopped background threads.
Fetched 213 pages in 784.356 sec (3.682 sec per fetch).
Performed 201 searches in 397.350 sec (1.865 sec per search).
Total elapsed time: 0 hr, 16 min, 41.367 sec.

InnoDB:

Test "Links" Succeeded (113.099 secs)
Test "HTML" Succeeded (247.384 secs)
Test "Editing" Succeeded (175.459 secs)
Test "Parsing" Succeeded (16.881 secs)
Test "Special" Succeeded (159.286 secs)
Test "Search" Succeeded (45.763 secs)
Test "Math" Succeeded (60.805 secs)
Stopped background threads.
Fetched 194 pages in 721.915 sec (3.721 sec per fetch).
Performed 192 searches in 343.591 sec (1.771 sec per search).
Total elapsed time: 0 hr, 15 min, 20.568 sec.

MySQL 4.0.12:

Test "Links" Succeeded (114.171 secs)
Test "HTML" Succeeded (258.449 secs)
Test "Editing" Succeeded (212.278 secs)
Test "Parsing" Succeeded (21.764 secs)
Test "Special" Succeeded (131.613 secs)
Test "Search" Succeeded (31.383 secs)
Test "Math" Succeeded (52.241 secs)
Stopped background threads.
Fetched 201 pages in 748.631 sec (3.725 sec per fetch).
Performed 200 searches in 350.369 sec (1.743 sec per search).
Total elapsed time: 0 hr, 15 min, 51.312 sec.
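
For a quick side-by-side look, the key per-operation numbers above can be
summarized with a short script (the figures are copied from the three runs
above; "spread" here is just max minus min, a crude stand-in for variance):

```python
# Per-operation timings (seconds) copied from the three runs above.
results = {
    "MyISAM":       {"per_fetch": 3.682, "per_search": 1.865},
    "InnoDB":       {"per_fetch": 3.721, "per_search": 1.771},
    "MySQL 4.0.12": {"per_fetch": 3.725, "per_search": 1.743},
}

def spread(metric):
    """Max minus min across setups: a crude measure of variation."""
    values = [r[metric] for r in results.values()]
    return max(values) - min(values)

for metric in ("per_fetch", "per_search"):
    print(f"{metric}: spread {spread(metric):.3f} sec across setups")
```

The spreads (about 0.04 sec per fetch, 0.12 sec per search) are indeed small
relative to the run-to-run noise mentioned above.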


--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
RE: Database benchmarks [ In reply to ]
Lee Daniel Crocker wrote:

> Now that I have the test suite working and installation is quick,
> I set up the software on a freshly-installed machine on my home
> network, ran the suite, reinstalled using InnoDB tables instead of
> MyISAM, ran again, installed MySQL 4.0.12, and ran again.
>
> The semi-bad news: there didn't seem to be any difference in
> performance with any of these changes. The variance in timing
> among setups wasn't much more than the variance from one run to
> the next. The actual numbers are below. Probably the most
> important numbers are the "sec per fetch" and "sec per search"
> at the end--those are the timings of regular page fetches and
> searches done by background threads that run during the
> conformance tests and best simulate actual use.

The differences between MySQL versions and table types may not be the
determining factor in performance here. Inconclusive test results could
indicate a performance bottleneck on your test system: disk throughput,
available RAM, or some other resource could be limiting all test
configurations.

For example:
-If maximum disk throughput on your test system is 18 Mbytes/sec, all
configurations may produce similar results at that level.
-Increase the disk throughput to 33 Mbytes/sec. At this level, configuration
#1 may outperform configuration #2 because it is capable of taking advantage
of the increased disk throughput. Configuration #2 may reach maximum
performance at 28 Mbytes/sec, with little to no improvement at 33 Mbytes/sec,
while configuration #1 could keep improving past 33 Mbytes/sec, tapering off
at, say, 39 Mbytes/sec.

On the other hand, your message indicates that default MySQL
configurations were used. The default configuration options may not be
taking advantage of the resources available on your test system. The next
step could be adjusting these settings to make better use of the
available resources.
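
To make that concrete: a minimal sketch of the kind of my.cnf sections
involved. The option names are from the MySQL 4.0 manual; the values are
placeholders for illustration, not recommendations for any particular box.

```ini
[mysqld]
# MyISAM: only index blocks are cached here; data blocks rely on the OS cache.
key_buffer_size = 64M
table_cache     = 256
# The query cache is new in the 4.0 series.
query_cache_size = 16M

# InnoDB keeps both data and index pages in its own buffer pool.
innodb_buffer_pool_size = 128M
innodb_log_file_size    = 32M
```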

The fact that Wikipedia can be installed on various configurations with
similar results is good, because it provides a solid baseline for
performance measurement.

BTW, this is my first post to the list and I wanted to note and thank all of
you for the excellent work this project has produced. We are testing the
Wikipedia engine for use as a team knowledgebase. I know there are other
engines that may be more suitable for this, but it was hard to pass up the
combination of features included in Wikipedia.

Thank you.

-- Jason Dreyer
Re: Database benchmarks [ In reply to ]
> (Dreyer, Jason <Jason.Dreyer@deg.state.wi.us>):
>
> The differences between MySQL versions and table types may not be
> the determining factor in performance here.

Absolutely; lies, damned lies, and benchmarks, and all that. Disk
I/O may well be a major culprit. Memory/CPU usage probably isn't.
I'll run some more tests to check some of those things. I'll also run
some tests for things like having the database on a separate machine,
even PostgreSQL if I have the time (I'd appreciate it if the fellow
who said he had it working would send me a patch).

But I did want to get this initial set of numbers out there for
discussion, and even these first limited results do give me some
warm fuzzies about MySQL 4.0.12, which was something I wanted to
look hard at because I wanted some of its features.

I'd also appreciate suggestions for other benchmarks (specific
MySQL settings, for example).

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
Re: Database benchmarks [ In reply to ]
On Thu, 2003-04-17 at 11:42, Lee Daniel Crocker wrote:
> ...even these first limited results do give me some
> warm fuzzies about MySQL 4.0.12, which was something I wanted to
> look hard at because I wanted some of its features.

If you get a chance, you might try tweaking the search to use the new
boolean search mode, and see what performance impact that has compared
to what we're doing now. (IIRC you have to alter the fulltext index in
some way. MySQL docs should describe it.)
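
For reference, a sketch of what that looks like in SQL. The table and column
names here are hypothetical, not the actual wiki schema; boolean mode is
available from MySQL 4.0.1 on, and FULLTEXT indexes require MyISAM tables in
this MySQL era.

```sql
-- Hypothetical table; a FULLTEXT index is still needed for fast matching.
CREATE FULLTEXT INDEX ft_text ON cur_text (text_body);

-- Default (natural-language) mode, roughly what the current search does:
SELECT page_title FROM cur_text
 WHERE MATCH (text_body) AGAINST ('database benchmarks');

-- Boolean mode: operators like + (must contain) and - (must not contain):
SELECT page_title FROM cur_text
 WHERE MATCH (text_body) AGAINST ('+database -postgres' IN BOOLEAN MODE);
```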

I'm going to be busy and/or out of town for a few days, so I won't have
a chance soon... :)

-- brion vibber (brion @ pobox.com)
Re: Database benchmarks [ In reply to ]
Lee Daniel Crocker wrote:
> Absolutely; lies, damned lies, and benchmarks, and all that. Disk
> I/O may well be a major culprit. Memory/CPU usage probably isn't.
> I'll run some more tests to check some of those things. I'll also run
> some tests for things like having the database on a separate machine,
> even PostgreSQL if I have the time (I'd appreciate it if the fellow
> who said he had it working would send me a patch).

If I am the fellow you mean: I thought I mentioned that I only got the data
into a PostgreSQL DB, nothing more. Doing that is easy, but it will not
result in a database you would run a 'speed' test against. And I don't have
the PHP code running; I only use it to test queries.

To get a pg database that will give useful results, you have to modify the
db schemas in an appropriate way (and later the queries in the php pages).
Without this, pg will lose in any test (especially speed-measuring ones).

But my server (a very fast Pentium 1 at 90 MHz) is pretty happy with the
German wikipedia pages ;)

Nevertheless, I can help with questions concerning a proper implementation,
and I will continue to port the schemas. The next thing I will work on is a
proper fulltext search, which is handled by a pg add-on. But my server is
a bit slow at moving about 27MB of data (German wiki) ;)

Smurf
--
Prayer: In the beginning there was IBM ... and IBM created SQL.
------------------------- Anthill inside! ---------------------------
Re: Database benchmarks [ In reply to ]
> (Thomas Corell <T.Corell@t-online.de>):
>
> I thought I mentioned that I got the data into a Postgresql DB,
> nothing more. To do this is easy, but that will not result in a
> database you will run a 'speed' test with. And I don't have the
> PHP Code running. I only use it to test queries.

Ah, I see. Then testing PG is further away. Once you do get a
reasonably efficient schema, fulltext search, etc., let me know
and I'll see how easy it is to abstract those functions in the
wiki software sufficiently to allow for a database comparison.
For now I'll keep testing MySQL tweaks.
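
A rough illustration of the kind of abstraction meant here: per-backend
classes that each produce their own fulltext-search SQL, so the rest of the
wiki code never branches on the SQL dialect. Class, table, and column names
are made up for the sketch, not taken from the real wiki code, and the raw
string interpolation is for illustration only.

```python
class MySQLBackend:
    """Builds MySQL-flavored queries (MATCH ... AGAINST fulltext search)."""
    def search_sql(self, term):
        return ("SELECT title FROM cur "
                f"WHERE MATCH (text) AGAINST ('{term}')")

class PostgresBackend:
    """Builds PostgreSQL-flavored queries (plain ILIKE as a stand-in
    until a proper fulltext add-on is wired up)."""
    def search_sql(self, term):
        return f"SELECT title FROM cur WHERE text ILIKE '%{term}%'"

def make_backend(name):
    """Pick a backend by name; calling code stays database-agnostic."""
    return {"mysql": MySQLBackend, "postgres": PostgresBackend}[name]()
```

The test suite could then be pointed at either backend by changing a single
configuration value, which is what would make a fair database comparison
possible.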

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
Re: Database benchmarks [ In reply to ]
Lee Daniel Crocker wrote:
> Ah, I see. Then testing PG is further away. Once you do get a
> reasonably efficient schema, fulltext search, etc., let me know
> and I'll see how easy it is to abstract those functions in the
> wiki software sufficiently to allow for a database comparison.
> For now I'll keep testing MySQL tweaks.

Just a question for optimizing the right way. Are the mainly used queries
known? This would help e.g. to set up helpful views. I think of all queries
concerning the displaying of a page, e.g. Or is your test suite a proper
place to look for such queries?

Smurf
--
------------------------- Anthill inside! ---------------------------
Re: Database benchmarks [ In reply to ]
> (Thomas Corell <T.Corell@t-online.de>):
>
> Just a question for optimizing the right way. Are the queries known
> used mainly? This will help e.g. to setup helpful views. I think all
> queries concerning the displaying of a page, e.g. Or is your test
> suite a proper place to look for such queries?

I'm sorry, I don't understand that question at all. There are no
views at all used in the DB. All queries are composed by the software
referring directly to the database tables, and are about as optimal
as we could make them under the limits of MySQL, but it's quite
possible that we've missed a number of optimizations.

The test suite interacts with the wiki over the web, just as a user
would, so it has no knowledge of any code internals.

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
RE: Database benchmarks [ In reply to ]
Lee Daniel Crocker wrote:

> Absolutely; lies, damned lies, and benchmarks, and all that.

Improving benchmarks which apply to the Wikipedia db will hopefully improve
the situation out "in the wild". So yeah, the rest is just lies.

> Disk I/O may well be a major culprit. Memory/CPU usage probably
> isn't. I'll also run some tests for things like having the database
> on a separate machine
> ...
> I'd also appreciate suggestions for other benchmarks (specific
> MySQL settings, for example).

Even if your system has plenty of memory, MySQL may not be configured to use
it. What do your settings in my.cnf look like? These settings will also
differ for MyISAM and InnoDB tables.

Improving disk throughput usually translates -> new hardware. You can try a
different file system or block size. XFS for Linux is improving. You may
want to compare it to ReiserFS. If you are going to test different block
sizes for the db, partition accordingly with the db on a separate partition
from the OS, Apache, PHP and MySQL binaries. This way, you can leave the
binary partitions at a smaller block size and adjust the db partition
without affecting the others. When installing your db on a second machine do
the same; isolate your binaries from your data.

Monitoring with mytop could be interesting:
http://jeremy.zawodny.com/mysql/mytop/
Re: Database benchmarks [ In reply to ]
> Improving disk throughput usually translates -> new hardware.
> You can try a different file system or block size. XFS for Linux
> is improving. You may want to compare it to ReiserFS. If you are
> going to test different block sizes for the db, partition
> accordingly with the db on a separate partition from the OS, Apache,
> PHP and MySQL binaries. This way, you can leave the binary
> partitions at a smaller block size and adjust the db partition
> without affecting the others. When installing your db on a second
> machine do the same; isolate your binaries from your data.

I'm sure I'll end up doing some of that. Right now, I'm using an
old Compaq with a small (8Gb) disk for the test installation, mainly
because it's trashable. But the software is relatively stable and
safe now, so I'll install it on my main development box with the nice
10,000 RPM SCSI and a gig of ram, and run the test suite from the
Compaq instead.

I'm a big fan of ReiserFS in general. That's what the MySQL folks
recommend as well, and I run it at Piclab (which is a small machine
but runs the test suite faster than my Compaq). I'm not sure that block
sizes are that flexible for Reiser, but I'll look into it. At any rate,
it would be good to find an optimal arrangement for the database
before we get the new server to install it on.

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
Re: Database benchmarks [ In reply to ]
On Thu, Apr 17, 2003 at 05:12:37PM -0500, Lee Daniel Crocker wrote:
> > Improving disk throughput usually translates -> new hardware.
> > You can try a different file system or block size. XFS for Linux
> > is improving. You may want to compare it to ReiserFS. If you are
> > going to test different block sizes for the db, partition
> > accordingly with the db on a separate partition from the OS, Apache,
> > PHP and MySQL binaries. This way, you can leave the binary
> > partitions at a smaller block size and adjust the db partition
> > without affecting the others. When installing your db on a second
> > machine do the same; isolate your binaries from your data.
>
> I'm sure I'll end up doing some of that. Right now, I'm using an
> old Compaq with a small (8Gb) disk for the test installation, mainly
> because it's trashable. But the software is relatively stable and
> safe now, so I'll install it on my main development box with the nice
> 10,000 RPM SCSI and a gig of ram, and run the test suite from the
> Compaq instead.
>
> I'm a big fan of ReiserFS in general. That's what the MySQL folks
> recommend as well, and I run that at Piclab (which is a small machine
> but runs the testsuite faster than my Compaq). I'm not sure that block
> sizes are that flexible for Reiser, but I'll look into it. At any rate,
> it would be good to find an optimal arrangement for the database
> before we get the new server to install it on.

AFAIK, ReiserFS block sizes are stuck at 4KB unless someone changed that
while I wasn't looking.

--
Nick Reinking -- eschewing obfuscation since 1981 -- Minneapolis, MN
Re: Database benchmarks [ In reply to ]
On Thu, Apr 17, 2003 at 04:56:28PM -0500, Dreyer, Jason wrote:
> Lee Daniel Crocker wrote:
>
> > Absolutely; lies, damned lies, and benchmarks, and all that.
>
> Improving benchmarks which apply to Wikipedia db will hopefully improve the
> situation out "in the wild". So yeah, the rest is just lies.
>
> > Disk I/O may well be a major culprit. Memory/CPU usage probably
> > isn't. I'll also run some tests for things like having the database
> > on a separate machine
> > ...
> > I'd also appreciate suggestions for other benchmarks (specific
> > MySQL settings, for example).
>
> Even if your system has plenty of memory, MySQL may not be configured to use
> it. What do your settings in my.cnf look like? These settings will also
> differ for MyISAM and InnoDB tables.
>
> Improving disk throughput usually translates -> new hardware. You can try a
> different file system or block size. XFS for Linux is improving. You may
> want to compare it to ReiserFS. If you are going to test different block
> sizes for the db, partition accordingly with the db on a separate partition
> from the OS, Apache, PHP and MySQL binaries. This way, you can leave the
> binary partitions at a smaller block size and adjust the db partition
> without affecting the others. When installing your db on a second machine do
> the same; isolate your binaries from your data.

Also, I still think we badly need to upgrade the kernel - AFAIK, it is
still running 2.4.6 (Brion hasn't said otherwise). A newer kernel could
really help us out here (w/ the ganked and reganked VM subsystem).

--
Nick Reinking -- eschewing obfuscation since 1981 -- Minneapolis, MN
Re: Database benchmarks [ In reply to ]
Lee Daniel Crocker wrote:
>>(Thomas Corell <T.Corell@t-online.de>):
>>
>>Just a question for optimizing the right way. Are the queries known
>>used mainly? This will help e.g. to setup helpful views. I think all
>>queries concerning the displaying of a page, e.g. Or is your test
>>suite a proper place to look for such queries?
> I'm sorry, I don't understand that question at all. There are no
> views at all used in the DB. All queries are composed by the software
> referring directly to the database tables, and are about as optimal
> as we could make them under the limits of MySQL, but it's quite
> possible that we've missed a number of optimizations.

Well, of course there are actually no views - MySQL doesn't support them.
But PostgreSQL does. And if an often-used operation (e.g. displaying a
wiki page) requires a select across a particular set of tables and rows,
it can improve performance to have a view, backed by proper indices,
optimized exactly for that operation.
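
A sketch of the idea, with made-up table names rather than the actual wiki
schema. Note that in PostgreSQL the indices themselves live on the
underlying tables; a plain view mainly standardizes and simplifies the
query the page-display code has to issue.

```sql
-- Combine the pieces needed to render a page into one view, so the
-- page-display code issues a single simple SELECT against it.
CREATE VIEW page_display AS
SELECT c.title,
       c.text,
       c.timestamp,
       u.user_name AS last_editor
  FROM cur c
  JOIN users u ON u.user_id = c.last_editor_id;

-- The display query then becomes:
SELECT * FROM page_display WHERE title = 'Benchmark';
```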

Knowing these operations and the time each needs to complete successfully,
plus the usage statistics for each operation, tells you which of them
needs as much performance as it can get.
Example:

                     DB cost/op   operations/hour   total time
  Update of a page   2 sec        1,000             2,000 sec
  Display a page     1 sec        100,000           100,000 sec

If you can reduce the DB cost per operation by 50% (to 1 sec and 0.5 sec
respectively), you gain 1,000 sec for updates but 50,000 sec for displays.
This shows that improving the performance of the update operation is of
comparatively little use.
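
The arithmetic in that example can be checked in a few lines (the numbers
are taken directly from the table above):

```python
def total_seconds(cost_per_op, ops_per_hour):
    """Total DB time spent on one operation class per hour."""
    return cost_per_op * ops_per_hour

update_total  = total_seconds(2.0, 1000)    # updates:  2000 sec/hour
display_total = total_seconds(1.0, 100000)  # displays: 100000 sec/hour

# A 50% cost reduction saves half of each total:
update_saving  = update_total / 2    # 1000 sec
display_saving = display_total / 2   # 50000 sec
```

The 50x difference in savings is why the display path is the one worth
optimizing.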

I hope this explanation was a bit clearer. I will take a look at your
test suite and again at the php source. If I get a properly running
PostgreSQL configuration, I will tell you.

> The test suite interacts with the wiki over the web, just as a user
> would, so it has no knowledge of any code internals.
>

Smurf
--
------------------------- Anthill inside! ---------------------------
Re: Database benchmarks [ In reply to ]
On Thu, 17 Apr 2003, Nick Reinking wrote:
> Also, I still think we badly need to upgrade the kernel - AFAIK, it is
> still running 2.4.6 (Brion hasn't said otherwise). A newer kernel could
> really help us out here (w/ the ganked and reganked VM subsystem).

It's 2.4.7.

I've been reluctant to do a kernel upgrade for fear of making it
unbootable and subjecting Jason to another 3-hour journey. :) Had I
thought of it at the time, we should have gone ahead and done it
when he was there dealing with the recent crash.

If we just install the RedHat RPMs it should in theory be fairly painless,
but I'm not too familiar with how the red hat boot goodies are set up or
how to make it fall back to the previous kernel after the first reboot if
the new one doesn't work.

-- brion vibber (brion @ pobox.com)
Re: Database benchmarks [ In reply to ]
> (Nick Reinking <nick@twoevils.org>):
>
> Also, I still think we badly need to upgrade the kernel - AFAIK, it is
> still running 2.4.6 (Brion hasn't said otherwise). A newer kernel could
> really help us out here (w/ the ganked and reganked VM subsystem).

Now seems like an awkward time for that, though, since it's my
impression that 2.5.X is going to become stable 2.6 Real Soon Now,
with new VMs, new threading, etc. Reiser 4 is just around the
corner as well.

Here's a thought: if we do plan to install a brand new server for
the database, take advantage of that by doing something similar to
what we did for the present server. Install the latest stuff,
make lots of tweaks, trash it a few times if needed until we
get a setup that runs reliably and fast, test the hell out of it,
then make a backup, move the database, and bring it up, prepared to
switch back if all hell breaks loose.

I'm thinking a machine that's dedicated to running one program
(mysqld), attached to one client, can afford to be a little more
on the bleeding edge and still be stable, compared to one that has
to serve hundreds of clients with dozens of apps like a typical ISP.

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
Re: Database benchmarks [ In reply to ]
> (Brion Vibber <vibber@aludra.usc.edu>):
>
> If we just install the RedHat RPMs it should in theory be fairly painless,
> but I'm not too familiar with how the red hat boot goodies are set up or
> how to make it fall back to the previous kernel after the first reboot if
> the new one doesn't work.

I'm fairly familiar with how to do that (just having done it on
my test machine a few times), but it requires a boot floppy or CD
(and therefore the 3-hour trip for Jason).

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lee/>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC
Re: Database benchmarks [ In reply to ]
On Thu, Apr 17, 2003 at 05:50:35PM -0500, Lee Daniel Crocker wrote:
> > (Brion Vibber <vibber@aludra.usc.edu>):
> >
> > If we just install the RedHat RPMs it should in theory be fairly painless,
> > but I'm not too familiar with how the red hat boot goodies are set up or
> > how to make it fall back to the previous kernel after the first reboot if
> > the new one doesn't work.
>
> I'm fairly familiar with how to do that (just having done it on
> my test machine a few times), but it requires a boot floppy or CD
> (and therefore the 3-hour trip for Jason).

I'm pretty sure if it breaks, you can still select the old kernel from a
list (RH uses GRUB, I believe). So, you still have to be there once to
boot it to the old kernel, and then you can change the boot
configuration. Still, are we really going to want to jump to 2.6 right away
after it comes out? 2.4 had quite a few teething problems until
fairly late in the game.

--
Nick Reinking -- eschewing obfuscation since 1981 -- Minneapolis, MN
Re: Database benchmarks [ In reply to ]
On Thu, 17 Apr 2003, Nick Reinking wrote:
> I'm pretty sure if it breaks, you can still select the old kernel from a
> list (RH uses GRUB, I believe). So, you still have to be there once to
> boot it to the old kernel, and then you can change the boot
> configuration.

Sure, but we don't have console without somebody driving to San Diego...

I recall that LILO had an option to set one image as the default for the
next boot, and then revert to the lilo.conf-specified one for subsequent
boots. GRUB probably has similar functionality, but it scares me and I've
never really tried to figure it out.

> Still, are we really going to want to jump to 2.6 right away
> after it comes out? 2.4 had quite a bit of teething problems until
> fairly late in the game.

I don't see any real need to go 2.6, but a more recent 2.4 ought to be a
help (fixed VM probs, some local security loopholes).

-- brion vibber (brion @ pobox.com)
Re: Database benchmarks [ In reply to ]
Our servers use LILO. I've never taken the time to learn about GRUB,
so I can't abide having it in use on a server that I end up working
on. If the server is, in fact, using GRUB, I would be surprised.

I am intrigued by this LILO option you mention. That's new to me, and
it sounds pretty useful.

Jason

P.S. I will eventually be making another trip to San Diego, and I
could always update the kernel while I'm there.

Brion Vibber wrote:

> On Thu, 17 Apr 2003, Nick Reinking wrote:
> > I'm pretty sure if it breaks, you can still select the old kernel from a
> > list (RH uses GRUB, I believe). So, you still have to be there once to
> > boot it to the old kernel, and then you can change the boot
> > configuration.
>
> Sure, but we don't have console without somebody driving to San Diego...
>
> I recall that LILO had an option to set one image as the default for the
> next boot, and then revert to the lilo.conf-specified one for subsequent
> boots. GRUB probably has similar functionality, but it scares me and I've
> never really tried to figure it out.
>
> > Still, are we really going to want to jump to 2.6 right away
> > after it comes out? 2.4 had quite a bit of teething problems until
> > fairly late in the game.
>
> I don't see any real need to go 2.6, but a more recent 2.4 ought to be a
> help (fixed VM probs, some local security loopholes).
>
> -- brion vibber (brion @ pobox.com)
>

--
"Jason C. Richey" <jasonr@bomis.com>
Re: Database benchmarks [ In reply to ]
On Thu, 17 Apr 2003, Jason Richey wrote:
> Our servers use LILO. I've never taken the time to learn about GRUB,
> so I can't abide to have it in use on a server that I end up working
> on. If the server is, in fact, using GRUB, I would be surprised.

I don't know which is sitting on the boot sector, but both LILO and GRUB
are in fact available on the system, so it should be elementary to use one
or the other. In theory. :)

> I am intrigued by this LILI option you mention. That's new to me, and
> it sounds pretty useful.

Lemme look it up....

/sbin/lilo -R - set default command line for next reboot

-R command line
     This option sets the default command for the boot
     loader the next time it executes. The boot loader
     will then erase this line: this is a once-only
     command. It is typically used in reboot scripts,
     just before calling `shutdown -r'.

So, you could set up separate entries linux-knowngood and linux-scarynew,
set linux-knowngood as the default in lilo.conf, then:

/sbin/lilo -R linux-scarynew
/sbin/shutdown -r now

If the machine can't boot up under linux-scarynew, it can be remotely
rebooted, and the second boot should bring up linux-knowngood. (Or if
linux-scarynew crashes on its own later.) If it proves reliable, the
default can be changed later on.

-- brion vibber (brion @ pobox.com)
RE: Database benchmarks [ In reply to ]
Nick Reinking wrote:
> Lee Daniel Crocker wrote:
> > Jason Dreyer wrote:
> > > You can try a different file system or block size. XFS for Linux
> > > is improving. You may want to compare it to ReiserFS. If you are
> > > going to test different block sizes for the db...
> >
> > I'm a big fan of ReiserFS in general. That's what the MySQL folks
> > recommend as well, and I run that at Piclab (which is a small machine
> > but runs the testsuite faster than my Compaq). I'm not sure that block
> > sizes are that flexible for Reiser, but I'll look into it. At any rate,
> > it would be good to find an optimal arrangement for the database
> > before we get the new server to install it on.
>
> AFAIK, ReiserFS block sizes are stuck at 4KB unless someone
> changed that
> while I wasn't looking.

XFS for Linux 1.2 on x86 supports a maximum block size of 4K, equal to the
page size of the x86 kernel. XFS supports a minimum block size of 512 bytes,
but I doubt a smaller block size would improve db performance. So a block
size performance comparison for Wikipedia is probably off in the more
distant future, when larger block sizes are supported on x86 systems or if
the db is moved to IA-64.