I ran the drbd benchmark, and got these results. Here are some hardware
specifications:
Node1
------
CPU: PII-350
RAM: 64MB
Disk: IDE 7200rpm Quantum Fireball 6.5GB
NIC: 3c905B Cyclone, set to full duplex.
Node2
------
CPU: Celeron 400
RAM: 64MB
Disk: IDE WDC AC26400R 6.5GB (RPM unknown)
NIC: Intel i82558 based using eepro100 module, set to full duplex.
Both use /dev/hda8 as the DRBD partition. I'm using ReiserFS primarily, but
the problem has also been reproduced with ext2. (ReiserFS works beautifully,
by the way; no problems with resyncing at all.)
I'm sending this to drbd-devel because Node2 has a very hard time with
sustained writes. It exhibits the same behavior that was reported with
0.5.6pre1: deadlock. I can switch virtual terminals but can't log in; TCP
responds with the SYN-ACK, but that's about it. This only happens on Node2,
when writing 100MB or more using bonnie as a benchmark. Node1 had the same
problem, but I increased the sync rate to 10000 (drbdsetup ...... -r 10000)
and changed the protocol from B to C, which seemed to alleviate the problem.
In fact, I've run bonnie -s 900 on that machine, and it happily mirrored
everything over to the other node without complaining. However, bonnie -s 100
while Node2 is the primary results in the timeout deadlock.
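For reference, the workaround amounts to two drbdsetup changes. This is an
illustrative sketch only: the report elides the device arguments, and I can't
vouch for the exact 0.5.x option layout, so everything except the `-r 10000`
rate value is a placeholder.

```shell
# Sketch only, not the exact 0.5.7 command line. The device arguments
# were elided in the original report; <device-args-as-configured> is a
# placeholder for whatever the local setup uses. Only "-r 10000" (the
# raised sync rate) is taken verbatim from the report.
drbdsetup <device-args-as-configured> -r 10000

# The second change was selecting protocol C (fully synchronous
# mirroring) instead of B; in 0.5.x this is part of the drbdsetup /
# drbd.conf device configuration rather than a runtime toggle.
```

Raising the sync rate and moving to the stricter protocol C is what stopped
the deadlock on Node1; the same tuning has not yet been confirmed on Node2.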
Anyway, I have another system identical to Node1, so these two will probably
become my fault-tolerant SMB server backing my NT-based (no, I didn't make
that decision) websites.
--------------- report --------------
DRBD Benchmark
Version: 0.5.7
SETSIZE = 100M
Node1:
Linux 2.2.17 i686
bogomips : 694.68
Disk write: 9.44 MB/sec (104857600 B / 00:10.595702)
Drbd unconnected: 9.43 MB/sec (104857600 B / 00:10.601198)
Node2:
Linux 2.2.17 i686
bogomips : 794.62
Disk write: 8.99 MB/sec (104857600 B / 00:11.120484)
Drbd unconnected: 8.69 MB/sec (104857600 B / 00:11.502283)
Network:
Bandwidth: 1.04 MB/sec (104857600 B / 01:36.113341)
Latency: round-trip min/avg/max = 0.1/0.1/0.2 ms
Drbd connected (writing on node1):
Protocol A: 5.62 MB/sec (104857600 B / 00:17.788790)
Protocol B: 6.77 MB/sec (104857600 B / 00:14.762859)
Protocol C: 6.07 MB/sec (104857600 B / 00:16.467106)
Drbd connected (writing on node2):
Protocol A: 8.48 MB/sec (104857600 B / 00:11.790881)
Protocol B: 7.96 MB/sec (104857600 B / 00:12.559939)
Protocol C: 8.36 MB/sec (104857600 B / 00:11.956605)
-------------------------------------