[PATCH 0/2] dm-ioband: I/O bandwidth controller v1.7.0: Introduction
Hi everyone,

This is the dm-ioband version 1.7.0 release.

Dm-ioband is an I/O bandwidth controller implemented as a device-mapper
driver, which gives specified bandwidth to each job running on the same
physical device.
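
As a rough sketch, a typical setup wraps each partition in an ioband device
and assigns a weight to it; the table parameters below are only a sketch and
may not match this release exactly, so please follow the documentation
included in the patch. The two devices below would share the disk's
bandwidth 80:20 under the weight policy:

# echo "0 $(blockdev --getsize /dev/sda1) ioband /dev/sda1 1 0 0 none" \
       "weight 0 :40" | dmsetup create ioband1
# echo "0 $(blockdev --getsize /dev/sda2) ioband /dev/sda2 1 0 0 none" \
       "weight 0 :10" | dmsetup create ioband2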

- Can be applied to kernel 2.6.27-rc5-mm1.
- Changes from 1.6.0 (posted on Sep 24, 2008):
  - Fixed a problem where processes issuing I/Os were blocked permanently
    when I/O requests to reclaim pages were issued consecutively.

You can apply the latest bio-cgroup patch on top of this dm-ioband version.
bio-cgroup provides a BIO tracking mechanism for dm-ioband.
Please see the following site for more information:
Block I/O tracking
http://people.valinux.co.jp/~ryov/bio-cgroup/
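
As a rough sketch, you create one cgroup per job, move the job's tasks into
it, and use the ID shown in bio.id to associate the group with a dm-ioband
group. The subsystem name used in the mount command below is an assumption;
check /proc/cgroups on a patched kernel:

# mount -t cgroup -o bio none /cgroup
# mkdir /cgroup/1
# echo $$ > /cgroup/1/tasks
# cat /cgroup/1/bio.id
1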

Thanks,
Ryo Tsuruta

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Re: [PATCH 0/2] dm-ioband: I/O bandwidth controller v1.7.0: Introduction
Hi Dong-Jae,

Thanks for being interested in dm-ioband.

> I tested the latest dm-ioband release (v1.7.0), but I got a strange
> result from it.
> Did I do something wrong in the test process?
> The test process and results are in the attached file.
> Could you check my test results and give me some helpful advice and comments?

Here are some suggestions for you.

1. You have to specify a dm-ioband device on the command line to
control the bandwidth:

# tiotest -R -d /dev/mapper/ioband1 -f 300

2. tiotest is not an appropriate tool for seeing how bandwidth is shared
among the devices, because the three tiotest runs don't finish at the
same time; the process issuing I/Os to the device with the highest
weight finishes first, so you can't see how bandwidth is shared
from the results of each tiotest.

I use iostat to see how the bandwidth varies over time. The following
is the output of iostat just after starting three tiotests with the
same settings as yours.

# iostat -p dm-0 -p dm-1 -p dm-2 1
Device:       tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
dm-0      5430.00         0.00     10860.00          0      10860
dm-1     16516.00         0.00     16516.00          0      16516
dm-2     32246.00         0.00     32246.00          0      32246

avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
           0.51    0.00    21.83    76.14    0.00    1.52

Device:       tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
dm-0      5720.00         0.00     11440.00          0      11440
dm-1     16138.00         0.00     16138.00          0      16138
dm-2     32734.00         0.00     32734.00          0      32734
...
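
The three tiotests can be started in parallel, one against each dm-ioband
device; the device names below are just an example of my naming and
correspond to the dm-0, dm-1 and dm-2 devices shown above:

# tiotest -R -d /dev/mapper/ioband1 -f 300 &
# tiotest -R -d /dev/mapper/ioband2 -f 300 &
# tiotest -R -d /dev/mapper/ioband3 -f 300 &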

> You can refer to the testing tool, tiobench, at
> http://sourceforge.net/projects/tiobench/
> Originally, tiobench doesn't support direct I/O mode testing, so I
> added the O_DIRECT option to the tiobench source code and recompiled it
> to test the direct I/O cases.

Could you give me the O_DIRECT patch?

Thanks,
Ryo Tsuruta

Re: [PATCH 0/2] dm-ioband: I/O bandwidth controller v1.7.0: Introduction
Hi Dong-Jae,

> So, after your reply I tested the dm-ioband and bio-cgroup patches with
> another I/O testing tool, xdd ver6.5 (http://www.ioperformance.com/).
> Xdd supports O_DIRECT mode and a time limit option.
> Personally, I think it is a proper tool for testing the I/O controllers
> discussed on the Linux Container ML.

Xdd is really useful for me. Thanks for letting me know.

> And I found some strange points in the test results. In fact, they may
> not be strange to other people^^
>
> 1. dm-ioband can control I/O bandwidth well in O_DIRECT mode (read and
> write); I think that result is very reasonable. But it can't control it
> in buffered mode, judging only from the output of xdd. I think the
> bio-cgroup patches are meant to solve this problem, is that right? If so,
> how can I check or confirm the role of the bio-cgroup patches?
>
> 2. As shown in the test results, the I/O performance in buffered I/O
> mode is very low compared with that in O_DIRECT mode. In my opinion,
> the reverse would be more natural in real life.
> Can you give me an answer about this?

Your results show that all the xdd programs belong to the same cgroup;
could you explain your test procedure to me in detail?

To know how many I/Os are actually issued to a physical device in
buffered mode within a measurement period, you should check the
/sys/block/<dev>/stat file just before starting the test program and
just after the end of the test program. The contents of the stat file
are described in the following document:
kernel/Documentation/block/stat.txt
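
For example, something like this shows how many sectors were actually read
and written during a run (sdb is just a placeholder for your physical
device; the field positions follow stat.txt):

# cat /sys/block/sdb/stat > /tmp/stat.before
# <run the test program>
# cat /sys/block/sdb/stat > /tmp/stat.after
# paste /tmp/stat.before /tmp/stat.after | \
  awk '{ printf "sectors read: %d  sectors written: %d\n", $14 - $3, $18 - $7 }'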

> 3. Compared with the physical bandwidth (measured with one process and
> without a dm-ioband device), the sum of the bandwidth under dm-ioband
> shows a very considerable gap from the physical bandwidth. I wonder
> why. Is it overhead from dm-ioband or the bio-cgroup patches, or are
> there other reasons?

The following are the results on my PC with a SATA disk, and there is
no big difference between the runs with and without dm-ioband. Please
try the same thing if you have time.

without dm-ioband
=================
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/sdb1 \
-reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize

T  Q   Bytes      Ops    Time    Rate   IOPS    Latency  %CPU  OP_Type  ReqSize
0  16  140001280  17090  30.121  4.648  567.38  0.0018   0.01  write    8192

with dm-ioband
==============
* cgroup1 (weight 10)
# cat /cgroup/1/bio.id
1
# echo $$ > /cgroup/1/tasks
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/mapper/ioband1 \
-reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize
T  Q   Bytes     Ops   Time    Rate   IOPS   Latency  %CPU  OP_Type  ReqSize
0  16  14393344  1757  30.430  0.473  57.74  0.0173   0.00  write    8192

* cgroup2 (weight 20)
# cat /cgroup/2/bio.id
2
# echo $$ > /cgroup/2/tasks
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/mapper/ioband1 \
-reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize
T  Q   Bytes     Ops   Time    Rate   IOPS    Latency  %CPU  OP_Type  ReqSize
0  16  44113920  5385  30.380  1.452  177.25  0.0056   0.00  write    8192

* cgroup3 (weight 60)
# cat /cgroup/3/bio.id
3
# echo $$ > /cgroup/3/tasks
# xdd.linux -op write -queuedepth 16 -targets 1 /dev/mapper/ioband1 \
-reqsize 8 -numreqs 128000 -verbose -timelimit 30 -dio -randomize
T  Q   Bytes     Ops    Time    Rate   IOPS    Latency  %CPU  OP_Type  ReqSize
0  16  82485248  10069  30.256  2.726  332.79  0.0030   0.00  write    8192

Total
=====
               Bytes      Ops    Rate   IOPS
w/o dm-ioband  140001280  17090  4.648  567.38
w/  dm-ioband  140992512  17211  4.651  567.78

> > Could you give me the O_DIRECT patch?
> >
> Of course, if you want. But it is nothing special.
> Tiobench is a very simple and lightweight piece of code, so I just added
> the O_DIRECT option in tiotest.c of the tiobench testing tool.
> Anyway, once I have made a patch file, I will send it to you.

Thank you very much!

Ryo Tsuruta
