Hi,
I'm using cgroup v1.
I do not need bfq, but mq-deadline also has no effect.
Buffered IO is my target (I've added cgwb_v1 to the grub cmdline). Without a throttle setting, the writes finish too fast to observe anything.
Could you give a demo shell script with 2 different blkio.cost.weight values under /sys/fs/cgroup/blkio?
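Roughly what I have in mind is something like the following (just a rough sketch from my side; 253:16 and the values are placeholders, and I'm not sure whether qos/model must be configured before the per-cgroup weights):

cd /sys/fs/cgroup/blkio
echo "253:16 10485760" > blkio.throttle.write_bps_device
echo "253:16 enable=1" > blkio.cost.qos
echo "253:16 ctrl=auto" > blkio.cost.model
mkdir -p fio1 fio2
echo "253:16 100" > fio1/blkio.cost.weight
echo "253:16 200" > fio2/blkio.cost.weight
# then run one buffered-write fio job in each cgroup from two consoles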
Thanks very much!
On 2022-09-29 15:52:16, "Joseph Qi" <joseph.qi(a)linux.alibaba.com> wrote:
Hi,
Which cgroup version do you use? cgroup v1 or v2?
For bfq, I don't have any experience with weight control.
For iocost, it's better to specify qos and model explicitly, according to the documentation
suggested before.
It seems you've mixed bfq, iocost, and block throttling together. I'd suggest you
evaluate them individually and use direct IO first.
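For example, something like the following (the numbers are only illustrative and the field names follow the upstream io.cost.qos/io.cost.model format; you need to benchmark your own device to get sensible parameters):

echo "253:16 enable=1 ctrl=user rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=150.00" > blkio.cost.qos
echo "253:16 ctrl=user model=linear rbps=2000000000 rseqiops=50000 rrandiops=50000 wbps=1000000000 wseqiops=30000 wrandiops=30000" > blkio.cost.model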
Thanks,
Joseph
On 9/29/22 3:26 PM, 王传国 wrote:
Hi Joseph,
Thanks for your reply! But I have 2 questions:
1. Why does blkio.bfq.weight have no effect after "echo bfq > /sys/block/vdb/queue/scheduler"?
2. iocost didn't work either: both fio jobs got about 5M, but 3M and 6M (roughly a 1:2 split) is what I want. Please point out my mistakes!
Thanks very much!
My shell script is below:
mount /dev/vdb1 /wcg/data2/
cd /sys/fs/cgroup/blkio
echo bfq > /sys/block/vdb/queue/scheduler
echo 0 > /sys/block/vdb/queue/iosched/low_latency
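# throttle writes on 253:16 to 10MB/s, then enable iocost with an automatically estimated cost model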
echo "253:16 10485760" > blkio.throttle.write_bps_device
echo "253:16 enable=1" > blkio.cost.qos
echo "253:16 ctrl=auto" > blkio.cost.model
echo 0 > /sys/block/vdb/queue/rotational
mkdir fio1 fio2
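# fio2 gets twice the iocost weight of fio1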
echo "253:16 100" > fio1/blkio.cost.weight
echo "253:16 200" > fio2/blkio.cost.weight
echo $$ > /sys/fs/cgroup/blkio/fio1/cgroup.procs
fio -rw=write -ioengine=libaio -bs=4k -size=1G -numjobs=1 -name=/wcg/data2/fio_test1.log
# run the following in another console
echo $$ > /sys/fs/cgroup/blkio/fio2/cgroup.procs
fio -rw=write -ioengine=libaio -bs=4k -size=1G -numjobs=1 -name=/wcg/data2/fio_test2.log
On 2022-09-28 16:41:40, "Joseph Qi" <joseph.qi(a)linux.alibaba.com> wrote:
> 'io.weight' is for the cfq io scheduler, while 'io.bfq.weight' is for the bfq io
> scheduler, as its name indicates.
> So you may need to configure the corresponding io scheduler as well.
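> For example, something like this (paths taken from your script below):
> echo bfq > /sys/block/vdb/queue/scheduler
> echo 200 > /aaa/cg2/test/dd1/io.bfq.weight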
>
> BTW, if you want io weight control, I recommend another approach named io
> cost. The following documentation may help to understand the details:
>
> https://help.aliyun.com/document_detail/155863.html
>
> Thanks,
> Joseph
>
> On 9/28/22 1:50 PM, 王传国 wrote:
>> Hi all,
>>
>> I see that cgroup2 has both io.weight and io.bfq.weight. What is the difference between them?
>>
>> My understanding is that they control the IO weight of sibling groups under a parent group. I tested this on the kernel version below, but the results don't look right. Could anyone point out where I went wrong? Many thanks!
>>
>> # uname -a
>>
>> Linux localhost.localdomain 4.19.91-26.an8.x86_64 #1 SMP Tue May 24 13:10:09 CST 2022 x86_64 x86_64 x86_64 GNU/Linux
>>
>>
>>
>> My test script:
>>
>> # switch to cgroup v2 by adding cgroup_no_v1=all to the grub parameters
>>
>> mkdir -p /aaa/cg2
>>
>> mkdir -p /aaa/data2
>>
>> mount -t cgroup2 nodev /aaa/cg2
>>
>> mount /dev/sdb1 /aaa/data2/
>>
>> echo bfq > /sys/block/vdb/queue/scheduler # with or without this step
>>
>>
>>
>> mkdir /aaa/cg2/test
>>
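>> # enable the io and memory controllers for child cgroups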
>> echo "+io +memory" > /aaa/cg2/cgroup.subtree_control
>>
>> echo "+io +memory" > /aaa/cg2/test/cgroup.subtree_control
>>
>> cat /aaa/cg2/test/cgroup.controllers
>>
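>> # 8:16 is /dev/sdb; wbps=10485760 caps writes from this group at 10MB/s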
>> echo "8:16 wbps=10485760" > /aaa/cg2/test/io.max
>>
>> echo $$ > /aaa/cg2/test/cgroup.procs
>>
>>
>>
>> mkdir -p /aaa/cg2/test/dd1
>>
>> mkdir -p /aaa/cg2/test/dd2
>>
>> echo 200 > /aaa/cg2/test/dd1/io.weight
>>
>> #echo 200 > /aaa/cg2/test/dd1/io.bfq.weight # tried both options
>>
>>
>>
>> # run the following 2 tests in 2 other terminals:
>>
>> echo $$ > /aaa/cg2/test/dd1/cgroup.procs
>>
>> dd if=/dev/zero of=/aaa/data2/ddfile1 bs=128M count=1
>>
>>
>>
>> echo $$ > /aaa/cg2/test/dd2/cgroup.procs
>>
>> dd if=/dev/zero of=/aaa/data2/ddfile2 bs=128M count=1
>>
>>
>>
>> I got two results of 500K+, instead of the expected 300K+ and 600K!
>>
>>
>>
_______________________________________________
Cloud Kernel mailing list -- cloud-kernel(a)lists.openanolis.cn
To unsubscribe send an email to cloud-kernel-leave(a)lists.openanolis.cn