Xen VM call trace on fstrim: Error: discard_granularity is 0.
Running the following command in a Xen VM generates the call trace below:

sudo fstrim -v /

Xen VM host: Xen 4.14.0
Xen Dom0: Linux 4.19.14
Xen DomX: Linux 5.10.6

The kernel code that triggers this trace is the following (from block/blk-lib.c):

    /* In case the discard granularity isn't set by buggy device driver */
    if (WARN_ON_ONCE(!q->limits.discard_granularity)) {
        char dev_name[BDEVNAME_SIZE];

        bdevname(bdev, dev_name);
        pr_err_ratelimited("%s: Error: discard_granularity is 0.\n",
                           dev_name);
        return -EOPNOTSUPP;
    }

The call trace in the Xen VM:

[ 145.295257] ------------[ cut here ]------------
[ 145.295274] WARNING: CPU: 1 PID: 1230 at block/blk-lib.c:51
__blkdev_issue_discard+0x245/0x2a0
[ 145.295277] Modules linked in: intel_rapl_msr intel_rapl_common
crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd
cryptd glue_helper rapl joydev bochs_drm drm_vram_helper drm_ttm_helper
mousedev xen_kbdfront ttm drm_kms_helper cec syscopyarea sysfillrect
sysimgblt xen_netfront fb_sys_fops intel_agp pcspkr intel_gtt mac_hid fuse
drm agpgart bpf_preload ip_tables x_tables ext4 crc32c_generic crc16
mbcache jbd2 ata_generic pata_acpi floppy xen_blkfront crc32c_intel
serio_raw ata_piix
[ 145.295373] CPU: 1 PID: 1230 Comm: fstrim Not tainted 5.10.6-arch1-1 #1
[ 145.295376] Hardware name: Xen HVM domU, BIOS 4.14.0 11/14/2020
[ 145.295383] RIP: 0010:__blkdev_issue_discard+0x245/0x2a0
[ 145.295390] Code: 48 8b 44 24 48 65 48 2b 04 25 28 00 00 00 75 6c 8b 44
24 1c 48 83 c4 50 5b 5d 41 5c 41 5d 41 5e 41 5f c3 0f 0b e9 f9 fe ff ff
<0f> 0b 48 8d 74 24 28 4c 89 ef e8 ac c2 00 00 48 c7 c6 e0 a1 86 b6
[ 145.295393] RSP: 0018:ffffb283007b7b70 EFLAGS: 00010246
[ 145.295399] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
0000000000000c40
[ 145.295402] RDX: 00000000000000c0 RSI: 0000000000012240 RDI:
ffff8e4980411380
[ 145.295404] RBP: 00000000000000c0 R08: 0000000000000000 R09:
ffffb283007b7bf8
[ 145.295407] R10: 0000000000000001 R11: 0000000000002000 R12:
0000000000012240
[ 145.295410] R13: ffff8e4980411380 R14: ffff8e4983b14770 R15:
0000000000000000
[ 145.295415] FS: 00007fd5189fc580(0000) GS:ffff8e498ad00000(0000)
knlGS:0000000000000000
[ 145.295418] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 145.295421] CR2: 00005562e06d6448 CR3: 0000000108188001 CR4:
00000000003706e0
[ 145.295432] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[ 145.295435] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
0000000000000400
[ 145.295437] Call Trace:
[ 145.295452] blkdev_issue_discard+0x86/0xe0
[ 145.295510] ext4_trim_fs+0x518/0x930 [ext4]
[ 145.295521] ? mntput_no_expire+0x4a/0x260
[ 145.295568] __ext4_ioctl+0xfee/0x1b10 [ext4]
[ 145.295619] ext4_ioctl+0x2a/0x40 [ext4]
[ 145.295626] __x64_sys_ioctl+0x83/0xb0
[ 145.295634] do_syscall_64+0x33/0x40
[ 145.295640] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 145.295646] RIP: 0033:0x7fd518b71f6b
[ 145.295652] Code: 89 d8 49 8d 3c 1c 48 f7 d8 49 39 c4 72 b5 e8 1c ff ff
ff 85 c0 78 ba 4c 89 e0 5b 5d 41 5c c3 f3 0f 1e fa b8 10 00 00 00 0f 05
<48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d d5 ae 0c 00 f7 d8 64 89 01 48
[ 145.295655] RSP: 002b:00007ffea645a438 EFLAGS: 00000246 ORIG_RAX:
0000000000000010
[ 145.295660] RAX: ffffffffffffffda RBX: 00007ffea645a510 RCX:
00007fd518b71f6b
[ 145.295662] RDX: 00007ffea645a450 RSI: 00000000c0185879 RDI:
0000000000000003
[ 145.295665] RBP: 00005562e06d5440 R08: 00005562e06d5440 R09:
00007ffea645ae87
[ 145.295668] R10: 0000000000000000 R11: 0000000000000246 R12:
0000000000000003
[ 145.295670] R13: 00007ffea645ae86 R14: 0000000000000000 R15:
ffffffff00000000
[ 145.295677] ---[ end trace 5fcef9628995f731 ]---
[ 145.295682] xvda1: Error: discard_granularity is 0.
[ 145.295703] xvda1: Error: discard_granularity is 0.

Please note that the VM uses a raw partition on the VM host as its disk:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/dev/nvme0n1p2'/>
  <target dev='xvda' bus='xen'/>
</disk>
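
As a sanity check, discard support on the backing device can be
inspected on the host with lsblk's discard columns or via sysfs, for
example (device names taken from the libvirt XML above):

lsblk --discard /dev/nvme0n1p2
cat /sys/block/nvme0n1/queue/discard_granularity

If the granularity reported there is 0, the backing storage itself does
not advertise discard support.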

Attachments: xen-dmesg, dom0-dmesg, domx-dmesg, domx-libvirt-xml.

--
Arthur Borsboom
Re: Xen VM call trace on fstrim: Error: discard_granularity is 0.
On Wed, Jan 13, 2021 at 11:43:50AM +0100, Arthur Borsboom wrote:
> Running the following command in a Xen VM generates the call trace below:
>
> sudo fstrim -v /
>
> Xen VM host: Xen 4.14.0
> Xen Dom0: Linux 4.19.14
> Xen DomX: Linux 5.10.6
>
> The kernel code that triggers this trace is the following (from block/blk-lib.c):
>
>     /* In case the discard granularity isn't set by buggy device driver */
>     if (WARN_ON_ONCE(!q->limits.discard_granularity)) {
>         char dev_name[BDEVNAME_SIZE];
>
>         bdevname(bdev, dev_name);
>         pr_err_ratelimited("%s: Error: discard_granularity is 0.\n",
>                            dev_name);
>         return -EOPNOTSUPP;
>     }

So it seems like the underlying storage in dom0 doesn't support
discard, and hence the feature doesn't get set up on the frontend?

Can you print the output of `xenstore-ls -fp` executed from dom0 when
the guest is running? That way I could see which features the backend
is exposing.
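
For reference, the discard-related nodes written by the backend would
appear in that output roughly as follows (an illustrative sketch only;
the backend type, domain ID and device ID are placeholders that depend
on the setup):

/local/domain/0/backend/qdisk/<domid>/<devid>/discard-granularity = "512"
/local/domain/0/backend/qdisk/<domid>/<devid>/discard-alignment = "0"
/local/domain/0/backend/qdisk/<domid>/<devid>/discard-secure = "0"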

Thanks, Roger.
Re: Xen VM call trace on fstrim: Error: discard_granularity is 0.
Find attached the output of xenstore-ls.

Best regards,
Arthur Borsboom.

On Mon, 18 Jan 2021 at 11:19, Roger Pau Monné <roger.pau@citrix.com> wrote:

> On Wed, Jan 13, 2021 at 11:43:50AM +0100, Arthur Borsboom wrote:
> > Running the following command in a Xen VM generates the call trace below:
> >
> > sudo fstrim -v /
> >
> > Xen VM host: Xen 4.14.0
> > Xen Dom0: Linux 4.19.14
> > Xen DomX: Linux 5.10.6
> >
> > The kernel code that triggers this trace is the following (from block/blk-lib.c):
> >
> >     /* In case the discard granularity isn't set by buggy device driver */
> >     if (WARN_ON_ONCE(!q->limits.discard_granularity)) {
> >         char dev_name[BDEVNAME_SIZE];
> >
> >         bdevname(bdev, dev_name);
> >         pr_err_ratelimited("%s: Error: discard_granularity is 0.\n",
> >                            dev_name);
> >         return -EOPNOTSUPP;
> >     }
>
> So it seems like the underlying storage in dom0 doesn't support
> discard, and hence the feature doesn't get set up on the frontend?
>
> Can you print the output of `xenstore-ls -fp` executed from dom0 when
> the guest is running? That way I could see which features the backend
> is exposing.
>
> Thanks, Roger.
>


--
Arthur Borsboom
Re: Xen VM call trace on fstrim: Error: discard_granularity is 0.
On Mon, Jan 18, 2021 at 11:25:49AM +0100, Arthur Borsboom wrote:
> Find attached the output of xenstore-ls.

Thanks, that's very helpful. I think this is due to QEMU not writing
the discard-alignment node, which leads to blkfront only setting up the
discard feature partially: xenbus_gather() fails as a whole when any of
the requested nodes is missing, so discard_granularity is never copied
from the backend even though feature_discard has already been set to 1.

Could you give the following patch a try on domU? I think it should
solve your issue (I've only build-tested it).

---8<---
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 5265975b3fba..5a93f7cc2939 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2179,22 +2179,23 @@ static void blkfront_closing(struct blkfront_info *info)

static void blkfront_setup_discard(struct blkfront_info *info)
{
- int err;
- unsigned int discard_granularity;
- unsigned int discard_alignment;
+ unsigned int discard_granularity = 0;
+ unsigned int discard_alignment = 0;
+ unsigned int discard_secure = 0;

- info->feature_discard = 1;
- err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+ xenbus_gather(XBT_NIL, info->xbdev->otherend,
"discard-granularity", "%u", &discard_granularity,
"discard-alignment", "%u", &discard_alignment,
+ "discard-secure", "%u", &discard_secure,
NULL);
- if (!err) {
- info->discard_granularity = discard_granularity;
- info->discard_alignment = discard_alignment;
- }
- info->feature_secdiscard =
- !!xenbus_read_unsigned(info->xbdev->otherend, "discard-secure",
- 0);
+
+ if (!discard_granularity)
+ return;
+
+ info->feature_discard = 1;
+ info->discard_granularity = discard_granularity;
+ info->discard_alignment = discard_alignment;
+ info->feature_secdiscard = !!discard_secure;
}

static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
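
For context, these values are what blkfront later copies into the
request queue limits when the disk is initialized, which is why
feature_discard set to 1 together with a granularity of 0 ends up
tripping the WARN in __blkdev_issue_discard. Roughly, paraphrased from
xlvbd_init_blk_queue() in the same file (a simplified sketch, not the
verbatim code):

    if (info->feature_discard) {
        /* Advertise discard and apply the limits gathered above. */
        blk_queue_flag_set(QUEUE_FLAG_DISCARD, rq);
        blk_queue_max_discard_sectors(rq, get_capacity(gd));
        rq->limits.discard_granularity = info->discard_granularity;
        rq->limits.discard_alignment = info->discard_alignment;
        if (info->feature_secdiscard)
            blk_queue_flag_set(QUEUE_FLAG_SECERASE, rq);
    }

With the patch above, feature_discard is only set when the backend
provides a non-zero discard-granularity, so this block is skipped
when discard cannot actually be used.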