
LVM VDO not sending "device online events" to LVM? #52

Open
tigerblue77 opened this issue May 14, 2022 · 5 comments

@tigerblue77
Hello everyone,

Still working on my project to get VDO working on my Debian 11(.3) system, I ran into a problem at the first reboot after installation; everything is detailed here, including the fix I found.

Reading this RHEL 8 documentation, I understand that the default behavior is for LVM to wait for a "device online event" before activating a logical volume. That's why I'm wondering whether my problem is "normal", whether I did something wrong, or whether I'm hitting a VDO bug where VDO doesn't send this event to LVM, which would explain why LVM doesn't activate the VDO volume.

Also, don't hesitate to give any advice if I'm doing something wrong or if you have a better solution than the one I found. It's really hard to find up-to-date documentation and to apply it on a system other than RHEL, so... Help is welcome :)

@rhawalsh rhawalsh self-assigned this May 18, 2022
@rhawalsh
Member

Hi @tigerblue77, I saw this thread and meant to reply sooner.

I'm still trying to understand what might be going on for you, but wanted to mention that the mount options you listed in /etc/fstab are not actually needed in the LVM-VDO (or RHEL-8.3+) scenario. We're working to update the docs on that. You should be able to get away with a basic configuration that looks like:

echo "$DM_VDOLV_PATH	$MOUNT_POINT	ext4	defaults	0	0" >> /etc/fstab

The additional options were previously needed because we only had the vdo.service unit, which effectively ran vdo start --all, and we didn't have any per-volume control. We have since established a udev trigger that runs an instantiated systemd unit that can start individual volumes as their storage becomes available.

This behavior has always been present in the LVM-VDO implementation, and I'm a little confused about the need for an lvm2-activation-generator. Prior to us being acquired by Red Hat, we had a number of Debian/Ubuntu deployments that seemed to work fine, but perhaps things have changed over the last few years?

@tigerblue77
Author

tigerblue77 commented May 18, 2022

Hi @rhawalsh,

Thanks for taking a look and thanks for your reply. I am currently working on this project, so I will modify my script with your suggestion and tell you directly if it works for me.

Edit: yes it works. Thanks.

Don't worry, it works like a charm once I add the line event_activation = 0 to the LVM configuration file (/etc/lvm/lvm.conf). I'm just curious, so I'm wondering:

  • Why do I need this systemd .mount file? I thought /etc/fstab was the only configuration needed for permanent/static mounts and that the system handled the rest.
  • Why is all this manipulation not necessary when creating "basic" logical volumes? VDO is just built on top of LVM, so it should act the same... Especially since my VDO volumes are active when I create them.
  • Why do I have to keep this event_activation parameter in my lvm.conf, or else I go back to the original problem? (I've tested it.) Once the .mount file is generated, I thought everything would keep working by relying on it.
  • Finally, why is this configuration not the default one? Am I slowing my startup by using it? Or risking any problems?
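For reference, the setting in question lives in the `global` section of /etc/lvm/lvm.conf. A sketch of the relevant fragment (surrounding settings omitted; the comments are mine, paraphrasing lvm.conf's own documentation):

```
global {
        # 1 = event-based autoactivation as devices appear (upstream default)
        # 0 = direct activation at fixed points during startup
        event_activation = 0
}
```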

Also, I hope everything is clear in my linked Reddit thread; don't hesitate if you have any questions. Thanks again.

@rhawalsh
Member

> Hi @rhawalsh,
>
> Thanks for taking a look and thanks for your reply. I am currently working on this project, so I will modify my script with your suggestion and tell you directly if it works for me.
>
> Edit: yes it works. Thanks.

Great!

> Don't worry, it works like a charm once I add the line event_activation = 0 to the LVM configuration file (/etc/lvm/lvm.conf). I'm just curious, so I'm wondering:
>
>   • Why do I need this systemd .mount file? I thought /etc/fstab was the only configuration needed for permanent/static mounts and that the system handled the rest.

The systemd mount unit is just an alternative to spelling things out in /etc/fstab, but with additional controls available, just like other systemd units.
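For illustration, a minimal mount unit equivalent to an fstab line might look like this (the device path and mount point are hypothetical; the unit file name must encode the mount point, e.g. /mnt/vdo → mnt-vdo.mount):

```ini
# Hypothetical /etc/systemd/system/mnt-vdo.mount
[Unit]
Description=VDO-backed data volume

[Mount]
What=/dev/mapper/vg_vdo-lv_vdo
Where=/mnt/vdo
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
```

You would enable it with `systemctl enable mnt-vdo.mount`. Note that systemd's fstab generator produces equivalent units from /etc/fstab automatically, which is why an fstab entry alone normally suffices.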

>   • Why is all this manipulation not necessary when creating "basic" logical volumes? VDO is just built on top of LVM, so it should act the same... Especially since my VDO volumes are active when I create them.

Which manipulations are you speaking of? As you can see we have no special requirements on the mount options anymore. It used to be necessary because we didn't have per-device management controls during startup. With the LVM-VDO implementation, the defaults always worked.

Unless you're talking about the stuff in your script where you were running things like depmod, etc. In that case, any time you want to have a kernel module available in the initrd, you'd need to do that. Generally speaking, unless you need it in early boot, I think you'd be OK skipping that step.

>   • Why do I have to keep this event_activation parameter in my lvm.conf, or else I go back to the original problem? (I've tested it.) Once the .mount file is generated, I thought everything would keep working by relying on it.

I'm not sure... The default configuration in RHEL-based distros works as intended. I'm not sure whether event_activation is enabled or disabled by default there, or why it needs to be changed on Debian-based distros.
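One way to answer this on any distro is to ask lvmconfig directly; a sketch (guarded so it degrades gracefully where lvm2 is not installed):

```shell
# Compare the distro's shipped default with the effective setting.
if command -v lvmconfig >/dev/null 2>&1; then
    # What lvm2 would use with no lvm.conf override:
    lvmconfig --typeconfig default global/event_activation
    # What is currently in effect (built-in default + lvm.conf):
    lvmconfig --typeconfig current global/event_activation
else
    echo "lvm2 not installed"
fi
```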

>   • Finally, why is this configuration not the default one? Am I slowing my startup by using it? Or risking any problems?

I don't know who maintains lvm2 in the Debian-based distros; I don't think it's the same group as the one that maintains it in RHEL-based distros. If they're using a different default, maybe they could explain it (or something in their logs might indicate why).

> Also, I hope everything is clear in my linked Reddit thread; don't hesitate if you have any questions. Thanks again.

@tigerblue77
Author

tigerblue77 commented Jul 17, 2023

Hello @rhawalsh,
Sorry I didn't answer; it was a very busy year, and 2023 has been the same...

> Which manipulations are you speaking of? As you can see we have no special requirements on the mount options anymore. It used to be necessary because we didn't have per-device management controls during startup. With the LVM-VDO implementation, the defaults always worked.
>
> Unless you're talking about the stuff in your script where you were running things like depmod, etc. In that case, any time you want to have a kernel module available in the initrd, you'd need to do that. Generally speaking, unless you need it in early boot, I think you'd be OK skipping that step.

Yes I was talking about all of this.

Anyway, I just upgraded to Debian 12 with the 6.1.37-1 Linux kernel and LVM 2.03.16-2, and now this configuration (with event_activation = 0) no longer works... I have LVM installed with the default configuration, but my LVM VDO volumes are inactive at boot... Any ideas, now that I'm no longer on an old kernel? (This is very important for me, as I currently have services down in my home lab...)

lvm.conf states:

# Previously, setting this to zero would enable static autoactivation
# services (via the lvm2-activation-generator), but the autoactivation
# services and generator have been removed.

@tigerblue77
Author

tigerblue77 commented Jul 17, 2023

OMG, I think I just found it... I had to add "kvdo" to /etc/initramfs-tools/modules so that the module is loaded from the initramfs at boot...
Can you confirm that it's a requirement? And that it's the right way to do it?
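The fix described above can be sketched as follows. The scratch-file default is mine so the snippet can be dry-run; on a real Debian system you would point MODULES_FILE at /etc/initramfs-tools/modules and then, as root, rebuild the initrd with `update-initramfs -u`:

```shell
# Ensure "kvdo" is listed in initramfs-tools' module list so the module
# ends up inside the initrd and is loaded at boot.
MODULES_FILE="${MODULES_FILE:-/tmp/initramfs-modules.demo}"
touch "$MODULES_FILE"
# Append only if not already present, so the snippet is idempotent:
grep -qx 'kvdo' "$MODULES_FILE" || echo 'kvdo' >> "$MODULES_FILE"
cat "$MODULES_FILE"
```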

Sources:
