Importing ZFS volume fails on OSv #918
Turns out that if we ZFS-format a volume on Ubuntu, then it's unusable for OSv. But if we ZFS-format the volume inside OSv, then it renders as "damaged" for Ubuntu, while any OSv unikernel is able to use it with no problems at all! @nyh, was OSv's /zpool.so perhaps adjusted at some point of OSv's development? Should OSv's ZFS implementation play well with any other ZFS implementation, or did we make it OSv-specific?

I've also noticed that /zpool.so inside OSv has fewer ZFS features listed as supported compared to zfs on my Ubuntu, but OSv crashes even if I disable them all (by providing the -d flag to the zpool create command).
Last time I tried, I was able to mount an OSv ZFS file system on Ubuntu using OpenZFS and perform basic commands on it like ls, etc. However, after this I could not use the same OSv image to boot OSv - I think it would crash trying to mount ZFS.
I am more than happy to send you my notes on mounting OSv ZFS on Ubuntu.
Waldek
Hi @wkozaczuk, please do send the notes that you've mentioned, I'm looking forward to reading them! Currently my goal is to be able to prepare a volume on Ubuntu that I can then mount in OSv (an additional volume, not the one that OSv boots from), so perhaps your approach is already good enough to cover my case.
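For attaching such an additional volume when launching OSv under QEMU directly, something along these lines might work. This is an untested sketch; the image names and memory size are examples only, and inside OSv each extra virtio-blk drive should show up as the next /dev/vblkN device:

    # Untested sketch: boot image plus one additional ZFS volume (names are examples).
    qemu-system-x86_64 -enable-kvm -m 2G -nographic \
        -drive file=osv.img,if=virtio \
        -drive file=test1.img,if=virtio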
I can't add any significant comment, but anyway. A long time ago (about 2 years?) I tried to prepare an OSv image without booting it - i.e. mount its ZFS on the Linux host and add some files. I don't really remember all the problems I ran into, but I do remember that I gave up on that task. Similarly to wkozaczuk's experience, OSv didn't like the modified image.
When I've needed to mount the zfs filesystem from an image, this is what I've done:

sudo modprobe nbd max_part=63
sudo qemu-nbd -c /dev/nbd0 osv.img
sudo zpool import -d /dev osv
sudo zfs set mountpoint=/media/osv osv
sudo zfs mount osv/zfs

Then unmounting:

sudo zfs umount osv/zfs

The problem is that once this is done, the resulting image is unusable for the reasons mentioned above.
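For reference, the teardown after the zfs umount above could plausibly be completed like this (an untested sketch, assuming the same pool name and NBD device as in the commands above), so that the image is released in a consistent state:

    sudo zpool export osv        # release the pool and flush outstanding writes
    sudo qemu-nbd -d /dev/nbd0   # disconnect the NBD device from the image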
I also think that the OpenZFS implementation on Linux is not as mature as the one on FreeBSD. So maybe the mileage may vary if the ZFS image was created on FreeBSD and imported into OSv.
Recently I spent some time investigating this issue and found out the following:
The only problem is that any zpool or zfs command works really slowly with qemu-nbd-connected devices. I am not sure what the solution to this is, or if there is a way to create a ZFS pool with a vdev of type disk without qemu-nbd.
I am also not sure how to enforce the vdev name to be /dev/vblk0.1 after the image is mounted on the host. So overall it looks like OSv's ZFS is compatible with ZFS artifacts of a newer version created/modified on the host.
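One way to check which device path actually got recorded in the pool's vdev labels (and therefore what path an importer will look for) is to dump the labels with zdb. A sketch, assuming the image is connected via qemu-nbd and the pool lives on its first partition:

    sudo zdb -l /dev/nbd0p1    # prints the vdev label, including the 'path' recorded for the device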
Nice finding :) I tried it and it seems to work without a flaw for me. One detail,
Interesting find. If qemu-nbd on a qcow image is slow, I wonder whether you couldn't do it much faster using a loop device (see https://en.wikipedia.org/wiki/Loop_device) on a raw image. E.g., something like (I didn't try!):

dd if=/dev/zero of=test1.img bs=1M count=128
losetup /dev/loop0 test1.img
zpool create test1-zpool -m /test1 loop0
zfs create test1-zpool/test1
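A slightly fuller version of the same idea (an untested sketch; the loop device is whatever losetup hands out, and the pool/dataset names are taken from the example above), including detaching cleanly so the raw image can then be given to OSv:

    truncate -s 128M test1.img                     # sparse raw image, equivalent to the dd above
    LOOPDEV=$(sudo losetup -f --show test1.img)    # attach and print the allocated /dev/loopN
    sudo zpool create test1-zpool -m /test1 "$LOOPDEV"
    sudo zfs create test1-zpool/test1
    sudo zpool export test1-zpool                  # flush and release the pool
    sudo losetup -d "$LOOPDEV"                     # detach the loop device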
I think one implication of this finding is that the ZFS-based images could, in theory, alternatively be built on a host without having to run OSv with zpool/zfs, etc.
True. I was asked once to try building an OSv image without running it, but failed due to problems with ZFS not being valid for OSv anymore (after I hand-edited it).
The commit c9640a3, addressing issue #918, tweaked the vdev disk mounting logic to default to importing the root pool from the device /dev/vblk0.1. This was really a hack that was satisfactory to support mounting a ZFS image created or modified on the host. However, if we want to be able to import the root pool and mount a ZFS filesystem from an arbitrary device and partition like /dev/vblk0.2 or /dev/vblk1.1, we have to pass the specific device path to all the places in the ZFS code that reference it. There are 4 code paths that end up calling vdev_alloc(), but unfortunately changing all the relevant functions and their callers to pass the device path would be quite untenable. So instead, this patch adds a new field, spa_dev_path, to the spa structure that holds the information about the Storage Pool Allocator in memory. This new field is set to point to the device we want to import the ZFS root pool from in the spa_import_rootpool() function called by the ZFS mount disk process, and it is then used by vdev_alloc() downstream.

Refs #1200

Signed-off-by: Waldemar Kozaczuk <[email protected]>
Message-Id: <[email protected]>
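If one wanted to create such a pool on a specific partition from the host side in the first place, a rough, untested sketch might look like the following (the image, pool, and dataset names mirror the earlier examples; with max_part set, the image's partitions appear as /dev/nbd0p1, /dev/nbd0p2, and so on):

    sudo modprobe nbd max_part=63
    sudo qemu-nbd -c /dev/nbd0 osv.img
    sudo zpool create -m none osv /dev/nbd0p2   # pool on the second partition, analogous to /dev/vblk0.2 in OSv
    sudo zfs create osv/zfs
    sudo zpool export osv
    sudo qemu-nbd -d /dev/nbd0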
I have the following use case that I'd like to realize:
- prepare a volume test1.img with a zpool named test1-zpool, with its mountpoint set to /dev/test1
- have OSv mount /dev/test1 automatically on boot

I'm able to prepare a volume as described using the following commands:
But when I then attach it to OSv and attempt to run the CLI, I get the following error:
Do you perhaps have any idea why an error like this would occur?
My debugging
In the OSv source code I can see that OSv attempts to import ZFS pools here:
osv/fs/vfs/main.cc, lines 2195 to 2216 at commit 8f82c93
And if I perform the very same import operation on my host, everything seems to be working normally:
$ sudo zpool import -d .
   pool: test1-zpool
     id: 1371947131760445919
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        test1-zpool    ONLINE
          ./test1.img  ONLINE
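For completeness, a host-side check one could run (not part of the original report; a hedged sketch using the same pool name) is to actually finish the import, verify the datasets, and export again before handing the image to OSv:

    sudo zpool import -d . test1-zpool   # import the pool from the image in the current directory
    zfs list -r test1-zpool              # verify datasets and mountpoints
    sudo zpool export test1-zpool        # export again so the image can be attached elsewhere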
So I'm not sure why OSv would have any problems with it, and I kindly ask for help.
/cc @justinc1 @gberginc