This repository has been archived by the owner on Sep 7, 2021. It is now read-only.

Implementation of vCPU crate #3

Open
wants to merge 3 commits into base: main

Conversation

jennymankin
Collaborator

This PR is for a simple crate that provides a trait abstraction of a virtual CPU (vCPU).

Each VMM can then provide its own implementation of the Vcpu trait. (For example, I have an implementation of the Vcpu trait for the VcpuFd struct of rust-vmm/kvm-ioctls; when this PR is approved, I can make any last-minute changes/tests to that POC and put it up for PR as well.)

Issue/design here
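The shape being proposed can be sketched roughly like this (a minimal illustration; the names `DummyVcpu` and the stand-in `Fpu` type are assumptions for the sketch, not the PR's actual definitions):

```rust
use std::io;

// Illustrative sketch only: simplified from the kvm-ioctls VcpuFd API
// that the PR mirrors. `Fpu` here is a stand-in for the real
// architecture-specific FPU state structure.
pub struct Fpu;

pub trait Vcpu {
    /// Run the vCPU until the next VM exit (exit details elided here).
    fn run(&self) -> io::Result<()>;

    /// Set the floating-point unit state.
    fn set_fpu(&self, fpu: &Fpu) -> io::Result<()>;
}

/// A toy backend showing how a VMM-specific type would implement the trait.
pub struct DummyVcpu;

impl Vcpu for DummyVcpu {
    fn run(&self) -> io::Result<()> {
        Ok(())
    }

    fn set_fpu(&self, _fpu: &Fpu) -> io::Result<()> {
        Ok(())
    }
}
```

Each VMM backend (KVM, WHP, …) would supply its own concrete type implementing the trait, and higher-level code would accept any `T: Vcpu`.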

/// Reasons for vCPU exits.
///
/// Exit reasons are drawn from a superset of exit reasons for various VMMs.
///
Collaborator Author

I would have preferred to implement VcpuExit as an associated type (like RunContextType), and have each implementation of the Vcpu trait be responsible for defining its own VcpuExit enum, so that we didn't have to guess at the superset and/or update it for future changes. However, the kvm-ioctls/Firecracker/crosvm code all rely on storing ioport and MMIO data in the VcpuExit itself, which requires an explicit lifetime annotation on VcpuExit:

pub enum VcpuExit<'a> {
    IoOut(u16 /* port */, &'a [u8] /* data */),
    IoIn(u16 /* port */, &'a mut [u8] /* data */),
    MmioRead(u64 /* address */, &'a mut [u8]),
    MmioWrite(u64 /* address */, &'a [u8]),
    // …
}

The only way I was able to successfully implement VcpuExit as an associated type requires explicit lifetime annotations everywhere: in the trait definition, trait implementations, and the run function. And, most problematically, each explicit lifetime annotation must then be carried over as a generic parameter everywhere the trait is used.

Example:

// Trait definition
pub trait Vcpu<'a> {
    type RunContextType;
    type VcpuExitType;
    fn run(&'a self) -> Result<Self::VcpuExitType>;
    // …
}

// Trait implementation
impl<'a> Vcpu<'a> for VcpuFd {
    type RunContextType = kvm_bindings::kvm_run;
    type VcpuExitType = VcpuExit<'a>;

    fn run(&'a self) -> Result<VcpuExit<'a>> { … }
    // …
}

// Every future use of the trait as a generic requires the explicit annotation too
pub fn setup_fpu<'a, T: Vcpu<'a>>(vcpu: &T) -> Result<()> {
    vcpu.set_fpu(…)
}

IMO, this doesn't make the use of an associated type worth it. As far as I can tell, generic associated types are an open Rust RFC: rust-lang/rust#44265, so this might become doable in the future, but not yet.
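For reference, a rough sketch of how the associated-type approach could look once generic associated types land (hypothetical names; this needs a compiler with GAT support, which was stabilized much later, in Rust 1.65):

```rust
use std::io;

// Hypothetical sketch: with GATs, the lifetime lives on the associated
// type instead of the trait, so it does not leak into generic bounds.
pub trait Vcpu {
    type VcpuExit<'a>
    where
        Self: 'a;

    fn run(&mut self) -> io::Result<Self::VcpuExit<'_>>;
}

/// Example backend-specific exit type that borrows from the vCPU.
pub enum DummyExit<'a> {
    MmioRead(u64 /* address */, &'a mut [u8] /* data */),
}

pub struct DummyVcpu {
    pub buf: [u8; 4],
}

impl Vcpu for DummyVcpu {
    type VcpuExit<'a>
        = DummyExit<'a>
    where
        Self: 'a;

    fn run(&mut self) -> io::Result<DummyExit<'_>> {
        Ok(DummyExit::MmioRead(0x1000, &mut self.buf))
    }
}

// Callers just write `T: Vcpu`; no `'a` parameter is threaded through.
pub fn step<T: Vcpu>(vcpu: &mut T) -> io::Result<T::VcpuExit<'_>> {
    vcpu.run()
}
```

Each backend would then define its own exit enum, avoiding the guessed superset entirely.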

More experienced Rust developers than I: please let me know if there's a better way to abstract this.

Member

We encountered the same issue when designing the vm-memory crate. The current solution for vm-memory is to carry the lifetime parameter along, and it's really annoying :(

/// Get the context from the last vCPU run.
///
/// The context contains information related to the latest VM exit
fn get_run_context(&self) -> Self::RunContextType;
Collaborator Author

The current Vcpu trait definition only includes the set of functions implemented by kvm-ioctls. There are, of course, other functions that would be useful on a Vcpu trait (get/set MP state, get/set xsave state, etc.), and any Vcpu implementation not explicitly supporting them could use the unimplemented!() macro.
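A minimal sketch of that fallback, with hypothetical method names, could look like:

```rust
use std::io;

// Sketch (method names are assumptions): optional capabilities get a
// default body of unimplemented!(), so a backend only overrides what
// its hypervisor actually supports.
pub trait Vcpu {
    fn get_mp_state(&self) -> io::Result<u32> {
        unimplemented!("MP state is not exposed by this vCPU backend")
    }

    fn set_mp_state(&self, _state: u32) -> io::Result<()> {
        unimplemented!("MP state is not exposed by this vCPU backend")
    }
}

/// A backend that supports none of the optional calls still compiles.
pub struct MinimalVcpu;

impl Vcpu for MinimalVcpu {}
```

Calling an unsupported method panics at runtime rather than failing to compile, which is the trade-off of this approach.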

src/x86_64.rs Outdated
)
);
}

Collaborator Author

@andreeaflorescu I pulled these data structure definitions from kvm-ioctls/mod.rs (i.e., the CpuId wrapper), as these are not KVM-specific and are useful to any Vcpu implementation. The duplicate will be removed in the next-step PR that updates kvm-ioctls to use this vmm-vcpu crate. Let me know if you prefer another way.

Member

We are currently working on providing a more flexible interface for kvm structures with 0-sized arrays. Can you also take a look at this PR: rust-vmm/kvm#6 and let us know what you think?

We need to alter cpuid entries (adding new ones to the list, removing existing ones and so on). This is the main purpose of this PR.

Collaborator Author

This is nice. At first glance it looks good, but I'll also take a deeper look. It also resolves a question I had about whether the MSR array was going to be subject to the same issues as the CPUID array (it looks like both are resolved in this PR).

Member

Yep, it is a generic solution for handling 0-sized arrays.


/// Set a single register in the virtual CPU
#[cfg(any(target_arch = "arm", target_arch = "aarch64"))]
fn set_one_reg(&self, reg_id: u64, data: u64) -> Result<()>;
Member

Any way to read the current value of a register on ARM/ARM64? It seems there's no equivalent get_one_reg() on ARM.
Could we combine set_one_reg() with set_regs() by defining StandardRegisters for ARM as:

struct StandardRegisters {
    reg_id: u64,
    data: u64,
}

@tkreuzer Apr 17, 2019

  • What's the issue with implementing set_regs()/get_regs() on ARM? Performance? Too many registers?
  • Why not implement get_one_reg()/set_one_reg() on x86_64? This would be convenient and would even allow a default implementation (maybe not the most efficient one) of get_regs()/set_regs(), etc.
  • What about using a hypervisor-specific enum as an associated type for reg_id?
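The default-implementation idea in the second bullet could be sketched like this (all names hypothetical, using a simple (reg_id, data) pair as suggested above):

```rust
use std::cell::RefCell;
use std::io;

// Sketch of the reviewer's idea (names are assumptions): if registers
// are addressed as (reg_id, data) pairs, set_regs() can be a default
// trait method built on top of set_one_reg().
pub struct RegEntry {
    pub reg_id: u64,
    pub data: u64,
}

pub trait Vcpu {
    fn set_one_reg(&self, reg_id: u64, data: u64) -> io::Result<()>;

    /// Default implementation: write each register individually.
    fn set_regs(&self, regs: &[RegEntry]) -> io::Result<()> {
        for r in regs {
            self.set_one_reg(r.reg_id, r.data)?;
        }
        Ok(())
    }
}

/// Mock backend that records writes, for demonstration only.
pub struct MockVcpu {
    pub written: RefCell<Vec<(u64, u64)>>,
}

impl Vcpu for MockVcpu {
    fn set_one_reg(&self, reg_id: u64, data: u64) -> io::Result<()> {
        self.written.borrow_mut().push((reg_id, data));
        Ok(())
    }
}
```

A backend with a faster bulk-set ioctl would simply override `set_regs`.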

Collaborator Author

I chose these to reach parity with what is provided by the kvm-ioctls crate, but it might have been overreaching for this PR since it's not something I'm familiar with. I saw an email fly by that someone is willing/able to do the ARM work for the kvm-ioctls crate, so maybe it makes sense to remove these until that work is done, and then adapt once those are fleshed out.

While I can't speak for KVM, the get_one_reg()/set_one_reg() functionality is something that would be useful for Hyper-V. So maybe it does make sense, if we keep it here, to have at least conditional compilation for ARM or Windows, and let the WHP side implement it (with, as you suggest, a hypervisor-specific enum as an associated type). Or have it for all platforms, and let KVM implement it with the unimplemented!() macro if there's no appropriate use for it.


Just in case it's relevant, please consider using the Arm AArch64 Cortex-A bare-metal register access crate here.

src/vcpu.rs (resolved)
src/vcpu.rs (resolved)
Cargo.toml (resolved)
@andreeaflorescu (Member) left a comment

I'd like to take some more time to understand how this would fit with the kvm-ioctls and higher-level crates before doing an in-depth review.

P.S. Please sign your commits. (git commit -s). Can you also add some more details to the commit description?

README.md (resolved)
README.md Outdated
## License

This project is licensed under Apache License, Version 2.0, (LICENSE or http://www.apache.org/licenses/LICENSE-2.0)
Member

Can you please update the license to MIT or Apache-2.0?
Please check the conversation on this issue: rust-vmm/community#14 (comment)

Collaborator Author

Done

Member

Can you please update this section and specify that it is licensed under Apache-v2.0 or MIT?

Collaborator Author

Fixed. Sorry for missing this (your originally cited request for license updates!) when I updated the licenses everywhere else.

src/vcpu.rs Outdated
/// public consumption.
///
/// These types use kvm_bindings under the hood, as they are not necessarily KVM-
/// specific, but rather generic x86/x86/ARM structures. Generic naming makes
Member

nit: x86 is specified two times. Did you mean x86/x86_64?

Collaborator Author

yup, fixed

@jennymankin (Collaborator Author) commented Apr 19, 2019

I'd like to take some more time to understand how this would fit with the kvm-ioctls and higher-level crates before doing an in-depth review.

P.S. Please sign your commits. (git commit -s). Can you also add some more details to the commit description?

No problem @andreeaflorescu. If you're interested, I have this POC branch porting kvm-ioctls to use this Vcpu crate, and it is passing the tests: https://github.com/jennymankin/kvm-ioctls/tree/poc-vcpu (apologies for the POC quality/commented-out code, etc.). As you can see, the changes are minimal.

I will address your other comments asap.

src/x86_64.rs Outdated
@@ -0,0 +1,331 @@
// Copyright 2018-2019 CrowdStrike, Inc.
Member

Design nit: I would split the code here into two files:

  1. One file: Windows-specific structures.
  2. lib.rs/other: the logic of selecting between the Linux vs Windows structures.

For example, we could have a windows module. In windows/mod.rs we would have the definitions of the Windows-specific structures and their tests. This would end up being the kvm-bindings of Windows. The nice thing about this approach is that you don't need to label each individual structure/test with the conditional compilation macros.

#[cfg(windows)]
pub struct LapicState {
    pub regs: [::std::os::raw::c_char; 1024usize],
}

impl Default for LapicState {
    fn default() -> Self {
        unsafe { ::std::mem::zeroed() }
    }
}

impl ::std::fmt::Debug for LapicState {
    fn fmt(&self, fmt: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
        self.regs[..].fmt(fmt)
    }
}

// …

#[cfg(test)]
mod tests {
    use super::*;

#[test]
fn vcpu_test_layout_lapic_state() {
    assert_eq!(
        ::std::mem::size_of::<LapicState>(),
        1024usize,
        concat!("Size of: ", stringify!(LapicState))
    );
    assert_eq!(
        ::std::mem::align_of::<LapicState>(),
        1usize,
        concat!("Alignment of ", stringify!(LapicState))
    );
    assert_eq!(
        unsafe { &(*(::std::ptr::null::<LapicState>())).regs as *const _ as usize },
        0usize,
        concat!(
            "Offset of field: ",
            stringify!(LapicState),
            "::",
            stringify!(regs)
        )
    );
}
}

Then the responsibility for choosing between the appropriate structure depending on the operating system would move to src/lib.rs:

#[cfg(windows)]
mod windows;

#[cfg(windows)]
pub use windows::LapicState;
#[cfg(unix)]
pub use kvm_bindings::kvm_lapic_state as LapicState;

// kvm_msrs looks the same on both platforms; just export it as Msrs
pub use kvm_bindings::kvm_msrs as Msrs;
// …

Collaborator Author

It looks like you're operating off an older commit, from before I redefined all the structures for Windows compilation. But your point not only still stands, it's probably even more important now that there are a ton of conditional-compilation statements with mixed Windows/Unix definitions. I'm working on splitting it out as you've suggested.

Collaborator Author

Split Windows definitions into a separate file as suggested

src/x86_64.rs Outdated
fn vcpu_test_layout_lapic_state() {
assert_eq!(
::std::mem::size_of::<LapicState>(),
1024usize,
Member

This will fail on Linux. The test code should be labeled with cfg(windows).

Collaborator Author

Fixed

src/x86_64.rs Outdated
}
}

#[test]
Member

Can you move all tests to a mod tests? We used this separation in other crates to get accurate coverage: when computing coverage we exclude the tests modules from the counted instrumented lines.

Collaborator Author

Can you provide more information on this? Looking at other rust-vmm repos, I'm only seeing some (like kvm-ioctls) with a tests directory that contains integration tests/pytests, but unit tests still look to be spread out in their respective files in kvm-ioctls and vm-memory.

Member

The unit tests are written in the same file as the code, only they are placed at the end of each file in a mod tests. This is the case for kvm-ioctls and vm-memory (I think). kvm-bindings contains auto-generated code, so there the tests are placed immediately after the code.

Check this file in kvm-ioctls: https://github.com/rust-vmm/kvm-ioctls/blob/master/src/ioctls/system.rs#L383. The tests module is then excluded when running kcov using --exclude-region.
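The layout being described, as a minimal sketch:

```rust
// Sketch of the convention described above: unit tests sit in a
// `mod tests` block at the end of the same source file, which coverage
// tooling (e.g. kcov with --exclude-region) can then skip wholesale.
pub fn add_one(x: u32) -> u32 {
    x + 1
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add_one() {
        assert_eq!(add_one(1), 2);
    }
}
```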

Collaborator Author

That makes sense, thanks for clarifying. Fixed as suggested.

src/x86_64.rs Outdated

/// Single MSR to be read/written
///
/// pub struct kvm_msr_entry {
Member

Any reason to have the kvm structure definitions as comments? The default Rust documentation will show the inner structure. To check how it looks you can run cargo doc --open.

Collaborator Author

Also related to an older/outdated commit: the comments were eliminated with the commit that removed the kvm_bindings dependency from Windows builds. Nevertheless, I appreciate the general guideline and the info about cargo doc for next time.

src/vcpu.rs (resolved)
@jennymankin force-pushed the vcpu-crate branch 3 times, most recently from 7361fad to 960900e (April 23, 2019 04:46)
@andreeaflorescu (Member) left a comment

I will look at all the code in a few hours; I've got some hours to burn on the plane.

Please sign each individual commit:
git commit --amend --no-edit -s

Also, please follow the 50/72 rule for your commit messages. The summary should be no more than 50 characters and each line of the description must be less than 72.

Details about the 50/72 rule: https://preslav.me/2015/02/21/what-s-with-the-50-72-rule/

Cargo.toml Outdated
[target.'cfg(unix)'.dev-dependencies]
kvm-ioctls = "0.1.0"

libc = ">=0.2.39"
Member

nit: add a newline.

Collaborator Author

Fixed


extern crate kvm_bindings;

#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub mod x86_64;
Member

I would suggest exporting the structures uniformly on both platforms. Instead of exporting them as mod x86_64 and mod arm, you can export the structures directly.
So basically, when you want to use some structures from this crate in an external crate, you would write:
use vmm_vcpu::StandardRegisters instead of use vmm_vcpu::x86_64::StandardRegisters.

My thinking here is that x86 and arm are not logical modules; they're just a way for us to separate the code into modules so we don't have a huge file with all the defines. What do you think?

Member

I am having second thoughts about it. Before making any change I would try to see how other multi-platform crates are doing this.

Collaborator Author

I went through the same thought process after reading your comments. On reading your first comment, I was in complete agreement that exporting the structures directly was cleaner. But having thought it over, I am now leaning toward keeping them as-is (exported as mod x86_64 and mod arm), because it forces the code to be explicit that the structures come from specific architectures. This seems particularly important given that the structures can have generic names (SpecialRegisters) but represent architecture-dependent data. Having scanned through various projects on GitHub, I'm not seeing much that exactly fits this use/hierarchy, so they are not directly comparable, but it does seem that exporting the module instead of the structures is more common.

Let me know if you have strong feelings either way, having thought about it over a couple days.

src/lib.rs Outdated
pub mod x86_64;

#[cfg(any(target_arch = "arm", target_arch = "aarch64"))]
mod arm;
Member

This should also be pub I think.

Collaborator Author

Fixed

@andreeaflorescu (Member) left a comment

Please add #![deny(missing_docs)] at the beginning of src/lib.rs and write documentation for the public structures and types.

Cpuid,
Canceled,

/// An out port instruction was run on the given port with the given data.
Member

It is relatively hard to follow the exits and map them to the KVM ones. I would suggest either having them in alphabetical order or in the order they appear in the kvm header. This can be done in a different PR, as they are in sort of random order in kvm-ioctls as well.

///
#[derive(Debug)]
pub enum VcpuExit<'a> {
None,
Member

Are the following exits Hyper-V exits? Is there any overlap between KVM and Hyper-V exits? If not, I would rather have them as two definitions: one VcpuExit for Hyper-V and one for KVM. Could you please point me to where these VM exits are defined for Windows? I kind of got lost googling for them.

Collaborator Author

Related to both this comment and the one above: There is some overlap. The top block are Windows-specific, the middle block are KVM-specific, and the bottom block are common between the two. Seeing as there's no real reason to separate them, maybe it makes sense to mix them all together in alphabetical order, as you've suggested (optionally commenting which ones are found in each hypervisor). An alternative is to keep them subcategorized, add a comment for each block as to where they can be found, and alphabetize within each block. Do you have a preference?

Documentation for Windows VM exits can be found here: https://docs.microsoft.com/en-us/virtualization/api/hypervisor-platform/funcs/whvrunvirtualprocessor

Member

Thanks for the documentation link. Regarding the documentation, we need to have documentation for all entries. For the KVM entries you can get away with copying the documentation from kvm-ioctls.

Regarding keeping them in one enum, I believe the overlap is so small that it is rather confusing to have them as one enum. We can wait for other people to chime in because it might be my personal preference.

Collaborator Author

Yup, adding #![deny(missing_docs)] as you requested above is indeed revealing many entries that are missing documentation, these included :)

I am not opposed to platform-specific VcpuExit enums, wrapped in conditional compilation (as discussed in the first round of comments on this PR, an associated type would have been my preference, but I think the requirement of carrying lifetime annotations everywhere makes it unappealing). Vcpu-defined per-platform exits may be the cleanest solution.

Collaborator Author

Added #![deny(missing_docs)] and updated documentation everywhere to fix the failures

Collaborator Author

And, agreeing with your suggestion, I split the VcpuExits up by platform. For each, I reordered the exits to match the enum/#define order in the relevant header files (they were actually pretty close to this already).

IrqWindowOpen,
}

pub type Result<T> = result::Result<T, io::Error>;
Member

Please add the documentation from kvm-ioctls:

/// A specialized `Result` type for KVM ioctls.
///
/// This typedef is generally used to avoid writing out io::Error directly and
/// is otherwise a direct mapping to Result.

It needs to be adapted so that it doesn't mention KVM.
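A possible adapted version (a sketch of the wording and typedef, not the final text):

```rust
use std::io;
use std::result;

/// A specialized `Result` type for vCPU operations.
///
/// This typedef is generally used to avoid writing out io::Error
/// directly and is otherwise a direct mapping to Result.
/// (Adapted from the kvm-ioctls doc comment, with KVM references
/// generalized.)
pub type Result<T> = result::Result<T, io::Error>;
```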

Collaborator Author

Done


kvm-bindings = "0.1"

[target.'cfg(unix)'.dev-dependencies]
kvm-ioctls = "0.1.0"
Member

Just noticed this. We will have a problem here because Rust doesn't allow cyclic dependencies between crates. Isn't the idea to have kvm-ioctls import vmm-vcpu as well?

@jennymankin (Collaborator Author) Apr 30, 2019

Good point about the circular dependency. This dependency isn't strictly necessary; it is only needed for the CpuId structure's doc-comment examples, which are run as tests by cargo test (hence dev-dependencies, which is a dependency for tests only).

Those tests/examples will become outdated anyway with the upcoming crate for "KvmVec"-like generic abstractions (and the CpuId and MsrEntries implementations), ie here. But when that change is incorporated, the same problem will remain: what is the best way to handle platform-specific doc-comment examples in abstraction crates? This looks to be a common problem, with a solution that the Rust developers agree should be allowed. But seeing as that issue is still open, I see three options, none perfect:

  1. Demonstrate API usage with specific (KVM and/or WHP) examples, which is the most instructive and useful way to show how to use the APIs, but results in this circular crate dependency when running tests. (Seems from the link above that a workaround could be to comment out dev-dependencies when publishing?)
  2. Examples could probably be made more generic, albeit less instructive, in order to demonstrate usage without dependencies.
  3. The platform-specific examples could remain, but the doc comment examples can be annotated with the ignore directive so that they are visible as documentation, but not actually run with cargo test.
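Option 3 could be sketched like this (hypothetical function; the `ignore` attribute on the doc-test fence keeps the example visible in rustdoc output but out of cargo test):

````rust
/// Runs the vCPU (platform-specific example kept out of `cargo test`).
///
/// # Example
///
/// ```ignore
/// // Would require kvm-ioctls as a dev-dependency to actually run:
/// let exit = vcpu.run().unwrap();
/// ```
pub fn docs_only_example() {}
````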


jennymankin and others added 3 commits April 30, 2019 15:45
Created a crate with trait abstractions for virtual CPUs, including
exporting necessary architecture-specific generic structures.
- Removed dependency on kvm_bindings for Windows platforms
- Used conditional compilation to export wrappers of kvm_bindings as
  generically-named structures for Unix-based platforms, and explicit
  definition of the same structures for Windows-based platforms.
- Exported mode and data types as public
- Changed licenses
- Created separate files for explicitly-defined Windows structures
- Moved all tests to a module at the end of a file
- Added #[deny(missing_docs)] and added documentation everywhere
- Divided VcpuExits by platform (conditionally compiled) and reordered
  by definition in platform-specific headers
- Fixed licenses
@zachreizner (Member)

So we discussed this at the rust-vmm PTG meetup. After talking for a couple of hours on the subject of this trait, we don't think the various hypervisors (e.g. Hyper-V, KVM) are similar enough to gain much benefit from this trait. Specifically, the VcpuExit structure is so different that any code making use of it would have to specialize for each trait impl, defeating the benefits. We would like to see a concrete example showing the benefits of a Vcpu trait, perhaps firecracker or crosvm supporting two different hypervisors using the same code. Without that, my disposition would be to not accept this PR. An alternative would be to pick and choose Vcpu-related code to break out into its own crates. For example, a hypervisor-agnostic CPUID crate would give lots of benefits without having to agree on a common Vcpu trait.

@jennymankin (Collaborator Author) commented May 3, 2019

Hi @zachreizner, thanks for summarizing your discussion at the PTG meetup since I wasn't able to make it. I'm sure over the couple hours of discussion you covered many aspects that led to this conclusion. Here are some of my thoughts.

You are certainly right that the differences in the VcpuExit structure (due to the underlying vCPU exits exposed by each hypervisor) mean that any code making use of the run() function would need to specialize its processing of the exits based on the hypervisor. This would need to either be accomplished directly at the layer performing the vcpu.run(), or might itself be abstracted within a higher-level crate. For example, a hypervisor-agnostic VM crate might utilize the trait generic (with VMM-specific references providing implementations of those operations). See, for example, the proposed issue to provide additional abstractions of a VM and a VMM that make use of abstracted vCPU functionality.

Getting crosvm/Firecracker to achieve parity with Hyper-V in addition to KVM is an ambitious goal, and it's true that doing so will require more layers than just swapping in a vCPU implementation of a generic trait. The specifics of what this would look like are something we'd like to look at, and focusing on/POCing the handling of the VcpuExit is a good suggestion.

Stepping back from these more-ambitious goals, I think the vCPU crate still offers opportunity for abstraction for common VMM-related operations in higher-level crates that utilize common vCPU functionality. The arch crate comes to mind. In development of the Hyper-V-based libwhp crate, some of the arch functionality had to be duplicated, stripped of KVM-specific objects and APIs, and imported as a separate libwhp-specific crate. The duplication was one of the motivations behind my proposal of the arch crate here for rust-vmm: it naturally lends itself to a hypervisor-agnostic solution that can be easily imported into different VMM projects. And as we discussed a couple weeks ago on the rust-vmm call, since those APIs accept the Vcpu trait generic as an input parameter, there is "zero cost" to the abstraction due to the static dispatch.

That is one example where the generic abstraction provided by the vCPU crate benefits other hypervisor-agnostic crates; I think it's reasonable to assume others exist. For example, we are also currently researching and developing a Windows loader crate; this makes use of these same vCPU APIs and abstraction implementations to set up the VM.

So independent of our goals to achieve interchangeable VMMs in ambitious projects like crosvm and Firecracker, I think that having generic crates abstracting lower-level functionality provides benefits to smaller-scale projects, like those that might be using rust-vmm crates as building blocks to their own VMMs.

@yisun-git

Hi, @zachreizner

I am a little bit confused here. Why do you think a different VcpuExit structure would defeat the benefit of the abstraction? In my view, the VcpuExit abstraction should be the union of all hypervisors' exit reasons. Each concrete Vcpu implementation then handles only the exit reasons it cares about; the differences are encapsulated in the concrete hypervisor crate, e.g. kvm-ioctls, and don't affect upper layers. The only disadvantage is that some exit reasons are redundant for a specific hypervisor, but I don't think that is a big issue.

Because the concrete hypervisor is selected statically in the upper layer, the Hypervisor/Vm/Vcpu abstractions should be zero cost while keeping the upper-layer code generic and elegant. This brings much benefit in making the upper-layer crates portable across platforms. So far we only have KVM/Hyper-V requirements, but there may be more hypervisors in the future.
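The "union of all exit reasons" idea above could look roughly like the sketch below. The variant split between hypervisors is illustrative (it assumes, for example, that a CPUID exit is Hyper-V-specific, per the discussion in this thread), not a definitive catalog:

```rust
// Hypothetical sketch of a "union" VcpuExit spanning several hypervisors.
// Variant names and their hypervisor attribution are assumptions for
// illustration, not the actual vmm-vcpu enum.
pub enum VcpuExit<'a> {
    IoOut(u16 /* port */, &'a [u8] /* data */),
    IoIn(u16 /* port */, &'a mut [u8] /* data */),
    MmioRead(u64 /* address */, &'a mut [u8]),
    MmioWrite(u64 /* address */, &'a [u8]),
    Hlt,
    // Reasons only some hypervisors report; a concrete implementation
    // simply never constructs the variants its hypervisor can't produce.
    Cpuid,
    MsrAccess,
}

// Upper-layer code matches on the full set regardless of hypervisor.
fn describe(exit: &VcpuExit) -> &'static str {
    match exit {
        VcpuExit::IoOut(..) => "port I/O write",
        VcpuExit::IoIn(..) => "port I/O read",
        VcpuExit::MmioRead(..) => "MMIO read",
        VcpuExit::MmioWrite(..) => "MMIO write",
        VcpuExit::Hlt => "halt",
        VcpuExit::Cpuid => "cpuid exit",
        VcpuExit::MsrAccess => "MSR access exit",
    }
}
```

The redundancy this comment concedes is visible here: a KVM-backed loop would carry match arms it can never hit.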

@zachreizner
Member

Responding to @yisun-git: a union of the VcpuExit enum for all hypervisors would be inappropriate because some of the vCPU exits would actually lead to contradictions depending on the hypervisor in use. For example, Hyper-V has an exit for CPUID that the VMM can use to fill in CPUID registers, while KVM has no such exit, opting for a VM-wide KVM_SET_CPUID call. If I were writing crosvm/Firecracker to use the proposed union VcpuExit, I would need to both set the CPUID (a no-op on Hyper-V) and respond to the CPUID exit (an impossibility on KVM), or implement this particular feature in the vmm-vcpu crate, which would make this crate more high-level than intended. Having thought this through, I can't see it providing benefits.
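The contradiction being described can be made concrete with a small sketch. Everything here is illustrative (a two-variant toy enum and an `on_kvm` flag standing in for the hypervisor choice); the point is that a portable run loop over a union enum must carry arms that are unreachable on one hypervisor:

```rust
// Illustrative sketch of the CPUID contradiction, not real vmm-vcpu code.
enum VcpuExit {
    Hlt,
    Cpuid,
}

fn handle_exit(exit: VcpuExit, on_kvm: bool) -> &'static str {
    match exit {
        VcpuExit::Hlt => "halt",
        VcpuExit::Cpuid if on_kvm => {
            // On KVM this arm can never fire: CPUID results are set up
            // front via KVM_SET_CPUID and the guest never exits to the
            // VMM for CPUID. The portable loop must carry it anyway.
            unreachable!("KVM never reports a CPUID exit")
        }
        VcpuExit::Cpuid => "fill in CPUID registers (Hyper-V path)",
    }
}
```

So the VMM ends up doing both the up-front KVM_SET_CPUID call and the exit handling, which is the duplication this comment objects to.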

Responding to @jennymankin: I think we are on the same page in that there is plenty of arch related code that we would like shared, but disagree on the best way to achieve that goal. A Vcpu trait at first seems obvious, but the more we went through the mental exercise of understanding how it might be used, the more holes we found. Hypervisors are too different to really make the Vcpu trait work well. A better way to achieve the goal of sharing arch code is for the arch crate itself to export types that are consumed by hypervisor implementations. Using the setup_fpu example:

// arch crate
struct Fpu { ... }
pub fn setup_fpu() -> Fpu {
    Fpu {
        fcw: 0x37f,
        mxcsr: 0x1f80,
        ..Default::default()
    }
}

// vmm binary using kvm (e.g. crosvm/firecracker)
pub fn setup_vcpu(vcpu: &KvmVcpu) -> Result<()> {
    let fpu = arch::setup_fpu();
    vcpu.setup_fpu(fpu)
}
// vmm binary using hyperv
pub fn setup_vcpu(vcpu: &HyperVcpu) -> Result<()> {
    let fpu = arch::setup_fpu();
    vcpu.setup_fpu(fpu)
}

The nice thing about this is that it more cleanly separates the logic of setting up the Fpu data structure in arch from the concrete business of programming the hypervisor in an actual VMM implementation. It also means we don't need to agree on a constellation of traits that each hypervisor must implement but that don't really fit any of them.

@jennymankin
Collaborator Author

Hi @zachreizner, thanks for detailing the proposed alternative. I wanted to take some time to think through your suggestion.

What you're suggesting does make sense for much of the arch functionality, where the goal of a function is to configure some state (registers, FPU state, etc.) and then set it on the vCPU. That can be cleanly ported so that the function returns the configured structure to the caller, which does the actual setting on the vCPU. In this sense, the arch crate provides structure definitions plus functions that fill in values for a specific struct.

But I think this design breaks down for more complex functions, which might perform more than one operation on the vCPU (e.g., get a value from the vCPU, manipulate it, and set the new value). It moves the burden of managing each of these operations to each vCPU implementation itself. (Granted, the current arch crate is pretty small, but we could see more arch-related helper functionality being added in the future.) Each vCPU implementation also gets cluttered with even the simple functions (like the setup_fpu you cited), having to implement common functionality that could easily be abstracted behind a generic type. I just don't see the benefit of this when the abstraction comes so easily (especially on KVM, because it's already using the same data types). And since a (virtual) CPU and its architecture are directly related, I don't think there's any inelegance in having these architecture-specific vCPU helpers take a generic reference to a vCPU.
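The get-manipulate-set pattern described above is awkward to express as a pure "return a struct" arch function, because it needs to read vCPU state first. A hedged sketch, with all names (`Vcpu`, `Sregs`, `enable_protected_mode`, the `CR0_PE` bit) chosen for illustration rather than taken from the actual crates:

```rust
// Hypothetical sketch of a read-modify-write arch helper generic over a
// Vcpu trait. Names are illustrative, not the real arch/vmm-vcpu API.
#[derive(Default, Clone)]
pub struct Sregs {
    pub cr0: u64,
}

pub trait Vcpu {
    fn get_sregs(&self) -> Sregs;
    fn set_sregs(&mut self, sregs: &Sregs);
}

const CR0_PE: u64 = 1; // protected-mode enable bit in CR0

// Get, modify, set in one shared place; without the trait, every
// hypervisor implementation would repeat this sequence itself.
pub fn enable_protected_mode<T: Vcpu>(vcpu: &mut T) {
    let mut sregs = vcpu.get_sregs();
    sregs.cr0 |= CR0_PE;
    vcpu.set_sregs(&sregs);
}

// Toy implementation standing in for a concrete hypervisor vCPU.
pub struct FakeVcpu {
    pub sregs: Sregs,
}

impl Vcpu for FakeVcpu {
    fn get_sregs(&self) -> Sregs {
        self.sregs.clone()
    }
    fn set_sregs(&mut self, sregs: &Sregs) {
        self.sregs = sregs.clone();
    }
}
```

Under the returned-struct design, the read half of this sequence would still have to live in each VMM, which is the burden being pointed out.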

And from the WHP side, having done the implementation of the trait, the effort isn't in finding similar vCPU functionality between different hypervisors--it's pretty easy to identify the core functionality shared between even these two very different hypervisors (where additional/non-overlapped functionality can be implemented separately). They get and set registers, state, MSRs, etc. The effort is in the translation between data structure types, which in this case, is a burden that falls on the WHP side (since we're standardizing on kvm data structures). This translation effort is present regardless of which approach is used; it's either done ad-hoc for each arch-level implementation function in the vCPU, or it's done once in primitive operations (like get_regs/set_regs) with the higher-level setup functions taking advantage of this translation functionality.

(And I know I'm leaning on the arch crate as example here, but I think it's a good example of a higher-level crate that provides useful functionality by consuming lower-level functionality.)

Secondarily, we (along with Cloudbase) will continue to work through the abstraction and whether or not it provides benefit for full VMM solutions like Firecracker/Crosvm.
