A KVM Virtual Machine in About a Hundred Lines of Code

Mar 10, 2024 20:00 · 3202 words · 7 minute read Linux Virtualization

Original post: https://zserge.com/posts/kvm/


KVM is a virtualization technology provided by the Linux kernel. In other words, it lets you run multiple virtual machines (VMs) on a single Linux host; in this context the VMs are called guests. If you have ever used QEMU or VirtualBox on Linux, you already know of KVM. But how does virtualization actually work under the hood?

ioctl

KVM exposes a special character device, /dev/kvm, as its API entry point. Opening this device yields a handle to the KVM subsystem; ioctl syscalls on that handle allocate resources and launch VMs. The file descriptors returned by certain ioctls can in turn be controlled with further ioctls. The KVM API has only these few levels:

  • System-level ioctls: query and set the configuration of the KVM subsystem as a whole; this level also includes the ioctl that creates a VM

  • VM-level ioctls: query and set a VM's configuration, e.g. its memory layout; also used to create VCPUs and devices

  • VCPU-level ioctls: query and set an individual VCPU

    VCPU-level ioctls must be issued from the thread that created the VCPU; otherwise the first ioctl after a thread switch pays a performance penalty (see the sketch after this list)

  • I/O device-level ioctls: query and set devices (NICs, disks)

    These must be issued from the process (address space) that created the VM
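
As a sketch of that threading rule (hypothetical code, not from the original post): a VMM typically dedicates one host thread to each VCPU and both creates and runs the VCPU on that thread.

#include <linux/kvm.h>
#include <pthread.h>
#include <sys/ioctl.h>

// Hypothetical per-VCPU thread: create the VCPU here and issue all of its
// ioctls (including KVM_RUN) from this same thread.
static void *vcpu_thread(void *arg) {
    int vm_fd = *(int *)arg;
    int vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0); // created on this thread...
    for (;;) {
        ioctl(vcpu_fd, KVM_RUN, 0); // ...and always run on this thread
        // inspect the exit reason here (see the run loop later in this post)
    }
    return NULL;
}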

KVM API:https://www.kernel.org/doc/html/latest/virt/kvm/api.html
ioctl syscall:https://man7.org/linux/man-pages/man2/ioctl.2.html

Let's try it out:

// KVM layer
int kvm_fd = open("/dev/kvm", O_RDWR);
int version = ioctl(kvm_fd, KVM_GET_API_VERSION, 0);
printf("KVM version: %d\n", version);

// Create VM
int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);

// Create VM Memory
#define RAM_SIZE 0x10000
void *ram_start = mmap(NULL, RAM_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
struct kvm_userspace_memory_region region = {
    .slot = 0,
    .guest_phys_addr = 0,
    .memory_size = RAM_SIZE,
    .userspace_addr = (uintptr_t) ram_start,
};
ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);

// Create VCPU
int vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0);

  1. Open the KVM character device: kvm_fd = open("/dev/kvm", O_RDWR)

    To talk to the KVM subsystem we first open /dev/kvm read-write and obtain a file descriptor; this is exactly what QEMU does

  2. Create the VM "chassis": vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0)

    At this point it holds nothing (no CPU, no memory); it is just an empty shell

  3. "Plug in the RAM" by mmap-ing user-space memory: ram_start = mmap(NULL, RAM_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0)

    Small allocations usually go through brk; large ones go through mmap, which on success returns the start address of the mapping

  4. Initialize the VM's memory: ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region)

  5. Create and plug in a VCPU: vcpu_fd = ioctl(vm_fd, KVM_CREATE_VCPU, 0)

At this point we have created a new VM and given it memory and a VCPU. For the VM to actually run something, we still need to load a guest image and set up the CPU registers properly.

Loading an image

Assuming the image is guest.bin, we simply read the file and copy it into the VM's memory. mmap would of course work too.

int bin_fd = open("guest.bin", O_RDONLY);
if (bin_fd < 0) {
    fprintf(stderr, "cannot open binary file: %d\n", errno);
    return 1;
}
char *p = (char *)ram_start;
for (;;) {
    int r = read(bin_fd, p, 4096); // 4KB
    if (r <= 0) {
        break;
    }
    p += r;
}
close(bin_fd);

The loop reads the file into the VM's memory address space 4 KB at a time.

The guest.bin image must contain valid bytecode for the host CPU architecture, because KVM does not interpret CPU instructions one at a time (as full emulation would); it lets the real CPU execute them directly and only intercepts I/O. That is why a modern VM can perform close to bare metal, unless it does heavy I/O.

Here is a mini "kernel":

.globl _start
.code16 # 16bit
_start:
    xorw %ax, %ax
loop:
    out %ax, $0x10
    inc %ax
    jmp loop

This assembly runs an infinite loop that keeps incrementing the AX register and writing its value to I/O port 0x10.

$ curl -L -o guest.asm https://gist.githubusercontent.com/zserge/d68683f17c68709818f8baab0ded2d15/raw/b79033254b092ec9121bb891938b27dd128030d7/guest.S
$ as -32 guest.asm -o guest.o
$ ld -m elf_i386 --oformat binary -N -e _start -Ttext 0x10000 -o guest guest.o
$ ll
-rwxr-xr-x 1 root root   7 Mar 10 05:58 guest
-rw-r--r-- 1 root root 106 Mar 10 05:56 guest.asm
-rw-r--r-- 1 root root 496 Mar 10 05:57 guest.o
$ file guest
guest: data

ELF is the format of binary executables on Linux/Unix: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format

We compile and link it as an old-fashioned 16-bit program because a KVM VCPU, just like a real x86 processor, supports multiple operating modes (real mode, protected mode). Real mode is the simplest, and our "kernel" is tiny anyway. Real mode also addresses memory directly (limited to 1 MB), with no GDT (Global Descriptor Table) or segment descriptors involved; all in all, initializing registers is simpler in real mode:

struct kvm_sregs sregs;
ioctl(vcpu_fd, KVM_GET_SREGS, &sregs);
// Initialize the selectors and bases with zeros
sregs.cs.selector = sregs.cs.base = 0;
sregs.ss.selector = sregs.ss.base = 0;
sregs.ds.selector = sregs.ds.base = 0;
sregs.es.selector = sregs.es.base = 0;
sregs.fs.selector = sregs.fs.base = 0;
sregs.gs.selector = sregs.gs.base = 0;
// Save special registers
ioctl(vcpu_fd, KVM_SET_SREGS, &sregs);

// Initialize and save normal registers
struct kvm_regs regs = {0}; // zero everything first; garbage register values can crash the guest
regs.rflags = 2; // bit 1 must always be set to 1 in EFLAGS and RFLAGS
regs.rip = 0; // our code runs from address 0
ioctl(vcpu_fd, KVM_SET_REGS, &regs);

This initializes a pile of registers: CS (code segment), SS (stack segment), DS (data segment), ES (extra segment), and so on.

Running

The "kernel" code is loaded and the registers are ready. To run the VM we need a pointer to each VCPU's run state, and then a loop in which the VM keeps running until it is interrupted by I/O or some other operation and hands control back to the host.

int runsz = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
struct kvm_run *run = (struct kvm_run *) mmap(NULL, runsz, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu_fd, 0);

for (;;) {
    ioctl(vcpu_fd, KVM_RUN, 0);
    switch (run->exit_reason) {
    case KVM_EXIT_IO:
        printf("IO port: %x, data: %x\n", run->io.port, *(int *)((char *)(run) + run->io.data_offset));
        break;
    case KVM_EXIT_SHUTDOWN:
        return;
    }
}

KVM exposes a VCPU's run state to userspace through a memory region (struct kvm_run):

  1. Ask the KVM API for the size of that region: runsz = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0)
  2. mmap(NULL, runsz, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu_fd, 0) maps it into our program's memory so we can read it

Compile the complete program and run it:

$ curl -L -O https://gist.githubusercontent.com/zserge/d68683f17c68709818f8baab0ded2d15/raw/b79033254b092ec9121bb891938b27dd128030d7/kvm-host-simple.c
$ gcc kvm-host-simple.c -o kvm-vmm
$ ./kvm-vmm guest
IO port: 10, data: 0
IO port: 10, data: 1
IO port: 10, data: 2
IO port: 10, data: 3
IO port: 10, data: 4
IO port: 10, data: 5
IO port: 10, data: 6

It works!

The full source code is at https://gist.github.com/zserge/d68683f17c68709818f8baab0ded2d15

You call that a kernel?

That was obviously too simple. What about running an actual Linux kernel?

The beginning is exactly the same: open /dev/kvm, create the VM, and so on. But we need a few extra VM-level ioctls: one to add a timer, one to initialize the TSS (Task State Segment), and one to add an interrupt controller:

ioctl(vm_fd, KVM_SET_TSS_ADDR, 0xffffd000);
uint64_t map_addr = 0xffffc000;
ioctl(vm_fd, KVM_SET_IDENTITY_MAP_ADDR, &map_addr);
ioctl(vm_fd, KVM_CREATE_IRQCHIP, 0);
struct kvm_pit_config pit = { .flags = 0 };
ioctl(vm_fd, KVM_CREATE_PIT2, &pit);

The register initialization has to change as well. The Linux kernel runs in protected mode, so we enable it via the registers:

sregs.cs.base = 0;
sregs.cs.limit = ~0;
sregs.cs.g = 1;

sregs.ds.base = 0;
sregs.ds.limit = ~0;
sregs.ds.g = 1;

sregs.fs.base = 0;
sregs.fs.limit = ~0;
sregs.fs.g = 1;

sregs.gs.base = 0;
sregs.gs.limit = ~0;
sregs.gs.g = 1;

sregs.es.base = 0;
sregs.es.limit = ~0;
sregs.es.g = 1;

sregs.ss.base = 0;
sregs.ss.limit = ~0;
sregs.ss.g = 1;

sregs.cs.db = 1;
sregs.ss.db = 1;
sregs.cr0 |= 1; // enable protected mode

regs.rflags = 2;
regs.rip = 0x100000; // This is where our kernel code starts
regs.rsi = 0x10000; // This is where our boot parameters start

The reason we can no longer load the kernel at address 0 is that kernel images follow a special "boot protocol": a fixed header carrying the boot parameters comes first, and only after it comes the actual kernel bytecode.

That format is bzImage: https://en.wikipedia.org/wiki/Vmlinux#bzImage

Loading the kernel image

To load the kernel image into our VM correctly, we first read the entire bzImage file, look at offset 0x1f1 to get the number of setup sectors, and skip past them to where the kernel code begins. We also copy the boot parameters from the beginning of the bzImage into the boot-parameter area of VM memory (0x10000). A sketch of this follows.
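
A minimal sketch of that loading logic, modeled on the gist's kvm-host.c (the helper name load_bzimage and the bzimage/len buffer are illustrative assumptions, not the post's exact code):

#include <asm/bootparam.h> // struct boot_params / struct setup_header
#include <stdint.h>
#include <string.h>

// Copy a bzImage (already read into `bzimage`, `len` bytes long) into guest
// RAM; `ram_start` is the host mapping of guest physical address 0.
void load_bzimage(uint8_t *ram_start, const uint8_t *bzimage, size_t len) {
    const struct boot_params *bp = (const struct boot_params *)bzimage;
    // Offset 0x1f1 holds setup_sects; 0 means the legacy default of 4.
    uint8_t setup_sects = bp->hdr.setup_sects ? bp->hdr.setup_sects : 4;
    // The real-mode setup code occupies (setup_sects + 1) 512-byte sectors;
    // the protected-mode kernel starts right after it.
    size_t kernel_off = (setup_sects + 1) * 512;

    // Boot parameters go to guest 0x10000 (where RSI will point)...
    memcpy(ram_start + 0x10000, bzimage, sizeof(struct boot_params));
    // ...and the kernel code to guest 0x100000 (where RIP will point).
    memcpy(ram_start + 0x100000, bzimage + kernel_off, len - kernel_off);
}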

Even that is not enough: we still have to patch the VM's boot parameters, forcing VGA mode and initializing the command-line pointer, roughly as shown below.
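
Roughly like this, following the gist's kvm-host.c (the flag macros CAN_USE_HEAP, LOADED_HIGH, and KEEP_SEGMENTS come from asm/bootparam.h):

struct boot_params *boot = (struct boot_params *)((uint8_t *)ram_start + 0x10000);
boot->hdr.vid_mode = 0xFFFF;      // force VGA mode
boot->hdr.type_of_loader = 0xFF;  // "undefined" bootloader ID
boot->hdr.loadflags |= CAN_USE_HEAP | LOADED_HIGH | KEEP_SEGMENTS;
boot->hdr.heap_end_ptr = 0xFE00;
boot->hdr.cmd_line_ptr = 0x20000; // guest-physical address of the command line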

We want the kernel to print its log to ttyS0, so we can intercept that I/O and have the host print it to stdout. To do this, we add console=ttyS0 to the kernel command line, as in the sketch below.
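
Concretely (again a sketch in the spirit of the gist's kvm-host.c): write the command line into guest memory at 0x20000, where cmd_line_ptr points, and handle the COM1 serial port (0x3f8) in the KVM_RUN exit loop:

// Kernel command line at guest 0x20000:
memcpy((uint8_t *)ram_start + 0x20000, "console=ttyS0", sizeof("console=ttyS0"));

// Inside the KVM_RUN loop:
case KVM_EXIT_IO:
    if (run->io.direction == KVM_EXIT_IO_OUT && run->io.port == 0x3f8) {
        // COM1 data register: forward the guest's serial output to stdout
        fwrite((char *)run + run->io.data_offset, run->io.size, run->io.count, stdout);
        fflush(stdout);
    } else if (run->io.direction == KVM_EXIT_IO_IN && run->io.port == 0x3f8 + 5) {
        // COM1 line status register: claim "transmit holding register empty"
        *((char *)run + run->io.data_offset) = 0x20;
    }
    break;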

We also have to give the kernel a fake CPU ID, or it will not boot; see the sketch below.
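
A sketch of that CPUID setup, close to what the gist's kvm-host.c does (KVM_CPUID_SIGNATURE and KVM_CPUID_FEATURES come from asm/kvm_para.h):

#include <asm/kvm_para.h>
#include <stdlib.h>

struct kvm_cpuid2 *cpuid = calloc(1, sizeof(*cpuid) + 100 * sizeof(*cpuid->entries));
cpuid->nent = 100;
ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, cpuid); // system-level ioctl
for (unsigned int i = 0; i < cpuid->nent; i++) {
    struct kvm_cpuid_entry2 *entry = &cpuid->entries[i];
    if (entry->function == KVM_CPUID_SIGNATURE) {
        // Advertise the "KVMKVMKVM" hypervisor signature to the guest
        entry->eax = KVM_CPUID_FEATURES;
        entry->ebx = 0x4b4d564b; // "KVMK"
        entry->ecx = 0x564b4d56; // "VMKV"
        entry->edx = 0x4d;       // "M"
    }
}
ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid); // VCPU-level ioctl
free(cpuid);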

I used a kernel built from a minimal config, with a few options tweaked to support the console and virtio; a hypothetical fragment of such a config follows.
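
Something along these lines (an illustrative fragment; the exact option set depends on the kernel version, but these symbols exist in mainline):

CONFIG_TTY=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MMIO=y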

The full source code is at https://gist.github.com/zserge/ae9098a75b2b83a1299d19b79b5fe488 (the image the author provides has problems; we use the one from the comments):

$ curl -L -O https://gist.githubusercontent.com/zserge/ae9098a75b2b83a1299d19b79b5fe488/raw/fd60a03c64208bd1edd1de63a2542592d588b237/kvm-host.c
$ curl -L -o test-bzImage https://gist.github.com/ricarkol/60511f3a4d213bbb700b99429c04088e/raw/221404b7fa9e5c45350a780382d83c11c2d7b858/test-bzImage2
$ gcc kvm-host.c -o kvm-vm
$ ./kvm-vm test-bzImage
Linux version 4.19.28+ (kollerr@oc6638465227.ibm.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)) #1 Thu Aug 6 22:57:04 EDT 2020
Command line: console=ttyS0
Intel Spectre v2 broken microcode detected; disabling Speculation Control
Disabled fast string operations
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
BIOS-provided physical RAM map:
BIOS-88: [mem 0x0000000000000000-0x000000000009efff] usable
BIOS-88: [mem 0x0000000000100000-0x00000000030fffff] usable
NX (Execute Disable) protection: active
tsc: Fast TSC calibration using PIT
tsc: Detected 2297.305 MHz processor
last_pfn = 0x3100 max_arch_pfn = 0x400000000
x86/PAT: Configuration [0-7]: WB  WT  UC- UC  WB  WT  UC- UC
Using GB pages for direct mapping
Zone ranges:
  DMA32    [mem 0x0000000000001000-0x00000000030fffff]
  Normal   empty
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x0000000000001000-0x000000000009efff]
  node   0: [mem 0x0000000000100000-0x00000000030fffff]
Reserved but unavailable: 98 pages
Initmem setup node 0 [mem 0x0000000000001000-0x00000000030fffff]
[mem 0x03100000-0xffffffff] available for PCI devices
clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
Built 1 zonelists, mobility grouping on.  Total pages: 12253
Kernel command line: console=ttyS0
Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)
Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)
Memory: 37216K/49784K available (4104K kernel code, 266K rwdata, 160K rodata, 448K init, 1328K bss, 12568K reserved, 0K cma-reserved)
Kernel/User page tables isolation: enabled
NR_IRQS: 4352, nr_irqs: 24, preallocated irqs: 16
Console: colour VGA+ 142x228
console [ttyS0] enabled
APIC: ACPI MADT or MP tables are not detected
APIC: Switch to virtual wire mode setup with no configuration
Not enabling interrupt remapping due to skipped IO-APIC setup
clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x211d4366b08, max_idle_ns: 440795310879 ns
Calibrating delay loop (skipped), value calculated using timer frequency.. 4594.61 BogoMIPS (lpj=9189220)
pid_max: default: 4096 minimum: 301
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
Disabled fast string operations
Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
CPU: Intel 06/3f (family: 0x6, model: 0x3f, stepping: 0x2)
Spectre V2 : Spectre mitigation: kernel not compiled with retpoline; no mitigation available!
Speculative Store Bypass: Vulnerable
Performance Events: Haswell events, 16-deep LBR, full-width counters, Intel PMU driver.
... version:                2
... bit width:              48
... generic registers:      4
... value mask:             0000ffffffffffff
... max period:             00007fffffffffff
... fixed-purpose events:   3
... event mask:             000000070000000f
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off
clocksource: Switched to clocksource tsc-early
platform rtc_cmos: registered platform RTC device (no PNP device found)
workingset: timestamp_bits=62 max_order=14 bucket_order=0
Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
i8042: Can't read CTR while initializing i8042
i8042: probe of i8042 failed with error -5
random: get_random_bytes called from 0xffffffff810215ee with crng_init=0
sched_clock: Marking stable (1158516470, 1969431915)->(3986254069, -858305684)
Freeing unused kernel image memory: 448K
Write protecting the kernel read-only data: 8192k
Freeing unused kernel image memory: 2032K
Freeing unused kernel image memory: 1888K
Run /sbin/init as init process
Run /etc/init as init process
Run /bin/init as init process
Run /bin/sh as init process
Kernel panic - not syncing: No working init found.  Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.
Kernel Offset: disabled
---[ end Kernel panic - not syncing: No working init found.  Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance. ]---

Obviously this result is not very useful: there is no initrd, no root partition, and no actual applications. But it shows that KVM is not scary; it is in fact a rather powerful tool.

Conclusion

Compared with raw KVM, libvirt + QEMU is somewhat friendlier (though not by much). If you want to dig deeper, I recommend reading the source of kvmtool: it is far easier to follow than QEMU, and the whole project is much smaller.