AArch64
The AArch64 port of OSv has been progressing since 2015, and this ongoing effort can be broken down into three waves. The initial and most fundamental work was done by Claudio Fontana and others from Huawei Technologies Duesseldorf GmbH in 2015. The second wave of contributions, adding Xen support, came from Sergiy Kibrik in 2017. The latest effort was picked up by Waldemar Kozaczuk in late 2019.
As of this writing, OSv can boot both in emulated AArch64 mode on QEMU on X64 hardware and on QEMU and Firecracker on AArch64 hardware with KVM enabled. The latter has been tested on a Raspberry PI 4B running Ubuntu 19.10, 20.04, and 20.10. The Xen support has not been tested in any way.
On both QEMU and Firecracker, the virtio-blk, virtio-net, virtio-rng, and serial console devices are recognized and supported; on QEMU the virtio support is PCI-based (though without the MSI/MSI-X extension), while on Firecracker it is MMIO-based. Furthermore, one can run simple applications as well as some unit tests loaded from a RAMFS or ROFS disk (ZFS is not supported at this time due to a bug; see below). Networking appears to function as well: DHCP works and an OSv guest responds to ping from the host. More complicated apps like iperf3 or web servers had not been tested earlier, as they failed to load due to apparent limitations in the dynamic linker; after the recently added static TLS support and the musl upgrade that enabled floating-point support for aarch64, some of these apps, including cli, seem to be working fine.
- Added support for AWS Firecracker
- Improved the process of building the kernel and unit tests on Fedora
- Enhanced loader.py to support better debugging of OSv on AArch64
- Pre-parsed configuration from the DTB instead of relocating it
- Added floating-point support as part of the musl 1.1.24 upgrade
- Fixed memory mapping initialization when booting with 2 or more vCPUs
- Implemented sigsetjmp/siglongjmp
- Implemented feenableexcept/fedisableexcept/fegetexcept
- Implemented static TLS support
- aarch64: sometimes triggers a synchronous exception of type IL (Instruction Length) when invoking ELF INIT functions
- Improved the process of building the kernel on Ubuntu
- aarch64: dynamically map kernel code in early boot
- aarch64: bugs in arm-clock
- ZFS does not work (it gets into an infinite loop); this seems to be caused by an unhandled synchronous exception when the access bit is disabled in a PTE (Page Table Entry) by core/pagecache.cc.
- Lack of dynamic TLS support for applications
- Single stack only
- Like the X64 port, the AArch64 port needs dedicated interrupt, exception, and syscall stacks
- No GICv2m or GICv3 support, therefore no MSI or MSI-X at the moment
As of this writing, the AArch64 version can only be cross-compiled, on both Fedora and Ubuntu, with AArch64 artifacts (libraries and headers) downloaded. If you happen to have a different Linux distribution, you can always use the Fedora OSv development container or the Ubuntu OSv development container. It should also be possible to build the AArch64 version of OSv natively on Ubuntu on ARM hardware like a Raspberry PI 4, Odroid N2, or RockPro64.
Please note that most applications (osv-apps) and modules cannot be built for aarch64 yet. At this moment only the unit tests (the 'tests' module) can be built for aarch64 (please see the example below).
You can build either RAMFS or ROFS images like so:
./scripts/build image=empty fs=ramfs -j4 arch=aarch64
./scripts/build image=tests fs=rofs -j4 arch=aarch64 --create-disk
You can run OSv either in emulated AArch64 mode on QEMU on X64 hardware or on QEMU and Firecracker on AArch64 hardware with KVM enabled. Here is an example of using run.py to run OSv in emulated mode on QEMU:
./scripts/run.py --arch=aarch64 -e '/tests/tst-hello.so'
Please note that in order to run OSv on Firecracker you need to manually change the kernel addresses from 0x40080000 to 0x80080000 like so:
diff --git a/Makefile b/Makefile
index 3c609a1c..8bc0953d 100644
--- a/Makefile
+++ b/Makefile
@@ -451,8 +451,8 @@ endif # x64
 ifeq ($(arch),aarch64)
-kernel_base := 0x40080000
-kernel_vm_base := 0x40080000
+kernel_base := 0x80080000
+kernel_vm_base := 0x80080000
 app_local_exec_tls_size := 0x0
 include $(libfdt_base)/Makefile.libfdt
This is necessary until we address the relevant shortcoming.
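Once the kernel is rebuilt with these addresses, the image can be launched under Firecracker; a hedged sketch, assuming the scripts/firecracker.py helper in the OSv tree (added as part of the Firecracker support mentioned above) also handles aarch64:
./scripts/firecracker.py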
Debugging OSv on QEMU is as simple as pointing gdb to the AArch64 version of loader.elf like so:
gdb build/release.aarch64/loader.elf
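From there, one can attach to a running guest; a minimal sketch, assuming QEMU was started with its -s option (which exposes a gdb stub on TCP port 1234) and that the enhanced loader.py mentioned earlier supplies the OSv-aware gdb commands:
(gdb) target remote :1234
(gdb) source scripts/loader.py
(gdb) bt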
The easiest way is non-SSD boot, which simply requires installing the Raspberry PI version of Ubuntu 20.04 from https://ubuntu.com/download/raspberry-pi on an SD card. However, the performance is not going to be great, as disk I/O with SD is much worse than with SSD. Setting up a Raspberry PI 4 so it can boot Ubuntu from SSD is more involved, but you get much better performance.
Steps for SSD boot (somewhat based on https://tynick.com/blog/05-22-2020/raspberry-pi-4-boot-from-usb/, https://www.raspberrypi.org/forums/viewtopic.php?t=275291 and https://www.raspberrypi.org/forums/viewtopic.php?f=131&t=268476#p1634061):
- Install 64-bit Raspberry Pi OS on an SD card.
- Boot it and upgrade the boot loader, following the first article.
- At this point you should be able to boot from SSD without the SD card if you installed 64-bit Raspberry Pi OS on the SSD, but we want Ubuntu 20.04. The 64-bit Raspberry Pi OS is fairly outdated in terms of available development packages like gcc (only 8.3) or QEMU, which is pretty much unavailable in a usable form.
- Using a laptop with Ubuntu, get the Raspberry Pi 64-bit version of Ubuntu 20.04 and install it on the SSD, for example by using the “Disks” app (restore disk, not partition).
- Mount the 64-bit Raspberry Pi OS SD card from the first step on the same laptop.
- Copy (overwrite) start4.elf and fixup4.dat from the Raspberry Pi OS boot partition to the Ubuntu boot partition.
- Edit config.txt on Ubuntu boot partition by following https://www.raspberrypi.org/forums/viewtopic.php?f=131&t=268476#p1634061.
- Uncompress vmlinuz from the Ubuntu boot partition and overwrite it with the uncompressed copy (as sketched below).
- After each 'apt-get upgrade', repeat the previous step if the kernel was updated.
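Here is a hedged sketch of the copy and uncompress steps above; the mount points are assumptions, so adjust them to wherever your two boot partitions are actually mounted:
cd /media/$USER/system-boot          # Ubuntu boot partition (assumed mount point)
cp /media/$USER/boot/start4.elf .    # from the Raspberry Pi OS boot partition
cp /media/$USER/boot/fixup4.dat .
zcat vmlinuz > vmlinux               # uncompress the kernel image
mv vmlinux vmlinuz                   # replace vmlinuz with the uncompressed copy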
To prepare for OSv:
- sudo apt-get install qemu-kvm - to install QEMU
- sudo usermod -aG kvm <user> - to enable KVM
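After logging out and back in, you can quickly verify that KVM is usable (a simple sanity check, not part of the original steps):
ls -l /dev/kvm       # the device node should exist
groups | grep kvm    # your user should now be in the kvm group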
- ARM Cortex-A Series Programmer's Guide for ARMv8-A
- AArch64 Virtualization
- A Practical Guide to ARM Cortex-M Exception Handling
- Linux AArch64 boot code
- Linux AArch64 exception handling code
The documentation below was written by Claudio Fontana and describes the state of the AArch64 port of OSv as of circa 2015.
The AArch64 Port of OSv is ongoing, initially targeting the QEMU Mach-virt platform, running on the ARM Foundation Model v8 or on the APM X-Gene Mustang development board.
Mainline contains AArch64 support for the loader image (loader.img), which means it is possible to embed programs inside the loader itself. Manual modification of bootfs.manifest.skeleton is necessary.
SMP is supported, but the SMP work for AArch64 has exposed a bug in the virtual counter in QEMU/kvm which still needs solving. In the meantime, you need to apply the following workaround for kvm:
"[RFC PATCH] KVM: arm/arm64: Don't let userspace update CNTVOFF once guest is running"
https://lists.cs.columbia.edu/pipermail/kvmarm/2015-June/015198.html
There are some limitations, mostly in the libc support; you can read the details below.
Mainline OSv already includes all the features, so there is no need to look at special branches.
Experimental work-in-progress can sometimes be found here:
https://github.com/hw-claudio/osv_aarch64 "aarch64-next"
https://github.com/hw-claudio/osv_aarch64/tree/aarch64-next
Beware: aarch64-next is a rebasing branch.
Upstream QEMU (git mainline) is now usable for aarch64, including PCI support, since February 13th, 2015.
While loader.img can be built, including adding your own programs to the bootfs image, building usr.img is not yet possible due to issues in the current build system.
Most of the problems are due to the build step which requires running an OSv VM in order to build the image with the ZFS file system, but there are other major issues, including the framework loosely defined by the Python scripts in scripts/ and the general lack of cross-compilability in everything beyond the kernel proper.
To address these challenges, a user-space ZFS image creation tool has been sketched to enable building the ZFS-based image without running OSv; the scripts should then be reworked (or avoided as much as possible) for the usr.img creation.
For development purposes, you can find a pre-built tentative usr.img image to use for virtio-blk and ZFS mounting tests at:
https://github.com/hw-claudio/osv_aarch64.git "usr.img"
In the usr.img branch, look for a file in the top source directory called "usr.img.aarch64"
build system: the first pass (loader.img) mostly works through cross-compilation, but there are issues proceeding any further, as mentioned above.
qemu-system-aarch64 tcg software system emulation: the AArch64 image currently runs on the Foundation Model and also successfully under QEMU tcg software system emulation.
tests with available hardware (development boards): preliminary tests with the currently available hardware have been done, in particular with the APM X-Gene Mustang board.
external dependencies: Avi has successfully added the Fedora packages for AArch64, and they seem ok, although they still contain broken stap information, causing warnings.
devices support: support for PCI is available, as well as virtio-rng, virtio-blk and virtio-net. Simple small functional tests have been performed with success.
smp support: work is now upstream. SMP is implemented via PSCI.
tls support: currently tls works only for in-kernel tls variables. It is not possible to run external applications (.so ELF files) which make use of tls variables.
libc: We need to implement setjmp, longjmp, ucontext and the architecture-specific signal code, and also fix the broken/missing floating-point support for the libc math functions which depend on floating-point representation.
musl: also related to the preceding point, as the libc is implemented partly with musl and partly with our own code for the OSv-specific parts. AArch64 support for musl has now landed in mainline musl, but the issues with floating point are still there (results of double-precision operations are completely wrong).
hardware information passing from the host to the guest is currently based on device trees, with fallback defaults for the mach-virt platform. No ACPI in OSv yet though.
ELF64: initial relocations for get_init() are supported, plus basic relocs necessary to run applications. Additional relocations will be implemented if/when we hit missing ones while enabling more and more applications for AArch64.
console: pl011 UART output and input is now available, no FIFO (or, FIFO depth = 1).
backtrace: basic functionality now available.
power: implemented via PSCI.
exception handling: seems to work well.
MMU: done, but needs revisiting for the sigfault detection speedup feature added to x64 (and stubbed on AArch64). We also need to clean up parameter passing through the MMU code and remove some code duplication.
page fault and VMA handling: basic functionality now available.
interrupts: basic functionality now available.
GIC: more or less done, v2. Note that we don't support GICv2m or GICv3, so we cannot have MSI or MSI-X at the moment.
Generic Timers: functionality available.
scheduling: support for task switching available (switch-to-first, switch-to)
signals: work started, but architecture-generated signals work has not been submitted yet.
arch trace: nothing available.
sampler: sampler support missing for AArch64.
scripts: most scripts have not even been looked at for AArch64
management tools: management tools have not been looked at yet for AArch64
tests: some tests build, but most don't because of other missing components. No attempt has been made to run any tests besides tst-hello.so.
string optimizations: imported from newlib (based on BSD-licensed Linaro patches). This includes memcpy, memset and memcpy_backwards, slightly changed from the original due to different API entry point.
hypervisor and firmware detection: not a priority, not implemented.
These are some brief instructions on how to cross-compile OSv's loader.img (still very incomplete) on an X86-64 build host machine.
At the time of writing, the available functionality is minimal: the loader image boots, the GIC is initialized, timers are initialized, etc., and a simple hello world application is started on top of OSv (in the case of aarch64-next), or you will get an abort with a backtrace in ZFS (for master).
You can find a 32-bit (needs multilib 😟) cross-compiler from Linaro, in particular the package gcc-linaro-aarch64-linux-gnu-4.8-2013.12_linux, which is not distro-specific 😃 and includes all the tools needed for building.
http://releases.linaro.org/13.12/components/toolchain/binaries
For the host root filesystem for AArch64, a good option is the Linaro LEG Image
linaro-image-leg-java-genericarmv8-20131215-598.rootfs.tar.gz
http://releases.linaro.org/13.12/openembedded/aarch64/
You can experiment with other images and compilers from Linaro, but those are the ones I am using right now.
For Ubuntu, there are AArch64 cross-compilers available in the official repositories as well; the packages are named g++-4.8-aarch64-linux-gnu and gcc-4.8-aarch64-linux-gnu.
Ubuntu is also used over here, and works OK.
You will need to have or build an AArch64 linux kernel for the Host, which will be run on top of the Foundation v8 Model. In addition to that, you will need the bootwrapper, which you can get from:
http://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/boot-wrapper-aarch64.git
Use the foundation-v8.dts, and my suggestion is to use nfsroot to mount the root filesystem (the Linaro LEG image). The boot wrapper takes the Linux kernel Image as input and produces linux-system.axf, which is then the input for the Foundation Model.
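For illustration, an nfsroot setup passes kernel parameters along these lines (the server address and export path here are assumptions; adapt them and set the command line in the boot wrapper):
root=/dev/nfs nfsroot=192.168.1.10:/srv/linaro-leg-rootfs,tcp rw ip=dhcp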
Start the Foundation model:
./Foundation_v8 --image=linux-system.axf --cores=1 --network=nat --network-nat-ports=1234=1234
The latter option exposes port 1234 on the host side to the same port number in the guest running inside the model. You can add additional mappings as desired/needed.
If you are skipping the user-space initialization via something like init=/bin/sh for speedup (edit the bootwrapper Makefile), you might need to run the following inside the Foundation model:
/sbin/udhcpc eth0
Nothing to do anymore, since the AArch64 patches are now part of the mainline tree.
In addition to the general requirements for building OSv (see README.md), note that the simple build system recognizes the ARCH and CROSS_PREFIX environment variables and looks for the following build tools:
CXX=$(CROSS_PREFIX)g++
CC=$(CROSS_PREFIX)gcc
LD=$(CROSS_PREFIX)ld
STRIP=$(CROSS_PREFIX)strip
OBJCOPY=$(CROSS_PREFIX)objcopy
HOST_CXX=g++
In order to build for AArch64, contrary to the past when the target architecture was automatically detected by running the supplied compiler, you need to explicitly say make ARCH=aarch64; otherwise the build system will try to detect ARCH by running uname on the host machine and will try to build x64.
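For example, with the Ubuntu cross-toolchain mentioned above installed, an illustrative invocation could be (the tool prefix is an assumption matching the Ubuntu package names):
make ARCH=aarch64 CROSS_PREFIX=aarch64-linux-gnu- -j4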
At the beginning of the build process, look for this message:
build.mk:
build.mk: building arch=aarch64, override with ARCH env
build.mk:
If the message does not say arch=aarch64, the cross-compiler could not be found or run correctly. In this case, check the CROSS_PREFIX variable, or the compiler binary name if it's not canonical (you may need to add a symlink, for example from g++-4.8.3 to g++).
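For example, if your distribution only ships versioned binaries, the symlinks might look like this (an illustrative sketch; the paths and versions are assumptions, so check where your toolchain actually lives):
sudo ln -s /usr/bin/aarch64-linux-gnu-g++-4.8 /usr/bin/aarch64-linux-gnu-g++
sudo ln -s /usr/bin/aarch64-linux-gnu-gcc-4.8 /usr/bin/aarch64-linux-gnu-gcc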
An example QEMU command line that works when running on top of the Foundation Model with KVM, using an AArch64 qemu-system-aarch64 binary:
$ qemu-system-aarch64 -nographic -M virt -enable-kvm \
-kernel ./loader.img -cpu host -m 1024M -append "--nomount /tools/uush.so"
An example QEMU command line for software system emulation on an x86_64 host, using an x86_64 qemu-system-aarch64 binary:
$ qemu-system-aarch64 -nographic -M virt \
-kernel ./loader.img -cpu cortex-a57 -m 1024M -append "--nomount /tools/uush.so"
Jani Kokkonen
Claudio Fontana