Claudio Fontana edited this page Jul 3, 2015 · 72 revisions

The AArch64 port of OSv is ongoing; it initially targets the QEMU mach-virt platform, running on the ARM Foundation Model v8 or on the APM X-Gene Mustang development board.

Functional Status

Mainline contains AArch64 support for the loader image (loader.img), which means it is possible to embed programs inside the loader itself. Manual modification of bootfs.manifest.skeleton is necessary.
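As a sketch, embedding a program means appending a mapping line to bootfs.manifest.skeleton, following the image-path: host-path convention used by OSv manifests. The application name below is a hypothetical example, not something shipped in the tree:

```
# image path: host path (relative to the build tree)
/tools/myapp.so: ./tools/myapp.so
```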

SMP is supported, but the SMP work for AArch64 has exposed a bug in the handling of the virtual counter in QEMU/KVM which still needs solving. In the meantime, you need to apply the following workaround to kvm:

"[RFC PATCH] KVM: arm/arm64: Don't let userspace update CNTVOFF once guest is running"

https://lists.cs.columbia.edu/pipermail/kvmarm/2015-June/015198.html

There are some limitations, mostly in the libc support; you can read the details below.

Mainline OSv already includes all the features described here, so there is no need to look at special branches.

Experimental work-in-progress can sometimes be found in the "aarch64-next" branch:

https://github.com/hw-claudio/osv_aarch64/tree/aarch64-next

Beware: aarch64-next is a rebasing branch.

Upstream QEMU (git mainline) has been usable for AArch64, including PCI support, since February 13th, 2015.

While loader.img can be built, including adding your own programs to the bootfs image, building usr.img is not yet possible, due to issues in the current build system.

Most of the problems are due to the build step that requires running an OSv VM in order to build the OSv image with the ZFS file system, but there are other major issues, including the loosely defined framework of the Python scripts in scripts/ and the general lack of cross-compilability in everything beyond the kernel proper.

To address these challenges, a user-space ZFS image creation tool has been sketched to enable building the ZFS-based image without running OSv; the scripts should then be reworked (or avoided as much as possible) for the usr.img creation.

For development purposes, you can find a pre-built tentative usr.img image to use for virtio-blk and ZFS mounting tests at:

https://github.com/hw-claudio/osv_aarch64.git "usr.img"

In the usr.img branch, look for the file usr.img.aarch64 in the top-level source directory.

Component Status

* build system: the first pass (loader.img) mostly works through cross-compilation, but, as mentioned above, there are issues that prevent proceeding any further.

* qemu-system-aarch64 TCG software system emulation: the AArch64 image runs on the Foundation Model and also successfully on QEMU's TCG software system emulation.

* tests with available hardware (development boards): preliminary tests with the hardware currently available have been done, in particular with the APM X-Gene Mustang board.

* external dependencies: Avi has successfully added the Fedora packages for AArch64, and they seem OK, although they still contain broken stap (SystemTap) information, causing warnings.

* devices support: support for PCI is available, as well as virtio-rng, virtio-blk and virtio-net. Simple small functional tests have been performed with success.

* smp support: work is now upstream. SMP is implemented via PSCI.

* tls support: currently TLS works only for in-kernel TLS variables. It is not possible to run external applications (.so ELF files) which make use of TLS variables.

* libc: we need to implement setjmp, longjmp, ucontext and the architecture-specific signal code, and also fix the broken/missing floating-point support for the libc math functions which depend on floating-point representation.

* musl: also related to the preceding point, as the libc is implemented partly with musl and partly with our own code for the OSv-specific parts. AArch64 support for musl has now landed in mainline musl, but the issues with floating point are still there (double-precision results are completely wrong).

* hardware information passing from the host to the guest is currently based on device trees, with fallback defaults for the mach-virt platform. No ACPI in OSv yet though.

* ELF64: initial relocations for get_init() are supported, plus basic relocs necessary to run applications. Additional relocations will be implemented if/when we hit missing ones while enabling more and more applications for AArch64.

* console: pl011 UART output and input is now available, no FIFO (or, FIFO depth = 1).

* backtrace: basic functionality now available.

* power: implemented via PSCI.

* exception handling: seems to work well.

* MMU: done, but needs revisiting for the segfault detection speedup feature added to x64 (and stubbed on AArch64). We also need to clean up parameter passing through the MMU code and remove some code duplication.

* page fault and VMA handling: basic functionality now available.

* interrupts: basic functionality now available.

* GIC: more or less done, v2. Note that we don't support GICv2m or GICv3, so we cannot have MSI or MSI-X at the moment.

* Generic Timers: functionality available.

* scheduling: support for task switching is available (switch-to-first, switch-to).

* signals: work started, but architecture-generated signals work has not been submitted yet.

* arch trace: nothing available.

* sampler: sampler support missing for AArch64.

* scripts: most scripts have not even been looked at for AArch64.

* management tools: management tools have not been looked at yet for AArch64.

* tests: some tests build, but most do not because of other missing components. No attempt has been made to run any tests besides tst-hello.so.

* string optimizations: imported from newlib (based on BSD-licensed Linaro patches). This includes memcpy, memset and memcpy_backwards, slightly changed from the original due to a different API entry point.

* hypervisor and firmware detection: not a priority, not implemented.

Build instructions

These are some brief instructions on how to cross-compile OSv's loader.img (still very incomplete) on an x86-64 build host machine.

At the time of writing, the available functionality is minimal: the loader image boots, the GIC is initialized, timers are initialized, etc., and a simple hello-world application is started on top of OSv (in the case of aarch64-next), or you will get an abort with a backtrace in ZFS (for master).

Crosscompiler Tools and Host Image from Linaro

You can find a 32-bit (needs multilib 😟) cross-compiler from Linaro, in particular the package gcc-linaro-aarch64-linux-gnu-4.8-2013.12_linux, which is not distro-specific 😃 and includes all tools needed for building:

http://releases.linaro.org/13.12/components/toolchain/binaries

For the host root filesystem for AArch64, a good option is the Linaro LEG Image

linaro-image-leg-java-genericarmv8-20131215-598.rootfs.tar.gz

http://releases.linaro.org/13.12/openembedded/aarch64/

You can experiment with other images and compilers from Linaro, but those are the ones I am using right now.

Crosscompiler Tools for Ubuntu

For Ubuntu, AArch64 cross-compilers are available in the official repositories as well. The packages are named g++-4.8-aarch64-linux-gnu and gcc-4.8-aarch64-linux-gnu.

Ubuntu is also used over here, and works ok.

Preparing the AArch64 Host

You will need to have or build an AArch64 Linux kernel for the host, which will be run on top of the Foundation v8 Model. In addition to that, you will need the bootwrapper, which you can get from:

http://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/boot-wrapper-aarch64.git

Use foundation-v8.dts; my suggestion is to use nfsroot to mount the root filesystem (the Linaro LEG image). The boot wrapper takes the Linux kernel Image as input and produces linux-system.axf, which will be the input for the Foundation Model.
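For the nfsroot suggestion above, the kernel command line baked into the boot wrapper might look like the following. This is a sketch only: the server address and export path are placeholders, not values from this document, and must match your NFS server hosting the unpacked LEG image:

```
console=ttyAMA0 root=/dev/nfs nfsroot=10.0.0.1:/srv/leg-rootfs rw ip=dhcp
```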

Running ARMv8 Foundation Model

Start the Foundation model:

./Foundation_v8 --image=linux-system.axf --cores=1 --network=nat --network-nat-ports=1234=1234

The latter option exposes port 1234 on the host side to the same port number in the guest running inside the model. You can add additional mappings as desired/needed.

If you are skipping the user-space initialization via something like init=/bin/sh for speedup (edit the bootwrapper Makefile), you might need to run the following inside the Foundation Model:

/sbin/udhcpc eth0

Preparing the guest: External dependencies

Nothing to do anymore, since they are now part of the mainline tree.

Preparing the guest: Environment Variables for make

In addition to the general requirements for building OSv (see README.md), note that the build system recognizes the ARCH and CROSS_PREFIX environment variables and looks for the following build tools:

CXX=$(CROSS_PREFIX)g++
CC=$(CROSS_PREFIX)gcc
LD=$(CROSS_PREFIX)ld
STRIP=$(CROSS_PREFIX)strip
OBJCOPY=$(CROSS_PREFIX)objcopy
HOST_CXX=g++

In order to build for AArch64, contrary to the past when the target architecture was automatically detected by running the supplied compiler, you need to explicitly say make ARCH=aarch64; otherwise the build system will try to detect ARCH by running uname on the host machine, and will try to build x64.
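Putting the two variables together, a cross-build invocation can be sketched as follows. The prefix aarch64-linux-gnu- is an example matching the Ubuntu packages above; adjust it to your toolchain:

```shell
# Hypothetical toolchain prefix; adjust to your cross-compiler's binary names.
export CROSS_PREFIX=aarch64-linux-gnu-
# The build system derives its tools from the prefix, e.g.:
CXX="${CROSS_PREFIX}g++"
echo "$CXX"    # prints aarch64-linux-gnu-g++
# Then build explicitly for AArch64 (ARCH is not auto-detected):
#   make ARCH=aarch64
```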

At the beginning of the build process, look for this message:

build.mk:
build.mk: building arch=aarch64, override with ARCH env
build.mk:

If the message does not say arch=aarch64, the cross-compiler could not be found or run correctly. In this case, check the CROSS_PREFIX variable, or the compiler binary name if it is not canonical (do you need to add a symlink, for example from g++-4.8.3 to g++?).
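The symlink fix mentioned above can be sketched like this; the versioned file name is an illustrative stand-in for wherever your toolchain actually installs its binaries:

```shell
# Illustrative only: create an unversioned symlink so the build system can
# find the compiler under its canonical name (run in the toolchain's bin dir).
cd "$(mktemp -d)"                          # sandbox dir for this demonstration
touch aarch64-linux-gnu-g++-4.8.3          # stand-in for the real binary
ln -s aarch64-linux-gnu-g++-4.8.3 aarch64-linux-gnu-g++
readlink aarch64-linux-gnu-g++             # prints aarch64-linux-gnu-g++-4.8.3
```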

Running the guest

An example of a QEMU command line which works when running on top of the Foundation Model with KVM, using an AArch64 qemu-system-aarch64 binary:


$ qemu-system-aarch64 -nographic -M virt -enable-kvm \
    -kernel ./loader.img -cpu host -m 1024M -append "--nomount /tools/uush.so"

An example of a QEMU command line for software system emulation on an x86_64 host, with an x86_64 qemu-system-aarch64 binary:


$ qemu-system-aarch64 -nographic -M virt \
    -kernel ./loader.img -cpu cortex-a57 -m 1024M -append "--nomount /tools/uush.so"

Jani Kokkonen <[email protected]>
Claudio Fontana <[email protected]>