arceos-helloworld

A standalone Hello World application running on the ArceOS unikernel, with all dependencies sourced from crates.io. Supports multiple architectures via cargo xtask.

Supported Architectures

| Architecture | Rust Target                    | QEMU Machine                          | Platform              |
|--------------|--------------------------------|---------------------------------------|-----------------------|
| riscv64      | riscv64gc-unknown-none-elf     | qemu-system-riscv64 -machine virt     | riscv64-qemu-virt     |
| aarch64      | aarch64-unknown-none-softfloat | qemu-system-aarch64 -machine virt     | aarch64-qemu-virt     |
| x86_64       | x86_64-unknown-none            | qemu-system-x86_64 -machine q35       | x86-pc                |
| loongarch64  | loongarch64-unknown-none       | qemu-system-loongarch64 -machine virt | loongarch64-qemu-virt |

Prerequisites

  • Rust nightly toolchain (edition 2024)

    rustup install nightly
    rustup default nightly
    
  • Bare-metal targets (install the ones you need)

    rustup target add riscv64gc-unknown-none-elf
    rustup target add aarch64-unknown-none-softfloat
    rustup target add x86_64-unknown-none
    rustup target add loongarch64-unknown-none
    
  • QEMU (install the emulators for your target architectures)

    # Ubuntu/Debian
    sudo apt install qemu-system-riscv64 qemu-system-aarch64 \
                     qemu-system-x86 qemu-system-loongarch64  # OR qemu-system-misc
    
    # macOS (Homebrew)
    brew install qemu
    
  • rust-objcopy (from cargo-binutils, required for non-x86_64 targets)

    cargo install cargo-binutils
    rustup component add llvm-tools
    

Quick Start

# Install the cargo-clone subcommand
cargo install cargo-clone
# Fetch the source of the arceos-helloworld crate from crates.io
cargo clone arceos-helloworld
# Enter the crate directory
cd arceos-helloworld
# Build and run on RISC-V 64 QEMU (default)
cargo xtask run

# Build and run on other architectures
cargo xtask run --arch aarch64
cargo xtask run --arch x86_64
cargo xtask run --arch loongarch64

# Build only (no QEMU)
cargo xtask build --arch riscv64
cargo xtask build --arch aarch64

Expected output (riscv64 example):

       d8888                            .d88888b.   .d8888b.
      d88888                           d88P" "Y88b d88P  Y88b
     ...
d88P     888 888      "Y8888P  "Y8888   "Y88888P"   "Y8888P"

arch = riscv64
platform = riscv64-qemu-virt
...
smp = 1

Hello, world!

QEMU will automatically exit after printing the message.

Project Structure

arceos-helloworld/
├── .cargo/
│   └── config.toml       # cargo xtask alias & AX_CONFIG_PATH
├── xtask/
│   ├── Cargo.toml        # xtask build tool (clap CLI)
│   └── src/
│       └── main.rs       # build/run subcommand implementation
├── configs/
│   ├── riscv64.toml      # Platform config for RISC-V 64 QEMU virt
│   ├── aarch64.toml      # Platform config for AArch64 QEMU virt
│   ├── x86_64.toml       # Platform config for x86-64 PC
│   └── loongarch64.toml  # Platform config for LoongArch64 QEMU virt
├── src/
│   └── main.rs           # Application entry point
├── build.rs              # Linker script path setup (auto-detects arch)
├── Cargo.toml            # Dependencies (axstd from crates.io)
└── README.md
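
For reference, the whole application is only a few lines. Below is a minimal sketch of what src/main.rs looks like, assuming the usual ArceOS conventions (#![no_std]/#![no_main], with axstd providing println!); the published crate may differ in details:

```rust
#![no_std]  // axstd stands in for Rust's std on bare metal
#![no_main] // the ArceOS runtime supplies the real entry point

use axstd::println;

// axruntime finishes kernel initialization, then calls this unmangled `main`.
#[unsafe(no_mangle)]
fn main() {
    println!("Hello, world!");
}
```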

How It Works

The cargo xtask pattern uses a host-native helper crate (xtask/) to orchestrate cross-compilation and QEMU execution:

  1. cargo xtask build --arch <ARCH>

    • Copies configs/<ARCH>.toml to .axconfig.toml (platform configuration)
    • Runs cargo build --release --target <TARGET>
    • build.rs auto-detects the architecture and locates the correct linker script
  2. cargo xtask run --arch <ARCH>

    • Performs the build step above
    • Converts ELF to raw binary via rust-objcopy (except x86_64 which uses ELF directly)
    • Launches the appropriate QEMU emulator with architecture-specific flags
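
To make the flow concrete, here is a heavily simplified sketch of what the run path in xtask/src/main.rs amounts to, shown for riscv64 only. The target triple, QEMU machine, and objcopy step mirror the tables above; the file names, binary paths, and exact QEMU flags are illustrative assumptions, not the crate's real code:

```rust
use std::process::Command;

// Simplified sketch of `cargo xtask run --arch riscv64`.
// The real xtask parses arguments with clap and supports all four targets.
fn run_riscv64() {
    // 1. Select the platform configuration for this architecture.
    std::fs::copy("configs/riscv64.toml", ".axconfig.toml").unwrap();

    // 2. Cross-compile the application for the bare-metal target.
    let status = Command::new("cargo")
        .args(["build", "--release", "--target", "riscv64gc-unknown-none-elf"])
        .status()
        .unwrap();
    assert!(status.success(), "cargo build failed");

    // 3. Convert the ELF image to a raw binary (x86_64 skips this step).
    let elf = "target/riscv64gc-unknown-none-elf/release/arceos-helloworld";
    let bin = format!("{elf}.bin");
    Command::new("rust-objcopy")
        .args(["--strip-all", "-O", "binary", elf, &bin])
        .status()
        .unwrap();

    // 4. Boot the image in QEMU; `-nographic` keeps output on the console.
    Command::new("qemu-system-riscv64")
        .args(["-machine", "virt", "-nographic", "-kernel", &bin])
        .status()
        .unwrap();
}
```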

Key Components

| Component      | Role                                                                      |
|----------------|---------------------------------------------------------------------------|
| axstd          | ArceOS standard library (replaces Rust's std in a no_std environment)     |
| axhal          | Hardware abstraction layer; generates the linker script at build time     |
| axplat-*       | Platform-specific support crates (one per target board/VM)                |
| axruntime      | Kernel initialization and runtime setup                                   |
| build.rs       | Locates the linker script generated by axhal and passes it to the linker  |
| configs/*.toml | Pre-generated platform configuration for each architecture                |
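
As an illustration of the last two rows, a build script along these lines is enough to wire the generated linker script into the final link. The Cargo environment variable and directives below are standard; the linker-script file name and lookup are assumptions rather than the crate's exact logic:

```rust
// build.rs (illustrative sketch, not the crate's actual code)
fn main() {
    // Cargo tells build scripts which architecture is being targeted.
    let arch = std::env::var("CARGO_CFG_TARGET_ARCH").unwrap();

    // Rebuild whenever the selected platform configuration changes.
    println!("cargo:rerun-if-changed=.axconfig.toml");

    // Hypothetical name for the linker script generated by axhal's build
    // step; the real build.rs locates the file rather than hard-coding it.
    let script = format!("linker_{arch}.lds");

    // Ask rustc to link the kernel image with that script.
    println!("cargo:rustc-link-arg=-T{script}");
}
```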

ArceOS Tutorial Crates

This crate is part of a series of tutorial crates for learning OS development with ArceOS, ordered by functionality and increasing complexity:

| #  | Crate Name | Description |
|----|------------|-------------|
| 1  | arceos-helloworld (this crate) | Minimal ArceOS unikernel application that prints Hello World, demonstrating the basic boot flow |
| 2  | arceos-collections | Dynamic memory allocation on a unikernel, demonstrating the use of String, Vec, and other collection types |
| 3  | arceos-readpflash | MMIO device access via page table remapping, reading data from QEMU's PFlash device |
| 4  | arceos-childtask | Multi-tasking basics: spawning a child task (thread) that accesses a PFlash MMIO device |
| 5  | arceos-msgqueue | Cooperative multi-task scheduling with a producer-consumer message queue, demonstrating inter-task communication |
| 6  | arceos-fairsched | Preemptive CFS scheduling with timer-interrupt-driven task switching, demonstrating automatic task preemption |
| 7  | arceos-readblk | VirtIO block device driver discovery and disk I/O, demonstrating device probing and block read operations |
| 8  | arceos-loadapp | FAT filesystem initialization and file I/O, demonstrating the full I/O stack from VirtIO block device to filesystem |
| 9  | arceos-userprivilege | User-privilege mode switching: loading a user-space program, switching to unprivileged mode, and handling syscalls |
| 10 | arceos-lazymapping | Lazy page mapping (demand paging): user-space program triggers page faults, and the kernel maps physical pages on demand |
| 11 | arceos-runlinuxapp | Loading and running real Linux ELF applications (musl libc) on ArceOS, with ELF parsing and Linux syscall handling |
| 12 | arceos-guestmode | Minimal hypervisor: creating a guest address space, entering guest mode, and handling a single VM exit (shutdown) |
| 13 | arceos-guestaspace | Hypervisor address space management: loop-based VM exit handling with nested page fault (NPF) on-demand mapping |
| 14 | arceos-guestvdev | Hypervisor virtual device support: timer virtualization, console I/O forwarding, and NPF passthrough; guest runs preemptive multi-tasking |
| 15 | arceos-guestmonolithickernel | Full hypervisor + guest monolithic kernel: the guest kernel supports user-space process management, syscall handling, and preemptive scheduling |

Progression Logic:

  • #1–#8 (Unikernel Stage): Starting from the simplest output, these crates progressively introduce memory allocation, device access (MMIO / VirtIO), multi-task scheduling (both cooperative and preemptive), and filesystem support, building up the core capabilities of a unikernel.
  • #9–#11 (Monolithic Kernel Stage): Building on the unikernel foundation, these crates add user/kernel privilege separation, page fault handling, and ELF loading, progressively evolving toward a monolithic kernel.
  • #12–#15 (Hypervisor Stage): Starting from minimal VM lifecycle management, these crates progressively add address space management, virtual devices, timer injection, and ultimately run a full monolithic kernel inside a virtual machine.

License

GPL-3.0