Improved Linux filesystem sharing for simulated devices with extended Virtio support in Renode
Topics: Open source tools, Open simulation
Developing software for Linux-based systems in simulation lets you iterate faster than on hardware, especially when the latter is still under development. Our open source Renode simulation framework helps scale embedded development environments across many use cases and test scenarios, enabling more seamless cross-team collaboration. A critical element of getting the most out of simulation is properly partitioning your workflow, something we often help our customers with.
The naive approach of doing everything in simulation – including compiling your kernel modules and userspace applications – is inefficient, both in terms of compute power and in terms of the tooling available on your host PC vs. the simulated embedded target you’re working with. A better idea is to build your software on the host and upload it to your simulated machine, for which Renode already implements several methods. This blog post presents how we extended host-guest filesystem sharing using more recent developments in Virtio, improving code iteration efficiency by removing the need to re-mount the filesystem.
From Virtio to Virtiofs
The original support for the Memory-mapped I/O-based Virtio block device flow in Renode enabled us to dynamically load filesystem images and make them available in simulation without the need to reload the kernel/rootfs, letting developers quickly sideload userspace apps and reducing iteration time, e.g. when working on Linux drivers. Since Virtio does not rely on a network controller modeled in Renode, it can also be used on platforms that do not support a network interface, while offering higher data transfer speeds than simulated networks.
virtiofs, support for which we added recently, is a later addition to the Virtio subsystem, letting virtual machines access a directory tree on the host. This lets developers share parts of their host machine’s directory tree with their simulated device without the need to re-pack the rootfs with every code iteration. Below we go into the details of this implementation and show you how to use the filesystem sharing functionality in a brief tutorial.
Virtqueues, virtio rings, FUSE
Communication between the guest driver and the device in Virtio is based on a simple data structure named virtqueue, which is divided into up to 3 areas: the Descriptor Table, the Available Ring and the Used Ring.
The data itself is sent via buffers of varying lengths. The Descriptor Table area contains metadata about each buffer: its address, size, chaining and device-writability (each buffer can be device-writable or device-readable). Descriptors are added by the driver, which decides about each buffer’s metadata and layout in memory. Descriptors can be chained, forming a “chain of descriptors”.
The Available Ring contains the indices of descriptor chain heads that the driver has made available to the device. Each descriptor holds all the information about its buffer – address, length, device-writability – and, when another descriptor is chained to it, a “Next” field pointing to the descriptor to process next, as chained descriptors may not be laid out in order. The device is only allowed to write to device-writable buffers. When processing a request, the device reads the index of the descriptor at the head of a “chain of descriptors”, reads the request from the buffers written by the driver, and writes its reply into the device-writable buffers of the chain, which the driver places after any device-readable descriptors. Finally, the device places an entry in the Used Ring with the index of the used descriptor chain and the number of bytes it has written.
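For reference, the split-virtqueue layout described above can be sketched with the following C structures, adapted from the Virtio 1.x specification (simplified for illustration; in a real system these live in guest memory shared with the device):

/* Split-virtqueue layout, after the Virtio 1.x specification. */
#include <stdint.h>

#define VIRTQ_DESC_F_NEXT  1   /* chain continues via the "next" field */
#define VIRTQ_DESC_F_WRITE 2   /* buffer is device-writable */

struct virtq_desc {            /* one Descriptor Table entry */
    uint64_t addr;             /* guest-physical address of the buffer */
    uint32_t len;              /* buffer length in bytes */
    uint16_t flags;            /* VIRTQ_DESC_F_NEXT / VIRTQ_DESC_F_WRITE */
    uint16_t next;             /* index of the chained descriptor, if any */
};

struct virtq_avail {           /* Available Ring, written by the driver */
    uint16_t flags;
    uint16_t idx;              /* next free slot in "ring" */
    uint16_t ring[];           /* indices of descriptor chain heads */
};

struct virtq_used_elem {
    uint32_t id;               /* head index of a processed chain */
    uint32_t len;              /* total bytes the device wrote to it */
};

struct virtq_used {            /* Used Ring, written by the device */
    uint16_t flags;
    uint16_t idx;
    struct virtq_used_elem ring[];
};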
Consider the diagrams below for a simple illustration of the internal states of virtqueues:
First, the driver creates two writable buffers and descriptors with information about them, chaining them by setting the “Next” flag and storing the index of the following buffer in the “Next” field. The buffers are device-writable, as the “W” flag is set.
The device writes 0x230 bytes to the next buffer in the current chain of descriptors, then records the used chain and the number of processed bytes in the Used Ring.
When the device finishes processing the chain, the driver creates another (Next) buffer for the device to consume, not chained to any other buffer, and marks it as available.
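As a sketch of the driver-side steps from the diagrams above – reusing the structures from the previous snippet, with QUEUE_SIZE and the buffer pointers as illustrative placeholders – chaining and publishing two device-writable buffers could look as follows (a real driver also needs memory barriers and a device notification, omitted here):

#define QUEUE_SIZE 8  /* illustrative; the actual size is negotiated */

/* Chain two device-writable buffers (descriptors 0 and 1) and publish
   the head of the chain in the Available Ring. */
static void publish_writable_chain(struct virtq_desc *desc,
                                   struct virtq_avail *avail,
                                   void *buf_a, void *buf_b,
                                   uint32_t len_a, uint32_t len_b)
{
    desc[0].addr  = (uint64_t)(uintptr_t)buf_a;
    desc[0].len   = len_a;
    desc[0].flags = VIRTQ_DESC_F_WRITE | VIRTQ_DESC_F_NEXT; /* "W" + "Next" */
    desc[0].next  = 1;                       /* index of the second buffer */

    desc[1].addr  = (uint64_t)(uintptr_t)buf_b;
    desc[1].len   = len_b;
    desc[1].flags = VIRTQ_DESC_F_WRITE;      /* last in chain: no NEXT */

    avail->ring[avail->idx % QUEUE_SIZE] = 0; /* head of the chain */
    /* a write memory barrier belongs here, before the index update */
    avail->idx++;                             /* make the chain available */
}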
The shared directory implementation is based on the Filesystem in Userspace (FUSE) daemon. In addition to virtiofs support in Renode, we also needed a daemon running on the host which creates a FUSE filesystem used to mount specific host directories and then communicates with Renode via a Unix socket. To do this, we used libfuse, to which we contributed an example filesystem with domain socket support.
A memory-mapped virtiofs device interface is available via a set of registers. Interrupts are used to notify drivers about available FUSE responses, device changes, and acknowledgements. FUSE message contents are transferred between the device and the guest’s Linux drivers over the Renode sysbus.
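To give a flavor of what travels over those queues: each FUSE request begins with a fuse_in_header and each reply with a fuse_out_header, sketched below in a simplified form after the Linux UAPI <linux/fuse.h>; with virtiofs, these messages are exchanged through virtqueues instead of the /dev/fuse character device.

#include <stdint.h>

/* Simplified FUSE message headers, after <linux/fuse.h>. */
struct fuse_in_header {     /* prefixes every request to the daemon */
    uint32_t len;           /* total request length, incl. this header */
    uint32_t opcode;        /* e.g. FUSE_LOOKUP, FUSE_READ, FUSE_WRITE */
    uint64_t unique;        /* request id, echoed back in the reply */
    uint64_t nodeid;        /* inode the operation targets */
    uint32_t uid, gid, pid; /* credentials of the requesting process */
    uint32_t padding;
};

struct fuse_out_header {    /* prefixes every reply from the daemon */
    uint32_t len;           /* total reply length, incl. this header */
    int32_t  error;         /* 0 on success or a negative errno */
    uint64_t unique;        /* matches the request's "unique" id */
};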
Demo: Virtiofs simulated device configuration for filesystem sharing
Using a virtiofs device with a simulated Renode machine on Linux is simple and requires just a few steps. Make sure you have the latest version of Renode installed on your machine.
First, install a filesystem daemon providing access via a UNIX domain socket – this step is planned to be included in mainline Renode soon, eliminating the additional dependencies. A sample filesystem is available on Antmicro’s GitHub. Ensure that Meson and Ninja are installed on your system and compile the filesystem daemon with the following commands:
$ git clone https://github.com/antmicro/libfuse -b passthrough-hp-uds
$ cd libfuse
$ mkdir build; cd build
$ meson setup ..
$ ninja
The output binary will be stored in the example/passthrough_hp_uds directory. It is recommended to add it to $PATH or create an alias; below, executing the binary is referred to as “passthrough_hp_uds”:
$ export PATH="$PATH:$PWD/example/passthrough_hp_uds"
To share a directory, start the filesystem daemon and provide a path to the shared directory (appending & starts it in the background):
$ mkdir shareddir
$ passthrough_hp_uds shareddir &
By default, this creates a Unix domain socket at /tmp/libfuse-passthrough-hp.sock. It is also possible to provide a custom socket path, enabling the use of multiple shared directories, including several on the same machine.
To try out this feature in Renode, you can use the script available at tests/peripherals/virtio-vexriscv.resc, which will set up an example machine with a virtiofs device waiting for the filesystem daemon socket at its default location, with “MySharedDir” set as the tag (a name of your choosing, the same as provided to the peripheral).
In another terminal, execute the Renode script, which starts the emulation:
$ ./renode -e "i @tests/peripherals/virtio-vexriscv.resc; s"
Now log in to the guest machine as root.
On the guest machine, create a directory where you wish to mount the shared directory, then mount it using the tag:
# mkdir shared
# mount -t virtiofs MySharedDir shared
The shared directory is now ready to use.
In the clip below, you can see how filesystem sharing works after a successful setup:
For more details regarding the configuration, refer to the Renode documentation.
Develop drivers, userspace apps and entire systems with Renode
As virtiofs support in Renode eliminates the need to re-pack and re-mount directory trees, development and testing of Linux kernel drivers and userspace applications, as well as automated testing of varying filesystem payloads, is now more time-efficient.
Do not hesitate to contact us at contact@antmicro.com to discuss how Renode’s capabilities – resource-efficient software and hardware co-development of entire systems in a replicable, automated environment, thoroughly integrated with a comprehensive open source toolkit – can benefit your projects.