Components

The stack comprises the following components:

  • Trusted Firmware-M (RSS): a1e8602d5ec2acad923b24eb436dc0f663eea649 (based on the master branch post 1.6.0), from the Trusted Firmware-M repository

  • SCP-firmware: RD-INFRA-2022.08.18 (based on 2.10), from the SCP-firmware repository

  • Trusted Firmware-A: RD-INFRA-2022.08.18 (based on 2.7), from the Trusted Firmware-A repository

  • U-Boot: 2022.07, from the U-Boot repository

  • Xen: 4.16, from the Xen repository

  • Linux Kernel: 5.19.17, from the Linux repository

  • Zephyr: 3.2.0, from the Zephyr repository

RSS

The Runtime Security Subsystem (RSS) is a security subsystem fulfilling the requirements of the Arm Confidential Compute Architecture (CCA). The RSS additionally provides an isolated environment for platform security services that are outside the scope of the CCA Platform Security Domain.

The RSS serves as the Root of Trust for the system, offering critical platform security services and holding and protecting the most sensitive assets in the system.

In the current software stack, the RSS offers the Secure Boot service only.

The RSS internally consists of three boot loaders and a runtime. The following diagram illustrates the high-level software structure of the RSS and some relevant external components.


../_images/rss_software_structure_simplified.svg

Boot Loaders

RSS BL1

The first stage bootloader (BL1) of the RSS is immutable code located in the RSS ROM that executes in place on reset. Its purpose is to load and verify the integrity of the second stage bootloader (BL2) image.
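The load-and-verify step can be sketched as below. This is an illustrative sketch only, not the actual RSS BL1 code: the function names are hypothetical and the toy digest stands in for a real cryptographic hash (such as SHA-256) computed by a hardware-backed engine.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for a cryptographic hash such as SHA-256.
 * Real boot ROM code would use a hardware crypto engine. */
static uint32_t toy_digest(const uint8_t *img, size_t len)
{
    uint32_t d = 0x811c9dc5u;               /* FNV-1a style accumulator */
    for (size_t i = 0; i < len; i++)
        d = (d ^ img[i]) * 0x01000193u;
    return d;
}

/* Hypothetical BL1-style check: returns 1 if the loaded image matches
 * the expected digest provisioned with the ROM, 0 otherwise. */
int bl1_verify_image(const uint8_t *img, size_t len, uint32_t expected)
{
    return toy_digest(img, len) == expected;
}
```

The essential property is the same as in the real flow: BL1 only transfers control to BL2 when the computed digest of the loaded image matches the trusted reference value.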

RSS BL2

RSS BL2 is provisioned in the RSS OTP and executed from the RSS SRAM. Its purpose is to load, decrypt and authenticate the BL3 image.

RSS BL3

RSS BL3 is implemented through extensions to the existing MCUBoot bootloader in Trusted Firmware-M (TF-M). It loads and authenticates the initial bootloaders of the SCP, Safety Island (SI) and Application Processor (AP).

After all the aforementioned PEs begin to boot, BL3 loads and authenticates the RSS Runtime and starts it.

Runtime

In the future, the RSS Runtime will provide PSA Crypto and Attestation services in the form of APIs.

Downstream Changes

Patches for the RSS are included at yocto/meta-rd-n2-automotive/recipes-bsp/trusted-firmware-m/files to:

  • Load and boot the SCP.

  • Add MHUv2 code to communicate with the SCP.

  • Load and boot the Safety Island.

SCP-firmware

The Power Control System Architecture (PCSA) [1] describes how systems can be built with microcontrollers that abstract power and other system management tasks away from Application Processors (APs).

According to the PCSA, the System Control Processor (SCP), a dedicated processor, is used to abstract power and system management tasks away from application processors. The Manageability Control Processor (MCP) follows the same approach with the goal of providing a management entry-point to the System on Chip (SoC) where manageability is required, such as on a SoC targeting servers.

SCP-firmware provides a software reference implementation for the System Control Processor (SCP) and Manageability Control Processor (MCP) components found in several Arm Compute Sub-Systems.

In the current software stack, SCP-firmware is integrated to provide the functionality of the SCP only.

boot_flow Module

A new module, boot_flow, is introduced in the SCP-firmware RAM firmware for the RSS-based boot flow.

The boot_flow module communicates with the RSS to synchronize the states of the boot flow. The communication is carried over MHUv2 devices.

The boot_flow module is designed to handle the logic of the boot flow only. Platform-specific tasks are performed by another module, platform_system. The boot_flow module exchanges events with the platform_system module for power-related actions.

MHUv2 Communication

There are MHUv2 devices between the Arm® Cortex®-M core where the RSS runs and the Arm® Cortex®-M core where SCP-firmware runs. In the transport layer of MHUv2, Doorbell signals are exchanged between SCP-firmware and the RSS.

For the MHUv2 Doorbell signals sent from the RSS to the SCP, usage of different slots of channel 0 indicates different meanings:

  • Setting slot 0 means that Safety Island Cluster 0 (SI CL0) is ready to boot

  • Slot 1 is reserved for SI CL1

  • Slot 2 is reserved for SI CL2

  • Setting slot 3 indicates that the AP is ready to boot

The Doorbell signals sent from the SCP to the RSS use slot 0 only. The RSS can distinguish the meaning of the signals according to its own state.
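The RSS-to-SCP slot mapping described above can be sketched as a simple decoder over the doorbell status word. The enum and function names below are illustrative, not taken from the actual SCP-firmware sources.

```c
#include <stdint.h>

/* Illustrative decoding of an RSS -> SCP MHUv2 channel 0 doorbell
 * status word: each set bit corresponds to one slot. */
enum boot_event {
    BOOT_EVENT_NONE,
    BOOT_EVENT_SI_CL0_READY,   /* slot 0: SI Cluster 0 ready to boot */
    BOOT_EVENT_SI_CL1_READY,   /* slot 1: reserved for SI CL1 */
    BOOT_EVENT_SI_CL2_READY,   /* slot 2: reserved for SI CL2 */
    BOOT_EVENT_AP_READY,       /* slot 3: AP ready to boot */
};

enum boot_event decode_doorbell(uint32_t status)
{
    if (status & (1u << 0)) return BOOT_EVENT_SI_CL0_READY;
    if (status & (1u << 1)) return BOOT_EVENT_SI_CL1_READY;
    if (status & (1u << 2)) return BOOT_EVENT_SI_CL2_READY;
    if (status & (1u << 3)) return BOOT_EVENT_AP_READY;
    return BOOT_EVENT_NONE;
}
```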

The following diagram illustrates the MHUv2 communication sequence between SCP-firmware and the RSS.


../_images/scp_mhuv2_sequence.svg

Downstream Changes

Patches for the SCP-firmware are included at yocto/meta-rd-n2-automotive/recipes-bsp/scp-firmware/files to:

  • Add the MHUv2 and transport modules to communicate with the RSS.

  • Add the boot_flow module to control the RSS-based boot flow as defined in boot_flow Module. In addition, add the logic that implements the communication with the RSS for the boot_flow module's functionality.

Primary Compute

Device Tree

The RD-N2-Automotive FVP device tree contains the hardware description for the Primary Compute. It is compiled using a standalone Yocto recipe, bundled in the Trusted Firmware-A flash image at rest and used to configure U-Boot, Linux and Xen at runtime.

It is located at components/primary_compute/devicetree/fvp-rd-n2-automotive.dts.

Trusted Firmware-A

Trusted Firmware-A is the initial bootloader on the Primary Compute. The implementation is based on the upstream RD-N2 platform port.

Downstream Changes

The upstream RD-N2 platform port does not include a HW_CONFIG device tree suitable for booting Linux or Xen, so patches are included at yocto/meta-rd-n2-automotive/recipes-bsp/trusted-firmware-a/files to:

  • Bundle the device tree as the HW_CONFIG entry in the AP flash image at rest.

  • Patch this device tree at runtime with information about services (e.g. PSCI) provided by TF-A.

  • Move the BL31 load location from DRAM to SRAM.

For Arm platforms, TF-A passes the HW_CONFIG to BL33 using the register x1.

U-Boot

U-Boot is the non-secure world second-stage bootloader (BL33 in TF-A) on the Primary Compute. It consumes the device tree provided by Trusted Firmware-A and provides UEFI services to UEFI applications like Linux and Xen. The device tree is used to configure U-Boot at runtime, minimizing the need for platform-specific configuration.

Downstream Changes

The implementation is based on the VExpress64 board family. Patch files can be found at yocto/meta-rd-n2-automotive/recipes-bsp/u-boot/files to:

  • Consume the device tree using register x1, the TF-A default.

  • Provide a minimal, generic defconfig for FVPs, vexpress_fvp_defconfig.

  • Enable the real-time clock for the VExpress64 boards by default.

The same device tree is exposed to Linux or Xen in the UEFI system table.

Xen

Xen is a type-1 hypervisor, providing services that allow multiple operating systems to execute on the same computer hardware concurrently. Responsibilities of the Xen hypervisor include memory management and CPU scheduling of all virtual machines (domains), and launching the most privileged domain (Dom0), the only virtual machine which by default has direct access to hardware. From Dom0 the hypervisor can be managed and unprivileged domains (DomU) can be launched.

On startup, the GRUB2 configuration uses the "chainloader" command to instruct the UEFI services provider (U-Boot) to load and run Xen as an EFI application. Xen then reads its configuration (xen.cfg) from the boot partition of the virtio disk, which contains the boot arguments for Xen and Dom0 to start the whole system.

The Arm Memory Partitioning and Monitoring (MPAM) extension is enabled in Xen. MPAM is an optional extension to Armv8.4 and later versions. It defines a method that software can use to apportion and monitor the shared resources of the memory system (typically cache capacity and memory bandwidth). Domains can be assigned dedicated system level cache (SLC) slices so that cache contention between multiple domains is mitigated.


../_images/xen_mpam_structure.svg
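The effect of a cache portion bit mask (CPBM) can be illustrated with simple bit arithmetic: each set bit grants the domain one SLC portion, and two domains contend in the cache only where their masks overlap. The helpers below are illustrative only and are not part of Xen.

```c
#include <stdint.h>

/* Number of SLC portions granted by a cache portion bit mask (CPBM).
 * For example, a CPBM of 0xf grants the first four portions. */
int cpbm_portions(uint32_t cpbm)
{
    int n = 0;
    while (cpbm) {
        n += cpbm & 1u;
        cpbm >>= 1;
    }
    return n;
}

/* Two domains can contend in the SLC only where their CPBMs overlap,
 * so disjoint masks isolate their cache allocations from each other. */
int cpbm_overlaps(uint32_t a, uint32_t b)
{
    return (a & b) != 0;
}
```

Under this model, giving Dom0 a CPBM of 0xf and a DomU a CPBM of 0xf0 assigns each four portions with no shared cache capacity between them.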

The stack offers several methods for users to configure MPAM for domains:
  • For Dom0, an optional Xen command line parameter dom0_mpam can be used to configure the cache portion bit mask (CPBM) for Dom0. The format of the dom0_mpam parameter is:

    dom0_mpam=slc:<CPBM in hexadecimal>
    

    To use the dom0_mpam parameter, users can add this parameter to the options of the [xen] section in xen.cfg config file. An example to assign the first 4 portions of SLC to Dom0 at Xen boot time is shown below:

    [xen]
    options=(...) dom0_mpam=slc:0xf
    
  • There is a set of sub-commands in “xl” to allow users to use MPAM at runtime. Users can use the xl psr-hwinfo command to query the system information of MPAM, and use xl psr-cat-set or xl psr-cat-show to configure or read the CPBM for Dom0 and DomU at runtime.

    The format of xl psr-cat-set is (-l 0 refers to SLC):

    xl psr-cat-set -l 0 <Domain ID> <CPBM in hexadecimal>
    

    The format of xl psr-cat-show is (-l 0 refers to SLC):

    xl psr-cat-show -l 0
    

    For more detailed information on the sub-commands, please refer to the --help output of each sub-command.

For limitations of the MPAM support in Xen, please refer to the Limitations section of the changelog.

Xen is only included in the Virtualization Reference Stack Architecture as described in Reference Stack Overview.

Downstream Changes

Patches for Xen device passthrough support are included at yocto/meta-rd-n2-automotive/dynamic-layers/virtualization-layer/recipes-extended/xen/files/ to:

  • Reserve statically allocated DMA memory in the guest for passthrough devices that are not connected to an SMMU.

Patches for Xen MPAM extension support are included at yocto/meta-rd-n2-automotive/dynamic-layers/virtualization-layer/recipes-extended/xen/files/ to:

  • Discover the MPAM CPU feature

  • Initialize MPAM at Xen boot time

  • Support MPAM in Xen tools to apply the domain MPAM configuration in userspace at runtime

Linux Kernel

Remoteproc

A remoteproc driver for the Armv8-R64 processor is added to the Linux kernel. It supports RPMsg communication between the Armv9.0-A processors and the Armv8-R64 processor. More details on the communication can be found in the HIPC section.

Virtual Network over RPMsg

In order to allow applications to access the remote processor using network sockets, a virtual network device over RPMsg is introduced. The rpmsg_net kernel module is added to create the virtual network device and convert RPMsg data to network data.

Virtual Network in the Xen DomU

In the virtio module, vring is forced to use DMA operations in Xen domains. This causes incorrect DMA page mappings, because DMA is not used for the armv8r64_remoteproc-based virtio RPMsg. A patch is added to fix this by checking the IOMMU capability before forcing a virtio device to use DMA operations.
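The fix described above amounts to a guard of the following shape. This is an illustrative sketch of the decision logic only, not the actual kernel patch; the struct and function names are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical, simplified virtio device descriptor. */
struct vdev {
    bool behind_iommu;        /* device sits behind an IOMMU/SMMU */
    bool platform_forces_dma; /* platform forces DMA, e.g. a Xen domain */
};

/* Only honor the platform's request to force DMA-API usage for vring
 * buffers when the device can actually be translated by an IOMMU;
 * otherwise keep direct (non-DMA) access to avoid bad page mappings. */
bool vring_use_dma(const struct vdev *d)
{
    return d->platform_forces_dma && d->behind_iommu;
}
```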

Downstream Changes

The Armv8r64_remoteproc and rpmsg_net drivers can be found at components/primary_compute/linux_drivers.

The DMA operation fix patch can be found at yocto/meta-rd-n2-automotive/recipes-kernel/linux/files.

Safety Island

Zephyr

Zephyr is an open source real-time operating system based on a small footprint kernel designed for use on resource-constrained and embedded systems.

The Reference Stack uses Zephyr 3.2.0 as a baseline and introduces a new board fvp_rdn2_automotive_cortex_r82 for the RD-N2-Automotive FVP. It reuses the fvp_aemv8r SoC support and adds a pair of Kconfig symbols for MPU device region configuration.

The Zephyr image for this board runs on the Armv8-R64 processor. In order to enable communication with the Armv9-A processors, a set of drivers is added to Zephyr by means of an out-of-tree module. More details on the communication can be found in the HIPC section.

MHUv2

The Arm Message Handling Unit Version 2 (MHUv2) is a mailbox controller for inter-processor communication. In the RD-N2-Automotive FVP, there are on-chip MHUv2 devices for signaling between the Armv9-A and Armv8-R64 clusters, using the doorbell protocol. A driver is added to the Zephyr inter-processor mailbox framework to support this device.

Virtual Network over RPMsg

A veth_rpmsg driver is added for network-socket-based communication between the Armv9-A and Armv8-R64 clusters. It implements an RPMsg backend using the OpenAMP library and an adaptation layer that converts RPMsg data to network data.

Zperf sample

The zperf sample can be used to stress test inter-processor communication over a virtual network on the RD-N2-Automotive FVP. The board overlay dts and configuration file are added to this sample. This sample needs to be used together with iperf on the Armv9-A side for network performance testing.

Downstream Changes

The board support for fvp_rdn2_automotive_cortex_r82 is located at components/safety_island/zephyr/src/boards.

The out-of-tree driver for virtual network over RPMsg is located at components/safety_island/zephyr/src/drivers/ethernet.

The out-of-tree driver for MHUv2 device is located at components/safety_island/zephyr/src/drivers/ipm.

The zperf application configuration is located at components/safety_island/zephyr/src/apps.

The MPU region configuration patch is located at yocto/meta-rd-n2-automotive/recipes-kernel/zephyr-kernel/files/zephyr.

References

[1] Power Control System Architecture - DEN0050C (please contact Arm directly to obtain a copy of this document)