..
# SPDX-FileCopyrightText: Copyright (c) 2023-2025, Linaro Limited.
#
# SPDX-FileCopyrightText: Copyright 2022-2024 Arm Limited and/or its
# affiliates
#
# SPDX-License-Identifier: MIT
########################################
Build, Deploy and Validate Cassini Image
########################################
The recommended approach for image build setup and customization is to use the
kas build tool. To support this, Cassini provides configuration files to
set up and build different target images, enable different distribution image
features, and set the associated parameter configurations.
This page first briefly describes the kas configuration files provided with
Cassini, before giving guidance on using those configuration files to set up
the Cassini distribution on a target platform.
.. note::
All command examples on this page can be copied by clicking the copy button.
Any console prompts at the start of each line, comments, or empty lines will
be automatically excluded from the copied text.
The ``kas`` directory contains configuration files to support building and
customizing Cassini distribution images via the kas build tool. These
configuration files contain default parameter settings for a Cassini
distribution build. The files are briefly introduced below, classified by type:
* **Base Configs**: Configure common software components
* ``cassini.yml`` to build an image for the Cassini distribution.
* **Build Cloud Configs**: Set and configure the container runtime and cloud service
* ``no-cloud.yml`` to include the default container runtime without a cloud service.
* ``greengrass.yml`` to include |Docker|_ and the |AWS IoT Greengrass V2|_ cloud service.
If no cloud config is specified, |Docker|_ and |K3s orchestration|_ are included
by default.
* **Build OTA Update Config**: Set and configure the over-the-air (OTA) update service
* ``no-ota.yml`` to remove the over-the-air update service.
If no OTA config is specified, |Mender|_ is included by default.
* **Build Modifier Configs**: Set and configure features of the Cassini
distribution
* ``dev.yml`` to configure the image for development using |debug tweaks|_
and disable :doc:`../developer_manual/security_hardening`.
* ``tests.yml`` to include the run-time validation tests in the image,
together with |debug tweaks|_.
* **Target Platform Configs**: Set the target platform
For information on the supported targets in Cassini and the corresponding value
of the ``MACHINE`` variable, refer to :ref:`target_platforms_label`.
These kas configuration files can be used to build a custom Cassini distribution
by passing the **Base Config** and one **Target Platform Config** to the
kas build tool. **Build Cloud Configs**, the **Build OTA Update Config** and
**Build Modifier Configs** are optional (at most one of each can be included).
Configuration files are separated by a colon on the kas command line, as shown
in the example below, where each placeholder stands for one of the configuration
file types described above:
.. code-block:: text
kas build <Base Config>:<Build Cloud Config>:<Build OTA Update Config>:<Build Modifier Config>:<Target Platform Config>
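For instance, a development image that keeps the default container runtime,
orchestration and OTA services could be built with a command along the lines of
the one below; the target platform configuration file is a placeholder to be
replaced with the file for the chosen platform:
.. code-block:: text
kas build kas/cassini.yml:kas/dev.yml:<Target Platform Config>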
In the next section, guidance is provided for configuring, building and
deploying Cassini distributions using these kas configuration files.
****************************
Build Host Environment Setup
****************************
This documentation assumes an Ubuntu-based Build Host, where the build steps
have been validated on Ubuntu 20.04 LTS (Focal Fossa) and Ubuntu 22.04 LTS
(Jammy Jellyfish).
A number of package dependencies must be installed on the Build Host to run
build scenarios via the Yocto Project. The Yocto Project documentation
provides the |list of essential packages|_ together with a command for their
installation.
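At the time of writing, the essential packages can be installed on an Ubuntu
Build Host with a command similar to the one below; the linked Yocto Project
page remains the authoritative package list for the release being built:
.. code-block:: console
sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential \
chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils \
iputils-ping python3-git python3-jinja2 libsdl1.2-dev python3-subunit \
mesa-common-dev zstd liblz4-tool file locales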
The recommended approach for building Cassini is to use the kas build tool. To
install kas:
.. code-block:: console
:substitutions:
python3 -m pip install kas==|kas version|
For more details on kas installation, see |kas Dependencies & installation|_.
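A quick way to confirm that kas has been installed and is on the ``PATH`` is to
query its version, which should match the version installed above:
.. code-block:: console
kas --version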
To deploy a Cassini distribution image onto a supported target platform,
``bmap-tools`` is used. This can be installed via:
.. code-block:: console
sudo apt install bmap-tools
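The platform guides give the exact deployment commands for each target. As an
illustration only, ``bmap-tools`` is typically invoked as shown below, where the
image file name and the device node of the target storage media are
placeholders:
.. code-block:: console
sudo bmaptool copy <image>.wic.gz /dev/<device>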
.. note::
The Build Host should have at least 65 GBytes of free disk space to build a
Cassini distribution image.
********
Download
********
The ``meta-cassini`` repository can be downloaded using Git, via:
.. code-block:: shell
:substitutions:
# Change the tag or branch to be fetched by replacing the value supplied to
# the --branch parameter option
git clone |meta-cassini remote| --branch |meta-cassini branch|
cd meta-cassini
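Once the repository has been cloned, the kas configuration files described
earlier on this page can be listed to confirm what is available on the
checked-out branch or tag:
.. code-block:: console
ls kas/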
.. _build_label:
****************
Build and Deploy
****************
Refer to the platform guides for instructions on how to build and deploy the
Cassini images on supported platforms:
* :ref:`Getting Started with Arm Corstone-1000 for MPS3 `
* :ref:`Getting Started with Arm Corstone-1000 FVP `
* :ref:`Getting Started with KV260 `
* :ref:`Getting Started with Generic Arm64 Images `
***
Run
***
To run the deployed Cassini distribution image, simply boot the target platform.
The Cassini distribution image can be logged into as the ``cassini`` user.
The distribution can then be used for deployment and orchestration of
application workloads in order to achieve the desired use-cases.
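Once logged in, the running distribution can be identified, for example by
inspecting the OS release information; the exact values reported depend on the
image that was built:
.. code-block:: console
cat /etc/os-release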
********
Validate
********
As an initial validation step, check that the appropriate systemd services are
running successfully:
* ``docker.service``
* ``k3s.service`` is available unless a cloud config is
included as part of the build config.
* ``greengrass.service`` is available if ``greengrass.yml`` is
included as part of the build config.
A service can be checked by running the following command, replacing
``<service name>`` with the service to be checked:
.. code-block:: console
systemctl status --no-pager --lines=0 <service name>
The command output should list the service as active and running.
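Alternatively, the relevant services can be checked in a single step with
``systemctl is-active``, which prints ``active`` for each running service; omit
any service that is not part of the chosen build config:
.. code-block:: console
systemctl is-active docker.service k3s.service greengrass.service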
More thorough run-time validation of Cassini components is provided as a series
of integration tests, available if the ``kas/tests.yml`` kas
configuration file was included in the image build.
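Assuming those validation tests are packaged following the standard Yocto
Project ptest convention, they can be listed and run on the target with
``ptest-runner``, for example:
.. code-block:: console
sudo ptest-runner -l
sudo ptest-runner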
.. note::
Due to performance limitations, K3s is not currently supported on
the Arm Corstone-1000.
*********************************
Reproducing the Cassini Use-Cases
*********************************
This section briefly demonstrates simplified use-case examples, where detailed
instructions for developing, deploying, and orchestrating application workloads
are left to the external documentation of the relevant technology.
Deploying Application Workloads via Docker and K3s
==================================================
This example deploys the |Nginx|_ web server as an application workload, using
the ``nginx`` container image available from Docker's default image repository.
The deployment can be achieved either via Docker or via K3s, as follows:
1. Boot the image and log in as the ``cassini`` user.
2. Ensure the target device can access the internet:
.. code-block:: console
wget www.linaro.org
The output should be similar to:
.. code-block:: console
--2023-12-02 12:42:10-- http://www.linaro.org/
Resolving www.linaro.org... 18.165.227.69, 18.165.227.126, 18.165.227.43, ...
Connecting to www.linaro.org|18.165.227.69|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.linaro.org/ [following]
--2023-12-02 12:42:10-- https://www.linaro.org/
Connecting to www.linaro.org|18.165.227.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 54811 (54K) [text/html]
Saving to: 'index.html'
index.html 100%[===============>] 53.53K 323KB/s in 0.2s
2023-12-02 12:42:26 (323 KB/s) - 'index.html' saved [54811/54811]
3. Deploy the example application workload:
* **Deploy via Docker**
3.1. Run the following example command to deploy via Docker:
.. code-block:: console
sudo docker run -p 8082:80 -d nginx
3.2. Confirm the Docker container is running by checking its ``STATUS``
in the container list:
.. code-block:: console
sudo docker container list
* **Deploy via K3s**
3.1. Run the following example command to deploy via K3s:
.. code-block:: console
cat << EOT > nginx-example.yml && sudo kubectl apply -f nginx-example.yml
apiVersion: v1
kind: Pod
metadata:
  name: k3s-nginx-example
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8082
EOT
3.2. Confirm that the K3s Pod hosting the container is running by
checking that its ``STATUS`` is ``Running``, using:
.. code-block:: console
sudo kubectl get pods -o wide
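Should the Pod not reach the ``Running`` state, its events and container
logs can help diagnose the problem, for example:
.. code-block:: console
sudo kubectl describe pod k3s-nginx-example
sudo kubectl logs k3s-nginx-example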
4. After the Nginx application workload has been successfully deployed, it can
be interacted with on the network, for example via:
.. code-block:: console
wget localhost:8082
.. note::
As both methods deploy a web server listening on port 8082, they cannot be run
simultaneously; one deployment must be stopped before the other can start.
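To free the port again, stop the running deployment before starting the other.
The Docker container ID below is a placeholder, obtained from the container
list shown earlier:
.. code-block:: console
# Stop the Docker-based deployment
sudo docker stop <CONTAINER ID>
# Remove the K3s-based deployment
sudo kubectl delete pod k3s-nginx-example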
.. note::
Due to performance limitations, K3s is not currently supported on
the Arm Corstone-1000.