Validation

Build-Time Kernel Configuration Check

After the kernel configuration has been produced during the build, it is checked to validate that the kernel configuration options required by specific Cassini functionalities are present.

A list of required kernel configs is used as a reference and compared against the list of configs available in the kernel build. All reference configs need to be present either as a module (=m) or built-in (=y). A BitBake warning message is produced if the kernel is not configured as expected.
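
As a minimal illustrative sketch (not the actual bbclass implementation), the comparison is equivalent to the following shell loop, assuming a hypothetical reference file listing one CONFIG_ option name per line and the kernel's generated .config file:

# Warn for each reference option that is neither built-in (=y) nor a module (=m)
while read -r cfg; do
    grep -qE "^${cfg}=(y|m)$" .config || echo "WARNING: ${cfg} not enabled as y or m"
done < reference-configs.txt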

The following kernel configuration checks are performed:

  • Container engine support:

    Check performed via: meta-cassini-distro/classes/containers_kernelcfg_check.bbclass. By default, the Yocto Docker config is used as the reference.

  • K3s orchestration support:

    Check performed via: meta-cassini-distro/classes/k3s_kernelcfg_check.bbclass. By default, the Yocto K3s config is used as the reference.

Run-Time Integration Tests

The meta-cassini-tests Yocto layer contains recipes and configuration for including run-time integration tests in a Cassini distribution, to be run manually after booting the image.

The Cassini run-time integration tests are a mechanism for validating Cassini core functionalities. The integration test suites included in the Cassini distribution image are detailed in Test Suites below.

The tests are built as a Yocto Package Test (ptest), and implemented using the Bash Automated Test System (BATS).

Run-time integration tests are not included in a Cassini distribution image by default, and must instead be included explicitly. See Run-Time Integration Tests within the Build System documentation for details on how to include the tests.

The test suites are executed using the test user account, which has sudo privileges. More information about user accounts can be found at User Accounts.

Note

Container Engine and K3s Orchestration tests require access to the internet, e.g. to download container images from external image hubs.

Note

When running on platforms with limited performance, the default Linux networking services may time out before they can initialize properly. The base image provides a helper script to make sure the network is working before tests are run.

sudo wait-online.sh eth0

This step is currently necessary on Corstone-1000 platforms (FVP and MPS3).

Preparing the device

Before running the tests, the device under test should be reset to make sure no unnecessary processes are running. In addition, when using the Corstone-1000 for MPS3, the secure flash used by the Platform Security Architecture API Tests should be wiped. The process for doing this is described in Clean Secure Flash Before Testing.

Running the Tests

If the tests have been included in the Cassini distribution image, they may be run via the ptest framework, using the following command after booting the image and logging in:

ptest-runner [-t timeout] [test-suite-id]

If the test suite identifier ([test-suite-id]) is omitted, all integration tests will be run. For example, running ptest-runner produces output such as the following:

$ ptest-runner
START: ptest-runner
[...]
PASS:container-engine-integration-tests
[...]
PASS:k3s-integration-tests
[...]
PASS:user-accounts-integration-tests
[...]
STOP: ptest-runner

Note

ptest-runner -l is a useful command to list the available test suites in the image.

Note

[-t timeout] specifies a timeout in seconds, and must be supplied if a test suite takes longer than the default of 300 seconds. Use the duration estimates given for each test suite below to set this value.
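
For example, to allow up to 15 minutes (900 seconds) for the container engine test suite:

ptest-runner -t 900 container-engine-integration-tests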

Alternatively, a single standalone test suite may be run via a runner script included in the test suite directory:

/usr/share/[test-suite-id]/run-[test-suite-id]

Upon completion of the test suite, the script will output a result indicator, either PASS:[test-suite-id] or FAIL:[test-suite-id], and will return an appropriate exit status.
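
For example, to run the container engine test suite standalone and then inspect its exit status:

/usr/share/container-engine-integration-tests/run-container-engine-integration-tests
echo $?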

A test suite consists of one or more ‘top-level’ BATS tests, which may be composed of multiple assertions, where each assertion is considered a named sub-test. If a sub-test fails, its individual result will be included in the output in a similar format. In addition, if a test fails, debugging information of type DEBUG will be provided in the output. The format of these results is described in Test Logging.

Test Logging

Test suite execution outputs results and debugging information into a log file. As the test suites are executed using the test user account, this log file will be owned by the test user and located in the test user’s home directory by default, at:

/home/test/runtime-integration-tests-logs/[test-suite-id].log

Therefore, reading this file as another user will require sudo access. The location of the log file for each test suite is customizable, as described in the detailed documentation for each test suite below. The log file is replaced on each new execution of a test suite.
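
For example, to read the container engine test suite log as a user other than test:

sudo cat /home/test/runtime-integration-tests-logs/container-engine-integration-tests.log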

The log file will record the results of each top-level integration test, as well as a result for each individual sub-test up until a failing sub-test is encountered.

Each top-level result is formatted as:

TIMESTAMP RESULT:[top_level_test_name]

Each sub-test result is formatted as:

TIMESTAMP RESULT:[top_level_test_name]:[sub_test_name]

Where TIMESTAMP is of the format %Y-%m-%d %H:%M:%S (see Python Datetime Format Codes), and RESULT is either PASS, FAIL, or SKIP.

On a test failure, a debugging message of type DEBUG will be written to the log. The format of a debugging message is:

TIMESTAMP DEBUG:[top_level_test_name]:[return_code]:[stdout]:[stderr]

Additional informational messages may appear in the log file with INFO or DEBUG message types, e.g. to log that an environment clean-up action occurred.
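
As a purely illustrative (hypothetical) excerpt following the formats above, where the test and sub-test names, timestamps, and outputs are examples only:

2024-01-01 12:00:00 PASS:run container:run detached workload
2024-01-01 12:00:01 PASS:run container
2024-01-01 12:00:05 DEBUG:container network connectivity:1:(stdout):(stderr)
2024-01-01 12:00:05 FAIL:container network connectivity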

Test Suites

The test suites are detailed below.

Container Engine Tests

Duration: up to 10 min

The container engine test suite is identified as:

container-engine-integration-tests

for execution via ptest-runner or as a standalone BATS suite, as described in Running the Tests.

The test suite is built and installed in the image according to the following BitBake recipe: meta-cassini-tests/recipes-tests/runtime-integration-tests/container-engine-integration-tests.bb.

Currently the test suite contains the following top-level integration tests, which run consecutively in the order listed.

1. run container is composed of four sub-tests:
   1.1. Run a containerized detached workload via the docker run command
        - Pull an image from the network
        - Create and start a container
   1.2. Check the container is running via the docker inspect command
   1.3. Remove the running container via the docker remove command
        - Stop the container
        - Remove the container from the container list
   1.4. Check the container is not found via the docker inspect command
2. container network connectivity is composed of a single sub-test:
   2.1. Run a containerized, immediate (non-detached) network-based workload via the docker run command
        - Create and start a container, re-using the existing image
        - Update package lists within the container from the external network
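
For reference, the steps above correspond roughly to the following manual docker commands; the container name and sleep workload are hypothetical examples, and the image is the default CE_TEST_IMAGE:

# Pull the image, then create and start a detached container
docker run -d --name ce-test docker.io/library/alpine sleep 300
# Check the container is running
docker inspect -f '{{.State.Status}}' ce-test
# Stop and remove the container
docker rm -f ce-test
# Confirm the container is no longer found (this now reports an error)
docker inspect ce-test
# Non-detached network-based workload: update package lists inside a container
docker run docker.io/library/alpine apk update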

The tests can be customized via environment variables passed to the execution, each prefixed with CE_ to identify the variable as associated with the container engine tests:

CE_TEST_IMAGE: defines the container image
  Default: docker.io/library/alpine
CE_TEST_LOG_DIR: defines the location of the log file
  Default: /home/test/runtime-integration-tests-logs/
  Directory will be created if it does not exist
CE_TEST_CLEAN_ENV: enable test environment clean-up
  Default: 1 (enabled)
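
For example, to run the suite standalone against a different (hypothetical) image with environment clean-up disabled:

CE_TEST_IMAGE=docker.io/library/busybox CE_TEST_CLEAN_ENV=0 /usr/share/container-engine-integration-tests/run-container-engine-integration-tests
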
Container Engine Environment Clean-Up

A clean environment is expected when running the container engine tests. For example, if the target image already exists within the container engine environment, then the functionality to pull the image over the network will not be validated. Similarly, containers left running by previous (failed) tests may interfere with subsequent test executions.

Therefore, if CE_TEST_CLEAN_ENV is set to 1 (as is default), running the test suite will perform an environment clean before and after the suite execution.

The environment clean operation involves:

  • Determining and removing all running containers based on the image given by CE_TEST_IMAGE

  • Removal of the image given by CE_TEST_IMAGE, if it exists

If enabled, the environment clean operations will always be run, regardless of test-suite success or failure.
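
A minimal sketch of an equivalent manual clean-up, assuming the default CE_TEST_IMAGE:

# Remove any containers based on the test image, then remove the image itself
docker ps -a -q --filter ancestor=docker.io/library/alpine | xargs -r docker rm -f
docker rmi -f docker.io/library/alpine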

K3s Orchestration Tests

Duration: up to 3 min

The K3s test suite is identified as:

k3s-integration-tests

for execution via ptest-runner or as a standalone BATS suite, as described in Running the Tests.

The test suite is built and installed in the image according to the following BitBake recipe: meta-cassini-tests/recipes-tests/runtime-integration-tests/k3s-integration-tests.bb.

Currently the test suite contains a single top-level integration test which validates the deployment and high-availability of a test workload based on the Nginx web server.

The K3s integration tests consider a single-node cluster, which runs a K3s server together with its built-in worker agent. The containerized test workload is therefore deployed to this node for scheduling and execution.

The test suite will not be run until the appropriate K3s services are in the ‘active’ state, and all ‘kube-system’ pods are either running, or have completed their workload.
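
A minimal sketch of an equivalent manual readiness check, assuming the K3s server runs as a systemd service named k3s:

systemctl is-active k3s
kubectl get pods -n kube-system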

1. K3s container orchestration is composed of many sub-tests, grouped here by test area:
   Workload Deployment:
     1.1. Deploy the test Nginx workload from a YAML file via kubectl apply
     1.2. Ensure Pods are initialized via kubectl wait
     1.3. Create a NodePort Service to expose the Deployment via kubectl create service
     1.4. Get the IP of the node(s) running the Deployment via kubectl get
     1.5. Ensure the web service is accessible on the node(s) via wget
   Deployment Upgrade:
     1.6. Check the initial image version of the running Deployment via kubectl get
     1.7. Get all pre-upgrade Pod names running the Deployment via kubectl get
     1.8. Upgrade the image version of the Deployment via kubectl set
     1.9. Ensure a new set of Pods has been started via kubectl wait and kubectl get
     1.10. Check Pods are running the upgraded image version via kubectl get
     1.11. Ensure the web service is still accessible on the node(s) via wget
   Server Failure Tolerance:
     1.12. Stop the K3s server's systemd service via systemctl stop
     1.13. Ensure the web service is still accessible on the node(s) via wget
     1.14. Restart the systemd service via systemctl start
     1.15. Check the K3s server is again responding to kubectl get
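
For reference, selected steps correspond roughly to the following manual commands; the Deployment name, label, YAML file, image tag, and node address are all hypothetical examples:

kubectl apply -f nginx-test.yaml
kubectl wait --for=condition=Ready pods -l app=nginx-test
kubectl create service nodeport nginx-test --tcp=80:80
kubectl get nodes -o wide
wget http://<node-ip>:<node-port>
kubectl set image deployment/nginx-test nginx=nginx:1.23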

The tests can be customized via environment variables passed to the execution, each prefixed with K3S_ to identify the variable as associated with the K3s orchestration tests:

K3S_TEST_LOG_DIR: defines the location of the log file
  Default: /home/test/runtime-integration-tests-logs/
  Directory will be created if it does not exist
K3S_TEST_CLEAN_ENV: enable test environment clean-up
  Default: 1 (enabled)

Note

The K3s integration tests are only supported when the K3s cloud service is selected.

K3s Environment Clean-Up

A clean environment is expected when running the K3s integration tests, to ensure that the system is ready to be validated. For example, the test suite expects that the Pods created from any previous execution of the integration tests have been deleted, in order to test that a new Deployment successfully initializes new Pods for orchestration.

Therefore, if K3S_TEST_CLEAN_ENV is set to 1 (as is default), running the test suite will perform an environment clean before and after the suite execution.

The environment clean operation involves:

  • Deleting any previous K3s test Service

  • Deleting any previous K3s test Deployment, ensuring corresponding Pods are also deleted

If enabled, the environment clean operations will always be run, regardless of test-suite success or failure.
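
A minimal sketch of an equivalent manual clean-up, with hypothetical resource names:

kubectl delete service nginx-test --ignore-not-found
kubectl delete deployment nginx-test --ignore-not-found
kubectl wait --for=delete pods -l app=nginx-test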

User Accounts Tests

Duration: up to 10 min

The User Accounts test suite is identified as:

user-accounts-integration-tests

for execution via ptest-runner or as a standalone BATS suite, as described in Running the Tests.

The test suite is built and installed in the image according to the following BitBake recipe: meta-cassini-tests/recipes-tests/runtime-integration-tests/user-accounts-integration-tests.bb.

The test suite validates that the user accounts described in User Accounts are correctly configured with appropriate access permissions on the Cassini distribution image. The validation performed by the test suite depends on whether or not the image has been configured with Cassini security hardening.

As the configuration of user accounts is modified for a Cassini distribution image built with Cassini security hardening, additional security-related validation is included in the test suite for such an image. These additional tests validate that the appropriate password requirements are enforced, and that the umask configuration for permission control of newly created files and directories is applied correctly.

The test suite therefore contains the following integration tests:

1. user accounts management tests is composed of three sub-tests:
   1.1. Check home directory permissions are correct for the default non-privileged Cassini user account, via the filesystem stat utility
   1.2. Check the default privileged Cassini user account has sudo command access
   1.3. Check the default non-privileged Cassini user account does not have sudo command access
2. user accounts management additional security tests is only included for images configured with Cassini security hardening, and is composed of the following sub-tests:
   2.1. Log in to a local console using the non-privileged Cassini user account
        - As part of the log-in procedure, validate the user is prompted to set an account password
   2.2. Check that the umask value is set correctly
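
For reference, rough manual equivalents of some of these checks (the account name user is a hypothetical example):

stat -c '%a %U' /home/user   # home directory permissions and owner
sudo -l -U user              # list (or deny) the account's sudo access
umask                        # display the current mask value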

The tests can be customized via environment variables passed to the execution, each prefixed with UA_ to identify the variable as associated with the user accounts tests:

UA_TEST_LOG_DIR: defines the location of the log file
  Default: /home/test/runtime-integration-tests-logs/
  Directory will be created if it does not exist
UA_TEST_CLEAN_ENV: enable test environment clean-up
  Default: 1 (enabled)

User Accounts Environment Clean-Up

As the user accounts integration tests only modify the system for images built with Cassini security hardening, clean-up operations are only performed when running the test suite on these images.

In addition, the clean-up operations will only occur if UA_TEST_CLEAN_ENV is set to 1 (as is default).

The environment clean-up operations for images built with Cassini security hardening are:

  • Reset the password for the test user account

  • Reset the password for the non-privileged Cassini user account

After the environment clean-up, the user accounts will return to their original state where the first log-in will prompt the user for a new account password.
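
A minimal sketch of how such a reset might be performed manually (account names are examples):

sudo passwd -d test   # delete the current password
sudo passwd -e test   # expire it, so the next log-in prompts for a new password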

If enabled, the environment clean operations will always be run, regardless of test-suite success or failure.

Parsec Simple End-to-End Tests

Duration: up to 5 hours

The Parsec simple end-to-end test suite is identified as:

parsec-simple-e2e-tests

for execution via ptest-runner or as a standalone BATS suite, as described in Running the Tests.

The test suite is built and installed in the image according to the following BitBake recipe: meta-cassini-tests/recipes-tests/runtime-integration-tests/parsec-simple-e2e-tests.bb.

The test suite validates the Parsec service in the Cassini distribution image by running the simple end-to-end tests available in parsec-tool.
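
For example, the parsec-tool ping subcommand provides a quick manual check that the Parsec service is responding before the suite is run:

parsec-tool ping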

The tests can be customized via environment variables passed to the execution, each prefixed with PS_ to identify the variable as associated with the Parsec simple end-to-end tests:

PS_TEST_LOG_DIR: defines the location of the log file
  Default: /home/test/runtime-integration-tests-logs/
  Directory will be created if it does not exist
PS_TEST_CLEAN_ENV: enable test environment clean-up
  Default: 1 (enabled)

Parsec Simple End-to-End Tests Environment Clean-Up

The clean-up operations will only occur if PS_TEST_CLEAN_ENV is set to 1 (as is default).

Currently, no clean-up is required, as the simple end-to-end tests script parsec-cli-tests.sh cleans up temporary files before exiting.

If enabled, the environment clean operations will always be run, regardless of test-suite success or failure.

Platform Security Architecture API Tests

Duration: up to 1 hour

The Platform Security Architecture API test suite is identified as:

psa-arch-tests

for execution via ptest-runner or as a standalone BATS suite, as described in Running the Tests.

The test suite is built and installed in the image according to the following BitBake recipe: meta-cassini-tests/recipes-tests/runtime-integration-tests/psa-arch-tests.bb.

The test suite validates the security requirements of the PSA Certified APIs on Arm-based platforms, using the tests available in psa-api-tests.

The tests can be customized via environment variables passed to the execution, each prefixed with PSA_ARCH_TESTS_ to identify the variable as associated with the PSA API tests:

PSA_ARCH_TESTS_TEST_LOG_DIR: defines the location of the log file
  Default: /home/test/runtime-integration-tests-logs/
  Directory will be created if it does not exist
PSA_ARCH_TESTS_TEST_CLEAN_ENV: enable test environment clean-up
  Default: 1 (enabled)

Platform Security Architecture API Tests Environment Clean-Up

The clean-up operations will only occur if PSA_ARCH_TESTS_TEST_CLEAN_ENV is set to 1 (as is default).

Currently, no clean-up is required, as each API test cleans up temporary files before exiting.

If enabled, the environment clean operations will always be run, regardless of test-suite success or failure.

OP-TEE Sanity Tests

Duration: up to 1 hour

The OP-TEE Sanity test suite is identified as:

optee-xtests

for execution via ptest-runner or as a standalone BATS suite, as described in Running the Tests.

The test suite is built and installed in the image according to the following BitBake recipe: meta-cassini-tests/recipes-tests/runtime-integration-tests/optee-xtests.bb.

The test suite runs the TEE sanity test suite in Linux using the Arm TrustZone technology, via the tests available in optee-xtests.

The tests can be customized via environment variables passed to the execution, each prefixed with OPTEE_XTEST_ to identify the variable as associated with the OP-TEE xtests:

OPTEE_XTEST_TEST_LOG_DIR: defines the location of the log file
  Default: /home/test/runtime-integration-tests-logs/
  Directory will be created if it does not exist
OPTEE_XTEST_TEST_CLEAN_ENV: enable test environment clean-up
  Default: 1 (enabled)

OP-TEE Sanity Tests Environment Clean-Up

The clean-up operations will only occur if OPTEE_XTEST_TEST_CLEAN_ENV is set to 1 (as is default).

Currently, no clean-up is required, as the xtests clean up temporary files before exiting.

If enabled, the environment clean operations will always be run, regardless of test-suite success or failure.