How to get software built and delivered for the Lab

To integrate software into the Lab, COs only need to prepare a Spack package (a Python file) that details all the build instructions Spack needs in order to perform the build and install operations successfully.

Once the Spack package for a component has been created and verified to build locally, it is submitted to the Technical Coordination (TC) repository in the EBRAINS GitLab, which orchestrates the central build and delivery process through GitLab CI so that the component can be installed in the Lab.

Creating Spack packages

Dedicated documentation and instructions on how to create a Spack package can be found in the official Spack documentation (usage, packaging guide, packaging tutorial, specs).

Spack packages of components already integrated into the Lab can be used as reference; they are available in the EBRAINS Spack repository.

As explained in the documentation, there are two key Spack concepts that developers/COs need to provide for each tool:

  • Spack Spec: An expression for describing a specific build of software.
  • Spack Package: A Python module that describes how software can be built according to a spec.

Spack packages are essentially installation scripts: inside them, developers encapsulate the build logic of a piece of software, including the different versions, dependencies, options, compilers, platforms, etc., while specs allow the users of the software to specify the particular build they need. In short, a package is a recipe that translates a spec into build logic.

Spack provides a number of package template classes tailored to different build systems (Python, Makefile, CMake, Autotools, Waf etc.; see here and here). A Spack package can inherit from one of these classes to simplify its definition, depending on the build system used by each EBRAINS component.

To facilitate creating a new package, Spack provides the:

spack create <url>

command, which examines the URL of the package’s source code archive, parses the package name, looks for available versions and checksums, guesses the build system used, and generates a boilerplate package template containing all this information.

The developers then need to:

  1. provide some general information, such as a doc string that describes the package, a homepage, and the maintainers of the package
  2. for each dependency of the package, add a depends_on() call to indicate that it needs to be installed before the package. For any dependencies that are not available in the Spack upstream, it is the CO’s responsibility to also provide Spack packages for those.
  3. override the build system defaults, for example to specify configuration arguments. This depends strongly on the build system used: for Python packages that are available on PyPI this step is not necessary, while for CMake packages the cmake_args() function can be overridden to add additional flags to the cmake call.
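The steps above can be sketched in a minimal package.py. This is a hedged illustration only: the tool name "examplesim", its URL, versions, checksum placeholder, and dependencies are all hypothetical, and depending on the Spack version the import line may instead be from spack import *.

```python
# Minimal Spack package sketch for a hypothetical CMake-based tool "examplesim".
# All names, URLs, versions, and dependencies below are placeholders.
from spack.package import *


class Examplesim(CMakePackage):
    """A short doc string describing what the tool does."""

    homepage = "https://example.org/examplesim"
    url = "https://example.org/examplesim/archive/v1.0.0.tar.gz"
    maintainers = ["your-gitlab-handle"]

    # checksum as reported by `spack checksum` (placeholder here)
    version("1.0.0", sha256="0000000000000000000000000000000000000000000000000000000000000000")

    # dependencies that must be installed before this package
    depends_on("cmake@3.18:", type="build")
    depends_on("python@3.8:", type=("build", "run"))

    def cmake_args(self):
        # override build-system defaults with extra configure flags
        return [self.define("BUILD_TESTING", True)]
```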

Spack provides very detailed tutorials and extended documentation on creating packages. Please refer to the sources above for more information.

Defining Spack Tests

Another important aspect of Spack packages in the context of this document is the definition of Spack tests that run after a tool is installed to verify that it behaves as expected. While in general it is not mandatory to define tests in Spack packages, it is crucial to be able to run automated tests on new experimental software after it is deployed in environments such as the EBRAINS Lab, and thus this is one of the requirements for including an EBRAINS tool in the official EBRAINS Spack-based repository. Specifically, it is required that each Spack package includes at least one installation-phase test. Providing additional build-time or standalone tests is encouraged.

Depending on the build system used for each package, Spack can run some default tests when installing with the --test flag. For example, for Python packages, Spack attempts to import all of the modules that get installed, while for Makefile, CMake, and Autotools packages, Spack checks for the presence of a check or test target and runs make check or make test respectively.

These default tests are a good first step to ensure that EBRAINS tools have been installed correctly. However, since most EBRAINS packages, according to the EBRAINS Software Quality guidelines, are expected to provide a test suite including unit and integration tests, it is important to also make use of those test suites when installing Spack packages, to ensure that the installation is sufficiently tested. To write custom install-phase tests in addition to the built-in build system installation-phase tests, a new test function needs to be defined in the Spack package, using two decorators for each phase test method:

  • @run_after('build') or @run_after('install') tells Spack when in the installation process to run the test method, namely after the specified phase.

  • @on_package_attributes(run_tests=True) tells Spack to only run the checks when the --test option is provided on the command line.
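Put together, the two decorators wrap a test method like this (a minimal sketch; the method name is arbitrary and the body is a placeholder):

```python
# Generic pattern for a post-install check inside a Spack package.
# It only runs when the --test option is passed to `spack install`.
@run_after("install")
@on_package_attributes(run_tests=True)
def installcheck(self):
    # any logic that verifies the installed tool behaves as expected
    pass
```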

Some examples for the most common build systems used in EBRAINS components:

Python Packages

By default, after the installation phase Spack will run a test() function that attempts to import all of the modules that get installed. To also make use of any existing tests defined for the package (e.g. with pytest), a new function needs to be defined in the Spack package as follows:
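A sketch of such a function, assuming the package ships a pytest suite; the tests directory path is hypothetical:

```python
# Install-phase test for a Python package: run the project's pytest suite.
# The decorators restrict execution to installs with the --test option.
@run_after("install")
@on_package_attributes(run_tests=True)
def install_test(self):
    # "tests" is a placeholder for the package's actual test directory
    python("-m", "pytest", "tests")
```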

Note: The skip_modules property can be used to specify any modules that should be skipped when running tests:
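For example (the module name is hypothetical):

```python
# Modules that should not be import-tested, e.g. optional extras
# that are only importable when extra dependencies are present.
skip_modules = ["examplesim.gui"]
```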

CMake packages

By default, after the build phase Spack will check for the presence of a check or test target and run make check or make test respectively. To add an installation phase test, a new function needs to be defined in the Spack package as follows (an additional build phase test can be added in the same way):
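A possible sketch, assuming the tool installs a binary into the prefix; the executable name "examplesim" is a placeholder:

```python
# Install-phase test for a CMake package: run the installed binary
# and fail the installation if it errors out.
@run_after("install")
@on_package_attributes(run_tests=True)
def install_test(self):
    example = Executable(self.prefix.bin.examplesim)
    example("--version")
```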

Note: in some cases, the make test target is expected to run after the installation phase, not after the build phase. In that case, the default Spack test will run make test before the installation and fail. To solve this issue, the check() function needs to be overridden in the package.py, as follows:
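One way to sketch this override: disable the build-phase check and re-run the target after installation instead.

```python
# Skip the default build-phase `make test` ...
def check(self):
    pass

# ... and run it after the installation phase instead.
@run_after("install")
@on_package_attributes(run_tests=True)
def installcheck(self):
    with working_dir(self.build_directory):
        make("test")
```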

On-boarding on the Technical Coordination repository and pipeline

Requirements for onboarding on the Technical Coordination repository and pipeline

The current process for on-boarding components to the TC repository and build flow pipeline is through Merge Requests (MRs): from their personal forks of the appropriate TC repository, COs submit their Spack package(s) and spec(s) against the master branch of the official EBRAINS Spack repository.

Specifically:

Fork Spack Repository

The CO forks the official EBRAINS Spack repository on GitLab and creates a new branch with a name that indicates the feature they are working on (e.g. add-<packagename>, update-<packagename>).

Visibility and Permissions

The CO makes sure that either the fork’s visibility is set to internal or at least one member of the Technical Coordination team is a member of the project. This is important both for resolving conflicts and for monitoring the results of the executed automatic pipelines.

Create Spack Package

The CO creates a Spack package for their tool (package.py) and adds the Spack spec to the spack.yaml file. Note that for the Spack specs, COs need to include in the MR only those edits to the spack.yaml file that concern their own component. For components that are already in the Spack upstream, the TC pipeline can pull the required package directly from the upstream, and the CO only needs to add the spec to the spack.yaml file.
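The spec addition amounts to one extra line in the specs list of spack.yaml; a hedged excerpt, with a hypothetical component name and version:

```yaml
# Excerpt of spack.yaml: add only the spec for your own component
spack:
  specs:
    # ... existing EBRAINS specs stay untouched ...
    - examplesim@1.0.0   # hypothetical new component spec
```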

Create Merge Request

A Merge Request is issued from the CO’s fork to the master branch of the TC repository. All Merge Requests to the master branch automatically trigger a pipeline that attempts to build the new package on top of the existing EBRAINS Spack environment. If the pipeline fails, the CO is expected to investigate and try to resolve the issue.

Merge Request Evaluation

If the pipeline is successful, the TC team verifies that the MR affects only aspects of the component it concerns and, if this evaluation is successful, the MR is accepted.

Build and Delivery

Acceptance of the MR triggers the build and delivery of the component to the lab-int environment.