Create a new test chain

This section provides a brief tutorial on how to create a new test chain.

Check that your LHCb applications run correctly

As a first step, check that the applications involved in the test can be run correctly from the LHCbIntegrationTests package. This can be done by adding a simple test chain similar to the Moore+DaVinci test. If such a test already exists, this step can be skipped.

###############################################################################
# (c) Copyright 2000-2018 CERN for the benefit of the LHCb Collaboration      #
#                                                                             #
# This software is distributed under the terms of the GNU General Public      #
# Licence version 3 (GPL Version 3), copied verbatim in the file "COPYING".   #
#                                                                             #
# In applying this licence, CERN does not waive the privileges and immunities #
# granted to it by virtue of its status as an Intergovernmental Organization  #
# or submit itself to any jurisdiction.                                       #
###############################################################################
# Test that the chain Moore - DaVinci is functional
get_project_version(Moore)
get_project_version(DaVinci)

add_test(NAME Moore+Davinci.Moore
         COMMAND ${Moore_run} gaudirun.py)
set_property(TEST Moore+Davinci.Moore PROPERTY TIMEOUT 600)


add_test(NAME Moore+Davinci.DaVinci
         COMMAND ${DaVinci_run} gaudirun.py)
set_property(TEST Moore+Davinci.DaVinci
             APPEND PROPERTY DEPENDS Moore+Davinci.Moore)
set_property(TEST Moore+Davinci.DaVinci PROPERTY TIMEOUT 600)

Prepare your scripts

LHCbIntegrationTests is built as a simple shell meant to check only the integration between LHCb software projects. For this reason, and also for better code maintenance, all the .py scripts used in the tests should live in a dedicated directory in their related application, such as DaVinciTests for DaVinci. In this way they can be more easily adapted to follow the most recent updates. Furthermore, the level of redundancy inside the LHCb software is reduced, since the same script can also be used for other tests. The .yaml option files containing the configurations of the various steps can instead be saved here, since they could differ from the ones used for the tests in the main application.

The .py script and the .yaml option file can be written following the most recent lbexec syntax; many tests and tutorials are already available in the DaVinci and Moore repositories. The Utilities directory is designed to collect a set of common helper functions in order to simplify test development; please have a look and, if you need to write functions or scripts that could also be used by other tests, consider putting your code in this folder.
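For instance, a common helper could factor out the construction of recurring functor collections. The sketch below is purely illustrative: the module name and function are hypothetical, not an existing Utilities helper, and it only reuses the FunTuple and Functors calls appearing in the example that follows.

# Hypothetical helper, e.g. in the Utilities directory (name and location are
# illustrative only, not an existing LHCbIntegrationTests module).
import Functors as F
from FunTuple import FunctorCollection


def child_pt_collection(n_children: int, prefix: str = "LOKI") -> FunctorCollection:
    """Return a FunctorCollection with the PT of the first n_children daughters."""
    return FunctorCollection(
        {f"{prefix}_daug{i}_PT": F.CHILD(i, F.PT) for i in range(1, n_children + 1)}
    )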

An example of the .py file is reported below (updated to March 2023; see here for the latest version):

import Functors as F
from FunTuple import FunctorCollection
from FunTuple import FunTuple_Particles as Funtuple
import FunTuple.functorcollections as FC
from DaVinci.algorithms import create_lines_filter
from DaVinci import Options, make_config
from PyConf.reading import get_particles


def main(options: Options):
    process = options.input_process
    line = (
        "SpruceB2OC_BdToDsmK_DsmToHHH_FEST"
        if process == "Spruce"
        else "Hlt2Charm_D0ToKmPip"
    )
    prefix = "Spruce" if process == "Spruce" else "HLT2"
    b_name = "B0" if process == "Spruce" else "D0"
    daug1_name = "D_s-" if process == "Spruce" else "K-"
    daug2_name = "K+" if process == "Spruce" else "pi+"

    data = get_particles(f"/Event/{prefix}/{line}/Particles")
    fields = {
        "B": f"[{b_name} -> {daug1_name} {daug2_name}]CC",
        "daug1": f"[{b_name} -> ^{daug1_name} {daug2_name}]CC",
        "daug2": f"[{b_name} -> {daug1_name} ^{daug2_name}]CC",
    }

    variables_b = FunctorCollection(
        {
            "LOKI_daug1_PT": F.CHILD(1, F.PT),
            "LOKI_daug2_PT": F.CHILD(2, F.PT),
        }
    )

    variables_extra = FunctorCollection(
        {"LOKI_NTRCKS_ABV_THRSHLD": "NINTREE(ISBASIC & (PT > 15*MeV))"}
    )
    variables_b += variables_extra

    # FunTuple: make functor collection from the imported functor library Kinematics
    variables_all = FC.Kinematics()

    # FunTuple: associate functor collections to field (branch) name
    variables = {
        "ALL": variables_all,  # adds variables to all fields
        "B": variables_b,
        "daug1": variables_extra,
        "daug2": variables_extra,
    }

    tuple_dv = Funtuple(
        name=f"Tuple_{process.lower()}",
        tuple_name="DecayTree",
        fields=fields,
        variables=variables,
        inputs=data,
    )

    filter_dv = create_lines_filter("HDRFilter_DV", lines=[f"{line}"])
    algs = [filter_dv, tuple_dv]

    return make_config(options, algs)

An example of the .yaml file:

###############################################################################
# (c) Copyright 2023 CERN for the benefit of the LHCb Collaboration           #
#                                                                             #
# This software is distributed under the terms of the GNU General Public      #
# Licence version 3 (GPL Version 3), copied verbatim in the file "COPYING".   #
#                                                                             #
# In applying this licence, CERN does not waive the privileges and immunities #
# granted to it by virtue of its status as an Intergovernmental Organization  #
# or submit itself to any jurisdiction.                                       #
###############################################################################

input_files:
- passthrough_all_lines_persistreco.dst
input_manifest_file: passthrough_all_lines_persistreco.tck.json
histo_file: dv_from_turbopass_histos.root
ntuple_file: dv_from_turbopass_ntuple.root
input_process: TurboPass

Running your test

To run the test, it is sufficient to write a new CMakeLists.txt file inside your directory with all the instructions to be followed. For example, a test that runs DaVinci and creates an ntuple via FunTuple can be implemented as:

get_project_version(DaVinci)

set(src_dir ${CMAKE_CURRENT_SOURCE_DIR})
set(util_dir ${src_dir}/../Utilities/)

# Run DaVinci and create a tuple from the HLT2 step output
add_test(NAME Tupling_default.TupleFromTurbo.Run
         COMMAND ${util_dir}/logscript.sh Tupling_default.TupleFromTurbo.stdout.log Tupling_default.TupleFromTurbo.stderr.log
         ${DaVinci_run} lbexec DaVinciTests.options_lhcbintegration_mc_excl_line:main ${util_dir}/jobMC_options.yaml+${src_dir}/option_davinci_tupling_from_turbo.yaml)
set_property(TEST Tupling_default.TupleFromTurbo.Run
            APPEND PROPERTY DEPENDS Tupling_default.RunPassThrough.Run)
set_property(TEST Tupling_default.TupleFromTurbo.Run PROPERTY TIMEOUT 3600)

add_test(NAME Tupling_default.TupleFromTurbo.Validate
         COMMAND ${util_dir}/validator.sh Tupling_default.TupleFromTurbo.stderr.log)
set_property(TEST Tupling_default.TupleFromTurbo.Validate
            APPEND PROPERTY DEPENDS Tupling_default.TupleFromTurbo.Run)

add_test(NAME Tupling_default.TupleFromTurbo.ValidateTuple
         COMMAND ${DaVinci_run} python ${util_dir}/validator_tuple.py TurboPass)
set_property(TEST Tupling_default.TupleFromTurbo.ValidateTuple
            APPEND PROPERTY DEPENDS Tupling_default.TupleFromTurbo.Run)

In this case, the test has been split into three steps: the first step effectively runs the test, while the other two check that no errors occurred and that the final ntuple was generated correctly. Each step is implemented as a separate test with the add_test function, which accepts two arguments: the name of the test and the command to be run, identified by NAME and COMMAND, respectively. In addition, it is possible to configure each step with the set_property command, which accepts the name of the test of interest and the property to be updated, like the timeout or a dependency on another test.

In the first step, the DaVinci job is run via the lbexec command with the usual syntax, i.e. passing a function and the .yaml option file. As mentioned before, the .py file should be available in the main application folder, in this case the DaVinciTests directory inside DaVinci, while the .yaml file can be saved inside the source directory, in this case Tupling_default. In order to save the stdout and the stderr of the test separately, the helper logscript.sh is invoked in front of the lbexec command. Two properties have been set: a timeout of 3600 seconds and a dependency on the test used to generate the input .dst file used by DaVinci.
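logscript.sh itself is a shell helper; purely as an illustration of what it does, an equivalent in Python could look like the sketch below (the argument order mirrors the CMake call above: first the two log files, then the command to run).

# Illustration only: a Python equivalent of the logscript.sh idea, i.e. run a
# command while redirecting its stdout and stderr to two separate log files.
import subprocess
import sys


def run_and_log(cmd, stdout_log, stderr_log):
    with open(stdout_log, "w") as out, open(stderr_log, "w") as err:
        return subprocess.run(cmd, stdout=out, stderr=err).returncode


if __name__ == "__main__":
    # e.g. run_and_log(["gaudirun.py"], "step.stdout.log", "step.stderr.log")
    sys.exit(run_and_log(sys.argv[3:], sys.argv[1], sys.argv[2]))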

The second step checks whether any error has been written to the stderr produced by the first step; also in this case, the validator.sh helper script can be used. The last step checks whether there is any error in the generated ntuple, specifically asserting that all the expected variables have been written and that their values are not NaN. Both these tests depend on the one run in the first step.
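As an indication of the kind of check performed in the last step, a much simplified sketch is shown below. It is not the actual validator_tuple.py; the file, tree and branch names are assumptions based on the example configuration above, and ROOT is assumed to be available in the DaVinci environment.

# Simplified sketch of a tuple validation (not the actual Utilities/validator_tuple.py).
# It assumes the tuple produced above is stored as Tuple_turbopass/DecayTree
# and that a B_PT branch is among the expected variables.
import math
import sys

import ROOT


def validate(path, tree_name="Tuple_turbopass/DecayTree", branches=("B_PT",)):
    f = ROOT.TFile.Open(path)
    if not f or f.IsZombie():
        print(f"ERROR: cannot open {path}")
        return 1
    tree = f.Get(tree_name)
    if not tree:
        print(f"ERROR: tree {tree_name} not found")
        return 1
    names = {b.GetName() for b in tree.GetListOfBranches()}
    missing = [b for b in branches if b not in names]
    if missing:
        print(f"ERROR: missing branches {missing}")
        return 1
    for entry in tree:
        if any(math.isnan(getattr(entry, b)) for b in branches):
            print("ERROR: NaN value found")
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(validate("dv_from_turbopass_ntuple.root"))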

Finally, the three steps of the test can be run with:

run make test ARGS="-R LHCbIntegrationTests.Tupling_default.TupleFromTurbo"

or, if you are working on an LHCb stack:

make fast/LHCbIntegrationTests/test ARGS="-R Tupling_default.TupleFromTurbo"

A complete example of a CMakeLists.txt file, containing the full test chain, can be found in Tupling_default/CMakeLists.txt.

Some steps of your chain could be identical to those of other already existing test chains, and therefore there is no need to duplicate the scripts. An example can be found in Tupling_veloSP, where all the steps except the first one are identical to those in Tupling_default. In this case you can configure your common steps by including the option files available in another folder, as:

get_project_version(Moore)

set(src_dir ${CMAKE_CURRENT_SOURCE_DIR})
set(util_dir ${src_dir}/../Utilities/)
set(base_dir ${src_dir}/../Tupling_default/)

# Start the test chain running HLT1 Allen on a MC .digi file from the TestFileDB.
add_test(NAME Tupling_veloSP.RunHLT1.Run
         COMMAND ${util_dir}/logscript.sh Tupling_veloSP.RunHLT1.stdout.log Tupling_veloSP.RunHLT1.stderr.log
         ${Moore_run} lbexec Moore.production:allen_hlt1_production ${util_dir}/jobMC_options.yaml+${src_dir}/option_moore_hlt1_allen.yaml hlt1_pp_veloSP)

# Run the HLT2 on the HLT1 step output, testing the MC workflow.
add_test(NAME Tupling_veloSP.RunHLT2.Run
         COMMAND ${util_dir}/logscript.sh Tupling_veloSP.RunHLT2.stdout.log Tupling_veloSP.RunHLT2.stderr.log
         ${Moore_run} lbexec Moore.production:hlt2 ${util_dir}/jobMC_options.yaml+${base_dir}/option_moore_hlt2_all_lines.yaml)
set_property(TEST Tupling_veloSP.RunHLT2.Run
            APPEND PROPERTY DEPENDS Tupling_veloSP.RunHLT1.Run)

In this case the first test runs HLT1 with configuration options different from the default version, while the second test is run including the .yaml file from the Tupling_default directory, since no changes are needed for the HLT2 step. A complete example of a CMakeLists.txt file, containing the full test chain, can be found in Tupling_veloSP/CMakeLists.txt.

Provide a short documentation

As a last step, add a README.md file inside your new directory describing the aim of the test and reporting a description of the various steps required. An example can be found in Tupling_default/README.md. Finally, include the .md file in the LHCb Integration Tests documentation, updating the file existing_chains.rst.