L1 Scouting Analysis with HTCondor
This repository demonstrates an analysis workflow for the n-tuples produced by L1 trigger scouting with the CMS experiment. To save time and resources, the production of TH1D histogram objects from the n-tuples is parallelised with HTCondor; the resulting histograms are then combined into a plot for the chosen object and the relevant cuts.
First, clone the repository and check out the master branch:
git clone https://gitlab.cern.ch/tcritchl/summer_student_2024.git
cd summer_student_2024
git checkout master
Once cloned, the repository should work directly out of the box. The packages required to run the analysis can be sourced directly from CMSSW, which is activated as usual from the /src directory of a CMSSW release:
cmsenv
To perform the analysis once the environment is set up, edit the config.json file to set the configurables. First define the object on which to perform the analysis, either "muon" or "e/g". Then set "n-tuple_dir" to the directory containing the n-tuples to analyse. The "output_dir" dictates where the histograms produced throughout the analysis are written (preferably on EOS to ensure sufficient disk space). Lastly, set the "dtype" to either zero-bias ("zb") or selection-stream ("sel_stream").
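As an illustration, a configuration with the keys described above might look as follows. The paths are placeholders, and the key name used for the object ("object" here) is an assumption; check config.json in the repository for the exact key names.

```json
{
    "object": "muon",
    "n-tuple_dir": "/eos/user/<username>/ntuples/",
    "output_dir": "/eos/user/<username>/histograms/",
    "dtype": "zb"
}
```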
Once the configuration is complete, ensure that the Stage1_Hist.sh bash script has execute permissions:
chmod u+x Stage1_Hist.sh
Then run it as usual:
./Stage1_Hist.sh
Running this script creates a unique condor submission script and associated bash script for each n-tuple in the specified "n-tuple_dir". The location of these scripts can be configured, but is set by default to Analysis/analysis_bash. The output logs of the condor jobs are saved to the logs directory.
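For reference, each generated submission script will resemble a standard HTCondor submit file along these lines. All file names, arguments, and settings here are illustrative assumptions, not the exact contents produced by Stage1_Hist.sh:

```
# Hypothetical per-n-tuple HTCondor submit file (illustrative only)
executable   = run_ntuple_0.sh      # generated bash script for one n-tuple
arguments    = ntuple_0.root        # the n-tuple this job processes
output       = logs/ntuple_0.out
error        = logs/ntuple_0.err
log          = logs/ntuple_0.log
request_cpus = 1
+JobFlavour  = "microcentury"       # ~1 h cap; jobs should finish in < 30 min
queue
```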
Once the jobs run, a unique TH1D ROOT histogram object is produced per n-tuple and stored in the specified output directory. The HTCondor jobs should run in under 30 minutes. Once they have finished, simply run the Combine_Hist.py script as:
python3 Combine_Hist.py
on lxplus9 or higher, using the cmsenv setup. A ROOT file will be produced containing a combined histogram object that sums all of the stage 1 TH1D histograms. The combined ROOT file is stored in the same "output_dir" as the other stage 1 histograms.
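The combination step amounts to a bin-wise sum of the stage 1 histograms (in ROOT this is typically done with TH1::Add or hadd). A minimal pure-Python sketch of the idea, with histograms represented as plain bin-content lists rather than ROOT objects, is:

```python
# Sketch of the stage 2 combination step: bin-wise summing of
# per-n-tuple histograms into one combined histogram. Histograms
# here are plain lists of bin contents; the real script operates
# on TH1D objects read from the stage 1 ROOT files.

def combine_histograms(histograms):
    """Sum a list of equally binned histograms bin by bin."""
    if not histograms:
        raise ValueError("no stage 1 histograms to combine")
    n_bins = len(histograms[0])
    if any(len(h) != n_bins for h in histograms):
        raise ValueError("histograms must share the same binning")
    combined = [0.0] * n_bins
    for hist in histograms:
        for i, content in enumerate(hist):
            combined[i] += content
    return combined

# Example: three per-n-tuple histograms with four bins each.
stage1 = [
    [1.0, 0.0, 2.0, 3.0],
    [0.0, 1.0, 1.0, 0.0],
    [2.0, 2.0, 0.0, 1.0],
]
print(combine_histograms(stage1))  # [3.0, 3.0, 3.0, 4.0]
```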
A plot aligned with the specified selections, binning, and axis ranges will also be produced and stored in the output directory.