
LK II Infrastructures

Overview

Since its start in 2008, the LHC has accumulated a record luminosity of roughly 30 fb⁻¹ and thus delivered a very rich scientific programme to the experiments ALICE, ATLAS, CMS and LHCb, which culminated in the announcement of the discovery of a Higgs boson on July 4th, 2012. Numerous other results were obtained by the general-purpose experiments ATLAS and CMS, by the heavy-flavour experiment LHCb and by the heavy-ion experiment ALICE. This success was made possible by the outstanding performance of the LHC, the early and comprehensive understanding of the detector performance, and the availability of a highly performing computing environment, the Worldwide LHC Computing Grid (WLCG), which can be considered an experiment in its own right. At the current level of statistical sensitivity, the results on the one hand beautifully confirm the Standard Model and on the other hand emphasise the need for more data.

After its restart in 2015 with increased luminosity, the LHC will almost double its collision energy from 8 TeV at the end of the first running period to 13 TeV and eventually 14 TeV. Lead–lead collisions will be provided at 5.5 TeV per nucleon pair. Collision rates will increase significantly; not only will the rate of hard collisions increase with the high-energy machine running, but so will the rate of parasitic collisions, i.e. the rate of pile-up events. It is expected that for every pp collision some 50 or more pile-up interactions will be recorded at the same time. Consequently, the data volume will increase significantly and will require appropriate measures to be taken. A long-term plan for the future operation of the LHC has been laid out and is shown in Fig. 1. By 2030, the accumulated luminosity will have grown by two orders of magnitude compared to today; 3000 fb⁻¹ can be envisaged for each of the experiments ATLAS and CMS. Indeed, the LHC is only at the beginning of its experimental programme.
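
The quoted pile-up figure follows from a simple rate estimate: the mean number of inelastic interactions per bunch crossing is mu = L × sigma_inel / f_bx, i.e. the instantaneous luminosity times the inelastic pp cross-section divided by the bunch-crossing rate. The short sketch below illustrates this relation; the numerical inputs (a peak luminosity of 2×10³⁴ cm⁻²s⁻¹, an inelastic cross-section of about 80 mb and roughly 2800 colliding bunches) are illustrative assumptions, not official machine parameters.

# Rough estimate of the mean pile-up per bunch crossing: mu = L * sigma_inel / f_bx.
# All numbers used here are illustrative assumptions, not official LHC parameters.

MILLIBARN_TO_CM2 = 1e-27  # 1 mb = 1e-27 cm^2

def mean_pileup(lumi_cm2_s: float, sigma_inel_mb: float, bunches: int,
                revolution_hz: float = 11245.0) -> float:
    """Mean number of inelastic pp interactions per bunch crossing."""
    crossing_rate = bunches * revolution_hz                       # crossings per second
    interaction_rate = lumi_cm2_s * sigma_inel_mb * MILLIBARN_TO_CM2  # interactions per second
    return interaction_rate / crossing_rate

if __name__ == "__main__":
    # Assumed: peak luminosity 2e34 cm^-2 s^-1, ~80 mb inelastic cross-section,
    # ~2800 colliding bunches at 25 ns spacing.
    mu = mean_pileup(2e34, 80.0, 2800)
    print(f"mean pile-up per crossing: {mu:.0f}")  # roughly 50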

The enormous scientific success of the LHC experiments was made possible by a distributed, grid-based computing model, developed and steadily evolved over the last 20 years, which allows the rapid processing and analysis of huge amounts of data by scientists from all parts of the world. For the first time in the history of physics, a truly distributed computing infrastructure came into successful operation. It consists of a global collaboration of more than 150 computing centres in 36 countries, organised within the Worldwide LHC Computing Grid (WLCG). The vast majority of the computing activities required to harvest the physics results in the data provided by the LHC experiments were carried out outside of CERN. The distributed computing for the LHC is organised in a tier structure consisting of the Tier-0 centre at CERN and national, or in some cases transnational, Tier-1 and Tier-2 centres with a clear definition of responsibilities. The Tier-1 centres are mainly responsible for custodial data storage and centrally coordinated reprocessing campaigns of raw data, and they constitute important nodes for data distribution. Tier-1 centres also provide tape storage for the archival of all types of data, ranging from raw or simulated data to derived data sets originating from reconstruction and selection processes. Moreover, they provide central grid services such as file transfer services or file catalogues. The German Tier-1 centre GridKa at KIT is in addition responsible for operating and enhancing the helpdesk and support platform Global Grid User Support (GGUS), which is used by WLCG and other communities.

Tier-2 centres provide large amounts of compute power and disk storage to support physicists performing data analysis. They also provide the largest part of the CPU resources for the production of simulated data. In terms of computing and storage resources, the sum of the German Tier-2 centres roughly equals the German Tier-1 centre GridKa. High reliability, availability and stability are further important requirements for all Tier-1 and Tier-2 centres. An important part of this analysis infrastructure in Germany is the National Analysis Facility (NAF) at DESY, which provides interactive services, direct access to the Tier-2 data sets and, by means of parallel file systems, high input/output bandwidth to the data sets relevant for the final physics analysis. The participating Helmholtz centres DESY and KIT have demonstrated in the past that their staff are capable of operating such data and computing facilities in a way that compares very favourably at the international level.
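
To make this division of labour concrete, the following toy sketch encodes the tier roles described above as a small data structure. The site names and responsibility lists are illustrative paraphrases of the text, not an actual WLCG topology or configuration.

# Toy model of the WLCG tier structure described above; entries are illustrative,
# the real WLCG comprises more than 150 computing centres in 36 countries.
from dataclasses import dataclass, field

@dataclass
class Site:
    """Minimal model of a WLCG site and its main responsibilities."""
    name: str
    tier: int
    responsibilities: list[str] = field(default_factory=list)

SITES = [
    Site("CERN", 0, ["Tier-0 operations", "raw-data export to Tier-1 centres"]),
    Site("GridKa (KIT)", 1, ["custodial tape storage", "raw-data reprocessing",
                             "data distribution", "file transfer and catalogue services",
                             "GGUS helpdesk operation"]),
    Site("German Tier-2 (example)", 2, ["user data analysis", "Monte Carlo production",
                                        "disk storage for analysis data sets"]),
]

def sites_at_tier(tier: int) -> list[Site]:
    """Return all modelled sites registered at the given tier level."""
    return [site for site in SITES if site.tier == tier]

for site in sites_at_tier(1):
    print(f"{site.name}: " + ", ".join(site.responsibilities))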

With an expected start in 2015, a new high-rate experiment, Belle II at the SuperKEKB accelerator in Japan, will go into operation and complement the LHCb studies of the properties of heavy quarks and of the matter–antimatter asymmetry. Belle II is a large international collaboration with a strong German participation consisting of nine groups from universities, Max Planck Institutes and Helmholtz centres. In terms of expected data rates, this experiment will even exceed the rates of a single LHC experiment.

An adequate share of the worldwide distributed data and computing infrastructure for the analysis of the data produced by the experiments at the LHC and at Belle II is an indispensable ingredient that enables each national community to participate successfully in physics analyses.