NIKHEF Physics Data Processing - Advanced Computing for Research
Physics today would be inconceivable without computing and intensive data
processing. From the analysis of LHC data, at 30 petabytes per year,
to the long-term archival of unique results and the calculation of exact
theoretical predictions, data processing is all around us. And that data
processing is collaborative, global, and at the edge of what is technically
feasible (and sometimes just beyond what is possible ...)
ICT has been a core activity at Nikhef since its beginning, and Nikhef is a founding partner of the Dutch national e-Infrastructure, now coordinated by SURF. Together we host the Dutch LHC Tier-1 facility, and Nikhef is a colocation facility for over 180 public and private networks, connecting peers across the world, including many of our key science partners. Nikhef Physics Data Processing also supports the national e-Infrastructure through joint projects, including FuSE and formerly BiG Grid.
Nikhef also promotes e-Science solutions and infrastructure for collaboration such as Federated Identity and Access Management, and participates in many joint projects in Europe and the Netherlands.
- National e-Infra Services - coordinated by SURF
- LHC Computing Grid Project - Enabling LHC Data Processing
- EGI: the European Grid Infrastructure - operating the pan-European Advanced Computing Infrastructure for Research
Besides the global federated infrastructure, Nikhef also operates a large local analysis cluster ("Stoomboot") and an associated high-performance disk system of about five petabytes, based on dCache ("/dcache").
If you are working at Nikhef, computing and storage in the Netherlands, in the LHC Computing Grid, and the services of the European e-Infrastructure are available to you:
- All tools you need are installed on stbc-i*.nikhef.nl.
- Get an electronic identity from the GEANT TCS authority with federated SSO login. If you already have a certificate from any of the IGTF-accredited CAs, you can use that one too.
- Import the certificate into your browser (see here for how) and register with your experiment - ask your group leader for this if needed. This may take up to 48 hours. If your community is not listed there, ask the PDP team or go to the SURF Grid documentation pages.
- Abide by the Nikhef Acceptable Use Policy and that of our federation partners.
- Start using your experiment software framework. For example, ATLAS analysis requires the use of DonQuijote2 for data management, LHCb uses DIRAC, and ALICE uses AliEn.
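The certificate-import step above can be sketched on the command line. This is a minimal sketch assuming OpenSSL is available; for demonstration it generates a throwaway self-signed key pair, whereas in practice usercert.pem and userkey.pem come from the GEANT TCS (or another IGTF-accredited CA), and all file names here are conventional examples rather than required paths.

```shell
# Stand-in for the real CA-issued pair: a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo user" -keyout userkey.pem -out usercert.pem

# Bundle key and certificate into PKCS#12, the format browsers import:
openssl pkcs12 -export -passout pass:changeit \
    -in usercert.pem -inkey userkey.pem \
    -name "grid certificate" -out browsercert.p12

# Grid tools conventionally look for the PEM pair under ~/.globus,
# with restrictive permissions on the private key:
mkdir -p "$HOME/.globus"
cp usercert.pem "$HOME/.globus/"
cp userkey.pem  "$HOME/.globus/"
chmod 400 "$HOME/.globus/userkey.pem"
```

After importing browsercert.p12 into your browser, you can register with your experiment's virtual organisation; on a grid user interface, a short-lived proxy for your experiment is then typically created with the VOMS client tools.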
You can browse the documentation to tackle more complex use cases. For more information, you can contact the PDP team in rooms H1.50 - H1.59, or by e-mail at any time to email@example.com. This is also the proper e-mail address for support questions and bug reports.
The NIKHEF Data Processing Facility, NDPF, comprises a single production facility (LCG2ELPROD) and some smaller experimental systems. The facility contains over 8000 CPU cores of compute power and about 5000 TB of disk space. All of this is connected with 1.2 terabit/s of interconnect network, and 250 Gbps of internetworking bandwidth is available. The NDPF offers high-throughput compute (HTC) services, both platform and cloud, and a high-throughput storage (HTS) service. Through our federation partners you can request access to a larger range of services at SURFsara.
There are three main lines of research and engineering of the NIKHEF PDP group:
- Systems at Scale: research on large, highly distributed systems
- Federated Identity and Access Management for research, and Infrastructure for Collaboration
- Scalable multi-domain security and site access control (see also our Wiki)
- Operations of large scale infrastructures