Physics today would be inconceivable without computing and intensive data processing. From the analysis of LHC data, at 30 petabytes per year, to the long-term archival of unique results and the calculation of exact theoretical predictions, data processing is all around us. And that data processing is collaborative, global, and at the edge of what is technically feasible (and sometimes just beyond what’s possible …)
The Nikhef Data Processing Facility (NDPF) offers federated and local high-throughput compute and data services. It provides the Netherlands Tier-1 service for the LHC, is a main node for XENON and Virgo data, and serves Dutch science within the Dutch National e-Infrastructure coordinated by SURF. It also provides the infrastructure for Stoomboot, the Nikhef analysis cluster and high-throughput storage environment.
Together with our partners we host the Dutch LHC Tier-1 facility and act as a colocation facility for over 180 public and private networks, connecting peers across the world, including many of our key science partners. Nikhef Physics Data Processing also supports this infrastructure through joint projects, including FuSE and formerly BiG Grid.
Provisioned on top of a terabit-speed network, connected globally, and embedded in our NikhefHousing datacenter, the facility, with its hardware systems and underlying cloud layer, follows demand and can address ‘exotic’ use cases. For unique experiments, we work with many systems and network vendors to push the limits of tomorrow’s computing.
Nikhef also promotes e-Science solutions and infrastructure for collaboration such as Federated Identity and Access Management, and participates in many joint projects in Europe and the Netherlands.
- National e-Infra Services coordinated by SURF
- Worldwide LHC Computing Grid enabling LHC computing
- EGI Advanced Computing for Research, operating the European e-Infrastructure for Research Computing
Besides the global federated infrastructure, Nikhef also operates a large local analysis cluster (“Stoomboot”) and an associated high-performance disk system of about five petabytes, based on dCache (“/dcache”).
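On systems where the dCache pool appears as a mounted filesystem, ordinary POSIX tools suffice for staging data. A minimal sketch, assuming the “/dcache” mount point from the text above; the experiment directory and file name are hypothetical:

```shell
# Sketch: stage an input file from the dCache-backed storage to local scratch.
# The /dcache mount point follows the text above; the experiment directory
# and file name below are hypothetical placeholders.
SRC=/dcache/myexperiment/data/run001.root
SCRATCH=${TMPDIR:-/tmp}

if [ -e "$SRC" ]; then
    cp "$SRC" "$SCRATCH/"    # read once from dCache, then work on the local copy
else
    echo "input $SRC not available on this machine"
fi
```

Staging to local scratch before heavy I/O keeps repeated random reads off the shared storage system.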
If you are working at Nikhef, computing and storage services in the Netherlands, in the LHC Computing Grid, and in the European e-Infrastructure are available to you.
- Local users start with the Computing Course, and can use the tools installed on the stbc-i* interactive nodes and CVMFS on all desktops.
- You can get your personal identity certificate from the Trusted Certificate Service, based on your Nikhef credentials, instantly. But if you already have a certificate from any of the IGTF accredited CAs, you can use that one as well.
- Register with your collaboration or experiment by importing your certificate in your browser and enrolling in your VOMS or IAM service - ask your group leader to approve it, if needed. This may take up to 48 hours. If your community is not listed there, ask the PDP team or go to the SURF grid documentation pages.
- Abide by the Acceptable Use Policy of Nikhef and that of our federation partners.
- Start using your experiment software framework. For example, ATLAS analysis requires the use of DonQuijote2 for data management, LHCb uses DIRAC, and ALICE uses AliEn.
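Once your certificate is registered with your community, a typical first step before using grid services is to create a short-lived VOMS proxy from it. A minimal sketch, assuming the VOMS client tools are installed and using “atlas” as an example VO name — substitute your own experiment’s VO:

```shell
# Create a 24-hour proxy carrying your VO attributes, then inspect it.
# The VO name "atlas" is an example; the guard keeps the sketch harmless
# on machines without the VOMS clients installed.
if command -v voms-proxy-init >/dev/null 2>&1; then
    voms-proxy-init --voms atlas --valid 24:00
    voms-proxy-info --all
    VOMS_TOOLS=present
else
    VOMS_TOOLS=absent      # VOMS clients not installed on this machine
fi
```

The proxy, not your long-lived certificate, is what jobs and data transfers present to remote services.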
You can browse the Grid documentation to tackle the more complex use cases. For more information, contact the PDP team, or send e-mail any time to firstname.lastname@example.org. This is also the proper e-mail address for support questions and bug reports.
Services and systems
The Nikhef Data Processing Facility, NDPF, comprises a single production facility (LCG2ELPROD) and some smaller experimental systems. The facility contains over 8000 CPU cores of compute power and about 5 petabytes of disk space.
All of this is connected by 1.2 terabit/s of interconnect network, and 250 Gbit/s of external networking bandwidth is available. The NDPF offers a high-throughput compute (HTC) service, on both a platform and a cloud basis, and a high-throughput storage (HTS) service. Through our federation partners you can request access to a larger range of services at SURFsara.
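As an illustration of the HTC platform service, a batch job is typically a shell script with scheduler directives at the top. A minimal sketch, assuming a PBS/Torque-style scheduler; the queue name, resource limits, and file paths are assumptions, so check the local documentation:

```shell
#!/bin/sh
#PBS -q generic             # queue name is an assumption; ask the PDP team
#PBS -l walltime=04:00:00   # request four hours of wall-clock time

# Hypothetical analysis job: identify input, run, report where it ran.
INPUT=/dcache/myexperiment/data/run001.root   # hypothetical input path
echo "Job started on $(hostname) at $(date), input: $INPUT"
# ... invoke your experiment software framework here ...
```

Such a script would be submitted with e.g. `qsub job.sh`; on a different scheduler the directive syntax changes, but the structure stays the same.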
PDP research programme
We study the integration of ICT infrastructure (computing systems, networks, and storage), the implementation methodology of algorithms to be able to exploit high-throughput and high-performance computing and storage, and the secure collaboration mechanisms that enable this infrastructure to operate as a collective, coherent, and reliable ecosystem.
Our principal lines of research and engineering in the PDP group encompass:
- Systems at Scale: research into large, highly distributed systems
- Federated Identity and Access Management for research, and Infrastructure for Collaboration
- Scalable multi-domain security and site access control (see also our Wiki)
- Operations of large scale infrastructures
More information about the research projects can be found in the PDP Strategy and elsewhere on the Nikhef web site.