NIKHEF Grid and Physics Data Processing

Nikhef, the Dutch National e-Infrastructure, and Netherlands Tier-1

The Large Hadron Collider (LHC) alone produces roughly 30 petabytes of data annually, and by 2014 the LOFAR radio telescope had exceeded all LHC experiments combined in terms of data stored on tape in the Netherlands. Meanwhile bioinformatics, from tomato genomes to structural chemistry, faces an ever-increasing need for computation and data analysis. Federated e-Infrastructures combine large-scale distributed data processing centres across many different institutes and countries, which together enable the analysis of these data.

In support of its mission, and in order to strengthen the global e-Infrastructure and cyber-infrastructure ecosystem, Nikhef has participated since 2000 in many European and global projects, such as DataGrid, EGEE, AARC, and EOSC-Hub, that have helped shape and operate this federation. Together with NWO and NBIC we proposed and won the BiG Grid project to provide a large-scale e-Science infrastructure in the Netherlands, laying the foundations for the Dutch National e-Infrastructure, the DNI, coordinated by SURF. The Netherlands Tier-1 Facility is enabled by the DNI and jointly coordinated with SURFsara as part of the national e-Infrastructure - as are many more sciences. Nikhef also promotes e-Science solutions and infrastructure for collaboration, such as Federated Identity and Access Management, and has participated in many joint grid projects in Earth observation, the humanities, and astro-particle physics, for LIGO/Virgo, XENON, LOFAR, and other data-intensive sciences.

The use of federated compute and data technologies greatly facilitates the implementation of the distributed model for simulation and analysis, and enables efficient use of the computing and storage resources by the various experiments.

Besides the global federated infrastructure, Nikhef also operates a large local analysis cluster ("Stoomboot") and an associated high-performance disk system of about one petabyte, based on dCache ("/dcache").

Getting on

If you are at Nikhef, computing and storage in the Netherlands, in the LHC Computing Grid, and through the European e-Infrastructure are available for your use. To make your experience a smooth one, follow these simple steps. For help, you can always drop by any of the rooms H1.50 through H1.59!
  • Remember that all the tools you need are installed on all Nikhef desktops and on login.nikhef.nl.
  • Get an electronic identity: obtain a certificate from the GEANT Trusted Certificate Service (or the legacy DutchGrid CA).
    Nikhef employees and registered guests can directly use their Nikhef login and password to get a certificate immediately.
    If you already have a certificate from any of the EUGridPMA or IGTF accredited CAs, you can use that one too.
  • Import the certificate into your browser (the eScience Cert Guide shows how) and - if you are affiliated with ATLAS, LHCb, or ALICE - register with LCG at the Registration Site. If your community is not listed there, ask the grid team, consult the list of supported communities, or contact the helpdesk (by mail at grid.support at nikhef.nl). A command-line sketch of the certificate conversion follows after this list.
    In a day or two (depending on your experiment management) you will have access to the community services. Please abide by the e-Infrastructure policies as well as, of course, the Nikhef Acceptable Use Policy.
  • Add the client software to your path and start working from any Linux workstation or on Stoomboot, following the step-by-step guide. A typical first command after this setup - creating a VOMS proxy - is also sketched after this list.
  • To start your work, also have a look at your experiment's software framework. For example, ATLAS analysis requires the use of DonQuijote2 for data management, LHCb uses DIRAC, and ALICE uses AliEn.
  • Access to Stoomboot is most convenient from either login.nikhef.nl or your own laptop; connect to an interactive node (such as stbc-i1.nikhef.nl). For questions regarding the functioning of Stoomboot, ask the PDP support team or your colleagues via the stbc-users mailing list. A minimal batch-job sketch follows after this list.
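
For the certificate import and conversion step above, here is a minimal command-line sketch. It assumes you exported your certificate from the browser as a PKCS#12 file (the name browser-export.p12 is a placeholder) and that your grid tools look for the PEM pair in ~/.globus, the common convention:

    # Split a PKCS#12 browser export into the PEM pair grid tools expect.
    # File names are examples; adapt them to your own export.
    mkdir -p ~/.globus
    openssl pkcs12 -in browser-export.p12 -clcerts -nokeys \
        -out ~/.globus/usercert.pem
    openssl pkcs12 -in browser-export.p12 -nocerts \
        -out ~/.globus/userkey.pem
    chmod 444 ~/.globus/usercert.pem
    chmod 400 ~/.globus/userkey.pem

The restrictive permissions on userkey.pem matter: most grid middleware refuses to use a private key that is readable by others.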
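Once your community registration has been processed, a typical first command is to create a short-lived VOMS proxy. A sketch, assuming the standard VOMS clients are on your path and using atlas purely as an example VO name:

    # Create a VOMS proxy for your community (the VO name is an example).
    voms-proxy-init --voms atlas
    # Inspect the proxy, its attributes, and its remaining lifetime.
    voms-proxy-info --all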
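For Stoomboot, the sketch below shows a minimal batch job, assuming the cluster accepts PBS/Torque-style qsub submissions from the interactive nodes; the script name and queue are placeholders, so check the Stoomboot documentation or ask on stbc-users for the actual queue names:

    #!/bin/sh
    # hello.sh - a trivial batch job; script and queue names are examples
    echo "Running on $(hostname)"

Submit it from an interactive node and watch its progress:

    ssh stbc-i1.nikhef.nl
    qsub hello.sh            # optionally: qsub -q <queue> hello.sh
    qstat -u $USER           # check the job's status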
You can browse through the Grid documentation to tackle more complex use cases. For more information, you can contact the PDP team in H1.50 - H1.59, or by e-mail at any time to grid.support@nikhef.nl. This is also the proper e-mail address for support questions and bug reports.

Our resources

The NIKHEF Data Processing Facility comprises a single production facility (LCG2ELPROD) and some smaller experimental systems. The facility contains over 5700 CPU cores of compute power and about 5000 TBytes of disk space. All of this is connected by 1.2 Tbit/s of interconnect network, and 250 Gbps of external networking bandwidth is available. Some of this capacity is dedicated to the WLCG community (and our local physicists), whilst all participants and users of the Dutch National e-Infrastructure coordinated by SURF can use it! The Nikhef facilities are predominantly distributed high-throughput data clusters interconnected globally, plus a high-throughput cloud. An even larger range of facilities is offered in the Netherlands - for those, have a look at SURFsara.

Our Research

The NIKHEF PDP group pursues three main lines of Grid and fabric research and engineering. More information about the research projects can be found in the Nikhef PDP Strategy and elsewhere on the Nikhef web site.

Comments to David Groep