Subprojects

The worldwide distributed trigger project is divided into several logical pieces. Some of these subprojects are already well under way; others still need to be started. A number of collaborations are planned, for example with Auke Pieter Colijn's project on radiating top quarks and with Jeff Templon's Physics Data Processing Group to study the grid interface. Please contact me if you are interested in contributing to any of these projects.

Analysis of the fully hadronic top-antitop Higgs process

This part of the project will be done in close collaboration with the project of Auke Pieter Colijn on radiating top quarks. As a starting point, it will be important to compare the efficiency of the fully hadronic trigger with that of the lepton trigger. This involves a study of the jet trigger turn-on curves. It would be good if we could find a PhD student interested in this analysis.
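
A turn-on curve is typically characterized by fitting the per-bin trigger efficiency with an error function. Below is a minimal sketch of such a fit in Python; the threshold, resolution, and efficiency counts are invented placeholders, not actual ATLAS trigger settings.

    # Sketch: fitting a jet trigger turn-on curve with an error function.
    # All numbers below are hypothetical placeholders.
    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    def turn_on(pt, threshold, resolution):
        """Trigger efficiency vs offline jet pT, modelled as an error function."""
        return 0.5 * (1.0 + erf((pt - threshold) / (np.sqrt(2.0) * resolution)))

    # Hypothetical per-bin pass counts out of 100 events per bin.
    pt_bins = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 80.0, 100.0])
    passed  = np.array([ 2.0, 15.0, 48.0, 88.0, 97.0, 99.0, 100.0])
    eff = passed / 100.0

    params, cov = curve_fit(turn_on, pt_bins, eff, p0=[40.0, 10.0])
    print("fitted threshold = %.1f GeV, resolution = %.1f GeV" % tuple(params))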

In order to understand the results of the worldwide distributed trigger, we need to compare our analysis results with those of the standard ATLAS trigger. The top quark will provide us with a "standard candle" that contains the information we need for, e.g., our luminosity calculations. Furthermore, we will contribute to b-tagging and invariant-mass reconstruction tools. Eventually, we need to implement those tools at trigger level and develop new algorithms to optimize the selection of fully hadronic top-antitop Higgs decays.
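
To make the invariant-mass reconstruction step concrete, the following sketch computes the mass of a dijet system from jet kinematics. The jet values are invented for illustration; trigger-level tools would additionally have to handle calibration and b-tagging.

    # Sketch: invariant mass of a dijet system from (pT, eta, phi, m) jets.
    import math

    def four_vector(pt, eta, phi, m):
        """Convert (pT, eta, phi, m) to a Cartesian four-vector (E, px, py, pz)."""
        px = pt * math.cos(phi)
        py = pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e  = math.sqrt(px**2 + py**2 + pz**2 + m**2)
        return e, px, py, pz

    def invariant_mass(jets):
        """Invariant mass of the summed four-vectors of a list of jets."""
        e, px, py, pz = (sum(c) for c in zip(*(four_vector(*j) for j in jets)))
        return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

    # Two hypothetical b-jets: (pT [GeV], eta, phi, mass [GeV]).
    print("m = %.1f GeV" % invariant_mass([(60.0, 0.5, 0.1, 5.0),
                                           (45.0, -0.3, 2.8, 5.0)]))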

Trigger modifications to allow connections to remote farms

The required modifications to the high-level trigger software are almost complete. The memory management has been stabilized for high-latency connections, and the protocol now handles parallel connections to a single event filter node. The event routing specifications are flexible enough to sustain remote connections. We still need to compare several options for routing events from the trigger farm to the remote sites: directly from the event building nodes (SFIs), from the event filter nodes (EFDs), or from the output nodes (SFOs). The sketch below illustrates the difference between these options.
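
The three options differ mainly in where the fork to the remote site sits and in what the remote farm is then responsible for. The sketch only illustrates this structure; the connection interface is hypothetical, and the real TDAQ dataflow applications have their own APIs.

    # Hypothetical sketch of the three candidate routing points.

    class RemoteSiteStub:
        """Stand-in for a connection to a remote trigger farm."""
        def send(self, event): pass
        def receive_decision(self): return True

    def route_from_sfi(event, remote):
        # Fork right after event building: the remote farm runs the
        # full event filter and reports its decision back.
        remote.send(event)
        return remote.receive_decision()

    def route_from_efd(event, remote):
        # The remote node acts as one more event filter worker, fed by
        # a local EFD exactly like a local processing task would be.
        remote.send(event)
        return remote.receive_decision()

    def route_from_sfo(event, remote):
        # Fork after the output nodes: only locally accepted events are
        # shipped, so the remote farm refines the local selection
        # instead of replacing it.
        remote.send(event)
        return True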

The worldwide distributed trigger is slightly more demanding for the backend network of the trigger system. Small extensions to the infrastructure are required, and these need to be negotiated with TDAQ management. In parallel with this effort, we should already start to set up the local infrastructure at NIKHEF (see the section on remote farm installation below). The full functionality of a remote trigger site can be tested with the system that is currently in place at CERN. Performance tests are planned for December 2006, which means that the extended infrastructure needs to be installed by then.
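
A back-of-the-envelope estimate shows the scale of the extra load; the numbers below are illustrative assumptions, not measured ATLAS figures.

    # Illustrative estimate of the extra backend load from remote routing.
    event_size_mb  = 1.5    # assumed raw event size in megabytes
    remote_rate_hz = 100.0  # assumed rate of events shipped off-site

    # 8 bits per byte; divide by 1000 to go from Mb/s to Gb/s.
    wan_bandwidth_gbps = event_size_mb * 8 * remote_rate_hz / 1000.0
    print("required WAN bandwidth: %.1f Gb/s" % wan_bandwidth_gbps)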

Installation of a remote farm

This task should start as soon as possible. The complete high-level trigger software needs to be installed on a NIKHEF testbed. This installation needs to be tested thoroughly, and we should try to run a number of configurations over a wide area network. The decision on how to route events out of the trigger farm will be based on the results of these tests. As a second step, we need to study the network connection between CERN and NIKHEF. A stable link between the ATLAS trigger farm and the local NIKHEF farm has to be available by December.
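
For the wide-area tests, a simple throughput measurement over the link is a useful first check. The sketch below sends a fixed-size buffer over TCP and reports the achieved rate; the endpoint, port, and one-byte acknowledgement protocol are assumptions for illustration.

    # Sketch: measure TCP throughput to a remote endpoint.
    import socket
    import time

    def measure_throughput(host, port, payload_mb=10):
        """Send payload_mb megabytes and return the achieved rate in Mb/s."""
        data = b"\x00" * (payload_mb * 1024 * 1024)
        with socket.create_connection((host, port)) as s:
            start = time.time()
            s.sendall(data)
            s.shutdown(socket.SHUT_WR)
            s.recv(1)                  # assumed one-byte acknowledgement
            elapsed = time.time() - start
        return payload_mb * 8 / elapsed

    # Hypothetical endpoint:
    # print("%.1f Mb/s" % measure_throughput("testbed.nikhef.nl", 9000))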

Grid interface

An interesting part of the worldwide distributed trigger is the acquisition of resources. This part of the project is not as urgent as the rest, but it is an interesting topic for a master's student. I will try to find a student for this project together with the Physics Data Processing Group. The goal would be the development of a tool that acquires and maintains a set of grid resources (storage and CPU) at a stable level. These resources need to be able to contact the ATLAS trigger farm, possibly penetrating firewalls and private networks.
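
The core of such a tool would be a control loop that keeps the pool of acquired resources at a target size, replacing nodes that drop out. A minimal sketch of that loop follows; the acquire/alive interface is a hypothetical placeholder for whatever grid middleware is eventually used.

    # Sketch: keep a pool of grid resources at a stable target level.
    import time

    TARGET_POOL_SIZE = 20  # assumed target, to be tuned

    def maintain_pool(grid, pool, interval=60):
        while True:
            # Drop resources that no longer respond or have expired.
            pool[:] = [r for r in pool if grid.alive(r)]
            # Top the pool back up to the target level.
            while len(pool) < TARGET_POOL_SIZE:
                resource = grid.acquire()  # e.g. submit a pilot job
                if resource is None:
                    break                  # nothing available right now
                pool.append(resource)
            time.sleep(interval)

Whether this loop runs as a long-lived daemon or as a periodic job, and how the acquired nodes traverse firewalls to reach the trigger farm, are open design questions for the student project.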
