===========================================================================
 Summary of the meeting on: LHCb Track Reconstruction Software
 NIKHEF, March 23 - March 26, 1999

 Present: R. van der Eijk, G. Gracia, R. Hierck, M. Merk, M. Needham,
          W. Ruckstuhl (summary of the meeting)
===========================================================================

Note:
The language used in this summary is mostly telegram style. I tried to
keep the information density high and the number of words low. (Still it
is a long text - probably full of typos.) The terminology is such that
this text is rather incomprehensible for anybody not present at the
meeting. I hope that it is not for those who were present! Please send me
your comments/modifications, so that we can keep this for future
reference.

Agenda
------
Tue 23/3: General survey of the task:
          - Functionality of the program, including input/output
          - Constraints of the Gaudi framework
          - Gonzalo's proposal + Rutger's proposal
Wed 24/3: Working session: installation of Gaudi
Thu 25/3: Towards a first version using Gaudi
Fri 26/3: Summary and planning

As a general starting point we propose a framework that allows the
implementation of a Kalman fit procedure. For all practical purposes we
have kept this implementation in mind. However, the program should also
allow a different fitting implementation (e.g. a global fit).

Tue 23/3
========

1. Functionality of the program
-------------------------------
We anticipate that tracking will happen in various "passes". The exact
number of passes is not specified. Each pass will add or replace input
information. Here is an example of three passes:

Pass-1: Find/create the first version of a track
        (Pattern Recognition = PR)
Pass-2: Refit the track.
Pass-3: User-callable pass, allowing special inputs, e.g.:
        - hit dropping
        - alignment changes
        - other input modifications

In a higher-level pass the sophistication of the fit might increase, and
with it the CPU time needed.
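To fix ideas only - no such code exists yet, and all names below are invented for this sketch - the pass structure could be caricatured as a chain of functions that each consume the previous track list and produce a new one:

```cpp
#include <cassert>
#include <vector>

// Hypothetical, minimal stand-ins; not part of any proposal.
struct Hit   { double z; double m; };            // one measurement
struct Track { std::vector<Hit> hits; double chi2; };

using HitList   = std::vector<Hit>;
using TrackList = std::vector<Track>;

// Pass-1: pattern recognition builds first track candidates from hits.
TrackList pass1_patternRecognition(const HitList& hits) {
    TrackList tracks;
    if (!hits.empty()) tracks.push_back(Track{hits, 0.0});
    return tracks;
}

// Pass-2: the refit replaces the fit result, keeping the same Track class.
TrackList pass2_refit(TrackList tracks) {
    for (Track& t : tracks) t.chi2 = 1.0;        // placeholder "refit"
    return tracks;
}

// Pass-3: user-callable pass, e.g. hit dropping.
TrackList pass3_userPass(TrackList tracks) {
    for (Track& t : tracks)
        if (!t.hits.empty()) t.hits.pop_back();  // drop the last hit
    return tracks;
}
```

Each pass here keeps the same Track shape and only changes the contents, matching the idea that a track is uniformly defined after every pass.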
In the remainder of the meeting we mainly distinguished between pass-1
(PR) and pass-2 (refit).

Note: In somewhat more detail, we currently foresee that:
- the PR pass reconstructs tracks "station-by-station", i.e. a tree of
  tracks is built up simultaneously while stepping through the detector
  stations. In technical terms, a station is here defined as a set of
  layers. (This could be more general than a physical detector station.)
- the refit pass reconstructs on a track-by-track basis.

Output of Tracking:
-------------------
The output of each pass is a list of Tracks. A track is a uniformly
defined class, i.e. the same after each pass of the reconstruction; only
the contents will differ. The Track class can be asked about its:
- track parameters + covariances at one or more z-positions
- chi2 of the fit
- list of hits contributing to the track + residuals

Input to Tracking:
------------------
A. Measurement input:
   The input to pass-1 (PR) consists of containers of pointers to the
   highest-level software "digitizations" (i.e. chamber drift times or
   silicon/MSGC clusters). Alignments and calibrations we assume (for
   now) to be done. We realize that some of these might need iteration
   with the fitting, but this is not considered here.
   The input to refit passes is the list of output tracks from the
   previous pass.

B. Geometry input:
   Each pass of the track fit requires knowledge of the material
   geometry of LHCb. This information is read from a database. The
   sophistication of the geometry input might differ between PR and
   refit.

2. Constraints from the Gaudi Framework
---------------------------------------
Gonzalo introduced the others to the ideas inside Gaudi.

Separation of Data and Algorithm.
Purpose: Data is kept in a data storage area separately from the
algorithm code, such that various algorithms can be designed and applied
independently to the data. (-> Data should be algorithm independent?
- MM??)
Consequence: data resides either on a permanent data store (disk, e.g. a
database) or in the transient data store (TDS; memory, e.g. event data).

Gaudi algorithms come with a fixed functionality - initialize, execute,
finalize, ... (maybe more later) - and with services, e.g. a histogram
service, a logging service, a random-number service, ... Writing code in
the Gaudi style implies that this functionality will be inherited from
Gaudi.

To read from and write to the TDS a functionality is provided: a declare
function, read/write functions, and a delete function. Gonzalo notes
that the speed performance when working with TDS data should not be much
worse than when working with local data, since the TDS works with
pointers and no copying of data is done.

We questioned whether we can free memory in the TDS during an event
(instead of cleaning up after a full event). Gonzalo thought this was
possible, but it should be checked. The issue is important if large
amounts of temporary data are stored in the TDS.

We discussed to what level of detail in the programming we should follow
the Gaudi functionality, and at what level we should stop storing data
in the TDS. The final conclusion was simple: we follow the functionality
in the top algorithms and keep it as far down as turns out to be
practical. It is not "absolutely verboten" to deviate in low-level
routines (e.g. Runge-Kutta interpolation parameters).

A consequence of following the Gaudi philosophy is that the track
reconstruction program as a module will not be re-usable for purposes
outside Gaudi (e.g. a test beam). This was realized and accepted.
(2 people dead, 3 severely wounded.)

3. Gonzalo's & Rutger's Proposals
---------------------------------
a) Gonzalo's Proposal

Gonzalo presented his work on a track fitting framework. His work has
mainly concentrated on the design of the PR implementation. His design
describes how the algorithms for the pass-1 fit can be implemented
inside Gaudi. His proposal was accepted with a round of applause.
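Gaudi itself provides the declare/read/delete functionality described in section 2. Purely to fix ideas, a toy store - an invented caricature, not the real Gaudi TDS interface - illustrates why pointer-based access avoids copying data:

```cpp
#include <map>
#include <memory>
#include <string>

// A caricature of a transient data store: objects are registered under
// a path and handed out by pointer, so no data is copied on access.
// Invented sketch; NOT the actual Gaudi TDS interface.
class ToyTransientStore {
public:
    // "declare": the store takes ownership of the object.
    void declare(const std::string& path, std::shared_ptr<void> obj) {
        store_[path] = std::move(obj);
    }
    // "read": hand back a typed pointer; no copy is made.
    template <class T>
    T* retrieve(const std::string& path) {
        auto it = store_.find(path);
        return it == store_.end() ? nullptr
                                  : static_cast<T*>(it->second.get());
    }
    // "delete": free one entry during the event (whether Gaudi allows
    // this mid-event is exactly the open question raised above)...
    void erase(const std::string& path) { store_.erase(path); }
    // ...or clean up everything after the full event.
    void clear() { store_.clear(); }
private:
    std::map<std::string, std::shared_ptr<void>> store_;
};
```

For example, a track list would be declared once under a path such as "/Event/TrackList" (a made-up path) and then retrieved by pointer from every algorithm that needs it.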
A general idea that was adopted from this proposal is to copy the
"subdetector hit objects" into "tracking hit objects", which are used in
the track reconstruction. Access to all hit information should proceed
via these tracking hit objects. We noted that possible calibrations or
alignments could be implemented in the translation step from subdetector
data object to tracking data object.

Various names were proposed for these tracking objects, ranging from a
distinguished "TRFtrackOutTrkHit" to a sad "Dikhit". In the remainder of
the meeting we used the name "nikhit".

b) Rutger's Proposal

Rutger presented his ideas on how to implement the core of the Kalman
operations, specifying the functionality of the Kalman ("KLM") class and
the information that can be obtained from a track and a nikhit. After
discussions the general idea is:

---------------------------------------------------------------------
 Subdetector hits
        |
        V
     nikhits --> pass-1 --> Tracklist --> pass-2 --> Tracklist
                  (PR)         |          (refit)       |
                               |                        |
                               V                        V
                        store in event           store in event

a Track has:
------------
- a list of nikhits
- a list of track parameters (at various z positions)
- chi2, particle ID, + other relevant info

a nikhit has:
-------------
- a hit ID or pointer to identify the hit
- a pointer to the subdetector hit from which it is derived
- as a function of a track parameter:
  - the measurement (m) + error (v) in Kalman terminology
    (for a drift chamber the measurement direction is given by the
    predicted track parameter in the drift cell)
  - the projection onto the track parameter (h)
  - the residual to the track parameter

The heart of the Kalman actions is performed by the KLM class:

KLM
---
member functions:  set_TPar
                   get_TPar
                   update(hit)
                   predict(z, transporter)
                   smooth
data:              current TPar
---------------------------------------------------------------------

Wed 24/3
========
On Wednesday the Gaudi program was installed at NIKHEF, which turned out
to be rather easy.

Thu 25/3
========

4. Monitoring
-------------
A long discussion took place on how to implement monitoring information.
This mainly involves how to access "MC truth info" and how to compare it
with reconstructed info at various stages of the analysis. On the one
hand we should clearly separate "honest" reconstruction from "cheated"
MC information; on the other hand we should have easy tools to access MC
truth info, as the main studies in the coming years will be to compare
the reconstruction performance with MC truth.

Examples of discussion items are:
- Will there be pointers in the nikhit pointing to MC truth hits?
- Will there be access in the nikhit to the sign of the drift time?

Although there were no casualties in this discussion, the opinions
differed. No concrete solution was found. (In my personal opinion, this
is one of the most difficult tasks we face, together with the material
implementation.) We proposed to reconsider the "cheating" strategy and
to rediscuss it in the next meeting. For tests now, any reasonable
method is allowed.

Towards a first version
-----------------------
The aim of a first version is to get roughly the same performance in
Gaudi as we currently have with SICB. This means: performing a Kalman
track fit with cheated PR. In practice there are two possibilities for
this first version:
- copy the complete (FORTRAN) track fit from SICB to Gaudi
- implement a first version of the proposed C++ framework

The second option was preferred by all, as the first hardly produces any
progress. The implementation requires work on the following items:
a) a "cheat" version of pattern recognition, which reads the subdetector
   hits and builds tracks out of nikhits
b) implementation of the fit (pass-2), i.e.
   a loop over MC truth tracks, with for each track a double loop over
   its hits:

     loop1:  KLM.predict(z, transporter)
             KLM.update(hit)
             KLM.get_TPar
     loop2:  KLM.smooth

In the implementation of the first version we keep a number of routines
from the SICB code, mainly the transport from z1 to z2, including:
- the material implementation
- the Runge-Kutta extrapolation
- other utilities

Fri 26/3
========
A summary meeting was held. The ideas from the week's meetings were
collected and resulted in this summary. The work for the immediate
future was distributed:

@ CERN: Work on the 1st version of the Gaudi fit
------------------------------------------------
Gonzalo is our natural contact person with the LHCb computing group
(Gaudi). Gonzalo will finalize his already advanced work on interfacing
several SICB banks with Gaudi structures.
Matthew will work on the implementation of the "cheating PR".
Rutger and Matthew will look at the first implementation of the "KLM"
and the "nikhit".

@ NIKHEF: Studies for the new B-field
-------------------------------------
Magnetic field map study from SICB:            M. Merk
Consequences for the drift speed:              B. Koene, K. Renner
Tracking performance with the new field
(occupancies, search windows, performance):    R. Hierck, M. Merk,
                                               W. Ruckstuhl
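As a purely illustrative footnote to Thursday's loop1/loop2 discussion: the scalar sketch below mimics the KLM predict/update cycle of loop1 (the smoother of loop2 is omitted). All names, the single scalar parameter, and the trivial transport model are invented for the sketch; this is not the proposed KLM class.

```cpp
#include <utility>
#include <vector>

// Scalar state: one track parameter x with variance C, at position z.
// Invented toy; the real KLM would carry full parameter vectors and
// covariance matrices, transported by Runge-Kutta through material.
struct ToyKLM {
    double z = 0.0, x = 0.0, C = 1e6;    // start with a vague estimate

    // predict(z, transporter): here a trivial transport (x unchanged),
    // with optional process noise added per unit length.
    void predict(double zNew, double noisePerUnit = 0.0) {
        C += noisePerUnit * (zNew - z);
        z = zNew;
    }

    // update(hit): standard scalar Kalman update with measurement m,
    // measurement variance v and (here) unit projection h = 1.
    void update(double m, double v) {
        const double K = C / (C + v);    // Kalman gain
        x += K * (m - x);                // filtered estimate
        C *= (1.0 - K);                  // filtered variance
    }
};

// The "loop1" of pass-2: filter all hits of one track.
// Hits are (z, m) pairs; v is the common measurement variance.
ToyKLM filterTrack(const std::vector<std::pair<double, double>>& hits,
                   double v) {
    ToyKLM klm;
    for (const auto& h : hits) {
        klm.predict(h.first);
        klm.update(h.second, v);
    }
    return klm;
}
```

With every hit processed, the gain K shrinks and the variance C decreases, which is the behaviour the refit pass relies on.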