Within the ATLAS experiment, Trigger/DAQ and DCS are both logically and physically separated. Nevertheless there is a need to communicate. The initial problem definition and analysis suggested three subsystems that the Trigger/DAQ DCS Communication (DDC) project should support, with the ability to: 1. exchange data between Trigger/DAQ and DCS; 2. send alarm messages from DCS to Trigger/DAQ; 3. issue commands to DCS from Trigger/DAQ. Each subsystem is developed and implemented independently using a common software infrastructure. Among the various subsystems of the ATLAS Trigger/DAQ, the Online is responsible for control and configuration. It is the glue connecting the different systems, such as data flow, level 1 and high-level triggers. The DDC uses the various Online components as an interface point on the Trigger/DAQ side, with the PVSS II SCADA system on the DCS side, and addresses issues such as partitioning, time stamps, event numbers, hierarchy, authorization and security. PVSS II is a commercial product chosen by CERN to be the SCADA system for all LHC experiments. Its API provides full access to its database, which is sufficient to implement the three subsystems of the DDC software. The DDC project adopted the Online Software Process, which recommends a basic software life-cycle: problem statement, analysis, design, implementation and testing. Each phase results in a corresponding document or, in the case of implementation and testing, a piece of code. Inspection and review play a major role in the Online Software Process. The DDC documents have been inspected to detect flaws, resulting in improved quality. A first prototype of the DDC is ready and is foreseen to be used at the test beam during summer 2001.
The studies undertaken to prepare the Technical Design Report of the ATLAS 3rd Level Trigger (Event Filter) are performed on different prototypes based on different technologies. We present here the most recent results obtained for the supervision of the prototype based on conventional, off-the-shelf PC machines and Java Mobile agent technology.
We present several comparisons of GEANT4 simulations with test beam data and GEANT3 simulations for different liquid argon (LAr) calorimeters of the ATLAS detector. All relevant parts of the test beam setup (scintillators, multi-wire proportional chambers, cryostat, etc.) are described in GEANT4 as well as in GEANT3. Muon and electron data at different energies have been compared with Monte Carlo predictions.
The Online Book-keeper (OBK) was developed to keep track of past data-taking activity, as well as to provide the hardware and software conditions during physics data taking to the scientists doing offline analysis. The approach adopted to build the OBK was to develop a series of prototypes, experimenting with and comparing different DBMS systems for data storage. In this paper we describe the implemented prototypes, analyse their different characteristics and present the results obtained using a common set of tests.
One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs, a configuration which is getting closer to the final size. Large-scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems was emulated. This paper presents a brief overview of the online system structure, its components, the large-scale integration tests and their results.
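The run control state transitions mentioned above can be pictured as a hierarchical state machine: a command issued at the root propagates down to all child controllers before the parent itself changes state. The following is a minimal sketch of that idea; the state names, commands and class names are illustrative assumptions, not the actual ATLAS Online Software API.

```python
# Hypothetical hierarchical run-control sketch: states, commands and class
# names are invented for illustration; they are not the real Online Software.

class RunController:
    """A node in the run-control tree; commands propagate to children first."""

    TRANSITIONS = {
        ("initial", "configure"): "configured",
        ("configured", "start"): "running",
        ("running", "stop"): "configured",
        ("configured", "unconfigure"): "initial",
    }

    def __init__(self, name, children=()):
        self.name = name
        self.state = "initial"
        self.children = list(children)

    def command(self, cmd):
        # Depth-first: every child must complete the transition before
        # the parent updates its own state.
        for child in self.children:
            child.command(cmd)
        key = (self.state, cmd)
        if key not in self.TRANSITIONS:
            raise RuntimeError(f"{self.name}: illegal '{cmd}' in state '{self.state}'")
        self.state = self.TRANSITIONS[key]

# A small tree: one root controller supervising three crate controllers.
root = RunController("root", [RunController(f"crate{i}") for i in range(3)])
root.command("configure")
root.command("start")
assert root.state == "running"
assert all(c.state == "running" for c in root.children)
```

In a real system each transition would also carry per-node timing, which is exactly what makes the inter-dependence of components measurable in large-scale tests.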
Athena is the common framework used by the ATLAS experiment for simulation, reconstruction, and analysis. By design, Athena supports multiple persistence services, and insulates users from technology-specific persistence details. Athena users and even most Athena package developers should neither know nor care whether data come from the grid or from local filesystems, nor whether data reside in object databases, in ROOT or ZEBRA files, or in ASCII files. In this paper we describe how Athena applications may transparently take advantage of emerging services provided by grid software today: how data generated by Athena jobs are registered in grid replica catalogs and other collection management services, and the means by which input data are identified and located in a grid-aware collection management environment. We outline an evolutionary path toward incorporation of grid-based virtual data services, whereby locating data may be replaced by locating a recipe according to which that data may be generated. Several implementation scenarios, ranging from low-level grid catalog services (e.g., from Globus) through higher-level services such as the Grid Data Management Pilot (under development as part of the European DataGrid project, in collaboration with the Particle Physics Data Grid) to more conventional database services, and a common architecture to support these various scenarios, are also described.
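The core abstraction behind the replica catalogs mentioned above is a mapping from a logical file name to one or more physical replicas, so a job can be handed the nearest copy without knowing where data physically reside. The toy sketch below illustrates only that mapping; the class, method names and URL schemes are invented for illustration and are not the Globus or GDMP API.

```python
# Toy replica-catalog sketch (invented API): one logical file name (LFN)
# maps to several physical file names (PFNs); lookup can prefer a scheme,
# e.g. a local file over a remote gsiftp replica.

class ReplicaCatalog:
    def __init__(self):
        self._replicas = {}  # lfn -> list of pfns, in registration order

    def register(self, lfn, pfn):
        self._replicas.setdefault(lfn, []).append(pfn)

    def lookup(self, lfn, prefer=None):
        """Return a replica for lfn, preferring PFNs starting with `prefer`."""
        pfns = self._replicas.get(lfn, [])
        if prefer:
            for pfn in pfns:
                if pfn.startswith(prefer):
                    return pfn
        return pfns[0] if pfns else None

catalog = ReplicaCatalog()
catalog.register("lfn:higgs.evts", "gsiftp://remote.site/data/higgs.evts")
catalog.register("lfn:higgs.evts", "file:/local/data/higgs.evts")
# A local replica is chosen when one exists:
assert catalog.lookup("lfn:higgs.evts", prefer="file:") == "file:/local/data/higgs.evts"
```

A virtual data service extends this picture by letting `lookup` fall back to a recipe for regenerating the file when no replica exists.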
The ATLAS detector is one of the most sophisticated and largest detectors ever designed. A detailed, flexible and complete simulation program is needed in order to study the characteristics and possible problems of such a challenging apparatus and to answer all questions arising in terms of physics, design optimization, etc. To cope with these needs we are implementing an application based on the simulation framework FADS/Goofy (Framework for ATLAS Detector Simulation / Geant4-based Object-Oriented Folly) in the Geant4 environment. The user's specific code implementation is presented in detail for the different applications implemented so far, from the various components of the ATLAS spectrometer to some particular test beam facilities. Particular emphasis is put on describing the simulation of the Muon Spectrometer and its subsystems as a test case for the implementation of the whole detector simulation program: the intrinsic complexity in the geometry description of the Muon System is one of the most demanding problems faced. The magnetic field handling, the physics impact on event processing in the presence of backgrounds from different sources, and the implementation of different possible generators (including Pythia) are also discussed.
The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of architecture, implementation, test results and plans for future work.
This paper outlines the design and prototyping of the ATLAS High Level Trigger (HLT), which is a combined effort of the Data Collection HLT and PESA (Physics and Event Selection Architecture) subgroups within the ATLAS TDAQ collaboration. Two important issues, already outlined in the ATLAS HLT, DAQ and DCS Technical Proposal [1], will be highlighted: the treatment of the LVL2 Trigger and Event Filter as aspects of a general HLT, with a view to easier migration of algorithms between the two levels; and the unification of selective data collection for LVL2 and Event Building.
Athena, the software framework for ATLAS' offline software, is based on the Gaudi framework from LHCb [1]. The Processing Model of Gaudi is essentially that of a batch-oriented system: a user prepares a file detailing the configuration of which Algorithms are to be applied to the input data of a job and the parameter values that control the behavior of each Algorithm instance. The framework then reads that file once at the beginning of a job and runs to completion with no further interaction with the user. We have enhanced the Processing Model to include an interactive mode where a user can control the event loop of a running job and modify the Algorithms and parameters on the fly. We changed only a very small number of Gaudi classes to provide access to parameters from an embedded Python interpreter. No change was made to the Gaudi Programming Model, i.e., developers need not change anything to make use of this added interface. We present details of the design and implementation of the interactive Python interface for Athena.
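The interactive mode described above amounts to pausing the event loop at a Python prompt, changing an Algorithm property, and resuming. The pure-Python analogue below sketches that control flow; the class names (`Algorithm`, `EventLoopMgr`) and the `scale` property are illustrative assumptions, not the actual Gaudi/Athena interfaces.

```python
# Illustrative sketch of interactive event-loop control (not the real
# Athena/Gaudi API): the loop can be run in slices, and algorithm
# parameters modified between slices, as from an embedded Python prompt.

class Algorithm:
    def __init__(self, name, **params):
        self.name = name
        self.params = dict(params)  # properties, modifiable at any time

    def execute(self, event):
        # A trivial "algorithm": scale the event number by a property.
        return event * self.params["scale"]

class EventLoopMgr:
    def __init__(self, algorithms):
        self.algorithms = algorithms
        self.event = 0  # next event number to process

    def run(self, n_events):
        """Process the next n_events, returning each algorithm's output."""
        out = []
        for _ in range(n_events):
            for alg in self.algorithms:
                out.append(alg.execute(self.event))
            self.event += 1
        return out

# "Interactive session": run three events, tweak a property, continue.
alg = Algorithm("scaler", scale=1)
loop = EventLoopMgr([alg])
first = loop.run(3)        # events 0,1,2 with scale=1  -> [0, 1, 2]
alg.params["scale"] = 10   # modified on the fly between slices
second = loop.run(2)       # events 3,4 with scale=10   -> [30, 40]
```

The key design point mirrored here is that only the loop manager needs to expose stepwise control; the algorithms themselves are unchanged.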
The detector description database, the event data structure and the conditions database are all examples (among others) of complex collections of objects which need to be unambiguously identified, not only internally to their own management structure, but also from one collection to the other. The requirements for such an identification scheme include the management of identifiers individually attached to each collected object, the possibility to formally specify these identifiers (e.g. through dictionaries), to generate optimised and compact representations for these identifiers, and to be able to use them as sorting and searching keys. We present here the generic toolkit developed in the context of the ATLAS experiment primarily to provide the identification of the readout elements of the detector. This toolkit offers several generic or specialized components, such as: an XML-based dictionary with which the formal specification of a particular object collection is expressed, a set of binary representations for identifier objects (offering various levels of compaction), range operators meant to manipulate ranges of identifiers, and finally a collection manager similar to the STL map but optimised for an organization keyed by identifiers. All these components easily interoperate. In particular, the identifier dictionary offers means of specifying permitted cardinalities of objects at each level of the hierarchy. This can then be translated into identifier ranges, or can be used as the strategy driver for high compactification of the identifiers (e.g. to store very large numbers of identified objects). Current use of this toolkit within the detector description will be presented, and expected or possible other usages will be discussed.
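A compact binary representation of hierarchical identifiers can be pictured as bit-field packing: each level of the hierarchy gets a fixed bit width, and the dictionary's cardinality limits determine those widths. The sketch below shows the packing idea only; the field names and widths are invented for illustration (the real toolkit derives its fields from the XML dictionary).

```python
# Hedged sketch of compact identifier packing: field names and bit widths
# are illustrative assumptions, not the actual ATLAS identifier layout.

FIELDS = [("subdetector", 4), ("station", 6), ("channel", 14)]  # (name, bits)

def pack(**values):
    """Pack named fields into one integer, most significant field first."""
    ident = 0
    for name, width in FIELDS:
        v = values[name]
        if not 0 <= v < (1 << width):
            raise ValueError(f"{name}={v} out of range for {width} bits")
        ident = (ident << width) | v
    return ident

def unpack(ident):
    """Recover the field values from a packed identifier."""
    out = {}
    for name, width in reversed(FIELDS):
        out[name] = ident & ((1 << width) - 1)
        ident >>= width
    return out

ident = pack(subdetector=3, station=17, channel=1234)
assert unpack(ident) == {"subdetector": 3, "station": 17, "channel": 1234}
```

Because packed identifiers are plain integers, they sort naturally and work directly as map keys, which is what makes an identifier-keyed collection manager efficient.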