Abstract: The CMS experiment at the CERN LHC collider is producing large amounts of simulated data in order to provide adequate statistics for the Trigger System design. These productions are performed in a distributed environment, prototyping the hierarchical model of LHC computing centers developed by MONARC. A GRID approach is being used for interconnecting the Regional Centers. The main issues currently addressed are: automatic submission of data production requests to available production sites, data transfer among production sites, "best-replica" location, and submission of end-user analysis jobs to the appropriate Regional Center. In each production site different hardware configurations are being tested and exploited. Furthermore, robust job submission systems, which are also able to provide the needed bookkeeping of the produced data, are being developed. BOSS (Batch Object Submission System) is an interface to the local computing center's scheduling system that has been developed to allow recording, in a relational database, of information produced by the jobs running on the batch facilities. A summary of the current activities and a plan for the use of DataGrid PM9 tools are presented.
Abstract: Powerful mainstream computing equipment and the advent of affordable multi-Gigabit communication technology allow us to tackle data acquisition problems with clusters of inexpensive computers. Such networks typically incorporate heterogeneous platforms, real-time partitions and custom devices. Therefore, one must strive for a software infrastructure that efficiently combines the nodes into a single, unified resource for the user. Overall requirements for such middleware are high efficiency and configuration flexibility. Intelligent I/O (I2O) is an industry specification that defines a uniform messaging format and execution model for processor-enabled communication equipment. Mapping this concept to a distributed computing environment and encapsulating the details of the specification into an application-programming framework allow us to provide run-time support for cluster operation. This paper gives a brief overview of a framework, XDAQ, that we designed and implemented at CERN for the Compact Muon Solenoid experiment's prototype data acquisition system.
Abstract: The CMS Collaboration of 2000 scientists involves 150 institutions from 31 nations spread all over the world. CMS software system integration and release management is performed at CERN. Code management is based on CVS, with read or write access to the repository via a CVS server. Software configuration and release management tools (SCRAM) are being developed at CERN. Software releases are then distributed to regional centers, where the software is used by a local community for a wide variety of tasks, such as software development, detector simulation and reconstruction, and physics analysis. Depending on the specific application, the system environment and local hardware requirements, different approaches and tools are used for the CMS software installation at different places. This presentation describes concepts and practical solutions for a variety of ways of software distribution, with an emphasis on the CMS experience at Fermilab. Installation and usage of the different models used for the production farm, for code development and for physics analysis are described.
Abstract: The data acquisition system for the CMS experiment at the Large Hadron Collider (LHC) will require a large and high performance event building network. Several architectures and switch technologies are currently being evaluated. This paper describes demonstrators which have been set up to study a small-scale event builder based on PCs emulating high performance sources and sinks connected via Ethernet or Myrinet switches. Results from ongoing studies, including measurements on throughput and scaling, are presented.
Abstract: The CMS regional calorimeter trigger system detects signatures of electrons/photons, taus, jets, and missing and total transverse energy in a deadtimeless pipelined architecture. This system receives 7000 calorimeter trigger tower energies on 1.2 Gbaud digital copper cable serial links and processes them in a low-latency pipelined design using custom-built electronics. At the heart of the system is the Receiver Card, which uses the new generation of gigabit Ethernet receiver chips on a mezzanine card to convert serial data to parallel data before transmission on a 160 MHz backplane for further processing by cards that sum energies and identify electrons and jets. We describe the algorithms and hardware implementation, and summarize the simulation results, which show that this system is capable of handling the rate requirements while triggering on physics signals with high efficiency.
Abstract: The CMS IGUANA project has implemented an open analysis architecture that enables the creation of an integrated analysis environment. In this "analysis desktop" environment a physicist is able to perform most analysis-related tasks, not just the presentation and visualisation steps usually associated with analysis tools. The motivation behind IGUANA's approach is that physics analysis includes much more than just visualisation and data presentation. Many factors contribute to the increasing importance of making analysis and visualisation software an integral part of the experiment's software: object-oriented and ever more advanced data models, GRID, and automated hierarchical storage management systems, to name just a few. At the same time the analysis toolkits should be modular and non-invasive, to be usable in different contexts within one experiment and generally across experiments. Ideally the analysis environment would appear to be perfectly customised to the experiment and the context, but would mostly consist of generic components. We describe how the IGUANA project is addressing these issues and present both the architecture and examples of how different aspects of analysis appear to the users and the developers.
Abstract: The multi-tiered architecture of the highly-distributed CMS computing systems necessitates a flexible data distribution and analysis environment. We describe a prototype analysis environment which functions efficiently over wide area networks, using a server installed at the Caltech/UCSD Tier 2 prototype to analyze CMS data stored at various locations using a thin client. The analysis environment is based on existing HEP (Anaphe) and CMS (CARF, ORCA, IGUANA) software technology on the server, accessed from a variety of clients. A Java Analysis Studio (JAS, from SLAC) plug-in is being developed as a reference client. The server is operated as a "black box" on the proto-Tier2 system. ORCA Objectivity databases (e.g. an existing large CMS Muon sample) are hosted on the master and slave nodes, and remote clients can request processing of queries across the server nodes and get the histogram results returned and rendered in the client. The server is implemented in pure C++ and uses XML-RPC as a language-neutral transport. This has several benefits, including much better scalability, better integration with CARF/ORCA, and, importantly, makes the work directly useful to other non-Java general-purpose analysis and presentation tools such as Hippodraw, Lizard, or ROOT.
Funding: Supported by the Director, Office of Science, Offices of High Energy and Nuclear Physics of the U.S. Department of Energy (Grant No. DE-AC02-05CH11231).
Abstract: The Standard Model (SM) Higgs boson was predicted by theorists in the 1960s during the development of the electroweak theory. Prior to the startup of the CERN Large Hadron Collider (LHC), experimental searches found no evidence of the Higgs boson. In July 2012, the ATLAS and CMS experiments at the LHC reported the discovery of a new boson in their searches for the SM Higgs boson. Subsequent experimental studies have revealed the spin-0 nature of this new boson and found its couplings to SM particles consistent with those of a Higgs boson. These measurements confirmed that the newly discovered boson is indeed a Higgs boson. More measurements will be performed to compare the properties of the Higgs boson with the SM predictions.
Abstract: The need to write software packages that do not depend explicitly on server code, and do not need to be modified when new server interfaces arise, led to the development of the hidden adapter pattern. We will show its workings and give an example of how complete decoupling between depended-on and dependent code can be achieved using the hidden adapter pattern.
Abstract: New experiments, including those at the LHC, will require analysis of very large datasets, which are best handled with distributed computation. We present the design and development of a prototype framework using Java and Objectivity. Our framework solves such analysis-specific problems as selecting event samples from large distributed databases, producing variable distributions, and negotiating between multiple analysis service providers. Examples from the successful application of the prototype to the analysis of data from the L3 experiment will also be presented.
Abstract: Industry experts are increasingly focusing on team productivity as the key to success. The basis of the team effort is the four-fold structure of software in terms of logical organisation, physical organisation, managerial organisation, and dynamical structure. We describe the ideas put into action within the CMS software for organising software into sub-systems and packages, and for establishing configuration management in a multi-project environment. We use a structure that allows us to maximise the independence of software development in individual areas, and at the same time emphasises the overwhelming importance of the interdependencies between the packages and components in the system. We comment on release procedures, and describe the inter-relationship between release, development, integration, and testing.
Abstract: Nearly fifty years after theoretical physicists predicted the existence of the Higgs particle in 1964, the ATLAS and CMS experiments at the Large Hadron Collider (LHC) of CERN (the European Organization for Nuclear Research) finally confirmed, in July 2012, the existence of the Standard Model (SM) Higgs particle. This is a great triumph for theoretical and experimental high-energy physicists, and also a triumph for the symmetry principles of modern physics introduced by Noether in the last century. What is the next goal that scientists will pursue? This has naturally become a focus of attention.