Journal Articles
15 articles found
1. A Standard Event Class for Monte Carlo Generators
Authors: L.A. Garren, M. Fischler 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 370-371 (2 pages)
StdHepC++ [1] is a CLHEP [2] Monte Carlo event class library which provides a common interface to Monte Carlo event generators. This work is an extensive redesign of the StdHep Fortran interface to use the full power of object-oriented design. A generated event maps naturally onto the directed acyclic graph concept, and we have used the HepMC classes to implement this. The full implementation allows the user to combine events to simulate beam pileup and access them transparently as though they were a single event.
Keywords: Monte Carlo generators; standard event class; StdHepC++
2. FBSNG—Batch System for Farm Architecture
Authors: J. Fromm, K. Genser, et al. 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 51-52 (2 pages)
FBSNG [1] is a redesigned version of the Farm Batch System (FBS [1]), which was developed as a batch process management system for off-line Run II data processing at FNAL. FBSNG is designed for UNIX computer farms and is capable of managing up to 1000 nodes in a single farm. FBSNG allows users to start arrays of parallel processes on one or more farm computers. It uses a simplified abstract resource counting method for load balancing between computers. The resource counting approach allows FBSNG to be a simple and flexible tool for farm resource management. FBSNG scheduler features include guaranteed and controllable "fair-share" scheduling. FBSNG is easily portable across different flavors of UNIX. The system has been successfully used at Fermilab, as well as by off-site collaborators, for several years on farms of different sizes and platforms for off-line data processing, Monte Carlo data generation, and other tasks.
Keywords: experiment; software farm; data processing system; PC cluster
3. The CDF Run II Disk Inventory Manager
Authors: Paul Hubbard, Stephan Lammel 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 304-307 (4 pages)
The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year, and the duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data, and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.
Keywords: experiment; proton-antiproton interactions; data storage; data processing
4. Architecture Design of Trigger and DAQ System for Fermilab CKM Experiment
Authors: Jinyuan Wu 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 579-582 (4 pages)
The Fermilab CKM (E921) experiment studies a rare kaon decay which has a very small branching ratio and can be very hard to separate from background processes. A trigger and DAQ system is required to collect all necessary information for background rejection and to maintain high reliability at high beam rate. The unique challenges have emphasized the following guiding concepts: (1) Collecting background is as important as collecting good events. (2) A DAQ "event" should not be just a "snapshot" of the detector; it should be a short history record of the detector around the candidate event. The hit history provides information to understand temporary detector blindness, which is extremely important to the CKM experiment. (3) The main purpose of the trigger system should not be "knocking down trigger rate" or "throwing out garbage events"; instead, it should classify the events and select appropriate data collecting strategies among various predefined ones for the given types of events. The following methodologies are employed in the architecture to fulfill the experiment requirements without confronting unnecessary technical difficulties: (1) Continuous digitization near the detector elements is utilized to preserve the data quality. (2) The concept of minimum synchronization is adopted to eliminate the need for time matching signal paths. (3) A global level 1 trigger performs coincidence and veto functions using digital timing information to avoid problems due to signal degrading in long cables. (4) The DAQ logic allows collecting chronicle records around the interesting events with different levels of detail of ADC information, so that very low energy particles in the veto systems can be best detected. (5) A re-programmable hardware trigger (L2.5) and a software trigger (L3) sitting in the DAQ stream are planned to perform data selection functions based on full detector data with adjustability.
Keywords: CKM experiment; kaon decay; trigger hierarchy design; high energy physics
5. HepPDT: Encapsulating the Particle Data Table
Authors: L.A. Garren, W. Brown 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 374-376 (3 pages)
As a result of discussions within the HEP community, we have written a C++ package which can be used to maintain a table of particle properties, including decay mode information. The classes allow for multiple tables and accept input from a number of standard sources. In addition, they provide a mechanism by which an event generator can employ the tabulated information to actually direct the decay of particles.
Keywords: HepPDT; particle data table; high energy physics
6. Lattice QCD Production on a Commodity Cluster at Fermilab
Authors: D. Holmgren, P. Mackenzie, et al. 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 57-60 (4 pages)
Large scale QCD Monte Carlo calculations have typically been performed on either commercial supercomputers or specially built massively parallel computers such as Fermilab's ACPMAPS. Commodity clusters equipped with high performance networking equipment present an attractive alternative, achieving superior performance-to-price ratios and offering clear upgrade paths. We describe the construction and results to date of Fermilab's prototype production cluster, which consists of 80 dual Pentium III systems interconnected with Myrinet networking hardware. We describe software tools and techniques we have developed for operating system installation and administration. We discuss software optimizations using the Pentium's built-in parallel computation facilities (SSE). Finally, we present short and long term plans for the construction of larger facilities.
Keywords: lattice QCD; Fermilab; commodity cluster
7. The CDF Data Acquisition System for Tevatron Run II
Authors: Arnd Meyer 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 597-600 (4 pages)
The CDF experiment at the Fermilab Tevatron has been significantly upgraded for the collider Run II, which started in March 2001 and is scheduled to last until 2006. Instantaneous luminosities of 10^32 cm^-2 s^-1 and above are expected. A data acquisition system capable of efficiently recording the data has been one of the most critical elements of the upgrade. Key figures are the ability to deal with the short bunch spacing of 132 ns, event sizes of the order of 250 kB, and permanent logging of 20 MB/s. The design of the system and experience from the first months of data-taking operation are discussed.
Keywords: CDF experiment; data acquisition; system design
8. Tools for Distributed Monitoring of the Campus Network with Low Latency Time
Authors: Andrey Bobyshev 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 421-423 (3 pages)
In addition to deployment of commercial management products, a number of public and self-developed tools are successfully used at Fermilab for monitoring of the campus network. A suite of tools is used, consisting of several programs running in a distributed environment to measure network parameters such as average round-trip time, traffic, throughput, error rate, DNS responses and others from different locations in the network. The system maintains a central archive of data and makes analysis and graphical representation available via a web-based interface. The developed tools are based on integration with the well known public software RRD, Cricket, fping, and iperf.
Keywords: campus network; distributed monitoring; Fermilab
9. The CDF Computing and Analysis System: First Experience
Authors: Rick Colombo, Fedor Ratnikov 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 15-19 (5 pages)
The Collider Detector at Fermilab (CDF) collaboration records and analyses proton anti-proton interactions with a center-of-mass energy of 2 TeV at the Tevatron. A new collider run, Run II, of the Tevatron started in April. During its more than two year duration the CDF experiment expects to record about 1 PetaByte of data. With its multi-purpose detector and center-of-mass energy at the frontier, the experimental program is large and versatile. The over 500 scientists of CDF will engage in searches for new particles, like the Higgs boson or supersymmetric particles, precision measurements of electroweak parameters, like the mass of the W boson, measurements of top quark parameters, and a large spectrum of B physics. The experiment has taken data and analysed them in previous runs. For Run II, however, the computing model was changed to incorporate new methodologies, the file format was switched, and both the data handling and analysis systems were redesigned to cope with the increased demands. This paper (4-036 at CHEP 2001) gives an overview of the CDF Run II computing system with emphasis on areas where the current system does not match initial estimates and projections. For the data handling and analysis system a more detailed description is given.
Keywords: detector; analysis system; experiment
10. SAM and the Particle Physics Data Grid
Authors: Lauri Loebel-Carpenter, Lee Lueking 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 724-729 (6 pages)
The D0 experiment's data and job management system software, SAM, is an operational prototype of many of the concepts being developed for Grid computing. We explain how the components of SAM map into the Data Grid architecture. We discuss the future use of Grid components to either replace existing components of SAM or to extend its functionality and utility, work being carried out as part of the Particle Physics Data Grid (PPDG) project.
Keywords: particle physics; data grid; SAM
11. A Generic Digitization Framework for the CDF Simulation
Authors: Jim Kowalkowski, Marc Paterno 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 485-489 (5 pages)
Digitization from GEANT tracking requires a predictable sequence of steps to produce raw simulated detector readout information. We have developed a software framework that simplifies the development and integration of digitizers by separating the coordination activities (sequencing and dispatching) from the actual digitization process. This separation allows the developers of digitizers to concentrate on digitization. The framework provides the sequencing infrastructure and a digitizer model, which means that all digitizers are required to follow the same sequencing rules and provide an interface that fits the model.
Keywords: program design; CDF simulation; digitization
12. CMS Software Distribution and Installation Systems: Concepts, Practical Solutions and Experience at Fermilab as a CMS Tier 1 Center
Authors: Natalia M. Ratnikova, Gregory E. Graham 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 506-509 (4 pages)
The CMS Collaboration of 2000 scientists involves 150 institutions from 31 nations spread all over the world. CMS software system integration and release management is performed at CERN. Code management is based on CVS, with read or write access to the repository via a CVS server. Software configuration and release management tools (SCRAM) are being developed at CERN. Software releases are then distributed to regional centers, where the software is used by a local community for a wide variety of tasks, such as software development, detector simulation and reconstruction, and physics analysis. Depending on the specific application, the system environment and local hardware requirements, different approaches and tools are used for the CMS software installation at different places. This presentation describes concepts and practical solutions for a variety of ways of software distribution, with an emphasis on the CMS experience at Fermilab. Installation and usage of the different models used for the production farm, for code development and for physics analysis are described.
Keywords: CMS experiment; distribution system; installation system
13. The DZERO Online System Event Path
Authors: S. Fuess, M. Begel 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 624-627 (4 pages)
The Online computing system for the DZERO experiment is used to control, monitor, and acquire data from the approximately 1-million channel detector. This paper describes the Online Host system event data path requirements and design.
Keywords: DZERO experiment; data acquisition; data monitoring
14. CDF Run II Data File Catalog
Authors: J. Kowalkowski, F. Ratnikov 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 308-309 (2 pages)
The CDF experiment started data taking in April 2001. The data are organized into datasets which contain events of similar physics properties and reconstruction version. The information about datasets is stored in the Data File Catalog, a relational database. This information is presented to the data processing framework as objects which are retrieved using compound keys. The objects and the keys are designed to be the algorithms' view of the information stored in the database. Objects may use several DB tables. A database interface management layer exists for the purpose of managing the mapping of persistent data to transient objects that can be used by the framework. This layer sits between the algorithm code and the code which reads directly from database tables. At the user end, it places a get/put interface on top of a transient class for retrieval or storage of objects of this class using a key. The Data File Catalog code makes use of this facility and contains all the code needed to manipulate the CDF Data File Catalog from a C++ program or from the command prompt. It supports an Oracle interface using OTL, and an mSQL interface. This code and the Oracle implementation of the Data File Catalog were subjected to test during the CDF Commissioning Run last fall and during the first weeks of Run II in April. It performed exceptionally well.
Keywords: CDF experiment; data processing; data file catalog
15. The BTeV DAQ and Trigger System—Some Throughput, Usability and Fault Tolerance Aspects
Authors: E.E. Gottschalk, T. Bapty 《International Conference on Computing in High Energy and Nuclear Physics》 2001, No. 1, pp. 628-631 (4 pages)
As presented at the last CHEP conference, the BTeV triggering and data collection pose a significant challenge in construction and operation, generating 1.5 Terabytes/second of raw data from over 30 million detector channels. We report on facets of the DAQ and trigger farms. We report on the current design of the DAQ, especially its partitioning features to support commissioning of the detector. We are exploring collaborations with computer science groups experienced in fault tolerant and dynamic real-time and embedded systems to develop a system to provide the extreme flexibility and high availability required of the heterogeneous trigger farm (~ten thousand DSPs and commodity processors). We describe directions in the following areas: system modeling and analysis using the Model Integrated Computing approach to assist in the creation of domain-specific modeling, analysis, and program synthesis environments for building complex, large-scale computer-based systems; System Configuration Management, to include compileable design specifications for configurable hardware components, schedules, and communication maps; and Runtime Environment and Hierarchical Fault Detection/Management, a system-wide infrastructure for rapidly detecting, isolating, filtering, and reporting faults, which will be encapsulated in intelligent active entities (agents) to run on DSPs, L2/3 processors, and other supporting processors throughout the system.
Keywords: data acquisition; trigger system; fault tolerance