Shallow convection plays an important role in transporting heat and moisture from the near-surface to higher altitudes, yet its parameterization in numerical models remains a great challenge, partly due to the lack of high-resolution observations. This study describes a large eddy simulation (LES) dataset for four shallow convection cases that differ primarily in inversion strength, which can be used as a surrogate for real data. To reduce the uncertainty in LES modeling, three different large eddy models were used: SAM (System for Atmospheric Modeling), WRF (Weather Research and Forecasting model), and UCLA-LES. Results show that the different models generally exhibit similar behavior for each shallow convection case, despite some differences in the details of the convective structure. In addition to grid-averaged fields, conditionally sampled variables, such as in-cloud moisture and vertical velocity, are also provided, which are indispensable for calculation of the entrainment/detrainment rate. Considering how essential the entraining/detraining process is to the parameterization of cumulus convection, the dataset presented in this study is potentially useful for validation and improvement of shallow convection parameterizations.
With the rapid development of artificial intelligence, large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation. These models have great potential to enhance database query systems, enabling more intuitive and semantic query mechanisms. Our model leverages an LLM's deep learning architecture to interpret natural language queries and translate them into accurate database queries. The system integrates an LLM-powered semantic parser that translates user input into structured queries that the database management system can understand. First, the user query is pre-processed: the text is normalized and ambiguity is removed. Next comes semantic parsing, where the LLM interprets the pre-processed text and identifies key entities and relationships. Query generation then converts the parsed information into a structured query format tailored to the target database schema. Finally, the resulting query is executed on the database and the results are returned to the user. The system also provides feedback mechanisms to improve and optimize future query interpretations. Using advanced LLMs for model implementation and fine-tuning on diverse datasets, the experimental results show that the proposed method significantly improves the accuracy and usability of database queries, making data retrieval easy for users without specialized knowledge.
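The four stages described above (pre-processing, semantic parsing, query generation, execution) can be sketched as a minimal pipeline. This is an illustrative stand-in, not the paper's implementation: the rule-based parser substitutes for the LLM, and the table and column names are hypothetical.

```python
import re
import sqlite3

def preprocess(text: str) -> str:
    # Normalization step: collapse whitespace and lowercase the input.
    return re.sub(r"\s+", " ", text).strip().lower()

def semantic_parse(text: str) -> dict:
    # A real system would call an LLM here; this toy parser only
    # recognizes "show <column> of <table>" style requests.
    m = re.match(r"show (\w+) of (\w+)", text)
    if not m:
        raise ValueError("unrecognized query")
    return {"column": m.group(1), "table": m.group(2)}

def generate_sql(parsed: dict) -> str:
    # Query-generation step: map the parse to the target schema.
    return f"SELECT {parsed['column']} FROM {parsed['table']}"

# Execution step against a hypothetical in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")
sql = generate_sql(semantic_parse(preprocess("  Show NAME of USERS ")))
rows = conn.execute(sql).fetchall()
```

A production system would replace `semantic_parse` with an LLM call and add the feedback loop the abstract describes.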
The control measures used during the setting round process to ensure the quality of pipe products rely mainly on the operator's experience, so it is necessary to study the process and obtain its spring-back law. The setting round process shapes an oval-section pipe into a circular section, and a quantitative analysis of its spring-back is difficult because of the curvature inequality of the neutral layer of the pipe section. However, the spring-back law of the circle-oval process can be predicted readily. An experimental method is first used to establish the equivalence between the setting round process and the circle-oval process, so that the former can be converted into the latter. Two difficulties arise in the theoretical analysis of the circle-oval process: the elastic-plastic bending of a curved beam, and a statically indeterminate problem. A quantitative analytic method for the circle-oval process is presented by combining the spring-back law of a plane curved beam with the element-dividing idea of the finite element method. The ovality after unloading versus the relative reduction is plotted with analytical and experimental results respectively, which show fair agreement. Finally, a method for quantitative prediction of the reduction for large-pipe setting round is given based on the established equivalence and the analytical results. Five pipes in need of setting round were used in a verification experiment. The results indicate that, within the experimental range, the residual ovalities are all under 0.35% after a single setting round with the theoretically predicted reductions, well below the 1% requirement of the pipe standard. The established theoretical analysis can therefore correct pipe ovality with sufficient accuracy, providing theoretical guidance for plant use.
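The ovality figures quoted above can be checked against the usual definition, ovality = (Dmax − Dmin)/Dnominal. A minimal sketch; the diameters below are illustrative, not measurements from the experiments:

```python
def ovality(d_max: float, d_min: float, d_nominal: float) -> float:
    # Ovality as (Dmax - Dmin) / Dnominal, returned as a fraction.
    return (d_max - d_min) / d_nominal

# A hypothetical 1000 mm pipe measuring 1002 mm / 999 mm across its axes:
residual = ovality(1002.0, 999.0, 1000.0)   # 0.003, i.e. 0.3%
meets_standard = residual < 0.01            # the 1% pipe-standard limit
```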
A class of multi-dimensional degenerate diffusion processes X_ε(t) in R^r (r ≥ 2) is considered and the asymptotic properties of their empirical measures are investigated. Here X_ε(t) satisfies the stochastic differential equation dX_ε(t) = σ(X_ε(t))dW(t) + B(X_ε(t))dt + ε σ̃(X_ε(t))dW(t), ε > 0. The X_ε(t) are small random perturbations of the degenerate diffusion process X(t), which satisfies the stochastic differential equation dX(t) = σ(X(t))dW(t) + B(X(t))dt. A large deviation theorem for the projection measures ν on R^(r−n) (n < r) of the empirical measures μ is proved.
Let (Z_n) be a branching process with immigration in a random environment ξ, where ξ is an independent and identically distributed sequence of random variables. We show asymptotic properties for all the moments of Z_n and describe the decay rates of the n-step transition probabilities. As applications, a large deviation principle for the sequence log Z_n is established, and related large deviations are also studied.
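A minimal simulation of (Z_n) helps build intuition for the moment growth studied above. All modeling choices here are illustrative assumptions, not the paper's setup: the environment ξ draws an offspring mean per generation from a fixed list, offspring counts are Poisson, and immigration is a fixed constant.

```python
import math
import random

def poisson(rng: random.Random, lam: float) -> int:
    # Knuth's multiplication method for a Poisson variate.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_z(n_steps: int, env_means, immigration: int = 1, seed: int = 0):
    # Z_{n+1} = offspring of the Z_n individuals (offspring mean drawn
    # i.i.d. from env_means -- the random environment xi) plus immigration.
    rng = random.Random(seed)
    z, path = 0, [0]
    for _ in range(n_steps):
        m = rng.choice(env_means)
        z = sum(poisson(rng, m) for _ in range(z)) + immigration
        path.append(z)
    return path

path = simulate_z(50, env_means=[0.5, 1.5])
```

With constant immigration the population never dies out, which is why the decay rates of the transition probabilities, rather than extinction, are the quantities of interest.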
Process planning for large complicated stampings is more complicated, ambiguous and multiform than that for common stampings. In this paper, an intelligent master model of computer aided process planning (CAPP) for large complicated stampings has been developed based on knowledge based engineering (KBE) and feature technology. This innovative model consists of a knowledge base (KB), a process control structure (PCS), a process information model (PIM), multidisciplinary design optimization (MDO), a model link environment (MLE) and a simulation engine (SE), integrating process planning, optimization, simulation and management into a complete intelligent CAPP system. In this model, KBE provides the knowledge base, an open architecture and knowledge reuse ability to deal with the multi-domain and multi-expression nature of process knowledge, and forms an integrated environment. With the PIM, all the knowledge, consisting of objects, constraints, experience and decision-makings, is carried dynamically by an object-oriented method for knowledge reasoning. The PCS keeps dynamical knowledge modified and updated in a timely manner. The MLE provides several methods to make the CAPP system associated and integrated. The SE provides a programmable mechanism to interpret the simulation course and results. Meanwhile, collaborative optimization, one method of MDO, is imported to handle optimization distributed across multiple purposes. All these make the CAPP system integrated and open to other systems, such as die design and manufacturing systems.
In India, with ever increasing population and stress on natural resources, especially water, rejuvenation of the rainwater harvesting (RWH) techniques that were forgotten over the days is becoming very essential. A large number of the RWH methods available in the literature are demand-specific and site-specific, since an RWH system depends on the topography, land use, land cover, rainfall and demand pattern. Thus for each and every case, a detailed evaluation of RWH structures is required before implementation, including the analysis of hydrology, topography and other aspects like site availability and economics; however, a common methodology can be evolved. The present study was aimed at evaluating various RWH techniques in order to identify the most appropriate technique for a large industrial area to meet its daily water demand. An attempt is made to determine the volume of water to be stored using the mass balance method, the Ripple diagram method, the analytical method, and the sequent peak algorithm method. Based on various satisfying criteria, the analytic hierarchy process (AHP) is employed to determine the most appropriate type of RWH method and the required number of RWH structures in the study area. If economy alone is considered along with hydrological and site-specific parameters, recharging the aquifer emerges as the better choice. However, other criteria, namely risk and the satisfaction of obtaining the required volume of water for immediate utilization, lead to opting for the concrete storage structures method. The results show that AHP, if used with all possible criteria, can be a better tool for evaluating RWH methods and structures. These RWH structures not only meet the demand but also save the transportation cost of water and reduce the dependence of the industry on the irrigation reservoir. Besides monetary benefits, it is hoped that the microenvironment inside the industry will improve due to the cooling effect of the stored water.
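The AHP step described above reduces to computing priority weights from a pairwise comparison matrix. A minimal sketch using the row-geometric-mean approximation; the three criteria and the judgment values are hypothetical, not those of the study:

```python
import math

def ahp_weights(matrix):
    # Row geometric means of the pairwise comparison matrix,
    # normalized so the weights sum to one.
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise judgments for three criteria:
# economy vs. risk vs. satisfaction of required water volume.
comparisons = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
weights = ahp_weights(comparisons)   # economy weighted highest here
```

In a full AHP evaluation each candidate RWH method would also be scored against each criterion, and a consistency ratio would be checked before trusting the weights.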
In this paper, we study the precise large deviations for the prospective-loss process with consistently varying tails. The obtained results improve some related known ones.
Managing TG-51 reference dosimetry in a large hospital network can be a challenging task. The objectives of this study are to investigate the effectiveness of using Statistical Process Control (SPC) to manage the TG-51 workflow in such a network. All the sites in the network performed the annual reference dosimetry in water according to TG-51. These data were used to cross-calibrate the same ion chambers in plastic phantoms for monthly QA output measurements. An energy-specific dimensionless beam-quality cross-calibration factor, k, was derived to monitor the process across multiple sites. The SPC analysis was then performed to obtain the mean, k̄, the standard deviation, σ_k, and the Upper Control Limit (UCL) and Lower Control Limit (LCL) in each beam. This process was first applied to 15 years of historical data at the main campus to assess its effectiveness. A two-year prospective study including all 30 linear accelerators spread over the main campus and seven satellites in the network followed. The ranges of the control limits (±3σ) were found to be 1.7% - 2.6% and 3.3% - 4.2% for the main campus and the satellite sites respectively. The wider range at the satellite sites was attributed to variations in the workflow. Standardization of workflow was also found to be effective in narrowing the control limits. The SPC is effective in identifying variations in the workflow and was shown to be an effective tool for managing reference dosimetry in a large network.
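The control limits used in the study are the standard Shewhart ±3σ bounds. A minimal sketch; the monthly cross-calibration factors below are hypothetical values for one beam, not data from the paper:

```python
import statistics

def spc_limits(samples):
    # Mean and Shewhart +/-3 sigma control limits.
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mean, mean - 3 * sigma, mean + 3 * sigma

# Hypothetical dimensionless cross-calibration factors k for one beam:
k = [1.000, 0.998, 1.003, 1.001, 0.999, 1.002, 1.000, 0.997]
mean, lcl, ucl = spc_limits(k)
out_of_control = [x for x in k if not lcl <= x <= ucl]
```

Points falling outside [LCL, UCL] would flag a workflow variation worth investigating before the annual TG-51 measurement.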
Numerical control (NC) bending experiments with different process parameters were carried out for 5052-O aluminum alloy tubes with an outer diameter of 70 mm, a wall thickness of 1.5 mm, and a centerline bending radius of 105 mm, and the effects of the process parameters on tube wall thinning and cross-section distortion were investigated. Meanwhile, acceptable bending of the 5052-O aluminum tubes was accomplished based on the above experiments. The results show that the effects of the process parameters on the bending process for large-diameter thin-walled aluminum alloy tubes are similar to those for small-diameter thin-walled tubes, but the forming quality of the large-diameter thin-walled tubes is much more sensitive to the process parameters, and thus they are more difficult to form.
In the procedure of steady-state hierarchical optimization with feedback for large-scale industrial processes, a sequence of set-point changes with different magnitudes is carried out on the optimization layer. To improve the dynamic performance of the transient response driven by the set-point changes, a filter-based iterative learning control strategy is proposed. In the proposed updating law, a local-symmetric-integral operator is adopted to eliminate measurement noise in the output information, a set of desired trajectories is specified according to the set-point change sequence, and the current control input is iteratively obtained by using the smoothed output error to modify the control input of the previous iteration, with amplification coefficients related to the different magnitudes of the set-point changes. The convergence of the algorithm is analyzed by incorporating frequency-domain techniques into a time-domain analysis. Numerical simulation demonstrates the effectiveness of the proposed strategy.
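The flavor of the filtered update law can be illustrated on a toy problem. Everything below is an illustrative assumption rather than the paper's algorithm: a binomial smoothing filter stands in for the local-symmetric-integral operator, and the first-order plant, learning gain, and horizon are invented for the sketch.

```python
def smooth(e):
    # Symmetric binomial [1, 2, 1]/4 filter (stand-in for the
    # local-symmetric-integral operator); edge weights renormalized.
    n, out = len(e), []
    for i in range(n):
        vals = [(2.0, e[i])]
        if i > 0:
            vals.append((1.0, e[i - 1]))
        if i < n - 1:
            vals.append((1.0, e[i + 1]))
        wsum = sum(w for w, _ in vals)
        out.append(sum(w * v for w, v in vals) / wsum)
    return out

def plant(u, a=0.8, b=0.5):
    # Toy stable first-order plant: y(t) = a*y(t-1) + b*u(t).
    y, ys = 0.0, []
    for ut in u:
        y = a * y + b * ut
        ys.append(y)
    return ys

def ilc(y_ref, iterations=30, gain=0.2):
    # Filtered P-type learning: u_{k+1}(t) = u_k(t) + gain * smoothed e_k(t).
    u = [0.0] * len(y_ref)
    for _ in range(iterations):
        e = [r - y for r, y in zip(y_ref, plant(u))]
        u = [ui + gain * ei for ui, ei in zip(u, smooth(e))]
    return u

y_ref = [1.0] * 50                    # a unit set-point change
y_final = plant(ilc(y_ref))
sse = sum((r - y) ** 2 for r, y in zip(y_ref, y_final))
```

Iteration after iteration the smoothed tracking error shrinks, mimicking how the proposed strategy refines the transient response between successive set-point changes.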
To overcome the stumbling blocks of stretch processing, namely long-time coherent integration and a limited range window, and to reduce computational complexity, a novel method called multi-subpulse processing of large time-bandwidth product linear frequency modulated (LFM, i.e., chirp) signals is proposed in this paper. The wideband chirp signal is split into several compressed subpulses. Then the fast Fourier transform (FFT) is used to reconstruct the high resolution range profile (HRRP) in a relatively short computation time. For multiple frames, pulse Doppler (PD) processing is performed to obtain the two-dimensional range-Doppler (R-D) high resolution profile. Simulations and field experimental results show that the proposed method can provide a high-quality target profile over a large range window in a short computation time and has promising potential for long-time coherent integration.
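Pulse compression of an LFM chirp via the frequency domain is the core operation behind the sub-pulse reconstruction above. A minimal sketch: a naive O(n²) DFT stands in for the FFT, and the waveform parameters are illustrative, not those of the paper.

```python
import cmath
import math

def chirp(n, bandwidth, duration):
    # Baseband LFM: s(t) = exp(j*pi*K*t^2), K = B/T, t in [-T/2, T/2).
    k = bandwidth / duration
    return [cmath.exp(1j * math.pi * k * (i / n * duration - duration / 2) ** 2)
            for i in range(n)]

def dft(x, inverse=False):
    # Naive DFT for clarity; an FFT would be used in practice.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(sign * 2j * math.pi * f * t / n)
               for t in range(n)) for f in range(n)]
    return [v / n for v in out] if inverse else out

n = 128
s = chirp(n, bandwidth=1e6, duration=1e-4)       # time-bandwidth product 100
S = dft(s)
H = [v.conjugate() for v in S]                   # matched filter
compressed = dft([a * b for a, b in zip(S, H)], inverse=True)
peak = max(range(n), key=lambda i: abs(compressed[i]))
```

The matched-filter output is the circular autocorrelation of the chirp: all the signal energy piles up into a narrow peak, which is what gives the reconstructed range profile its high resolution.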
Let u ∈ R. For any ε > 0, the processes X^ε = {X^ε(t); 0 ≤ t ≤ 1} are governed by the following random evolution equations: dX^ε(t) = b(X^ε(t), v(t))dt − ε dS_{t/ε}, where S = {S_t; 0 ≤ t ≤ 1} is a compound Poisson process and the process v = {v(t); 0 ≤ t ≤ 1} is independent of S and takes values in R^m. We derive the large deviation principle for {(X^ε, v(·)); ε > 0} as ε ↓ 0 by an approximation method and the contraction principle, which is meaningful for finding the path properties of risk processes of this type.
A parallel arithmetic program for molecular dynamics (MD) simulation of a large system consisting of 50 000-100 000 atoms of liquid metals was developed from the cascade arithmetic program used for MD simulation of a small system consisting of 500-1 000 atoms. The program is used to simulate the rapid solidification processes of a liquid metal Al system. Some new results are obtained: for example, larger clusters composed of more than 36 smaller clusters (icosahedra or defect icosahedra) appear in the system of 50 000 atoms, whereas such larger clusters cannot be seen in the small system of 500-1 000 atoms. On the other hand, the results of this simulation study should be closer to the real situation of the system under consideration because the influence of boundary conditions is decreased remarkably. It can be expected that, with the parallel algorithm combined with higher-performance supercomputers, the total number of atoms in the simulated system can be enlarged by tens or even hundreds of times in the near future.
As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part of Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal application. In this paper, we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and seek relevant studies. Our review reveals ChatGPT's significant potential in enhancing various NLP tasks. Its adaptability in information extraction tasks, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as a complementary mechanism, empowering users to guide the model and enhance overall accuracy. Despite its promising potential, challenges persist. The performance of ChatGPT needs to be tested using more extensive datasets and diverse data structures. Moreover, its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigation to address these issues.
By the Cramér method, the large deviation principle for a form of compound Poisson process, S(t) = ∑_{i=1}^{N(t)} h(t − S_i) X_i, is obtained, where N(t), t > 0, is a nonhomogeneous Poisson process with intensity λ(t) > 0; X_i, i ≥ 1, are i.i.d. nonnegative random variables independent of N(t); and h(t), t > 0, is a nonnegative monotone real function. Consequently, weak convergence for S(t) is also obtained.
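The process S(t) above can be simulated directly. A minimal sketch using thinning for the nonhomogeneous arrivals; the choices λ(t) = 1 + 0.5t, h(t) = e^(−t), and exponential marks X_i are illustrative, and the constant majorant is assumed valid on the horizon:

```python
import math
import random

def simulate_s(t_end, intensity, h, rng):
    # Arrivals S_i of the nonhomogeneous Poisson process on [0, t_end]
    # by thinning against a constant majorant, then
    # S(t_end) = sum_i h(t_end - S_i) * X_i with i.i.d. exponential marks.
    lam_max = max(intensity(i * t_end / 1000) for i in range(1001))
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            break
        if rng.random() < intensity(t) / lam_max:
            arrivals.append(t)
    return sum(h(t_end - s) * rng.expovariate(1.0) for s in arrivals)

rng = random.Random(42)
samples = [simulate_s(10.0, lambda u: 1.0 + 0.5 * u,
                      lambda v: math.exp(-v), rng) for _ in range(200)]
```

Because h and the X_i are nonnegative, every sample of S(t) is nonnegative, matching the assumptions of the theorem.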
In this paper we examine the large deviations principle (LDP) for sequences of classic Cramér-Lundberg risk processes under suitable time and scale modifications, and also for a wide class of claim distributions including (the non-super-exponential) exponential claims. We prove two large deviations principles: first, we obtain the LDP for risk processes on D[0,1] with the Skorohod topology. In this case, we provide an explicit form for the rate function, in which the safety loading condition appears naturally. The second theorem allows us to obtain the LDP for aggregate claims processes on D[0,∞) with a different time-scale modification. As an application of the first result we estimate the ruin probability, and for the second result we work out explicit calculations for the case of exponential claims.
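As a companion to the ruin-probability application, the classical Cramér-Lundberg surplus with exponential claims can be estimated by crude Monte Carlo, checking ruin only at claim instants (the only times it can occur). The numeric parameters are illustrative, not taken from the paper:

```python
import random

def ruin_prob_mc(u0, c, lam, mu, horizon, n_paths, seed=1):
    # Monte Carlo estimate of the ruin probability for the surplus
    # U(t) = u0 + c*t - (sum of claims up to t), with claim rate lam
    # and exponential claim sizes of mean mu.
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)
            if t > horizon:
                break
            claims += rng.expovariate(1.0 / mu)
            if u0 + c * t - claims < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# Safety loading theta = c/(lam*mu) - 1 = 0.25; for exponential claims the
# classical closed form gives psi(5) = (1/1.25)*exp(-0.25*5/1.25) ~ 0.294.
p = ruin_prob_mc(u0=5.0, c=1.25, lam=1.0, mu=1.0, horizon=200.0, n_paths=2000)
```

The positive safety loading is exactly the condition that makes the estimate stabilize below one, mirroring how the loading condition enters the rate function in the first theorem.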
The relationship between the arrangement of tungsten-halogen lamps and the uniformity of the irradiance received by the wafer is discussed, and an axially symmetric lamp array is designed to guarantee that the irradiation at the edge of the wafer is approximately the same as that at the center. The temperature of the wafer versus the power of the tungsten-halogen lamps is calculated numerically.
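The uniformity question above amounts to summing the contribution of each lamp at different points on the wafer. A minimal sketch treating the lamps as point sources with an inverse-square, cosine-obliquity model; the ring geometry and power are illustrative assumptions, not the designed array:

```python
import math

def irradiance(lamps, point, power_per_lamp):
    # Sum point-source contributions (inverse-square law with cosine
    # obliquity) from lamps at (x, y, z) onto a wafer point (x, y, 0).
    total = 0.0
    for lx, ly, lz in lamps:
        d2 = (lx - point[0]) ** 2 + (ly - point[1]) ** 2 + lz ** 2
        cos_theta = lz / math.sqrt(d2)
        total += power_per_lamp * cos_theta / (4 * math.pi * d2)
    return total

# A hypothetical ring of 8 lamps, radius 60 mm, 40 mm above the wafer:
ring = [(60 * math.cos(2 * math.pi * i / 8),
         60 * math.sin(2 * math.pi * i / 8), 40.0) for i in range(8)]
center = irradiance(ring, (0.0, 0.0), 1000.0)
edge = irradiance(ring, (50.0, 0.0), 1000.0)
uniformity = abs(center - edge) / center
```

Sweeping the ring radius and height to drive `uniformity` toward zero is the essence of the array-design problem the abstract describes.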
When heavy machines and the large-scale receiver systems of communication equipment are manufactured, large-sized steel castings, aluminum castings, etc. are needed. Defects of hot cracking caused by thermal stress often appear during the solidification process of these castings, resulting in casting failure. Predicting the effect of the production process parameters on the thermal stress during solidification therefore becomes an important tool. In this paper, mathematical models have been established, and numerical calculations of the temperature fields, using the finite difference method (FDM), and of the thermal stress fields, using the finite element method (FEM), during the solidification process of castings have been carried out. The production process parameters were optimized on the basis of the calculated results, and the hot cracking defects were eliminated. Modeling and simulation of the 3D thermal stress during the solidification of large-sized castings provided a scientific basis that promotes the further development of advanced manufacturing techniques.
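In its simplest 1-D form, the temperature-field step of such a simulation reduces to an explicit finite-difference update. A minimal sketch; the material constants and geometry are illustrative, not those of any particular casting:

```python
def heat_1d(n, steps, alpha, dx, dt, t_init, t_boundary):
    # Explicit FDM for 1-D transient conduction T_t = alpha * T_xx.
    # Stability requires r = alpha*dt/dx^2 <= 0.5 for this scheme.
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    T = [t_init] * n
    T[0] = T[-1] = t_boundary
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        Tn[0] = Tn[-1] = t_boundary
        T = Tn
    return T

# A hypothetical 0.5 m section cooling from 1500 C between 20 C mold walls:
T = heat_1d(n=51, steps=2000, alpha=1e-5, dx=0.01, dt=4.0,
            t_init=1500.0, t_boundary=20.0)
```

A full casting simulation replaces this with 3-D conduction plus latent-heat release, and then feeds the temperature history into the FEM thermal-stress calculation described above.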
A large Type Ⅳμ solar radio burst was observed at Yunnan Observatory on December 16, 1988. The burst was associated with a coronal mass ejection (CME) showing the radio signatures of Type Ⅱ and Type Ⅳ bursts. On the basis of the Beijing Huairou magnetogram of AR5278, the power-law distribution of electrons in the source region is calculated. It is suggested that the Type Ⅳμ burst associated with the CME was due to gyrosynchrotron radiation of high-energy electrons trapped by the magnetic field. Finally, some quantitative and qualitative explanations are proposed.
Funding (shallow convection LES dataset study): the National Key R&D Program of China (Grant No. 2021YFC3000802), the National Natural Science Foundation of China (Grant No. 42175165), and the National Key Scientific and Technological Infrastructure project "Earth System Numerical Simulation Facility" (EarthLab).
Funding (pipe setting round study): supported by the National Natural Science Foundation of China (Grant No. 51175452) and the Hebei Provincial Natural Science Foundation of China (Grant No. E2012203061).
Funding (branching process in random environment study): partially supported by the National Natural Science Foundation of China (11601286, 11501146).
文摘Let(Z_(n))be a branching process with immigration in a random environmentξ,whereξis an independent and identically distributed sequence of random variables.We show asymptotic properties for all the moments of Z_(n) and describe the decay rates of the n-step transition probabilities.As applications,a large deviation principle for the sequence log Z_(n) is established,and related large deviations are also studied.
文摘Process planning for large complicated stampings is more complicated, illegible and multiform than that for common stampings. In this paper, an intelligent master model of computer aided process planning (CAPP) for large complicated stampings has been developed based on knowledge based engineering (KBE) and feature technology. This innovative model consists of knowledge base (KB), process control structure (PCS), process information model (PIM), multidisciplinary design optimization (MDO), model link environment (MLE) and simulation engine (SE), to realize process planning, optimization, simulation and management integrated to complete intelligent CAPP system. In this model, KBE provides knowledge base, open architecture and knowledge reuse ability to deal with the multi-domain and multi-expression of process knowledge, and forms an integrated environment. With PIM, all the knowledge consisting of objects, constraints, cxtmricncc and decision-makings is carried by object-oriented method dynamically for knowledge-reasoning. PCS makes dynamical knowledge modified and updated timely and accordingly. MLE provides scv. cral methods to make CAPP sysmm associated and integrated. SE provides a programmable mechanism to interpret simulation course and result. Meanwhile, collaborative optimization, one method of MDO, is imported to deal with the optimization distributed for multiple purposes. All these make CAPP sysmm integrated and open to other systems, such as dic design and manufacturing system.
文摘In India, with an ever increasing population and stress on natural resources, especially water, rejuvenation of the rainwater harvesting (RWH) techniques that have been forgotten over the years is becoming very essential. The large number of RWH methods available in the literature are demand specific and site specific, since an RWH system depends on the topography, land use, land cover, rainfall and demand pattern. Thus for each and every case, a detailed evaluation of RWH structures is required for implementation, including the analysis of hydrology, topography and other aspects like site availability and economics; however, a common methodology could be evolved. The present study was aimed at evaluation of various RWH techniques in order to identify the most appropriate technique suitable for a large scale industrial area to meet its daily water demand. An attempt is made to determine the volume of water to be stored using the mass balance method, Ripple diagram method, analytical method, and sequent peak algorithm method. Based on various satisfying criteria, the analytic hierarchy process (AHP) is employed to determine the most appropriate type of RWH method and the required number of RWH structures in the study area. If economy alone is considered along with hydrological and site specific parameters, recharging the aquifer turns out to be the better choice. However, other criteria, namely risk and satisfaction in obtaining the required volume of water for immediate utilization, result in opting for the concrete storage structures method. From the results it is found that AHP, if used with all possible criteria, can be a better tool for evaluation of RWH methods and structures. These RWH structures not only meet the demand but save the transportation cost of water and reduce the dependability of the industry on the irrigation reservoir. Besides monetary benefits, it is hoped that the micro environment inside the industry will improve due to the cooling effect of the stored water.
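The AHP criterion-weighting step used in the abstract above can be sketched as follows. This is a minimal illustration, not the study's actual data: the three criteria (economy, risk, immediate availability) and the pairwise comparison values are hypothetical, chosen on Saaty's 1-9 scale. Weights are taken from the principal eigenvector of the comparison matrix, and a consistency ratio checks the judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (economy, risk, immediate availability) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
idx = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, idx].real)
w /= w.sum()                      # normalize weights to sum to 1

# Consistency check: CI = (lambda_max - n)/(n - 1); the random index
# RI is 0.58 for n = 3, and CR < 0.1 is conventionally acceptable.
n = A.shape[0]
ci = (eigvals.real[idx] - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 4))
```

With these illustrative judgments, economy receives the largest weight; alternative RWH structures would then be scored against each criterion and ranked by the weighted sum.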
文摘In this paper, we study the precise large deviations for the prospective-loss process with consistently varying tails. The obtained results improve some related known ones.
文摘Managing TG-51 reference dosimetry in a large hospital network can be a challenging task. The objectives of this study are to investigate the effectiveness of using Statistical Process Control (SPC) to manage the TG-51 workflow in such a network. All the sites in the network performed the annual reference dosimetry in water according to TG-51. These data were used to cross-calibrate the same ion chambers in plastic phantoms for monthly QA output measurements. An energy-specific dimensionless beam quality cross-calibration factor, k, was derived to monitor the process across multiple sites. The SPC analysis was then performed to obtain the mean, k̄, the standard deviation, σ_k, the Upper Control Limit (UCL) and Lower Control Limit (LCL) in each beam. This process was first applied to 15 years of historical data at the main campus to assess the effectiveness of the process. A two-year prospective study including all 30 linear accelerators spread over the main campus and seven satellites in the network followed. The ranges of the control limits (±3σ) were found to be 1.7% - 2.6% and 3.3% - 4.2% for the main campus and the satellite sites respectively. The wider range at the satellite sites was attributed to variations in the workflow. Standardization of the workflow was also found to be effective in narrowing the control limits. SPC is effective in identifying variations in the workflow and was shown to be an effective tool in managing reference dosimetry across a large network.
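The SPC control-limit calculation described above can be sketched with a Shewhart individuals chart. The monthly cross-calibration factors below are synthetic placeholder values, not the study's data; the limits follow the standard k̄ ± 3σ_k rule the abstract uses.

```python
import numpy as np

# Synthetic monthly cross-calibration factors k for one beam energy
# (illustrative only; real values come from TG-51 cross-calibrations).
k = np.array([1.002, 0.998, 1.001, 0.997, 1.003, 0.999,
              1.000, 1.002, 0.996, 1.001, 0.999, 1.000])

k_bar = k.mean()
sigma_k = k.std(ddof=1)          # sample standard deviation

# Shewhart individuals-chart limits at +/- 3 sigma.
ucl = k_bar + 3 * sigma_k
lcl = k_bar - 3 * sigma_k

# Flag any measurement outside the control limits.
out_of_control = (k > ucl) | (k < lcl)
print(f"mean={k_bar:.4f}, UCL={ucl:.4f}, LCL={lcl:.4f}, "
      f"out-of-control points: {int(out_of_control.sum())}")
```

A point outside [LCL, UCL] would prompt a review of the workflow at that site before the drift propagates into monthly QA outputs.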
基金Project(50225518) supported by the National Science Foundation of China for Distinguished Young ScholarsProject(59975076, 50175092) supported by the National Natural Science Foundation of ChinaProject(04H53057) supported by the Aviation Science Foundation of China
文摘Numerical control(NC) bending experiments with different process parameters were carried out for 5052-O aluminum alloy tubes with an outer diameter of 70 mm, wall thickness of 1.5 mm, and centerline bending radius of 105 mm, and the effects of process parameters on tube wall thinning and cross section distortion were investigated. Meanwhile, acceptable bending of the 5052-O aluminum tubes was accomplished based on the above experiments. The results show that the effects of process parameters on the bending process for large diameter thin-walled aluminum alloy tubes are similar to those for small diameter thin-walled tubes, but the forming quality of the large diameter thin-walled aluminum alloy tubes is much more sensitive to the process parameters and thus they are more difficult to form.
基金This work was supported by the National Natural Science Foundation of China (No. 60274055)
文摘In the procedure of steady-state hierarchical optimization with feedback for large-scale industrial processes, a sequence of set-point changes with different magnitudes is carried out on the optimization layer. To improve the dynamic performance of the transient response driven by the set-point changes, a filter-based iterative learning control strategy is proposed. In the proposed updating law, a local-symmetric-integral operator is adopted to eliminate the measurement noise in the output information, a set of desired trajectories is specified according to the set-point change sequence, and the current control input is iteratively achieved by utilizing the smoothed output error to modify the control input at the previous iteration, to which amplified coefficients related to the different magnitudes of the set-point changes are introduced. The convergence of the algorithm is established by incorporating frequency-domain techniques into time-domain analysis. Numerical simulation demonstrates the effectiveness of the proposed strategy.
基金Supported by the National Natural Science Foundation of China(61301189)
文摘To overcome the stumbling blocks of stretch processing, namely long-time coherent integration and a limited range window, and to reduce computational complexity, a novel method called multi-subpulse processing of large time-bandwidth product linear frequency modulated (LFM) signals (i.e. chirps) is proposed in this paper. The wideband chirp signal is split up into several compressed subpulses. Then the fast Fourier transform (FFT) is used to reconstruct the high resolution range profile (HRRP) in a relatively short computation time. For multiple frames, pulse Doppler (PD) processing is performed to obtain the two-dimensional range-Doppler (R-D) high resolution profile. Simulations and field experimental results show that the proposed method can provide a high-quality target profile over a large range window in a short computation time and has promising potential for long-time coherent integration.
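The pulse-compression building block underlying the abstract above can be sketched as follows. This is a simplified single-pulse illustration of FFT-based matched filtering of an LFM chirp, not a reproduction of the paper's multi-subpulse or range-Doppler scheme; all waveform parameters are illustrative.

```python
import numpy as np

# Illustrative baseband LFM (chirp) pulse parameters.
fs = 100e6          # sample rate, Hz
T = 10e-6           # pulse width, s
B = 40e6            # bandwidth, Hz
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear frequency ramp

# Point-target echo delayed by 200 samples (2 us at 100 MHz).
delay = 200
n = len(chirp) + delay
echo = np.zeros(n, dtype=complex)
echo[delay:delay + len(chirp)] = chirp

# FFT-based matched filter: correlate the echo with the chirp replica.
# Zero-pad the FFTs so the circular correlation has no wraparound.
nfft = 1 << (2 * n - 1).bit_length()
mf = np.fft.ifft(np.fft.fft(echo, nfft) * np.conj(np.fft.fft(chirp, nfft)))
profile = np.abs(mf[:n])

peak = int(profile.argmax())
print("compressed peak at sample", peak)   # coincides with the target delay
```

The paper's method applies this compression per subpulse and then uses a second FFT across subpulses to synthesize the full HRRP over a large range window.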
基金Supported by the National Natural Science Foundation of China (70273029)
文摘Let u∈R. For any ε>0, the processes X^ε={X^ε(t); 0≤t≤1} are governed by the following random evolution equations: dX^ε(t)=b(X^ε(t),v(t))dt-εdS_{t/ε}, where S={S_t; 0≤t≤1} is a compound Poisson process and the process v={v(t); 0≤t≤1} is independent of S and takes values in R^m. We derive the large deviation principle for {(X^ε,v(·)); ε>0} as ε↓0 by an approximation method and the contraction principle, which is meaningful for finding the path properties of risk processes of this type.
文摘A parallel arithmetic program for the molecular dynamics (MD) simulation study of a large sized system consisting of 50 000-100 000 atoms of liquid metals is reformed, based on the cascade arithmetic program used for the molecular dynamics simulation study of a small sized system consisting of 500-1 000 atoms. The program is used to simulate the rapid solidification processes of the liquid metal Al system. Some new results are obtained: for example, larger clusters composed of more than 36 smaller clusters (icosahedra or defect icosahedra) are found in the system of 50 000 atoms, whereas such larger clusters cannot be seen in the small sized system of 500-1 000 atoms. On the other hand, the results from this simulation study would be closer to the real situation of the system under consideration because the influence of boundary conditions is decreased remarkably. It can be expected that with the parallel algorithm combined with a higher performance supercomputer, the total number of atoms in the simulation system can be enlarged by tens, even hundreds of times in the near future.
文摘As Natural Language Processing(NLP)continues to advance,driven by the emergence of sophisticated large language models such as ChatGPT,there has been a notable growth in research activity.This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT’s applicability in the NLP domain.This review paper systematically investigates the role of ChatGPT in diverse NLP tasks,including information extraction,Named Entity Recognition(NER),event extraction,relation extraction,Part of Speech(PoS)tagging,text classification,sentiment analysis,emotion recognition and text annotation.The novelty of this work lies in its comprehensive analysis of the existing literature,addressing a critical gap in understanding ChatGPT’s adaptability,limitations,and optimal application.In this paper,we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses(PRISMA)framework to direct our search process and seek relevant studies.Our review reveals ChatGPT’s significant potential in enhancing various NLP tasks.Its adaptability in information extraction tasks,sentiment analysis,and text classification showcases its ability to comprehend diverse contexts and extract meaningful details.Additionally,ChatGPT’s flexibility in annotation tasks reduces manual efforts and accelerates the annotation process,making it a valuable asset in NLP development and research.Furthermore,GPT-4 and prompt engineering emerge as a complementary mechanism,empowering users to guide the model and enhance overall accuracy.Despite its promising potential,challenges persist.The performance of ChatGPT needs to be tested using more extensive datasets and diverse data structures.Subsequently,its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigations to address these issues.
基金National Natural Science Foundation of China(No. 10971157)Educational Commission of Hubei Province, China(No.2004X124)
文摘By the Cramér method, the large deviation principle for a form of compound Poisson process S(t)=∑_{i=1}^{N(t)} h(t-S_i)X_i is obtained, where N(t), t>0, is a nonhomogeneous Poisson process with intensity λ(t)>0, X_i, i≥1, are i.i.d. nonnegative random variables independent of N(t), and h(t), t>0, is a nonnegative monotone real function. Consequently, weak convergence for S(t) is also obtained.
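The process S(t) defined above can be simulated directly, which helps build intuition for its large deviation behavior. The sketch below is illustrative only: the intensity λ(t)=1+t, the exponential claim sizes X_i, and the weight h(u)=1-exp(-u) are all hypothetical choices satisfying the abstract's assumptions (λ(t)>0, X_i nonnegative i.i.d., h nonnegative and monotone). The nonhomogeneous Poisson arrivals are generated by thinning.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_S(t_end, lam=lambda s: 1.0 + s):
    """One sample of S(t_end) = sum_{i<=N(t_end)} h(t_end - S_i) X_i."""
    lam_max = lam(t_end)                 # lambda is nondecreasing here
    arrivals, s = [], 0.0
    while True:
        # Candidate arrivals from a homogeneous process at rate lam_max,
        # accepted with probability lam(s)/lam_max (Lewis-Shedler thinning).
        s += rng.exponential(1.0 / lam_max)
        if s > t_end:
            break
        if rng.uniform() < lam(s) / lam_max:
            arrivals.append(s)
    S_i = np.array(arrivals)
    X = rng.exponential(1.0, size=S_i.size)      # i.i.d. nonnegative claims
    h = 1.0 - np.exp(-(t_end - S_i))             # nonnegative, monotone h
    return float(np.sum(h * X))

samples = np.array([simulate_S(5.0) for _ in range(2000)])
print("empirical mean of S(5):", round(samples.mean(), 2))
```

For these choices E[S(5)] = ∫₀⁵ λ(s)h(5-s)E[X] ds = 12.5, so the empirical mean should land near that value; the large deviation principle of the abstract controls how rarely the sample average strays far from it.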
文摘In this paper we examine the large deviations principle (LDP) for sequences of classic Cramér-Lundberg risk processes under suitable time and scale modifications, and also for a wide class of claim distributions including (the non-super-exponential) exponential claims. We prove two large deviations principles: first, we obtain the LDP for risk processes on D[0,1] with the Skorohod topology. In this case, we provide an explicit form for the rate function, in which the safety loading condition appears naturally. The second theorem allows us to obtain the LDP for aggregate claims processes on D[0,∞) with a different time-scale modification. As an application of the first result we estimate the ruin probability, and for the second result we work out explicit calculations for the case of exponential claims.
基金Foundation for Key Youth Teachers from Hunan Province(521105237) Natural Science Foundation of Hunan University(521101805)
文摘The relationship between the arrangement of tungsten-halogen lamps and the uniformity of irradiance received by the wafer is discussed, and an axially symmetric lamp array is designed to guarantee that the irradiation on the edge is approximately the same as that at the center of the wafer. The temperature of the wafer vs. the power of the tungsten-halogen lamps is calculated numerically.
文摘When heavy machines and large scaled receiver systems of communication equipment are manufactured, large-sized steel castings, aluminum castings, etc. often need to be produced. Defects such as hot cracking caused by thermal stress often appear during the solidification process as these castings are produced, which results in failure of the castings. Therefore, predicting the effects of the technological parameters of casting production on the thermal stress during solidification becomes an important means. In this paper, mathematical models have been established, and numerical calculation of the temperature fields using the finite difference method (FDM) and then of the thermal stress fields using the finite element method (FEM) during the solidification process of castings has been carried out. The technological parameters of production have been optimized based on the results of the calculation, and the defects of hot cracking have been eliminated. Modeling and simulation of 3D thermal stress during the solidification processes of large-sized castings provided a scientific basis, which promoted the further development of advanced manufacturing techniques.
文摘A large Type Ⅳμ solar radio burst was observed at Yunnan Observatory on December 16, 1988. The burst was associated with a coronal mass ejection (CME) with the radio signatures of Type Ⅱ and Type Ⅳ bursts. On the basis of the Beijing Huairou magnetogram of AR5278, the power law distribution of electrons in the source region is calculated. It is suggested that the Type Ⅳμ burst associated with the CME was due to gyrosynchrotron radiation of high energy electrons trapped by the magnetic field. Finally, some quantitative and qualitative explanations are proposed.