Data breaches have massive consequences for companies, affecting them financially and undermining their reputation, which poses significant challenges to online security and the long-term viability of businesses. This study analyzes trends in data breaches in the United States, examining the frequency, causes, and magnitude of breaches across various industries. We document that data breaches are increasing, with hacking emerging as the leading cause. Our descriptive analyses explore factors influencing breaches, including security vulnerabilities, human error, and malicious attacks. The findings provide policymakers and businesses with actionable insights to bolster data security through proactive audits, patching, encryption, and response planning. By better understanding breach patterns and risk factors, organizations can take targeted steps to enhance protections and mitigate the potential damage of future incidents.
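The descriptive analysis summarized above can be illustrated with a minimal sketch that tallies breach records by year and by cause. The records and category names below are invented for illustration; they are not the study's data.

```python
from collections import Counter

# Hypothetical breach records (year, cause); illustrative only, not the study's data.
records = [
    (2019, "hacking"), (2019, "human error"), (2020, "hacking"),
    (2020, "hacking"), (2021, "hacking"), (2021, "malware"),
]

breaches_per_year = Counter(year for year, _ in records)     # frequency trend
breaches_per_cause = Counter(cause for _, cause in records)  # cause breakdown
top_cause, top_count = breaches_per_cause.most_common(1)[0]  # leading cause
```

In this toy sample, `top_cause` is `"hacking"` with 4 of the 6 records, mirroring the study's finding that hacking is the leading cause.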
Computer-aided Design (CAD), video games, and other computer-graphics technologies involve substantial processing of geometric elements. A novel geometric computing method is proposed that integrates descriptive geometry, mathematics, and computer algorithms. Firstly, geometric elements in general position are transformed to a special position in a new coordinate system. Then a 3D problem is projected onto the new coordinate planes. Finally, according to the 2D/3D correspondence principle of descriptive geometry, the solution is constructed through a computerized drawing process with ruler and compasses. To make this method a regular operation, a two-level pattern is established. The Basic Layer is a set of packaged algebraic functions comprising about ten Primary Geometric Functions (PGFs) and one projection transformation. In the Application Layer, a proper coordinate system is established and a sequence of PGFs is applied to obtain the final result. Examples illustrate the advantages of the method in dimension reduction, regularity, visual computing, and robustness.
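As a rough illustration of the transform-then-project idea (not the paper's PGF library; all names here are our own), the sketch below rotates a plane's normal onto the z-axis so the plane appears in edge view, reducing a 3D point-to-plane distance query to reading off a single coordinate:

```python
import numpy as np

def rotation_onto_z(normal):
    """Rotation matrix mapping `normal` onto the z-axis (Rodrigues' formula)."""
    n = normal / np.linalg.norm(normal)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                   # rotation axis (unnormalized), |v| = sin(angle)
    c = float(n @ z)                     # cosine of the rotation angle
    if np.allclose(v, 0.0):              # normal already (anti)parallel to z
        return np.diag([1.0, 1.0, 1.0 if c > 0 else -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / (v @ v))

def point_plane_distance(p, plane_point, plane_normal):
    """In the rotated frame the plane is horizontal (edge view), so the
    3D distance reduces to the z-offset of the transformed point."""
    R = rotation_onto_z(np.asarray(plane_normal, dtype=float))
    d = R @ (np.asarray(p, dtype=float) - np.asarray(plane_point, dtype=float))
    return abs(d[2])
```

For the plane x + y + z = 0 and the point (1, 1, 1) this returns √3, agreeing with the classical formula |ax + by + cz + d| / √(a² + b² + c²).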
It is well known that Chinese dish names concentrate the essence of traditional Chinese culture, reflecting the collective wisdom of the Chinese nation. In recent years, good and awkward translations of Chinese dish names have coexisted, which has a negative influence on China's image. Nowadays many people have begun to focus on and study the English translation of Chinese dish names. Descriptive translation offers a clear answer to this problem.
English grammar is regarded as one of the most important parts of both language learning and teaching, yet few people know that there is more than one kind of English grammar. This essay presents the features of, and a comparison between, two commonly used English grammars, namely descriptive grammar and prescriptive grammar, to help English teachers explore grammar teaching further.
Studies on descriptive norms in translation studies are of great significance because they broaden the domain of norms and begin to consider the impact of the outside community, leading the field in a new direction. Descriptive norms have been discussed and studied systematically mainly by three scholars: Toury, Hermans, and Chesterman. The purpose of this article is to review this concept and to point out some merits and demerits of their theories.
This paper describes statistical methods for comparing incidence or mortality rates in cancer registries and descriptive epidemiology, and the features of a microcomputer program (CANTEST) designed to perform these methods. The program was written in IBM BASIC. Using CANTEST, the user can perform several statistical tests and estimations, as follows: 1. comparison of adjusted rates calculated by direct or indirect standardization; 2. calculation of the slope of a regression line for testing linear trends in the adjusted rates; 3. estimation of the 95% or 99% confidence intervals of the directly adjusted rates, of the cumulative rates (0-64 and 0-74), and of the cumulative risk. Several examples are presented to test the performance of the program.
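As a hedged sketch of item 3 above (not the CANTEST code, which was written in IBM BASIC), the function below computes a directly standardized rate with an approximate confidence interval, assuming independent Poisson counts in each age stratum. The normal-approximation variance formula is a standard one and not necessarily the one CANTEST implements.

```python
import math

def direct_adjusted_rate(cases, person_years, std_pop, z=1.96):
    """Directly age-standardized rate per 100,000 with a normal-approximation CI.

    cases[i], person_years[i]: observed count and denominator in age stratum i.
    std_pop[i]: standard population weight for stratum i (any scale).
    Assumes the stratum counts are independent Poisson (variance = count).
    """
    total = sum(std_pop)
    w = [s / total for s in std_pop]
    rate = sum(wi * c / py for wi, c, py in zip(w, cases, person_years))
    var = sum(wi ** 2 * c / py ** 2 for wi, c, py in zip(w, cases, person_years))
    half = z * math.sqrt(var)
    return rate * 1e5, (rate - half) * 1e5, (rate + half) * 1e5
```

With two equal-weight strata of 10 and 20 cases per 1,000 person-years, the adjusted rate is 1,500 per 100,000; passing z = 2.576 gives the 99% interval instead of the 95% one.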
Background: The number of reported MDR-TB cases has been increasing in recent years. Objectives: To describe the epidemiological profile of MDR-TB cases in Bangladesh. Design: This was a descriptive cross-sectional study. Settings: The study was conducted among multidrug-resistant tuberculosis patients admitted to the National Institute of Diseases of the Chest and Hospital (NIDCH), Dhaka, Bangladesh. Samples: 148 confirmed cases of MDR-TB. Materials and Methods: Hospital-admitted MDR-TB cases were randomly chosen from the above-mentioned hospital. A semi-structured, pretested questionnaire was administered by the researcher. Clinical and treatment data (duration of TB drug intake, sputum reports, X-ray and blood tests, etc.) were extracted from hospital records. Results: The study found that the majority of participants (56.1%) were in the 16-30-year age group, and 64.2% were married. Most participants had primary education or less. A family member had TB for 24.3% of participants, and a neighbor for 14.5%. The most common comorbidities and complaints were diabetes, pulmonary infection, hearing loss, psychiatric symptoms, chest pain, and joint pain. 63.5% of respondents had a high degree of AFB sputum positivity, and more than 98% had positive chest X-ray findings. On average, ESR was low, and a few cases of extremely low ESR were found. 71.6% were on the twenty-four-month regimen. Conclusion: There are many possible factors for MDR-TB; there is an urgent need for further study to confirm the exact factors in Bangladesh and address them immediately.
The objective of this study was to investigate how rapid descriptive consumer analysis using simultaneous presentation of samples compared with monadic presentation, using both affective and descriptive sensory evaluation methods. Simultaneous presentation of coffee samples for sensory acceptance testing, using ranking analysis, was conducted with naïve assessors. In a separate session, assessors evaluated the same coffee samples using monadic presentation and the same scales. Similarly, descriptive consumer analysis with simultaneous and monadic sample presentation was conducted using descriptive attributes chosen by the panel. For ranking descriptive analysis (RDA), coffee samples were presented simultaneously (randomised) to assessors and subsequently ranked. The process was then repeated with the same assessors; however, samples were presented in monadic, randomised order. Data from the study were analysed by ANOVA-Partial Least Squares Regression (APLSR). The results indicate that simultaneous presentation was more effective than monadic presentation, as a larger number of attributes with significant (P < 0.05) intensity differences was observed using RDA. Simultaneous presentation of samples also allows ranking in sensory acceptance testing and proved a useful tool in establishing the hedonic attributes of products. We propose to call this method Ranking Acceptance Analysis (RAA).
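Ranked data from simultaneous presentation are commonly analysed with a Friedman-type statistic. As an illustrative sketch (the study itself used APLSR, not this test), the function below computes the Friedman chi-square from an assessor-by-sample rank matrix:

```python
def friedman_statistic(ranks):
    """Friedman chi-square for a table of ranks.

    ranks: one row per assessor, each row holding the ranks 1..k that the
    assessor assigned to the k samples. Larger values mean the samples'
    rank sums differ more than chance would suggest (df = k - 1).
    """
    n = len(ranks)                                  # number of assessors
    k = len(ranks[0])                               # number of samples
    col_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(s * s for s in col_sums) - 3.0 * n * (k + 1)
```

Four assessors ranking three coffees identically give the maximum value 8.0 for this design; complete disagreement (each sample ranked 1st, 2nd, and 3rd equally often) gives 0.0.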
Inferior vena cava thrombosis is an under-recognized entity associated with significant morbidity and mortality; although the diagnosis is challenging, a high index of suspicion is required. We present the case of a 63-year-old man who had repeatedly visited the emergency room with abdominal and back pain and painful lower-limb edema. After several tests, including magnetic resonance imaging (MRI), he was diagnosed with agenesis of the left renal vein and inferior vena cava thrombosis, resulting from a hypercoagulable state secondary to antiphospholipid syndrome. He received anticoagulation treatment with low-molecular-weight heparin, with a good subsequent evolution. This article then sets out a descriptive retrospective study of fifty cases of inferior vena cava thrombosis diagnosed in a third-level hospital in northern Spain over a ten-year period (2010-2018). The aim is to identify the epidemiology, predisposing factors, and symptoms that characterize this entity, in order to achieve an early diagnosis that allows immediate treatment, minimizing the acute and chronic complications of this disease.
The purpose of this study is to examine the nature and content of the rapidly evolving undergraduate Principles of Information/Cybersecurity course, which has attracted ever-growing attention in the computing discipline over the past decade, and more specifically to provide an impetus for the design of a standardized principles of Information/Cybersecurity course. To achieve this, a survey of colleges and universities that offer the course was conducted. Several schools of engineering and business, in universities and colleges across several countries, were surveyed to generate the necessary data. Effort was made to direct the questionnaire only to Computer Information Systems (CIS), Computer Science (CS), Management Information Systems (MIS), Information Systems (IS), and other computer-related departments. The study instrument consisted of two main parts: one addressed institutional demographic information, while the other focused on the relevant elements of the course. There are sixty-two (62) questionnaire items covering areas such as demographics, perception of the course, course content and coverage, teaching preferences, method of delivery and course technology deployed, assigned textbooks and associated resources, learner support, course assessments, and licensure-based certifications. Several themes emerged from the data analysis: (a) the principles course is an integral part of most cybersecurity programs; (b) the majority of the courses examined stress both strong technical and hands-on skills; (c) most encourage vendor-neutral certifications as a course exit characteristic; and (d) an end-of-course class project remains a standard requirement for successful course completion. Overall, the study makes it clear that cybersecurity is a multilateral discipline that refuses to be confined by context and content. It is envisaged that the results of this study will prove instructive for all practical purposes. We expect it to be one of the most definitive descriptive models of such a cardinal course and to help guide, and indeed shape, the decisions of universities and academic programs focusing on information/cybersecurity as they update and upgrade their curricula, most especially the foundational principles course, in light of the new findings articulated herein.
Graphic science is the subject that teaches geometry and graphics, and it is taught in early undergraduate curricula at many Japanese universities as a liberal arts subject or as a basic subject for design and drawing. In traditional graphic science courses, descriptive geometry based on hand drawing was taught. In recent years, however, the use of 3D-CAD has continued to spread rapidly in engineering design and drawing, and CG is increasingly used in many fields, such as the visualization of computer simulation results in science and image display in the movie and game entertainment industries. So there is a need for graphic presentation education that includes competence in the use of 3D-CAD/CG, or "graphics literacy (or visual literacy) education," for a wide range of students. To realize graphics literacy education, a new graphic science curriculum was started in 2007 at the College of Arts and Sciences of the University of Tokyo. The main part of the curriculum consists of Graphic Science I and Graphic Science II. In Graphic Science I, as before, traditional descriptive geometry is taught with hand drawing as the base. In Graphic Science II, commercial graphic processing software can be experienced; by introducing geometric problems as examples and assignments, this course is designed to complement descriptive geometry education (Graphic Science I). With the spread of 3D-CAD/CG, some say there is no longer any need for descriptive geometry, but it has been decided to teach it for the following reasons.
1) Traditional descriptive geometry is an excellent method for teaching and learning the geometry of projection and of three-dimensional objects, and concepts and procedures in descriptive geometry can be applied to solving geometric design problems with 3D-CAD/CG. 2) Even in this age of 3D-CAD/CG, hand drawing is still used (especially for free-hand sketches). 3) Hand drawing is an effective method of developing students' spatial ability. However, with the spread of 3D-CAD/CG, the descriptive geometry techniques for analyzing the shapes and forms of three-dimensional objects are losing their earlier practical importance. So emphasis is placed not on teaching practical techniques but on teaching the theory behind the techniques, i.e., the geometry of projection and of three-dimensional objects. This paper reports specific examples of classes in order to describe the importance of descriptive geometry education and the need to switch from education focused on techniques to education on the theory behind them.
This paper deals with Monte Carlo simulation in a Bayesian framework. It shows the importance of Monte Carlo experiments through refined descriptive sampling within the autoregressive model Xt = ρXt-1 + Yt, where 0 < ρ < 1 and the errors Yt are independent random variables following an exponential distribution of parameter θ. To achieve this, a Bayesian Autoregressive Adaptive Refined Descriptive Sampling (B2ARDS) algorithm is proposed to estimate the parameters ρ and θ of such a model by a Bayesian method. We use the same prior as that already used by some authors, and compute its properties when the normality assumption on the errors is relaxed to an exponential distribution. The results show that the B2ARDS algorithm provides accurate and efficient point estimates.
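To make the model concrete, one can simulate Xt = ρXt-1 + Yt with exponential errors and recover ρ and θ with crude frequentist moment estimates. This is plain Monte Carlo for illustration only, not the B2ARDS algorithm or refined descriptive sampling.

```python
import random

def simulate_ar1_exp(rho, theta, n, seed=0):
    """Simulate X_t = rho * X_{t-1} + Y_t with Y_t ~ Exp(theta) (mean 1/theta)."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = rho * x + rng.expovariate(theta)
        xs.append(x)
    return xs

def moment_estimates(xs):
    """Centered least-squares estimate of rho, then theta from the mean residual.

    Centering matters: the exponential errors have nonzero mean, so an
    uncentered regression of X_t on X_{t-1} would be badly biased.
    """
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs[:-1])
    rho_hat = num / den
    resid = [b - rho_hat * a for a, b in zip(xs, xs[1:])]
    theta_hat = 1.0 / (sum(resid) / len(resid))   # E[Y] = 1/theta
    return rho_hat, theta_hat
```

With ρ = 0.5, θ = 2, and 20,000 draws, both estimates land close to the true values; a Bayesian method such as B2ARDS would instead summarize a posterior over (ρ, θ).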
This study aims to establish a rationale for the Rice University rule for determining the number of bins in a histogram, grounded in the Scott and Freedman-Diaconis rules. Additionally, the accuracy of the empirical histogram in reproducing the shape of the distribution is assessed with respect to three factors: the rule for determining the number of bins (square root, Sturges, Doane, Scott, Freedman-Diaconis, and Rice University), sample size, and distribution type. Three measures are utilized: the average distance between empirical and theoretical histograms, the level of recognition by an expert judge, and an accuracy index composed of the two aforementioned measures. Mean comparisons are conducted with aligned-rank-transform analysis of variance for three fixed-effects factors: sample size (20, 35, 50, 100, 200, 500, and 1000), distribution type (10 types), and empirical rule for determining the number of bins (6 rules). By the accuracy index, Rice's rule improves with increasing sample size and is independent of distribution type. It outperforms the Freedman-Diaconis rule but falls short of Scott's rule, except with the arcsine distribution. Its profile of means resembles the square root rule with respect to distributions and Doane's rule with respect to sample sizes. These profiles differ from those of the Scott and Freedman-Diaconis rules, which resemble each other. Among the rules compared, Scott's rule stands out in terms of accuracy, except for the arcsine distribution, and the square root rule is the least accurate.
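For reference, most of the bin-count rules compared above can be written down directly. In this sketch the quartile computation is deliberately crude (no interpolation), and Doane's rule is omitted because it needs a skewness estimate.

```python
import math

def n_bins(data, rule):
    """Number of histogram bins under several common rules."""
    n = len(data)
    if rule == "sqrt":                        # square root rule
        return math.ceil(math.sqrt(n))
    if rule == "sturges":                     # Sturges: 1 + log2(n)
        return math.ceil(math.log2(n)) + 1
    if rule == "rice":                        # Rice University rule: 2 * n^(1/3)
        return math.ceil(2.0 * n ** (1.0 / 3.0))
    xs = sorted(data)
    span = xs[-1] - xs[0]
    if rule == "scott":                       # bin width h = 3.49 * s * n^(-1/3)
        mean = sum(xs) / n
        s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
        h = 3.49 * s * n ** (-1.0 / 3.0)
    elif rule == "fd":                        # Freedman-Diaconis: h = 2 * IQR * n^(-1/3)
        q1, q3 = xs[n // 4], xs[(3 * n) // 4]  # crude quartiles, no interpolation
        h = 2.0 * (q3 - q1) * n ** (-1.0 / 3.0)
    return math.ceil(span / h)
```

For n = 1000 the count-only rules give 32 (square root), 11 (Sturges), and 20 (Rice) bins; the data-dependent rules additionally use the spread of the sample.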
Flash boiling atomization (FBA) is a promising approach for enhancing spray atomization, which can generate a fine and more evenly distributed spray by increasing the fuel injection temperature or reducing the ambient pressure. However, when the outlet speed of the nozzle exceeds 400 m/s, investigating high-speed flash boiling atomization (HFBA) becomes quite challenging. This difficulty arises from the involvement of many complex physical processes and the requirement for a very fine mesh in numerical simulations. In this study, an HFBA model for gasoline direct injection (GDI) is established. This model incorporates primary and secondary atomization, as well as vaporization and boiling models, to describe the development process of the flash boiling spray. Compared to low-speed FBA, these physical processes significantly impact HFBA. In this model, the Eulerian description is utilized for modeling the gas, and the Lagrangian description is applied to model the droplets, which effectively captures the movement of the droplets and avoids excessive mesh in the Eulerian coordinates. Under various conditions, numerical solutions of the Sauter mean diameter (SMD) for GDI show good agreement with experimental data, validating the proposed model's performance. Simulations based on this HFBA model investigate the influences of fuel injection temperature and ambient pressure on the atomization process. Numerical analyses of the velocity field, temperature field, vapor mass fraction distribution, particle size distribution, and spray penetration length under different superheat degrees reveal that high injection temperature or low ambient pressure significantly affects the formation of a small and dispersed droplet distribution. This effect is conducive to the refinement of spray particles and enhances atomization.
Video description generates natural language sentences that describe the subject, verb, and objects of a target video. Video description has been used to help visually impaired people understand content, and it also plays an essential role in developing human-robot interaction. Dense video description is more difficult than simple video captioning because of object interactions and event overlapping. Deep learning is changing the shape of computer vision (CV) technologies and natural language processing (NLP). There are hundreds of deep learning models, datasets, and evaluations that can address the gaps in current research. This article fills this gap by evaluating some state-of-the-art approaches, especially focusing on deep learning and machine learning for video captioning in dense environments. Some classic techniques from the existing machine learning literature are reviewed, and deep learning models are presented along with details of benchmark datasets and their respective domains. The paper reviews various evaluation metrics, including Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Word Mover's Distance (WMD), and Recall-Oriented Understudy for Gisting Evaluation (ROUGE), with their pros and cons. Finally, the article lists some future directions and proposed work for context enhancement using key-scene extraction with object detection in particular frames, especially how to improve the context of video descriptions by analyzing key-frame detection through morphological image analysis. Additionally, the paper discusses a novel approach involving sentence reconstruction and context improvement through key-frame object detection, which incorporates the fusion of large language models for refining results. The ultimate results arise from enhancing the generated text of the proposed model by improving the predicted text and isolating objects using various keyframes, which identify the dense events occurring in the video sequence.
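As a hedged illustration of the simplest of these metrics, the function below computes unigram BLEU (BLEU-1) with clipped counts and the brevity penalty for a single reference. Production systems use multi-reference, multi-n-gram implementations such as the one in NLTK; this is only the core idea.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU for one candidate/reference pair of whitespace-split sentences.

    Precision uses clipped counts (a candidate word is credited at most as
    many times as it appears in the reference), and the brevity penalty
    penalizes candidates shorter than the reference.
    """
    cand, ref = candidate.split(), reference.split()
    clipped = sum((Counter(cand) & Counter(ref)).values())
    precision = clipped / len(cand)
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1.0 - len(ref) / len(cand))
    return bp * precision
```

Clipping is what keeps degenerate outputs from scoring well: `bleu1("the the the", "the cat sat")` earns only 1/3, because the repeated "the" is credited once.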
Image description sits at the intersection of computer vision and natural language processing, and it has important prospects, including helping computers understand images and providing information to the visually impaired. This study presents an innovative approach employing deep reinforcement learning to enhance the accuracy of natural language descriptions of images. Our method focuses on refining the reward function in deep reinforcement learning, facilitating the generation of precise descriptions by aligning visual and textual features more closely. Our approach comprises three key architectures. Firstly, it utilizes Residual Network 101 (ResNet-101) and Faster Region-based Convolutional Neural Network (Faster R-CNN) to extract average (global) and local image features, respectively, followed by a dual attention mechanism for intricate feature fusion. Secondly, the Transformer model is engaged to derive contextual semantic features from textual data. Finally, descriptive text is generated through a two-layer long short-term memory network (LSTM), directed by the value and reward functions. Compared with an image description method that relies on deep learning alone, the Bilingual Evaluation Understudy (BLEU-1) score is 0.762, which is 1.6% higher, and the BLEU-4 score is 0.299. Consensus-based Image Description Evaluation (CIDEr) scored 0.998, and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scored 0.552, the latter an improvement of 0.36%. These results not only attest to the viability of our approach but also highlight its superiority in the realm of image description. Future research can explore the integration of our method with other artificial intelligence (AI) domains, such as emotional AI, to create more nuanced and context-aware systems.
Combining the strengths of Lagrangian and Eulerian descriptions, coupled Lagrangian–Eulerian methods play an increasingly important role in various subjects. This work reviews their development and application in ocean engineering. Initially, we briefly outline the advantages and disadvantages of the Lagrangian and Eulerian descriptions and the main characteristics of the coupled Lagrangian–Eulerian approach. Then, following the developmental trajectory of these methods, the fundamental formulations and frameworks of various approaches, including the arbitrary Lagrangian–Eulerian finite element method, the particle-in-cell method, the material point method, and the recently developed Lagrangian–Eulerian stabilized collocation method, are reviewed in detail. In addition, the article reviews the research progress of these methods in applications to ocean hydrodynamics, focusing on free surface flows, numerical wave generation, wave overturning and breaking, interactions between waves and coastal structures, fluid–rigid body interactions, fluid–elastic body interactions, multiphase flow problems, and visualization of ocean flows. Furthermore, the latest research advancements in the numerical stability, accuracy, efficiency, and consistency of coupled Lagrangian–Eulerian particle methods are reviewed; these advancements enable efficient and highly accurate simulation of complicated multiphysics problems in ocean and coastal engineering. Building on these works, the current challenges and future directions of hybrid Lagrangian–Eulerian particle methods are summarized.
The Dirac equation γ_μ(∂_μ − eA_μ)Ψ = mc²Ψ describes the bound states of the electron under the action of external potentials A_μ. We assumed that the fundamental form of the Dirac equation, γ_μ(∂_μ − S_μ)Ψ = 0, should describe the stable particles (the electron, the proton, and the dark-matter particle (dmp)) bound to themselves under the action of their own potentials S_μ. The new equation reveals that self-energy is a consequence of self-action; it also reveals that the spin angular momentum is a consequence of the dynamic structure of the stable particles. The quantitative results are the determination of their relative masses as well as the determination of the electromagnetic coupling constant.
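Rendered in display form (reading ∂_μ for the source's garbled δ_μ, and keeping the source's signs and factors rather than the textbook convention), the two equations being contrasted are:

```latex
% External-potential form: electron bound by external potentials A_mu
\gamma_{\mu}\left(\partial_{\mu} - e A_{\mu}\right)\Psi = m c^{2}\,\Psi

% Proposed fundamental form: stable particle bound by its own potentials S_mu
\gamma_{\mu}\left(\partial_{\mu} - S_{\mu}\right)\Psi = 0
```

The second form replaces the external potential term eA_μ with the self-potential S_μ and removes the explicit mass term, which is the abstract's central claim.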
文摘Data breaches have massive consequences for companies, affecting them financially and undermining their reputation, which poses significant challenges to online security and the long-term viability of businesses. This study analyzes trends in data breaches in the United States, examining the frequency, causes, and magnitude of breaches across various industries. We document that data breaches are increasing, with hacking emerging as the leading cause. Our descriptive analyses explore factors influencing breaches, including security vulnerabilities, human error, and malicious attacks. The findings provide policymakers and businesses with actionable insights to bolster data security through proactive audits, patching, encryption, and response planning. By better understanding breach patterns and risk factors, organizations can take targeted steps to enhance protections and mitigate the potential damage of future incidents.
基金National Natural Science Foundation of China(No.61073986)
文摘Computer-aided Design (CAD), video games and other computer graphic related technology evolves substantial processing to geometric elements. A novel geometric computing method is proposed with the integration of descriptive geometry, math and computer algorithm. Firstly, geometric elements in general position are transformed to a special position in new coordinate system. Then a 3D problem is projected to new coordinate planes. Finally, according to 2D/3D correspondence principle in descriptive geometry, the solution is constructed computerized drawing process with ruler and compasses. In order to make this method a regular operation, a two-level pattern is established. Basic Layer is a set algebraic packaged function including about ten Primary Geometric Functions (PGF) and one projection transformation. In Application Layer, a proper coordinate is established and a sequence of PGFs is sought for to get the final results. Examples illustrate the advantages of our method on dimension reduction, regulatory and visual computing and robustness.
文摘It is well-known that Chinese dish names concentrate on the essence of traditional Chinese culture,reflecting the collective wisdom of Chinese nation.In recent years,good and awkward translation of Chinese dish names coexist with each other,which has a negative influence on Chinese image.Nowadays many people have begun to center on and study the English translation of Chinese dishes names.Descriptive translation is a clear answer to it.
文摘English grammar is thought as one of the most important parts in both language learning and teaching. While few people know there is more than one kind of English grammar. This essay provides the features and comparison between two commonly used English grammar, namely descriptive grammar and prescriptive grammar, and assist English teachers to explore further in grammar teaching.
文摘The studies on descriptive norms in translation studies are of great significance because they spread the domain of norms and begin to consider the impact of the outside community so that they lead the study to a new field.Descriptive norms are discussed and studied systemically mainly by three scholars:Toury,Hermans,and Chesterman.The purpose of this article is to review this concept and try to point out some merits and demerits of their theory.
文摘This paper describes the statistical methods of the comparison of the incidence or mortality rates in cancer registry and descriptive epidemiology, and the features of microcomputer program (CANTEST) which was designed to perform the methods. The program was written in IBM BASIC language. Using the program CANTEST we presented here the user can do several statistical tests or estimations as follow: 1. the comparison of the adjusted rates which were calculated by directly or indirectly standardized methods, 2. the calculation of the slope of regression line for testing the linear trends of the adjusted rates, 3. the estimation of the 95% or 99%conndence intervals of the directly adjusted rates, of the cumulative rates (0-64 and 0-74), and of the cumulative risk. Several examples are presented for testing the performances of the program.
文摘Background: The number of reported MDR-TB cases has been increasing in recent years. Objectives: To describe the epidemiological profile of MDR-TB cases in Bangladesh. Design: This was a descriptive cross-sectional study. Settings: The study was conducted among the multi drug resistant tuberculosis patient admitted in the National Institute of Diseases of the Chest and Hospital (NIDCH) Dhaka, Bangladesh. Samples: 148 confirmed cases of MDR-TB. Materials and Methods: Hospital admitted MRD-TB cases were randomly chosen from the above mentioned hospital. Semi-structured and pretested questionnaire were introduced by researcher. Clinical and treatment data i.e. duration of TB drug intake, report of sputum, X-ray and blood test etc. were extracted from the hospital record. Results: Study found, majority of the participants (56.1%) were in the age group of 16 - 30 years. 64.2% of the study subjects were married. Majority of the participants education were whether under primary or primary level. 24.3% participant’s family member and 14.5% of neighbor were having TB. Most common comorbidity were diabetes, pulmonary infection, hearing loss, psychiatric symptoms, chest pain, joint pain etc. 63.5% respondent had high degree of AFB for sputum positivity and more than 98% had positive finding in X-ray chest. On an average ESR was low and also few cases of extremely low ESR were found. 71.6% were under twenty four months regimen. Conclusion: We can conclude that, many possible factors for MDR-TB. There is an urgent need for further study to confirm the exact factors in Bangladesh and address those immediately.
Abstract: The objective of this study was to investigate how rapid descriptive consumer analysis using simultaneous presentation of samples compared with monadic presentation of samples, using both affective and descriptive sensory evaluation methods. Simultaneous presentation of coffee samples for sensory acceptance testing, using ranking analysis, was conducted with naïve assessors. In a separate session, assessors evaluated the same coffee samples using monadic presentation and the same scales. Similarly, descriptive consumer analysis, using simultaneous and monadic sample presentation, was conducted with descriptive attributes chosen by the panel. For RDA (Ranking Descriptive Analysis), coffee samples were presented simultaneously (randomised) to assessors and subsequently ranked. The process was then repeated with the same assessors; however, samples were presented in monadic, randomised presentation order. Data accumulated from the study were analysed by Analysis of Variance and ANOVA-Partial Least Squares Regression (APLSR). The results indicate that simultaneous presentation of samples was more effective than monadic presentation, as a larger number of attributes with significant (P < 0.05) intensity differences were observed using RDA. Simultaneous presentation of samples thus also allows ranking in sensory acceptance testing and proved a useful tool in establishing the hedonic attributes of products. We propose to call this method Ranking Acceptance Analysis (RAA).
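As an illustration of how significance can be checked on rank data of this kind, a Friedman-type statistic over an assessor-by-sample rank matrix can be sketched as below; the values and the test choice are illustrative assumptions, since the study itself analysed its data with ANOVA and APLSR:

```python
# Friedman chi-square statistic for ranked sensory data (illustrative only).

def friedman_statistic(rank_matrix):
    """Friedman chi-square for an n-assessor x k-sample matrix of ranks:
    12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1), with R_j the column rank sums."""
    n = len(rank_matrix)
    k = len(rank_matrix[0])
    col_sums = [sum(row[j] for row in rank_matrix) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_sums) - 3.0 * n * (k + 1)

# Five hypothetical assessors ranking three coffee samples (1 = most intense)
ranks = [
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [1, 2, 3],
]
chi2 = friedman_statistic(ranks)  # compare against chi-square with k-1 df
```

A large statistic relative to the chi-square distribution with k-1 degrees of freedom indicates that the samples differ in the ranked attribute.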
Abstract: Inferior vena cava thrombosis is an under-recognized entity associated with significant morbidity and mortality. Because the diagnosis is challenging, a high index of suspicion is required. We present the case of a 63-year-old man who had repeatedly visited the emergency room suffering from abdominal and back pain and painful lower limb edema. After several tests, including magnetic resonance imaging (MRI), he was diagnosed with agenesis of the left renal vein and inferior vena cava thrombosis, due to a hypercoagulable state secondary to antiphospholipid syndrome. He received anticoagulation treatment with low-molecular-weight heparin, with good subsequent evolution. This article then sets out a descriptive retrospective study of fifty cases of inferior vena cava thrombosis diagnosed in a third-level hospital in the north of Spain over a ten-year period (2010-2018). The aim of this article is to identify the epidemiology, predisposing factors, and symptoms that characterize this entity, in order to achieve an early diagnosis that allows immediate treatment to be initiated, minimizing the acute and chronic complications of this disease.
Abstract: The purpose of this study is to examine the nature and content of the rapidly evolving undergraduate Principles of Information/Cybersecurity course, which has been attracting ever-growing attention in the computing discipline for the past decade. More specifically, it is to provide an impetus for the design of a standardized Principles of Information/Cybersecurity course. To achieve this, a survey of colleges and universities that offer the course was conducted. Several schools of engineering and business, in universities and colleges across several countries, were surveyed to generate the necessary data. Effort was made to direct the questionnaire only to Computer Information Systems (CIS), Computer Science (CS), Management Information Systems (MIS), Information Systems (IS), and other computer-related departments. The study instrument consisted of two main parts: one part addressed institutional demographic information, while the other focused on the relevant elements of the course. There are sixty-two (62) questionnaire items covering areas such as demographics, perception of the course, course content and coverage, teaching preferences, method of delivery and course technology deployed, assigned textbooks and associated resources, learner support, course assessments, and licensure-based certifications. Several themes emerged from the data analysis: (a) the principles course is an integral part of most cybersecurity programs; (b) the majority of the courses examined stress both strong technical and hands-on skills; (c) most encourage vendor-neutral certifications as a course exit characteristic; and (d) an end-of-course class project remains a standard requirement for successful course completion. Overall, the study makes it clear that cybersecurity is a multilateral discipline that refuses to be confined by context and content. It is envisaged that the results of this study will prove instructive for all practical purposes. We expect it to be one of the most definitive descriptive models of such a cardinal course, and to help guide and, indeed, shape the decisions of universities and academic programs focusing on information/cybersecurity in updating and upgrading their curricula, most especially the foundational principles course, in light of the new findings articulated herein.
Abstract: Graphic science is the subject which teaches geometry and graphics, and is taught in early undergraduate curricula at many Japanese universities as a liberal arts subject or as a basic subject for design and drawing. In traditional graphic science courses, descriptive geometry based on hand drawing was taught. However, in recent years there has been a rapid spread in the use of 3D-CAD in the field of engineering design and drawing, and there is also increasing use of CG in many fields, such as for visualization of computer simulation results in science and for image display in the movie and game entertainment fields. So there is a need for graphic presentation education, including competence in the use of 3D-CAD/CG, or "graphics literacy (or visual literacy) education", for a wide range of students. In order to realize graphics literacy education, a new graphic science curriculum was started in 2007 at the College of Arts and Sciences of the University of Tokyo. The main part of the curriculum consists of Graphic Science I and Graphic Science II. In Graphic Science I, as before, traditional descriptive geometry is taught with hand drawing as the base. In Graphic Science II, commercial graphic processing software can be experienced; by introducing geometric problems as examples and assignments, the course is designed to mutually complement descriptive geometry education (Graphic Science I). With the spread of 3D-CAD/CG, some people say that there is no longer any need for descriptive geometry, but for the following reasons it has been decided to continue teaching it. 1) Traditional descriptive geometry is an excellent method for teaching and learning the geometry of projection and of three-dimensional objects, and concepts and procedures in descriptive geometry can be applied in solving geometric design problems by the use of 3D-CAD/CG. 2) Even in this age of 3D-CAD/CG, hand drawing is still being used (especially for free-hand sketches). 3) Hand drawing is an effective method of developing the spatial ability of students. However, with the spread of 3D-CAD/CG, the descriptive geometry techniques for analyzing the shapes and forms of three-dimensional objects are losing their earlier practical importance. So emphasis is being placed not on the education of practical techniques but on teaching the theory behind the techniques, i.e., the geometry of projection and of three-dimensional objects. This paper reports specific examples of classes in order to describe the importance of descriptive geometry education and the need to switch from education focused on techniques to education on the theory behind the techniques.
Abstract: This paper deals with Monte Carlo simulation in a Bayesian framework. It shows the importance of the use of Monte Carlo experiments through refined descriptive sampling within the autoregressive model X_t = ρX_{t-1} + Y_t, where 0 < ρ < 1 and the errors Y_t are independent random variables following an exponential distribution with parameter θ. To achieve this, a Bayesian Autoregressive Adaptive Refined Descriptive Sampling (B2ARDS) algorithm is proposed to estimate the parameters ρ and θ of such a model by a Bayesian method. We have used the same prior as the one already used by some authors, and computed their properties when the normality error assumption is relaxed to an exponential distribution. The results show that the B2ARDS algorithm provides accurate and efficient point estimates.
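The model X_t = ρX_{t-1} + Y_t with exponential errors can be simulated with plain Monte Carlo as a point of comparison; the paper's contribution, refined descriptive sampling within a Bayesian scheme, is not reproduced here, and all values below are illustrative:

```python
import random

# Plain Monte Carlo simulation of an AR(1) process with Exp(theta) errors.

def simulate_ar1_exponential(rho, theta, n, x0=0.0, rng=None):
    """Simulate X_t = rho*X_{t-1} + Y_t, with Y_t ~ Exp(theta)."""
    rng = rng or random.Random()
    xs, x = [], x0
    for _ in range(n):
        x = rho * x + rng.expovariate(theta)  # Exp(theta) has mean 1/theta
        xs.append(x)
    return xs

# Crude moment check: the stationary mean is (1/theta)/(1 - rho)
rho, theta = 0.5, 2.0
xs = simulate_ar1_exponential(rho, theta, 50_000, rng=random.Random(0))
empirical_mean = sum(xs[1000:]) / len(xs[1000:])  # drop burn-in
```

With ρ = 0.5 and θ = 2, the stationary mean is 1.0, which the long-run sample mean should approach; refined descriptive sampling replaces the pseudo-random draws with a deterministic, regularly refined sample of the error distribution to reduce this estimator's variance.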
Abstract: This study aims to establish a rationale for the Rice University rule in determining the number of bins in a histogram. It is grounded in the Scott and Freedman-Diaconis rules. Additionally, the accuracy of the empirical histogram in reproducing the shape of the distribution is assessed with respect to three factors: the rule for determining the number of bins (square root, Sturges, Doane, Scott, Freedman-Diaconis, and Rice University), sample size, and distribution type. Three measures are utilized: the average distance between empirical and theoretical histograms, the level of recognition by an expert judge, and the accuracy index, which is composed of the two aforementioned measures. Mean comparisons are conducted with aligned rank transformation analysis of variance for three fixed-effects factors: sample size (20, 35, 50, 100, 200, 500, and 1000), distribution type (10 types), and empirical rule to determine the number of bins (6 rules). From the accuracy index, Rice's rule improves with increasing sample size and is independent of distribution type. It outperforms the Freedman-Diaconis rule but falls short of Scott's rule, except with the arcsine distribution. Its profile of means resembles the square root rule concerning distributions and Doane's rule concerning sample sizes. These profiles differ from those of the Scott and Freedman-Diaconis rules, which resemble each other. Among the rules compared, Scott's rule stands out in terms of accuracy, except for the arcsine distribution, and the square root rule is the least accurate.
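The count-based rules compared above follow directly from their standard formulas; a sketch with illustrative sample sizes (Scott's rule is width-based, so its bin count also depends on the data's spread and range):

```python
import math

# Standard histogram bin-count rules (sample sizes and data are illustrative).

def sqrt_rule(n):
    return math.ceil(math.sqrt(n))             # square root rule

def sturges_rule(n):
    return math.ceil(math.log2(n)) + 1         # Sturges' rule

def rice_rule(n):
    return math.ceil(2 * n ** (1 / 3))         # Rice University rule

def scott_bins(data):
    """Scott's rule: bin width h = 3.49 * s * n^(-1/3), converted to a count."""
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    h = 3.49 * sd * n ** (-1 / 3)
    return math.ceil((max(data) - min(data)) / h)

n = 1000
counts = {"sqrt": sqrt_rule(n), "Sturges": sturges_rule(n), "Rice": rice_rule(n)}
```

At n = 1000 the square root rule already calls for 32 bins while Sturges suggests only 11, with Rice in between at 20, which illustrates why the rules diverge as samples grow.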
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60274011, 60574067, 60704008, 60736027, 60721003, 90924001), the New Century Excellent Talents in University program (Grant No. NCET-04-0094), the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20070003110), and the Programme of Introducing Talents of Discipline to Universities (the National 111 International Collaboration Projects) (Grant No. B06002).
Funding: Supported by the National Natural Science Foundation of China (Project Nos. 12272270 and 11972261).
Abstract: Flash boiling atomization (FBA) is a promising approach for enhancing spray atomization, which can generate a fine and more evenly distributed spray by increasing the fuel injection temperature or reducing the ambient pressure. However, when the outlet speed of the nozzle exceeds 400 m/s, investigating high-speed flash boiling atomization (HFBA) becomes quite challenging. This difficulty arises from the involvement of many complex physical processes and the requirement for a very fine mesh in numerical simulations. In this study, an HFBA model for gasoline direct injection (GDI) is established. This model incorporates primary and secondary atomization, as well as vaporization and boiling models, to describe the development process of the flash boiling spray. Compared to low-speed FBA, these physical processes significantly impact HFBA. In this model, the Eulerian description is utilized for modeling the gas, and the Lagrangian description is applied to model the droplets, which effectively captures the movement of the droplets and avoids excessive mesh in the Eulerian coordinates. Under various conditions, numerical solutions of the Sauter mean diameter (SMD) for GDI show good agreement with experimental data, validating the proposed model's performance. Simulations based on this HFBA model investigate the influences of fuel injection temperature and ambient pressure on the atomization process. Numerical analyses of the velocity field, temperature field, vapor mass fraction distribution, particle size distribution, and spray penetration length under different superheat degrees reveal that high injection temperature or low ambient pressure significantly affects the formation of a small and dispersed droplet distribution. This effect is conducive to the refinement of spray particles and enhances atomization.
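The Sauter mean diameter used to validate the model is the ratio of total droplet volume to total droplet surface area, i.e. D32 = Σd³/Σd² over the droplet population; a minimal sketch with hypothetical diameters, not the paper's data:

```python
# Sauter mean diameter (D32) from a sample of droplet diameters.

def sauter_mean_diameter(diameters):
    """D32 = sum(d^3) / sum(d^2): the diameter of a sphere with the same
    volume-to-surface-area ratio as the whole droplet population."""
    return sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)

drops_um = [5.0, 10.0, 10.0, 20.0]   # hypothetical droplet diameters, microns
smd = sauter_mean_diameter(drops_um)
```

Because of the cubic weighting, the SMD is dominated by the largest droplets, which is why it is a sensitive indicator of atomization quality.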
Abstract: Video description generates natural language sentences that describe the subject, verb, and objects of the targeted video. Video description has been used to help visually impaired people understand video content, and it also plays an essential role in developing human-robot interaction. Dense video description is more difficult than simple video captioning because of object interactions and event overlapping. Deep learning is changing the shape of computer vision (CV) technologies and natural language processing (NLP). There are hundreds of deep learning models, datasets, and evaluations that can address the gaps in current research. This article fills this gap by evaluating some state-of-the-art approaches, especially focusing on deep learning and machine learning for video captioning in dense environments. Some classic techniques from the existing machine learning literature are reviewed, and deep learning models are presented along with a detailed account of benchmark datasets and their respective domains. The paper reviews various evaluation metrics, including Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Word Mover's Distance (WMD), and Recall-Oriented Understudy for Gisting Evaluation (ROUGE), with their pros and cons. Finally, this article lists some future directions and proposed work for context enhancement using key scene extraction with object detection in a particular frame, especially how to improve the context of video description by analyzing key frame detection through morphological image analysis. Additionally, the paper discusses a novel approach involving sentence reconstruction and context improvement through key frame object detection, which incorporates the fusion of large language models for refining results. The ultimate results arise from enhancing the generated text of the proposed model by improving the predicted text and isolating objects using various keyframes. These keyframes identify dense events occurring in the video sequence.
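To make the simplest of these metrics concrete, BLEU-1 reduces to clipped unigram precision times a brevity penalty; a minimal single-sentence, single-reference sketch (real evaluations use corpus-level toolkit implementations with smoothing):

```python
import math
from collections import Counter

# Single-sentence BLEU-1: clipped unigram precision * brevity penalty.

def bleu1(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each candidate word's count by its count in the reference
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = bleu1("a man rides a horse", "a man is riding a horse")
```

Here "rides" has no match and the candidate is one token short, so both the precision and the brevity penalty pull the score below 1; BLEU-4 extends the same idea to 4-gram overlap, which is why it drops much faster than BLEU-1.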
Funding: This research was funded by the Natural Science Foundation of Gansu Province (Approval Nos. 20JR10RA334 and 21JR7RA570), the 2021 Longyuan Youth Innovation and Entrepreneurship Talent Project (Approval No. 2021LQGR20), and the University Level Innovation Project of Gansu University of Political Science and Law (Approval Nos. GZF2020XZD18 and jbzxyb2018-01).
Abstract: The image description task is the intersection of computer vision and natural language processing, and it has important prospects, including helping computers understand images and obtaining information for the visually impaired. This study presents an innovative approach employing deep reinforcement learning to enhance the accuracy of natural language descriptions of images. Our method focuses on refining the reward function in deep reinforcement learning, facilitating the generation of precise descriptions by aligning visual and textual features more closely. Our approach comprises three key architectures. Firstly, it utilizes Residual Network 101 (ResNet-101) and Faster Region-based Convolutional Neural Network (Faster R-CNN) to extract average and local image features, respectively, followed by the implementation of a dual attention mechanism for intricate feature fusion. Secondly, the Transformer model is engaged to derive contextual semantic features from textual data. Finally, the generation of descriptive text is executed through a two-layer long short-term memory network (LSTM), directed by the value and reward functions. Compared with image description methods that rely on deep learning, the Bilingual Evaluation Understudy (BLEU-1) score is 0.762, which is 1.6% higher, and the BLEU-4 score is 0.299. Consensus-based Image Description Evaluation (CIDEr) scored 0.998, and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scored 0.552, the latter improved by 0.36%. These results not only attest to the viability of our approach but also highlight its superiority in the realm of image description. Future research can explore the integration of our method with other artificial intelligence (AI) domains, such as emotional AI, to create more nuanced and context-aware systems.
Funding: The authors acknowledge support from the Laoshan Laboratory (No. LSKJ202202000), the National Natural Science Foundation of China (Grant Nos. 12032002, U22A20256, and 12302253), and the Natural Science Foundation of Beijing (No. L212023), which partially funded this work.
Abstract: Combining the strengths of Lagrangian and Eulerian descriptions, the coupled Lagrangian–Eulerian methods play an increasingly important role in various subjects. This work reviews their development and application in ocean engineering. Initially, we briefly outline the advantages and disadvantages of the Lagrangian and Eulerian descriptions and the main characteristics of the coupled Lagrangian–Eulerian approach. Then, following the developmental trajectory of these methods, the fundamental formulations and the frameworks of various approaches, including the arbitrary Lagrangian–Eulerian finite element method, the particle-in-cell method, the material point method, and the recently developed Lagrangian–Eulerian stabilized collocation method, are reviewed in detail. In addition, the article reviews the research progress of these methods with applications in ocean hydrodynamics, focusing on free surface flows, numerical wave generation, wave overturning and breaking, interactions between waves and coastal structures, fluid–rigid body interactions, fluid–elastic body interactions, multiphase flow problems, and visualization of ocean flows. Furthermore, the latest research advancements in the numerical stability, accuracy, efficiency, and consistency of the coupled Lagrangian–Eulerian particle methods are reviewed; these advancements enable efficient and highly accurate simulation of complicated multiphysics problems in ocean and coastal engineering. Building on these works, the current challenges and future directions of the hybrid Lagrangian–Eulerian particle methods are summarized.
Abstract: The Dirac equation γ_μ(∂_μ − eA_μ)Ψ = mc²Ψ describes the bound states of the electron under the action of external potentials A_μ. We assumed that the fundamental form of the Dirac equation, γ_μ(∂_μ − S_μ)Ψ = 0, should describe the stable particles (the electron, the proton, and the dark-matter particle (DMP)) bound to themselves under the action of their own potentials S_μ. The new equation reveals that self-energy is a consequence of self-action; it also reveals that the spin angular momentum is a consequence of the dynamic structure of the stable particles. The quantitative results are the determination of their relative masses as well as the determination of the electromagnetic coupling constant.