Abstract: The Computer Laboratory of the University of Cambridge was originally founded to provide a computing service for different disciplines across the university. As computer science developed into a discipline in its own right, boundaries necessarily arose between it and other disciplines, in a way that is now often detrimental to progress. It is therefore necessary to reinvigorate the relationship between computer science and other academic disciplines and to celebrate exploration and creativity in research. To do this, the structures of the academic department have to act as supporting scaffolding rather than as barriers. Examples are given of the efforts being made at the University of Cambridge to approach this problem.
Abstract: At the panel session of the 3rd Global Forum on the Development of Computer Science, attendees had an opportunity to deliberate on recent issues affecting computer science departments as a result of the field's rapid growth. Six heads of university computer science departments participated in the discussion, including the moderator, Professor Andrew Yao. The first issue was how universities are managing the growing number of applicants in addition to swelling class sizes. Several approaches were suggested, including increasing faculty hiring, implementing scalable teaching tools, and working more closely with other departments through degree programs that integrate computer science with other fields. The second issue concerned the position and role of computer science within the broader sciences. Participants generally agreed that all fields increasingly rely on computer science techniques, and that effectively disseminating these techniques is key to unlocking broader scientific progress.
Funding: Support from the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia, under the auspices of Project Number IFP22UQU4281768DSR122.
Abstract: Colletotrichum kahawae (Coffee Berry Disease, CBD) spreads through spores carried by wind, rain, and insects, affecting coffee plantations and causing up to 80% yield losses and poor-quality coffee beans. The disease is hard to control precisely because its spores are spread so easily. Colombian researchers previously used a deep learning system to identify CBD in coffee cherries at three growth stages and classified photographs of infected and uninfected cherries with 93% accuracy using a random forest method; however, if the dataset is too small or too noisy, such an algorithm may fail to learn the data patterns and generate accurate predictions. Early detection of Colletotrichum kahawae in coffee cherries therefore requires automated processing, prompt recognition, and accurate classification. The proposed methodology selects CBD image datasets through four different stages for training and testing. XGBoost is used to train a model on datasets of coffee berries, with each image labeled as healthy or diseased. Once the model is trained, the SHAP algorithm is used to determine which features were essential for making predictions, such as the cherry's colour, whether it had spots or other damage, and how large the lesions were. Visualization is important for classification, showing how the colour of the berry correlates with the presence of disease. To evaluate the model's performance and mitigate overfitting, a 10-fold cross-validation approach is employed: the dataset is partitioned into ten subsets, and the model is repeatedly trained on nine subsets and evaluated on the remaining one. In comparison with other contemporary methodologies, the proposed model achieved an accuracy of 98.56%.
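As a rough illustration of the pipeline just described (XGBoost classifier, SHAP feature attribution, 10-fold cross-validation), the sketch below assumes the cherry images have already been reduced to a small tabular feature set; the feature names and data are hypothetical placeholders, not the paper's actual dataset.

```python
# Minimal sketch of an XGBoost + SHAP + 10-fold CV pipeline.
# X is a placeholder feature matrix (colour statistics, spot/lesion descriptors)
# with binary labels y; feature names are hypothetical.
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # placeholder features per cherry image
y = rng.integers(0, 2, size=200)              # 0 = healthy, 1 = diseased
feature_names = ["mean_hue", "spot_count", "lesion_area", "damage_score"]

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)

# 10-fold cross-validation: train on 9 folds, evaluate on the held-out fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Fit on the full set and explain predictions with SHAP feature attributions.
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.mean(np.abs(shap_values), axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(name, round(float(imp), 4))
```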
Abstract: The number of students demanding computer science (CS) education is rapidly rising, and while faculty sizes are also growing, the traditional pipeline consisting of a CS major, a CS master's, and then a move to industry or a Ph.D. program is simply not scalable. To address this problem, the Department of Computing at the University of Illinois has introduced a multidisciplinary approach to computing, a scalable and collaborative way to capitalize on the tremendous demand for computer science education. The key component of the approach is the blended major, also referred to as "CS+X", where CS denotes computer science and X denotes a non-computing field. These CS+X blended degrees enable win-win partnerships among multiple subject areas, distributing the educational responsibilities while growing the entire university. To meet the demand from non-CS majors, another pathway that is offered is a graduate certificate program in addition to the traditional minor program. To accommodate the large number of students, scalable teaching tools, such as automatic graders, have also been developed.
Abstract: The need for information systems in organizations and economic units is increasing, as a great deal of data arises from their many processes and must be addressed to provide information of interest to multiple users. New and distinctive management accounting systems can easily meet the financial, accounting, and management needs of institutions and individuals while taking into account the accuracy, speed, and confidentiality of the information for which the system is designed. The paper describes a computerized system that predicts the budget for the new year from past budgets using time series analysis, keeping forecast errors to a minimum, and that controls the budget during the year through the ability to monitor expenditure, compare planned against actual figures, and calculate deviations. It also measures performance ratios and computes a number of indicators relating to budgets, such as the capital intensity rate, the growth rate, and the profitability ratio, and gives a clear indication of whether these ratios are good or not. The system has a positive impact on information systems because of its ability to perform complex calculations and process paperwork faster than before, and it offers high flexibility, since it can make any required adjustments to help the relevant parties control financial matters and take appropriate decisions.
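A minimal sketch of the kind of computation described above, assuming annual budget totals and quarterly planned/actual figures; a simple linear trend stands in for the paper's (unspecified) time series model, and all figures are hypothetical.

```python
# Budget forecasting from past annual budgets plus deviation/ratio checks.
# The yearly figures are hypothetical; a linear trend stands in for whatever
# time series model the described system actually uses.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023])
budgets = np.array([1.20e6, 1.32e6, 1.41e6, 1.55e6, 1.68e6])  # approved budgets

# Fit a linear trend and forecast next year's budget.
slope, intercept = np.polyfit(years, budgets, deg=1)
next_year = years[-1] + 1
forecast = slope * next_year + intercept
print(f"Forecast budget for {next_year}: {forecast:,.0f}")

# During the year: compare planned vs. actual spending and compute deviations.
planned = np.array([400e3, 420e3, 430e3, 450e3])   # quarterly plan
actual = np.array([395e3, 447e3, 425e3, 470e3])    # recorded expenditure
deviation = actual - planned
deviation_pct = deviation / planned * 100
for q, d, p in zip(range(1, 5), deviation, deviation_pct):
    print(f"Q{q}: deviation {d:+,.0f} ({p:+.1f}%)")

# Simple budget-related indicators (growth rate and a profitability ratio).
growth_rate = (budgets[-1] - budgets[-2]) / budgets[-2] * 100
profit, revenue = 210e3, 1.9e6
profitability_ratio = profit / revenue * 100
print(f"Growth rate: {growth_rate:.1f}%  Profitability: {profitability_ratio:.1f}%")
```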
Funding: The Deanship of Graduate Studies and Scientific Research at Qassim University provided financial support (QU-APC-2024-9/1).
Abstract: Control signaling is mandatory for the operation and management of all types of communication networks, including the Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling turning into signaling storms that halt network operations and cause the affected telecom companies large financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss their possible solutions and countermeasures in detail. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects such as architecture, additional signaling, and fidelity. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey exists on the subject.
Abstract: Lower back pain is one of the most common medical problems in the world, experienced by a huge percentage of people everywhere. Due to its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases, and it is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture that handles the challenges of medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, named MU-Net, incorporating a Meijering convolutional layer that applies the Meijering filter, to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from the T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. On this dataset, the proposed MU-Net model achieves 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU).
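For reference, a hedged sketch of the reported evaluation metrics (Dice, IoU/Jaccard, pixel accuracy) computed on binary masks, together with scikit-image's Meijering ridge filter as a stand-in for the Meijering convolutional layer; the masks and slice below are random placeholders, and the MU-Net architecture itself is not reproduced.

```python
# Segmentation metrics on binary masks, plus scikit-image's Meijering ridge
# filter as a preprocessing stand-in. The masks/image are random placeholders.
import numpy as np
from skimage.filters import meijering

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

def pixel_accuracy(pred, gt):
    return (pred == gt).mean()

rng = np.random.default_rng(1)
gt = rng.random((128, 128)) > 0.5          # placeholder ground-truth mask
pred = gt.copy()
pred[:4] = ~pred[:4]                       # perturb a few rows to simulate errors

print(f"Dice {dice(pred, gt):.4f}  IoU {iou(pred, gt):.4f}  PA {pixel_accuracy(pred, gt):.4f}")

# Ridge enhancement of a (placeholder) mid-sagittal slice before segmentation.
slice_img = rng.random((128, 128))
ridges = meijering(slice_img, sigmas=range(1, 4), black_ridges=False)
```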
Funding: This work was supported by the "Intelligent Recognition Industry Service Center" as part of the Featured Areas Research Center Program under the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, and by the National Science and Technology Council, Taiwan, under grants 113-2221-E-224-041 and 113-2622-E-224-002. Additionally, partial support was provided by Isuzu Optics Corporation.
Abstract: Liver cancer remains a leading cause of mortality worldwide, and precise diagnostic tools are essential for effective treatment planning. Liver Tumors (LTs) vary significantly in size, shape, and location, and can present with tissues of similar intensities, making the automatic segmentation and classification of LTs from abdominal tomography images crucial and challenging. This review examines recent advancements in Liver Segmentation (LS) and Tumor Segmentation (TS) algorithms, highlighting their strengths and limitations regarding precision, automation, and resilience. Performance metrics are used to assess key detection algorithms and analytical methods, emphasizing their effectiveness and relevance in clinical contexts. The review also addresses ongoing challenges in liver tumor segmentation and identification, such as managing high variability in patient data and ensuring robustness across different imaging conditions. By comparing popular methods, it suggests directions for future research and offers insights into technological advancements that can enhance surgical planning and diagnostic accuracy. This paper contributes to a comprehensive understanding of current liver tumor detection techniques, provides a roadmap for future innovations, and, by integrating recent progress with remaining challenges, aims to improve diagnostic and therapeutic outcomes for liver cancer.
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Large Group Research Project under grant number RGP2/421/45; supported via funding from Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1446; supported by the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; and supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws or diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification; most extracted image features are irrelevant and increase computation time. This article therefore uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. The learning paradigm is trained using similarity- and correlation-based features over different textural intensities and pixel distributions. Pixel similarities with high indexes across the various distribution patterns are recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency, and the more congruent pixels are sorted in descending order of selection, which identifies better regions than the raw distribution. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of texture and medical image pattern. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared with other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared with the same models and dataset.
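The sketch below illustrates one simple correlation-driven feature ranking in the spirit of the description above; the feature matrix, labels, and threshold are hypothetical, and it is not the authors' exact congruent selection procedure.

```python
# Illustrative correlation-based feature ranking. The feature matrix, labels,
# and the 0.2 threshold are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((500, 20))                 # 20 candidate image features per sample
y = (X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.random(500) > 1.0).astype(float)

# Pearson correlation of each feature with the diagnostic label.
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

# Sort features by absolute correlation (descending) and keep the strongest ones.
order = np.argsort(-np.abs(corr))
selected = [j for j in order if abs(corr[j]) > 0.2]
print("top-ranked features:", order[:5], "selected:", selected)
```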
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), grant number IMSIU-RP23066.
Abstract: This study directs the discussion of HIV disease with a novel kind of complex dynamical generalized and piecewise operator in the sense of classical and Atangana-Baleanu (AB) derivatives of arbitrary order. The HIV infection model has a susceptible class, a recovered class, and an infected class divided into three sub-levels or categories. The total time interval is split into two subintervals, which are investigated under the ordinary and fractional-order AB operators, respectively. The proposed model is tested separately for the existence and uniqueness of solutions on both intervals. The numerical solution of the proposed model is treated by a piecewise numerical iterative scheme based on Newton's polynomial. The method is established for piecewise derivatives under natural order and the non-singular Mittag-Leffler law. The cross-over or bending characteristics in the dynamical system of HIV are easily examined through this approach, whose memory effect helps in controlling the disease. The study uses a neural network (NN) technique to obtain a better set of weights with low residual errors, with the number of epochs set to 1000. The obtained figures represent the approximate solution and the absolute error, which are tested with the NN to train the data accurately.
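Purely to illustrate the piecewise, cross-over idea, the sketch below integrates a toy susceptible/infected/recovered system over two subintervals with a classical solver, switching dynamics at t1; the Atangana-Baleanu fractional operator and the Newton-polynomial scheme are not reproduced, and all rates are hypothetical.

```python
# Toy illustration of piecewise integration with a cross-over time t1.
# A classical ODE solver stands in for both operators; the paper itself uses an
# Atangana-Baleanu fractional operator on the second interval and a
# Newton-polynomial iterative scheme. Parameter values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def sir_like(t, y, beta, gamma):
    # Minimal susceptible/infected/recovered dynamics (not the paper's model).
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

t1, t_end = 50.0, 200.0
y0 = [0.99, 0.01, 0.0]

# First interval: one set of dynamics (classical-order regime).
sol1 = solve_ivp(sir_like, (0.0, t1), y0, args=(0.30, 0.10))

# Second interval: continue from the state at t1 with changed dynamics,
# mimicking the cross-over behaviour of the piecewise operator.
y_t1 = sol1.y[:, -1]
sol2 = solve_ivp(sir_like, (t1, t_end), y_t1, args=(0.15, 0.10))

print("state at t1:   ", np.round(y_t1, 4))
print("state at t_end:", np.round(sol2.y[:, -1], 4))
```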
Abstract: The importance of prerequisites for education has recently become a promising research direction. This work proposes a statistical model for measuring dependencies between knowledge units in learning resources. Instructors are expected to present knowledge units in a semantically well-organized manner to facilitate students' understanding of the material. The proposed model reveals how the inner concepts of a knowledge unit depend on each other and on concepts outside the knowledge unit. To help capture the complexity of the inner concepts themselves, WordNet is included as an external knowledge base in this model. The goal is to develop a model that enables instructors to evaluate whether a learning regime has hidden relationships that might hinder students' ability to understand the material. The evaluation, employing three textbooks, shows that the proposed model succeeds in discovering hidden relationships among knowledge units in learning resources and in exposing the knowledge gaps in some knowledge units.
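A hedged sketch of how WordNet (via NLTK) can score relatedness between concept terms drawn from two knowledge units; the concept lists are hypothetical, and this similarity measure is only illustrative, not the paper's statistical model.

```python
# WordNet-based relatedness between concept terms from two knowledge units.
# The concept lists are hypothetical placeholders.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def concept_similarity(term_a, term_b):
    """Max path similarity over the noun synsets of two concept terms."""
    syns_a = wn.synsets(term_a, pos=wn.NOUN)
    syns_b = wn.synsets(term_b, pos=wn.NOUN)
    scores = [a.path_similarity(b) for a in syns_a for b in syns_b]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

unit_recursion = ["recursion", "stack", "function"]     # concepts in one knowledge unit
unit_sorting = ["sorting", "array", "comparison"]       # concepts in another unit

for a in unit_recursion:
    for b in unit_sorting:
        print(f"{a:10s} ~ {b:10s}: {concept_similarity(a, b):.3f}")
```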
Abstract: Computer science (CS) is a discipline that studies the scientific and practical approach to computation and its applications. As we enter the Internet era, computers and the Internet have become intimate parts of our daily life. Due to its rapid development and wide applications, more CS graduates are needed in industries around the world. In the USA, this situation is even more severe due to the rapid expansion of several big IT-related companies such as Microsoft, Google, Facebook, Amazon, IBM, etc. Hence, how to effectively train a large number of
Funding: Project (N-12-NM-LU01-C01) supported by the Construction of NTIS (National Science & Technology Information Service) Program funded by the National Science & Technology Commission (NSTC), Korea.
Abstract: This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations in NTIS. First, an expertise database was constructed by extracting keywords after indexing national R&D information in Korea (human resources, projects, and outcomes) and applying an expertise calculation algorithm. Weight values were selected in consideration of the characteristics of national R&D information, and expertise points were then calculated by applying the weighted values. In addition, joint research and collaborative relations were implemented in a knowledge map format through network analysis of the national R&D information.
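The sketch below illustrates the two ingredients described above, a weighted expertise score and a collaboration graph, on hypothetical R&D records; the weights, scoring rule, and data layout are assumptions, not the NTIS implementation.

```python
# Weighted expertise score and a collaboration network from R&D records.
# Records, keyword weights, and the scoring rule are hypothetical.
import networkx as nx

# Each record: (project_id, researcher, keyword, role_weight)
records = [
    ("P1", "Kim", "machine learning", 1.0),
    ("P1", "Lee", "machine learning", 0.5),
    ("P2", "Kim", "machine learning", 0.7),
    ("P2", "Park", "databases", 1.0),
]

# Expertise points: weighted count of keyword occurrences per researcher.
expertise = {}  # (person, keyword) -> weighted count
for _, person, keyword, w in records:
    expertise[(person, keyword)] = expertise.get((person, keyword), 0.0) + w
print(expertise)

# Collaboration (knowledge map) graph: link researchers who share a project.
G = nx.Graph()
by_project = {}  # project_id -> list of researchers
for pid, person, _, _ in records:
    by_project.setdefault(pid, []).append(person)
for members in by_project.values():
    for i, a in enumerate(members):
        for b in members[i + 1:]:
            G.add_edge(a, b)
print("collaborators of Kim:", list(G.neighbors("Kim")))
```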
Abstract: A recent work has shown that using an ion trap quantum processor can speed up the decision making of a reinforcement learning agent. Its quantum advantage is observed when the external environment changes and the agent needs to relearn. One characteristic of this quantum hardware discovered in that study is that it tends to overestimate the values used to determine the actions the agent will take. IBM's five-qubit superconducting quantum processor is a popular quantum platform. The aims of our study are twofold. First, we want to identify the hardware characteristics of IBM's 5Q quantum computer when running this learning agent, compared with the ion trap processor. Second, through careful analysis, we observe that the quantum circuit employed in the ion trap processor for this agent can be simplified. Furthermore, when tested on IBM's 5Q quantum processor, our simplified circuit demonstrates enhanced performance over the original circuit on one of the hard learning tasks investigated in the previous work. We also use IBM's quantum simulator when a good baseline is needed to compare performances. As more and more quantum hardware devices move out of the laboratory and become generally available for public use, our work emphasizes the fact that the features and constraints of the quantum hardware can take a toll on the performance of quantum algorithms.
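As a generic illustration of the simulator baseline mentioned above, the sketch below simulates a tiny two-qubit circuit (H then CNOT) as a statevector with NumPy; this is not the agent's actual circuit nor IBM's toolchain.

```python
# Minimal statevector simulation of a generic two-qubit circuit: H on qubit 0,
# then CNOT. This is the kind of ideal, noise-free baseline a simulator gives
# before a circuit is run on hardware; it is not the agent's actual circuit.
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                      # |00>
state = np.kron(H, I2) @ state      # H on the first qubit
state = CNOT @ state                # entangle: (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2          # ideal measurement probabilities
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"|{basis}>: {p:.3f}")
```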
Abstract: In modern computer games, "bots" (intelligent, realistic agents) play a prominent role in the popularity of a game in the market. Typically, bots are modeled using finite-state machines and then programmed via simple conditional statements hard-coded into the bot's logic. Since such bots have become quite predictable to an experienced game player, a player might lose interest in the game. We propose the use of a game-theoretic learning rule called fictitious play for improving the behavior of these computer game bots, which will make them less predictable and hence the game more enjoyable.
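A minimal sketch of fictitious play for a two-player matrix game (matching pennies), where each player best-responds to the opponent's empirical action frequencies; the payoff matrix and iteration count are illustrative and not tied to any particular game bot.

```python
# Fictitious play for a two-player zero-sum matrix game (matching pennies).
# Each player best-responds to the opponent's empirical action frequencies;
# a game bot could pick actions the same way instead of from fixed rules.
import numpy as np

# Row player's payoffs for matching pennies (column player gets the negative).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

counts_row = np.ones(2)   # empirical counts of the row player's past actions
counts_col = np.ones(2)   # empirical counts of the column player's past actions

for _ in range(5000):
    col_freq = counts_col / counts_col.sum()
    row_freq = counts_row / counts_row.sum()
    row_action = int(np.argmax(A @ col_freq))        # row player maximizes payoff
    col_action = int(np.argmax(-(row_freq @ A)))     # column player minimizes A
    counts_row[row_action] += 1
    counts_col[col_action] += 1

print("row empirical strategy:", np.round(counts_row / counts_row.sum(), 3))
print("col empirical strategy:", np.round(counts_col / counts_col.sum(), 3))
```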
Funding: This project was funded by the Deanship of Scientific Research (DSR), King Abdul-Aziz University, Jeddah, Saudi Arabia, under Grant No. RG-11-611-43.
Abstract: Any number that can be uniquely determined by a graph is called a graph invariant. During the last twenty years, countless mathematical graph invariants have been characterized and utilized for correlation analysis. However, no reliable examination has been undertaken to decide how much these invariants are related to a network graph or molecular graph. This paper discusses three different variants of bridge networks with good prediction potential in computer science, mathematics, chemistry, pharmacy, informatics, and biology, in the context of physical and chemical structures and networks, because the K-Banhatti Sombor invariants are freshly presented and have numerous prediction qualities for different variants of bridge graphs or networks. The study solves the topology of three different types of bridge graphs/networks with two invariants, the K-Banhatti Sombor index and its reduced form. These results can be used for the modeling of computer networks such as local area networks (LAN), metropolitan area networks (MAN), and wide area networks (WAN), the backbone of the Internet, and other computer structures and networks, as well as power generation, bio-informatics, and chemical compound synthesis.
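For concreteness, the sketch below computes the classical Sombor index of a small stand-in graph with NetworkX; the K-Banhatti Sombor invariants studied in the paper are defined over vertex-edge incidences and would follow the authors' formulas, which are not reproduced here.

```python
# Classical Sombor index SO(G) = sum over edges uv of sqrt(deg(u)^2 + deg(v)^2)
# for a small path graph standing in for a bridge network. The K-Banhatti
# Sombor variants in the paper use vertex-edge incidences instead.
import math
import networkx as nx

def sombor_index(G: nx.Graph) -> float:
    deg = dict(G.degree())
    return sum(math.sqrt(deg[u] ** 2 + deg[v] ** 2) for u, v in G.edges())

# A small stand-in graph (a path of 6 vertices); real bridge graphs join
# copies of a base graph through cut vertices.
G = nx.path_graph(6)
print(f"Sombor index: {sombor_index(G):.4f}")
```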
Funding: Supported in part by the Department of National Defence's Innovation for Defence Excellence and Security (IDEaS) Program, Canada, through the Project of Auto Defence Towards Trustworthy Technologies for Autonomous Human-Machine Systems, NSERC, and the IEEE SMC Society Technical Committee on Brain-Inspired Systems (TCBCS).
Abstract: Autonomous systems are an emerging AI technology functioning without human intervention, underpinned by the latest advances in intelligence, cognition, computer, and systems sciences. This paper explores the intelligent and mathematical foundations of autonomous systems. It focuses on the structural and behavioral properties that constitute the intelligent power of autonomous systems. It explains how system intelligence aggregates from reflexive, imperative, and adaptive intelligence to autonomous and cognitive intelligence. A hierarchical intelligence model (HIM) is introduced to elaborate the evolution of human and system intelligence as an inductive process. The properties of system autonomy are formally analyzed towards a wide range of applications in computational intelligence and systems engineering. Emerging paradigms of autonomous systems, including brain-inspired systems, cognitive robots, and autonomous knowledge learning systems, are described. Advances in autonomous systems will pave the way towards highly intelligent machines for augmenting human capabilities.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11873026 and U1431227), the Natural Science Foundation of Guangdong Province, China (Grant No. 2016A030313092), the National Key Research and Development Project of China (Grant No. 2019YFC0120102), and the Fundamental Research Funds for the Central Universities (Grant No. 21619413).
Abstract: Taking a large number of images, the Cassini Imaging Science Subsystem (ISS) has been routinely used in astrometry. In ISS images, disk-resolved objects often lead to false detections of stars that disturb the camera pointing correction. The aim of this study was to develop an automated processing method to remove the false image stars on disk-resolved objects in ISS images. The method includes the following steps: extracting edges, segmenting boundary arcs, fitting circles, and excluding false image stars. The proposed method was tested on 200 ISS images. Preliminary experimental results show that it can remove the false image stars in more than 95% of ISS images with disk-resolved objects in a fully automatic manner, outperforming traditional circle detection based on the Circular Hough Transform (CHT) by 17%. In addition, its speed is more than twice that of the CHT method, and it is more robust (no manual parameter tuning is needed). The proposed method was also applied to a set of ISS images of Rhea to eliminate mismatches in the pointing correction of the automatic procedure. Experimental results showed that the precision of the final astrometric results can be improved by roughly a factor of two compared with the automatic procedure without the method. This demonstrates that the proposed method is helpful for the astrometry of ISS images in a fully automatic manner.
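A hedged sketch of the circle-fitting step using an algebraic least-squares (Kasa) fit on synthetic limb points; edge extraction and arc segmentation are not shown, and the point data are made up.

```python
# Algebraic least-squares (Kasa) fit of a circle to edge points extracted from
# a disk-resolved object's limb. The edge points below are synthetic.
import numpy as np

def fit_circle(x, y):
    """Solve x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Synthetic noisy limb points of a disk centred at (120, 80) with radius 40.
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 300)
x = 120 + 40 * np.cos(theta) + rng.normal(0, 0.5, 300)
y = 80 + 40 * np.sin(theta) + rng.normal(0, 0.5, 300)

cx, cy, r = fit_circle(x, y)
print(f"centre=({cx:.2f}, {cy:.2f}), radius={r:.2f}")
# Detected "stars" falling inside this circle can then be flagged as false.
```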
Funding: The project was supported by the '973' Project under Grant No. 2004CB318000, the Doctor Start-up Foundation of Liaoning Province under Grant No. 1040225, and the Science and Technology Research Project of the Liaoning Education Bureau.
Abstract: In this paper, a series of two-line-soliton solutions and double periodic solutions of the Chaffee-Infante equation are obtained by using a new transformation. Unlike the existing methods used to find multiple soliton solutions of nonlinear partial differential equations, this approach is constructive and purely algebraic. The results found here have been verified by computer, which ensures their validity.
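For reference, the Chaffee-Infante equation is commonly written in the following form; sign and scaling conventions vary across papers, so the exact form used in this work may differ.

```latex
% Commonly cited form of the Chaffee-Infante reaction-diffusion equation,
% with parameter \lambda.
\[
  u_t - u_{xx} + \lambda \left( u^{3} - u \right) = 0 .
\]
```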
Funding: The authors declare that they have no funding for the present study.
Abstract: Reaction-diffusion systems are mathematical models linked to several physical phenomena, the most common being the change in space and time of the concentration of one or more substances. Reaction-diffusion modeling plays a substantial role in modeling computer virus propagation, much like infectious diseases. We investigate the transmission dynamics of a computer virus among computers connected to each other through a global network. The current study is devoted to a structure-preserving analysis of the computer virus propagation model, providing a numerical investigation of the reaction-diffusion computer virus epidemic model with the help of a reliable technique. The designed technique is a finite difference scheme that sustains the important physical behavior of the continuous model, such as the positivity of the dependent variables and the stability of the equilibria. The theoretical analysis of the proposed method, including the positivity of the approximation, stability, and consistency, is discussed in detail. Numerical simulations confirm the theoretical results of the designed technique.
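To illustrate the structure-preserving idea, the sketch below compares a plain forward-Euler update with a Mickens-type nonstandard finite difference update on a toy susceptible/infected/recovered virus model; the rates are hypothetical and the paper's spatial diffusion terms are omitted.

```python
# Positivity-preserving (Mickens-type nonstandard) update vs. forward Euler for
# a toy susceptible/infected/recovered computer-virus model. Rates are
# hypothetical; the paper's diffusion terms are omitted.
import numpy as np

beta, gamma, mu, Lam = 0.9, 0.2, 0.05, 0.05   # hypothetical rates
h = 2.0                                        # deliberately large step size
S, I, R = 0.8, 0.2, 0.0

def euler_step(S, I, R):
    dS = Lam - beta * S * I - mu * S
    dI = beta * S * I - (gamma + mu) * I
    dR = gamma * I - mu * R
    return S + h * dS, I + h * dI, R + h * dR

def nsfd_step(S, I, R):
    # Loss terms treated implicitly so every state stays non-negative.
    S_new = (S + h * Lam) / (1.0 + h * (beta * I + mu))
    I_new = (I + h * beta * S_new * I) / (1.0 + h * (gamma + mu))
    R_new = (R + h * gamma * I_new) / (1.0 + h * mu)
    return S_new, I_new, R_new

s1, i1, r1 = S, I, R
s2, i2, r2 = S, I, R
for _ in range(50):
    s1, i1, r1 = euler_step(s1, i1, r1)
    s2, i2, r2 = nsfd_step(s2, i2, r2)

print("Euler:", round(s1, 4), round(i1, 4), round(r1, 4))  # may go negative/oscillate
print("NSFD :", round(s2, 4), round(i2, 4), round(r2, 4))  # stays non-negative
```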