Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not integrate seamlessly with Python libraries, leading to challenges in reading and writing data, especially with complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependence on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared with Python's scripting capabilities. This paper covers an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling them to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, a case study evaluates its ease of implementation, performance, and compatibility across Excel versions.
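Production Excel/Python integrations typically go through libraries such as openpyxl, pandas, or xlwings. As a dependency-free illustration of the kind of summarisation step such a solution would automate for a non-programmer, the sketch below aggregates Excel-exported CSV data with the standard library (the column names and data are invented):

```python
import csv
import io
import statistics

def group_mean(csv_text, key_col, value_col):
    """Group rows by key_col and average value_col -- a typical
    pivot-table-style step the integration would script for the user."""
    rows = csv.DictReader(io.StringIO(csv_text))
    groups = {}
    for row in rows:
        groups.setdefault(row[key_col], []).append(float(row[value_col]))
    return {k: statistics.mean(v) for k, v in groups.items()}

# Toy data standing in for a worksheet exported to CSV.
sheet = """region,sales
North,100
South,80
North,120
"""
summary = group_mean(sheet, "region", "sales")
print(summary)  # {'North': 110.0, 'South': 80.0}
```

The same grouping could be driven from a cell formula or a button in the Excel front end, keeping the analysis logic in Python.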
On-site programming big data refers to the massive data generated in the process of software development, characterized by real-time arrival, complexity, and difficulty of processing. Data cleaning is therefore essential for on-site programming big data. Duplicate data detection is an important step in data cleaning, which saves storage resources and enhances data consistency. To address the shortcomings of the traditional Sorted Neighborhood Method (SNM) and the difficulty of detecting duplicates in high-dimensional data, an optimized algorithm based on random forests with a dynamic, adaptive window size is proposed. The efficiency of the algorithm is improved by refining the key-selection method, reducing the dimensionality of the data set, and using an adaptive variable-size sliding window. Experimental results show that the improved SNM algorithm exhibits better performance and achieves higher accuracy.
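A minimal, pure-Python sketch of the Sorted Neighborhood idea with an adaptive window may clarify the mechanism: records are sorted by a key, each record is compared only to neighbors inside a window, and the window grows when duplicates are found and shrinks otherwise. The similarity measure, threshold, and window bounds here are illustrative choices, not the paper's:

```python
from difflib import SequenceMatcher

def sim(a, b):
    """Rough string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def snm_adaptive(records, key=lambda r: r, threshold=0.85, min_w=2, max_w=6):
    """Sorted Neighborhood Method with a variable-size sliding window:
    sort by key, compare each record to the next w-1 neighbors, and
    adapt w based on whether duplicates were just found."""
    recs = sorted(records, key=key)
    dupes, w = set(), min_w
    for i in range(len(recs)):
        found = False
        for j in range(i + 1, min(i + w, len(recs))):
            if sim(key(recs[i]), key(recs[j])) >= threshold:
                dupes.add(frozenset((recs[i], recs[j])))
                found = True
        # Adaptive step: widen the window near clusters of duplicates.
        w = min(w + 1, max_w) if found else max(w - 1, min_w)
    return dupes

pairs = snm_adaptive(["john smith", "jon smith", "mary jones", "zed zane"])
print(pairs)  # one duplicate pair: john smith / jon smith
```

The paper's contribution layers random-forest-based key selection and dimensionality reduction on top of this basic loop.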
PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (procedures, functions, and packages) to improve software reusability and hide the complexity of a specific operation behind a name; it also acts as an interface between the SQL database and DEVELOPER. It is therefore important to test these modules consisting of procedures and functions. In this paper, a new genetic algorithm (GA) is used as a search technique to find the test data required, according to branch-coverage criteria, to test stored PL/SQL program units. The experimental results show that this goal was not fully achieved: the test target in some branches is not reached, and the coverage percentage is 98%. A problem arises when the target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for that branch.
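A hedged sketch of GA-based test-data search may help: a toy predicate stands in for a real PL/SQL branch, and fitness is the standard branch-distance heuristic (zero means the branch is covered). All operators and parameters here are illustrative, not the paper's configuration:

```python
import random

def branch_distance(x, y):
    """How far input (x, y) is from taking the target branch of a toy
    unit: IF x = 2*y AND x > 100 THEN ...  Zero means covered."""
    return abs(x - 2 * y) + max(0, 101 - x)

def ga_search(pop_size=40, gens=300, lo=-500, hi=500, seed=1):
    """Hypothetical GA: elitist selection, one-point crossover over the
    two input variables, and small integer mutation."""
    rng = random.Random(seed)
    pop = [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]                      # branch covered
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])               # one-point crossover
            if rng.random() < 0.5:             # mutation
                child = (child[0] + rng.randint(-10, 10),
                         child[1] + rng.randint(-10, 10))
            children.append(child)
        pop = parents + children
    return None                                # target branch not reached

result = ga_search()
```

When the branch condition instead compares against values fetched from a table, the fitness function has no gradient over the inputs, which mirrors the failure case reported in the abstract.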
When an inter-satellite link antenna on a user satellite adopts an elevation-over-azimuth architecture, a zenith pass problem occurs whenever the antenna is tracking the tracking and data relay satellite (TDRS). This paper addresses the problem by first introducing the movement laws of the inter-satellite link to predict the motion of the user satellite antenna, and then analyzing in detail the potential and actual moments of the zenith pass. A number of specific orbit altitudes for the user satellite that eliminate the blindness zone are obtained. Finally, based on the predictions from the movement laws of the inter-satellite link, zenith pass tracking strategies for the user satellite antenna are designed under program guidance using a trajectory preprocessor. Simulations confirm the rationality and feasibility of the strategies in dealing with the zenith pass problem.
A neuron-oriented programming system based on parallel neural information processing is presented. Built upon 4-8 processing elements (TMS C30), the system provides users with a high-speed, general-purpose platform for developing large-scale neural network applications.
A specialized Hungarian algorithm was developed for the maximum likelihood data association problem, with two implementation versions accounting for the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem; its duality and optimality conditions are given. The Hungarian algorithm is presented together with its computational steps, data structures, and computational complexity. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. Computational results show that the HT algorithm is slightly faster than the HF algorithm, and that both are superior to the classic Munkres algorithm.
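The bipartite matching formulation can be illustrated on a tiny instance. The sketch below brute-forces all track-to-measurement pairings, which is only feasible for very small n (unlike the Hungarian algorithm's polynomial time); the log-likelihood values are invented:

```python
import itertools
import math

def best_association(loglik):
    """Maximum-likelihood data association on a small square instance:
    loglik[i][j] is the log-likelihood of pairing track i with
    measurement j.  Exhaustively scores every permutation."""
    n = len(loglik)
    best, best_perm = -math.inf, None
    for perm in itertools.permutations(range(n)):
        score = sum(loglik[i][perm[i]] for i in range(n))
        if score > best:
            best, best_perm = score, perm
    return best_perm, best

ll = [[-1.0, -4.0, -6.0],
      [-3.0, -1.5, -5.0],
      [-6.0, -2.0, -0.5]]
perm, score = best_association(ll)
print(perm, score)  # (0, 1, 2) -3.0
```

The Hungarian algorithm (or its HF/HT variants) finds the same optimum without enumerating the n! permutations.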
This paper describes multi-view modeling and the data model transformation that supports it. We have previously proposed a reference model of CAD system generation that can be applied to various domain-specific languages. However, the current CAD system generation cannot integrate data from multiple domains. Generally, each domain has its own view of a product; for example, in the domain of architectural structure, designers extract the data they need from the architectural design data, and domain experts translate one view into another across domains using their own expertise. Multi-view modeling is a way to integrate product data from multiple domains and to make it possible for computers to translate views among various domains.
The transition from traditional learning to practice-oriented programming learning causes learners discomfort. This discomfort quickly breeds negative emotions when programming difficulties are encountered, which leads learners to lose interest in programming or even give up. Emotion plays a crucial role in learning: educational psychology research shows that positive emotion can promote learning performance, increase learning interest, and cultivate creative thinking. Accurate recognition and interpretation of programming learners' emotions makes it possible to give them timely feedback and to adjust teaching strategies precisely and individually, which is of considerable significance for improving the effectiveness of programming learning and education. Existing methods of sensor-free emotion prediction are based on keyboard dynamics, mouse interaction data, and interaction logs, respectively; however, none of these studies considered the temporal characteristics of emotion, resulting in low recognition accuracy. For the first time, this paper proposes an emotion prediction model based on time series and context information. We establish a Bi-recurrent neural network that automatically captures the temporal characteristics of the data, exploring the application of deep learning to academic emotion prediction. The results show that the classification ability of this model is much better than that of the original LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and RNN (Recurrent Neural Network), and that the model generalizes better.
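The bidirectional idea behind such a model can be sketched in a few lines of pure Python: run a recurrent pass in each direction and concatenate the hidden states, so each time step sees both past and future context. The scalar weights and inputs below are toy values, not a trained model:

```python
import math

def rnn_pass(xs, w_in, w_rec, reverse=False):
    """One recurrent pass over a scalar sequence:
    h_t = tanh(w_in * x_t + w_rec * h_prev)."""
    seq = list(reversed(xs)) if reverse else xs
    h, hs = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        hs.append(h)
    # Re-align backward states with the original time indices.
    return list(reversed(hs)) if reverse else hs

def bi_rnn(xs, w_in=0.5, w_rec=0.3):
    """Bidirectional encoding: pair the forward and backward hidden
    state at every step, giving each step context from both sides."""
    fwd = rnn_pass(xs, w_in, w_rec)
    bwd = rnn_pass(xs, w_in, w_rec, reverse=True)
    return list(zip(fwd, bwd))

states = bi_rnn([1.0, -1.0, 2.0])
```

A classifier head over these paired states would then predict the learner's emotion label at each interaction step.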
The primary focus of this paper is to design a progressive restoration plan for an enterprise data center following a partial or full disruption. Repairing and restoring disrupted components in an enterprise data center requires significant time and human effort. Following a major disruption, the recovery process involves multiple stages, and during each stage the partially recovered infrastructure can provide limited services to users at some degraded service level. How fast and efficiently an enterprise infrastructure can be recovered, however, depends on how the recovery mechanism restores the disrupted components, considering the inter-dependencies between services along with the limitations of expert human operators. The problem turns out to be NP-hard and rather complex, and we devise an efficient meta-heuristic to solve it. Using real-world examples, we show that the proposed meta-heuristic provides very accurate results while running 600-2800 times faster than the optimal solution obtained from a general-purpose mathematical solver [1].
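As an illustration only (the paper's method is a meta-heuristic, not this rule), a greedy scheduler that repairs ready components in order of utility per unit repair time, while respecting service inter-dependencies, might look like this; the component names, times, and utilities are invented:

```python
def restoration_order(repair_time, utility, deps):
    """Greedy sketch of progressive restoration: repeatedly repair the
    'ready' component (all dependencies already restored) with the best
    utility-per-hour ratio."""
    done, order = set(), []
    while len(done) < len(repair_time):
        ready = [c for c in repair_time
                 if c not in done and deps.get(c, set()) <= done]
        ready.sort(key=lambda c: utility[c] / repair_time[c], reverse=True)
        nxt = ready[0]
        done.add(nxt)
        order.append(nxt)
    return order

rt = {"network": 2, "storage": 4, "db": 3, "web": 1}   # hours to repair
ut = {"network": 5, "storage": 4, "db": 9, "web": 6}   # service value once up
dp = {"db": {"network", "storage"}, "web": {"db"}}     # dependencies
plan = restoration_order(rt, ut, dp)
print(plan)  # ['network', 'storage', 'db', 'web']
```

Such greedy orderings are fast but can be far from optimal, which is why the paper resorts to a meta-heuristic for the NP-hard general case.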
The Chang'E-1 and Chang'E-2 missions have successfully obtained a huge amount of lunar scientific data, through the seven onboard instruments including a CCD stereo camera, a laser altimeter, an interference imaging spectrometer, an X-ray spectrometer, a microwave radiometer, a high-energy particle detector and a solar-wind ion detector. Most of the Chang'E data are now publicly available to the science community, and this article serves as an official guide on how these data are classified and organized, and how they can be retrieved from http://159.226.88.59:7779/CE1OutENGWeb/. This article also presents the detailed specifications of various instruments and gives examples of research progress made based on these instruments.
Funding (on-site programming big data cleaning): supported by the National Key R&D Program of China (No. 2018YFB1003905), the National Natural Science Foundation of China (Grant No. 61971032), and the Fundamental Research Funds for the Central Universities (No. FRF-TP-18-008A3).
Funding (maximum likelihood data association): supported by the National Natural Science Foundation of China (60272024).
Funding (programming learners' emotion prediction): supported by the 2018-2020 Higher Education Talent Training Quality and Teaching Reform Project of Sichuan Province (Grant No. JG2018-46), the Science and Technology Planning Program of Sichuan University and Luzhou (Grant No. 2017CDLZG30), and the Postdoctoral Science Fund of Sichuan University (Grant No. 2019SCU12058).
Funding (Chang'E lunar data): supported jointly by the National Science and Technology Major Project of the Ministry of Science and Technology of China, the China Lunar Exploration Project, and the National High-tech R&D Program of China.