Purpose – One of the contributions of artificial intelligence (AI) to modern technology is emotion recognition, which is mostly based on facial expression and on modification of the recogniser's inference engine. Facial recognition schemes are typically built to understand user expression on online business and marketing webpages, but have limited ability to recognise elusive expressions. Basic emotions are expressed when interacting and socialising with other people online, yet learning to read user expressions, especially the subtle ones, is often a tedious task. An emotion recognition system can reduce the complexity of understanding users' subconscious thoughts and reasoning by tracking their pupil changes. Design/methodology/approach – This paper demonstrates the use of a personal computer (PC) webcam to read eye movement data, including pupil changes, as distinct user attributes. A custom eye movement algorithm (CEMA) captures user activity and records the data, which serves as input to an inference engine (an artificial neural network, ANN) that predicts the user's emotional response, conveyed as emoticons on the webpage. Findings – The prediction error observed in performance tests shows that the ANN adapts well to user behaviour and can be used for the system's modification paradigm. Research limitations/implications – One drawback of the analytical tool is its occasional inability to place some of the emoticons within the boundaries of the visual field; this limitation is to be tackled in subsequent runs with standard techniques. Originality/value – The originality of the proposed model lies in its ability to predict a user's basic emotional response from changes in pupil size relative to average recorded baseline boundaries and to convey the emoticons chronologically with the gaze points.
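The paper's CEMA/ANN pipeline is not reproduced here, but the baseline idea it rests on can be sketched minimally: record pupil diameters during a calibration phase, derive baseline boundaries as mean ± k standard deviations, and map later samples outside those boundaries to coarse arousal labels. The label names and the choice of k below are hypothetical illustrations, not the authors' method.

```python
from statistics import mean, stdev

def baseline_bounds(calibration_mm, k=2.0):
    """Derive lower/upper pupil-diameter boundaries (mm) from calibration samples.

    The boundaries are mean +/- k standard deviations; k=2.0 is an assumed value.
    """
    m, s = mean(calibration_mm), stdev(calibration_mm)
    return m - k * s, m + k * s

def classify_pupil_response(sample_mm, lo, hi):
    """Map one pupil-diameter sample to a coarse arousal label (labels hypothetical)."""
    if sample_mm > hi:
        return "aroused"      # dilation beyond the baseline band
    if sample_mm < lo:
        return "calm"         # constriction below the baseline band
    return "neutral"
```

In the paper's design such per-sample features would feed an ANN rather than a fixed threshold; the thresholding above only illustrates the "baseline boundaries" notion.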
Although WEB software development involves some difficulty, once the necessary programming skills are acquired, hands-on practice reveals the fun of programming, and this stimulates students' enthusiasm for it. This exploration of interesting WEB software development techniques is aimed at developers who are new to WEB software: through hands-on practice they experience some of the fun of WEB programming, and by progressing gradually from simple to difficult tasks, students master WEB programming skills and become more interested in programming. Adding special effects to webpages strengthens students' interest, and focused practice with databases helps students overcome their fear of programming database connections.
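The "database connection" exercise mentioned above could start with something as small as the following sketch. It uses Python's built-in sqlite3 module rather than any particular web stack, and the guestbook table and its columns are invented for the exercise.

```python
import sqlite3

def first_database_exercise(db_path=":memory:"):
    """A beginner's connect-insert-select round trip (table/columns hypothetical)."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        # Create a tiny table, insert one row with a parameterised query,
        # then read it back -- the three steps students usually fear.
        cur.execute("CREATE TABLE IF NOT EXISTS guestbook (name TEXT, message TEXT)")
        cur.execute("INSERT INTO guestbook VALUES (?, ?)", ("student", "hello, web!"))
        conn.commit()
        cur.execute("SELECT name, message FROM guestbook")
        return cur.fetchall()
    finally:
        conn.close()
```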
Webpage classification differs from traditional text classification: its words and phrases are irregular and its features are massive and unlabeled, which makes it harder to obtain effective features. To cope with this problem, we propose two scenarios for extracting meaningful strings, based on document clustering and on term clustering, with multiple strategies to optimise a Vector Space Model (VSM) and thereby improve webpage classification. The results show that document clustering works better than term clustering in coping with document content, and the best overall performance is obtained by spectral clustering combined with document clustering. Moreover, because images coexist with document content on the same webpage, the proposed method is also applied to extract meaningful terms from images, and experimental results confirm its effectiveness in improving webpage classification.
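Before any clustering-based feature selection, the abstract presupposes a Vector Space Model with weighted terms. A minimal tf-idf sketch of that step follows; whitespace tokenisation is assumed and the clustering itself is omitted, so this is only the representation the paper's multi-strategy optimisation would then refine.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy Vector Space Model: each document becomes a dict of tf-idf weights.

    tf is term frequency within the document; idf is log(N / document frequency),
    so terms appearing in every document get weight 0.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))          # document frequency per term
    vectors = []
    for doc in docs:
        tokens = doc.split()
        tf = Counter(tokens)
        vectors.append({t: (c / len(tokens)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```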
This article acquaints the public with the insights gained from conducting document searches in the Slovak public administration information system, supported by knowledge of its management. Additionally, it discusses the advantages of simulating performance parameters and comparing the results obtained with the real parameters of the eZbierka (eCollection) legislation webpage. The comparison sets simulated results, obtained with the Gatling simulation tool, against measurements of the public administration legislation webpage itself. Both sets of data, simulated and real, were generated via the document search technologies in place on the eZbierka legislation webpage, which provides users with binding laws and bylaws as electronically signed PDF files, free of charge and open source. To simulate access to documents on the webpage, the Gatling tool replayed the activity performed in the background of the information system as a user read the data, following the steps in the scenario. The settings of the simulated environment corresponded as closely as possible to the hardware parameters and network infrastructure properties used to operate the information system. From these data, by varying the load, we determined the number of users, the response time to queries, and the query count; these parameters define the throughput of the legislation webpage's server. Suitable hardware design and webpage property parameter settings confirm the required parameter determination and the performance of the search technology operations. For comparison we used data from the eZbierka legislation webpage over its operational period of January 2016 to January 2019, and analysed them to determine the parameter values of the legislation webpage of the slov-lex information system. The basic elements of the design solution include the technology used, the technology for searching the legislative documents with the support of a search tool, and a graphical database interface. By comparing the results, their dependencies, and their proportionality, it is possible to confirm that the search technology applied for document selection was determined properly; the graphical interface of the real web database was likewise confirmed.
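Gatling scenarios are written in Scala, but the measurement loop they automate (virtual users issuing requests, with latency and throughput collected) can be sketched with Python threads as a rough stand-in. The request_fn parameter and the returned statistics are illustrative, not the slov-lex measurement set.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, n_users, requests_per_user):
    """Crude stand-in for a load-test scenario: n_users virtual users each call
    request_fn requests_per_user times; returns request count, throughput, and
    mean latency (list.append is atomic in CPython, so the shared list is safe).
    """
    latencies = []
    def user():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            request_fn()                       # one simulated document search
            latencies.append(time.perf_counter() - t0)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(user)
    elapsed = time.perf_counter() - start
    return {
        "requests": len(latencies),
        "throughput_rps": len(latencies) / elapsed,
        "mean_latency_s": sum(latencies) / len(latencies),
    }
```

Varying n_users while watching throughput and latency is the "load changing" step the article describes; a real test would of course issue HTTP requests against the webpage.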
Focused crawlers (also known as subject-oriented crawlers), the core of a vertical search engine, collect as many topic-specific web pages as they can to form a subject-oriented corpus for later data analysis or user querying. This paper surveys the popular algorithms used in focused web crawling, which basically comprise webpage analysis algorithms and crawling strategies (ways of prioritising the uniform resource locators (URLs) in the queue). The first experiment shows the advantages and disadvantages of three crawling strategies, indicating that best-first search with an appropriate heuristic is a smart choice for topic-oriented crawling, while depth-first search is of little use in focused crawling. A second experiment, comparing improved versions with a webpage analysis algorithm added, verifies that crawling strategies alone are not very efficient for focused crawling and that in most cases the two components must work together. In light of the experimental results and recent research, some directions for focused crawler algorithms are suggested.
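The best-first strategy favoured by the first experiment keeps frontier URLs in a priority queue ordered by a topic-relevance heuristic, always fetching the most promising URL next. A minimal sketch, where neighbors and score are caller-supplied stand-ins for link extraction and the relevance heuristic:

```python
import heapq

def best_first_crawl(seed, neighbors, score, budget):
    """Best-first focused crawling: pop the highest-scoring frontier URL,
    visit it, and push its unvisited out-links scored by the heuristic.

    heapq is a min-heap, so scores are negated to get max-first behaviour.
    """
    frontier = [(-score(seed), seed)]
    visited, order = set(), []
    while frontier and len(order) < budget:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)                      # "fetch" the page
        for nxt in neighbors(url):             # extracted out-links
            if nxt not in visited:
                heapq.heappush(frontier, (-score(nxt), nxt))
    return order
```

Replacing the priority queue with a plain stack would give the depth-first behaviour the experiment found unhelpful, since crawl order would then ignore topic relevance entirely.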
The COVID-19 pandemic has had a major impact on health care services, leading to a breakdown of public and private health systems worldwide. A major challenge was the scarcity of mechanical ventilators, which resulted in anaesthesia devices being used for this purpose. However, these are quite different from the mechanical ventilators used in Intensive Care Units, and some adaptations, such as the use of high fresh-gas flows to reduce CO2 rebreathing, were necessary to ensure patient safety. The objective of this study was to present a mathematical formula and develop a tool for adjusting the oxygen and air flows on the flow metres of anaesthesia devices that lack oxygen analysers, or whose analysers are not operational. A literature review was conducted using the main health databases and libraries as research sources: PubMed, the Virtual Health Library (VHL), SciELO, and Cochrane. The review included studies published in English, Spanish, and Portuguese; animal studies were excluded. A total of 11 references were included to support this article.
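The specific formula and tool are the authors'; what can be sketched here is the standard gas-blending arithmetic any such tool builds on: air is about 21% oxygen, so blending pure O2 with air at known flows yields a predictable delivered FiO2, and the relation can be inverted to find the O2 flow needed for a target FiO2. Flows are in litres per minute; this sketch is illustrative, not a clinical tool.

```python
def fio2_from_flows(o2_lpm, air_lpm):
    """Delivered FiO2 when pure O2 (100%) is blended with air (21% O2)."""
    return (1.00 * o2_lpm + 0.21 * air_lpm) / (o2_lpm + air_lpm)

def o2_flow_for_target(fio2_target, total_lpm):
    """O2 flow (the remainder being air) so the blend reaches fio2_target
    at a given total fresh-gas flow; valid for 0.21 <= fio2_target <= 1.0."""
    return total_lpm * (fio2_target - 0.21) / 0.79
```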
Webpage keyword extraction is very important for automatic webpage summarisation, retrieval, automatic question answering, character relation extraction, and other tasks. In this paper, an environment vector for each word is constructed from its lexical chain, context, frequency, and webpage attribute weights, according to the characteristics of keywords. A multi-factor table of words is thus constructed, and keyword extraction is cast as a two-class problem over that table: keyword versus non-keyword. Words are then classified with a support vector machine (SVM); this method can extract keywords that are unregistered words and can eliminate semantic ambiguities. Experimental results show that the method achieves higher precision and recall than a simple tf-idf algorithm.
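A toy version of the "multi-factor table" idea: each word becomes a small feature vector, and a decision rule separates keywords from non-keywords. Here a fixed linear rule with hypothetical weights stands in for the trained SVM, and only three factors (frequency, position, and a title/attribute weight) are modelled; the paper's lexical-chain and context factors are omitted.

```python
def word_features(word, tokens, title_tokens):
    """Build a tiny feature vector for one word occurring in tokens:
    [term frequency, earliness of first occurrence, title-presence weight]."""
    tf = tokens.count(word) / len(tokens)
    first_pos = 1.0 - tokens.index(word) / len(tokens)   # earlier -> closer to 1
    in_title = 1.0 if word in title_tokens else 0.0
    return [tf, first_pos, in_title]

def is_keyword(features, weights=(5.0, 1.0, 2.0), threshold=2.0):
    """Linear decision stand-in for the trained SVM: sign of w.x - threshold.
    The weights and threshold are invented for illustration; an SVM would
    learn the separating boundary from labelled keyword/non-keyword examples."""
    return sum(w * f for w, f in zip(weights, features)) > threshold
```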
With the explosive growth of Internet information, fetching real-time, relevant information becomes ever more important, which places higher demands on the speed of webpage classification, one of the common methods for retrieving and managing information. To obtain a more efficient classifier, this paper proposes a webpage classification method based on locality-sensitive hash functions. It contains three innovative modules: building a feature dictionary, mapping feature vectors to fingerprints with locality-sensitive hashing, and extending webpage features. Comparative results show that the proposed algorithm performs better, in less time, than the naive Bayes baseline.
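Locality-sensitive hashing of the SimHash family is one concrete way to map feature vectors to fingerprints so that similar pages receive fingerprints with small Hamming distance; whether this matches the paper's exact hash family is an assumption. A sketch over token lists:

```python
import hashlib

def simhash(tokens, bits=64):
    """SimHash-style fingerprint: each token votes +1/-1 on every bit position
    according to its own hash; the sign of the tally fixes the fingerprint bit.
    Token sets with large overlap produce fingerprints at small Hamming distance."""
    v = [0] * bits
    for tok in tokens:
        h = int.from_bytes(hashlib.md5(tok.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

A classifier can then compare cheap fixed-width fingerprints (or bucket them by fingerprint prefix) instead of full high-dimensional feature vectors, which is where the speed gain comes from.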
Funding (webpage classification study): supported by the National Natural Science Foundation of China under Grants No. 61100205 and No. 60873001, the Hi-Tech Research and Development Program of China under Grant No. 2011AA010705, and the Fundamental Research Funds for the Central Universities under Grant No. 2009RC0212.
Funding (focused crawler study): supported by the Research Fund for International Young Scientists of the National Natural Science Foundation of China under Grant No. 61550110248, and by Tibet Autonomous Region Key Scientific Research Projects under Grant No. Z2014A18G2-13.