The fraudulent website image is a vital information carrier for telecom fraud. The efficient and precise recognition of fraudulent website images is critical to combating and dealing with fraudulent websites. Current research on image recognition of fraudulent websites is mainly carried out at the level of image feature extraction and similarity study, which has such disadvantages as difficulty in obtaining image data, insufficient image analysis, and single identification types. This study develops a model based on the entropy method for image leader decision and Inception-v3 transfer learning to address these disadvantages. The data processing part of the model uses a breadth-search crawler to capture the image data. Then, the information in the images is evaluated with the entropy method, image weights are assigned, and the image leader is selected. In model training and prediction, transfer learning of the Inception-v3 model is introduced into image recognition of fraudulent websites. Using the selected image leaders to train the model, multiple types of fraudulent websites are identified with high accuracy. The experiment proves that this model has superior accuracy in recognizing images on fraudulent websites compared to other current models.
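The entropy-based weighting step summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes Shannon entropy over a flat list of grayscale pixel values and a simple pick-the-highest-entropy rule for the image leader.

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a sequence of grayscale pixel values."""
    counts = Counter(pixels)
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_image_leader(images):
    """Weight each image by its entropy and return the index of the
    highest-entropy image (the 'leader') plus normalized weights."""
    entropies = [shannon_entropy(img) for img in images]
    total = sum(entropies) or 1.0          # guard against all-zero entropies
    weights = [e / total for e in entropies]
    leader = max(range(len(images)), key=lambda i: entropies[i])
    return leader, weights

# Toy example: a flat image carries no information, a varied one carries more.
flat = [128] * 64                          # a single pixel value, entropy 0
varied = list(range(64))                   # 64 distinct values, entropy 6 bits
leader, weights = select_image_leader([flat, varied])
```

A real pipeline would compute this over image histograms from the crawler's captures before handing the selected leaders to the Inception-v3 fine-tuning stage.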
Website fingerprinting, also known as WF, is a traffic analysis attack that enables local eavesdroppers to infer a user's browsing destination, even when the user is on the Tor anonymity network. While advanced attacks based on deep neural networks (DNN) can perform feature engineering and attain accuracy rates of over 98%, research has demonstrated that DNNs are vulnerable to adversarial samples. As a result, many researchers have explored using adversarial samples as a defense mechanism against DNN-based WF attacks and have achieved considerable success. However, these methods suffer from high bandwidth overhead or require access to the target model, which is unrealistic. This paper proposes CMAES-WFD, a black-box WF defense based on adversarial samples. The process of generating adversarial examples is transformed into a constrained optimization problem solved with the Covariance Matrix Adaptation Evolution Strategy (CMAES) optimization algorithm. Perturbations are injected into local parts of the original traffic to control bandwidth overhead. According to the experimental results, CMAES-WFD significantly decreased the accuracy of Deep Fingerprinting (DF) and VarCnn to below 8.3% while keeping the bandwidth overhead to a maximum of only 14.6% and 20.5%, respectively. Notably, for Automated Website Fingerprinting (AWF), which has a simple structure, CMAES-WFD reduced the classification accuracy to only 6.7% with a bandwidth overhead of less than 7.4%. Moreover, CMAES-WFD was demonstrated to be robust against adversarial training to a certain extent.
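The constrained search behind CMAES-WFD can be illustrated with a deliberately simplified evolution strategy: a (1+1)-ES stand-in for full CMA-ES, with a toy surrogate in place of a real traffic classifier. The function names, the surrogate, and the form of the overhead constraint are all assumptions for illustration.

```python
import random

def evolve_perturbation(loss, dim, budget=200, sigma=0.1, max_overhead=0.2, seed=0):
    """Simplified (1+1) evolution strategy: search for a non-negative padding
    vector that maximizes classifier loss while keeping the average added
    traffic per position (a crude bandwidth-overhead proxy) under a cap."""
    rng = random.Random(seed)
    x = [0.0] * dim                         # start with no perturbation
    best = loss(x)
    for _ in range(budget):
        cand = [max(0.0, xi + rng.gauss(0, sigma)) for xi in x]
        if sum(cand) / dim > max_overhead:  # reject candidates over the budget
            continue
        f = loss(cand)
        if f > best:                        # keep only improving candidates
            x, best = cand, f
    return x, best

# Toy surrogate "classifier loss": rewards perturbation mass on one burst
# position while penalizing total padding.
surrogate = lambda v: v[3] - 0.1 * sum(v)
pert, score = evolve_perturbation(surrogate, dim=8)
```

Full CMA-ES additionally adapts a covariance matrix over the search distribution, which is what lets it handle correlated perturbation dimensions efficiently.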
In Japanese 'e-government' policy, called 'e-Japan', the 'administrative document management system' functions as an information searching system. On the other hand, this system has also generated the problem that it is not fully functioning as a means for information sharing within a governmental agency. The purpose of this research is therefore to find how the administrative document management system can support information sharing in an administrative organization. For this purpose, this paper first considers the current status and some problems. Secondly, it proposes the idea and constructs some information systems using the administrative official Website. This is the method and approach of this research. As a conclusion, the proposed information system functions as an information sharing support system.
The feature analysis of fraudulent websites is of great significance to the combat, prevention and control of telecom fraud crimes. Aiming to address the shortcomings of existing analytical approaches, i.e. their single dimension and vulnerability to anti-reconnaissance, this paper adopts Stacking, an ensemble learning algorithm, combines multiple modalities such as text, image and URL, and proposes a multimodal fraudulent website identification method that ensembles heterogeneous models. Cross-validation is first used in the training of multiple largely different base classifiers that are strong in learning, such as the BERT model, residual neural network (ResNet) and logistic regression model. Classification of the text, image and URL features is then performed respectively. The results of the base classifiers are taken as the input of the meta-classifier, whose output is eventually used as the final identification. The study indicates that the fusion method is more effective in identifying fraudulent websites than the single-modal method, with recall increased by at least 1%. In addition, deployment of the algorithm in a real Internet environment shows an improvement in identification accuracy of at least 1.9% compared with other fusion methods.
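The stacking pipeline described above can be sketched with scikit-learn, assuming its availability. The per-modality base learners from the paper (BERT for text, ResNet for images, logistic regression for URLs) are replaced here by small stand-in models on synthetic numeric features, so this shows only the data flow of stacking with out-of-fold predictions, not the paper's actual system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for fused text/image/URL features.
X, y = make_classification(n_samples=400, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(),   # meta-classifier over base outputs
    cv=5,                                   # out-of-fold predictions for stacking
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv` argument is the important part: base-classifier outputs fed to the meta-classifier come from cross-validated predictions, which is what keeps the meta-level from overfitting to the base models' training error.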
Phishing websites present a severe cybersecurity risk since they can lead to financial losses, data breaches, and user privacy violations. This study uses machine learning approaches to solve the problem of phishing website detection. Using artificial intelligence, the project aims to provide efficient techniques for locating and thwarting these dangerous websites. The study goals were attained by performing a thorough literature analysis to investigate several models and methods often used in phishing website identification. Logistic Regression, K-Nearest Neighbors, Decision Trees, Random Forests, Support Vector Classifiers, Linear Support Vector Classifiers, and Naive Bayes were all used in the inquiry. This research covers the benefits and drawbacks of several machine learning approaches, illuminating how well-suited each is to overcoming the difficulties of locating and countering phishing websites. The insights gained from this literature review guide the selection and implementation of appropriate models and methods in future research and real-world applications related to phishing detection. The study evaluates and compares the accuracy, precision and recall of several machine learning models in detecting phishing website URLs.
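The metrics compared in the study reduce to counts of true/false positives and negatives. A self-contained sketch, with made-up labels rather than the study's data:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision and recall for binary labels (1 = phishing)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]   # one missed phish, one false alarm
acc, prec, rec = classification_metrics(y_true, y_pred)
```

For phishing detection, recall (the share of real phishing URLs caught) is usually weighted more heavily than raw accuracy, since a missed phishing site is costlier than a false alarm.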
Phishing attacks pose a significant security threat by masquerading as trustworthy entities to steal sensitive information, a problem that persists despite user awareness. This study addresses the pressing issue of phishing attacks on websites and assesses the performance of three prominent Machine Learning (ML) models—Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM)—utilizing authentic datasets sourced from the Kaggle and Mendeley repositories. Extensive experimentation and analysis reveal that the CNN model achieves the best accuracy of 98%, while LSTM shows the lowest accuracy of 96%. These findings underscore the potential of ML techniques in enhancing phishing detection systems and bolstering cybersecurity measures against evolving phishing tactics, offering a promising avenue for safeguarding sensitive information and online security.
In order to improve the accuracy and integrality of mining data records from the web, the concepts of isomorphic page and directory page and three algorithms are proposed. An isomorphic web page is a set of web pages that have uniform structure, differing only in main information. A web page which contains many links that link to isomorphic web pages is called a directory page. Algorithm 1 can find directory web pages in a website using an adjacent-links similarity analysis method. It first sorts the links, and then counts the links in each directory. If the count is greater than a given threshold, it finds the similar sub-page links in the directory and gives the results. A function for isomorphic web page judgment is also proposed. Algorithm 2 can mine data records from an isomorphic page using a noise information filter. It is based on the fact that the noise information is the same in two isomorphic pages; only the main information differs. Algorithm 3 can mine data records from an entire website using spider technology. The experiment shows that the proposed algorithms can mine data records more intactly than the existing algorithms. Mining data records from isomorphic pages is an efficient method.
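Algorithm 1's idea of sorting links and counting siblings per directory can be approximated as follows. This is a sketch with an assumed threshold, not the paper's exact procedure; a real implementation would follow up with the isomorphic-page judgment on the grouped links.

```python
from collections import Counter
from urllib.parse import urlparse
import posixpath

def find_directory_candidates(links, threshold=3):
    """Group a page's outgoing links by their parent URL directory and
    report directories holding at least `threshold` sibling links --
    a rough signal that the linked pages are isomorphic listings."""
    dirs = Counter()
    for link in sorted(links):                 # sort links first, as in Algorithm 1
        path = urlparse(link).path
        dirs[posixpath.dirname(path)] += 1     # count links per directory
    return {d: n for d, n in dirs.items() if n >= threshold}

links = [
    "http://example.com/item/1.html",
    "http://example.com/item/2.html",
    "http://example.com/item/3.html",
    "http://example.com/about.html",
]
candidates = find_directory_candidates(links)
```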
To improve the efficiency of the management of large farms, a digitalized system of economic statistics is designed based on the Internet platform of the digitalized agricultural integrated system of Friendship Farm, the largest farm in the world. The system can also realize data storage by using Access database technology. A dynamic website system, based on ASP technology, is used to implement the on-line inquiry of the statistical indices of the agricultural economy and the diagrams of the indices for every year. Furthermore, it can provide the value of comprehensive indicators of the farm's economic profits for every year and a trend chart of the comprehensive appraisal of economic development, by using principal component analysis. An early-warning indicator boundary is decided based on the majority principle. The system can realize the farm's terminal data input with effective data-collecting channels and a normative gathering scope and system. This system breaks through the stand-alone database systems of previous agricultural digitalization research to realize a database system in the Internet environment by integrating the existing technologies in China. The system lays a foundation for further integrated research on the network platform of the digitalized agricultural integrated system in Friendship Farm.
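The principal-component composite indicator mentioned above can be sketched with NumPy on toy data. The indicator matrix, the use of the first component alone, and the explained-variance share are assumptions for illustration, not the system's published method.

```python
import numpy as np

def composite_index(X):
    """First principal component as a composite economic indicator:
    standardize the indicator matrix (years x indicators), then project
    onto the leading eigenvector of its covariance/correlation matrix."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                        # leading component
    return Z @ pc1, eigvals[-1] / eigvals.sum() # yearly scores, variance share

# Five years of three strongly correlated toy indicators
# (e.g. yield, revenue, profit).
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 1))
X = np.hstack([base + 0.05 * rng.normal(size=(5, 1)) for _ in range(3)])
scores, explained = composite_index(X)
```

When the indicators are highly correlated, the first component captures most of the variance, so its yearly scores serve as a single trend line for the comprehensive appraisal chart.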
With the rapid development of the Web, there are more and more Web databases available for users to access. At the same time, job searchers often have difficulties in first finding the right sources and then querying over them, so providing an integrated job search system over Web databases has become a Web application in high demand. Based on this consideration, we build a deep Web data integration system that supports unified access for users to multiple job Web sites as a job meta-search engine. In this paper, the architecture of the system is given first, and the key components of the system are introduced.
Nowadays, an increasing number of web applications require identification registration. However, the behavior of website registration has never been thoroughly studied. We use the database provided by the Chinese Software Develop Net (CSDN) to provide a complete perspective on this research point. We concentrate on the following three aspects: complexity, correlation, and preference. From these analyses, we draw the following conclusions: firstly, a considerable number of users have not realized the importance of identification and are using very simple identifications that can be attacked very easily. Secondly, there is a strong complexity correlation among the three parts of an identification. Thirdly, the top three passwords that users like are 123456789, 12345678 and 11111111, and the top three email providers that they prefer are NETEASE, qq and sina. Further, we provide some suggestions to improve the quality of user passwords.
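The preference and complexity analysis described above boils down to frequency counting plus a weakness rule. A sketch with an illustrative weakness heuristic (short, single repeated character, or all digits), which is not the paper's exact criteria:

```python
from collections import Counter

def password_report(passwords, top=3):
    """Rank the most common passwords and estimate the share of
    trivially weak ones."""
    freq = Counter(passwords)
    weak = sum(n for pw, n in freq.items()
               if len(pw) < 8 or len(set(pw)) == 1 or pw.isdigit())
    return freq.most_common(top), weak / len(passwords)

# Invented sample echoing the kinds of passwords the study reports.
sample = ["123456789", "123456789", "12345678", "11111111",
          "correct-horse-battery", "hunter2"]
top3, weak_share = password_report(sample)
```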
A high-quality website is crucial to a company for a successful e-business. The technique maintainers are always faced with the problem of how to locate the prime factors which affect the quality of the websites. In view of the complexity and fuzziness of BtoC websites, a quality diagnosis method based on a multi-attribute and multi-layer fuzzy comprehensive evaluation model including all the quality factors is proposed. A simple example of diagnosis on a famous domestic BtoC website shows the specific steps of this method and proves its validity. The process of the quality evaluation and diagnosis system is illustrated and the computer program of the diagnosis is given.
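A single layer of the multi-layer fuzzy comprehensive evaluation can be sketched as a weight vector composed with a membership matrix. The factor names, weights and memberships below are invented for illustration; a multi-layer model would feed each sub-layer's result vector upward as a row of the next layer's matrix.

```python
import numpy as np

def fuzzy_evaluate(weights, R):
    """Single-layer fuzzy comprehensive evaluation with the weighted-average
    operator: combine the factor weight vector W with the membership matrix R
    (factors x judgement grades) into a grade membership vector B = W . R."""
    B = np.asarray(weights) @ np.asarray(R)
    return B / B.sum()                      # normalize grade memberships

# Hypothetical BtoC site factors: usability, speed, security.
W = [0.5, 0.3, 0.2]
R = [[0.6, 0.3, 0.1],                      # memberships in (good, fair, poor)
     [0.2, 0.5, 0.3],
     [0.7, 0.2, 0.1]]
B = fuzzy_evaluate(W, R)
grade = ["good", "fair", "poor"][int(np.argmax(B))]
```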
The agricultural product trading website is not only an important way to realize agricultural informatization, but also a main manifestation of it. Based on a preliminary understanding of the content and characteristics of China's agricultural product trading websites, the paper builds a scientific evaluation indicator system and evaluates 50 typical agricultural product trading websites objectively by using a classification and grading method. The results show that the overall construction level of China's agricultural product trading websites is average, and there are obvious differences between regions; the lack of website commercial functions and the lag of informatization are the main factors restricting the development of agricultural product trading websites.
Phishing attacks are security attacks that do not affect only individuals' or organizations' websites but may also affect Internet of Things (IoT) devices and networks. The IoT environment is an exposed environment for such attacks. Attackers may use thingbot software for the dispersal of hidden junk emails that are not noticed by users. Machine and deep learning and other methods have been used to design detection methods for these attacks. However, there is still a need to enhance detection accuracy. Optimization of an ensemble classification method for phishing website (PW) detection is proposed in this study. A Genetic Algorithm (GA) was used to optimize the proposed method by tuning the parameters of several ensemble Machine Learning (ML) methods, including Random Forest (RF), AdaBoost (AB), XGBoost (XGB), Bagging (BA), GradientBoost (GB), and LightGBM (LGBM). This was accomplished by ranking the optimized classifiers to pick out the best ones as a base for the proposed method. A PW dataset that is made up of 4898 PWs and 6157 legitimate websites (LWs) was used for this study's experiments. As a result, detection accuracy was enhanced and reached 97.16 percent.
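The GA-based parameter tuning can be illustrated with a minimal genetic algorithm over a surrogate validation score. The hyperparameter names, bounds and the surrogate itself are assumptions; real tuning would evaluate actual classifiers (RF, XGBoost, etc.) at each candidate setting.

```python
import random

def genetic_tune(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal genetic algorithm for hyperparameter tuning: elitist
    selection, blend crossover and gaussian mutation over a real-valued
    parameter vector (one gene per hyperparameter)."""
    rng = random.Random(seed)
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)          # pick two elite parents
            child = [clip((x + y) / 2 + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        pop = elite + children                   # elitism keeps the best so far
    return max(pop, key=fitness)

# Surrogate validation score, peaked at learning_rate=0.1, n_estimators=200.
score = lambda p: -((p[0] - 0.1) ** 2 + ((p[1] - 200) / 400) ** 2)
best = genetic_tune(score, bounds=[(0.01, 1.0), (50, 500)])
```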
The advent of the Internet has witnessed a revolution in the business world. One typical example is the emergence of the B2B website. The present paper looks at the B2B website, a conventionalized digital text, in terms of its communicative purposes, move features and linguistic specialties, with the aim of presenting the generic structure of the B2B website and the principal linguistic features contributing to the realization of its communicative purposes. It is demonstrated that the B2B website is one instance of the promotional genres, and it has a lot in common with advertising English and "netspeak" in its lexico-grammatical features.
In this paper, we conduct research on big data and artificial-intelligence-aided decision-making mechanisms, with applications to video websites' homemade program innovation. Homemade video shows give new-media platform sites new possibilities for content production, and also give traditional media a breakthrough point in the Internet age. Homemade video programs help a site reduce the demand for copyright purchases, reduce costs, avoid homogeneous competition and enrich advertising marketing, while at the same time improving the profit pattern, organically combining content production and operation, and completing the strategic transformation. On the basis of these advantages, a site's homemade video programs can form a brand with higher brand influence. Our later research provides the literature survey for the related issues.
In the era of the Internet and globalisation, more and more international academics focus their attention on how city governments compete for talent, capital, and technology through website marketing to promote their economy and global status. However, 1) present research generally overlooks the importance of different types of elements in different marketing themes, 2) the combinations of marketing themes are still unknown, and 3) the presumption that the emphasised elements and specific combinations of marketing themes on official websites differentiate cities requires more cases to be understood. In light of this background, this study collects homepage elements of 49 Alpha world cities' official websites and quantitatively analyses the frequency of different types of elements, the marketing content themes, and the dissimilarity of the content of Chinese Alpha world cities. The results indicate that comprehensiveness and locality appear in the process of city marketing throughout official city websites. Overall, we make the following conclusions. 1) The importance of different kinds of elements differs significantly between the 49 Alpha world cities. 2) Based on various combinations of elements, the marketing content of Alpha world cities on official websites can be categorised into six themes: history and culture, government and information, construction and environment, government and living, construction and living, and general compound. 3) The marketing elements of the five Chinese Alpha world cities, including Hong Kong, Beijing, Shanghai, Taipei and Guangzhou, differ from those of the other 44 Alpha world cities: Chinese cities prefer to advertise their history and culture but rarely market citizens' activities. Moreover, Chinese cities' marketing mostly targets natives while the other 44 Alpha cities target external groups, and the locality of world cities' website marketing is reinforced especially on native-language edition websites. This study ultimately finds that the Chinese edition websites of the five Chinese cities place more focus on introducing local historical buildings, administrative services, and internal business information than the English edition websites do.
In the contemporary world, digital content that is subject to copyright is facing significant challenges from the act of copyright infringement. Billions of dollars are lost annually because of this illegal act. The current most effective trend to tackle this problem is believed to be blocking those websites, particularly through affiliated government bodies. To do so, an effective detection mechanism is a necessary first step. Some researchers have used various approaches to analyze the possible common features of suspected piracy websites. For instance, most of these websites serve online advertisements, which are considered their main source of revenue. In addition, these advertisements have some common attributes that make them unique as compared to advertisements posted on normal or legitimate websites. They usually encompass keywords such as click-words (words that redirect to install malicious software) and words frequently used in illegal gambling, illegal sexual acts, and so on. This makes them ideal to be used as one of the key features in the process of successfully detecting websites involved in the act of copyright infringement. Research has been conducted to identify advertisements served on suspected piracy websites. However, these studies use a static approach that relies mainly on manual scanning for the aforementioned keywords. This brings with it some limitations, particularly in coping with the dynamic and ever-changing behavior of advertisements posted on these websites. Therefore, we propose a technique that can continuously fine-tune itself and is intelligent enough to effectively identify advertisement (Ad) banners extracted from suspected piracy websites. We have done this by leveraging the power of machine learning algorithms, particularly the support vector machine with the word2vec word-embedding model. After applying the proposed technique to 1015 Ad banners collected from 98 suspected piracy websites and 90 normal or legitimate websites, we were able to successfully identify Ad banners extracted from suspected piracy websites with an accuracy of 97%. We present this technique with the hope that it will be a useful tool for various effective piracy website detection approaches. To our knowledge, this is the first approach that uses machine learning to identify Ad banners served on suspected piracy websites.
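The word2vec-plus-SVM pipeline can be sketched as follows, assuming scikit-learn is available. The two-dimensional "word vectors" are hand-made stand-ins for a trained word2vec model (in practice these would come from something like gensim's Word2Vec trained on banner text), and the banners and labels are invented.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy word vectors standing in for a trained word2vec model.
emb = {"casino": [1.0, 0.0], "jackpot": [0.9, 0.1], "bet": [0.8, 0.2],
       "shoes": [0.0, 1.0], "sale": [0.1, 0.9], "shipping": [0.2, 0.8]}

def banner_vector(text, dim=2):
    """Average the word vectors of known tokens (zero vector if none)."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

banners = ["casino jackpot bet", "bet casino", "shoes sale", "sale shipping shoes"]
labels = [1, 1, 0, 0]                    # 1 = piracy-site-style Ad banner
X = np.array([banner_vector(b) for b in banners])
clf = LinearSVC().fit(X, labels)         # linear SVM on averaged embeddings
pred = clf.predict([banner_vector("jackpot bet now")])
```

Averaging word vectors is the simplest way to get a fixed-length banner representation; it discards word order but preserves the topical signal the keywords carry.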
Purpose: To get a better understanding of the way in which university rankings are used. Design/methodology/approach: Detailed analysis of the activities of visitors to the website of the CWTS Leiden Ranking. Findings: Visitors to the Leiden Ranking website originate disproportionally from specific countries. They are more interested in impact indicators than in collaboration indicators, while they are about equally interested in size-dependent indicators and size-independent indicators. Many visitors do not seem to realize that they should decide themselves which criterion they consider most appropriate for ranking universities. Research limitations: The analysis is restricted to the website of a single university ranking. Moreover, the analysis does not provide any detailed insights into the motivations of visitors of university ranking websites. Practical implications: The Leiden Ranking website may need to be improved in order to make more clear to visitors that they should decide themselves which criterion they want to use for ranking universities. Originality/value: This is the first analysis of the activities of visitors of a university ranking website.
文摘The feature analysis of fraudulent websites is of great significance to the combat,prevention and control of telecom fraud crimes.Aiming to address the shortcomings of existing analytical approaches,i.e.single dimension and venerability to anti-reconnaissance,this paper adopts the Stacking,the ensemble learning algorithm,combines multiple modalities such as text,image and URL,and proposes a multimodal fraudulent website identification method by ensembling heterogeneous models.Crossvalidation is first used in the training of multiple largely different base classifiers that are strong in learning,such as BERT model,residual neural network(ResNet)and logistic regression model.Classification of the text,image and URL features are then performed respectively.The results of the base classifiers are taken as the input of the meta-classifier,and the output of which is eventually used as the final identification.The study indicates that the fusion method is more effective in identifying fraudulent websites than the single-modal method,and the recall is increased by at least 1%.In addition,the deployment of the algorithm to the real Internet environment shows the improvement of the identification accuracy by at least 1.9%compared with other fusion methods.
文摘Phishing websites present a severe cybersecurity risk since they can lead to financial losses,data breaches,and user privacy violations.This study uses machine learning approaches to solve the problem of phishing website detection.Using artificial intelligence,the project aims to provide efficient techniques for locating and thwarting these dangerous websites.The study goals were attained by performing a thorough literature analysis to investigate several models and methods often used in phishing website identification.Logistic Regression,K-Nearest Neighbors,Decision Trees,Random Forests,Support Vector Classifiers,Linear Support Vector Classifiers,and Naive Bayes were all used in the inquiry.This research covers the benefits and drawbacks of several Machine Learning approaches,illuminating how well-suited each is to overcome the difficulties in locating and countering phishing website predictions.The insights gained from this literature review guide the selection and implementation of appropriate models and methods in future research and real-world applications related to phishing detections.The study evaluates and compares accuracy,precision and recalls of several machine learning models in detecting phishing website URL’s detection.
文摘Phishing attacks pose a significant security threat by masquerading as trustworthy entities to steal sensitive information,a problem that persists despite user awareness.This study addresses the pressing issue of phishing attacks on websites and assesses the performance of three prominent Machine Learning(ML)models—Artificial Neural Networks(ANN),Convolutional Neural Networks(CNN),and Long Short-Term Memory(LSTM)—utilizing authentic datasets sourced from Kaggle and Mendeley repositories.Extensive experimentation and analysis reveal that the CNN model achieves a better accuracy of 98%.On the other hand,LSTM shows the lowest accuracy of 96%.These findings underscore the potential of ML techniques in enhancing phishing detection systems and bolstering cybersecurity measures against evolving phishing tactics,offering a promising avenue for safeguarding sensitive information and online security.
Abstract: To improve the accuracy and completeness of mining data records from the web, the concepts of the isomorphic page and the directory page are introduced and three algorithms are proposed. Isomorphic web pages are a set of web pages with a uniform structure that differ only in their main information; a web page containing many links to isomorphic web pages is called a directory page. Algorithm 1 finds directory pages on a website using an adjacent-link similarity analysis: it first sorts the links, then counts the links in each directory; if the count exceeds a given threshold, it finds the similar sub-page links in the directory and outputs the results. A judgment function for isomorphic web pages is also proposed. Algorithm 2 mines data records from an isomorphic page using a noise-information filter, based on the fact that the noise information in two isomorphic pages is identical and only the main information differs. Algorithm 3 mines data records from an entire website using a web crawler. Experiments show that the proposed algorithms mine data records more completely than existing algorithms, and that mining data records from isomorphic pages is an efficient method.
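The noise-filtering idea behind Algorithm 2 can be sketched in a few lines: content shared by two isomorphic pages is treated as template noise, and only the differing parts are kept as candidate data records. This is a minimal line-level illustration with a hypothetical helper name; the paper operates on page structure rather than raw text lines.

```python
def extract_records(page_a, page_b):
    """Toy noise filter for two isomorphic pages (hypothetical helper).

    Lines appearing in both pages are assumed to be shared template
    noise; lines unique to each page are kept as its data records.
    """
    noise = set(page_a.splitlines()) & set(page_b.splitlines())
    records_a = [ln for ln in page_a.splitlines() if ln not in noise]
    records_b = [ln for ln in page_b.splitlines() if ln not in noise]
    return records_a, records_b

page1 = "header\nApple $3\nfooter"
page2 = "header\nBanana $2\nfooter"
print(extract_records(page1, page2))  # (['Apple $3'], ['Banana $2'])
```

The shared "header" and "footer" lines are filtered out as noise, leaving only the main information of each page.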
Funding: The Key Technologies R&D Program of Heilongjiang Province (No. GB06B601)
Abstract: To improve the efficiency of managing large farms, a digitalized economic-statistics system is designed on the Internet platform of the digitalized agricultural integrated system of Friendship Farm, the largest farm in the world. The system realizes data storage using Access database technology. A dynamic website system based on ASP technology implements online inquiry of the statistical indices of the agricultural economy and the yearly diagrams of those indices. Furthermore, using principal component analysis, it provides the yearly values of comprehensive indicators of the farm's economic profits and a trend chart of the comprehensive appraisal of economic development. An early-warning indicator boundary is decided based on the majority principle. The system realizes the farm's terminal data input with effective data-collection channels and a normative gathering scope and system. By integrating existing technologies in China, it moves beyond the stand-alone database systems of earlier agricultural digitalization research to a database system in the Internet environment, laying a foundation for further integrated research on the network platform of Friendship Farm's digitalized agricultural integrated system.
Funding: Supported by the Natural Science Foundation of China (60573091, 60273018), the National Basic Research and Development Program of China (2003CB317000), and the Key Project of the Ministry of Education of China (03044).
Abstract: With the rapid development of the Web, more and more Web databases are available for users to access. At the same time, job searchers often have difficulty first finding the right sources and then querying over them, so an integrated job-search system over Web databases has become a Web application in high demand. Based on this consideration, we build a deep Web data integration system that supports unified access to multiple job websites, acting as a job meta-search engine. This paper first presents the architecture of the system and then introduces its key components.
基金supported by the Foundation for Key Program of Ministry of Education, China under Grant No.311007National Science Foundation Project of China under Grants No. 61202079, No.61170225, No.61271199+1 种基金the Fundamental Research Funds for the Central Universities under Grant No.FRF-TP-09-015Athe Fundamental Research Funds in Beijing Jiaotong University under Grant No.W11JB00630
Abstract: Nowadays, an increasing number of web applications require registration of an identification. However, the behavior of website registration has never been thoroughly studied. We use the database provided by the Chinese Software Developer Network (CSDN) to provide a complete perspective on this question, concentrating on three aspects: complexity, correlation, and preference. From these analyses we draw the following conclusions. First, a considerable number of users have not realized the importance of identification and use very simple identifications that can be attacked very easily. Second, there is a strong complexity correlation among the three parts of an identification. Third, the top three passwords users choose are 123456789, 12345678, and 11111111, and the top three email providers they prefer are NETEASE, qq, and sina. Further, we provide some suggestions to improve the quality of user passwords.
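The preference analysis described above reduces to a frequency count over the leaked credentials. A minimal sketch on made-up data (not the CSDN dataset) could look like this:

```python
from collections import Counter

# Hypothetical leaked password list (illustrative only, not CSDN data).
passwords = ["123456789", "12345678", "123456789", "11111111",
             "123456789", "12345678", "qwerty"]

# most_common(n) returns the n most frequent items with their counts.
top3 = Counter(passwords).most_common(3)
print(top3)  # [('123456789', 3), ('12345678', 2), ('11111111', 1)]
```

The same one-liner applied to usernames or email domains yields the provider-preference ranking reported in the study.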
基金Supported by Key Discipline Project fromScience and Technology Committee of Shanghai(No.04JC14009) and the Research Fund ofDonghua University(No.108 10 0044934)
Abstract: A high-quality website is crucial for a company to run a successful e-business. Technical maintainers constantly face the problem of locating the prime factors that affect the quality of their websites. In view of the complexity and fuzziness of BtoC websites, a quality-diagnosis method is proposed based on a multi-attribute, multi-layer fuzzy comprehensive evaluation model that covers all the quality factors. A simple diagnosis of a well-known domestic BtoC website shows the specific steps of the method and proves its validity. The process of the quality evaluation and diagnosis system is illustrated, and the computer program for diagnosis is given.
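At each layer, a fuzzy comprehensive evaluation of this kind boils down to multiplying a factor-weight vector by a membership matrix. The sketch below shows a single layer; the factor names, weights, and membership degrees are invented for illustration and are not taken from the paper.

```python
def fuzzy_evaluate(weights, membership):
    """Single-layer fuzzy comprehensive evaluation: B = W * R.

    weights    -- importance of each quality factor (sums to 1)
    membership -- membership[i][j]: degree to which factor i belongs
                  to evaluation grade j (e.g. good/fair/poor)
    """
    grades = len(membership[0])
    return [round(sum(w * row[j] for w, row in zip(weights, membership)), 4)
            for j in range(grades)]

# Hypothetical BtoC website factors: usability, security, content.
W = [0.5, 0.3, 0.2]
R = [[0.6, 0.3, 0.1],   # usability  -> good/fair/poor
     [0.2, 0.5, 0.3],   # security
     [0.4, 0.4, 0.2]]   # content
print(fuzzy_evaluate(W, R))  # [0.44, 0.38, 0.18]
```

The grade with the largest component of B ("good" here, at 0.44) is the overall verdict; in a multi-layer model, the B vector of one layer feeds into the membership matrix of the layer above.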
Funding: Supported by the Shandong Provincial Natural Science Foundation (ZR2011DM008)
Abstract: Agricultural product trading websites are not only an important way to realize agricultural informatization but also its main manifestation. Based on a preliminary understanding of the content and characteristics of China's agricultural product trading websites, this paper builds a scientific evaluation indicator system and objectively evaluates 50 typical agricultural product trading websites using a classification and grading method. The results show that the overall construction level of China's agricultural product trading websites is average, with obvious differences between regions; the lack of commercial functions and the lag of informatization are the main factors restricting the development of these websites.
Funding: This research has been funded by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through Project Number RG-20023.
Abstract: Phishing attacks are security attacks that affect not only individuals' or organizations' websites but may also affect Internet of Things (IoT) devices and networks, and the IoT environment is particularly exposed to such attacks. Attackers may use thingbot software to disperse hidden junk emails that go unnoticed by users. Machine learning, deep learning, and other methods have been used to design detection methods for these attacks, but detection accuracy still needs to be enhanced. This study proposes an optimized ensemble classification method for phishing website (PW) detection. A Genetic Algorithm (GA) was used to optimize the proposed method by tuning the parameters of several ensemble Machine Learning (ML) methods, including Random Forest (RF), AdaBoost (AB), XGBoost (XGB), Bagging (BA), GradientBoost (GB), and LightGBM (LGBM). The optimized classifiers were then ranked to pick out the best ones as the base of the proposed method. A PW dataset comprising 4898 PWs and 6157 legitimate websites (LWs) was used for the experiments. As a result, detection accuracy was enhanced, reaching 97.16%.
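A GA tuning loop of the kind described can be sketched with a toy fitness function standing in for cross-validated classifier accuracy. In real use the fitness call would train and score an actual ensemble (RF, XGB, etc.); here the fitness surface, parameter ranges, and all numbers are invented for illustration.

```python
import random

def fitness(n_estimators, max_depth):
    """Stand-in for cross-validated accuracy of an ensemble classifier.
    Peaks at n_estimators=200, max_depth=10 (an invented surface)."""
    return 1.0 - ((n_estimators - 200) / 400) ** 2 - ((max_depth - 10) / 20) ** 2

def genetic_tune(pop_size=20, generations=30, seed=0):
    """Elitist GA over two hyperparameters (illustrative sketch)."""
    rng = random.Random(seed)
    pop = [(rng.randint(10, 400), rng.randint(2, 30)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                # crossover: swap coordinates
            if rng.random() < 0.3:              # mutation: small random step
                child = (max(10, child[0] + rng.randint(-30, 30)),
                         max(2, child[1] + rng.randint(-3, 3)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

best = genetic_tune()
print(best, round(fitness(*best), 3))
```

Because the parents are carried over each generation, the best fitness never decreases, and the loop converges toward the peak of the (here artificial) accuracy surface.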
Abstract: The advent of the Internet has witnessed a revolution in the business world, one typical example being the emergence of the B2B website. The present paper looks at the B2B website, a conventionalized digital text, in terms of its communicative purposes, move features, and linguistic specialties, with the aim of presenting the generic structure of the B2B website and the principal linguistic features contributing to the realization of its communicative purposes. It is demonstrated that the B2B website is an instance of the promotional genres and has much in common with advertising English and "netspeak" in its lexico-grammatical features.
Abstract: In this paper, we conduct research on big data and artificial-intelligence-aided decision-making mechanisms, with applications to the innovation of video websites' self-produced programs. Self-produced video programs open new possibilities for content production on new-media platforms and offer traditional media a breakthrough point in the Internet age. Producing programs in-house helps a site reduce its demand for purchased copyrights, lower costs, and avoid homogeneous competition, while enriching advertising and marketing, improving the profit model, and organically combining content production with operations to complete a strategic transformation. Building on these advantages, a site's self-produced programs can form a brand with considerable influence. Our subsequent research provides a literature survey of the related issues.
Funding: National Natural Science Foundation of China (No. 41320104001, 41871140), the Scientific Specific Fund Project of the Collaboration and Innovation Center of Coordinating Urban and Rural in Shanxi Province (No. SXCXCXZD2017-002), and the Special Fund Project of the Basic Research Service of Sun Yat-sen University (No. 17lgjc04)
Abstract: In the era of the Internet and globalisation, more and more international academics focus on how city governments compete for talent, capital, and technology through website marketing to promote their economies and global status. However, 1) present research generally overlooks the importance of different types of elements in different marketing themes, 2) the combinations of marketing themes are still unknown, and 3) the presumption that the elements emphasised and the specific combination of marketing themes on official websites differentiate cities requires more cases to be tested. Against this background, this study collects homepage elements from the official websites of 49 Alpha world cities and quantitatively analyses the frequency of different types of elements, the marketing content themes, and the dissimilarity of the content of Chinese Alpha world cities. The results indicate that both comprehensiveness and locality appear in city marketing through official city websites. Overall, we draw the following conclusions. 1) The importance of different kinds of elements differs significantly across the 49 Alpha world cities. 2) Based on various combinations of elements, the marketing content of Alpha world cities' official websites can be categorised into six themes: history and culture, government and information, construction and environment, government and living, construction and living, and general compound. 3) The marketing elements of the five Chinese Alpha world cities, namely Hong Kong, Beijing, Shanghai, Taipei, and Guangzhou, differ from those of the other 44 Alpha world cities: Chinese cities prefer to advertise their history and culture but rarely market citizens' activities. Moreover, Chinese cities' marketing mostly targets natives, while the other 44 Alpha cities target external groups, and the locality of world cities' website marketing is reinforced, especially on native-language editions of the websites. This study ultimately finds that the Chinese-edition websites of the five Chinese cities place more focus on introducing local historical buildings, administrative services, and internal business information than the English-edition websites do.
Funding: This research project was supported by the Ministry of Culture, Sports, and Tourism (MCST) and the Korea Copyright Commission in 2021 (2019-PF-9500).
Abstract: In the contemporary world, digital content subject to copyright faces significant challenges from copyright infringement, with billions of dollars lost annually to this illegal act. The trend currently believed most effective for tackling the problem is blocking the offending websites, particularly through affiliated government bodies, and an effective detection mechanism is the necessary first step. Researchers have used various approaches to analyze the common features of suspected piracy websites. For instance, most of these websites serve online advertisements, which are considered their main source of revenue, and these advertisements share attributes that distinguish them from advertisements posted on normal or legitimate websites. They usually encompass keywords such as click-words (words that redirect to the installation of malicious software) and words frequently used in illegal gambling, illegal sexual acts, and so on. This makes them ideal as a key feature for successfully detecting websites involved in copyright infringement. Research has been conducted to identify advertisements served on suspected piracy websites, but these studies use a static approach that relies mainly on manual scanning for the aforementioned keywords, which limits their ability to cope with the dynamic, ever-changing behavior of advertisements posted on these websites. Therefore, we propose a technique that can continuously fine-tune itself and is intelligent enough to effectively identify advertisement (Ad) banners extracted from suspected piracy websites, leveraging machine learning algorithms, particularly a support vector machine with the word2vec word-embedding model. After applying the proposed technique to 1015 Ad banners collected from 98 suspected piracy websites and 90 normal or legitimate websites, we were able to identify Ad banners extracted from suspected piracy websites with an accuracy of 97%. We present this technique in the hope that it will be a useful tool for various piracy-website detection approaches. To our knowledge, this is the first approach that uses machine learning to identify Ad banners served on suspected piracy websites.
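The static keyword-scanning baseline that the paper improves upon can be sketched as a simple click-word feature extractor; a learned pipeline (word2vec plus SVM) would replace this hand-built lexicon with embeddings. The click-word list below is hypothetical, not the paper's.

```python
import re

# Hypothetical click-word lexicon (illustrative; the paper's list is not given).
CLICK_WORDS = {"download", "free", "install", "win", "jackpot"}

def clickword_features(banner_text):
    """Return (hit_count, hit_ratio) of click-words in a banner's text."""
    tokens = re.findall(r"[a-z]+", banner_text.lower())
    hits = sum(1 for t in tokens if t in CLICK_WORDS)
    return hits, round(hits / max(len(tokens), 1), 3)

print(clickword_features("FREE download - win a JACKPOT now!"))  # (4, 0.667)
```

A banner with a high hit ratio is a candidate piracy-site advertisement; the limitation noted in the abstract is that this lexicon must be maintained by hand, which is exactly what the embedding-based classifier avoids.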
Abstract: Purpose: To get a better understanding of the way in which university rankings are used. Design/methodology/approach: Detailed analysis of the activities of visitors to the website of the CWTS Leiden Ranking. Findings: Visitors to the Leiden Ranking website originate disproportionately from specific countries. They are more interested in impact indicators than in collaboration indicators, while they are about equally interested in size-dependent and size-independent indicators. Many visitors do not seem to realize that they should decide for themselves which criterion they consider most appropriate for ranking universities. Research limitations: The analysis is restricted to the website of a single university ranking and does not provide detailed insight into the motivations of visitors to university ranking websites. Practical implications: The Leiden Ranking website may need to be improved to make clearer to visitors that they should decide for themselves which criterion to use for ranking universities. Originality/value: This is the first analysis of the activities of visitors to a university ranking website.