Abstract: In this paper, we present a novel approach to modeling user request patterns on the World Wide Web. Instead of focusing on user traffic at the level of whole web pages, we capture user interaction at the level of the objects within those pages. Our framework model consists of three sub-models: one for user file access, one for web pages, and one for storage servers. Web pages are assumed to consist of objects of different types and sizes, which are characterized using several categories: articles, media, and mosaics. The model is implemented as a discrete event simulation and then used to investigate the performance of our system over a variety of model parameters. Our performance measure of choice is mean response time, and by varying the composition of web pages across our categories, we find that our framework model is able to capture a wide range of conditions that serve as a basis for generating a variety of user request patterns. In addition, we are able to establish a set of parameters that can be used as base cases. One goal of this research is for the framework model to be general enough that its parameters can be varied to serve as input for investigating other distributed applications that require the generation of user request access patterns.
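As a concrete illustration of the object-level idea, the following is a minimal discrete event simulation sketch in Python: a single FIFO storage server serves page requests, where each page is assembled from article, media, and mosaic objects. The page mix, size distributions, and rates are illustrative assumptions, not the parameters established in the paper.

```python
import random

# Minimal sketch: object-level page requests served by one FIFO storage
# server. Category names follow the paper (articles, media, mosaics);
# every distribution, size, and rate below is an illustrative assumption.

random.seed(42)

MEAN_SIZE_KB = {"article": 20.0, "media": 500.0, "mosaic": 5.0}   # assumed
OBJECTS_PER_PAGE = {"article": 2, "media": 1, "mosaic": 12}       # assumed mix
SERVER_RATE_KBPS = 2000.0    # assumed service rate of the storage server
ARRIVAL_RATE = 1.5           # assumed page requests per second
NUM_REQUESTS = 10_000

def page_service_time() -> float:
    """Service time of one page: sum over its objects' transfer times."""
    total_kb = sum(
        random.expovariate(1.0 / MEAN_SIZE_KB[cat])
        for cat, count in OBJECTS_PER_PAGE.items()
        for _ in range(count)
    )
    return total_kb / SERVER_RATE_KBPS

t, free_at, total_response = 0.0, 0.0, 0.0
for _ in range(NUM_REQUESTS):
    t += random.expovariate(ARRIVAL_RATE)      # Poisson arrivals (assumed)
    start = max(t, free_at)                    # wait while the server is busy
    free_at = start + page_service_time()      # serve all objects of the page
    total_response += free_at - t              # response = waiting + service

print(f"mean response time: {total_response / NUM_REQUESTS:.4f} s")
```

Varying OBJECTS_PER_PAGE shifts the page composition across the categories, which is the kind of knob the paper varies to generate different request patterns.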
Abstract: Component-based software reuse (CBSR) has been widely used in software development practice and has an even brighter future with the rapid expansion of the Internet, because the World Wide Web (WWW) makes large-scale component resources from different vendors available to software developers. In this paper, an abstract component model suitable for representing components on the WWW is proposed, which plays an important role both in achieving interoperability among components and among reusable component libraries (RCLs). Some necessary changes that the WWW brings to component management are also discussed, such as the classification of components and the corresponding search methods, and the certification of components.
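To make the classification-and-search idea concrete, here is a small sketch of a component record with faceted classification and a facet-matching search over a reusable component library. The fields and facet names are assumptions chosen for illustration; the paper's abstract component model is not spelled out in the abstract.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a component record with faceted classification
# and a simple facet-based search over a reusable component library
# (RCL). All fields and facet names are assumptions, not the model
# defined in the paper.

@dataclass
class Component:
    name: str
    vendor: str
    url: str                       # where the component lives on the WWW
    interface: str                 # e.g. a signature or IDL summary
    facets: dict = field(default_factory=dict)  # facet -> term
    certified: bool = False        # passed the library's certification

class ComponentLibrary:
    def __init__(self):
        self._components: list[Component] = []

    def register(self, comp: Component) -> None:
        self._components.append(comp)

    def search(self, **facet_terms: str) -> list[Component]:
        """Return components whose facets match all given terms."""
        return [
            c for c in self._components
            if all(c.facets.get(f) == t for f, t in facet_terms.items())
        ]

lib = ComponentLibrary()
lib.register(Component(
    name="HttpClient", vendor="AcmeSoft",
    url="http://example.com/components/httpclient",
    interface="get(url: str) -> bytes",
    facets={"function": "networking", "medium": "library"},
    certified=True,
))
print([c.name for c in lib.search(function="networking")])
```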
Abstract: The International World Wide Web Conferences Steering Committee (IW3C2) cordially invites you to participate in the 17th International World Wide Web Conference (WWW2008), to be held on April 21-25, 2008 in Beijing, China. The conference series has become the premier venue for academics and industry to present, demonstrate...
Abstract: In this paper, a WWW hot list that might be helpful in target-culture teaching is provided, and a sample lesson is given to illustrate the uses of the WWW as a rich resource for culture teaching in the ESL classroom.
Abstract: This assignment examines some of the implications of the Web for the English for Academic Purposes context. It explores how we provide the skills to navigate the world of information, and also how we might provide strategies to exploit those resources critically and effectively in an academic environment.
Abstract: A basis for the automatic discovery of information resources on the World Wide Web is characterized by three underlying equations. With these equations, the information universe on the Web is divided into three associated spaces. This model differs from the hypertext model employed by the Web in that it supports the notion of automatic resource discovery. A private library, which automatically gathers from the Web the information useful to its owner, is envisaged to illustrate that the basic equations and their derivations can be applied to Web automation, including crawling and classification.
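The private-library scenario can be sketched as a small crawl-and-classify loop. Since the abstract does not state the three equations, the relevance score below is a stand-in assumption (the fraction of the owner's interest terms that appear in a page), not the paper's model.

```python
import urllib.request
from collections import deque
from html.parser import HTMLParser

# Sketch of a private library that gathers useful pages from the Web:
# crawl from seed URLs, classify each fetched page with a relevance
# score, keep the relevant ones. The scoring rule is an assumed
# stand-in; the paper's three equations are not given in the abstract.

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links: list[str] = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def relevance(text: str, interests: list[str]) -> float:
    """Assumed score: fraction of interest terms occurring in the page."""
    text = text.lower()
    return sum(term in text for term in interests) / len(interests)

def gather(seeds, interests, max_pages=20, threshold=0.5):
    library, frontier, seen, fetched = [], deque(seeds), set(seeds), 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue
        fetched += 1
        if relevance(html, interests) >= threshold:
            library.append(url)                # classify: useful to the owner
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:              # crawl: expand the frontier
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return library

print(gather(["https://example.com/"], ["web", "information"]))
```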
Abstract: The rapid increase in the publication of knowledge bases as linked open data (LOD) warrants serious consideration from all concerned, as this phenomenon will potentially scale exponentially. This paper briefly describes the evolution of LOD and the emerging world-wide semantic web (WWSW), and explores the scalability and performance features of the service-oriented architecture that forms the foundation of the semantic technology platform developed at MIMOS Bhd. for addressing the challenges posed by the intelligent future internet. The paper concludes with a review of the current status of the agriculture linked open data.
Abstract: As data grows in size, search engines face new challenges in extracting more relevant content for users' searches. As a result, a number of retrieval and ranking algorithms have been employed to ensure that the results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work, as well as the elements that contribute to higher rankings. This paper addresses the issue of bias by proposing a new ranking algorithm based on the PageRank (PR) algorithm, one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between these various measures. The Weighted PageRank (WPR) model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings of using the Weighted PageRank model showed that ranking final pages using multiple criteria is better than using only one, and that some criteria have a greater impact on ranking results than others.
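To ground the WPR discussion, here is a short sketch of a weighted PageRank iteration. The abstract does not give its exact weighting scheme, so this follows the widely cited Xing-Ghorbani variant, in which each link (v, u) is weighted by u's share of the in-links and out-links among v's references; treat it as a representative formulation, not the paper's algorithm.

```python
# Weighted PageRank in the style of Xing & Ghorbani (2004):
# PR(u) = (1 - d) + d * sum over backlinks v of PR(v) * W_in(v,u) * W_out(v,u),
# where W_in and W_out are u's share of in-degree and out-degree among
# the pages v links to. A representative variant, not the paper's own.

def weighted_pagerank(graph, d=0.85, iters=50):
    """graph: dict mapping page -> list of pages it links to."""
    nodes = set(graph) | {u for outs in graph.values() for u in outs}
    in_deg = {n: 0 for n in nodes}
    for v, outs in graph.items():
        for u in outs:
            in_deg[u] += 1
    out_deg = {n: len(graph.get(n, [])) for n in nodes}

    def w_in(v, u):   # u's share of in-links among v's references
        total = sum(in_deg[p] for p in graph[v]) or 1
        return in_deg[u] / total

    def w_out(v, u):  # u's share of out-links among v's references
        total = sum(out_deg[p] for p in graph[v]) or 1
        return out_deg[u] / total

    pr = {n: 1.0 for n in nodes}
    for _ in range(iters):
        new = {}
        for u in nodes:
            backlinks = [v for v in graph if u in graph[v]]
            new[u] = (1 - d) + d * sum(
                pr[v] * w_in(v, u) * w_out(v, u) for v in backlinks)
        pr = new
    return pr

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(weighted_pagerank(web))
```

Additional user-preference criteria, as in the paper's trials, would enter as further multiplicative weights on each link.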
Abstract: This paper presents a new shared-cache technique, the grouping cache, which avoids many of the invalid queries of broadcast probing and the control bottleneck of a centralized web cache by dividing all cooperating caches into several groups according to their positions in the network topology. The technique has the following characteristics: the overhead of multi-cache queries is reduced efficiently by the cache grouping scheme; a compact summary of the cache directory can rapidly determine whether a requested object exists in a cache within the group; and a distribution algorithm based on web-access logs effectively balances the load among the groups. Simulation tests demonstrated that the grouping cache was more effective than existing shared-cache techniques.
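The "compact summary of the cache directory" can be realized with a Bloom filter, as in the well-known Summary Cache scheme; the sketch below adopts that as an assumption, since the abstract does not specify the summary structure. A negative answer is exact (no false negatives), so a peer whose summary misses can be skipped without sending it a query.

```python
import hashlib

# Sketch of a compact cache-directory summary for intra-group lookups,
# assumed here to be a Bloom filter (as in the Summary Cache scheme).
# False positives are possible, false negatives are not, so a miss in
# the summary safely rules out a peer.

class BloomSummary:
    def __init__(self, bits=8192, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, key: str):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, url: str) -> None:
        for pos in self._positions(url):
            self.array[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, url: str) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(url))

# Each cache in a group publishes its summary; a proxy queries only the
# peers whose summaries claim to hold the requested URL.
group = {"cache1": BloomSummary(), "cache2": BloomSummary()}
group["cache1"].add("http://example.com/page.html")
candidates = [name for name, s in group.items()
              if s.might_contain("http://example.com/page.html")]
print(candidates)   # ['cache1'] (plus rare false positives)
```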
Abstract: Today, in the field of computer networks, new services have been developed on the Internet and intranets, including mail servers, database management, audio, video, and the web server itself, Apache. The number of solutions for this server is growing continuously; these services are becoming more and more complex and expensive without fulfilling users' needs. The absence of benchmarks for websites with dynamic content is a major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identify performance characteristics such as throughput, resource utilization, and response time through measurement and simulation modeling. Finally, we present a simple queueing model of an Apache web server that reasonably represents the behavior of a saturated web server, built as a Simulink model in Matlab (Matrix Laboratory) and incorporating sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations we conducted.
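As a back-of-the-envelope counterpart to the Simulink model, the classic M/M/1 formulas give the steady-state metrics of a single-server FIFO queue. Treating the Apache server as M/M/1, and the arrival and service rates used below, are simplifying assumptions for illustration; the paper's own model also handles sporadic traffic, which M/M/1 does not capture.

```python
# Analytic M/M/1 sketch: steady-state metrics of a single-server FIFO
# queue as a stand-in for a simple web server model. Rates are assumed.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state M/M/1 metrics (requires arrival_rate < service_rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: utilization >= 1")
    rho = arrival_rate / service_rate          # server utilization
    return {
        "utilization": rho,
        "mean_response_time": 1.0 / (service_rate - arrival_rate),
        "mean_queue_length": rho / (1.0 - rho),
        "throughput": arrival_rate,            # all offered work is served
    }

# Example: a server handling 100 requests/s; as the arrival rate
# approaches saturation, response time grows sharply.
for lam in (50.0, 90.0, 99.0):
    m = mm1_metrics(lam, 100.0)
    print(f"lambda={lam:5.1f}  rho={m['utilization']:.2f}  "
          f"T={m['mean_response_time'] * 1000:6.1f} ms")
```

The sharp growth of mean response time near rho = 1 is exactly the saturated-server behavior the queueing model is meant to capture.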