Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R113), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Language-independent text tokenization can aid in the classification of low-resource languages. There is a global research effort to enable text classification for any language. Human text classification is a slow procedure; consequently, machine text classification across different languages has received growing attention in recent years. For many languages, such as Czech, Rome, and Urdu, there is no research on machine text classification. This research proposes a cross-language text tokenization model using a Transformer. The proposed Transformer employs a ten-layer encoder, each layer comprising a self-attention sublayer and a feedforward sublayer. The model improves the efficiency of text classification by providing a draft classification for a set of documents. We also propose a novel subword tokenization model based on the frequent vocabulary of the documents. The Sub-Word Byte-Pair Tokenization technique (SBPT) shares the vocabulary of one sentence with other sentences. The proposed model outperforms other subword tokenization models, such as the byte-pair encoding model, by +10% in precision.
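The byte-pair tokenization the abstract builds on can be illustrated with a minimal sketch of the standard BPE merge-learning step: repeatedly fuse the most frequent adjacent symbol pair in the corpus vocabulary. This is a generic illustration of byte-pair encoding, not the paper's SBPT variant; the corpus and the `num_merges` parameter are hypothetical.

```python
from collections import Counter

def learn_bpe_merges(words, num_merges):
    """Learn byte-pair merges: repeatedly fuse the most frequent
    adjacent symbol pair across the corpus vocabulary."""
    # Represent each word as a tuple of symbols (initially characters).
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite every word, replacing occurrences of the best pair.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

corpus = ["low", "lower", "lowest", "low"]
print(learn_bpe_merges(corpus, 2))  # → [('l', 'o'), ('lo', 'w')]
```

Frequent substrings such as "low" become single tokens after a few merges, which is the mechanism a subword model exploits to share vocabulary across sentences.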
Funding: the National Natural Science Foundation of China (60132030) and the National Education Department Doctoral Foundation Project (RFDP1999048602).
Abstract: Improving the Quality of Service (QoS) of Internet traffic is widely recognized as a critical issue for next-generation networks. In this paper, we present a new active queue management algorithm, RED-DTB. This buffer control technique enforces approximate fairness among a large number of concurrent Internet flows. Like the RED (Random Early Detection) algorithm, the RED-DTB mechanism can be deployed to respond actively to gateway congestion, keep the gateway in a healthy state, and protect fragile flows from having their bandwidth stolen by greedy ones. The algorithm is based on the so-called Dual Token Bucket (DTB) pattern: on the one hand, every flow is rate-limited by its own token bucket, ensuring that it cannot consume more than its fair share of bandwidth; on the other hand, to compensate less aggressive flows, such as connections with larger round-trip times or smaller sending windows, and to achieve a higher overall utilization, all flows, depending on their individual behavior, may fetch tokens from a public token bucket when they run out of their own share. The algorithm is analyzed and evaluated by simulation, and proves effective in protecting the gateway buffer and enforcing fair allocation of bandwidth among flows.
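The dual-bucket admission rule described above (spend private tokens first, then borrow from a shared public bucket) can be sketched as follows. This is a minimal illustration of the general pattern under assumed rate/capacity parameters, not the paper's exact RED-DTB gateway algorithm, which additionally couples the buckets to RED-style drop decisions.

```python
class TokenBucket:
    """Tokens accrue at `rate` per second up to `capacity`; a request
    for `n` tokens is granted only if the bucket holds enough."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full
        self.last = 0.0

    def consume(self, n, now):
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False


class DualTokenBucket:
    """Dual pattern: a flow first spends its private per-flow tokens,
    and only then may borrow from the shared public bucket."""

    def __init__(self, private, public):
        self.private = private
        self.public = public

    def admit(self, n, now):
        # `or` short-circuits: the public bucket is touched only
        # when the private bucket cannot cover the packet.
        return self.private.consume(n, now) or self.public.consume(n, now)


priv = TokenBucket(rate=1.0, capacity=2.0)   # per-flow fair share
pub = TokenBucket(rate=1.0, capacity=5.0)    # shared compensation pool
dtb = DualTokenBucket(priv, pub)
print(dtb.admit(2, 0.0))  # → True  (private bucket covers it)
print(dtb.admit(2, 0.0))  # → True  (private empty; public covers it)
```

The short-circuit in `admit` mirrors the compensation idea: a well-behaved flow that exhausts its own share can still be served from the public pool, while the per-flow bucket keeps any single flow from exceeding its fair rate on its own.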