Currently, a growing number of educators are aware of the need to look for new approaches to replace the "spoon-feeding" method, and context-based strategies have therefore begun to emerge. This study investigates how students derive information from contextual clues by examining the progress of Chinese middle school EFL students in word recognition. The participants were 20 eighth-grade students from the same middle school, who sat two different quizzes: a contextual vocabulary quiz (quiz A) and a direct instruction quiz (quiz B). In quiz A, the participants inferred the meaning of the target words from example sentences, whereas in quiz B they used the accompanying English explanations to guess other new words; these students were in the experimental and control conditions, respectively. The two quizzes each comprised 15 multiple-choice questions (MCQs) that differentiated the participants' word-recognition responses to the two learning methods. There were two significant findings. First, the context-based strategy led to better vocabulary learning performance than the direct instruction strategy. Second, although it is not as effective as the context-based strategy, direct instruction may assist EFL learners in remembering words in the short term.
In this paper, a Context-based 2D Variable Length Coding (C2DVLC) method for coding the transformed residuals in the AVS video coding standard is presented. One feature of C2DVLC is the use of multiple 2D-VLC tables; another is the use of simple Exponential-Golomb codes. C2DVLC employs context-based adaptive multiple-table coding to exploit the statistical correlation between the DCT coefficients of one block for higher coding efficiency. Exp-Golomb codes are applied to code the pairs of the run-length of zero coefficients and the nonzero coefficient for a lower storage requirement. C2DVLC is a low-complexity coder in terms of both computational time and memory requirement. The experimental results show that C2DVLC gains 0.34 dB on average for the tested videos when compared with a traditional 2D-VLC coding method like that used in MPEG-2, and shows coding efficiency similar to CAVLC in H.264/AVC.
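The Exp-Golomb component can be shown concretely. The sketch below implements the standard zeroth-order Exp-Golomb code (leading zeros followed by the binary form of n + 1); C2DVLC's table-selection and (run, level) pairing logic are simplified away, and the function names are illustrative:

```python
def exp_golomb_encode(n: int) -> str:
    """Zeroth-order Exp-Golomb code for a non-negative integer n:
    a prefix of leading zeros followed by the binary form of n + 1."""
    code = bin(n + 1)[2:]               # binary representation of n + 1
    return "0" * (len(code) - 1) + code

def exp_golomb_decode(bits: str) -> int:
    """Inverse: count leading zeros, read that many more bits, subtract 1."""
    zeros = len(bits) - len(bits.lstrip("0"))
    return int(bits[zeros:2 * zeros + 1], 2) - 1
```

Small values get short codewords (0 → "1", 4 → "00101"), which is why the code suits the mostly small run/level values left after prediction and transform.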
Cognition and pain share common neural substrates and interact reciprocally: chronic pain compromises cognitive performance, whereas cognitive processes modulate pain perception. In the present study, we established a non-drug-dependent rat model of context-based analgesia, where two different contexts (dark and bright) were matched with a high (52°C) or low (48°C) temperature in the hot-plate test during training. Before and after training, we set the temperature to the high level in both contexts. Rats showed longer paw-licking latencies in trials with the context originally matched to the low temperature than with the context matched to the high temperature, indicating successful establishment of a context-based analgesic effect. This effect was blocked by intraperitoneal injection of naloxone (an opioid receptor antagonist) before the probe. The context-based analgesic effect also disappeared after optogenetic activation or inhibition of the bilateral infralimbic or prelimbic sub-region of the prefrontal cortex. In brief, we established a context-based, non-drug-dependent, placebo-like analgesia model in the rat. This model provides a new and useful tool for investigating the cognitive modulation of pain.
Context-based adaptive binary arithmetic coding (CABAC) is the major entropy-coding algorithm employed in H.264/AVC. In this paper, we present a new VLSI architecture for an H.264/AVC CABAC decoder that optimizes both the decode-decision and decode-bypass engines for high throughput and improves context-model allocation for efficient external memory access. Based on the fact that the most probable symbol (MPS) branch is much simpler than the least probable symbol (LPS) branch, a newly organized decode-decision engine consisting of two serially concatenated MPS branches and one LPS branch is proposed to achieve better parallelism at a lower timing-path cost. A look-ahead context index (ctxIdx) calculation mechanism is designed to provide the context model for the second MPS branch. A head-zero detector is proposed to improve the performance of the decode-bypass engine according to UEGk encoding features. In addition, to lower the frequency of memory access, we reorganize the context models in external memory and use three circular buffers to cache the context models, neighboring information, and the bit stream, respectively. A pre-fetching mechanism with a prediction scheme is adopted to load the corresponding content into a circular buffer to hide external memory latency. Experimental results show that our design can operate at 250 MHz with a 20.71k gate count in SMIC18 silicon technology, and that it achieves an average decoding rate of 1.5 bins/cycle.
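The MPS/LPS asymmetry that motivates the serially concatenated MPS branches can be seen in a toy decode-decision step. This sketch only illustrates the range subdivision: real H.264 CABAC derives the LPS sub-range from a 64-state probability table and renormalizes afterward, both of which are omitted, and the names and the float probability are illustrative:

```python
def decode_decision(offset: int, rng: int, p_lps: float, mps: int):
    """One simplified decode-decision step: split the current range into
    an MPS part and an LPS part and compare the offset against the split.
    Returns (decoded bin, new offset, new range)."""
    r_lps = max(1, int(rng * p_lps))  # LPS sub-range (table lookup in real CABAC)
    r_mps = rng - r_lps               # MPS sub-range
    if offset < r_mps:                # MPS path: keep offset, just shrink range
        return mps, offset, r_mps
    return 1 - mps, offset - r_mps, r_lps  # LPS path: extra subtraction
```

The MPS path updates only the range, while the LPS path also rebases the offset and flips the symbol; chaining two of the cheap MPS steps per cycle is what the proposed engine exploits.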
Increasing research has focused on semantic communication, the goal of which is to accurately convey the meaning instead of merely transmitting symbols from the sender to the receiver. In this paper, we design a novel encoding and decoding semantic communication framework, which adopts the semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by wrong detection is defined, and a semantic source-encoding strategy is developed to minimize it. To further improve communication reliability, a decoding strategy that utilizes the semantic and context information to recover messages is proposed at the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
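One plausible reading of the "average semantic loss" objective is an expectation over source symbols and detection errors. The sketch below is that reading only, not the paper's exact formulation; all names are illustrative:

```python
def average_semantic_loss(p_source, p_detect, loss):
    """Expected semantic loss E[L] = sum_i p(i) * sum_j p(j|i) * loss(i, j),
    where p_source[i] is the prior of symbol i, p_detect[i][j] is the
    probability that symbol i is detected as j over the channel, and
    loss[i][j] measures the semantic distance between meanings i and j."""
    return sum(
        p_source[i] * sum(p_detect[i][j] * loss[i][j]
                          for j in range(len(loss[i])))
        for i in range(len(p_source))
    )
```

An encoding strategy would then assign codewords so that symbols likely to be confused under the channel carry semantically close meanings, driving this expectation down.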
The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated detection of political article orientation. Political articles (especially in the Arab world) differ from other articles in their subjectivity: the author's beliefs and political affiliation may strongly influence a political article. With categories representing the main political ideologies, this problem can be treated as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings. Furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system for detecting the orientation of political Arabic articles that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model's ability to discriminate between different classes or patterns, since each level may capture different aspects of the input data, contributing to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is used to learn and predict the complex relationships between these features and the political-orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the suggested technique. Conservative, reform, and revolutionary are the three subcategories of these opinions. The results of this study demonstrate that, compared to other frequently used machine learning models for text classification, the CatBoost method using multi-level features performs better, with an accuracy of 98.14%.
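The multi-level feature idea can be sketched with a toy extractor that stacks word unigrams, word bigrams, and character trigrams into one sparse count vector. This is a stand-in for the paper's pipeline, not its actual feature set; a real system would weight these counts (e.g. with TF-IDF) and feed them to a CatBoost classifier:

```python
from collections import Counter

def multi_level_features(text: str) -> Counter:
    """Toy multi-level feature extractor: word unigrams (level 1),
    word bigrams (level 2), and character trigrams (level 3)
    combined into a single sparse count vector."""
    words = text.lower().split()
    feats = Counter(("w1", w) for w in words)                            # level 1
    feats.update(("w2", a + " " + b) for a, b in zip(words, words[1:]))  # level 2
    feats.update(("c3", text[i:i + 3]) for i in range(len(text) - 2))    # level 3
    return feats
```

Character-level features are especially useful for morphologically rich languages such as Arabic, where word-level tokens alone miss shared roots and affixes.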
In opportunistic networks, compromised nodes can attack social context-based routing protocols by publishing false social-attribute information. To solve this problem, we propose a security scheme based on an identity-based threshold signature, which allows mobile nodes to jointly generate and distribute the secrets for social attributes in a totally self-organized way, without the need for any centralized authority. Newly joining nodes can reconstruct their own social-attribute signatures by obtaining enough partial-signature services through encounter opportunities with the initial nodes. Mobile nodes must verify whether their neighbors can provide valid attribute signatures for their routing advertisements in order to resist potential routing attacks. Simulation results show that, by implementing our security scheme, the delivery probability of the social context-based routing protocol can be effectively improved when there are large numbers of compromised nodes in the network.
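The "enough partial services" idea rests on (k, n) threshold secret sharing. The abstract does not reproduce the signature construction itself, so the sketch below shows only the underlying Shamir sharing over a prime field, with illustrative parameters; a real identity-based threshold signature builds further cryptography on top of this:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares so that any k reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret
```

Any k encounters with initial share-holding nodes suffice, and no single node ever holds the full secret, which matches the self-organized, authority-free setting.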
An adaptive pipelining scheme for an H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoder for high-definition (HD) applications is proposed to solve the data-hazard problems that arise from the data dependencies in the CABAC decoding process. An efficiency model of the CABAC decoding pipeline is derived from the analysis of a common pipeline, and several adaptive strategies are provided on that basis. A pipelining scheme with these strategies can adapt to different types of syntax elements (SEs), and the pipeline does not stall during decoding when these strategies are adopted. In addition, the proposed decoder fully supports the H.264/AVC High 4:2:2 profile, and the experimental results show that its efficiency is much higher than that of other single-engine architectures. Taking both performance and cost into consideration, our design makes a good tradeoff compared with other work and is sufficient for HD real-time decoding.
Grid computing is concerned with the sharing and coordinated use of diverse resources in distributed "virtual organizations". The heterogeneous, dynamic, and multi-domain nature of these environments raises challenging security issues that demand new technical approaches. Despite recent advances in access control approaches applicable to Grid computing, there remain issues that impede the development of effective access control models for Grid applications, among them the lack of context-based models for access control and the reliance on identity- or capability-based access control schemes. An access control scheme that resolves these issues is presented, and a dynamically authorized role-based access control (D-RBAC) model extending RBAC with context constraints is proposed. The D-RBAC mechanisms dynamically grant permissions to users based on a set of contextual information collected from the system and the user's environment, while retaining the advantages of the RBAC model. The implementation architecture of D-RBAC for Grid applications is also described.
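The core of a context-constrained RBAC model can be sketched in a few lines: a permission is attached to a role together with a predicate over the current context, and it is granted only while the predicate holds. This is a minimal toy, assuming a hypothetical context shape (a dict with an "hour" field), not the paper's D-RBAC architecture:

```python
class DRBAC:
    """Toy dynamically authorized RBAC: permissions are granted per role,
    each guarded by a context constraint evaluated at check time."""
    def __init__(self):
        self.role_perms = {}  # role -> list of (permission, constraint)
        self.user_roles = {}  # user -> set of roles

    def grant(self, role, perm, constraint=lambda ctx: True):
        """Attach a permission to a role, guarded by a context predicate."""
        self.role_perms.setdefault(role, []).append((perm, constraint))

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def check(self, user, perm, ctx):
        """Grant only if some role of the user carries the permission
        AND its context constraint holds for the current context."""
        return any(
            p == perm and ok(ctx)
            for role in self.user_roles.get(user, ())
            for p, ok in self.role_perms.get(role, ())
        )
```

Because the constraint is re-evaluated on every check, a permission can lapse automatically when the collected contextual information changes, which is the "dynamic" part of the model.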
Response features of mitral cells in the olfactory bulb were examined using principal component analysis to determine whether they contain information about odorant stimuli. Using a microwire electrode array to record from the olfactory bulb in freely breathing anesthetized rats, we recorded the responses of different mitral cells to saturated vapor of anisole (1 M), carvone (1 M), isobutanol (1 M), citral (1 M), and isoamyl acetate (1 M). The responses of single mitral cells to the same odorant varied over time. The response profiles showed similarity during certain periods, indicating that the response depended not only on the odor itself but was also associated with context. Furthermore, the responses of a single mitral cell to different odorants differed in firing rate. To recognize different odorant stimuli, we applied four cells as a sensing group for classification using principal component analysis, selecting features of each cell's response that describe both temporal and frequency characteristics. The results showed that five different single-molecule odorants could be distinguished from each other. These data suggest that the action potentials of mitral cells may play a role in odor coding.
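For two-dimensional feature data, the first principal component used in such analyses has a closed form, which makes the idea easy to see without a linear-algebra library. This is a generic PCA illustration, not the study's actual feature pipeline:

```python
import math

def first_component_2d(points):
    """First principal component of 2-D data in closed form:
    the direction angle satisfies theta = 0.5 * atan2(2*cov_xy, var_x - var_y).
    Returns a unit vector (cos(theta), sin(theta))."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    var_x = sum((x - mx) ** 2 for x, _ in points) / n
    var_y = sum((y - my) ** 2 for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points) / n
    theta = 0.5 * math.atan2(2 * cov, var_x - var_y)
    return math.cos(theta), math.sin(theta)
```

Projecting each trial's feature vector onto the leading components compresses correlated response features into a few coordinates in which odorant classes can separate.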
Recommender systems are rapidly transforming the digital world into intelligent information hubs. The valuable context information associated with users' prior transactions plays a vital role in determining user preferences for items and in rating prediction, and it has been a hot research topic in collaborative-filtering-based recommender systems for the last two decades. This paper presents a novel Context Based Rating Prediction (CBRP) model with a unique similarity-scoring estimation method. The proposed algorithm computes a context score for each candidate user to construct a similarity pool for the given subject user-item pair and intuitively chooses the most influential users to forecast the item ratings. The context-scoring strategy has an inherent capability to incorporate multiple conditional factors to filter down to the most relevant recommendations. Compared with traditional similarity-estimation methods, CBRP makes full use of neighboring collaborators' choices under various conditions. We conduct experiments on three publicly available datasets to evaluate the proposed method with random user-item pairs and obtain considerable improvement in prediction accuracy on the standard evaluation measures. We also evaluate prediction accuracy for every user-item pair in the system, and the results show that our framework outperforms existing methods.
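The context-score-then-pool flow can be sketched generically. The abstract does not give CBRP's actual scoring formula, so the weighted-agreement score, the context fields, and the weights below are all illustrative:

```python
def context_score(candidate_ctx, subject_ctx, weights):
    """Toy context score: weighted agreement between a candidate user's
    transaction context and the subject user-item pair's context."""
    return sum(
        w for field, w in weights.items()
        if candidate_ctx.get(field) == subject_ctx.get(field)
    )

def similarity_pool(candidates, subject_ctx, weights, k):
    """Keep the k highest-scoring candidate users as the similarity pool
    from which ratings for the subject pair would be forecast."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: context_score(kv[1], subject_ctx, weights),
                    reverse=True)
    return [user for user, _ in ranked[:k]]
```

Because each conditional factor contributes its own weight, adding a factor (time of day, companion, mood) only extends the `weights` dict, which mirrors the claimed ability to incorporate multiple conditions.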
In this paper, we propose an incremental method of Granular Networks (GN) to construct a conceptual and computational platform for Granular Computing (GrC). The essence of this network is to describe the associations between information granules, including fuzzy sets formed in both the input and output spaces. The context within which such relationships are formed is established by the system developer. Here, information granules are built using Context-driven Fuzzy Clustering (CFC), which develops clusters by preserving the homogeneity of the clustered patterns associated with the input and output spaces. Experimental results on the well-known Medical Imaging System (MIS) software-module dataset reveal that the incremental granular network performs well in comparison with previous approaches in the literature.
Cartographic communication and support within emergency management (EM) are complicated issues, with demands that change according to the extent of the incident and the phase of the EM cycle. Keeping in mind the specifics of each purpose, it is obvious that the spatial data used for map preparation and production must be visualized differently even for the same type of emergency incident (traffic accident, fire, or natural disaster). Context-based cartography is a promising methodology for dealing with the changing demands of an operational EM center. An overview of cartographic communication is presented within the context of an operational EM center, the activities of particular actors, and map use supporting incident elimination. The authors respond to a series of questions, for example: What is the current cartographic support of operational EM in the Czech Republic under Digital Earth conditions? What possibilities are there to improve cartographic communication? How can contextual cartographic services be implemented in a Web environment, and how can the usability of the results be tested? The paper gives several examples of the use of cartographic technologies in map creation for various emergency situations.
As a means to map ontology concepts, a similarity technique is employed. In particular, context-dependent concept mapping is tackled, which needs contextual information from a knowledge taxonomy. Context-based semantic similarity differs from real-world similarity in that it requires contextual information to calculate similarity. The notion of semantic coupling is introduced to derive similarity for a taxonomy-based system; the semantic coupling shows the degree of semantic cohesiveness of a group of concepts toward a given context. To calculate the semantic coupling effectively, the edge-counting method is revisited for measuring basic semantic similarity by considering the weighting attributes that affect an edge's strength: the scaling depth effect, the semantic relation type, and virtual connections. Furthermore, we show how the proposed edge-counting method can be adapted for calculating context-based similarity. Thorough experimental results are provided for both edge counting and context-based similarity. The results of the proposed edge counting were encouraging compared with other combined approaches, and the context-based similarity also showed understandable results. The novel contributions of this paper come from two aspects. First, the similarity is raised to a viable level for edge counting. Second, a mechanism is provided to derive context-based similarity in a taxonomy-based system, which has emerged as a hot issue in areas such as the Semantic Web, MDR, and other ontology-mapping environments.
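Plain edge counting, the baseline the paper refines, is simply the shortest path between two concepts in the taxonomy graph. The sketch below implements that baseline with a common distance-to-similarity mapping; the depth-scaling and relation-type weights the paper adds are deliberately omitted, and the toy taxonomy in the usage is illustrative:

```python
from collections import deque

def edge_distance(taxonomy, a, b):
    """Shortest path length in edges between two concepts, treating the
    taxonomy's is-a links (child, parent) pairs as an undirected graph."""
    graph = {}
    for child, parent in taxonomy:
        graph.setdefault(child, set()).add(parent)
        graph.setdefault(parent, set()).add(child)
    seen, queue = {a}, deque([(a, 0)])
    while queue:                      # breadth-first search from a
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None                       # not connected

def edge_similarity(taxonomy, a, b):
    """Map edge distance to a similarity in (0, 1]: sim = 1 / (1 + distance)."""
    d = edge_distance(taxonomy, a, b)
    return None if d is None else 1 / (1 + d)
```

With uniform edges, siblings ("dog" and "wolf" under "canine") come out more similar than cousins ("dog" and "cat"); weighting each edge by depth and relation type, as the paper proposes, refines exactly this count.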
Previous trust models have mainly focused on reputational mechanisms based on explicit trust ratings. However, the large amount of user-generated content and community context published on the Web is often ignored. Without enough information, previous trust models suffer from several problems. First, they cannot determine in which field one user trusts another, so many models assume that trust exists in all fields. Second, some models are unable to delineate the variation of trust scales, so they assume each user trusts all his friends to the same extent. Third, because these models use only explicit trust ratings, the trust matrix is very sparse. To solve these problems, we present RCCtrust, a trust model that combines reputation-, content-, and context-based mechanisms to provide more accurate, fine-grained, and efficient trust management for electronic communities. We extract trust-related information from user-generated content and community context on the Web to extend reputation-based trust models. We introduce role-based and behavior-based reasoning functionalities to infer users' interests and category-specific trust relationships. Following studies in sociology, RCCtrust exploits similarities between pairs of users to depict differentiated trust scales. The experimental results show that RCCtrust outperforms both a pure user-similarity method and a linear-decay trust-aware technique in accuracy and coverage for a recommender system.
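The "pure user similarity" baseline that RCCtrust is compared against is typically cosine similarity over co-rated items. A minimal sketch of that baseline (my reading of the comparison, with illustrative names):

```python
import math

def user_cosine(ratings_a, ratings_b):
    """Cosine similarity between two users' rating vectors,
    computed over co-rated items only; 0.0 if they share none."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    dot = sum(ratings_a[i] * ratings_b[i] for i in common)
    na = math.sqrt(sum(ratings_a[i] ** 2 for i in common))
    nb = math.sqrt(sum(ratings_b[i] ** 2 for i in common))
    return dot / (na * nb)
```

The sparsity problem the paper cites is visible here: two users with no co-rated items get similarity 0.0 regardless of how alike they are, which is exactly the gap that content- and context-derived signals are meant to fill.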
The Audio Video coding Standard (AVS) is established by the AVS Working Group of China. The main goal of AVS Part 7 is to provide high compression performance with relatively low complexity for mobility applications. There are three main low-complexity tools: the deblocking filter, context-based adaptive 2D-VLC, and direct intra prediction. These tools are presented and analyzed in turn. Finally, we compare the performance and decoding speed of AVS Part 7 and the H.264 baseline profile. The analysis and results indicate that AVS Part 7 achieves similar performance at lower cost.
The traditional student-oriented course evaluation has been the major method of assessing teaching effectiveness worldwide. Useful as it is, it has been widely and continuously criticized for not being a fair, accurate, and reliable measurement. In search of a more objective assessment of teaching effectiveness that also reflects the impact of context-based learning, we propose a theoretical approach from a unique perspective that recognizes teaching effectiveness as a result of the interplay between teacher, student, and context. The approach can be used to compute as well as to predict teaching effectiveness using machine and deep learning technologies, which brings strategic benefits to institutional management. In addition, we install into the approach a mechanism that uses tokens as incentives to assure the quality of subjective data input. An application framework for the approach is proposed leveraging blockchain: each implementation of the framework by an establishment is a decentralized application that runs on its chosen blockchain. It is envisioned that the implementations together will form a collective ecology of context-based relative teaching effectiveness, which has the potential to fundamentally impact other academic practices besides the measurement of teaching effectiveness. The theoretical approach provides a common language for delineating teaching effectiveness from the context-based relative perspective and is customizable during implementation. Teaching-effectiveness assessment using the approach downplays the role played by bias (subjectivity) and is hence more objective than traditional student-oriented course evaluation.
Funding: Supported by the Major National S&T Program under Grant No. 2011ZX03005-002, the National Natural Science Foundation of China under Grant Nos. 60872041 and 61072066, and the Fundamental Research Funds for the Central Universities under Grant Nos. JY10000903001 and JY10000901034.
Abstract: In opportunistic networks, compromised nodes can attack social context-based routing protocols by publishing false social attribute information. To solve this problem, we propose a security scheme based on identity-based threshold signatures, which allows mobile nodes to jointly generate and distribute the secrets for social attributes in a totally self-organized way, without the need for any centralized authority. Newly joining nodes can reconstruct their own social attribute signatures by obtaining enough partial signature services through encounter opportunities with the initial nodes. Mobile nodes need to verify whether their neighbors can provide valid attribute signatures for their routing advertisements in order to resist potential routing attacks. Simulation results show that, by implementing our security scheme, the delivery probability of the social context-based routing protocol can be effectively improved when there are large numbers of compromised nodes in the network.
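The threshold property ("enough partial services reconstruct the whole") can be illustrated with plain Shamir secret sharing over a prime field. This is only the sharing/reconstruction building block, not the paper's identity-based threshold signature scheme; all parameters are illustrative:

```python
import random

# (t, n) threshold sketch: a secret behind a social attribute is split so
# that any t of n nodes can jointly reconstruct it, with no centralized
# authority ever holding it whole.
P = 2**61 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, t, n, rng=random.Random(7)):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice
```

A newly joining node collecting three partial services from encountered initial nodes is analogous to gathering three shares here.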
Funding: Supported by the National Natural Science Foundation of China (No. 61076021), the National Basic Research Program of China (No. 2009CB320903), and the China Postdoctoral Science Foundation (No. 2012M511364).
Abstract: An adaptive pipelining scheme for an H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoder for high-definition (HD) applications is proposed to solve the data hazard problems arising from data dependencies in the CABAC decoding process. An efficiency model of the CABAC decoding pipeline is derived from the analysis of a common pipeline, and several adaptive strategies are provided based on it. The pipelining scheme with these strategies adapts to different types of syntax elements (SEs), and the pipeline does not stall during decoding when the strategies are adopted. In addition, the proposed decoder fully supports the H.264/AVC High 4:2:2 profile, and the experimental results show that its efficiency is much higher than that of other single-engine architectures. Taking both performance and cost into consideration, our design makes a good tradeoff compared with other work and is sufficient for HD real-time decoding.
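The abstract's efficiency model is not spelled out; the sketch below is our own simplified version of such a model, in which dependency stalls inflate the cycle count of an otherwise fully pipelined decoder:

```python
def pipeline_efficiency(n_ses, stages, stall_cycles):
    """Toy efficiency model (illustrative, not the paper's derivation):
    decoding n_ses syntax elements on a `stages`-deep pipeline, where each
    SE contributes its expected stall cycles from data dependencies.
    Efficiency = ideal cycle count / actual cycle count."""
    ideal = n_ses + stages - 1               # fill latency + one SE per cycle
    actual = ideal + sum(stall_cycles)       # plus dependency stalls
    return ideal / actual

# Without adaptive strategies, some SE types stall; with them, stalls vanish.
baseline = pipeline_efficiency(1000, 4, [0.5] * 1000)  # avg 0.5 stall per SE
adaptive = pipeline_efficiency(1000, 4, [0.0] * 1000)
print(round(baseline, 3), round(adaptive, 3))
```

Under this model, removing the stalls (as the adaptive strategies claim to) drives efficiency to its ideal value.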
Funding: Supported by the National Natural Science Foundation of China (No. 60403027).
Abstract: Grid computing is concerned with the sharing and coordinated use of diverse resources in distributed "virtual organizations". The heterogeneous, dynamic, and multi-domain nature of these environments raises challenging security issues that demand new technical approaches. Despite recent advances in access control approaches applicable to Grid computing, issues remain that impede the development of effective access control models for Grid applications, among them the lack of context-based models for access control and the reliance on identity- or capability-based access control schemes. An access control scheme that resolves these issues is presented, and a dynamically authorized role-based access control (D-RBAC) model extending RBAC with context constraints is proposed. The D-RBAC mechanisms dynamically grant permissions to users based on a set of contextual information collected from the system and the user's environment, while retaining the advantages of the RBAC model. The implementation architecture of D-RBAC for Grid applications is also described.
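The core D-RBAC idea, a permission is granted only when the role allows it and every context constraint currently holds, can be sketched directly. Role names, constraints, and the domain string below are hypothetical:

```python
from datetime import time

# Minimal sketch of dynamically authorized RBAC: role permissions as in
# plain RBAC, plus per-role context constraints evaluated at request time.
ROLE_PERMS = {"researcher": {"submit_job", "read_data"}}
ROLE_CONSTRAINTS = {
    "researcher": [
        lambda ctx: ctx["domain"] == "physics.grid.example",  # hypothetical
        lambda ctx: time(8, 0) <= ctx["clock"] <= time(20, 0),
    ],
}

def check_access(role, perm, ctx):
    """Grant only if the role holds the permission AND all of the role's
    context constraints are satisfied by the current context."""
    return (perm in ROLE_PERMS.get(role, set())
            and all(c(ctx) for c in ROLE_CONSTRAINTS.get(role, [])))

ctx = {"domain": "physics.grid.example", "clock": time(9, 30)}
print(check_access("researcher", "submit_job", ctx))        # True
ctx_night = dict(ctx, clock=time(23, 0))
print(check_access("researcher", "submit_job", ctx_night))  # False
```

The same user thus gains or loses a permission dynamically as the context changes, without any change to the role assignment itself.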
Funding: This research is supported by the National Natural Science Foundation of China (Grant No. 60725102).
Abstract: Response features of mitral cells in the olfactory bulb were examined using principal component analysis (PCA) to determine whether they contain information about odorant stimuli. Using a microwire electrode array to record from the olfactory bulb in freely breathing anesthetized rats, we recorded the responses of different mitral cells to saturated vapors of anisole (1 M), carvone (1 M), isobutanol (1 M), citral (1 M), and isoamyl acetate (1 M). The responses of single mitral cells to the same odorant varied over time, yet the response profiles showed similarity over a certain period, indicating that the response depended not only on the odor itself but was also associated with context. Furthermore, the responses of a single mitral cell to different odorants differed in firing rate. To recognize different odorant stimuli, we used four cells as a sensing group for classification with principal component analysis, selecting features of each cell's response that describe both temporal and frequency characteristics. The results showed that five different single-molecule odorants could be distinguished from each other. These data suggest that the action potentials of mitral cells may play a role in odor coding.
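The classification step, projecting response-feature vectors onto principal components and checking that odorants separate, can be sketched in the analytic two-feature case. The feature values below are made up for illustration; the study used richer temporal and frequency features from four cells:

```python
import math

# Two hypothetical response features per trial (e.g. mean firing rate and a
# frequency-band measure); project trials onto the first principal component.
trials = {  # odorant -> list of (feature1, feature2) trial vectors
    "anisole": [(10.1, 2.0), (9.8, 2.2)],
    "carvone": [(3.9, 7.1), (4.2, 6.8)],
}
data = [p for pts in trials.values() for p in pts]
mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
a = sum((x - mx) ** 2 for x, _ in data) / len(data)        # var(x)
c = sum((y - my) ** 2 for _, y in data) / len(data)        # var(y)
b = sum((x - mx) * (y - my) for x, y in data) / len(data)  # cov(x, y)

# Leading eigenpair of the 2x2 covariance [[a, b], [b, c]] in closed form.
lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
vx, vy = b, lam - a
norm = math.hypot(vx, vy)
pc1 = (vx / norm, vy / norm)

def project(p):
    """Score of one trial along the first principal component."""
    return (p[0] - mx) * pc1[0] + (p[1] - my) * pc1[1]

for name, pts in trials.items():
    print(name, [round(project(p), 2) for p in pts])
```

With well-separated response features, the PC1 scores of the two odorants fall into disjoint ranges, which is the basis for distinguishing the stimuli.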
Funding: This work is supported by the National Natural Science Foundation of China (No. 61672133), the Sichuan Science and Technology Program (No. 2019YFG0535), and the 111 Project (No. B17008).
Abstract: Recommender systems are rapidly transforming the digital world into intelligent information hubs. The valuable context information associated with users' prior transactions plays a vital role in determining user preferences for items and in rating prediction, and has been a hot research topic in collaborative filtering-based recommender systems for the last two decades. This paper presents a novel Context-Based Rating Prediction (CBRP) model with a unique similarity scoring estimation method. The proposed algorithm computes a context score for each candidate user to construct a similarity pool for the given subject user-item pair and chooses the most influential users to forecast the item ratings. The context scoring strategy has an inherent capability to incorporate multiple conditional factors to filter down to the most relevant recommendations. Compared with traditional similarity estimation methods, CBRP makes full use of neighboring collaborators' choices under various conditions. We conduct experiments on three publicly available datasets to evaluate the proposed method with random user-item pairs and obtain considerable improvement in prediction accuracy over the standard evaluation measures. We also evaluate prediction accuracy for every user-item pair in the system, and the results show that our proposed framework outperforms existing methods.
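The pipeline described, score candidates by context, keep a similarity pool of the most influential ones, and forecast from their ratings, can be sketched as follows. The scoring formula (context-tag overlap) and the data are our illustration, not the paper's actual CBRP scoring method:

```python
# (user, item) -> (rating given, context tags of that transaction)
ratings = {
    ("u1", "movie"): (4.0, {"weekend", "home"}),
    ("u2", "movie"): (5.0, {"weekend", "cinema"}),
    ("u3", "movie"): (2.0, {"weekday", "commute"}),
}

def predict(item, subject_ctx, pool_size=2):
    """Context score = size of context overlap with the subject pair;
    prediction = score-weighted average over the similarity pool."""
    scored = []
    for (user, it), (r, ctx) in ratings.items():
        if it == item:
            scored.append((len(ctx & subject_ctx), r))
    pool = sorted(scored, reverse=True)[:pool_size]  # top-scoring neighbours
    total = sum(s for s, _ in pool)
    return sum(s * r for s, r in pool) / total if total else None

print(predict("movie", {"weekend", "home"}))
```

Adding further conditional factors (time, location, mood) only changes the tag sets, which is the "multiple conditional factors" capability the abstract mentions.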
Abstract: In this paper, we propose an incremental method of Granular Networks (GN) to construct a conceptual and computational platform for Granular Computing (GrC). The essence of this network is to describe the associations between information granules, including fuzzy sets formed in both the input and output spaces. The context within which such relationships are formed is established by the system developer. Here, information granules are built using Context-driven Fuzzy Clustering (CFC), which develops clusters by preserving the homogeneity of the clustered patterns associated with the input and output spaces. Experimental results on the well-known Medical Imaging System (MIS) software-module dataset reveal that the incremental granular network performs well in comparison with approaches from the previous literature.
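One membership-update step in the style of context-driven fuzzy clustering can be sketched as below: standard fuzzy-c-means memberships for a pattern are scaled so they sum to that pattern's degree of membership in the current context rather than to 1. This is our reading of the CFC idea; the prototypes, context value, and one-dimensional data are illustrative:

```python
def cfc_memberships(x, centers, f_k, m=2.0):
    """FCM-style memberships of pattern x w.r.t. the cluster prototypes,
    constrained to sum to f_k (the pattern's context membership)."""
    d = [abs(x - v) or 1e-12 for v in centers]  # distances to prototypes
    u = []
    for i in range(len(centers)):
        denom = sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(len(d)))
        u.append(f_k / denom)                   # note: f_k, not 1
    return u

u = cfc_memberships(x=2.0, centers=[1.0, 4.0], f_k=0.6)
print([round(v, 3) for v in u], round(sum(u), 3))
```

Patterns weakly tied to the context (small f_k) thus contribute little to any cluster, which is how the context steers where granules form.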
Funding: The project (No. MSM0021622418) is supported by the Ministry of Education, Youth and Sports of the Czech Republic.
Abstract: Cartographic communication and support within emergency management (EM) are complicated issues, with demands that change according to the extent of the incident and the phase of the EM cycle. Keeping in mind the specifics of each purpose, spatial data used for map preparation and production must be visualized differently even for the same type of emergency incident (traffic accident, fire, or natural disaster). Context-based cartography is a promising methodology for dealing with the changing demands of an operational EM center. An overview of cartographic communication is presented within the context of an operational EM center, the activities of particular actors, and the use of maps supporting incident elimination. The authors respond to a series of questions, for example: What is the current cartographic support for operational EM in the Czech Republic under Digital Earth conditions? What possibilities are there to improve cartographic communication? How can contextual cartographic services be implemented in a Web environment, and how can the usability of the results be tested? The paper gives several examples of the use of cartographic technologies in map creation for various emergency situations.
Abstract: A similarity technique is employed as a means to map ontology concepts. In particular, context-dependent concept mapping is tackled, which needs contextual information from a knowledge taxonomy. Context-based semantic similarity differs from real-world similarity in that it requires contextual information to calculate similarity. The notion of semantic coupling is introduced to derive similarity for a taxonomy-based system; semantic coupling expresses the degree of semantic cohesiveness of a group of concepts toward a given context. To calculate semantic coupling effectively, the edge counting method is revisited for measuring basic semantic similarity, weighting the attributes that affect an edge's strength: depth-scaling effects, semantic relation types, and virtual connections. Furthermore, we show how the proposed edge counting method can be adapted to calculate context-based similarity. Thorough experimental results are provided for both edge counting and context-based similarity. The results of the proposed edge counting are encouraging compared with other combined approaches, and the context-based similarity also shows understandable results. The novel contributions of this paper are twofold. First, similarity is raised to a viable level for edge counting. Second, a mechanism is provided to derive context-based similarity in a taxonomy-based system, which has emerged as a hot issue in the literature on the Semantic Web, MDR, and other ontology-mapping environments.
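Depth-scaled edge counting can be sketched on a toy taxonomy: similarity decays with the weighted path length between two concepts through their lowest common ancestor, and deeper edges (finer distinctions) contribute less distance. The taxonomy, decay factor, and similarity formula are our assumptions; the paper also weights relation types and virtual connections, omitted here:

```python
# Toy is-a taxonomy: child -> parent (None marks the root).
PARENT = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
          "sparrow": "bird", "bird": "animal", "animal": None}

def path_to_root(c):
    path = []
    while c is not None:
        path.append(c)
        c = PARENT[c]
    return path

def depth(c):
    return len(path_to_root(c)) - 1

def weighted_similarity(a, b, depth_decay=0.8):
    """Similarity = 1 / (1 + weighted path length through the LCA);
    an edge lying deeper in the taxonomy contributes less distance."""
    pa, pb = path_to_root(a), path_to_root(b)
    lca = next(c for c in pa if c in pb)  # lowest common ancestor
    cost = 0.0
    for c in (a, b):
        while c != lca:                   # climb to the LCA, edge by edge
            cost += depth_decay ** depth(PARENT[c])
            c = PARENT[c]
    return 1.0 / (1.0 + cost)

print(round(weighted_similarity("cat", "dog"), 3))      # siblings
print(round(weighted_similarity("cat", "sparrow"), 3))  # distant concepts
```

Siblings near the leaves score higher than concepts joined only at the root, which plain unweighted edge counting would understate.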
基金supported by the National High-Technology Research and Development 863 Program of China under Grant No. 2006AA01A123National Science Fund for Distinguished Young Scholars under Grant No.60525202+1 种基金Program for Changjiang Scholars and Innovative Research Team in University under Grant No.IRT0652Defense Advanced Research Foundation of the General Armaments Department of the PLA under Grant Nos.9140A06060307JW0403 and 9140A06050208JW0414.
Abstract: Previous trust models mainly focus on reputation mechanisms based on explicit trust ratings, while the large amount of user-generated content and community context published on the Web is often ignored. Without enough information, previous trust models have several problems. First, they cannot determine in which field one user trusts another, so many models assume that trust exists in all fields. Second, some models are not able to delineate variations in trust scale, so they regard each user as trusting all of his friends to the same extent. Third, since these models only focus on explicit trust ratings, the trust matrix is very sparse. To solve these problems, we present RCCtrust, a trust model that combines reputation-, content-, and context-based mechanisms to provide more accurate, fine-grained, and efficient trust management for electronic communities. We extract trust-related information from user-generated content and community context on the Web to extend reputation-based trust models, and introduce role-based and behavior-based reasoning functionalities to infer users' interests and category-specific trust relationships. Following studies in sociology, RCCtrust exploits similarities between pairs of users to depict differentiated trust scales. The experimental results show that RCCtrust outperforms both a pure user-similarity method and a linear-decay trust-aware technique in accuracy and coverage for a recommender system.
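One ingredient of the approach, deriving graded, category-specific trust scales from user similarity rather than a single all-fields trust value, can be sketched with cosine similarity over per-category interaction profiles. The categories and counts below are made up:

```python
import math

# user -> interaction counts per category [tech, sports, politics]
profiles = {
    "alice": [12, 1, 3],
    "bob":   [10, 0, 4],
    "carol": [0, 15, 1],
}

def cosine(u, v):
    """Cosine similarity between two profile vectors: a graded value in
    [0, 1] usable as a differentiated trust scale per pair of users."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(round(cosine(profiles["alice"], profiles["bob"]), 3))    # similar users
print(round(cosine(profiles["alice"], profiles["carol"]), 3))  # dissimilar
```

Because the vectors are per category, trust inferred this way is already field-specific, addressing the first problem the abstract raises; RCCtrust additionally mines content and context to densify the sparse trust matrix.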
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 60333020 and 90207005.
Abstract: The Audio Video coding Standard (AVS) is established by the AVS Working Group of China. The main goal of AVS Part 7 is to provide high compression performance with relatively low complexity for mobile applications. There are three main low-complexity tools: the deblocking filter, context-based adaptive 2D variable length coding (2D-VLC), and direct intra prediction. These tools are presented and analyzed in turn. Finally, we compare the performance and decoding speed of AVS Part 7 with the H.264 baseline profile. The analysis and results indicate that AVS Part 7 achieves similar performance at lower cost.
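A concrete piece of AVS's low-storage entropy coding is the Exponential-Golomb code family used to serialize (run, level) pairs in its context-based 2D-VLC (as described for C2DVLC above). An order-0 encoder is a few lines; the mapping from (run, level) pairs to code numbers is table-driven in the standard and not reproduced here:

```python
def exp_golomb(n):
    """Order-0 Exp-Golomb codeword for non-negative code number n:
    m leading zeros followed by the (m+1)-bit binary of n + 1,
    where m = floor(log2(n + 1))."""
    x = n + 1
    m = x.bit_length() - 1
    return "0" * m + bin(x)[2:]

for n in range(5):
    print(n, exp_golomb(n))
```

Because the codewords are generated by rule rather than stored, the decoder needs no large VLC tables, which is exactly the storage saving the C2DVLC design exploits.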
Funding: The work in this paper is jointly funded by the Tianjin Municipality Natural Science Foundation (No. 18JCYBJC44500) and the National Social Science Foundation of China (No. 20BTQ084).
Abstract: The traditional student-oriented course evaluation has been the major method of assessing teaching effectiveness worldwide. Useful as it is, it has been widely and continuously criticized as an unfair, inaccurate, and unreliable measurement. In search of a more objective assessment of teaching effectiveness that also reflects the impact of context-based learning, we propose a theoretical approach from a unique perspective that recognizes teaching effectiveness as a result of the interplay between teacher, student, and context. The approach can be used to compute as well as to predict teaching effectiveness using machine and deep learning technologies, which brings strategic benefits to institutional management. In addition, we build into the approach a mechanism that uses tokens as incentives to assure the quality of subjective data input. An application framework for the approach is proposed, leveraging blockchain: each implementation of the framework by an institution is a decentralized application running on its chosen blockchain. It is envisioned that the implementations together will form a collective ecology of context-based relative teaching effectiveness, which has the potential to fundamentally impact academic practices beyond the measurement of teaching effectiveness. The theoretical approach provides a common language for delineating teaching effectiveness from the context-based relative perspective and is customizable during implementation. Teaching effectiveness assessment using the approach downplays the role of bias (subjectivity) and is hence more objective than traditional student-oriented course evaluation.