Abstract: In a conventional chromite beneficiation plant, a huge quantity of chromite is lost in the form of tailings. To recover this valuable mineral, a gravity concentrator, namely a wet shaking table, was used. Optimisation along with performance prediction of the unit operation is necessary for efficient recovery. In the present study, therefore, an artificial neural network (ANN) modelling approach was attempted to predict the performance of the wet shaking table in terms of grade (%) and recovery (%). A three-layer feed-forward neural network (3:3–11–2:2) was developed by varying the major operating parameters: wash water flow rate (L/min), deck tilt angle (degrees) and slurry feed rate (L/h). The values predicted by the neural network model show excellent agreement with the experimental values.
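The abstract gives only the network topology (three inputs, eleven hidden neurons, two outputs). As a rough illustration of such a model, the sketch below sets up an equivalent feed-forward regressor with scikit-learn; the library choice, activation, solver and the placeholder data are assumptions made for illustration, not details taken from the study.

```python
# Illustrative sketch (not the authors' model): a 3-11-2 feed-forward
# network mapping three shaking-table settings to grade and recovery.
# The data below are random placeholders; real use would load the
# experimental measurements instead.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Inputs: wash water flow rate (L/min), deck tilt angle (deg), slurry feed rate (L/h)
X = rng.uniform([4.0, 2.0, 60.0], [10.0, 8.0, 180.0], size=(40, 3))
# Outputs: grade (%) and recovery (%) -- placeholder targets only
y = rng.uniform([40.0, 50.0], [55.0, 90.0], size=(40, 2))

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# One hidden layer of 11 neurons, matching the 3:3-11-2:2 topology
model = MLPRegressor(hidden_layer_sizes=(11,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X_scaled, y)

# Predict grade/recovery for a new operating point (values are illustrative)
new_point = scaler.transform([[7.0, 5.0, 120.0]])
print(model.predict(new_point))  # -> [[grade %, recovery %]]
```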
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61472130 and 61702174) and a project funded by the China Postdoctoral Science Foundation.
Abstract: For name-based routing/switching in NDN, the key challenges are managing large-scale forwarding tables, looking up long names of variable length, and coping with frequent updates. Hashing combined with appropriate length detection is a straightforward yet efficient solution. A binary search strategy can reduce the worst-case number of hash probes required. However, to guarantee a correct search path in such a scheme, either backtracking or redundant storage of some prefixes is needed, which leads to performance or memory overhead. In this paper, we study the binary search in depth and propose a novel mechanism that ensures a correct search path without additional backtracking costs or redundant memory consumption. Along any binary search path, a Bloom filter is employed at each branching point to verify whether the corresponding prefix is present, instead of storing that prefix there. In this way, memory efficiency is improved significantly, at the cost of a Bloom filter check before each hash probe. Evaluation experiments on both real-world and randomly synthesized data sets clearly demonstrate the advantages of our approach.
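The abstract describes the mechanism only at a high level. The sketch below is a simplified reconstruction of the general idea, binary search over name-prefix lengths in which small Bloom filters at the branching lengths stand in for redundantly stored marker prefixes. The data structures, the per-length Bloom filters and the fallback to the lower half on a false lead are all our own assumptions for illustration, not the paper's actual design (which is specifically engineered to avoid such backtracking).

```python
# Simplified reconstruction (our assumptions, not the paper's design):
# binary search over name-prefix lengths, with a Bloom filter at each
# branching length instead of redundantly stored "marker" prefixes.
import hashlib


class BloomFilter:
    """Tiny Bloom filter over strings (sizes chosen arbitrarily here)."""

    def __init__(self, bits=1 << 16, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.array[p // 8] >> (p % 8) & 1 for p in self._positions(item))


def components(name):
    return [c for c in name.split("/") if c]


def prefix(name, k):
    return "/" + "/".join(components(name)[:k])


class NameTable:
    """Hash table of stored prefixes plus one Bloom filter per branching length."""

    def __init__(self, max_len=8):
        self.max_len = max_len               # names assumed to have <= max_len components
        self.fib = {}                        # full prefix -> next hop (the only stored entries)
        self.blooms = {m: BloomFilter() for m in range(1, max_len + 1)}

    def insert(self, name, next_hop):
        length = len(components(name))
        self.fib[prefix(name, length)] = next_hop
        lo, hi = 1, self.max_len
        while lo <= hi:                      # walk the binary-search path of this prefix
            mid = (lo + hi) // 2
            if mid < length:                 # a longer entry lies to the right:
                self.blooms[mid].add(prefix(name, mid))  # mark it, store nothing here
                lo = mid + 1
            elif mid > length:
                hi = mid - 1
            else:
                break

    def longest_prefix_match(self, name):
        return self._search(name, len(components(name)), 1, self.max_len)

    def _search(self, name, ncomp, lo, hi):
        best = None
        while lo <= hi:
            mid = (lo + hi) // 2
            if mid > ncomp:                  # the name has no prefix this long
                hi = mid - 1
                continue
            p = prefix(name, mid)
            if p in self.fib:                # real entry: remember it, then try longer
                best = (p, self.fib[p])
                lo = mid + 1
            elif self.blooms[mid].might_contain(p):
                # A longer matching entry probably exists; search the upper half.
                # If that turns out to be a false lead, this sketch simply falls
                # back to the lower half -- the very backtracking the paper's
                # mechanism is designed to avoid.
                deeper = self._search(name, ncomp, mid + 1, hi)
                if deeper:
                    return deeper
                lower = self._search(name, ncomp, lo, mid - 1)
                return lower if lower else best
            else:                            # neither entry nor marker: go shorter
                hi = mid - 1
        return best


# Illustrative usage with made-up names:
table = NameTable()
table.insert("/cn/edu/hnu/videos", next_hop=1)
table.insert("/cn/edu", next_hop=2)
print(table.longest_prefix_match("/cn/edu/hnu/videos/seg0"))  # ('/cn/edu/hnu/videos', 1)
```

In this toy version, each hash-table probe is preceded (or replaced) by a cheap Bloom-filter check, mirroring the trade-off described in the abstract: less memory for marker prefixes in exchange for an extra membership test at every branching point.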