In this paper, a novel 10-transistor (10T) Static Random Access Memory (SRAM) cell is proposed. Read and write bit lines are decoupled in the proposed cell. A feedback loop-cutting, single-bit-line write scheme is employed in the 10T SRAM cell to reduce active power consumption during the write operation. Read and write access times are measured for the proposed cell architecture through Eldo SPICE simulation using TSMC 90 nm Complementary Metal Oxide Semiconductor (CMOS) technology at various process corners. Leakage current measurements in the hold mode of operation show that the proposed cell draws 12.31 nA, compared with 40.63 nA for the standard 6-transistor (6T) cell; the 10T cell therefore also outperforms the 6T cell in terms of leakage power.
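The reported hold-mode currents translate directly into a leakage-power saving; a quick arithmetic check (the 1 V supply voltage is an illustrative assumption, not a figure from the paper):

```python
# Hold-mode leakage comparison using the currents reported in the abstract.
I_6T = 40.63e-9   # leakage current of the standard 6T cell (A)
I_10T = 12.31e-9  # leakage current of the proposed 10T cell (A)

reduction = 1 - I_10T / I_6T   # fractional reduction in leakage current

# At a fixed supply voltage, leakage power scales linearly with leakage
# current, so the same fraction applies to leakage power.
V_dd = 1.0                     # assumed supply voltage, not from the paper
P_6T, P_10T = V_dd * I_6T, V_dd * I_10T

print(f"leakage reduced by {reduction:.1%}")   # ≈ 69.7%
```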
The idea of linear Diophantine fuzzy set (LDFS) theory with its control parameters is a strong model for machine learning and optimization under uncertainty. The activity times in the critical path method (CPM) are static (deterministic), whereas in the Project Evaluation and Review Technique (PERT) they are probabilistic. This study proposes a novel project review and assessment methodology for a project network in a linear Diophantine fuzzy (LDF) environment. The LDF expected task time, LDF variance, LDF critical path, and LDF total expected time of the project network are all computed using LDF numbers as the time of each activity. The primary premise of the LDF-PERT approach is to address ambiguities in project network activity times more simply than other approaches such as conventional PERT, fuzzy PERT, and so on. LDF-PERT is an efficient approach to analyzing symmetries in fuzzy control systems to seek an optimal decision. We also present a new approach for locating the LDF-CPM in a project network with uncertain and erroneous activity timings. When the available resources and activity times are imprecise and unpredictable, this strategy can help decision-makers make better judgments in a project. A comparative analysis of the proposed technique with existing techniques is also discussed, and the suggested techniques are demonstrated with two suitable numerical examples.
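The LDF-PERT quantities generalize the classical crisp PERT estimates; as a reference point, a minimal sketch of the crisp formulas on an illustrative three-activity serial network (the activity times are invented, not taken from the paper's examples):

```python
# Classical (crisp) PERT estimates: expected task time and variance from
# optimistic (a), most-likely (m), and pessimistic (b) activity times.
def pert_expected(a, m, b):
    return (a + 4 * m + b) / 6.0

def pert_variance(a, b):
    return ((b - a) / 6.0) ** 2

# A tiny serial project network (all activities on the critical path), so the
# total expected time and variance are simple sums.
activities = {"A": (2, 4, 6), "B": (3, 5, 9), "C": (1, 2, 3)}
expected_total = sum(pert_expected(*t) for t in activities.values())
total_variance = sum(pert_variance(t[0], t[2]) for t in activities.values())
print(expected_total, total_variance)
```

The LDF versions replace each crisp time with an LDF number and apply the corresponding LDF arithmetic, but the flow of the computation is the same.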
In this work, a power-efficient butterfly-unit-based FFT architecture is presented. The butterfly unit is designed using floating-point fused arithmetic units: a two-term dot-product unit and an add-subtract unit, both operating on complex data values. A modified fused floating-point two-term dot product and an enhanced model for the radix-4 FFT butterfly unit are proposed. The modified fused two-term dot product is designed using a radix-16 Booth multiplier, which reduces switching activity compared with the radix-8 Booth multiplier of the existing system and also reduces the required area. The proposed architecture is implemented efficiently for the radix-4 decimation-in-time (DIT) FFT butterfly with the two floating-point fused arithmetic units. The enhanced architecture is synthesized, implemented, placed, and routed on an FPGA device using the Xilinx ISE tool. It is observed that the radix-4 DIT fused floating-point FFT butterfly requires 50.17% less area and 12.16% less power than the existing methods, and the proposed enhanced model requires 49.82% less area on the FPGA device than the proposed design. In addition, a reusability technique further reduces power consumption, yielding an 11.42% power reduction of the enhanced model over the proposed design.
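A behavioural sketch of how a radix-4 DIT butterfly decomposes into two-term dot products, the fused primitive the paper builds in hardware (this models only the arithmetic, not the floating-point fusion or the twiddle pipeline):

```python
# Radix-4 DIT butterfly expressed over the fused two-term dot product
# y = a*b + c*d on complex operands.
def fused_dot2(a, b, c, d):
    """Two-term complex dot product, the fused primitive."""
    return a * b + c * d

def radix4_dit_butterfly(x0, x1, x2, x3):
    # Decompose the 4-point DFT into two-term dot products.
    s0 = fused_dot2(x0, 1, x2, 1)    # x0 + x2
    s1 = fused_dot2(x0, 1, x2, -1)   # x0 - x2
    s2 = fused_dot2(x1, 1, x3, 1)    # x1 + x3
    s3 = fused_dot2(x1, 1, x3, -1)   # x1 - x3
    return (s0 + s2,                 # X0
            s1 - 1j * s3,            # X1
            s0 - s2,                 # X2
            s1 + 1j * s3)            # X3
```

The four outputs match the 4-point DFT X_k = Σ x_n (-j)^(nk), which is what the hardware butterfly computes after twiddle multiplication.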
Tolerance charting is an effective tool to determine the optimal allocation of working dimensions and working tolerances such that the blueprint dimensions and tolerances are achieved while meeting cost objectives. The selection of machining datums and the allocation of tolerances are critical in any machining process planning, as they directly affect setup methods, machine-tool selection, and machining time. This paper focuses on selecting optimum machining datums and machining tolerances simultaneously in process planning. A dynamic tolerance-charting constraint scheme is developed and implemented in the optimization procedure. An optimization model is formulated for selecting machining datums and tolerances and solved with the Elitist Non-Dominated Sorting Genetic Algorithm (NSGA-II). The computational results indicate that the proposed methodology is capable and robust in finding the optimal machining datum set and tolerances.
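The core step of NSGA-II is non-dominated sorting of the population into Pareto fronts; a minimal sketch on illustrative two-objective points (the paper's actual objectives are machining-cost and tolerance criteria):

```python
# Non-dominated sorting, the ranking step at the heart of NSGA-II
# (minimisation of all objectives assumed).
def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(non_dominated_sort(pts))   # [[0, 1, 2], [3], [4]]
```

The elitist variant additionally carries the best fronts of the combined parent and offspring populations into the next generation, with crowding distance breaking ties within a front.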
Al-7075 alloy-based matrix composites, reinforced with mixtures of silicon carbide (SiC) and boron carbide (B4C) particles and known as hybrid composites, have been fabricated by the stir casting technique (liquid metallurgy route) and optimized over parameters such as sliding speed, applied load, sliding time, and percentage of reinforcement by the Taguchi method. The specimens were examined with a Rockwell hardness testing machine, a pin-on-disc tribometer, a Scanning Electron Microscope (SEM), and an optical microscope. A plan of experiments generated through Taguchi’s technique was used to conduct experiments based on an L27 orthogonal array. ANOVA and the developed regression equations were used to find the optimum wear and coefficient of friction under the influence of sliding speed, applied load, sliding time, and percentage of reinforcement. The dry sliding wear resistance was analyzed on the basis of the “smaller-the-better” criterion. Finally, confirmation tests were carried out to verify the experimental results.
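Taguchi's smaller-the-better criterion ranks each parameter setting by the signal-to-noise ratio S/N = -10·log10(mean(y²)), where larger S/N means lower (better) wear; a sketch with invented wear readings, not values from the paper:

```python
import math

# Taguchi smaller-the-better signal-to-noise ratio over repeated responses y_i.
def sn_smaller_the_better(responses):
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

# Illustrative wear readings for two parameter settings; the setting with the
# higher S/N ratio is preferred.
print(sn_smaller_the_better([0.12, 0.10, 0.11]))   # low wear -> high S/N
print(sn_smaller_the_better([0.30, 0.28, 0.33]))   # high wear -> low S/N
```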
A pulsed, picosecond Nd:YAG laser with a wavelength of 532 nm is used to texture the surface of grade 5 titanium alloy (Ti–6Al–4V) to minimize its wear rate. The wear properties of the base samples and the laser-surface-textured samples are analyzed by conducting wear tests under a sliding condition using pin-on-disk equipment. The wear tests are based on the Box–Behnken design, and the interaction of the process parameters is analyzed using response surface methodology. The wear analysis is conducted by varying the load, disc rotating speed, and track diameter at room temperature with a sliding distance of 1500 m. The results demonstrate that the laser-textured surfaces exhibit a lower coefficient of friction and better anti-wear properties than the non-textured surfaces. A regression model is developed for the wear analysis of the titanium alloy using the analysis-of-variance technique. The analysis also shows that the applied load and sliding distance have the greatest effect on wear behavior, followed by the wear track diameter. Optimum operating conditions are suggested based on the results of the numerical optimization approach.
Glaucoma is a chronic, progressive optic neurodegenerative disease that leads to vision deterioration and, in most cases, increased pressure within the eye. A backup of fluid in the eye raises this pressure and damages the optic nerve, so early detection, diagnosis, and treatment help prevent vision loss. In this paper, a novel method is proposed for the early detection of glaucoma using a combination of magnitude and phase features from digital fundus images. Local binary patterns (LBP) and Daugman’s algorithm are used for feature extraction, and histogram features are computed for both the magnitude and phase components. The Euclidean distance between feature vectors is analyzed to predict glaucoma. The performance of the proposed method is compared with higher-order spectra (HOS) features in terms of sensitivity, specificity, classification accuracy, and execution time. The proposed system achieves 95.45% sensitivity, specificity, and classification accuracy, and its execution time is shorter than that of the existing HOS-based method. Hence, the proposed system is more accurate, reliable, and robust than the existing approach for predicting glaucoma.
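A minimal sketch of the LBP magnitude-feature step, assuming the standard 3×3, 8-neighbour coding (the paper's exact sampling and the Daugman phase step are not reproduced here):

```python
# 3x3 local binary pattern: each pixel is coded by thresholding its eight
# neighbours against the centre; the histogram of codes over the image forms
# the feature vector.
def lbp_code(patch):
    """patch: 3x3 list of lists; returns the 8-bit LBP code of the centre."""
    c = patch[1][1]
    # Neighbours taken clockwise from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))   # 241
```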
Cloud storage has gained increasing popularity, as it helps cloud users arbitrarily store and access their outsourced data. Numerous public auditing constructions have been presented to ensure data transparency; however, most are built on public key infrastructure, so to check data integrity the auditor must first verify the legality of the public key certificate, which adds an immense workload. Storage services are expected to track the quality of the stored data regularly so that the data on the remote server remain intact. A central problem is how users can check data integrity without keeping a local backup of their files; moreover, it is often impractical for a resource-limited user to perform an integrity inspection that requires retrieving the whole data file. In this work, a stable and effective ID-based auditing scheme that uses machine learning techniques is proposed to improve productivity and strengthen the protection of ID-based audit protocols. The study tackles confidentiality and reliability in the identity-focused public audit framework. The scheme is proven secure, with its security resting on the standard Computational Diffie-Hellman (CDH) assumption.
In the present work, the pool boiling critical heat flux (CHF), transient heat transfer characteristics, and bonding strength of thin Ni-Cr wire with aqueous reduced graphene oxide (rGO) nanofluids are experimentally studied. Results indicate that: (i) the CHF of 0.01, 0.05, 0.1, 0.2, and 0.3 g·L^(-1) concentrations of rGO-water nanofluids varies from 1.42 to 2.40 MW·m^(-2); (ii) the CHF remains the same for the tested samples during the transient heat transfer studies; and (iii) the CHF stays constant for up to 10 tests when the nanocoated Ni-Cr wire is tested with DI water, with deterioration beyond this point, which implies possible peeling of the rGO layer below the critical coating thickness.
The present work analyzes the performance of Modified Cooperative Subchannel Allocation (CSA) algorithms used in the Alamouti Decode-and-Forward (Alamouti DF) relaying protocol for wireless multi-user Orthogonal Frequency Division Multiple Access (OFDMA) systems. In addition, the approximate Symbol Error Rate (SER) of the Alamouti DF relaying protocol with the Cooperative Maximum Ratio Combining (C-MRC) technique is analyzed and compared with the SER upper bound; the approximate SER is an asymptotically tight bound at higher Signal-to-Noise Ratio (SNR). From this asymptotically tight approximate SER, a Particle Swarm Optimization (PSO) based Power Allocation (PA) is determined for the Alamouti DF relaying protocol. The simulation results suggest that the Modified Throughput-based Subchannel Allocation Algorithm improves throughput by 6% to 33% over the existing cooperative diversity protocol, while the Modified Fairness-based Subchannel Allocation Algorithm improves fairness among the multiple users by 7.2% to 17% against the existing cooperative diversity protocol.
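A minimal PSO sketch of the power-allocation step; the objective below is an illustrative convex surrogate over the source/relay power split p ∈ [0, 1], not the paper's asymptotic SER expression:

```python
import random

# Stand-in objective: penalises starving either the source (p -> 0) or the
# relay (p -> 1); symmetric, so the optimum sits at p = 0.5.
def ser_surrogate(p):
    return 1.0 / (p + 0.05) + 1.0 / (1.0 - p + 0.05)

def pso(objective, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal one-dimensional particle swarm optimisation."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)                       # per-particle best positions
    gbest = min(xs, key=objective)         # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]                      # inertia
                     + 1.5 * r1 * (pbest[i] - xs[i])  # cognitive pull
                     + 1.5 * r2 * (gbest - xs[i]))    # social pull
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
            if objective(xs[i]) < objective(gbest):
                gbest = xs[i]
    return gbest

print(pso(ser_surrogate, 0.0, 1.0))   # converges near the symmetric optimum 0.5
```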
This paper introduces a new hybrid Vedic algorithm to increase the speed of the multiplier, combining the principles of the Nikhilam sutra and the Karatsuba algorithm. Vedic mathematics is a mathematical system for solving complex computations in an easier manner, with specific sutras for multiplication. The Nikhilam sutra is one such sutra, but it has some limitations; to overcome them, it is combined with the Karatsuba algorithm. High-speed applications require compact, high-speed devices, and multipliers normally dominate the computation power. In this paper, a new algorithm for the multiplication of binary numbers is proposed based on Vedic mathematics. The novelty of the algorithm lies in calculating the remainder using the complement method: the size of the remainder is always set to N - 1 bits for any combination of inputs. The multiplier structure is designed based on the Karatsuba algorithm, so an N × N-bit multiplication is performed through (N - 1)-bit multiplications. Numerical strength reduction is achieved through the Karatsuba algorithm, and the results show that the reduction in hardware leads to a reduction in delay.
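The Karatsuba split the multiplier structure builds on replaces four half-size products with three; a sketch over decimal integers for readability (the hardware operates on binary operands):

```python
# Karatsuba multiplication: x*y from three half-size products instead of four.
def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    base = 10 ** n
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    high = karatsuba(xh, yh)
    low = karatsuba(xl, yl)
    # Both cross terms come from a single extra product.
    mid = karatsuba(xh + xl, yh + yl) - high - low
    return high * base * base + mid * base + low

print(karatsuba(1234, 5678))   # 7006652
```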
Vedic mathematics is the system of mathematics followed in ancient India, and it is applied in various mathematical branches. The word “Vedic” represents the storehouse of all knowledge, and with Vedic mathematics arithmetical problems are solved easily. Its algorithms are formed from 16 sutras and 13 up-sutras, each with its own limitations. Here, two methods are considered: the Nikhilam sutra and the Karatsuba algorithm. In this research paper, a novel algorithm for binary multiplication based on Vedic mathematics is designed using a bit-reduction technique. Although the Nikhilam sutra performs multiplication, it is a special-case method and is not used in all applications. The remainder is derived from this sutra by reducing the remainder size to N - 2 bits, and the number of remainder bits is constantly maintained at N - 2. The overall multiplier structure is designed using the Karatsuba algorithm; unlike the conventional Karatsuba algorithm, the proposed algorithm requires only one multiplier of N - 2 bits. The speed of the proposed algorithm is improved while balancing area and power. Even though there is a deviation in the lower-order bits, the method shows a larger difference at higher bit lengths.
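The Nikhilam idea, sketched in plain Python: for operands near a base B, the product reduces to one small multiplication of the two deficits plus shifts and adds (base 10 here for readability; the paper works in binary):

```python
# Nikhilam ("all from 9 and the last from 10") multiplication near a base B:
#   deficits a = B - x, b = B - y
#   x*y = (x - b)*B + a*b, which needs only one small product of the deficits.
def nikhilam(x, y, base):
    a, b = base - x, base - y
    return (x - b) * base + a * b

print(nikhilam(97, 96, 100))   # 9312
print(97 * 96)                 # 9312
```

The identity is exact for any operands, but the deficit product a·b is only "small" when both operands are close to the base, which is why the sutra is a special-case method.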
Automatic palmprint identification has received much attention in security applications and law enforcement. The performance of a palmprint identification system is improved by means of feature extraction and classification. Feature-extraction methods such as subspace learning are highly sensitive to rotation, translation, and illumination variations in image identification, and the Histogram of Oriented Lines (HOL) has not yet achieved promising performance for palmprint recognition. In this paper, we propose a new palmprint descriptor named Improved Histogram of Oriented Lines (IHOL) as an alternative to HOL. IHOL is not very sensitive to changes in translation and illumination and is robust against small transformations, since small translations and rotations make no change in its histogram value adjustment. The experimental results show that IHOL with Principal Component Analysis (PCA) subspace learning can achieve high recognition rates: compared with the existing HOL method, the proposed method improves recognition by 1.30% on the PolyU I database (IHOL with cosine distance) and by 2.36% on the COEP database (IHOL with Euclidean distance).
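The two matching distances used in the experiments, sketched over plain histogram feature vectors (the vectors below are illustrative):

```python
import math

# Euclidean and cosine distances between histogram feature vectors, the two
# matchers compared on the PolyU I and COEP databases.
def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)   # 0 for parallel vectors, up to 2 for opposed

h1 = [0.2, 0.5, 0.3]
h2 = [0.1, 0.6, 0.3]
print(euclidean(h1, h2), cosine_distance(h1, h2))
```

Cosine distance ignores overall histogram magnitude and compares only the shape, which is one reason the two matchers can rank candidates differently.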
In this paper, we introduce the notion of intuitionistic fuzzy α-generalized closed sets in intuitionistic fuzzy minimal structure spaces and investigate some of their properties. Further, we introduce and study the concept of intuitionistic fuzzy α-generalized minimal continuous functions.
Breast cancer is a type of cancer responsible for high mortality rates among women, and its severity always calls for a promising approach to earlier detection. In light of this, the proposed research leverages the representation ability of a pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the transfer-learning model is modified so that it focuses more on tumor cells in the input mammogram. Accordingly, the work proposes an EfficientNet-B0 with a spatial attention layer and XGBoost (ESA-XGBNet) for binary classification of mammograms. The model is trained, tested, and validated using original and augmented mammogram images from three public datasets, namely the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracies of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) are obtained with the proposed ESA-XGBNet architecture, compared with existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the attention-guided Grad-CAM-based explainable AI technique.
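A minimal sketch of spatial-attention gating of the kind added to EfficientNet-B0: per spatial location, channel-wise average- and max-pooling feed a sigmoid gate that rescales the feature map so tumour-like regions can be emphasised. The fixed scalar weights here are a stand-in for the learned convolution of the real layer:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def spatial_attention(fmap, w_avg=1.0, w_max=1.0, bias=0.0):
    """fmap: [channels][height][width] nested lists; returns the gated map.

    w_avg, w_max, bias are illustrative fixed weights; the real layer learns
    a convolution over the pooled maps.
    """
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    gated = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            pixel = [fmap[c][i][j] for c in range(C)]
            avg_p, max_p = sum(pixel) / C, max(pixel)
            att = sigmoid(w_avg * avg_p + w_max * max_p + bias)  # in (0, 1)
            for c in range(C):
                gated[c][i][j] = fmap[c][i][j] * att
    return gated
```

The gated features would then be flattened and handed to XGBoost for the final benign/malignant decision.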
Medical data mining has become an essential task in the healthcare sector to secure the personal and medical data of patients under a privacy policy. In this setting, several authentication and accessibility issues emerge when protecting the sensitive details of patients from publication in the open domain. To solve this problem, a Multi-Attribute Case-based Privacy Preservation (MACPP) technique is proposed in this study to enhance the security of privacy-preserved data. Private information can be any attribute information categorized as a sensitive log in a patient’s records. The semantic relation between transactional patient records and access rights is estimated based on the mean average value to distinguish sensitive from non-sensitive information. In addition, a crypto hidden policy is applied to encrypt the sensitive data through symmetric standard key-log verification, which protects personalized sensitive information. Further, linear integrity verification provides authentication rights to verify the data, improves the performance of the privacy-preserving technique against intruders, and assures high security in the healthcare setting.
Group communication is widely used by most emerging network applications such as telecommunication, video conferencing, simulation, and other distributed and interactive systems. Secure group communication plays a vital role in providing integrity, authenticity, confidentiality, and availability of the messages delivered among group members, whether communication occurs between groups or within a group. In secure group communications, the time cost of key updating upon member join and departure is an important aspect of the quality of service, particularly in large groups with highly active membership. Hence, this paper aims to achieve better cost and time efficiency through an improved DC multicast routing protocol that exposes the path between the nodes participating in the group communication. During this process, each node constructs an adaptive Ptolemy decision tree for generating the contributory key. Each node holds three keys, which are exchanged between the nodes to derive the group key for secure and cost-efficient group communication. The rekeying process is performed whenever a member leaves or joins the group. The performance of the novel approach is measured on important factors such as computational and communication cost, the rekeying process, and group formation. The study concludes that the technique reduces the computational and communication cost of secure group communication when compared with other existing methods.
Characterized by recurrent and rapid seizures, epilepsy is a great threat to human livelihood. Abnormal transient behavior of neurons in the cortical regions of the brain leads to the seizures that characterize epilepsy, and these seizures severely dampen the physical and mental activities of the patient. The electroencephalogram (EEG) is a significant clinical tool for the study, analysis, and diagnosis of epilepsy: EEG signals greatly aid clinical experts in detecting seizures and serve as an important tool for the analysis of brain disorders, especially epilepsy. In this paper, the high-dimensional EEG data are reduced to a low dimension using techniques such as Fuzzy Mutual Information (FMI), Independent Component Analysis (ICA), Linear Graph Embedding (LGE), Linear Discriminant Analysis (LDA), and Variational Bayesian Matrix Factorization (VBMF). After dimensionality reduction, neural networks such as the Cascaded Feed-Forward Neural Network (CFFNN), Time Delay Neural Network (TDNN), and Generalized Regression Neural Network (GRNN) are used as post-classifiers for classifying epilepsy risk levels from the EEG signals. The benchmark parameters used here are Performance Index (PI), Quality Values (QV), time delay, accuracy, specificity, and sensitivity.
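The benchmark sensitivity, specificity, and accuracy follow directly from confusion-matrix counts; the counts below are illustrative, not results from the paper:

```python
# Classifier benchmark metrics from confusion-matrix counts:
# tp/fn are seizure-risk cases predicted correctly/missed,
# tn/fp are normal cases predicted correctly/falsely flagged.
def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

print(metrics(tp=45, tn=40, fp=5, fn=10))   # (0.818..., 0.888..., 0.85)
```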
With the emergence of the Internet of Things (IoT), embedded systems have changed in dimensionality and are applied in various domains such as healthcare, home automation, and especially Industry 4.0. These embedded IoT devices are mostly battery-driven, and Dynamic Random-Access Memory (DRAM)-centered main memory has been identified as the most significant source of high energy use in them. To achieve low power consumption in these devices, non-volatile memory (NVM) devices such as Phase-change Random Access Memory (PRAM) and Spin-Transfer Torque Magnetic Random-Access Memory (STT-RAM) are becoming popular main-memory alternatives because of features such as high density, byte addressability, high scalability, and low power intake. Additionally, Non-Volatile Random-Access Memory (NVRAM) is widely adopted to save data in embedded IoT devices. NVM and flash memories have a limited lifetime, so it is mandatory to manage NVRAM-based embedded devices with an intelligent controller that accounts for the endurance issue. To address this challenge, the paper proposes a powerful, lightweight, machine-learning-based, workload-adaptive write scheme for NVRAM that can increase the lifetime and reduce the energy consumption of the processors. The proposed system consists of three phases: Workload Characterization, Intelligent Compression, and Memory Allocators. These phases distribute the write cycles to NVRAM according to the energy-time consumption and the number of data bytes. Extensive experiments are carried out using an IoMT (Internet of Medical Things) benchmark, in which endurance factors such as application delay, energy, and write-time factors are evaluated and compared with different existing algorithms.
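A toy sketch of the endurance-aware allocation idea: route each write to the least-written block so wear spreads evenly (the paper's scheme additionally performs workload characterization and learned compression, which are not modelled here):

```python
# Minimal wear-aware NVRAM write allocator: every write goes to the block
# with the lowest write counter, keeping wear balanced across blocks.
class WearAwareNVRAM:
    def __init__(self, n_blocks):
        self.writes = [0] * n_blocks      # per-block write counters
        self.data = [None] * n_blocks

    def write(self, payload):
        blk = min(range(len(self.writes)), key=self.writes.__getitem__)
        self.data[blk] = payload
        self.writes[blk] += 1
        return blk

mem = WearAwareNVRAM(4)
for i in range(8):
    mem.write(f"record-{i}")
print(mem.writes)   # [2, 2, 2, 2] -- writes spread evenly
```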
The Internet of Things (IoT) and cloud computing are gaining popularity due to their numerous advantages, including the efficient utilization of internet and computing resources, and many more IoT applications have come into extensive use in recent years. For instance, healthcare applications execute computations on users’ private data stored on cloud servers. However, the main obstacles to the wide acceptance and usage of these emerging technologies are security and privacy. Many healthcare data management applications have emerged, offering solutions for distinct circumstances, but existing systems still suffer from specific security issues, a limited privacy-preserving rate, information loss, and so on, which significantly reduce overall system performance. A unique blockchain-based technique is proposed to improve anonymity in data access and data privacy and thereby overcome the above-mentioned issues. Initially, a registration phase is performed for the device and the user. The Geo-Location and IP Address values collected during registration are then converted into hash values using the Adler-32 hashing algorithm, and the private and public keys are generated by the key generation centre. Authentication is then performed through login. The user submits a request to the blockchain server, which redirects the request to the associated IoT device in order to obtain the sensed IoT data. The sensed data is anonymized on the device and stored in the cloud server using the Linear Scaling based Rider Optimization algorithm with integrated KL Anonymity (LSR-KLA) approach.
After that, the Time-stamp-based Public and Private Key Schnorr Signature (TSPP-SS) mechanism is used to permit the authorized user to access the data, and the blockchain server tracks the entire transaction. The experimental findings show that the proposed LSR-KLA and TSPP-SS techniques provide better performance in terms of a higher privacy-preserving rate and lower information loss, execution time, and Central Processing Unit (CPU) usage than the existing techniques. Thus, the proposed method allows better data privacy in the smart healthcare network.
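A sketch of two steps in the pipeline: Adler-32 hashing of the registration Geo-Location/IP string, and a Schnorr sign/verify round standing in for TSPP-SS. The group parameters are toy values and the timestamp handling is omitted; a real deployment needs a cryptographically sized group:

```python
import hashlib
import random
import zlib

# Toy Schnorr group: g = 4 generates the order-11 subgroup of Z_23* (toy
# parameters for illustration only, nowhere near cryptographic strength).
P, Q, G = 23, 11, 4

def h(*parts):
    """Hash-to-exponent, reduced into the group order."""
    digest = hashlib.sha256("|".join(str(p) for p in parts).encode()).digest()
    return int.from_bytes(digest, "big") % Q

def keygen(rng):
    x = rng.randrange(1, Q)          # private key
    return x, pow(G, x, P)           # (private, public)

def sign(x, msg, rng):
    k = rng.randrange(1, Q)          # per-signature nonce
    r = pow(G, k, P)
    e = h(r, msg)
    return r, (k + e * x) % Q        # signature (r, s)

def verify(pub, msg, sig):
    r, s = sig
    # g^s == r * pub^e  (mod P)  iff  s == k + e*x in the exponent.
    return pow(G, s, P) == (r * pow(pub, h(r, msg), P)) % P

rng = random.Random(7)
# Registration step: Adler-32 of the Geo-Location/IP pair (illustrative values).
reg_hash = zlib.adler32(b"13.0827,80.2707|192.168.1.10")
x, pub = keygen(rng)
sig = sign(x, reg_hash, rng)
print(verify(pub, reg_hash, sig))   # True
```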
Funding: supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. GRANT3862].
文摘The idea of linear Diophantine fuzzy set(LDFS)theory with its control parameters is a strong model for machine learning and optimization under uncertainty.The activity times in the critical path method(CPM)representation procedures approach are initially static,but in the Project Evaluation and Review Technique(PERT)approach,they are probabilistic.This study proposes a novel way of project review and assessment methodology for a project network in a linear Diophantine fuzzy(LDF)environment.The LDF expected task time,LDF variance,LDF critical path,and LDF total expected time for determining the project network are all computed using LDF numbers as the time of each activity in the project network.The primary premise of the LDF-PERT approach is to address ambiguities in project network activity timesmore simply than other approaches such as conventional PERT,Fuzzy PERT,and so on.The LDF-PERT is an efficient approach to analyzing symmetries in fuzzy control systems to seek an optimal decision.We also present a new approach for locating LDF-CPM in a project network with uncertain and erroneous activity timings.When the available resources and activity times are imprecise and unpredictable,this strategy can help decision-makers make better judgments in a project.A comparison analysis of the proposed technique with the existing techniques has also been discussed.The suggested techniques are demonstrated with two suitable numerical examples.
文摘In this work, power efficient butterfly unit based FFT architecture is presented. The butterfly unit is designed using floating-point fused arithmetic units. The fused arithmetic units include two-term dot product unit and add-subtract unit. In these arithmetic units, operations are performed over complex data values. A modified fused floating-point two-term dot product and an enhanced model for the Radix-4 FFT butterfly unit are proposed. The modified fused two-term dot product is designed using Radix-16 booth multiplier. Radix-16 booth multiplier will reduce the switching activities compared to Radix-8 booth multiplier in existing system and also will reduce the area required. The proposed architecture is implemented efficiently for Radix-4 decimation in time(DIT) FFT butterfly with the two floating-point fused arithmetic units. The proposed enhanced architecture is synthesized, implemented, placed and routed on a FPGA device using Xilinx ISE tool. It is observed that the Radix-4 DIT fused floating-point FFT butterfly requires 50.17% less space and 12.16% reduced power compared to the existing methods and the proposed enhanced model requires 49.82% less space on the FPGA device compared to the proposed design. Also, reduced power consumption is addressed by utilizing the reusability technique, which results in 11.42% of power reduction of the enhanced model compared to the proposed design.
文摘Abstract: Tolerance charting is an effective tool to determine the optimal allocation of working dimensions and working tolerances such that the blueprint dimensions and tolerances can be achieved while meeting the cost objectives. The selection of machining datums and the allocation of tolerances are critical in any machining process planning, as they directly affect setup method/machine tool selection and machining time. This paper focuses on the simultaneous selection of optimum machining datums and machining tolerances in process planning. A dynamic tolerance-charting constraint scheme is developed and implemented in the optimization procedure. An optimization model is formulated for selecting machining datums and tolerances and solved with the Elitist Non-Dominated Sorting Genetic Algorithm (NSGA-II). The computational results indicate that the proposed methodology is capable and robust in finding the optimal machining datum set and tolerances.
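The core of NSGA-II's selection step is non-dominated sorting over the cost objectives. A minimal sketch of the Pareto-dominance test and the extraction of the first (rank-1) front, assuming all objectives are minimized, is:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b: no worse in
    every objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(pop):
    """Rank-1 (non-dominated) solutions, as in NSGA-II's sorting step."""
    return [p for i, p in enumerate(pop)
            if not any(dominates(q, p)
                       for j, q in enumerate(pop) if j != i)]

# Hypothetical (machining cost, tolerance-stackup) objective pairs.
pop = [(1, 5), (2, 3), (3, 1), (4, 4)]
front = first_front(pop)  # (4, 4) is dominated by (2, 3)
```

The full algorithm additionally ranks the remaining fronts and applies crowding-distance selection, which is omitted here.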
文摘Abstract: Al-7075 alloy-based matrix composites, reinforced with mixtures of silicon carbide (SiC) and boron carbide (B4C) particles and known as hybrid composites, were fabricated by the stir casting technique (liquid metallurgy route) and optimized over parameters such as sliding speed, applied load, sliding time, and percentage of reinforcement using the Taguchi method. The specimens were examined with a Rockwell hardness testing machine, a pin-on-disc tribometer, a scanning electron microscope (SEM), and an optical microscope. A plan of experiments generated through Taguchi's technique was used to conduct experiments based on an L27 orthogonal array. ANOVA and the developed regression equations were used to find the optimum wear and coefficient of friction under the influence of sliding speed, applied load, sliding time, and percentage of reinforcement. The dry sliding wear resistance was analyzed on the basis of the "smaller the better" criterion. Finally, confirmation tests were carried out to verify the experimental results.
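The Taguchi "smaller the better" criterion ranks factor settings by the signal-to-noise ratio of the response (here, wear). A minimal sketch of the standard formula is:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi S/N ratio for 'smaller the better' responses:
    S/N = -10 * log10(mean(y_i ** 2)).
    Larger S/N indicates lower (better) wear at that setting."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))
```

Each row of the L27 array yields one S/N value; the setting with the highest mean S/N per factor level is selected as optimal.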
基金Funding: The authors thank SASTRA University for the valuable help and support provided.
文摘Abstract: A pulsed, picosecond Nd:YAG laser with a wavelength of 532 nm is used to texture the surface of grade 5 titanium alloy (Ti–6Al–4V) to minimize its wear rate. The wear properties of the base samples and laser-surface-textured samples are analyzed by conducting wear tests under a sliding condition using pin-on-disc equipment. The wear tests are conducted based on the Box–Behnken design, and the interaction of the process parameters is analyzed using response surface methodology. The wear analysis is conducted by varying the load, rotating speed of the disc, and track diameter at room temperature with a sliding distance of 1500 m. The results demonstrate that the laser-textured surfaces exhibit a lower coefficient of friction and good anti-wear properties compared with the non-textured surfaces. A regression model is developed for the wear analysis of the titanium alloy using the analysis-of-variance technique. It is also observed from the analysis that the applied load and sliding distance are the parameters with the greatest effect on the wear behavior, followed by the wear track diameter. Optimum operating conditions are suggested based on the results obtained from the numerical optimization approach.
文摘Abstract: Glaucoma is a chronic and progressive optic neurodegenerative disease that leads to vision deterioration and, in most cases, produces increased pressure within the eye. This is due to a backup of fluid in the eye, which causes damage to the optic nerve. Hence, early detection, diagnosis, and treatment help to prevent the loss of vision. In this paper, a novel method is proposed for the early detection of glaucoma using a combination of magnitude and phase features from digital fundus images. Local binary patterns (LBP) and Daugman's algorithm are used to perform the feature extraction. Histogram features are computed for both the magnitude and phase components, and the Euclidean distance between the feature vectors is analyzed to predict glaucoma. The performance of the proposed method is compared with higher-order spectra (HOS) features in terms of sensitivity, specificity, classification accuracy, and execution time. The proposed system achieves 95.45% sensitivity, specificity, and classification accuracy, and its execution time is shorter than that of the existing HOS-based method. Hence, the proposed system is more accurate, reliable, and robust than the existing approach for predicting glaucoma.
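The basic LBP operator underlying the magnitude features can be sketched as follows; neighbor ordering and bit-weighting conventions vary between implementations, so this is one common variant, not necessarily the paper's exact one:

```python
def lbp_code(patch):
    """Basic 8-neighbor local binary pattern for a 3x3 patch (list of
    row lists). Each neighbor >= center contributes one bit, read
    clockwise from the top-left corner."""
    c = patch[1][1]
    # clockwise neighbor order starting at the top-left
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << i
    return code
```

Sliding this operator over the image and histogramming the 256 possible codes yields the LBP histogram feature vector compared via Euclidean distance.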
文摘Abstract: Cloud storage has gained increasing popularity, as it helps cloud users arbitrarily store and access the related outsourced data. Numerous public auditing schemes have been presented to ensure data transparency. However, most modern constructions are built on a public key infrastructure: to achieve data integrity, the auditor must first authenticate the legality of the public key certificate, which adds an immense workload for the auditor. Data facilities expect the quality of the stored data to be tracked regularly, so as to minimize disruption to the saved data and maintain the intactness of the data stored on the remote server. One of the main problems for individuals, though, is how to verify data integrity when no local backup of the files is retained. Meanwhile, it is often infeasible for a resource-limited user to perform a data integrity inspection if the overall data file must be retrieved. In this work, a stable and effective identity-based auditing scheme that uses machine learning techniques is proposed to improve productivity and enhance the protection of ID-based audit protocols. The study tackles the issues of confidentiality and reliability in the identity-based public audit framework. The scheme is proved secure, and its security rests on the standard Computational Diffie-Hellman assumption.
文摘Abstract: In the present work, the pool-boiling critical heat flux, transient heat-transfer characteristics, and bonding strength of a thin Ni-Cr wire with aqua-based reduced graphene oxide (rGO) nanofluids are experimentally studied. The results indicate that: (i) the critical heat flux (CHF) of rGO-water nanofluids at concentrations of 0.01, 0.05, 0.1, 0.2, and 0.3 g·L^(-1) varies from 1.42 to 2.40 MW·m^(-2); (ii) the CHF remains the same for the tested samples during the transient heat-transfer studies; and (iii) the CHF stays constant for up to 10 tests when the nano-coated Ni-Cr wire is tested with DI water, with deterioration beyond this point, implying a chance of peel-off of the rGO layer below the critical coating thickness.
文摘Abstract: The present work discusses the performance of Modified Cooperative Subchannel Allocation (CSA) algorithms used with the Alamouti Decode-and-Forward (Alamouti DF) relaying protocol for wireless multi-user Orthogonal Frequency Division Multiple Access (OFDMA) systems. In addition, the approximate Symbol Error Rate (SER) of the Alamouti DF relaying protocol with the Cooperative Maximum Ratio Combining (C-MRC) technique is analyzed and compared with the SER upper bound; the approximate SER is an asymptotically tight bound at high Signal-to-Noise Ratio (SNR). From this asymptotically tight approximate SER, a Particle Swarm Optimization (PSO)-based Power Allocation (PA) is determined for the Alamouti DF relaying protocol. The simulation results suggest that the Modified Throughput-based Subchannel Allocation Algorithm achieves a throughput improvement of 6% to 33% over the existing cooperative diversity protocol. Further, the Modified Fairness-based Subchannel Allocation Algorithm renders 7.2% to 17% better fairness among the multiple users compared with the existing cooperative diversity protocol.
文摘Abstract: This paper introduces a new hybrid Vedic algorithm to increase the speed of a multiplier. The work combines the principles of the Nikhilam sutra and the Karatsuba algorithm. Vedic mathematics is a mathematical system for solving complex computations in an easier manner, with specific sutras for multiplication; the Nikhilam sutra is one of them, but it has some limitations. To overcome these limitations, the sutra is combined with the Karatsuba algorithm. High-speed applications require high-speed devices of compact size, yet multipliers normally consume considerable power. In this paper, a new algorithm for the multiplication of binary numbers is proposed based on Vedic mathematics. The novel portion of the algorithm lies in the calculation of the remainder using the complement method: the size of the remainder is always set to N - 1 bits for any combination of inputs. The multiplier structure is designed based on the Karatsuba algorithm, so an N × N bit multiplication is performed with an (N - 1)-bit multiplication. Numerical strength reduction is achieved through the Karatsuba algorithm. The results show that the reduction in hardware leads to a reduction in delay.
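The strength reduction that Karatsuba provides, replacing one N-digit product with three roughly half-size products plus shifts and adds, can be sketched in its textbook decimal form (the paper applies the same recurrence to binary operands in hardware):

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication of non-negative integers:
    x*y = hh*B^2 + ((xh+xl)(yh+yl) - hh - ll)*B + ll,
    where B is the split base, hh = xh*yh and ll = xl*yl."""
    if x < 10 or y < 10:
        return x * y  # base case: single-digit product
    n = max(len(str(x)), len(str(y))) // 2
    base = 10 ** n
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll
    return hh * base * base + mid * base + ll
```

Three recursive multiplications replace the four of the schoolbook split, which is the source of the hardware savings the paper exploits.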
文摘Abstract: Vedic mathematics is the system of mathematics followed in ancient India, and it is applied in various mathematical branches. The word "Vedic" represents the storehouse of all knowledge, and using Vedic mathematics, arithmetical problems are solved easily. Its algorithms are formed from 16 sutras and 13 up-sutras, but each sutra has some limitations. Here, two techniques are considered: the Nikhilam sutra and the Karatsuba algorithm. In this research paper, a novel algorithm for binary multiplication based on Vedic mathematics is designed using a bit-reduction technique. Although the Nikhilam sutra performs multiplication, it is not used in all applications, because it suits only special cases of multiplication. The remainder derived from this sutra is reduced to N - 2 bits, and the number of remainder bits is constantly maintained at N - 2. The overall structure of the multiplier is designed using the Karatsuba algorithm; unlike the conventional Karatsuba algorithm, the proposed algorithm requires only one multiplier of N - 2 bits. The speed of the proposed algorithm is improved while balancing area and power. Even though there is a deviation in the lower-order bits, this method shows a larger difference at higher bit lengths.
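The "special cases" the Nikhilam sutra handles well are operands near a power of the base. Its classic identity, shown here in decimal rather than the paper's binary hardware form, is:

```python
def nikhilam(x, y, base):
    """Nikhilam-sutra multiplication for operands near a base
    (typically a power of ten): with deficits a = base - x and
    b = base - y,  x * y = (x - b) * base + a * b.
    The deficits may be negative for operands above the base."""
    a, b = base - x, base - y
    return (x - b) * base + a * b
```

Because the deficits a and b are small when x and y are close to the base, the hard product a * b involves only low-order bits, which is what makes the remainder-reduction scheme in the paper attractive.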
文摘Abstract: Automatic palmprint identification has received much attention in security applications and law enforcement. The performance of a palmprint identification system is improved by means of feature extraction and classification. Feature-extraction methods such as subspace learning are highly sensitive to rotation, translation, and illumination variance in image identification, and the Histogram of Oriented Lines (HOL) has not yet obtained promising performance for palmprint recognition. In this paper, we propose a new palmprint descriptor named Improved Histogram of Oriented Lines (IHOL), an alternative to HOL. Improved HOL is not very sensitive to changes in translation and illumination, and it is robust against small transformations, since small translations and rotations make no change to the histogram values in the proposed work. The experimental results show that IHOL with Principal Component Analysis (PCA) subspace learning can achieve high recognition rates. Compared with the existing HOL method, the proposed method improves by 1.30% on the PolyU I database (IHOL with cosine distance) and by 2.36% on the COEP database (IHOL with Euclidean distance).
文摘Abstract: In this paper, we introduce the notion of intuitionistic fuzzy α-generalized closed sets in intuitionistic fuzzy minimal structure spaces and investigate some of their properties. Further, we introduce and study the concept of intuitionistic fuzzy α-generalized minimal continuous functions.
基金Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R432), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
文摘Abstract: Breast cancer is a type of cancer responsible for high mortality rates among women, and its severity demands a promising approach to earlier detection. In light of this, the proposed research leverages the representation ability of a pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the transfer-learning model is modified so that it focuses more on the tumor cells in the input mammogram. Accordingly, the work proposes an EfficientNet-B0 with a spatial attention layer combined with XGBoost (ESA-XGBNet) for the binary classification of mammograms. The model is trained, tested, and validated using original and augmented mammogram images from three public datasets, namely the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracies of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) are obtained with the proposed ESA-XGBNet architecture, compared with the existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the attention-guided Grad-CAM-based explainable-AI technique.
文摘Abstract: Medical data mining has become an essential task in the healthcare sector to secure the personal and medical data of patients under a privacy policy. In this context, several authentication and accessibility issues emerge with the intention of protecting the sensitive details of patients from being published in the open domain. To solve this problem, a Multi-Attribute Case-based Privacy Preservation (MACPP) technique is proposed in this study to enhance the security of privacy-preserved data. Private information can be any attribute information that is categorized as a sensitive log in a patient's records. The semantic relation between transactional patient records and access rights is estimated based on the mean average value to distinguish sensitive from non-sensitive information. In addition, a crypto hidden policy is applied to encrypt the sensitive data through symmetric standard key-log verification, which protects the personalized sensitive information. Further, linear integrity verification provides authentication rights to verify the data, improves the performance of the privacy-preserving technique against intruders, and assures high security in the healthcare setting.
文摘Abstract: Group communication is widely used by most emerging network applications such as telecommunication, video conferencing, simulation, and other distributed and interactive systems. Secure group communication plays a vital role in providing the integrity, authenticity, confidentiality, and availability of the messages delivered among the group members, whether communication takes place within a group or between groups. In secure group communication, the time cost associated with key updating upon member joins and departures is an important aspect of the quality of service, particularly in large groups with highly dynamic membership. Hence, this paper aims to achieve better cost and time efficiency through an improved DC multicast routing protocol, which is used to expose the path between the nodes participating in the group communication. During this process, each node constructs an adaptive Ptolemy decision tree for the purpose of generating the contributory key. Each node holds three keys, which are exchanged between the nodes to form the group key for secure and cost-efficient group communication. The rekeying process is performed when a member leaves or joins the group. The performance of the novel approach is measured on important factors such as computational and communication cost, the rekeying process, and group formation. It is concluded from the study that the technique reduces the computational and communication cost of secure group communication when compared with other existing methods.
文摘Abstract: Characterized by recurrent and rapid seizures, epilepsy is a great threat to human livelihood. Abnormal transient behaviour of neurons in the cortical regions of the brain leads to the seizures that characterize epilepsy, and the physical and mental activities of the patient are severely dampened by an epileptic seizure. A significant clinical tool for the study, analysis, and diagnosis of epilepsy is the electroencephalogram (EEG). EEG signals greatly aid clinical experts in detecting such seizures and serve as an important tool for the analysis of brain disorders, especially epilepsy. In this paper, the high-dimensional EEG data are reduced to a low dimension using techniques such as Fuzzy Mutual Information (FMI), Independent Component Analysis (ICA), Linear Graph Embedding (LGE), Linear Discriminant Analysis (LDA), and Variational Bayesian Matrix Factorization (VBMF). After these dimensionality-reduction techniques are applied, neural networks such as the Cascaded Feed-Forward Neural Network (CFFNN), Time-Delay Neural Network (TDNN), and Generalized Regression Neural Network (GRNN) are used as post-classifiers for the classification of epilepsy risk levels from EEG signals. The benchmark parameters used here are Performance Index (PI), Quality Value (QV), time delay, accuracy, specificity, and sensitivity.
文摘Abstract: With the emergence of the Internet of Things (IoT), embedded systems have changed in dimensionality and are applied in various domains such as healthcare, home automation, and, most notably, Industry 4.0. These embedded IoT devices are mostly battery-driven, and Dynamic Random-Access Memory (DRAM)-centered main memory has been identified as the most significant source of high energy use in them. To achieve low power consumption in these devices, non-volatile memory (NVM) devices such as Phase-change Random Access Memory (PRAM) and Spin-Transfer Torque Magnetic Random-Access Memory (STT-RAM) are becoming popular main-memory alternatives because of features such as high density, byte addressability, high scalability, and low power intake. Additionally, Non-Volatile Random-Access Memory (NVRAM) is widely adopted to save data in embedded IoT devices. NVM flash memories have a limited lifetime, so it is mandatory to adopt intelligent optimization in managing NVRAM-based embedded devices with an intelligent controller while considering the endurance issue. To address this challenge, the paper proposes a powerful, lightweight, machine-learning-based, workload-adaptive write scheme for NVRAM, which can increase the lifetime and reduce the energy consumption of the processors. The proposed system consists of three phases: workload characterization, intelligent compression, and memory allocation. These phases distribute the write cycles to NVRAM according to the energy-time consumption and the number of data bytes. Extensive experiments are carried out using an IoMT (Internet of Medical Things) benchmark, in which endurance factors such as application delay, energy, and write time are evaluated and compared with different existing algorithms.
文摘Abstract: The Internet of Things (IoT) and cloud computing are gaining popularity due to their numerous advantages, including the efficient utilization of internet and computing resources. In recent years, many more IoT applications have come into extensive use. For instance, healthcare applications execute computations on the user's private data stored on cloud servers. However, the main obstacles to the wide acceptance and usage of these emerging technologies are security and privacy. Moreover, many healthcare data-management applications have emerged, offering solutions for distinct circumstances, but the existing systems still have issues with specific security aspects, privacy-preserving rate, information loss, etc., so overall system performance is reduced significantly. A unique blockchain-based technique is proposed to improve anonymity in terms of data access and data privacy and to overcome the above-mentioned issues. Initially, a registration phase is performed for the device and the user. After that, the geo-location and IP-address values collected during registration are converted into hash values using the Adler-32 hashing algorithm, and the private and public keys are generated by the key generation centre. Authentication is then performed through login. The user submits a request to the blockchain server, which redirects the request to the associated IoT device in order to obtain the sensed IoT data. The sensed data are anonymized on the device and stored in the cloud server using the Linear Scaling based Rider Optimization algorithm with integrated KL Anonymity (LSR-KLA) approach. After that, the Time-stamp-based Public and Private Key Schnorr Signature (TSPP-SS) mechanism is used to permit the authorized user to access the data, and the blockchain server tracks the entire transaction.
The experimental findings showed that the proposed LSR-KLA and TSPP-SS techniques provide better performance in terms of a higher privacy-preserving rate and lower information loss, execution time, and Central Processing Unit (CPU) usage than the existing techniques. Thus, the proposed method allows for better data privacy in the smart healthcare network.
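The Adler-32 hashing step of the registration phase can be sketched with Python's standard `zlib` module; the '|'-joined record layout below is an illustrative assumption, not the paper's exact encoding:

```python
import zlib

def registration_hash(geo_location, ip_address):
    """Adler-32 digest of the registration attributes collected during
    device/user registration. Adler-32 is a fast checksum, not a
    cryptographic hash, so it serves only as a compact identifier here."""
    record = f"{geo_location}|{ip_address}".encode("utf-8")
    return zlib.adler32(record) & 0xFFFFFFFF  # 32-bit unsigned digest
```

The resulting digest would then be bound to the key pair issued by the key generation centre for later authentication.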