Data compression is one of the core fields of study for applications of image and video processing. The raw data to be transmitted consumes large bandwidth and requires huge storage space; as a result, it is desirable to represent the information in the data with considerably fewer bits by means of data compression techniques, while the data must be reconstructed very close to its initial form. In this paper, a hybrid compression scheme based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is used to enhance the quality of the reconstructed image. These transforms are followed by entropy encoding, such as Huffman coding, to provide additional compression. Huffman coding is an optimal prefix code: its implementation is simpler, faster, and easier than other codes, it needs less execution time, and it yields the shortest average code length. The measurements for analysis are based on Compression Ratio, Mean Square Error (MSE), and Peak Signal to Noise Ratio (PSNR). We applied the hybrid (DWT–DCT) algorithm with block sizes of 2×2, 4×4, 8×8, 16×16, and 32×32. Finally, we show that with the hybrid (DWT–DCT) compression technique, the PSNR of the image reconstructed by the proposed hybrid algorithm (DWT–DCT, 8×8 block) is considerably higher than with DCT alone.
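To make the evaluation concrete, the following is a minimal sketch of one hybrid DWT–DCT step with PSNR measurement, using PyWavelets and SciPy; the Haar wavelet, the 8×8 block size, the coefficient-thresholding rule, and the stand-in image are illustrative assumptions rather than the paper's exact pipeline or settings.

```python
# Minimal sketch of a hybrid DWT-DCT compression step with MSE/PSNR evaluation.
# Block size, threshold rule, and wavelet are illustrative assumptions.
import numpy as np
import pywt                         # PyWavelets, for the 2-D DWT
from scipy.fft import dctn, idctn   # separable 2-D DCT / inverse DCT

def block_dct_quantize(band, block=8, keep=0.1):
    """Transform each block with the DCT, zero the smallest coefficients,
    and inverse-transform, approximating a lossy quantization stage."""
    h, w = (band.shape[0] // block) * block, (band.shape[1] // block) * block
    out = band.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(band[i:i+block, j:j+block], norm='ortho')
            thr = np.quantile(np.abs(c), 1 - keep)   # keep only the largest 10%
            c[np.abs(c) < thr] = 0.0
            out[i:i+block, j:j+block] = idctn(c, norm='ortho')
    return out

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (256, 256)).astype(float)   # stand-in image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')                    # one-level DWT
cA = block_dct_quantize(cA, block=8)                         # DCT on approximation band
rec = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
print('PSNR:', psnr(img, rec[:img.shape[0], :img.shape[1]]))
```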
In this new information era, the transfer of data and information has become a very important matter. Transferred data must be kept secure from unauthorized persons using cryptography. The science of cryptography depends not only on complex mathematical models but also on encryption keys. Amino acid encryption is a promising model for data security. In this paper, we propose an amino acid encryption model with two encryption keys. The first key is generated randomly using a genetic algorithm. The second key, called the protein key, is generated by converting DNA into a protein message. The protein message and the first key are then used in a modified Playfair matrix to generate the cipher message. The experimental results show that the proposed model withstands known attacks such as the brute-force attack and the ciphertext-only attack. In addition, the proposed model has been tested over different types of characters, including white spaces and special characters, as all the data is encoded to 8-bit binary. The performance of the proposed model is compared with other models using encryption time and decryption time. The model also balances all three principles of the CIA triad.
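As an illustration of the protein-key step, the following is a minimal sketch that translates a DNA message into its amino-acid (protein) key using Biopython's standard codon table; the library choice and the sample DNA string are assumptions, and the modified Playfair stage of the model is not reproduced here.

```python
# Minimal sketch of deriving a protein key from a DNA message, assuming
# standard codon-to-amino-acid translation (Biopython); the paper's exact
# conversion rules and Playfair step are not reproduced.
from Bio.Seq import Seq

def protein_key(dna: str) -> str:
    """Translate a DNA string codon-by-codon into its amino-acid (protein) key."""
    usable = len(dna) - len(dna) % 3           # drop any trailing partial codon
    return str(Seq(dna[:usable]).translate())  # e.g. 'ATGGCTTGG' -> 'MAW'

# The resulting protein key would then seed the modified Playfair matrix.
print(protein_key("ATGGCTTGGAAA"))   # -> 'MAWK'
```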
The rapid growth of the use of social media opens up new challenges and opportunities to analyze various aspects and patterns in communication. In text mining, several techniques are available, such as information clustering, extraction, summarization, and classification. In this study, a text mining framework is presented that consists of four phases: retrieving, processing, indexing, and association rule mining. It is applied using the association rule mining technique to identify the terms associated with the Huawei P30 Pro phone. Customer reviews are extracted from many websites and Facebook groups, such as review.cnet.com, CNET, Facebook, and amazon.com, where customers from all over the world post their notes on cell phones. In this analysis, a total of 192 reviews of the Huawei P30 Pro were collected and evaluated with text mining techniques. The findings demonstrate that the Huawei P30 Pro has strong points such as good safety, a high-quality camera, a battery that lasts more than 24 hours, and a very fast processor. This paper aims to show that text mining decreases human effort by recognizing significant documents. This will improve customers' awareness when choosing their products, and at the same time sales managers also learn how their products were received by customers.
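As a rough illustration of the association rule mining phase, the following is a minimal sketch that mines frequent termsets and rules from tokenized reviews with the mlxtend library; the sample transactions and the support and confidence thresholds are illustrative assumptions, not the study's actual data or parameters.

```python
# Minimal sketch of association rule mining over tokenized review transactions,
# e.g. finding which terms co-occur with "camera". Data and thresholds are toy values.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each review is reduced to a set of indexed terms (the indexing phase).
reviews = [
    ["camera", "battery", "fast"],
    ["camera", "quality", "battery"],
    ["battery", "fast", "processor"],
    ["camera", "quality", "fast"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(reviews).transform(reviews), columns=te.columns_)
frequent = apriori(onehot, min_support=0.5, use_colnames=True)        # frequent termsets
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```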
Semantic Web (SW) provides new opportunities for the study and application of big data: massive ranges of data sets in varied formats from multiple sources. Related studies focus on potential SW technologies for resolving big data problems, such as structurally and semantically heterogeneous data that result from the variety of data formats (structured, semi-structured, numeric, unstructured text, email, video, audio, stock ticker). SW represents information semantically for both people and machines, so that the vast volume of data can be retained and unstructured data can yield meaningful output. In the current research, we implement a new semantic Extract Transform Load (ETL) model that uses SW technologies for aggregating, integrating, and representing data as linked data. First, geospatial data resources are aggregated from the internet; then the semantic ETL model stores the aggregated data in a semantic model after converting it to Resource Description Framework (RDF) format for successful integration and representation. The principal contribution of this research is the synthesis, aggregation, and semantic representation of geospatial data to solve these problems. A case study of city data is used to illustrate the semantic ETL model's functionalities. The results show that the proposed model solves the structural and semantic heterogeneity problems in diverse data sources, enabling successful data aggregation, integration, and representation.
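To show what the RDF conversion step of such an ETL pipeline might look like, the following is a minimal sketch that loads one aggregated city record into RDF triples with rdflib; the namespace, vocabulary, and sample record are hypothetical, and the paper's full semantic ETL model is not reproduced.

```python
# Minimal sketch of a "transform" step: converting an aggregated geospatial
# record into RDF triples for linked-data storage. Namespace and record are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

EX = Namespace("http://example.org/city/")                    # assumed namespace
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")   # WGS84 vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("geo", GEO)

record = {"name": "Cairo", "lat": 30.0444, "long": 31.2357}   # aggregated source row
city = EX[record["name"]]
g.add((city, RDF.type, EX.City))
g.add((city, RDFS.label, Literal(record["name"])))
g.add((city, GEO.lat, Literal(record["lat"], datatype=XSD.decimal)))
g.add((city, GEO.long, Literal(record["long"], datatype=XSD.decimal)))

print(g.serialize(format="turtle"))    # RDF ready for integration and querying
```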
Web-blogging sites such as Twitter and Facebook are heavily influenced by emotions, sentiments, and data in the modern era. Twitter, a widely used microblogging site where individuals share their thoughts in the form of tweets, has become a major source for sentiment analysis. In recent years, there has been a significant increase in demand for sentiment analysis to identify and classify opinions or expressions in text or tweets. Opinions or expressions of people about a particular topic, situation, person, or product can be identified from sentences and divided into three categories: positive for good, negative for bad, and neutral for mixed or confusing opinions. The process of analyzing changes in sentiment and the combination of these categories is known as "sentiment analysis." In this study, sentiment analysis was performed on a dataset of 90,000 tweets using both deep learning and machine learning methods. The deep learning-based long short-term memory (LSTM) model performed better than the machine learning approaches: LSTM achieved 87% accuracy, while the support vector machine (SVM) classifier achieved slightly worse results at 86%. The study also tested the binary classes of positive and negative, where LSTM and SVM both achieved 90% accuracy.
Funding: The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4400257DSR01).
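For illustration of the sentiment-analysis abstract above, the following is a minimal sketch of a binary LSTM sentiment classifier in Keras of the kind compared against an SVM baseline; the toy tweets, vocabulary size, and hyperparameters are assumptions and do not reflect the study's 90,000-tweet dataset or exact architecture.

```python
# Minimal sketch of a binary (positive/negative) LSTM sentiment classifier.
# Toy data and hyperparameters; not the study's architecture or dataset.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tweets = ["great phone love it", "worst service ever",
          "happy with the update", "terrible battery life"]
labels = np.array([1, 0, 1, 0])                          # 1 = positive, 0 = negative

tok = Tokenizer(num_words=5000)
tok.fit_on_texts(tweets)
x = pad_sequences(tok.texts_to_sequences(tweets), maxlen=30)

model = Sequential([
    Embedding(input_dim=5000, output_dim=64),            # learned word embeddings
    LSTM(64),                                             # sequence encoder
    Dense(1, activation="sigmoid"),                       # positive / negative score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=3, verbose=0)
```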