Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and they are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure of and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
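As a minimal illustration of histogram-equalization tone mapping of the kind the Methods describe, the NumPy sketch below maps a skewed attribute to a uniform tonal range (the function name and the lognormal "flux" test data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def equalize_tone(values, n_bins=256):
    """Map scalar attribute values to [0, 1] display tones so that the
    output histogram is approximately flat (histogram equalization)."""
    hist, edges = np.histogram(values, bins=n_bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                      # normalized cumulative distribution
    # Each value is mapped to the CDF of its bin: dense value ranges
    # receive a wide tonal range, sparse ranges are compressed.
    bin_idx = np.clip(np.digitize(values, edges[:-1]) - 1, 0, n_bins - 1)
    return cdf[bin_idx]

# Example: a skewed detection attribute (e.g., particle flux) -> uniform tones.
flux = np.random.lognormal(mean=0.0, sigma=1.5, size=10_000)
tones = equalize_tone(flux)
```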
The deformation and fracture evolution mechanisms of the strata overlying mines mined using sublevel caving were studied via numerical simulations. Moreover, an expression for the normal force acting on the side face of a steeply dipping superimposed cantilever beam in the surrounding rock was deduced based on limit equilibrium theory. The results show the following: (1) surface displacement above metal mines with steeply dipping discontinuities shows significant step characteristics, and (2) the behavior of the strata as they fail exhibits superimposition characteristics. Generally, failure first occurs in certain superimposed strata slightly far from the goaf. Subsequently, with the constant downward excavation of the orebody, the superimposed strata become damaged both upwards away from and downwards toward the goaf. This process continues until the deep part of the steeply dipping superimposed strata forms a large-scale deep fracture plane that connects with the goaf. The deep fracture plane generally makes an angle of 12°-20° with the normal to the steeply dipping discontinuities. Because the constant outward failure of the superimposed strata constantly transfers strata movement outward, the scope of strata movement in these mines is larger than expected. The strata in metal mines with steeply dipping discontinuities mainly show flexural toppling failure. However, the steeply dipping structural strata near the goaf mainly exhibit shear slipping failure, in which case the mechanical model used to describe them can be simplified by treating them as steeply dipping superimposed cantilever beams. By taking the steeply dipping superimposed cantilever beam that first experiences failure as the key stratum, the failure scope of the strata (and criteria for the stability of metal mines with steeply dipping discontinuities mined using sublevel caving) can be obtained via iterative computations from the key stratum, moving downward toward and upwards away from the goaf.
Major interactions are known to trigger star formation in galaxies and alter their color. We study major interactions in filaments and sheets using SDSS data to understand the influence of large-scale environments on galaxy interactions. We identify the galaxies in filaments and sheets using the local dimension and also find the major pairs residing in these environments. The star formation rate (SFR) and color of the interacting galaxies as a function of pair separation are analyzed separately in filaments and sheets. The analysis is repeated for three volume-limited samples covering different magnitude ranges. The major pairs residing in filaments show a significantly higher SFR and bluer color than those residing in sheets up to a projected pair separation of ~50 kpc. We observe a complete reversal of this behavior in both the SFR and color of galaxy pairs with a projected separation larger than 50 kpc. Some earlier studies report that galaxy pairs align with the filament axis. Such alignment inside filaments indicates anisotropic accretion that may cause these differences. We do not observe these trends in the brighter galaxy samples. The pairs in filaments and sheets from the brighter galaxy samples trace relatively denser regions in these environments. The absence of these trends in the brighter samples may be explained by the dominant effect of the local density over the effects of the large-scale environment.
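A minimal sketch of the measurement behind this analysis: projected pair separations from sky coordinates under the small-angle approximation, split at 50 kpc (the coordinates, SFR values, and single comoving distance below are toy assumptions; the paper's exact pipeline is not specified in the abstract):

```python
import numpy as np

def projected_separation_kpc(ra1, dec1, ra2, dec2, d_c_kpc):
    """Projected pair separation r_p ~ theta * d_C under the small-angle
    approximation, given the comoving distance d_c_kpc of the pair."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    # Angular separation via the haversine formula (numerically stable).
    dra, ddec = ra2 - ra1, dec2 - dec1
    a = np.sin(ddec / 2) ** 2 + np.cos(dec1) * np.cos(dec2) * np.sin(dra / 2) ** 2
    theta = 2 * np.arcsin(np.sqrt(a))
    return theta * d_c_kpc

# Toy example: three pairs at a comoving distance of ~200 Mpc (2.0e5 kpc).
ra_a = np.array([150.100, 180.00, 210.50]); dec_a = np.array([2.100, -1.00, 0.50])
ra_b = np.array([150.105, 180.02, 210.50]); dec_b = np.array([2.105, -1.01, 0.52])
sfr  = np.array([1.8, 0.6, 0.9])            # Msun/yr, illustrative values
r_p = projected_separation_kpc(ra_a, dec_a, ra_b, dec_b, 2.0e5)
print(sfr[r_p < 50].mean(), sfr[r_p >= 50].mean())   # close vs. wide pairs
```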
To ensure agreement between theoretical calculations and experimental data, the parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, implying their applicability across the entire energy range of interest. However, this approach overlooks uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding with it as if it were the 'true' model set, we take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations in which multiple nuclear physics models and their parameters are varied randomly to yield a vector of calculated observables. Likelihood function values computed at each incident energy point were then combined with the prior distributions to obtain updated posterior distributions for selected cross sections and elastic angular distributions. Because the cross sections and elastic angular distributions were updated locally on a per-energy-point basis, the approach typically produces discontinuities or "kinks" in the cross-section curves; these were addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results compared favorably with experimental data as well as with the TENDL-2023 evaluation.
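A compact sketch of the per-energy-point averaging and spline smoothing (the array layouts, smoothing factor, and function name are assumptions; TALYS itself is not invoked here):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def bma_cross_section(sigma_models, log_likelihood, prior, energies):
    """Per-energy-point Bayesian model average of cross sections.

    sigma_models  : (n_models, n_energies) calculated cross sections
    log_likelihood: (n_models, n_energies) log-likelihood vs. experiment
    prior         : (n_models,) prior model probabilities
    energies      : (n_energies,) incident energies, sorted ascending
    """
    # Posterior weights at each energy point (normalized per column).
    logw = log_likelihood + np.log(prior)[:, None]
    logw -= logw.max(axis=0)            # guard against overflow
    w = np.exp(logw)
    w /= w.sum(axis=0)
    sigma_bma = (w * sigma_models).sum(axis=0)
    # Per-energy updates can leave "kinks"; smooth them with a spline.
    return UnivariateSpline(energies, sigma_bma, s=len(energies))(energies)
```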
Processing large-scale 3-D gravity data is an important topic in the geophysics field. Many existing inversion methods lack the capability to process massive data and have limited practical applicability. This study proposes applying GPU parallel processing technology to the focusing inversion method, aiming to improve the inversion accuracy while speeding up the calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfer, access restrictions, and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up the calculation, and makes the fast inversion of large models possible. A comparison of the computing speed of the traditional single-threaded CPU method and CUDA-based GPU parallel technology verifies the excellent acceleration performance of GPU parallel computing, which provides ideas for the practical application of theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the number of model cells and inversion data can more clearly depict the boundary position of the anomalous body and delineate its specific shape.
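The abstract does not spell out the focusing functional; a common choice is minimum-support reweighting in the style of Portniaguine and Zhdanov, sketched below in plain NumPy as a CPU stand-in for the dense linear algebra the paper accelerates on the GPU (all constants and names are illustrative):

```python
import numpy as np

def focusing_inversion(A, d, n_iter=10, beta=1e-2, eps=1e-6):
    """Iteratively reweighted least squares with a minimum-support
    (focusing) weight, a common way to sharpen blurred boundaries.

    A : (n_data, n_cells) sensitivity matrix
    d : (n_data,) observed gravity anomaly
    """
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Focusing weights: cells with small values are penalized more,
        # which compresses the model toward compact, sharp bodies.
        w = 1.0 / np.sqrt(m ** 2 + eps ** 2)
        Aw = A / w                        # column scaling, i.e. A @ diag(1/w)
        mw = np.linalg.solve(Aw.T @ Aw + beta * np.eye(A.shape[1]), Aw.T @ d)
        m = mw / w                        # map back to model space
    return m
```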
Social media data have created a paradigm shift in assessing situational awareness during natural disasters or emergencies such as wildfires, hurricanes, and tropical storms. Twitter, as an emerging data source, is an effective and innovative digital platform for observing trends from the perspective of social media users who are direct or indirect witnesses of a calamitous event. This paper collects and analyzes Twitter data related to the recent wildfire in California to perform a trend analysis by classifying firsthand and credible information from Twitter users. This work investigates tweets on the recent wildfire in California and classifies them based on witnesses into two types: 1) direct witnesses and 2) indirect witnesses. The collected and analyzed information can be useful for law enforcement agencies and humanitarian organizations for communication and verification of situational awareness during wildfire hazards. Trend analysis is an aggregated approach that includes sentiment analysis and topic modeling performed through domain-expert manual annotation and machine learning. Trend analysis ultimately builds a fine-grained analysis to assess evacuation routes and provide valuable information to firsthand emergency responders.
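A sketch of the machine-learning side of the witness classification (the tweets, labels, and model choice below are illustrative assumptions; the paper combines domain-expert annotation with learned models):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled sample: 1 = direct witness, 0 = indirect witness.
tweets = [
    "Smoke everywhere on my street, we are evacuating now",
    "Flames visible from my backyard, ash falling on cars",
    "News says the wildfire has burned 10,000 acres",
    "Praying for everyone affected by the California fire",
]
labels = [1, 1, 0, 0]

# TF-IDF word/bigram features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["I can see the fire from my window"]))  # classify new tweet
```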
Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques, because their performance is exploited alongside lossy compression techniques for images and videos, generally in a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on the data we constructed according to a deliberate model, the results show that these methods, presented in order of performance, are very satisfactory: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message, and, on the other hand, to the cumulative time of encoding and decoding.
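As a hedged illustration of two of the selected techniques, here are minimal Python versions of LZW and RLE applied to a repeating-pattern message (the paper's own scripts were written in Matlab; this sketch only mirrors the idea):

```python
def lzw_encode(text):
    """Classic LZW: grow a dictionary of substrings, emit integer codes."""
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in text:
        if w + c in table:
            w += c
        else:
            out.append(table[w])
            table[w + c] = len(table)   # add new dictionary entry
            w = c
    if w:
        out.append(table[w])
    return out

def rle_encode(text):
    """Run-length encoding: (symbol, run length) pairs."""
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append((text[i], j - i))
        i = j
    return out

msg = "ABABABABAAAAAABBBBBB"    # repeating pattern, like the paper's dataset
print(lzw_encode(msg))
print(rle_encode(msg))          # RLE shines on the long single-symbol runs
```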
By analyzing and comparing the current application status, advantages, and disadvantages of domestic and foreign classification coding systems for artificial material and mechanical equipment, and by conducting a comparative study of the existing coding-system standards in different regions of the country, a coding data model suitable for big data research needs is proposed based on the current national standard for artificial material and mechanical equipment classification coding. This model achieves a horizontal connection of characteristics and a vertical penetration of attribute values for construction materials and machinery through forward automatic coding calculation and reverse automatic decoding. This coding scheme and calculation model can also be used to establish a database file for the codes and unit prices of construction materials and machinery, forming a complete big data model for construction material coding and unit prices. This provides foundational support for calculating and analyzing big data related to construction material unit prices, real-time information prices, market prices, and various comprehensive prices, thus contributing to the formation of cost-related big data.
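The forward coding/reverse decoding pairing can be sketched with a hypothetical fixed-width hierarchical code (the segment widths and field names below are assumptions for illustration; the national standard's actual layout is not given in this abstract):

```python
# Hypothetical fixed-width hierarchical code: 2 digits per level,
# e.g. category/subcategory/material/attribute -> "01020305".
FIELDS = ("category", "subcategory", "material", "attribute")

def encode(levels):
    """Forward coding: level numbers -> fixed-width code string."""
    return "".join(f"{n:02d}" for n in levels)

def decode(code):
    """Reverse decoding: code string -> named level numbers."""
    return {f: int(code[2 * i: 2 * i + 2]) for i, f in enumerate(FIELDS)}

code = encode([1, 2, 3, 5])
print(code)           # 01020305
print(decode(code))   # {'category': 1, 'subcategory': 2, 'material': 3, ...}
```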
In the face of a growing number of large-scale data sets, the affinity propagation clustering algorithm must build a similarity matrix during its calculation, which entails huge storage and computation costs. Therefore, this paper proposes an improved affinity propagation clustering algorithm. First, subtractive clustering is added, using the density values of the data points to obtain initial cluster points. Then, the similarity distances between the initial cluster points are calculated and, drawing on the idea of semi-supervised clustering, pairwise constraint information is added to construct a sparse similarity matrix. Finally, AP clustering is conducted on the cluster representative points until a suitable cluster division is reached. Experimental results show that the algorithm greatly reduces the amount of calculation and the storage required for the similarity matrix, and outperforms the original algorithm in clustering effect and processing speed.
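A sketch of the two-stage idea: select dense representatives by a subtractive-clustering potential, then run affinity propagation only on them so the similarity matrix shrinks (the radius r_a, sample data, and representative count are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def subtractive_density(X, r_a=1.0):
    """Subtractive-clustering potential: points in dense regions score high."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (r_a / 2) ** 2).sum(axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])

# Keep only the densest points as representatives, then run AP on them,
# so the similarity matrix covers 40 points instead of 400.
reps = X[np.argsort(subtractive_density(X))[-40:]]
ap = AffinityPropagation(random_state=0).fit(reps)
print(len(ap.cluster_centers_))   # number of exemplars found
```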
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method fully analyzes the factors that influence the test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, hybrid run-length codes achieve low test application time and low area overhead. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
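A sketch of the pre-processing idea: fill the unspecified ('X') bits so runs lengthen before run-length coding (the fill-with-previous-bit rule is one common heuristic, assumed here for illustration rather than taken from the paper):

```python
def fill_dont_cares(vector):
    """Replace 'X' (unspecified) bits with the previous specified bit so
    that runs become longer and run-length coding compresses better."""
    out, last = [], "0"
    for b in vector:
        last = b if b in "01" else last
        out.append(last)
    return "".join(out)

def run_lengths(bits):
    """Variable-length runs of identical bits: [('0', 7), ('1', 3), ...]."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

v = "0XX0XXX1110XX"
print(run_lengths(fill_dont_cares(v)))   # [('0', 7), ('1', 3), ('0', 3)]
```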
This paper investigates simultaneous wireless information and power transfer (SWIPT) for a network-coded two-way relay network from an information-theoretic perspective, where two sources exchange information via a SWIPT-aware energy harvesting (EH) relay. We present a power splitting (PS)-based two-way relaying (PS-TWR) protocol employing the PS receiver architecture. To explore the system sum-rate limit with data-rate fairness, an optimization problem under a total power constraint is formulated, and some explicit solutions are derived for the problem. Numerical results show that, due to the path loss effect on energy transfer, with the same total available power, PS-TWR loses some system performance compared with traditional non-EH two-way relaying; at relatively low and relatively high signal-to-noise ratio (SNR), the performance loss is relatively small. Another observation is that in the relatively high SNR regime, PS-TWR outperforms time switching-based two-way relaying (TS-TWR), while in the relatively low SNR regime, TS-TWR outperforms PS-TWR. It is also shown that with individual available power at the two sources, PS-TWR outperforms TS-TWR in both the relatively low and high SNR regimes.
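To make the power-splitting trade-off concrete, here is a deliberately simplified numerical sketch, not the paper's exact system model: the relay harvests a fraction rho of its received power and decodes with the rest, and the sum rate is swept over rho (the channel gains, harvesting efficiency, and rate expressions are all assumptions):

```python
import numpy as np

# Illustrative constants: source power, channel gains, harvesting
# efficiency, and noise power.
P, h1, h2, eta, N0 = 1.0, 0.8, 0.6, 0.7, 1e-2

def sum_rate(rho):
    snr_mac = (1 - rho) * P * h1**2 / N0          # multiple-access phase
    P_relay = eta * rho * P * (h1**2 + h2**2)     # harvested relay power
    snr_bc = P_relay * h2**2 / N0                 # broadcast phase
    # Each end-to-end rate is limited by the weaker of the two phases;
    # the factor 1/2 accounts for the two transmission phases.
    return 2 * 0.5 * np.log2(1 + np.minimum(snr_mac, snr_bc))

rhos = np.linspace(0.01, 0.99, 99)
best = rhos[np.argmax(sum_rate(rhos))]
print(f"best power-splitting ratio ~ {best:.2f}")
```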
In encoding and decoding, erasure codes over binary fields, which need only AND and XOR operations and therefore have high computational efficiency, are widely used in various fields of information technology. A matrix decoding method is proposed in this paper. The method is a universal data reconstruction scheme for erasure codes over binary fields. In addition to pre-judging whether the errors can be recovered, the method can rebuild the sectors of lost data on a fault-tolerant storage system constructed with erasure codes when disk errors occur. The data reconstruction process of the new method has simple, clear steps, so it lends itself to implementation in computer code. Moreover, it can easily be applied to other, non-binary fields, so the method is expected to find extensive application in the future.
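A sketch of such matrix decoding over GF(2), using Gaussian elimination with XOR row operations and including the recoverability pre-judgment (the toy single-parity code at the bottom is an assumption for illustration):

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; returns x, or None
    if the erased data cannot be recovered (singular system)."""
    A, b = A.copy() % 2, b.copy() % 2
    n = A.shape[1]
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            return None                       # pre-judgment: unrecoverable
        A[[row, pivot]], b[[row, pivot]] = A[[pivot, row]], b[[pivot, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]                # XOR row elimination
                b[r] ^= b[row]
        row += 1
    return b[:n]

# Toy parity code: p = d0 ^ d1 ^ d2; rebuild erased d1 from the survivors.
A = np.array([[1, 0, 0], [0, 0, 1], [1, 1, 1]], dtype=np.uint8)  # d0, d2, p
b = np.array([1, 0, 0], dtype=np.uint8)       # observed d0=1, d2=0, p=0
print(solve_gf2(A, b))                        # [1 1 0] -> recovered d1 = 1
```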
The UAV data link is an important part of the UAV communication system, through which the UAV communicates with warships. However, the constant coding and modulation scheme that a UAV adopts does not make full use of the channel capacity when the UAV communicates with warships in a good channel environment. In order to improve channel capacity and spectral efficiency, adaptive coded modulation technology is studied. Based on a maritime channel model, SNR estimation technology, and adaptive threshold determination technology, a simulation of UAV data link communication is carried out in this paper. Theoretical analysis and simulation results show that, according to changes in the maritime channel state, the UAV can dynamically adjust the adaptive coded modulation scheme while meeting the target bit error rate (BER), and the maximum amount of data transferred is three times that of non-adaptive systems.
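A minimal sketch of the adaptive threshold idea: estimate the SNR, then select the highest-rate coding and modulation scheme whose switching threshold is cleared (the thresholds and scheme table below are hypothetical placeholders for values that would be derived from target-BER curves):

```python
# Hypothetical SNR thresholds (dB) -> (modulation, code rate, bits/symbol).
MCS_TABLE = [
    (5.0,  "QPSK",   1/2, 1.0),
    (10.0, "16-QAM", 1/2, 2.0),
    (15.0, "16-QAM", 3/4, 3.0),
    (20.0, "64-QAM", 3/4, 4.5),
]

def select_mcs(snr_db):
    """Pick the highest-throughput scheme whose threshold the SNR clears."""
    chosen = ("BPSK", 1/2, 0.5)          # fallback below all thresholds
    for thr, mod, rate, eff in MCS_TABLE:
        if snr_db >= thr:
            chosen = (mod, rate, eff)
    return chosen

for snr in (3, 8, 17, 25):
    print(snr, "dB ->", select_mcs(snr))
```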
Systems-on-a-chip with intellectual property cores need a large volume of data for testing. The large volume of test data requires a long testing time and a large test data memory. Therefore, new techniques are needed to optimize the test data volume, decrease the testing time, and overcome the ATE memory limitation for SoC designs. This paper presents a new compression method for testing intellectual property core-based systems-on-chip. The proposed method is based on new split-data variable length (SDV) codes that are designed using split-options along with identification bits in a string of test data. This paper analyzes the reduction of test data volume, testing time, run time, and the size of memory required in the ATE, as well as the improvement of the compression ratio. Experimental results for ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods, with the best compression ratio for test data compression. The decompression architecture for SDV codes is also presented for decoding the compressed bits. The proposed scheme shows that SDV codes can accommodate any variation in the input test data stream.
The Internet of Medical Things (IoMT) comprises online devices that sense and transmit medical data from users to physicians within a time interval. In recent years, IoMT has rapidly grown in the medical field to provide healthcare services without physical presence. With the use of sensors, IoMT applications are used in healthcare management. In such applications, one of the most important factors is data security, given that transmission over the network may be subject to intrusion. For data security in IoMT systems, blockchain is used owing to its numerous blocks for secure data storage. In this study, a blockchain-assisted secure data management framework (BSDMF) and a Proof of Activity (PoA) protocol using a malicious code detection algorithm are used for data security in the healthcare system. The main aim is to enhance data security over the network. The PoA protocol provides higher data security than approaches in the literature. By replacing malicious nodes in the block, PoA can provide high security for medical data in the blockchain. Comparison with existing systems shows that the proposed simulation with the BSD-malicious code detection algorithm achieves a higher accuracy ratio, precision ratio, security, and efficiency, and a shorter response time for blockchain-enabled healthcare systems.
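As background for the tamper-evidence that block storage provides, here is a minimal hash-chained block sketch (this illustrates generic blockchain integrity checking, not the paper's BSDMF or PoA protocol; the payload fields are invented):

```python
import hashlib, json, time

def make_block(data, prev_hash):
    """A minimal block: a medical-record payload chained by SHA-256."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_valid(chain):
    """Tamper-evidence check: recompute each hash and verify the links."""
    for i, b in enumerate(chain):
        body = {k: b[k] for k in ("time", "data", "prev")}
        if b["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"pulse": 72}, "0" * 64)]
chain.append(make_block({"pulse": 75}, chain[-1]["hash"]))
chain[0]["data"]["pulse"] = 120          # malicious modification
print(chain_valid(chain))                # False: tampering is detected
```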
This paper introduces HCS, a new RAID 6 expansion method designed for big data scenarios. HCS expands RAID 6 arrays organized in the H-Code manner. Two key techniques are used to avoid recalculating the parity blocks. The first is anti-diagonal data block selection, and the other is horizontal data migration. These two techniques ensure that the data blocks are retained in the same verification zones, that is, the horizontal verification zone and the anti-diagonal verification zone. Experimental results showed that, compared with SDM, which is also a fast expansion method, HCS reduces expansion time by 3.6% and improves performance by 4.62% under four traces.
In this paper, a novel secret-data-driven carrier-free (semi-structural formula) visual secret sharing (VSS) scheme with a (2,2) threshold, based on the error correction blocks of QR codes, is investigated. The proposed scheme searches, from large datasets of QR codes and according to the secret image, for two QR codes altered to satisfy the secret sharing modules in the error correction mechanism; that is, the secret image is embedded into QR codes based on carrier-free secret sharing. The size of the secret image is the same as, or closest to, the region from the coordinate (7,7) to the lower right corner of the QR codes. In this way, we can find the QR code combination that maximizes the embedded secret information, driven by the secret data and based on big data search. Each output share is a valid QR code that can be decoded correctly by a QR code reader, which may reduce the likelihood of attracting the attention of potential attackers. The proposed scheme can reveal the secret image visually, with both stacking and XOR decryption. The secret image can be recovered by the human visual system (HVS) without any computation, based on stacking. On the other hand, if a lightweight computation device is available, the secret image can be revealed losslessly based on the XOR operation. In addition, QR codes can assist alignment for VSS recovery. The experimental results show the effectiveness of our scheme.
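The stacking/XOR dual decryption can be illustrated on raw binary images with a (2,2) XOR-based sharing sketch (this omits the paper's QR error-correction embedding; the arrays below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary secret image (1 = black module), e.g. a small 21x21 pattern.
secret = rng.integers(0, 2, size=(21, 21), dtype=np.uint8)

# (2,2) XOR-based sharing: share1 is random, share2 = secret ^ share1,
# so each share alone is indistinguishable from noise.
share1 = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)
share2 = secret ^ share1

# Lossless recovery with a lightweight device: XOR the two shares.
assert np.array_equal(share1 ^ share2, secret)

# Stacking (HVS) recovery corresponds to OR: black pixels survive, so
# secret regions turn fully black while background stays ~50% black,
# revealing the secret with degraded contrast rather than losslessly.
stacked = share1 | share2
```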