Seismic data regularization is an important preprocessing step in seismic signal processing. Traditional seismic acquisition methods follow the Shannon–Nyquist sampling theorem, whereas compressive sensing (CS) provides a fundamentally new paradigm for overcoming limitations in data acquisition. Besides the sparse representation of the seismic signal in some transform domain and the 1-norm reconstruction algorithm, the regularization quality of CS-based techniques strongly depends on the random undersampling scheme. For 2D seismic data, discrete uniform-based methods have been investigated, where seismic traces are randomly sampled with equal probability. However, in theory and practice, sampling some traces with different probabilities is required to satisfy the assumptions of CS. Therefore, designing new undersampling schemes is imperative. We propose a Bernoulli-based random undersampling scheme and its jittered version to determine the regular traces that are randomly sampled with different probabilities, while both schemes comply with the Bernoulli process distribution. We performed experiments using the Fourier and curvelet transforms, the spectral projected gradient reconstruction algorithm for the 1-norm (SPGL1), and ten different random seeds. According to the signal-to-noise ratio (SNR) between the original and reconstructed seismic data, detailed experimental results on 2D numerical and physical simulation data show that the proposed schemes perform better overall than the discrete uniform schemes. (Funding: The 2011 Prospective Research Project of SINOPEC (P11096).)
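The abstract does not give implementation details, so the functions below are an illustrative sketch under our own naming: the only properties taken from the text are that each trace is kept independently with its own probability (a Bernoulli process with non-uniform probabilities), and that the jittered variant limits the gap between kept traces.

```python
import random

def bernoulli_undersample(n_traces, keep_prob, seed=0):
    """Keep trace i independently with probability keep_prob[i]
    (a Bernoulli process with non-uniform success probabilities)."""
    rng = random.Random(seed)
    return [i for i in range(n_traces) if rng.random() < keep_prob[i]]

def jittered_bernoulli_undersample(n_traces, block, keep_prob, seed=0):
    """Jittered variant: draw at most one trace per block of consecutive
    traces, which bounds the largest gap between sampled traces."""
    rng = random.Random(seed)
    picked = []
    for start in range(0, n_traces, block):
        hits = [i for i in range(start, min(start + block, n_traces))
                if rng.random() < keep_prob[i]]
        if hits:
            picked.append(rng.choice(hits))
    return picked

# Example: favour central traces with a higher keep probability.
probs = [0.3 + 0.4 * (1 - abs(i - 50) / 50) for i in range(100)]
kept = jittered_bernoulli_undersample(100, block=5, keep_prob=probs, seed=7)
```

The jittered version trades a little randomness for a guaranteed maximum gap, which matters for reconstruction quality on regular acquisition grids.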
A high-linearity, undersampling 14-bit 357 kSps cyclic analog-to-digital converter (ADC) is designed for a radio frequency identification transceiver system. The passive capacitor error-averaging (PCEA) technique is adopted for high accuracy. An improved PCEA sampling network, capable of eliminating the crosstalk path between the two pipelined stages, is employed. Opamp sharing and removal of the front-end sample-and-hold amplifier are utilized for low power dissipation and small chip area. An additional digital calibration block is added to compensate for errors due to defective layout design. The presented ADC is fabricated in a 180 nm CMOS process, occupying 0.65 × 1.6 mm². The input bandwidth of the undersampling ADC reaches 15.5 MHz with more than 90 dB spurious-free dynamic range (SFDR), and the peak SFDR is as high as 106.4 dB with a 2.431 MHz input. (Funding: National High Technology Research and Development Program of China (No. 2006AA04A109).)
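The 15.5 MHz input at a 357 kSps conversion rate works because of bandpass undersampling: the input tone folds into the first Nyquist zone. A minimal sketch of the standard folding arithmetic (not code from the paper):

```python
def alias_frequency(f_in_hz, fs_hz):
    """Apparent frequency of a tone after sampling at fs_hz: fold the
    input into [0, fs) and reflect it into the first Nyquist zone [0, fs/2]."""
    f = f_in_hz % fs_hz
    return min(f, fs_hz - f)

# A 15.5 MHz tone sampled at 357 kSps lands at 149 kHz, inside Nyquist:
print(alias_frequency(15.5e6, 357e3))  # → 149000.0
```

As long as the input bandwidth is narrow enough to fit inside one Nyquist zone, the folded image preserves the signal and the ADC digitizes it without information loss.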
This paper presents a low-sampling-rate digital pre-distortion technique based on an improved Chebyshev polynomial for the non-linear distortion problem of amplifiers in 5G broadband communication systems. An improved Chebyshev polynomial is used to construct the behavioural model of the broadband amplifier, and an undersampling technique is used to sample the amplifier output signal at a reduced rate; the pre-distortion parameters are extracted from the sampled signal through an indirect learning structure to finally correct the non-linearity of the amplifier system. This technique improves the linearity and efficiency of the power amplifier and provides better flexibility. Experimental results show that, with the amplifier behavioural model constructed using memory polynomials (MP), generalised memory polynomials (GMP), and the modified Chebyshev polynomials respectively, the adjacent channel power ratio of the resulting system improves by more than 13.87 dB, 17.6 dB, and 19.98 dB respectively compared with the amplifier output without digital pre-distortion. The Chebyshev polynomial improves the adjacent channel power ratio by 6.11 dB and 2.38 dB over the memory polynomial and generalised memory polynomial respectively, while the normalised mean square error is also effectively improved. This shows that the improved Chebyshev pre-distortion preserves system performance while better correcting the non-linearity.
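The abstract does not spell out the model structure. One common way to combine a Chebyshev basis with memory, shown here purely as an illustrative assumption (the paper's "improved Chebyshev polynomial" may differ in detail), is a memory polynomial whose nonlinear basis is the Chebyshev polynomials of the first kind evaluated on the signal envelope:

```python
import numpy as np

def cheb_basis(a, K):
    """Chebyshev polynomials of the first kind T_0..T_{K-1}, by recurrence."""
    T = [np.ones_like(a), a.copy()]
    while len(T) < K:
        T.append(2 * a * T[-1] - T[-2])
    return T[:K]

def chebyshev_memory_polynomial(x, coeffs):
    """Behavioural PA model y[n] = sum_{m,k} c[m,k] * T_k(|x[n-m]|) * x[n-m].
    coeffs has shape (memory depth M, nonlinearity order K)."""
    M, K = coeffs.shape
    y = np.zeros(len(x), dtype=complex)
    for m in range(M):
        xm = np.roll(x, m)
        xm[:m] = 0  # delayed input with zero initial state
        for k, Tk in enumerate(cheb_basis(np.abs(xm), K)):
            y += coeffs[m, k] * Tk * xm
    return y
```

In indirect learning, the same basis is fitted in the reverse direction (from measured amplifier output to its input) and the fitted model is used as the pre-distorter.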
Credit card fraud data is highly imbalanced, presenting an overwhelmingly large portion of non-fraudulent transactions and a small portion of fraudulent ones. The measures used to judge the veracity of detection algorithms become critical to deploying a model that accurately scores fraudulent transactions, taking into account class imbalance and the cost of identifying a case as genuine when, in fact, it is fraudulent. In this paper, a new criterion for judging classification algorithms, which considers the cost of misclassification, is proposed, and several undersampling techniques are compared under this criterion. In addition, a weighted support vector machine (SVM) algorithm considering the financial cost of misclassification is introduced, proving more practical for credit card fraud detection than traditional methodologies. This weighted SVM uses transaction balances as weights for fraudulent transactions and a uniform weight for non-fraudulent transactions. The results show this strategy greatly improves the performance of credit card fraud detection.
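The abstract leaves the exact cost formula unstated; the sketch below is one plausible instantiation of a misclassification-cost criterion, where a missed fraud costs its transaction balance and a false alarm costs a fixed review fee. Both the function name and the fee value are assumptions, not the paper's definitions.

```python
def misclassification_cost(y_true, y_pred, balance, false_alarm_cost=50.0):
    """Illustrative cost criterion: a missed fraud (predicted genuine)
    costs the transaction balance; flagging a genuine transaction incurs
    a fixed review fee. y_true/y_pred: 1 = fraud, 0 = genuine."""
    cost = 0.0
    for t, p, b in zip(y_true, y_pred, balance):
        if t == 1 and p == 0:      # fraud scored as genuine
            cost += b
        elif t == 0 and p == 1:    # genuine flagged for review
            cost += false_alarm_cost
    return cost
```

Unlike accuracy, this criterion is asymmetric in exactly the way the paper argues matters: two classifiers with identical error counts can have very different financial costs depending on which transactions they miss.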
To restore high-frequency information in an undersampled and degraded low-resolution image, a nonlinear, real-time processing method is proposed: super-resolution restoration based on a radial basis function (RBF) neural network. The RBF network configuration and processing method are suitable for high-resolution restoration from an undersampled low-resolution image. A soft-competition learning scheme based on the k-means algorithm is used, which achieves higher mapping approximation accuracy without increasing the network size. Experiments showed that the proposed algorithm can produce a super-resolution restored image from an undersampled and degraded low-resolution image, and requires a shorter training time than the multilayer perceptron (MLP) network.
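As a linear-algebra sketch of the RBF mapping idea only (not the paper's trained network with soft-competition learning), Gaussian RBF interpolation can upsample an undersampled 1D signal by fitting weights that reproduce the known samples and then evaluating the expansion on a denser grid:

```python
import numpy as np

def rbf_interpolate(x_known, y_known, x_query, width=1.0):
    """Gaussian RBF interpolation: solve for weights so the expansion
    passes through the known (undersampled) points, then evaluate it at
    the query points. The Gaussian kernel matrix is positive definite
    for distinct centers, so the solve is well posed."""
    phi = lambda r: np.exp(-(r / width) ** 2)
    G = phi(np.abs(x_known[:, None] - x_known[None, :]))
    w = np.linalg.solve(G, y_known)
    return phi(np.abs(x_query[:, None] - x_known[None, :])) @ w
```

The paper's contribution is in how the centers and weights are *learned* (k-means with soft competition) rather than solved exactly, which keeps the network small while approximating this mapping.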
Aim: To find an effective and fast algorithm for analyzing undersampled signals. Methods: The advantage of the high-order ambiguity function (HAF) algorithm is that it can analyze polynomial phase signals by phase rank reduction. In this paper, it is first used to analyze the parameters of undersampled signals. When certain conditions are satisfied, the problem of frequency confusion (aliasing) can be solved. Results and Conclusion: As an example, we analyze an undersampled linear frequency modulated signal. The simulation results verify the effectiveness of the HAF algorithm. Compared with time-frequency distributions, the HAF algorithm greatly reduces the computational burden, needs only weak boundary conditions, and does not suffer from boundary effects.
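A second-order HAF-style computation can be sketched as follows: for a linear FM signal, the lagged conjugate product collapses the chirp to a single tone whose frequency encodes the chirp rate, and this survives aliasing of the carrier. The function and parameter names are ours, not the paper's.

```python
import numpy as np

def estimate_chirp_rate(s, fs, lag):
    """For s[n] = exp(j*2*pi*(f0*t + 0.5*k*t^2)), the lagged product
    s[n] * conj(s[n - lag]) is a single tone at frequency k * lag / fs;
    its FFT peak therefore yields the chirp rate k, even when the
    carrier f0 itself is aliased by the low sampling rate."""
    prod = s[lag:] * np.conj(s[:-lag])
    n_fft = 1 << 14
    spec = np.abs(np.fft.fft(prod, n_fft))
    f_peak = np.fft.fftfreq(n_fft, d=1.0 / fs)[np.argmax(spec)]
    return f_peak * fs / lag

# Undersampled LFM: 2.6 kHz carrier, 2000 Hz/s chirp, sampled at only 1 kSps.
fs, f0, k = 1000.0, 2600.0, 2000.0
t = np.arange(0, 1, 1 / fs)
s = np.exp(2j * np.pi * (f0 * t + 0.5 * k * t**2))
```

The aliased carrier only contributes a constant phase to the lagged product, which is why the chirp-rate estimate is immune to the frequency confusion that defeats a direct spectrogram.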
The volume of social media data on the Internet is constantly growing. This has created a substantial research field for data analysts. The diversity of articles, posts, and comments on news websites and social networks is staggering. Nevertheless, most researchers focus on Twitter posts, which have a specific format and length restriction, and the majority of them are written in English. As relatively few works have paid attention to sentiment analysis in the Russian and Kazakh languages, this article thoroughly analyzes news posts in the Kazakhstan media space. The amassed datasets include texts labeled according to three sentiment classes: positive, negative, and neutral. The datasets are highly imbalanced, with a significant predominance of the positive class. Three resampling techniques (undersampling, oversampling, and the synthetic minority oversampling technique (SMOTE)) are used to deal with this issue. Subsequently, the texts are vectorized with the TF-IDF metric and classified with seven machine learning (ML) algorithms: naïve Bayes, support vector machine, logistic regression, k-nearest neighbors, decision tree, random forest, and XGBoost. Experimental results reveal that oversampling and SMOTE with logistic regression, decision tree, and random forest achieve the best classification scores. These models are employed in the developed social analytics platform.
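Of the three resampling techniques, plain random oversampling is the simplest to sketch; SMOTE instead interpolates synthetic points between feature-space neighbours rather than duplicating texts. A minimal stdlib-only version, with names of our choosing:

```python
import random
from collections import Counter

def random_oversample(texts, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority-class count. Applied before TF-IDF
    vectorization, this rebalances the training set without discarding
    any majority-class data (as undersampling would)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_texts, out_labels = list(texts), list(labels)
    for cls, n in counts.items():
        pool = [t for t, l in zip(texts, labels) if l == cls]
        for _ in range(target - n):
            out_texts.append(rng.choice(pool))
            out_labels.append(cls)
    return out_texts, out_labels
```

Resampling must be applied only to the training split; leaking duplicated samples into the test set inflates the reported scores.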
To deal with aliasing distortions of Doppler frequencies shown in the time-frequency representation (TFR) under aspect undersampling, an approach using adaptive segmental compressive sampling according to the aspect dependencies of the scattering centers is proposed. The random noise introduced by compressive sampling is suppressed by a series of signal processing techniques: filtering, image transformation, and the Hough transform. Three examples are presented to verify the effectiveness of this approach. Comparisons between the built models and the precise scattered fields computed by a well-validated full-wave numerical method show good agreement. (Funding: National Natural Science Foundation of China (61421001, 61471041, 61671059).)
The relationship between the discrete Fourier transform (DFT) and the symmetrical/asymmetrical number systems (SNS/ANS) is introduced in this paper, and the influence of noise on the solution of the ambiguity problem in these number systems is discussed. The principle of the noise-insensitive solution to ambiguity in the ANS is extended to the SNS. The unambiguous-bandwidth equations with noise protection in the SNS are presented, based on which a real-time, noise-insensitive SNS algorithm for resolving undersampling-induced ambiguous frequencies is proposed.
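The underlying disambiguation idea can be illustrated with a brute-force search, which is a simplified stand-in for the closed-form SNS construction in the paper; the integer frequencies and the two sample rates below are demo assumptions.

```python
def fold(f, fs):
    """Apparent (aliased) frequency of a real tone f sampled at rate fs."""
    r = f % fs
    return min(r, fs - r)

def resolve_frequency(aliases, rates, f_max):
    """Return all integer frequencies below f_max whose folded value at
    every sampling rate matches the measured aliases; with well-chosen
    rates the answer is unique over the unambiguous bandwidth."""
    return [f for f in range(f_max)
            if all(fold(f, fs) == a for fs, a in zip(rates, aliases))]

# A 14 Hz tone sampled at 11 Sps and 13 Sps aliases to 3 Hz and 1 Hz;
# jointly, the two residues pin down the true frequency:
print(resolve_frequency([3, 1], [11, 13], 20))  # → [14]
```

Widening the search range past the unambiguous bandwidth makes the ambiguity reappear (a second candidate shows up), which is exactly what the paper's unambiguous-bandwidth equations characterize, including how much measurement noise the residues can tolerate.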
The phenomenon of frequency ambiguity may appear in radar or communication systems. S. Barbarossa (1991) unwrapped the frequency ambiguity of single-component undersampled signals with the Wigner-Ville distribution (WVD), but until now there has been no effective algorithm for analyzing multicomponent undersampled signals. A new algorithm to analyze multicomponent undersampled signals with the high-order ambiguity function (HAF) is proposed here. HAF analyzes polynomial phase signals by phase rank reduction; its advantages are that it has no boundary effect and is not sensitive to the cross-terms of multicomponent signals. The simulation results prove the effectiveness of the HAF algorithm.
Noncontact blade tip timing (BTT) measurement has become an attractive technology for blade health monitoring (BHM). However, the severely undersampled BTT signal poses a significant challenge for blade vibration parameter identification and fault feature extraction. This study proposes a novel method based on the minimum variance distortionless response (MVDR) of direction-of-arrival (DoA) estimation for estimating blade natural frequencies from non-uniformly undersampled BTT signals. First, based on the similarity between the general BTT data acquisition model and the antenna array model in DoA estimation, the circumferentially arranged probes on the casing can be regarded as a non-uniform linear array. Thus, BTT signal reconstruction is converted into a DoA estimation problem for a non-uniform linear array signal. Second, MVDR is employed to address the severe undersampling issue and recover the undersampled BTT signal. In particular, spatial smoothing is innovatively utilized to enhance the estimate of the covariance matrix of the BTT signal, avoiding ill-conditioning or singularity while improving efficiency and robustness. Lastly, numerical simulation and experimental testing verify the validity of the proposed method. Monte Carlo simulation results suggest that the proposed method outperforms conventional methods, especially under lower signal-to-noise ratio conditions. Experimental results indicate that the proposed method can effectively overcome the severe undersampling of BTT signals induced by physical limitations, and has strong potential in the field of BHM. (Funding: National Natural Science Foundation of China (Grant Nos. 52105117 and 51875433); Funds for Distinguished Young Talent of Shaanxi Province, China (Grant No. 2019JC-04).)
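The array-processing analogy can be sketched as an MVDR (Capon) frequency scan over non-uniform sample instants. The crude rank-1-plus-diagonal-loading covariance below stands in for the paper's spatially smoothed estimate, and all names are ours:

```python
import numpy as np

def mvdr_spectrum(y, t, freqs, diag_load=1e-3):
    """Capon pseudo-spectrum P(f) = 1 / (a(f)^H R^{-1} a(f)), where the
    steering vector a(f) = exp(j*2*pi*f*t) treats the non-uniform sample
    instants t like sensor positions of a non-uniform linear array.
    Diagonal loading keeps the covariance inverse well conditioned."""
    R = np.outer(y, np.conj(y)) + diag_load * np.eye(len(y))
    R_inv = np.linalg.inv(R)
    return np.array([1.0 / np.real(np.conj(a) @ R_inv @ a)
                     for a in (np.exp(2j * np.pi * f * t) for f in freqs)])

# Non-uniform, sub-Nyquist-on-average sampling of a 5 Hz tone:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 4.0, 32))   # ~8 samples/s on average
y = np.exp(2j * np.pi * 5.0 * t)
freqs = np.arange(0.0, 15.0, 0.25)
```

Because the sample instants are irregular, the scan still shows a clear peak at the true frequency where a uniform sub-Nyquist grid would produce indistinguishable aliases, which is the property the BTT method exploits.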
Purpose – Individuals' driving behavior data are becoming widely available through Global Positioning System devices and on-board diagnostic systems. The incoming data can be sampled at rates ranging from one Hertz (or even lower) to hundreds of Hertz. Failing to capture substantial changes in vehicle movements over time by "undersampling" can cause loss of information and misinterpretation of the data, but "oversampling" can waste storage and processing resources. The purpose of this study is to empirically explore how micro-driving decisions (to maintain speed, accelerate, or decelerate) can best be captured without substantial loss of information. Design/methodology/approach – This study creates a set of indicators to quantify the magnitude of information loss (MIL). Each indicator is calculated as a percentage to index the extent of information loss (EIL) in different situations. An overall information loss index, named EIL, is created to combine the MIL indicators. Data from a driving simulator study collected at 20 Hertz are analyzed (N = 718,481 data points from 35,924 s of driving tests). The study quantifies the relationship between the information loss indicators and sampling rates. Findings – The results show that marginally more information is lost as data are sampled down from 20 to 0.5 Hz, but the relationship is not linear. With four MIL indicators, the overall EIL is 3.85 per cent for driving behavior data at a 1 Hz sampling rate. If the sampling rate is at least 2 Hz, all MILs show under 5 per cent information loss. Originality/value – This study contributes a framework for quantifying the relationship between sampling rates and information loss; depending on the objective of their study, researchers can choose the sampling rate necessary to obtain the right amount of accuracy.
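The paper's four MIL indicators are not specified in the abstract. As one illustrative indicator in the same spirit (its name and definition are our assumptions), the share of acceleration-sign-change events that vanish after downsampling can be computed directly:

```python
def extent_of_information_loss(speeds_20hz, step):
    """Illustrative MIL-style indicator: the percentage of
    accelerate <-> decelerate transition events in a 20 Hz speed trace
    that disappear when the trace is kept only every `step` samples."""
    def n_events(v):
        d = [b - a for a, b in zip(v, v[1:])]
        return sum(1 for x, y in zip(d, d[1:]) if x * y < 0)
    full = n_events(speeds_20hz)
    down = n_events(speeds_20hz[::step])
    return 0.0 if full == 0 else 100.0 * max(full - down, 0) / full
```

A rapidly oscillating speed trace loses most of its transition events at low rates, while a smooth trace loses almost none, which matches the paper's finding that the loss-versus-rate relationship is nonlinear and behavior-dependent.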
Optical Camera Communication (OCC) has been proposed in recent years as a new technique for visible light communications. This paper introduces the implementation and experimental demonstration of an OCC system. Phase uncertainty and phase slipping caused by camera sampling are the two major challenges for OCC. In this paper, we propose a novel modulation scheme called undersampled differential phase shift on-off keying to encode binary data bits without exhibiting any flicker to human eyes. The phase difference between two consecutive samples conveys one bit of information, which can be decoded by a low-frame-rate camera receiver. Error detection techniques are introduced to enhance the reliability of the system. We present the hardware and software design of the proposed system, which is implemented with a Xilinx FPGA and a commercial Logitech camera. Experimental results demonstrate that a bit error rate of 10⁻⁵ can be achieved with 7.15 mW received signal power over a link distance of 15 cm.
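The one-bit-per-phase-difference rule can be sketched as a receiver-side decision. This is a simplified decoder under our own naming; the paper's waveform details and error-detection layer are omitted.

```python
import math

def decode_udpsook(phases):
    """Differential decoding sketch: each bit is carried by the phase
    difference between two consecutive camera samples; a difference
    near 0 decodes as bit 0, near pi as bit 1. Differential encoding
    makes the decision immune to the camera's unknown absolute phase."""
    bits = []
    for a, b in zip(phases, phases[1:]):
        d = (b - a) % (2 * math.pi)
        bits.append(1 if math.pi / 2 < d < 3 * math.pi / 2 else 0)
    return bits
```

Since only differences matter, a constant phase offset on every sample (the phase-uncertainty problem the paper highlights) leaves the decoded bits unchanged.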