Funding: Supported by the Deanship of Scientific Research at Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia, through Research Proposal No. 2020/01/17215.
Abstract: The era of the Internet of Things (IoT) has marked a continued exploration of applications and services that can make people's lives more convenient than ever before. However, the exploration of IoT services also means that people face unprecedented difficulty in spontaneously selecting the most appropriate services. Thus, there is a paramount need for a recommendation system that can help improve the experience of users of IoT services and ensure the best quality of service. Most existing techniques, including collaborative filtering (CF), the most widely adopted approach for building recommendation systems, suffer from rating sparsity and cold-start problems, preventing them from providing high-quality recommendations. Inspired by the great success of deep learning in a wide range of fields, this work introduces a deep-learning-enabled autoencoder architecture to overcome the shortcomings of CF recommendations. The proposed deep learning model is designed as a hybrid architecture with three key networks: an autoencoder (AE), a multilayer perceptron (MLP), and generalized matrix factorization (GMF). The model employs two AE networks to learn deep latent feature representations of users and items, respectively, in parallel. Next, the GMF and MLP networks model the linear and non-linear user-item interactions, respectively, using the extracted latent user and item features. Finally, the rating prediction is performed in the spirit of ensemble learning by fusing the outputs of the GMF and MLP networks. We conducted extensive experiments on two benchmark datasets, MovieLens100K and MovieLens1M, using four standard evaluation metrics. Ablation experiments confirm the validity of the proposed model and the contribution of each of its components to recommendation performance. Comparative analyses demonstrate that the proposed model achieves better accuracy than existing CF methods while resisting rating sparsity and cold-start problems.
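The fusion step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the layer sizes, weights, and the stand-in latent vectors (which the paper's two AE networks would actually produce) are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def gmf_branch(u, v):
    """Linear user-item interaction: element-wise product of latents."""
    return u * v

def mlp_branch(u, v, w1, w2):
    """Non-linear interaction: small MLP over the concatenated latents."""
    h = relu(np.concatenate([u, v]) @ w1)
    return relu(h @ w2)

def predict_rating(u, v, w1, w2, w_out):
    """Fuse the GMF and MLP branch outputs into one rating prediction."""
    fused = np.concatenate([gmf_branch(u, v), mlp_branch(u, v, w1, w2)])
    return float(fused @ w_out)

d = 8                            # latent dimension (assumed)
u = rng.normal(size=d)           # stand-in for the user AE's latent code
v = rng.normal(size=d)           # stand-in for the item AE's latent code
w1 = rng.normal(size=(2 * d, 16)) * 0.1   # hypothetical MLP weights
w2 = rng.normal(size=(16, d)) * 0.1
w_out = rng.normal(size=2 * d) * 0.1      # fusion layer weights

rating = predict_rating(u, v, w1, w2, w_out)
```

In a trained model the weights would be learned end to end and the fusion output passed through an appropriate activation for the rating scale.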
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62071165), the Fundamental Research Funds for the Central Universities of China (Grant No. JZ2021HGTB0074), and the China Postdoctoral Science Foundation (Grant No. 2021M690853).
Abstract: Virtual source (VS) imaging has been proposed to improve image resolution in medical ultrasound imaging. However, VS yields limited contrast due to the non-adaptive delay-and-sum (DAS) beamforming. To improve image contrast and provide enhanced resolution, adaptive weighting algorithms have been applied in VS imaging. In this paper, we propose an adjustable generalized coherence factor (aGCF) for the synthetic aperture sequential beamforming (SASB) of VS imaging to improve image quality. The value of aGCF is adjusted by a sequence intensity factor (SIF), defined as the ratio between the intensity of the effective low-resolution scan lines (LRLs) and the total LRL intensity. The aGCF-weighted VS (aGCF-VS) images were compared with standard VS images and GCF-weighted VS (GCF-VS) images. Simulation and experimental results demonstrate that the contrast ratio (CR) and contrast-to-noise ratio (CNR) of aGCF-VS are greatly improved compared with standard VS imaging. In comparison with GCF-VS, aGCF-VS obtains better CNR and speckle signal-to-noise ratio (sSNR) while maintaining a similar CR. Therefore, aGCF is suitable for VS imaging to improve contrast while preserving the speckle pattern.
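For readers unfamiliar with the two quantities combined here, the sketch below computes a standard GCF (the ratio of low-frequency spectral energy to total energy of the aperture-domain spectrum) and the SIF as defined in the abstract. The intensity threshold used to decide which LRLs count as "effective" is an assumption for illustration; the paper's exact effectiveness criterion and the exact rule by which SIF adjusts GCF are not reproduced here.

```python
import numpy as np

def gcf(channel_data, m0=1):
    """Generalized coherence factor for one image point: ratio of
    low-frequency energy (bins within m0 of DC) to total spectral
    energy of the delayed channel data across the aperture."""
    spec = np.fft.fft(np.asarray(channel_data, dtype=float))
    energy = np.abs(spec) ** 2
    low = energy[:m0].sum()
    if m0 > 1:                      # include negative-frequency bins
        low += energy[-(m0 - 1):].sum()
    return low / energy.sum()

def sif(lrl_intensities, threshold):
    """Sequence intensity factor: effective-LRL intensity over total
    LRL intensity. The threshold criterion is a hypothetical stand-in
    for the paper's definition of 'effective' LRLs."""
    lrl = np.asarray(lrl_intensities, dtype=float)
    return lrl[lrl >= threshold].sum() / lrl.sum()
```

Perfectly coherent channel data (identical samples across the aperture) gives GCF = 1, while out-of-phase data drives it toward 0, which is why it suppresses off-axis clutter when used as a pixel weight.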
Funding: Supported by the Key Project of the National Natural Science Foundation of China (No. U1836220), the National Natural Science Foundation of China (No. 61672267), the Qing Lan Talent Program of Jiangsu Province, China, and the Key Innovation Project of Undergraduate Students in Jiangsu Province, China (No. 201810299045Z).
Abstract: Much recent progress in monaural speech separation (MSS) has been achieved through a series of deep learning architectures based on autoencoders, which use an encoder to condense the input signal into compressed features and then feed these features into a decoder to construct a specific audio source of interest. However, these approaches can neither learn the generative factors of the original input for MSS nor construct each audio source in mixed speech. In this study, we propose a novel weighted-factor autoencoder (WFAE) model for MSS, which introduces a regularization loss into the objective function to isolate one source without containing the other sources. By incorporating a latent attention mechanism and a supervised source constructor in the separation layer, WFAE can learn source-specific generative factors and a set of discriminative features for each source, leading to improved MSS performance. Experiments on benchmark datasets show that our approach outperforms existing methods. In terms of three important metrics, WFAE achieves great success on a relatively challenging MSS case, i.e., speaker-independent MSS.
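To make the "isolate one source without containing the others" objective concrete, here is one plausible shape such a regularized loss could take: reconstruction error toward the target source plus a penalty on normalized correlation with the interfering sources. This is an illustrative stand-in, not WFAE's actual loss, whose exact form is defined in the paper.

```python
import numpy as np

def isolation_loss(est, target, others, lam=0.1):
    """Illustrative separation objective: MSE to the target source plus
    lam times the absolute normalized correlation ('leakage') between
    the estimate and each of the other sources in the mixture."""
    recon = np.mean((est - target) ** 2)
    leak = 0.0
    for o in others:
        num = abs(np.dot(est, o))
        den = np.linalg.norm(est) * np.linalg.norm(o) + 1e-12
        leak += num / den
    return recon + lam * leak
```

An estimate that matches its target and is uncorrelated with the interferers scores near zero; an estimate that has collapsed onto another source is penalized on both terms.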
Funding: Supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) (Grant No. RGPIN-2019-04276).
Abstract: Generalizing wavelets by adding desired redundancy and flexibility, framelets (i.e., wavelet frames) are of interest and importance in many applications such as image processing and numerical algorithms. Several key properties of framelets are high vanishing moments for sparse multiscale representation, fast framelet transforms for numerical efficiency, and redundancy for robustness. However, it is a challenging problem to study and construct multivariate nonseparable framelets, mainly due to their intrinsic connections to the factorization and syzygy modules of multivariate polynomial matrices. Moreover, all the known multivariate tight framelets derived from spline refinable scalar functions have only one vanishing moment, and framelets derived from refinable vector functions are barely studied in the literature. In this paper, we circumvent the above difficulties through the approach of quasi-tight framelets, which behave almost identically to tight framelets. Employing the popular oblique extension principle (OEP), from an arbitrary compactly supported M-refinable vector function φ with multiplicity greater than one, we prove that we can always derive from φ a compactly supported multivariate quasi-tight framelet such that: (i) all the framelet generators have the highest possible order of vanishing moments; (ii) its associated fast framelet transform has the highest balancing order and is compact. For a refinable scalar function φ (i.e., with multiplicity one), item (ii) above often cannot be achieved intrinsically, but we show that we can always construct a compactly supported OEP-based multivariate quasi-tight framelet derived from φ satisfying item (i). We point out that constructing OEP-based quasi-tight framelets is closely related to the generalized spectral factorization of Hermitian trigonometric polynomial matrices. Our proof is critically built on a newly developed result on the normal form of a matrix-valued filter, which is of interest and importance in itself for greatly facilitating the study of refinable vector functions and multiwavelets/multiframelets. This paper provides a comprehensive investigation of OEP-based multivariate quasi-tight multiframelets and their associated framelet transforms with high balancing orders, deepening our theoretical understanding of multivariate quasi-tight multiframelets and their associated fast multiframelet transforms.
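For context on the central notion, the defining identity of a quasi-tight framelet (stated here from general background on this line of work, not taken from the abstract itself) relaxes the tight-framelet representation by allowing a sign per generator:

```latex
f \;=\; \sum_{\ell=1}^{s} \sum_{j \in \mathbb{Z}} \sum_{k \in \mathbb{Z}^d}
\epsilon_\ell \,\bigl\langle f,\; \psi^{\ell}_{\mathsf{M}^{j};k} \bigr\rangle\,
\psi^{\ell}_{\mathsf{M}^{j};k},
\qquad \forall\, f \in L^{2}(\mathbb{R}^{d}),
\qquad \epsilon_\ell \in \{-1, 1\},
```

where ψ_{M^j;k}(x) := |det M|^{j/2} ψ(M^j x − k) denotes the dilated and shifted generator. Taking all ε_ℓ = 1 recovers an ordinary tight framelet, which is why quasi-tight framelets "behave almost identically to tight framelets" while leaving enough freedom to attain high vanishing moments and balancing orders.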