Funding: Supported by the Special Fund for Clinical Scientific Research of Shandong Medical Association (No. YXH2020ZX058).
Abstract: This study was carried out to explore the mechanism underlying the inhibition of platelet activation by kelp fucoidans in deep venous thrombosis (DVT) mice. In the control and sham mice, the walls of the deep veins were regular and smooth, with intact intima, media and adventitia. The blood vessels were wrapped with tissue and there was no thrombosis in the lumen. In the DVT model, the wall was uneven, with thickened intima, media and adventitia. After treatment with the fucoidans LF1 and LF2, the thrombus was dissolved and the blood vessel was recanalized. Compared with the control group, the ROS content, the ET-1 and VWF content, and the expression of PKC-β and NF-κB in the model were significantly higher (P<0.05); these levels were significantly reduced following treatment with LF2 and LF1. Compared with H₂O₂-treated HUVECs, combined LF1 and LF2 treatment resulted in a significant decrease in the expression of PKC-β, NF-κB, VWF and TM protein (P<0.05). It is clear that LF1 and LF2 reduce DVT-induced ET-1, VWF and TM expression and the production of ROS, thus inhibiting the activation of the PKC-β/NF-κB signalling pathway and of the coagulation system, and ultimately reducing the formation of venous thrombus.
Abstract: Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses. Early detection of these diseases is essential for effective management. We propose a novel transformed wavelet, feature-fused, pre-trained deep learning model for detecting olive leaf diseases. The proposed model combines wavelet transforms with pre-trained deep learning models to extract discriminative features from olive leaf images. The model has four main phases: preprocessing using data augmentation, three-level wavelet transformation, learning using pre-trained deep learning models, and a fused deep learning model. In the preprocessing phase, the image dataset is augmented using techniques such as resizing, rescaling, flipping, rotation, zooming, and contrasting. In wavelet transformation, the augmented images are decomposed into three frequency levels. Three pre-trained deep learning models, EfficientNet-B7, DenseNet-201, and ResNet-152-V2, are used in the learning phase. The models were trained using the approximate images of the third-level sub-band of the wavelet transform. In the fused phase, the fused model consists of a merge layer, three dense layers, and two dropout layers. The proposed model was evaluated using a dataset of images of healthy and infected olive leaves. It achieved an accuracy of 99.72% in the diagnosis of olive leaf diseases, which exceeds the accuracy of other methods reported in the literature. This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
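The fused phase lends itself to a compact illustration. The sketch below is only a hedged approximation of the described pipeline, not the authors' implementation: it extracts the third-level approximation sub-band with PyWavelets and fuses three backbone feature vectors through a merge (concatenation) layer, three dense layers and two dropout layers; the feature dimensions, layer widths, dropout rates and two-class output are assumptions.

```python
# Minimal sketch of (1) keeping only the level-3 approximation sub-band of a 2-D wavelet
# transform and (2) a fused head with a merge (concat) layer, three dense layers and two
# dropout layers. Sizes and hyperparameters are assumptions, not the paper's settings.
import numpy as np
import pywt
import torch
import torch.nn as nn

def level3_approximation(gray_image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Return the approximation coefficients of a 3-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(gray_image, wavelet, level=3)
    return coeffs[0]  # coeffs[0] is the coarsest (third-level) approximation sub-band

class FusedHead(nn.Module):
    """Concatenate three backbone feature vectors and classify (dims are assumptions)."""
    def __init__(self, dims=(2560, 1920, 2048), n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(sum(dims), 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )
    def forward(self, f_efficientnet, f_densenet, f_resnet):
        merged = torch.cat([f_efficientnet, f_densenet, f_resnet], dim=1)  # merge layer
        return self.classifier(merged)

approx = level3_approximation(np.random.rand(224, 224))   # toy grayscale leaf image
print(approx.shape)                                       # (28, 28) for a 224x224 'haar' input

# toy usage with random tensors standing in for the three backbone outputs
head = FusedHead()
logits = head(torch.randn(4, 2560), torch.randn(4, 1920), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 2])
```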
Funding: Key-Area Research and Development Program of Guangdong Province, Grant/Award Number: 2021B0101200001; National Natural Science Foundation of China, Grant/Award Numbers: 61876140, U20B2065, U21B2048; Open Research Projects of Zhejiang Lab, Grant/Award Number: 2019KD0AD01/010.
Abstract: Segmenting the semantic regions of point clouds is a crucial step for intelligent agents to understand 3D scenes. Weakly supervised point cloud segmentation is highly desirable because fully labelling point clouds is extremely time-consuming and costly. For low-cost labelling of 3D point clouds, the scene-level label is one of the most effortless labelling strategies. However, due to the limited discriminative capability of the classifier and the orderless and structureless nature of point cloud data, existing scene-level methods struggle to transfer semantic information, which usually leads to under-activation or over-activation issues. To this end, a local semantic embedding network is introduced to learn local structural patterns and semantic propagation. Specifically, the proposed network contains graph convolution-based dilation and erosion embedding modules that implement 'inside-out' and 'outside-in' semantic information dissemination pathways. The proposed weakly supervised learning framework can therefore achieve the mutual propagation of semantic information between the foreground and background. Comprehensive experiments on the widely used ScanNet benchmark demonstrate the superior capacity of the proposed approach compared to current alternatives and baseline models.
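The dilation and erosion embedding modules are only named here, so the following NumPy sketch should be read as a loose morphological analogy rather than the paper's graph-convolution modules: per-point class scores are propagated over a k-nearest-neighbour graph, with a max-pooling pass standing in for 'inside-out' dilation and a min-pooling pass for 'outside-in' erosion; the graph construction, score shapes and single propagation round are all assumptions.

```python
# Toy illustration (not the paper's modules) of dilation/erosion-style propagation of
# per-point class scores over a k-nearest-neighbour graph.
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Indices of the k nearest neighbours (including the point itself) per point."""
    _, idx = cKDTree(points).query(points, k=k)
    return idx

def dilate(scores: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """'Inside-out': each point takes the maximum class score among its neighbours."""
    return scores[idx].max(axis=1)

def erode(scores: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """'Outside-in': each point keeps the minimum class score among its neighbours."""
    return scores[idx].min(axis=1)

points = np.random.rand(1024, 3)              # toy point cloud
scores = np.random.rand(1024, 20)             # toy per-point class scores (20 classes)
idx = knn_graph(points)
propagated = erode(dilate(scores, idx), idx)  # one dilation-then-erosion round
print(propagated.shape)                       # (1024, 20)
```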
Abstract: Over the last couple of decades, community question-answering sites (CQAs) have been a topic of much academic interest. Scholars have often leveraged traditional machine learning (ML) and deep learning (DL) to explore the ever-growing volume of content that CQAs engender. To clarify the current state of the CQA literature that has used ML and DL, this paper reports a systematic literature review. The goal is to summarise and synthesise the major themes of CQA research related to (i) questions, (ii) answers and (iii) users. The final review included 133 articles. Dominant research themes include question quality, answer quality, and expert identification. In terms of datasets, some of the most widely studied platforms include Yahoo! Answers, Stack Exchange and Stack Overflow. The scope of most articles was confined to just one platform, with few cross-platform investigations. Articles with ML outnumber those with DL. Nonetheless, the use of DL in CQA research is on an upward trajectory. A number of research directions are proposed.
Funding: This work was partly supported by the Project of Cultivation for Young Top-Notch Talents of Beijing Municipal Institutions (No. BPHR202203225), the Young Elite Scientists Sponsorship Program by BAST (BYESS2023031), and the National Key Research and Development Program (No. 2022YFF0604502).
Abstract: Limited by battery and computing resources, the computing-intensive tasks generated by Internet of Things (IoT) devices cannot all be processed by the devices themselves. Mobile edge computing (MEC) is a suitable solution to this problem, as the generated tasks can be offloaded from IoT devices to MEC. In this paper, we study the problem of dynamic task offloading for digital twin-empowered MEC. Digital twin techniques are applied to provide information about the environment and to share the training data of the agents deployed on IoT devices. We formulate the task offloading problem with the goal of maximizing the energy efficiency and the workload balance among the edge servers (ESs). Then, we reformulate the problem as a Markov decision process (MDP) and design a DRL-based energy-efficient task offloading (DEETO) algorithm to solve it. Comparative experiments are carried out, which show the superiority of our DEETO algorithm in improving energy efficiency and balancing the workload.
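The abstract couples energy efficiency with workload balance in the objective but gives no formula, so the reward below is one plausible shaping of that trade-off and purely an assumption.

```python
# One plausible (assumed) reward for the offloading agent: reward energy efficiency,
# penalise the spread of per-server loads as a proxy for workload imbalance.
import numpy as np

def offloading_reward(task_bits: float, energy_joules: float,
                      server_loads: np.ndarray, alpha: float = 1.0, beta: float = 1.0) -> float:
    energy_efficiency = task_bits / max(energy_joules, 1e-9)   # bits processed per joule
    workload_imbalance = float(np.std(server_loads))           # spread of per-server load
    return alpha * energy_efficiency - beta * workload_imbalance

# toy call: a 2 Mbit task, 0.5 J spent, three edge servers with given queue loads
print(offloading_reward(2e6, 0.5, np.array([0.3, 0.5, 0.4])))
```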
Funding: Supported by the Public Welfare Technology Application Research Project of Zhejiang Province, China (No. LGF21F010001), the Key Research and Development Program of Zhejiang Province, China (Grant No. 2019C01002), and the Key Research and Development Program of Zhejiang Province, China (Grant No. 2021C03138).
Abstract: Blind image quality assessment (BIQA) is of fundamental importance in the low-level computer vision community. Increasing interest has been drawn to exploiting deep neural networks for BIQA. Despite the notable success achieved, there is a broad consensus that training deep convolutional neural networks (DCNNs) heavily relies on massive annotated data. Unfortunately, BIQA is typically a small-sample problem, which severely restricts the generalization ability of BIQA models. In order to improve the accuracy and generalization ability of BIQA metrics, this work proposes a totally opinion-unaware BIQA approach in which no subjective annotations are involved in the training stage. Multiple full-reference image quality assessment (FR-IQA) metrics are employed to label the distorted images as a substitute for subjective quality annotations. A deep neural network (DNN) is trained to blindly predict the multiple FR-IQA scores in the absence of the corresponding pristine image. In the end, a self-supervised FR-IQA score aggregator, implemented by an adversarial auto-encoder, pools the predictions of the multiple FR-IQA scores into the final predicted quality score. Even though no subjective scores are involved in the training stage, experimental results indicate that our proposed full-reference-induced BIQA framework is as competitive as state-of-the-art BIQA metrics.
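The pseudo-labelling step can be pictured with a small sketch. The code below is a hedged illustration, not the paper's network or metric set: PSNR and SSIM stand in for the multiple FR-IQA metrics that label each distorted image, and a tiny CNN regresses that score vector from the distorted image alone; the adversarial auto-encoder aggregator is not shown.

```python
# Hedged sketch: label each distorted image with a vector of FR-IQA scores computed
# against its pristine reference, then train a small network to regress that vector
# from the distorted image only. Metric choice and network are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def fr_iqa_label(reference: np.ndarray, distorted: np.ndarray) -> np.ndarray:
    """Vector of full-reference scores used in place of a subjective annotation."""
    psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
    ssim = structural_similarity(reference, distorted, data_range=1.0)
    return np.array([psnr, ssim], dtype=np.float32)

class BlindRegressor(nn.Module):
    """Predicts the FR-IQA score vector from the distorted image alone (no reference)."""
    def __init__(self, n_scores: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_scores),
        )
    def forward(self, x):
        return self.net(x)

ref = np.random.rand(64, 64).astype(np.float32)
dist = np.clip(ref + 0.05 * np.random.randn(64, 64).astype(np.float32), 0, 1)
target = torch.from_numpy(fr_iqa_label(ref, dist)).unsqueeze(0)
model = BlindRegressor()
loss = nn.functional.mse_loss(model(torch.from_numpy(dist)[None, None]), target)
print(float(loss))
```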
Abstract: Reconfigurable intelligent surfaces (RIS) for wireless networks have drawn much attention in both the academic and industry communities. A RIS can dynamically control the phases of its reflection elements to send the signal in the desired direction, and thus it provides supplementary links for wireless networks. Most prior works on RIS-aided wireless communication systems consider continuous phase shifts, but the phase shifts of a RIS are discrete in practical hardware. We therefore focus on the actual discrete phase shifts of the RIS in this paper. Using advanced deep reinforcement learning (DRL), we jointly optimize the transmit beamforming matrix, chosen from the discrete Fourier transform (DFT) codebook at the base station (BS), and the discrete phase shifts at the RIS to maximize the received signal-to-interference-plus-noise ratio (SINR). Unlike traditional schemes, which usually use alternating optimization methods to solve for the transmit beamforming and phase shifts, the DRL algorithm proposed in this paper can jointly design the transmit beamforming and phase shifts as the output of the DRL neural network. Numerical results indicate that the proposed DRL approach can handle the complicated optimization problem with low computational complexity.
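To make the action space concrete, the toy single-user sketch below (an assumption, not the paper's system model) decodes a discrete action into one DFT-codebook beam and a set of 2-bit RIS phase shifts and evaluates the resulting SNR, which is the kind of quantity a DRL agent would receive as its reward; the channel model, dimensions and bit width are illustrative.

```python
# Toy single-user illustration: decode a discrete action into (DFT beam index,
# per-element 2-bit RIS phases) and evaluate the resulting SNR as the learning reward.
import numpy as np

rng = np.random.default_rng(0)
N_T, N_RIS, BITS = 8, 16, 2                              # BS antennas, RIS elements, phase bits
dft_codebook = np.fft.fft(np.eye(N_T)) / np.sqrt(N_T)    # columns are candidate transmit beams
phase_levels = np.exp(1j * 2 * np.pi * np.arange(2 ** BITS) / 2 ** BITS)

h_direct = (rng.normal(size=N_T) + 1j * rng.normal(size=N_T)) / np.sqrt(2)
G = (rng.normal(size=(N_RIS, N_T)) + 1j * rng.normal(size=(N_RIS, N_T))) / np.sqrt(2)
h_ris = (rng.normal(size=N_RIS) + 1j * rng.normal(size=N_RIS)) / np.sqrt(2)

def reward(beam_index: int, phase_indices: np.ndarray, noise_power: float = 1.0) -> float:
    """SNR obtained for one choice of DFT beam and one discrete RIS phase configuration."""
    w = dft_codebook[:, beam_index]
    theta = np.diag(phase_levels[phase_indices])          # discrete reflection phases
    h_eff = h_direct + h_ris.conj() @ theta @ G           # effective BS-to-user channel
    return float(np.abs(h_eff @ w) ** 2 / noise_power)

action_phases = rng.integers(0, 2 ** BITS, size=N_RIS)   # e.g. the output of a DRL policy
print(reward(beam_index=3, phase_indices=action_phases))
```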
Funding: Supported in part by the National Natural Science Foundation of China (61902029), the R&D Program of Beijing Municipal Education Commission (No. KM202011232015), and the Project for Acceleration of University Classification Development (Nos. 5112211036, 5112211037, 5112211038).
Abstract: Nowadays, with the widespread application of the Internet of Things (IoT), mobile devices are reshaping our lives, and the data generated by mobile devices has reached a massive level. Traditional centralized processing is not suitable for processing these data due to the limited computing power and transmission load. Mobile Edge Computing (MEC) has been proposed to solve these problems: because devices have limited computation ability and battery capacity, tasks can be executed on the MEC server instead. However, how to schedule those tasks becomes a challenge, and it is the main topic of this paper. We design an efficient intelligent algorithm to jointly optimize energy cost and computing resource allocation in MEC. In view of the advantages of deep learning, we propose a Deep Learning-Based Traffic Scheduling Approach (DLTSA), which translates the scheduling problem into a classification problem. Evaluation demonstrates that our DLTSA approach can reduce energy cost and achieve better performance compared to traditional scheduling algorithms.
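A minimal sketch of "scheduling as classification" is given below; the feature set, number of servers, network size and labels are all assumptions rather than the DLTSA design.

```python
# Minimal sketch (assumptions throughout): task features are mapped by a small network
# to a class label naming the server or resource slot the task should be assigned to.
import torch
import torch.nn as nn

N_FEATURES, N_SERVERS = 6, 4      # e.g. task size, deadline, CPU need, ...; 4 MEC servers

scheduler = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, N_SERVERS),     # logits over candidate servers
)
optimizer = torch.optim.Adam(scheduler.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# toy batch: random task features labelled with the (assumed) energy-optimal server
tasks = torch.randn(32, N_FEATURES)
labels = torch.randint(0, N_SERVERS, (32,))
optimizer.zero_grad()
loss = loss_fn(scheduler(tasks), labels)
loss.backward()
optimizer.step()
print(scheduler(tasks[:1]).argmax(dim=1))   # predicted server for one task
```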
Abstract: Beamforming is significant for millimeter wave multi-user massive multi-input multi-output systems. Meanwhile, the overhead cost of channel state information and beam training is considerable, especially in dynamic environments. To reduce the overhead cost, we propose a multi-user beam tracking algorithm using a distributed deep Q-learning method. By learning users' moving trajectories online, the proposed algorithm learns to scan a beam subspace so as to maximize the average effective sum rate. Considering practical implementation, we model the continuous beam tracking problem as a non-Markov decision process and thus develop a simplified training scheme for deep Q-learning to reduce the training complexity. Furthermore, we propose a scalable state-action-reward design for scenarios with different numbers of users and antennas. Simulation results verify the effectiveness of the designed method.
Funding: Supported in part by NSFC (62102099, U22A2054, 62101594); in part by the Pearl River Talent Recruitment Program (2021QN02S643); the Guangzhou Basic Research Program (2023A04J1699); in part by the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research Development Programme; DSO National Laboratories under the AI Singapore Programme under AISG Award No. AISG2-RP-2020-019; the Energy Research Test-Bed and Industry Partnership Funding Initiative, Energy Grid (EG) 2.0 programme; the DesCartes and the Campus for Research Excellence and Technological Enterprise (CREATE) programme; MOE Tier 1 under Grant RG87/22; in part by the Singapore University of Technology and Design (SUTD) (SRG-ISTD-2021-165); in part by the SUTD-ZJU IDEA Grant SUTD-ZJU (VP) 202102; and in part by the Ministry of Education, Singapore, through its SUTD Kickstarter Initiative (SKI 20210204).
Abstract: Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles makes it challenging for vehicles to independently make avatar migration decisions based on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity issues caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and effectively reduces approximately 20% of the latency of avatar task execution in UAV-assisted vehicular Metaverses.
Funding: Partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (JP22H03643); the Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) (JPMJSP2145); and JST through the Establishment of University Fellowships Towards the Creation of Science Technology Innovation (JPMJFS2115).
Abstract: Dear Editor, This letter presents a novel segmentation approach that leverages dendritic neurons to tackle the challenges of medical image segmentation. In this study, we enhance the segmentation accuracy based on a SegNet variant including an encoder-decoder structure, an upsampling index, and a deep supervision method. Furthermore, we introduce a dendritic neuron-based convolutional block to enable nonlinear feature mapping, thereby further improving the effectiveness of our approach.
Funding: Supported by the Natural Science Foundation of China (Grant Nos. 42088101 and 42205149). Zhongwang WEI was supported by the Natural Science Foundation of China (Grant No. 42075158), Wei SHANGGUAN was supported by the Natural Science Foundation of China (Grant No. 41975122), and Yonggen ZHANG was supported by the National Natural Science Foundation of Tianjin (Grant No. 20JCQNJC01660).
Abstract: Accurate soil moisture (SM) prediction is critical for understanding hydrological processes. Physics-based (PB) models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes. In addition to PB models, deep learning (DL) models have been widely used in SM prediction recently. However, few pure DL models have notably high success rates because they lack physical information. Thus, we developed hybrid models that effectively integrate the outputs of PB models into DL models to improve SM predictions. To this end, we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale (the attention model). We further built an ensemble model that combines the advantages of different hybrid schemes (the ensemble model). We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory (ConvLSTM) model for 1–16 days of SM predictions. The performances of the proposed hybrid models were investigated and compared with two existing hybrid models. The results showed that the attention model could leverage the benefits of PB models and achieved the best predictability of drought events among the different hybrid models. Moreover, the ensemble model performed best among all hybrid models at all forecast time scales and under different soil conditions. Notably, the ensemble model outperformed the pure DL model over 79.5% of the in situ stations for 16-day predictions. These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
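One way to picture the attention-based hybrid scheme is a learned blend of the PB and DL forecasts; the sketch below is an assumption about how such a fusion could look, not the paper's architecture.

```python
# Hedged sketch: a small weight head produces a per-sample, per-lead-time weight in (0, 1)
# that blends a physics-based forecast with a deep-learning forecast.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, n_leads: int = 16):
        super().__init__()
        self.weight_head = nn.Sequential(nn.Linear(2 * n_leads, 32), nn.ReLU(),
                                         nn.Linear(32, n_leads), nn.Sigmoid())
    def forward(self, dl_forecast: torch.Tensor, pb_forecast: torch.Tensor) -> torch.Tensor:
        w = self.weight_head(torch.cat([dl_forecast, pb_forecast], dim=1))
        return w * pb_forecast + (1.0 - w) * dl_forecast   # attention-weighted blend

fusion = AttentionFusion()
dl_out = torch.rand(8, 16)   # e.g. ConvLSTM soil-moisture forecasts for 16 lead days
pb_out = torch.rand(8, 16)   # e.g. Global Forecast System soil-moisture forecasts
print(fusion(dl_out, pb_out).shape)   # torch.Size([8, 16])
```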
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42004030), the Basic Scientific Fund for National Public Research Institutes of China (Grant No. 2022S03), the Science and Technology Innovation Project (LSKJ202205102) funded by Laoshan Laboratory, and the National Key Research and Development Program of China (2020YFB0505805).
Abstract: The scarcity of in-situ ocean observations poses a challenge for real-time information acquisition in the ocean. Among the crucial hydroacoustic environmental parameters, ocean sound velocity exhibits significant spatial and temporal variability and is highly relevant to oceanic research. In this study, we propose a new data-driven approach, leveraging deep learning techniques, for the prediction of sound velocity fields (SVFs). Our novel spatiotemporal prediction model, ST-LSTM-SA, combines spatiotemporal long short-term memory (ST-LSTM) with a self-attention mechanism to enable accurate and real-time prediction of SVFs. To circumvent the limited amount of observational data, we employ transfer learning by first training the model on reanalysis datasets and then fine-tuning it on in-situ analysis data to obtain the final prediction model. Using the historical 12-month SVFs as input, our model predicts the SVFs for the subsequent three months. We compare the performance of five models: artificial neural networks (ANN), long short-term memory (LSTM), convolutional LSTM (ConvLSTM), ST-LSTM, and our proposed ST-LSTM-SA model, in a test experiment spanning 2019 to 2022. Our results demonstrate that the ST-LSTM-SA model significantly improves the prediction accuracy and stability of sound velocity in both the temporal and spatial dimensions. The ST-LSTM-SA model not only accurately predicts the ocean sound velocity field (SVF) but also provides valuable insights for the spatiotemporal prediction of other oceanic environmental variables.
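The transfer-learning recipe itself is simple to sketch: pre-train on reanalysis data, then fine-tune on in-situ analysis data with a reduced learning rate. The model, data shapes and learning rates below are placeholders, not the paper's settings.

```python
# Hedged sketch of the two-stage schedule: the same network is first trained on abundant
# reanalysis fields and then fine-tuned on scarce in-situ analysis fields.
import torch
import torch.nn as nn

def run_epoch(model, loader, optimizer, loss_fn):
    for history, target in loader:          # history: past 12 months, target: next 3 months
        optimizer.zero_grad()
        loss = loss_fn(model(history), target)
        loss.backward()
        optimizer.step()

model = nn.Sequential(nn.Flatten(), nn.Linear(12 * 32 * 32, 3 * 32 * 32))  # stand-in for ST-LSTM-SA
loss_fn = nn.MSELoss()
toy_batch = [(torch.rand(4, 12, 32, 32), torch.rand(4, 3 * 32 * 32))]       # synthetic data

# stage 1: pre-train on reanalysis fields
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2):
    run_epoch(model, toy_batch, pretrain_opt, loss_fn)

# stage 2: fine-tune on in-situ analysis fields with a reduced learning rate
finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(2):
    run_epoch(model, toy_batch, finetune_opt, loss_fn)
```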
Funding: Supported in part by the Beijing Natural Science Foundation (Grant No. 8222051), the National Key R&D Program of China (Grant No. 2022YFC3004103), the National Natural Science Foundation of China (Grant Nos. 42275003 and 42275012), the China Meteorological Administration Key Innovation Team (Grant Nos. CMA2022ZD04 and CMA2022ZD07), and the Beijing Science and Technology Program (Grant No. Z221100005222012).
Abstract: Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, the forecasting of thunderstorm gusts is mainly based on traditional subjective methods, which fail to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM) with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables: the radar reflectivity factor, lightning location, and the 1-h maximum instantaneous wind speed from automatic weather stations (AWSs), and obtain a reasonable ground truth of thunderstorm gusts. Then, we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021-23 are then used as the training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with other methods. The results show that TG-TransUnet has the best prediction results at 1-6 h. The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
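The ground-truth construction combines three gridded variables; the sketch below shows one plausible way to do so, with thresholds that are illustrative assumptions rather than the values used by the authors.

```python
# Hedged sketch of building a gridded thunderstorm-gust ground truth from the three
# variables named above. The thresholds (35 dBZ reflectivity, any lightning in the cell,
# 17.2 m/s gust) are illustrative assumptions only.
import numpy as np

def gust_ground_truth(reflectivity_dbz: np.ndarray,
                      lightning_count: np.ndarray,
                      max_gust_ms: np.ndarray) -> np.ndarray:
    """Boolean grid: a cell is a thunderstorm-gust cell if all three criteria hold."""
    convective = reflectivity_dbz >= 35.0
    electrified = lightning_count > 0
    gusty = max_gust_ms >= 17.2
    return convective & electrified & gusty

grid = (128, 128)
label = gust_ground_truth(np.random.uniform(0, 60, grid),
                          np.random.poisson(0.2, grid),
                          np.random.uniform(0, 30, grid))
print(label.mean())   # fraction of grid cells labelled as thunderstorm gusts
```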
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grants 61941104 and 61921004, the Key Research and Development Program of Shandong Province under Grant 2020CXGC010108, and the Southeast University-China Mobile Research Institute Joint Innovation Center; supported in part by the Scientific Research Foundation of Graduate School of Southeast University under Grant YBPY2118.
Abstract: The great potential of massive Multiple-Input Multiple-Output (MIMO) in Frequency Division Duplex (FDD) mode can be fully exploited when the downlink Channel State Information (CSI) is available at base stations. However, accurate CSI is difficult to obtain due to the large amount of feedback overhead caused by massive antennas. In this paper, we propose a deep learning-based joint channel estimation and feedback framework, which comprehensively realizes the estimation, compression, and reconstruction of downlink channels in FDD massive MIMO systems. Two networks are constructed to perform estimation and feedback explicitly and implicitly. The explicit network adopts a multi-Signal-to-Noise-Ratios (SNRs) technique to obtain a single trained channel estimation subnet that works well at different SNRs and employs a deep residual network to reconstruct the channels, while the implicit network directly compresses pilots and sends them back to reduce network parameters. A quantization module is also designed to generate data-bearing bitstreams. Simulation results show that the two proposed networks exhibit excellent reconstruction performance and are robust to different environments and quantization errors.
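The quantization module can be illustrated with a simple uniform quantizer; the bit width and clipping range below are assumptions, not the paper's design.

```python
# Hedged sketch of a quantization module for CSI feedback: clip the compressed codeword,
# uniformly quantize each entry to B bits to form the feedback bitstream, and dequantize
# at the base station before reconstruction.
import numpy as np

B = 4                                   # bits per codeword entry (assumption)
LEVELS = 2 ** B

def quantize(codeword: np.ndarray) -> np.ndarray:
    clipped = np.clip(codeword, -1.0, 1.0)
    return np.round((clipped + 1.0) / 2.0 * (LEVELS - 1)).astype(np.uint8)   # integer symbols

def dequantize(symbols: np.ndarray) -> np.ndarray:
    return symbols.astype(np.float32) / (LEVELS - 1) * 2.0 - 1.0

codeword = np.random.uniform(-1, 1, size=32).astype(np.float32)   # compressed CSI feature
recovered = dequantize(quantize(codeword))
print(float(np.mean((codeword - recovered) ** 2)))                # quantization error
```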
Funding: This work was supported by the National Key R&D Program of China (2022YFB3605103), the National Natural Science Foundation of China (62204241, U22A2084, 62121005, and 61827813), the Natural Science Foundation of Jilin Province (20230101345JC, 20230101360JC, and 20230101107JC), the Youth Innovation Promotion Association of CAS (2023223), the Young Elite Scientist Sponsorship Program by CAST (YESS20200182), and the CAS Talents Program (E30122E4M0).
Abstract: 240 nm AlGaN-based micro-LEDs with different sizes are designed and fabricated. The external quantum efficiency (EQE) and light extraction efficiency (LEE) are then systematically investigated by comparing size and edge effects. It is revealed that the peak optical output power increases by 81.83% as the size shrinks from 50.0 to 25.0 μm. Thereinto, the LEE increases by 26.21%, and the LEE enhancement mainly comes from sidewall light extraction. Most notably, transverse-magnetic (TM) mode light intensifies faster as the size shrinks, owing to the tilted mesa sidewall and the Al reflector design. However, for 12.5 μm sized micro-LEDs, the output power is lower than that of the 25.0 μm sized ones. The underlying mechanism is that, even with SiO2 passivation protection, the edge effect, which leads to current leakage and Shockley-Read-Hall (SRH) recombination, deteriorates rapidly as the size shrinks further. Moreover, the ratio of the p-contact area to the mesa area is much lower, which deteriorates the p-type current spreading at the mesa edge. These findings provide a rule of thumb for the design of high-efficiency micro-LEDs with wavelengths below 250 nm, which will pave the way for wide applications of deep ultraviolet (DUV) micro-LEDs.
Funding: Supported in part by the National Natural Science Foundation of China (61876011), the National Key Research and Development Program of China (2022YFB4703700), the Key Research and Development Program 2020 of Guangzhou (202007050002), and the Key-Area Research and Development Program of Guangdong Province (2020B090921003).
Abstract: Recently, there have been some attempts to apply Transformers to 3D point cloud classification. In order to reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an Inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. The source code of this paper is available at https://github.com/yahuiliu99/PointConT.
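Content-based attention can be sketched by grouping points by feature similarity and attending only within each group; in the sketch below, k-means is a stand-in for the paper's clustering, and the shapes and cluster count are assumptions.

```python
# Hedged sketch: group points by feature similarity and compute self-attention only within
# each group, so distant but similar points can attend to each other while the attention
# cost stays bounded by the group size.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def content_based_attention(features: torch.Tensor, n_clusters: int = 4) -> torch.Tensor:
    """features: (N, C) per-point features; returns attention-refined features."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features.detach().numpy())
    out = torch.zeros_like(features)
    for c in range(n_clusters):
        idx = torch.from_numpy((labels == c).nonzero()[0])
        group = features[idx]                                    # (M, C)
        attn = F.softmax(group @ group.T / group.shape[1] ** 0.5, dim=-1)
        out[idx] = attn @ group                                  # self-attention within cluster
    return out

points_feat = torch.randn(256, 32)                   # toy per-point features
print(content_based_attention(points_feat).shape)    # torch.Size([256, 32])
```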
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Grant 62071179.
Abstract: Although Federated Deep Learning (FDL) enables distributed machine learning in the Internet of Vehicles (IoV), it requires multiple clients to upload model parameters, and thus still incurs unavoidable communication overhead and data privacy risks. The recently proposed Swarm Learning (SL) provides a decentralized machine learning approach based on unit edge computing and blockchain-based coordination. This paper proposes a Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework. The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, and then aggregates the global FDL model among the different SL groups with a credibility-weights prediction algorithm. Extensive experimental results show that, compared with the baseline frameworks, the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%, while the model performance improves by about 5.02% for the same number of training iterations.
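The credibility-weighted aggregation step can be pictured as a weighted average of the group models' parameters; the sketch below assumes the credibility weights are already given, since their prediction algorithm is the paper's own contribution.

```python
# Hedged sketch: aggregate per-group SL models into the global FDL model by averaging
# their parameters in proportion to a given credibility weight per group.
import torch
import torch.nn as nn

def weighted_aggregate(models: list, credibility: torch.Tensor) -> dict:
    """Return a state_dict that is the credibility-weighted average of the group models."""
    weights = credibility / credibility.sum()
    global_state = {k: torch.zeros_like(v) for k, v in models[0].state_dict().items()}
    for w, model in zip(weights, models):
        for k, v in model.state_dict().items():
            global_state[k] += w * v
    return global_state

groups = [nn.Linear(10, 2) for _ in range(3)]            # toy per-group SL models
credibility = torch.tensor([0.5, 0.3, 0.2])              # e.g. predicted credibility weights
global_model = nn.Linear(10, 2)
global_model.load_state_dict(weighted_aggregate(groups, credibility))
print(global_model.weight.shape)                          # torch.Size([2, 10])
```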
Funding: Dao-Bing Wang was supported by the Beijing Natural Science Foundation Project (No. 3222030), the National Natural Science Foundation of China (No. 52274002), and the PetroChina Science and Technology Innovation Foundation Project (No. 2021DQ02-0201); Fu-Jian Zhou was supported by the National Natural Science Foundation of China (No. 52174045).
Abstract: Deep and ultra-deep reservoirs have gradually become the primary focus of hydrocarbon exploration as a result of a series of significant discoveries in deep hydrocarbon exploration worldwide. These reservoirs present unique challenges due to their deep burial depth (4500-8882 m), low matrix permeability, complex crustal stress conditions, high temperature and pressure (HTHP, 150-200 °C, 105-155 MPa), coupled with high salinity of formation water. Consequently, the costs associated with their exploitation and development are exceptionally high. In deep and ultra-deep reservoirs, hydraulic fracturing is commonly used to achieve high and stable production. During hydraulic fracturing, a substantial volume of fluid is injected into the reservoir. However, statistical analysis reveals that the flowback rate is typically less than 30%, leaving the majority of the fluid trapped within the reservoir. Therefore, hydraulic fracturing in deep reservoirs not only enhances the reservoir permeability by creating artificial fractures but also damages reservoirs due to the fracturing fluids involved. The challenging "three-high" environment of a deep reservoir, characterized by high temperature, high pressure, and high salinity, exacerbates conventional forms of damage, including water sensitivity, retention of fracturing fluids, rock creep, and proppant breakage. In addition, specific damage mechanisms come into play, such as fracturing fluid decomposition at elevated temperatures and proppant diagenetic reactions at HTHP conditions. Presently, the foremost concern in deep oil and gas development lies in effectively assessing the damage inflicted on these reservoirs by hydraulic fracturing, comprehending the underlying mechanisms, and selecting appropriate solutions. It is noteworthy that the majority of existing studies on reservoir damage primarily focus on conventional reservoirs, with limited attention given to deep reservoirs and a lack of systematic summaries. In light of this, our approach entails initially summarizing the current knowledge pertaining to the types of fracturing fluids employed in deep and ultra-deep reservoirs. Subsequently, we delve into a systematic examination of the damage processes and mechanisms caused by fracturing fluids within the context of hydraulic fracturing in deep reservoirs, taking into account the unique reservoir characteristics of high temperature, high pressure, and high in-situ stress. In addition, we provide an overview of research progress related to high-temperature deep reservoir fracturing fluids and the damage of aqueous fracturing fluids to the rock matrix, both artificial and natural fractures, and sand-packed fractures. We conclude by offering a summary of current research advancements and future directions, which hold significant potential for facilitating the efficient development of deep oil and gas reservoirs while effectively mitigating reservoir damage.