With the popularization of the Internet and the development of technology, cyber threats are increasing day by day. Threats such as malware, hacking, and data breaches have had a serious impact on cybersecurity. The network security environment in the era of big data is characterized by large data volumes, high diversity, and demanding real-time requirements, and traditional security defense methods and tools can no longer cope with complex and changing network security threats. This paper proposes a machine-learning security defense algorithm based on metadata association features, which emphasizes control over unauthorized users through privacy, integrity, and availability. A user model is established and a mapping between the user model and the metadata of each data source is generated. By analyzing the user model and its mapping relationships, a query against the user model can be decomposed into queries against the various heterogeneous data sources, realizing the integration of heterogeneous data sources based on metadata association features. Customer information is defined and classified, sensitive data is automatically identified and perceived, and a behavior audit and analysis platform is built to analyze user behavior trajectories, completing the construction of a machine-learning customer information security defense system. The experimental results show that when the data volume is 5×10³ bits, the data storage integrity of the proposed method is 92%, the data accuracy is 98%, and the success rate of data intrusion is only 2.6%. It can be concluded that the proposed data storage method is safe, the data accuracy remains at a high level, and the data disaster recovery performance is good. The method can effectively resist data intrusion and offers high access control security. It can not only detect viruses in user data storage but also realize integrated virus processing, further optimizing the security defense effect on user big data.
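The core idea of decomposing a user-model query into per-source queries via a metadata mapping can be sketched in a few lines. This is an illustrative sketch only: the field names, source names, and mapping structure are assumptions, not the paper's actual schema.

```python
# Mapping from user-model fields to (data source, native field) pairs,
# built from the metadata of each heterogeneous source. All names are
# hypothetical, for illustration.
METADATA_MAP = {
    "customer_id":  ("crm_db",   "cust_no"),
    "login_time":   ("auth_log", "ts"),
    "access_level": ("iam_db",   "priv_level"),
}

def decompose_query(requested_fields):
    """Split a user-model query into one sub-query per data source."""
    per_source = {}
    for field in requested_fields:
        source, native = METADATA_MAP[field]
        per_source.setdefault(source, []).append(native)
    return per_source

queries = decompose_query(["customer_id", "login_time", "access_level"])
# Each source now receives only the fields it actually stores.
```

Integration then amounts to running each sub-query against its own source and joining the results back through the same mapping.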
Early screening for diabetic retinopathy (DR) plays an important role in preventing irreversible blindness. Existing research has failed to fully exploit the effective DR lesion information in fundus images. Besides, traditional attention schemes have not considered the impact of lesion-type differences on grading, resulting in unreasonable extraction of important lesion features. Therefore, this paper proposes a DR diagnosis scheme that integrates a multi-level patch attention generator (MPAG) and a lesion localization module (LLM). Firstly, the MPAG is used to predict patches of different sizes and generate a weighted attention map based on the prediction score and the types of lesions contained in the patches, fully considering the impact of lesion-type differences on grading and solving the problem that lesion attention maps cannot otherwise be refined and adapted to the final DR diagnosis task. Secondly, the LLM generates a global attention map based on localization. Finally, the weighted attention map and the global attention map are weighted with the fundus image to fully exploit effective DR lesion information and increase the classification network's attention to lesion details. Extensive experiments on the public DDR dataset demonstrate the effectiveness of the proposed method, which obtains an accuracy of 0.8064.
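The final weighting step, in which the two attention maps modulate the fundus image before classification, can be illustrated with a minimal pixel-wise sketch. The fusion rule and the weight alpha are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative fusion: fundus * (alpha * patch_att + (1 - alpha) * global_att),
# applied pixel-wise. Images are represented as nested lists for simplicity.
def fuse_attention(fundus, patch_att, global_att, alpha=0.5):
    fused = []
    for f_row, p_row, g_row in zip(fundus, patch_att, global_att):
        fused.append([f * (alpha * p + (1 - alpha) * g)
                      for f, p, g in zip(f_row, p_row, g_row)])
    return fused

out = fuse_attention([[1.0, 2.0]], [[0.8, 0.2]], [[0.4, 0.6]])
# ≈ [[0.6, 0.8]]: pixels flagged by either map keep more of their intensity.
```

Pixels that neither map marks as lesion-relevant are attenuated, focusing the downstream classifier on lesion details.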
Structured illumination microscopy (SIM) is a popular and powerful super-resolution (SR) technique in biomedical research. However, the conventional reconstruction algorithm for SIM relies heavily on accurate prior knowledge of the illumination patterns and the signal-to-noise ratio (SNR) of the raw images. To obtain high-quality SR images, several raw images need to be captured at a high fluorescence level, which further restricts SIM's temporal resolution and its applications. Deep learning (DL) is a data-driven technology that has been used to push the limits of optical microscopy. In this study, we propose a deep neural network based on multi-level wavelets and an attention mechanism (MWAM) for SIM. Our results show that the MWAM network can extract the high-frequency information contained in SIM raw images and accurately integrate it into the output image, resulting in superior SR images compared with those generated using wide-field images as input data. We also demonstrate that the number of SIM raw images can be reduced to three, with one image in each illumination orientation, to achieve the optimal tradeoff between temporal and spatial resolution. Furthermore, our MWAM network exhibits superior reconstruction ability on low-SNR images compared with conventional SIM algorithms. We have also analyzed the adaptability of this network to other biological samples and successfully applied the pretrained model to other SIM systems.
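The multi-level wavelet decomposition that MWAM builds on separates a signal into low-frequency approximations and high-frequency details at each level. A minimal one-dimensional Haar sketch makes the idea concrete; real SIM pipelines use 2-D transforms and learned filters, so this is illustrative only.

```python
# One-dimensional Haar transform: each level halves the signal into
# approximation (local averages) and detail (local differences) coefficients.
def haar_level(signal):
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_multilevel(signal, levels):
    """Repeatedly decompose the approximation, collecting detail bands."""
    details = []
    for _ in range(levels):
        signal, d = haar_level(signal)
        details.append(d)
    return signal, details

approx, details = haar_multilevel([4, 2, 6, 6, 5, 3, 2, 8], 2)
# details[0] holds the finest high-frequency band, which is exactly the
# kind of information a network must recover for super-resolution.
```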
In view of the problems of inconsistent data semantics, inconsistent data formats, and difficult data quality assurance between the railway engineering design phase and the construction and operation phases, as well as the difficulty of fully realizing the value of design results, this paper proposes a design and implementation scheme for a railway engineering collaborative design platform. The platform mainly includes functional modules such as metadata management, design collaboration, design delivery management, a model component library, model rendering services, and Building Information Modeling (BIM) application services. On this basis, research is conducted on multi-disciplinary parameterized collaborative design technology for railway engineering, infrastructure data management and delivery technology, and multi-source design data fusion and application technology. The platform is compared with other railway design software to further validate its advantages and advanced features. It has been widely applied in multiple railway construction projects, greatly improving design and project management efficiency.
With the construction of the power Internet of Things (IoT), communication between smart devices in urban distribution networks has been gradually moving towards high speed, high compatibility, and low latency, which provides reliable support for reconfiguration optimization in urban distribution networks. Thus, this study proposed a deep reinforcement learning based multi-level dynamic reconfiguration method for urban distribution networks in a cloud-edge collaboration architecture to obtain a real-time optimal multi-level dynamic reconfiguration solution. First, the multi-level dynamic reconfiguration method was discussed, covering the feeder, transformer, and substation levels. Subsequently, a multi-agent system was combined with the cloud-edge collaboration architecture to build a deep reinforcement learning model for multi-level dynamic reconfiguration in an urban distribution network. The cloud-edge collaboration architecture can effectively support the multi-agent system's "centralized training and decentralized execution" operation mode and improve the learning efficiency of the model. Thereafter, for the multi-agent system, this study adopted a combination of offline and online learning to endow the model with the ability to automatically optimize and update its strategy. In the offline learning phase, a multi-agent conservative Q-learning (MACQL) algorithm was proposed to stabilize the learning results and reduce the risk of the subsequent online learning phase. In the online learning phase, a multi-agent deep deterministic policy gradient (MADDPG) algorithm based on policy gradients was proposed to explore the action space and update the experience pool. Finally, the effectiveness of the proposed method was verified through a simulation analysis of a real-world 445-node system.
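The conservative element of MACQL-style offline learning can be illustrated with the standard conservative Q-learning (CQL) regularizer, which pushes down Q-values of actions not seen in the logged data relative to the logged actions. This is a generic sketch of that penalty term, not the paper's exact multi-agent algorithm.

```python
import math

def conservative_penalty(q_values, data_action, alpha=1.0):
    """CQL-style regularizer: alpha * (logsumexp(Q(s, .)) - Q(s, a_data)).

    Added to the TD loss, it keeps the learned Q-function pessimistic about
    out-of-distribution actions, stabilizing purely offline training.
    """
    m = max(q_values)  # shift for numerical stability
    logsumexp = m + math.log(sum(math.exp(q - m) for q in q_values))
    return alpha * (logsumexp - q_values[data_action])
```

When all actions look equally good, the penalty reduces to alpha * log(num_actions); it grows when the critic overvalues actions the dataset never took, which is exactly the failure mode offline RL must avoid before online MADDPG fine-tuning begins.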
We report the design and implementation of a field-programmable gate array (FPGA) based hardware platform, which is used to realize control and signal readout of trapped-ion-based multi-level quantum systems. This platform integrates a four-channel 2.8 Gsps @ 14 bits arbitrary waveform generator, a 16-channel 1 Gsps @ 14 bits direct-digital-synthesis-based radio-frequency generator, a 16-channel 8 ns resolution pulse generator, a 10-channel 16 bits digital-to-analog converter module, and a 2-channel proportional-integral-derivative (PID) controller. The hardware platform can be applied to trapped-ion-based multi-level quantum systems, enabling quantum control of multi-level quantum systems and high-dimensional quantum simulation. The platform is scalable: more channels for control and signal readout can be implemented by utilizing more parallel duplications of the hardware. The platform also has a bright future in scaled trapped-ion-based quantum systems.
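The direct-digital-synthesis (DDS) core behind such an RF generator is easy to model in software: an N-bit phase accumulator steps by a tuning word each clock, and the top bits index a sine lookup table. The sketch below is a minimal software model; the bit widths and clock rate are illustrative assumptions, not the platform's actual parameters.

```python
import math

def dds_frequency(tuning_word, clock_hz, acc_bits=32):
    """Output frequency of a DDS: f_out = tuning_word * f_clk / 2**acc_bits."""
    return tuning_word * clock_hz / 2 ** acc_bits

def dds_samples(tuning_word, n, acc_bits=32, lut_bits=10):
    """Generate n output samples from a phase accumulator + sine LUT."""
    lut = [math.sin(2 * math.pi * i / 2 ** lut_bits) for i in range(2 ** lut_bits)]
    phase, out = 0, []
    for _ in range(n):
        out.append(lut[phase >> (acc_bits - lut_bits)])       # top bits index LUT
        phase = (phase + tuning_word) & (2 ** acc_bits - 1)   # wrap at 2**acc_bits
    return out

# A tuning word of 2**31 on a 1 GHz clock yields a 500 MHz output.
```

On the FPGA the same accumulator and table run in fixed-point logic at the converter clock; per-channel tuning words give the 16 independent RF outputs.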
Vast amounts of heterogeneous marine observation data have been accumulated due to the rapid development of ocean observation technology. Several state-of-the-art methods have been proposed to manage the emerging Internet of Things (IoT) sensor data. However, an inefficient data management strategy during the data storage process can lead to missing metadata; thus, part of the sensor data cannot be indexed and utilized (a condition known as a "data swamp"). Researchers have focused on optimizing storage procedures to prevent such disasters, but few have attempted to restore the missing metadata. In this study, we propose an AI-based algorithm to reconstruct the metadata of heterogeneous marine data in data swamps to solve the above problems. First, a MapReduce algorithm is proposed to preprocess raw marine data and extract its feature tensors in parallel. Second, the feature tensors are loaded into a machine learning algorithm and a clustering operation is performed; the similarity between incoming data and the trained clustering results is also calculated. Finally, metadata reconstruction is performed based on the processing results of existing marine observation data. Experiments are designed using existing datasets obtained from ocean observing systems, verifying the effectiveness of the algorithms. The results demonstrate the excellent performance of the proposed algorithm for metadata reconstruction of heterogeneous marine observation data.
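The pipeline above can be sketched end to end: map raw records to feature tensors in parallel, then assign each to the nearest trained cluster so its missing metadata can be filled from that cluster's known metadata template. The features, centroids, and cluster names here are illustrative assumptions, not the paper's trained model.

```python
from multiprocessing.dummy import Pool  # thread pool standing in for MapReduce workers

def extract_features(record):
    """Map phase: reduce a raw record to a small feature tensor."""
    vals = record["values"]
    return (min(vals), max(vals), sum(vals) / len(vals))

def nearest_cluster(feature, centroids):
    """Assign a feature tensor to the closest centroid (squared distance)."""
    return min(centroids,
               key=lambda c: sum((f - x) ** 2 for f, x in zip(feature, centroids[c])))

records = [{"values": [1.0, 2.0, 3.0]}, {"values": [10.0, 20.0, 30.0]}]
with Pool(2) as pool:                      # parallel "map" phase
    features = pool.map(extract_features, records)

# Hypothetical centroids learned from data whose metadata survived.
centroids = {"temperature": (2.0, 2.0, 2.0), "salinity": (20.0, 20.0, 20.0)}
labels = [nearest_cluster(f, centroids) for f in features]
# Each swamped record inherits the metadata template of its assigned cluster.
```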
From the beginning, the process of research and its publication has been an ever-growing phenomenon, and with the emergence of web technologies its growth rate is overwhelming. On a rough estimate, more than thirty thousand research journals issue around four million papers annually on average. Search engines, indexing services, and digital libraries search for such publications over the web. Nevertheless, retrieving the most relevant articles for a user's request remains elusive, mainly because articles are not appropriately indexed according to hierarchies of granular subject classification. To overcome this issue, researchers are striving to investigate new techniques for classifying research articles, especially when the complete article text is not available (the case of non-open-access articles). The proposed study aims to investigate multilabel classification over the available metadata in the best possible way and to assess to what extent metadata-based features can perform in contrast to content-based approaches. In this regard, novel techniques for multilabel classification have been proposed, developed, and evaluated on metadata such as the title and keywords of articles. The proposed technique has been assessed on two diverse datasets: one from the Journal of Universal Computer Science (J.UCS) and a benchmark dataset comprising articles published by the Association for Computing Machinery (ACM). The proposed technique yields encouraging results in contrast to state-of-the-art techniques in the literature.
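Metadata-only multilabel classification can be illustrated with a toy one-vs-rest scheme: each candidate subject label is scored by token overlap between the article's title plus keywords and a label vocabulary, and every label above a threshold is assigned. The vocabularies and threshold are illustrative assumptions, not the paper's actual models.

```python
# Hypothetical per-label vocabularies; a real system would learn these.
LABEL_VOCAB = {
    "machine_learning": {"learning", "neural", "classification"},
    "databases":        {"query", "index", "metadata"},
}

def predict_labels(title, keywords, threshold=1):
    """Multilabel prediction from metadata only: an article may get
    several subject labels, unlike single-label classification."""
    tokens = set((title + " " + keywords).lower().split())
    return sorted(label for label, vocab in LABEL_VOCAB.items()
                  if len(tokens & vocab) >= threshold)

labels = predict_labels("Metadata based classification", "index learning")
```

The point the study makes is that even such shallow metadata signals, properly modeled, can approach content-based classifiers when full text is unavailable.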
Spammer detection aims to identify and block users who perform malicious activities. Such users should be identified and removed from social media to keep the social media process organic and to maintain the integrity of online social spaces. Previous research has sought spammers through hybrid approaches combining graph mining, posted content, and metadata, using small, manually labeled datasets. However, such hybrid approaches are unscalable, not robust, dataset-dependent, and require numerous parameters, complex graphs, and natural language processing (NLP) resources to make decisions, which makes spammer detection impractical for real-time use. For example, graph mining requires neighbors' information, and content-based approaches require multiple tweets from user profiles plus NLP resources, neither of which is applicable in a real-time environment. To fill this gap, we first propose a REal-time Metadata based Spammer detection (REMS) model that uses only metadata features to identify spammers, requiring the fewest parameters while providing adequate results. REMS is a scalable and robust model that uses only 19 metadata features of Twitter users to achieve a 73.81% F1-score using a balanced training dataset (50% spam and 50% genuine users). The 19 features comprise 8 original and 11 derived features of Twitter users, identified through extensive experiments and analysis. Second, we present the largest and most diverse dataset in the published research, comprising 211K spam users and 1 million genuine users. The dataset's diversity can be measured by the fact that it comprises users who posted 2.1 million tweets on seven topics (100 hashtags) from 6 different geographical locations. REMS's superior classification performance with multiple machine and deep learning methods indicates that metadata features alone have the potential to identify spammers, rather than relying on volatile posted content and complex graph structures. The dataset and REMS's code are available on GitHub (www.github.com/mhadnanali/REMS).
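Deriving features from raw account metadata, in the spirit of REMS's 8 original plus 11 derived features, can be sketched as simple ratio computations. The specific ratios below are illustrative assumptions of the kind of derived feature meant, not the model's published feature list.

```python
def derive_features(meta):
    """Turn raw profile counters into rate/ratio features that are cheap
    to compute in real time (no tweets, no graph neighbors needed)."""
    age_days = max(meta["account_age_days"], 1)          # guard division by zero
    return {
        "tweets_per_day":       meta["tweet_count"] / age_days,
        "followers_per_friend": meta["followers"] / max(meta["friends"], 1),
        "listed_per_follower":  meta["listed_count"] / max(meta["followers"], 1),
    }

f = derive_features({"account_age_days": 100, "tweet_count": 5000,
                     "followers": 50, "friends": 2000, "listed_count": 0})
# A very high tweet rate combined with a low follower/friend ratio is a
# typical spam signal; such vectors feed a standard classifier.
```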
Massive rural-to-urban migration in China is consequential for political trust: rural-to-urban migrants have been found to hold lower levels of trust in local government than their rural peers who choose to stay in the countryside (means of 4.92 and 6.34 out of 10, respectively, p < 0.001). This article explores why migrants hold the levels of political trust that they do in their county-level governments. Using data on rural-to-urban migrants from the China Family Panel Survey, this study applies hierarchical linear modeling (HLM) to unpack the multi-level explanatory factors of rural-to-urban migrants' political trust. Findings show that individual-level socio-economic characteristics and perceptions of government performance (Level 1), neighborhood-level characteristics such as the physical and social status and environment of neighborhoods (Level 2), and the objective macroeconomic performance of county-level government (Level 3) work together to explain migrants' trust levels. These results suggest that the effects of neighborhood-level factors on rural-to-urban migrants' political trust merit policy and public management attention in rapidly urbanizing countries.
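The case for a multi-level model rests on how much of the variance in trust scores lies between groups (neighborhoods, counties) rather than between individuals, which the intraclass correlation (ICC) quantifies. A minimal sketch, with made-up numbers, of the one-way ANOVA-style ICC that typically motivates fitting an HLM:

```python
def icc(groups):
    """Share of total variance attributable to between-group differences.

    groups: list of lists, one inner list of scores per group.
    ICC near 0 -> single-level regression suffices; ICC well above 0 ->
    multi-level modeling (HLM) is warranted.
    """
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means)) / n
    within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / n
    return between / (between + within)
```

Here a nonzero ICC at the neighborhood and county levels is what justifies the three-level structure the study fits.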
Funding (machine-learning security defense study): This work was supported by the National Natural Science Foundation of China (U2133208, U20A20161).
Funding (DR diagnosis study): supported in part by the Research on the Application of Multimodal Artificial Intelligence in Diagnosis and Treatment of Type 2 Diabetes under Grant No. 2020SK50910, and in part by the Hunan Provincial Natural Science Foundation of China under Grant 2023JJ60020.
Funding (SIM reconstruction study): supported by the National Natural Science Foundation of China (Grant Nos. 62005307 and 61975228).
Funding (railway collaborative design platform study): supported by the National Key Research and Development Program of China (2021YFB2600405).
Funding (distribution network reconfiguration study): supported by the National Natural Science Foundation of China under Grant 52077146.
Funding (trapped-ion hardware platform study): supported by the Strategic Priority Research Program of CAS (Grant No. XDC07020200); the National Key R&D Program of China (Grant No. 2018YFA0306600); the National Natural Science Foundation of China (Grant Nos. 11974330 and 92165206); the Chinese Academy of Sciences (Grant No. QYZDY-SSW-SLH004); the Innovation Program for Quantum Science and Technology (Grant Nos. 2021ZD0302200 and 2021ZD0301603); the Anhui Initiative in Quantum Information Technologies (Grant No. AHY050000); the Hefei Comprehensive National Science Center; and the Fundamental Research Funds for the Central Universities.
Funding (marine metadata reconstruction study): supported by the Shandong Province Natural Science Foundation (No. ZR2020QF028).
Funding (spammer detection study): supported by the Guangzhou Government Project (Grant No. 62216235) and the National Natural Science Foundation of China (Grant Nos. 61573328, 622260-1).