The scale and complexity of big data are growing continuously, posing severe challenges to traditional data processing methods, especially in the field of clustering analysis. To address this issue, this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update (BDTMCDIncreUpdate), which combines distributed computing, storage technology, and incremental update techniques to provide an efficient and effective means for clustering analysis. First, the original dataset is divided into multiple sub-blocks, and distributed computing resources are used to process the sub-blocks in parallel, improving efficiency. Then, initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results. When new data arrive, incremental update techniques are employed to update the core tensor and factor matrices, ensuring that the clustering model adapts to changes in the data. Finally, by combining the updated core tensor and factor matrices with historical computational results, refined clustering results are obtained, achieving real-time adaptation to dynamic data. In experimental simulations on the Aminer dataset, the BDTMCDIncreUpdate method demonstrated outstanding performance on the accuracy (ACC) and normalized mutual information (NMI) metrics, achieving an accuracy of 90% and an NMI score of 0.85, outperforming existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios. The BDTMCDIncreUpdate method therefore offers an innovative solution for big data analysis, integrating distributed computing, incremental updates, and tensor-based multi-clustering. It improves efficiency and scalability in processing large-scale, high-dimensional datasets, and its effectiveness and accuracy have been validated experimentally. The method shows great potential in real-world applications where dynamic data growth is common and is of significant importance for advancing data analysis technology.
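As a rough illustration of the core-tensor/factor-matrix machinery, the minimal Python sketch below uses a truncated higher-order SVD and folds newly arrived slices back into the decomposition. It is an assumption-laden stand-in, not the paper's BDTMCDIncreUpdate algorithm, which updates the model without full recomputation.

```python
import numpy as np

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: returns a core tensor and one factor
    matrix per mode (a common stand-in for 'core tensor + factor matrices')."""
    factors = []
    for mode in range(tensor.ndim):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :ranks[mode]])
    core = tensor
    for mode, f in enumerate(factors):  # project each mode onto its factor
        core = np.moveaxis(
            np.tensordot(f.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def incremental_update(core, factors, new_slices, ranks):
    """Fold new data (slices along mode 0) into the model: reconstruct the
    old block from core and factors, append the new slices, re-decompose.
    Illustrative only -- the paper avoids this full recomputation."""
    block = core
    for mode, f in enumerate(factors):  # reconstruct the stored block
        block = np.moveaxis(
            np.tensordot(f, np.moveaxis(block, mode, 0), axes=1), 0, mode)
    return hosvd(np.concatenate([block, new_slices], axis=0), ranks)

rng = np.random.default_rng(0)
core, factors = hosvd(rng.normal(size=(30, 8, 8)), ranks=(5, 4, 4))
core, factors = incremental_update(core, factors,
                                   rng.normal(size=(6, 8, 8)), ranks=(5, 4, 4))
print(core.shape, [f.shape for f in factors])
```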
In this paper, we analyze a hybrid Heterogeneous Cellular Network (HCNet) framework that deploys millimeter wave (mmWave) small cells coexisting with traditional sub-6 GHz macro cells to achieve improved coverage and high data rates. We consider randomly deployed macro base stations throughout the network, whereas mmWave Small Base Stations (SBSs) are deployed in areas with high User Equipment (UE) density. Such user-centric deployment of mmWave SBSs inevitably induces correlation between UEs and SBSs. For a realistic scenario in which the UEs are distributed according to a Poisson cluster process and directional beamforming with line-of-sight and non-line-of-sight transmissions is adopted for mmWave communication, we use tools from stochastic geometry to develop an analytical framework for various performance metrics in the downlink hybrid HCNet under biased received-power association. For UE clustering we consider the Thomas cluster process and derive expressions for the association probability, coverage probability, area spectral efficiency, and energy efficiency. We also provide Monte Carlo simulation results to validate the accuracy of the derived expressions. Furthermore, we analyze the impact of mmWave operating frequency, antenna gain, small cell biasing, and BS density to obtain useful engineering insights into the performance of hybrid mmWave HCNets. Our results show that network performance is significantly improved by deploying mmWave SBSs instead of microwave BSs in hot spots.
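The Monte Carlo side of such an analysis can be sketched as follows: a typical UE is Gaussian-scattered around its serving SBS (the Thomas cluster geometry) and its SIR coverage is estimated empirically. All parameters are illustrative assumptions; biased association, blockage, and beamforming gains from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_prob(lam_sbs=1e-5, sigma=20.0, alpha=4.0, theta_db=0.0,
                  area=2000.0, trials=2000):
    """SIR coverage of a typical clustered UE under a Thomas cluster process:
    the UE is Gaussian-scattered around its serving SBS; all other SBSs
    interfere. Rayleigh fading, illustrative parameters only."""
    theta = 10 ** (theta_db / 10)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam_sbs * area * area)    # number of SBSs in the window
        if n == 0:
            continue
        sbs = rng.uniform(-area / 2, area / 2, size=(n, 2))
        ue = sbs[0] + rng.normal(0, sigma, 2)     # Thomas scattering around parent
        d = np.linalg.norm(sbs - ue, axis=1)
        h = rng.exponential(1.0, n)               # Rayleigh fading power gains
        signal = h[0] * d[0] ** (-alpha)
        interference = np.sum(h[1:] * d[1:] ** (-alpha))
        if interference == 0 or signal / interference > theta:
            covered += 1
    return covered / trials

print(coverage_prob())
```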
The generalized Smoluchowski equation with a reinforced source is analysed. The asymptotic expression for the cluster size distribution C_m(t) is obtained by seeking the steady-state and post-gel solutions of the generalized Smoluchowski equation with a reinforced source. The result can be verified against Friedlander's experiment.
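For orientation, a standard form of the source-reinforced coagulation equation is given below; the kernel K_{ij} and monomer source rate I are assumed notation, and the paper's generalization may differ. The steady-state solution mentioned above corresponds to setting dC_m/dt = 0.

```latex
% Smoluchowski coagulation equation with a monomer source of rate I
% (assumed standard notation; the paper treats a generalized kernel).
\frac{\mathrm{d}C_m}{\mathrm{d}t}
  = \frac{1}{2}\sum_{i+j=m} K_{ij}\, C_i C_j
  - C_m \sum_{j \ge 1} K_{mj}\, C_j
  + I\,\delta_{m,1}, \qquad m \ge 1 .
```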
Most existing applications of centroidal Voronoi tessellations (CVTs) do not take the length of the cluster boundaries into account. In this paper we propose a new model and algorithms to produce segmentations that minimize a total energy defined as the sum of the classic CVT energy and the weighted length of the cluster boundaries. To distinguish it from the classic CVT, we call it the Edge-Weighted CVT (EWCVT). The EWCVT concept is expected to provide a mathematical basis for CVT-related data classification tasks that require smooth cluster boundaries. The EWCVT method is easy to implement, fast to compute, and natural for any number of clusters.
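The combined energy can be written as below; this is a plausible rendering of "classic CVT energy plus weighted boundary length", with the density rho, weight lambda, and boundary measure as assumed notation rather than the paper's exact formulation.

```latex
% Assumed form of the EWCVT energy for generators z_k and clusters V_k:
% the classic CVT term plus a weighted boundary-length penalty.
E\bigl(\{z_k\},\{V_k\}\bigr)
  = \sum_{k=1}^{K} \int_{V_k} \rho(y)\,\lVert y - z_k \rVert^{2} \,\mathrm{d}y
  + \lambda \sum_{k=1}^{K} \lvert \partial V_k \rvert .
```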
Clusters greatly influence the thermophysical properties of near-critical gases. The cluster structures of supercritical fluids in general, and of carbon dioxide in particular, are important for the development of advanced supercritical fluid technologies and analytics. This paper extends to near-critical densities the previously developed methods for extracting cluster properties from the NIST online database of thermophysical fluid properties. This database implicitly contains knowledge of the properties of cluster fractions in real gases. The previously discovered linear chain clusters dominate at intermediate densities, and their properties can be extrapolated to high-density gases, opening the way to studying large 3D clusters in the near-critical zone. The potential energy density of a gas, with the chain clusters' contribution removed, reflects only the characteristics of the 3D clusters. A series expansion of this quantity in the monomer fraction density reveals the properties of n-particle 3D clusters. The paper demonstrates a discrete series of 3D cluster particle numbers and gives estimates of the bond energies of these clusters.
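The series-expansion step can be imitated numerically: fit the residual potential-energy density as a power series in the monomer fraction density and read off the coefficients of the n-particle terms. The data below are synthetic stand-ins; in the paper the values come from the NIST database.

```python
import numpy as np

# Synthetic stand-in for the residual potential-energy density after the
# chain-cluster contribution is removed; coefficients of rho_m**n then
# reflect n-particle 3D cluster fractions (illustrative only).
rng = np.random.default_rng(1)
rho_m = np.linspace(0.5, 8.0, 40)             # monomer fraction density
true = 0.02 * rho_m**3 + 0.004 * rho_m**5     # pretend 3- and 5-particle terms
u3 = true * (1 + 0.01 * rng.normal(size=rho_m.size))

# Least-squares fit of the power series sum_n a_n * rho_m**n
powers = np.arange(2, 9)
A = rho_m[:, None] ** powers[None, :]
coef, *_ = np.linalg.lstsq(A, u3, rcond=None)
for n, a in zip(powers, coef):
    print(f"n={n}: a_n={a:.4g}")
```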
The network switches in the data plane of Software Defined Networking (SDN) rely on an elementary process in which enormous numbers of packets, constituting large volumes of data, are classified into specific flows by matching them against a set of dynamic rules. This basic process accelerates data handling: instead of processing individual packets repeatedly, the corresponding actions are performed on whole flows of packets. In this paper, we first address the limitations of a typical packet classification algorithm, Tuple Space Search (TSS). Then, we present a set of scenarios for parallelizing it on different parallel processing platforms, including Graphics Processing Units (GPUs), clusters of Central Processing Units (CPUs), and hybrid clusters. Experimental results show that the hybrid cluster provides the best platform for parallelizing packet classification algorithms, achieving an average throughput of 4.2 million packets per second (Mpps). That is, the hybrid cluster produced by integrating Compute Unified Device Architecture (CUDA), the Message Passing Interface (MPI), and the OpenMP programming model could classify 0.24 million packets per second more than the GPU cluster scheme. Such a packet classifier satisfies the processing speed required in programmable network systems used to communicate big medical data.
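To make the baseline concrete, here is a minimal single-field TSS sketch in Python: rules are grouped by prefix length (the "tuples"), each backed by a hash table, and classification probes every tuple. Real classifiers match multiple header fields, but this probing loop is the part that gets distributed across GPU/CPU cluster nodes.

```python
from collections import defaultdict

class TupleSpaceSearch:
    """Single-field TSS sketch over 32-bit addresses: each distinct prefix
    length is one 'tuple' backed by a hash table of masked keys."""
    def __init__(self):
        self.tuples = defaultdict(dict)   # prefix_len -> {masked_key: rule}

    def insert(self, prefix, plen, rule):
        self.tuples[plen][prefix >> (32 - plen)] = rule

    def classify(self, addr):
        best = None
        for plen, table in self.tuples.items():   # probe every tuple
            rule = table.get(addr >> (32 - plen))
            if rule is not None and (best is None or plen > best[0]):
                best = (plen, rule)                # keep longest match
        return best[1] if best else None

tss = TupleSpaceSearch()
tss.insert(0xC0A80000, 16, "lan")      # 192.168.0.0/16
tss.insert(0xC0A80100, 24, "subnet")   # 192.168.1.0/24
print(tss.classify(0xC0A80105))        # -> "subnet" (longest match wins)
```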
We demonstrate real-time three-dimensional (3D) color video using a color electroholographic system with a cluster of multiple graphics processing units (multi-GPU) and three spatial light modulators (SLMs) corresponding respectively to the red, green, and blue (RGB) reconstructing lights. The multi-GPU cluster has a computer-generated hologram (CGH) display node containing a GPU for displaying the calculated CGHs on the SLMs, and four CGH calculation nodes using 12 GPUs. The GPUs in the CGH calculation nodes generate the CGHs corresponding to the RGB reconstructing lights of a 3D color video using pipeline processing. Real-time color electroholography was realized for a 3D color object comprising approximately 21,000 points per color.
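The per-GPU workload is the point-cloud CGH computation, which under the common Fresnel approximation looks like the sketch below (one color channel; pixel pitch, resolution, and binarization are illustrative assumptions, not the paper's exact kernel).

```python
import numpy as np

def cgh(points, wavelength, pitch=8e-6, nx=1024, ny=1024):
    """Point-cloud CGH via the Fresnel approximation, one color channel.
    'points' is an (N, 4) array of (x, y, z, amplitude); parameters are
    illustrative assumptions."""
    xs = (np.arange(nx) - nx / 2) * pitch
    ys = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((ny, nx))
    for x, y, z, a in points:   # superpose one zone plate per object point
        phase = np.pi / (wavelength * z) * ((X - x) ** 2 + (Y - y) ** 2)
        field += a * np.cos(phase)
    return field  # binarize (field > 0) before display on a binary SLM

pts = np.array([[0.0, 0.0, 0.5, 1.0], [1e-3, -1e-3, 0.5, 1.0]])
holo = cgh(pts, wavelength=532e-9)  # e.g. the green channel of the RGB set
```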
Systems containing multiple graphics-processing-unit (GPU) clusters are difficult to use for real-time electroholography with only a single spatial light modulator because the transfer of computer-generated hologram data between the GPUs becomes a bottleneck. To overcome this bottleneck, we propose a rapid GPU packing scheme that significantly reduces the volume of data that must be transferred. The proposed method uses a multi-GPU cluster system connected by a cost-effective gigabit Ethernet network. In tests, we achieved real-time electroholography of a three-dimensional (3D) video presenting a point-cloud 3D object made up of approximately 200,000 points.
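The payoff of packing can be illustrated on the CPU with NumPy: a binarized full-HD hologram shipped one byte per pixel versus eight pixels per byte is an 8x reduction in transfer volume. This is an analogy under the assumption of binary CGHs, not the paper's GPU-side scheme.

```python
import numpy as np

# Binarized 1920x1080 CGH: one byte per pixel vs. 8 pixels packed per byte.
holo = np.random.default_rng(0).integers(0, 2, (1080, 1920), dtype=np.uint8)
packed = np.packbits(holo)                 # 8x fewer bytes on the wire
restored = np.unpackbits(packed)[: holo.size].reshape(holo.shape)
assert np.array_equal(holo, restored)      # lossless round trip
print(holo.nbytes, "->", packed.nbytes)    # 2073600 -> 259200 bytes
```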
The calculation of computer-generated holograms is computationally very expensive, and the image quality deteriorates when reconstructing three-dimensional (3D) holographic video from a point-cloud model comprising a huge number of object points. To solve these problems, we implement a spatiotemporal division multiplexing method on a cluster system with 13 GPUs connected by a gigabit Ethernet network. A performance evaluation indicates that the proposed method can realize real-time holographic video of a 3D object comprising approximately 1,200,000 object points. These results demonstrate clear 3D holographic video at 32.7 frames per second reconstructed from a 3D object comprising 1,064,462 object points.
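The spatial-division half of the idea can be sketched by splitting the point cloud across workers and summing the partial fields; the sketch below reuses the cgh() kernel from the earlier block and substitutes CPU processes for the 13 GPU nodes. The time-division scheduling of video frames is omitted.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Assumes the cgh() function from the sketch above is defined in this module.
def partial_field(chunk):
    return cgh(chunk, wavelength=532e-9)

def distributed_cgh(points, n_nodes=13):
    """Split the point cloud across n_nodes workers, sum the partial
    hologram fields. Illustrative stand-in for the GPU cluster."""
    chunks = np.array_split(points, n_nodes)
    with ProcessPoolExecutor(max_workers=n_nodes) as pool:
        return sum(pool.map(partial_field, chunks))
```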
We demonstrate fast time-division color electroholography using a multiple-graphics-processing-unit (GPU) cluster system with a spatial light modulator and a controller that switches the color of the reconstructing light. The controller comprises a universal serial bus module that drives liquid crystal optical shutters. Using the controller, the computer-generated hologram (CGH) display node of the multiple-GPU cluster system synchronizes the display of the CGH with the color switching of the reconstructing light. Fast time-division color electroholography at 20 fps is realized for a three-dimensional object comprising 21,000 points per color when 13 GPUs are used in the multiple-GPU cluster system.
The National Institute of Standards and Technology (NIST) has identified natural language policies as the preferred expression of policy and has implicitly called for an automated translation of ABAC natural language access control policies (NLACPs) to a machine-readable form. To study this automation process, we take the hierarchical ABAC model as our reference model, since it better reflects the requirements of real-world organizations. This paper therefore focuses on two questions: how can we automatically infer the hierarchical structure of an ABAC model given NLACPs, and how can we extract and define the set of authorization attributes based on the resulting structure? To address these questions, we propose an approach built upon recent advances in natural language processing and machine learning. For such a solution, the lack of appropriate data often poses a bottleneck. We therefore decouple the primary contributions of this work into (1) developing a practical framework to extract the authorization attributes of a hierarchical ABAC system from natural language artifacts, and (2) generating a set of realistic synthetic NLACPs to evaluate the proposed framework. Our experimental results are promising: we achieved, on average, an F1-score of 0.96 when extracting the attribute values of subjects and 0.91 when extracting the attribute values of objects from natural language access control policies.
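The NLP substrate of such a pipeline can be approximated with a dependency parse that surfaces candidate subject and object attribute values from a policy sentence, as in the toy sketch below. The paper's framework layers ML classification on top; the spaCy model name is an assumption about the local environment.

```python
import spacy

# Toy stand-in for the attribute-extraction framework: pull candidate
# subject/object attribute values from an NLACP sentence via dependencies.
nlp = spacy.load("en_core_web_sm")  # assumed installed

def candidate_attributes(policy):
    doc = nlp(policy)
    subjects = [t.text for t in doc if t.dep_ in ("nsubj", "nsubjpass")]
    objects = [t.text for t in doc if t.dep_ in ("dobj", "pobj")]
    return {"subject_values": subjects, "object_values": objects}

print(candidate_attributes(
    "A nurse can read the medical records of her assigned patients."))
```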
In the analysis of overlaid wireless ad-hoc networks, the underlying node distributions are commonly assumed to be two independent homogeneous Poisson point processes. In this paper, using stochastic geometry tools, a new inhomogeneous overlaid wireless ad-hoc network model is studied and the outage probability is analyzed. By assuming that the primary (PR) network nodes are distributed as a Poisson point process (PPP) and the secondary (SR) network nodes as a Matérn cluster process, upper and lower bounds on the transmission capacities of the primary and secondary networks are presented. Simulation results show that the transmission capacities of both the PR and SR networks increase slightly due to the inhomogeneity of the SR network.
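A minimal Monte Carlo rendering of the setup: a typical PR link of fixed length experiences interference from a PPP of PR nodes plus a Matérn-cluster SR network, and outage is the event SIR < theta. Densities, cluster radius, and fading model are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pr_outage(lam_p=1e-4, lam_parent=2e-5, mean_pts=5, r_cl=30.0,
              d0=10.0, alpha=4.0, theta=1.0, area=1000.0, trials=2000):
    """Outage probability of a typical PR link of length d0 under
    interference from PPP PR nodes plus a Matern-cluster SR network
    (uniform-disk daughters). Illustrative parameters only."""
    outages = 0
    for _ in range(trials):
        n_pr = rng.poisson(lam_p * area ** 2)
        pts = [rng.uniform(-area / 2, area / 2, (n_pr, 2))]
        n_par = rng.poisson(lam_parent * area ** 2)
        for p in rng.uniform(-area / 2, area / 2, (n_par, 2)):
            k = rng.poisson(mean_pts)
            r = r_cl * np.sqrt(rng.uniform(size=k))   # uniform in a disk
            phi = rng.uniform(0, 2 * np.pi, k)
            pts.append(p + np.c_[r * np.cos(phi), r * np.sin(phi)])
        d = np.linalg.norm(np.vstack(pts), axis=1)
        d = d[d > 1e-3]                               # drop co-located points
        if d.size == 0:
            continue
        interference = np.sum(rng.exponential(1.0, d.size) * d ** (-alpha))
        signal = rng.exponential(1.0) * d0 ** (-alpha)  # Rayleigh fading
        outages += signal / interference < theta
    return outages / trials

print(pr_outage())
```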