Abstract: In this article, we study Kähler metrics on a certain line bundle over some compact Kähler manifolds to find complete Kähler metrics with positive holomorphic sectional (or bisectional) curvature. We thereby apply a cohomogeneity-one strategy to a famous conjecture of Yau.
Funding: Supported by the National Natural Science Foundation of China (11771020, 12171005).
Abstract: In this paper, we study a class of Finsler metrics defined by a vector field on a gradient Ricci soliton. We obtain a necessary and sufficient condition for these Finsler metrics on a compact gradient Ricci soliton to be of isotropic S-curvature by establishing a new integral inequality. We then determine the Ricci curvature of navigation Finsler metrics of isotropic S-curvature on a gradient Ricci soliton, generalizing a result previously known only when the soliton is of Einstein type. As an application, we obtain the Ricci curvature of all navigation Finsler metrics of isotropic S-curvature on the Gaussian shrinking soliton.
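As background (a standard construction from the navigation literature, not quoted from the paper): navigation Finsler metrics arise from Zermelo navigation data, a Riemannian metric h and a vector field W with h(W, W) < 1, which yield the Randers metric

```latex
F(x,y) \;=\; \frac{\sqrt{\lambda\,h(y,y) + h(W,y)^{2}} \;-\; h(W,y)}{\lambda},
\qquad \lambda \;=\; 1 - h(W,W),
```

and, in the standard convention, F has isotropic S-curvature when S = (n+1) c(x) F for some scalar function c(x) on the manifold.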
Abstract: In a very recent article of mine I corrected the traditional derivation of the Schwarzschild metric, thus arriving at a correct Schwarzschild metric different from the traditional one. In this article, starting from this corrected Schwarzschild metric, I also propose corrections to the traditional Reissner-Nordström, Kerr and Kerr-Newman metrics, on the basis that these metrics should reduce to the corrected Schwarzschild metric in the borderline case it describes. In this way we see that, like the corrected Schwarzschild metric, the corrected Reissner-Nordström, Kerr and Kerr-Newman metrics present no event horizon (and therefore no black hole), unlike their traditional counterparts.
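For reference (the abstract does not reproduce it), the traditional Schwarzschild line element whose derivation the author disputes is

```latex
ds^{2} \;=\; -\Bigl(1-\frac{2GM}{c^{2}r}\Bigr)c^{2}\,dt^{2}
\;+\;\Bigl(1-\frac{2GM}{c^{2}r}\Bigr)^{-1}dr^{2}
\;+\;r^{2}\bigl(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\bigr),
```

with its event horizon at the Schwarzschild radius r = 2GM/c²; the author's corrected metric, which removes this horizon, is given in the cited article rather than reproduced here.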
Abstract: In this paper, we prove that for some completions of certain fiber bundles there is a Maxwell-Einstein metric conformally related to any given Kähler class.
Funding: This work is supported by the Russian Science Foundation (Grant No. 21-78-10102).
Abstract: Purpose: This study examines the effects of using publication-based metrics for the initial screening in the application process for a project leader. The key questions are whether formal policy affects the allocation of funds to researchers with a better publication record and how the previous academic performance of principal investigators is related to future project results. Design/methodology/approach: We compared two competitions, before and after the policy raised the publication threshold for principal investigators. We analyzed 9,167 papers published by 332 winners in physics and the social sciences and humanities (SSH), and 11,253 publications resulting from each funded project. Findings: We found that among physicists, even in the first period, grants tended to be allocated to prolific authors publishing in high-quality journals. In contrast, the SSH grantees had been less prolific in publishing internationally in both periods; however, in the second period, the selection of grant recipients yielded better results in awarding grants to more productive authors in terms of the quantity and quality of publications. There was no evidence that this better selection resulted in better publication records during grant realization. Originality: This study contributes to the discussion of formal policies that rely on metrics for the evaluation of grant proposals. The Russian case shows that such a policy may have a profound effect on changing the supply side of applicants, especially in disciplines that are less suitable for metric-based evaluations. In spite of the criticism directed at metrics, they might be a useful additional instrument in academic systems where professional expertise is corrupted and prevents the allocation of funds to prolific researchers.
Abstract: A measure of the "goodness" or efficiency of a test suite is used to determine its proficiency, and the adequacy of the test suite is determined through mutation analysis. In mutation analysis, several Finite State Machine (FSM) mutants are produced by injecting errors against hypotheses; these mutants serve as test subjects for the test suite (TS). The effectiveness of the test suite is proportional to the number of eliminated mutants, and the most effective test suite is the one that removes the largest number of mutants in the optimal time. Determining the fault detection ratio of a system is hard, because the system's potential flaws cannot be identified precisely. In mutation testing, the Fault Detection Ratio (FDR) metric is currently used to express the adequacy of a test suite, but this metric has some issues. If two test suites have the same fault detection rate, the smaller of the two should be preferred; the test case (TC) is affected by the same issue, where the smaller of two test cases with identical performance should be considered superior. Another difficulty involves time: many tools claim a perfect mutant capture time, which makes their reported performance problematic. Our study developed three metrics to address these issues: FDR/|TS|, FDR/|TC|, and FDR/|Time|. In this context, the most widely used test generation tools were examined and evaluated with the developed metrics. Thanks to these metrics, the research contributes to eliminating the problems of performance measurement by integrating the missing parameters into the system.
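A minimal sketch of how the three proposed ratios could be computed; the field names and the choice of size units are hypothetical, and the paper's exact normalizations may differ:

```python
from dataclasses import dataclass

@dataclass
class MutationRun:
    killed_mutants: int      # mutants eliminated by the test suite
    total_mutants: int       # mutants generated by error injection
    suite_size: int          # |TS|: number of test cases in the suite
    case_length: int         # |TC|: total size of the test cases
    elapsed_seconds: float   # |Time|: wall-clock time of the mutation run

def fdr(run: MutationRun) -> float:
    """Classical Fault Detection Ratio: killed mutants over total mutants."""
    return run.killed_mutants / run.total_mutants

def normalized_metrics(run: MutationRun) -> dict:
    """The three size- and time-normalized variants proposed in the paper:
    equal FDR is broken in favor of the smaller, shorter, faster suite."""
    r = fdr(run)
    return {
        "FDR/|TS|": r / run.suite_size,
        "FDR/|TC|": r / run.case_length,
        "FDR/|Time|": r / run.elapsed_seconds,
    }

# Two suites with identical FDR: the smaller, faster one now scores higher.
a = MutationRun(90, 100, suite_size=40, case_length=400, elapsed_seconds=12.0)
b = MutationRun(90, 100, suite_size=25, case_length=250, elapsed_seconds=8.0)
print(normalized_metrics(a))
print(normalized_metrics(b))
```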
Abstract: Meteorological droughts occur when there is a deficiency in rainfall, i.e. rainfall availability falls below some accepted normal value. Hence, the greater challenge is to obtain suitable methods for assessing drought occurrence, its onset or initiation, and its termination. Thus, an attempt was made in this paper to evaluate the performance of the Standardised Precipitation Index (SPI) and the Standardised Precipitation Anomaly Index (SPAI) in characterising drought in Northern Nigeria, for purposes of comparison and eventual adoption of a probable candidate index for the development of an Early Warning System. The findings indicated that although the annual timescale may be long, it can be employed to obtain information on the temporal evolution of drought, especially regional behaviour. However, a monthly timescale can be more appropriate if the emphasis is on evaluating the effects of drought in situations relating to water supply, agriculture and groundwater abstraction. The SPAI can be employed for periodic rainfall time series, though it accentuates drought signatures and may not necessarily dampen high fluctuations, given the implications of high climatic variability and the stochastic, state-transition nature of drought phenomena. On the other hand, the temporal evolutions of SPI and SPAI were not coherent at different temporal accumulations, with differences in fluctuations. Nevertheless, despite the differences between the SPI and SPAI, at some timescales, for instance the 6-month accumulation, both the spatial and temporal distributions of drought characteristics were broadly consistent. In view of the observed shortcomings of both indices, especially the SPI, the Standardised Nonstationary Precipitation Index (SnsPI) should be looked into, and other indices that take the implications of global warming into consideration by incorporating potential evapotranspiration may be deemed more suitable for drought studies in Northern Nigeria.
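The SPI itself follows a standard recipe: accumulate precipitation over a timescale, fit a gamma distribution, and map to standard normal quantiles. A minimal sketch under simplifying assumptions (a single gamma fit for all months, no zero-rainfall correction; operational SPI fits one distribution per calendar month):

```python
import numpy as np
from scipy import stats

def spi(precip: np.ndarray, scale: int = 6) -> np.ndarray:
    """Standardised Precipitation Index at a given accumulation scale (months)."""
    # Rolling sum over `scale` months (the incomplete leading window is dropped).
    acc = np.convolve(precip, np.ones(scale), mode="valid")
    # Fit a two-parameter gamma (location fixed at 0) to the accumulations.
    a, loc, b = stats.gamma.fit(acc, floc=0)
    # Gamma CDF -> standard normal quantile = SPI value.
    cdf = stats.gamma.cdf(acc, a, loc=loc, scale=b)
    return stats.norm.ppf(cdf)

# Example with synthetic monthly rainfall (mm) over 30 years:
rng = np.random.default_rng(0)
rain = rng.gamma(shape=2.0, scale=40.0, size=360)
print(spi(rain, scale=6)[:12])  # SPI-6; values below -1 flag moderate drought
```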
Abstract: In a competitive digital age where data volumes are increasing with time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on that knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remains a major challenge for classification algorithms due to its high computational cost and storage requirements. The 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), the publicly available data source for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among those available in DM and ML. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model: its higher score of HMM(m, r) = 0.47 indicates the model that best encompasses the user's requirements overall, unlike classical metrics that use a single criterion to select the most appropriate model. The ANNs, on the other hand, were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class balance of the dataset (unbalanced data) have a significant impact on the efficiency of the model, especially on its computational cost, and can hamper the interpretability of the model's parameters. The efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) developed with domain experts, since understanding of the business domain has a significant impact.
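The abstract reports the hybrid score HMM(m, r) without defining it. One plausible shape for such a multi-criteria score, shown purely as an illustration (the criteria, weights, and aggregation rule are assumptions, not the paper's definition):

```python
def hybrid_score(criteria: dict, weights: dict, higher_is_better: dict) -> float:
    """Weighted aggregate of normalized criteria in [0, 1]; cost-type criteria
    (e.g. training time) are inverted so that higher always means better."""
    total = 0.0
    for name, value in criteria.items():
        v = value if higher_is_better[name] else 1.0 - value
        total += weights[name] * v
    return total / sum(weights.values())

# Illustrative numbers only: a decision tree vs. an ANN on four criteria,
# each already normalized to [0, 1].
crit_tree = {"accuracy": 0.82, "f1": 0.78, "train_time": 0.10, "interpretability": 0.90}
crit_ann  = {"accuracy": 0.84, "f1": 0.80, "train_time": 0.95, "interpretability": 0.20}
w   = {"accuracy": 3, "f1": 3, "train_time": 2, "interpretability": 2}
hib = {"accuracy": True, "f1": True, "train_time": False, "interpretability": True}
print(round(hybrid_score(crit_tree, w, hib), 2))  # tree wins on the blended score
print(round(hybrid_score(crit_ann, w, hib), 2))
```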
Abstract: Evaluating complex information systems necessitates deep contextual knowledge of technology, user needs, and quality. The challenges of quality evaluation increase with a system's complexity, especially when it offers multiple services supported by varied technological modules. Existing standards for software quality, such as the ISO25000 series, provide a broad framework for evaluation; this broadness eases initial implementation but often lacks the specificity to cater to individual system modules. This paper maps 48 data metrics and 175 software metrics onto specific system modules while aligning them with ISO standard quality traits. Using the ISO25000 series as a foundation, especially ISO25010 and ISO25012, this research seeks to augment the applicability of these standards to multi-faceted systems, exemplified by five distinct software modules prevalent in modern information ecosystems.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 71173187) and the Jiangsu Key Laboratory Fund.
Abstract: Purpose: To comprehensively evaluate the overall performance of a group or an individual in both bibliometrics and patentometrics. Design/methodology/approach: Trace metrics were applied to the top 30 universities in the 2014 Academic Ranking of World Universities (ARWU) in computer sciences, the top 30 ESI highly cited papers in the computer sciences field in 2014, as well as the top 30 assignees and the top 30 most cited patents in the National Bureau of Economic Research (NBER) computer hardware and software category. Findings: We found that, by applying trace metrics, the research or marketing impact efficiency, at both group and individual levels, was clearly observed. Furthermore, trace metrics were more sensitive to different publication-citation distributions than the average citation and h-index were. Research limitations: Trace metrics count publications with zero citations as negative contributions, so one should clarify how to evaluate a zero-citation paper or patent before applying them. Practical implications: Decision makers could regularly examine the performance of their university/company by applying trace metrics and adjust their policies accordingly. Originality/value: Trace metrics can be applied both in bibliometrics and patentometrics and provide a comprehensive view. Moreover, the high sensitivity and unique impact-efficiency view provided by trace metrics can help decision makers examine and adjust their policies.
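The trace metrics themselves are matrix-based indicators not reproduced in the abstract, but the two baselines they are compared against are easy to state. A minimal sketch showing why distribution-insensitive baselines can fail to separate two groups:

```python
def h_index(citations: list) -> int:
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def average_citations(citations: list) -> float:
    return sum(citations) / len(citations) if citations else 0.0

# Two groups with the same h-index (3) and the same mean (3.75) but very
# different distributions -- group_a includes zero-citation papers, which
# trace metrics count as negative contributions.
group_a = [15, 7, 5, 1, 1, 1, 0, 0]
group_b = [5, 5, 5, 3, 3, 3, 3, 3]
print(h_index(group_a), average_citations(group_a))
print(h_index(group_b), average_citations(group_b))
```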
Funding: We deeply acknowledge Taif University for supporting this research through Taif University Researchers Supporting Project number TURSP-2020/231, Taif University, Taif, Saudi Arabia.
Abstract: Component-based software engineering is concerned with the development of software that can satisfy customer prerequisites through reuse or independent development. Coupling and cohesion measurements are primarily used to analyse software design quality, increase reliability and reduce system complexity; the complexity measurements of cohesion and coupling are used to analyze the relationships between component modules. This paper proposes a component selection framework based on the Hexa-oval optimization algorithm for selecting suitable components from a repository. It measures the interface density of coupling and cohesion modules in a modular software system. The cohesion measurement takes two parameters, low cohesion and high cohesion, for analyzing complexity, while coupling is measured between a component's inside parameters and outside parameters. Finally, the measured coupling and cohesion values are averaged over the component parameters. The paper measures the complexity of direct and indirect interaction among components, and the proposed algorithm selects the optimal component from the repository. Better results are observed for high cohesion and low coupling in component-based software engineering.
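The measurement-and-averaging step can be sketched as follows; the interface-density formulas and the averaging rule here are illustrative assumptions, since the paper's exact definitions are not given in the abstract:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    internal_links: int   # realized interactions among the component's own modules
    internal_pairs: int   # possible module pairs inside the component
    external_links: int   # realized interactions with other components' interfaces
    external_pairs: int   # possible cross-component interface pairs

def cohesion(c: Component) -> float:
    """Interface-density style cohesion: fraction of realized internal links."""
    return c.internal_links / c.internal_pairs if c.internal_pairs else 0.0

def coupling(c: Component) -> float:
    """Fraction of realized links to the outside; lower is better."""
    return c.external_links / c.external_pairs if c.external_pairs else 0.0

def selection_score(c: Component) -> float:
    """Average of the two measured values, rewarding high cohesion and low coupling."""
    return (cohesion(c) + (1.0 - coupling(c))) / 2.0

repo = [Component("A", 9, 10, 2, 20), Component("B", 4, 10, 12, 20)]
best = max(repo, key=selection_score)
print(best.name, round(selection_score(best), 2))  # "A": cohesive, loosely coupled
```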
Funding: Supported by the National Natural Science Foundation of China (12131012, 12001007, 11821101), the Beijing Natural Science Foundation (1222003, Z180004) and the Natural Science Foundation of Anhui Province (1908085QA03).
Abstract: Letting F be a homogeneous (α₁, α₂) metric on the reductive homogeneous manifold G/H, we first characterize the natural reductiveness of F as a local f-product between naturally reductive Riemannian metrics. Second, we prove the equivalence among several properties of F concerning its mean Berwald curvature and S-curvature. Finally, we find an explicit flag curvature formula for G/H when F is naturally reductive.
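As background on the notation (a general definition from the homogeneous Finsler geometry literature, stated as an assumption rather than quoted from the paper): an (α₁, α₂) metric is built from a Riemannian metric α whose tangent spaces split α-orthogonally into two distributions; writing y = y₁ + y₂ accordingly and letting αᵢ be the norm of each component, the metric takes the form

```latex
F(x,y) \;=\; f\bigl(\alpha_{1}(x,y_{1}),\,\alpha_{2}(x,y_{2})\bigr),
```

where f is smooth away from the origin and positively homogeneous of degree one, so that F(x, λy) = λF(x, y) for all λ > 0.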
Funding: The National Natural Science Foundation of China (No. 60425206, 60633010) and the High Technology Research and Development Program of Jiangsu Province (No. BG2005032).
Abstract: This paper suggests that a single class, rather than its methods, should be used as the slice scope when computing class cohesion. First, for a given attribute, the statements in all methods that last define the attribute are computed. Then, the forward and backward data slices for this attribute are generated using the class as the slice scope and are combined into the corresponding class data slice. Finally, class cohesion is computed from all the class data slices for the attributes. Compared to traditional cohesion metrics that use methods as the slice scope, the proposed metrics, which use a single class as the slice scope, take the possible interactions between the methods into account. The experimental results show that class cohesion can be measured more accurately when the class is used as the slice scope.
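The abstract does not give the final cohesion formula; a common slice-based style is intersection-over-union of the slices, sketched here as an assumption:

```python
def class_cohesion(class_slices: dict) -> float:
    """Cohesion as slice overlap: statements common to every class data slice
    divided by the statements in their union (intersection-over-union)."""
    slices = list(class_slices.values())
    if not slices:
        return 0.0
    common = set.intersection(*slices)
    union = set.union(*slices)
    return len(common) / len(union) if union else 0.0

# Class data slices keyed by attribute; each value is the set of statement
# numbers touched by the combined forward/backward slices, scoped to the class.
slices = {
    "balance": {1, 2, 5, 7, 9},
    "rate":    {2, 5, 7, 8},
    "owner":   {2, 5, 11},
}
print(round(class_cohesion(slices), 2))  # shared core {2, 5} over 7 statements
```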
Abstract: Modified theories of gravity include spin dependence in General Relativity to account for additional sources of gravity, instead of the dark matter/energy approach. The spin-spin interaction is already included in the effective nuclear force potential, and theoretical considerations and experimental evidence hint at the hypothesis that gravity originates from such an interaction, under an averaging process over spin directions. This invites continuing the line of theory initiated by Einstein and Cartan, based on tetrads and on spin effects modeled by connections with torsion. As a first step in this direction, the article considers a new modified Coulomb/Newton law accounting for the spin-spin interaction. The physical potential is geometrized through specific affine connections and specific semi-Riemannian metrics canonically associated to it, acting on a manifold or at the level of its tangent bundle. Freely falling particles in these "toy Universes" are determined, showing interesting behavior and unexpected patterns.