In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with the data dispersed across a multitude of mobile devices. Given this situation, and given that training deep learning models demands substantial computing power, distributed algorithms that support multi-party joint modeling have attracted wide attention. Distributed training relieves the heavy computational and communication burden that a centralized model places on a single machine. However, most current distributed algorithms operate in a master-slave mode, typically relying on a central server for coordination, which can create communication bottlenecks, data leakage, privacy violations, and other issues. To address these problems, a decentralized, fully distributed algorithm based on deep neural networks with random weights is proposed. The algorithm decomposes the original objective function into several subproblems under consensus constraints, combines decentralized average consensus (DAC) with the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and the experimental results demonstrate the effectiveness of the proposed algorithm.
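The abstract does not spell out the architecture, so the following is a minimal sketch of the DAC-plus-ADMM idea under stated assumptions: a single fixed random hidden layer (RVFL-style) and a least-squares output problem, with decentralized average consensus replacing the central averaging step of consensus ADMM. The names (`features`, `dac_average`), the ring mixing matrix `Mix`, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N nodes, each holding a local shard of (x, y) data.
N, n_per_node, d, hidden = 4, 50, 5, 20
W_rand = rng.normal(size=(d, hidden))   # shared random hidden weights (never trained)
b_rand = rng.normal(size=hidden)        # shared random biases

def features(X):
    """Random-weight hidden layer: only the output layer is learned."""
    return np.tanh(X @ W_rand + b_rand)

X_loc = [rng.normal(size=(n_per_node, d)) for _ in range(N)]
w_true = rng.normal(size=hidden)
y_loc = [features(X) @ w_true + 0.01 * rng.normal(size=n_per_node) for X in X_loc]

# Doubly stochastic mixing matrix for a ring topology (uniform neighbor weights).
Mix = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, (i + 1) % N):
        Mix[i, j % N] = 1.0 / 3.0

def dac_average(values, rounds=30):
    """Decentralized average consensus: repeated neighbor mixing drives
    every node's copy toward the network-wide mean, with no server."""
    V = np.stack(values)  # one row per node
    for _ in range(rounds):
        V = Mix @ V
    return [V[i] for i in range(N)]

# Consensus ADMM:  min sum_i ||H_i w_i - y_i||^2   s.t.  w_i = z for all i.
rho = 1.0
H_loc = [features(X) for X in X_loc]
w = [np.zeros(hidden) for _ in range(N)]
u = [np.zeros(hidden) for _ in range(N)]
z = [np.zeros(hidden) for _ in range(N)]  # each node keeps its own copy of z

for it in range(50):
    # Local primal update (closed form for the least-squares subproblem).
    for i in range(N):
        A = H_loc[i].T @ H_loc[i] + rho * np.eye(hidden)
        b = H_loc[i].T @ y_loc[i] + rho * (z[i] - u[i])
        w[i] = np.linalg.solve(A, b)
    # Global variable via DAC instead of a central server.
    z = dac_average([w[i] + u[i] for i in range(N)])
    # Local dual update.
    for i in range(N):
        u[i] += w[i] - z[i]

print("max deviation from true weights:",
      max(np.linalg.norm(w[i] - w_true) for i in range(N)))
```

The only inter-node traffic in this sketch is the neighbor mixing inside `dac_average`; that step is what removes the central coordinator that master-slave schemes require.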
For nonconvex, nonsmooth minimization problems whose objective function contains a coupling term H(x, y), a linearized inertial alternating direction method of multipliers (LIADMM) is proposed. To make the subproblems easier to solve, the coupling function H(x, y) in the objective is linearized, and an inertial term is introduced into the x-subproblem. Under suitable assumptions, the global convergence of the algorithm is established; in addition, by introducing an auxiliary function satisfying the Kurdyka-Lojasiewicz inequality, the strong convergence of the algorithm is verified. Two numerical experiments show that the algorithm with the inertial term converges better than the one without it.
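To fix ideas, the display below sketches what one step of a linearized inertial ADMM typically looks like for a problem of the form min f(x) + g(y) + H(x, y) subject to Ax + By = b. The constraint structure, the extrapolation weight α_k, and the proximal parameters μ, ν are illustrative assumptions; they need not match the paper's exact formulation.

```latex
% Schematic LIADMM iteration, with augmented Lagrangian
%   L_beta(x,y,lambda) = f(x) + g(y) + H(x,y)
%                        + <lambda, Ax+By-b> + (beta/2)||Ax+By-b||^2.
\begin{align*}
  \hat{x}^{k} &= x^{k} + \alpha_k\,(x^{k} - x^{k-1})
      && \text{(inertial extrapolation)} \\
  x^{k+1} &\in \operatorname*{arg\,min}_{x}\;
      f(x) + \langle \nabla_x H(\hat{x}^{k}, y^{k}),\, x \rangle
      + \langle \lambda^{k}, Ax \rangle
      + \tfrac{\beta}{2}\,\|Ax + By^{k} - b\|^{2}
      + \tfrac{\mu}{2}\,\|x - \hat{x}^{k}\|^{2}, \\
  y^{k+1} &\in \operatorname*{arg\,min}_{y}\;
      g(y) + \langle \nabla_y H(x^{k+1}, y^{k}),\, y \rangle
      + \langle \lambda^{k}, By \rangle
      + \tfrac{\beta}{2}\,\|Ax^{k+1} + By - b\|^{2}
      + \tfrac{\nu}{2}\,\|y - y^{k}\|^{2}, \\
  \lambda^{k+1} &= \lambda^{k} + \beta\,(Ax^{k+1} + By^{k+1} - b).
\end{align*}
```

Replacing H by its linearization at the extrapolated point is what makes each subproblem a proximal step for f or g alone, which is the practical advantage the abstract refers to.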
This article studies two-direction refinable functions and two-direction wavelets in the setting of R^s, s > 1. We give a sufficient condition for a two-direction refinable function to belong to L^2(R^s). Then two theorems are given for constructing biorthogonal (orthogonal) two-direction refinable functions in L^2(R^s) and their biorthogonal (orthogonal) two-direction wavelets, respectively. From the constructed biorthogonal (orthogonal) two-direction wavelets, symmetric biorthogonal (orthogonal) multiwavelets in L^2(R^s) can be obtained easily. Applying the projection method to biorthogonal (orthogonal) two-direction wavelets in L^2(R^s), we obtain dual (tight) two-direction wavelet frames in L^2(R^m), where m ≤ s. From the projected dual (tight) two-direction wavelet frames in L^2(R^m), symmetric dual (tight) frames in L^2(R^m) can likewise be obtained. Finally, an example is given to illustrate the theoretical results.
Funding: supported by the National Natural Science Foundation of China (11126343, 11071152), the Guangxi Natural Science Foundation (2013GXNSFBA019010), and the Natural Science Foundation of Guangdong Province (10151503101000025, S2011010004511).
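For orientation, what distinguishes the two-direction setting is that the refinable function is built from integer shifts of both φ(M·) and its reflection. The display below is the standard form of the refinement equation from the two-direction wavelet literature, with M an s × s dilation matrix; the paper's normalization of the masks may differ.

```latex
% Two-direction refinement equation on R^s, M an s-by-s dilation matrix:
\[
  \phi(x) \;=\; \sum_{k \in \mathbb{Z}^{s}} p_{k}^{+}\,\phi(Mx - k)
        \;+\; \sum_{k \in \mathbb{Z}^{s}} p_{k}^{-}\,\phi(k - Mx),
  \qquad x \in \mathbb{R}^{s},
\]
% {p_k^+}: positive-direction mask, {p_k^-}: negative-direction mask.
% Setting p_k^- \equiv 0 recovers the classical one-direction
% refinement equation of multiresolution analysis.
```

The reflected term φ(k − Mx) is what allows symmetric multiwavelets to be derived directly from the two-direction construction, as the abstract notes.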