Abstract
Latent space models play a crucial role in network analysis, and accurate
estimation of latent variables is essential for downstream tasks such as link
prediction. However, the large number of parameters to be estimated presents a
challenge, especially when the latent space dimension is not exceptionally
small. In this paper, we propose a transfer learning method that leverages
information from networks with latent variables similar to those in the target
network, thereby improving the estimation accuracy for the target. Given
transferable source networks, we introduce a two-stage transfer learning
algorithm that accommodates differences in node numbers between source and
target networks. In each stage, we derive sufficient identification conditions
and design tailored projected gradient descent algorithms for estimation.
Theoretical properties of the resulting estimators are established. When the
transferable networks are unknown, a detection algorithm is introduced to
identify suitable source networks. Simulation studies and analyses of two real
datasets demonstrate the effectiveness of the proposed methods.
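To make the estimation step concrete, the sketch below shows a generic projected gradient descent update for fitting latent positions in an inner-product latent space model. The logistic link, squared Frobenius-ball projection, step size, and loss are illustrative assumptions for a minimal sketch; they are not the paper's exact two-stage algorithm or identification conditions.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def projected_gradient_descent(A, d, step=0.05, radius=10.0, n_iter=500, seed=0):
    """Estimate latent positions Z (n x d) for a model P_ij = sigmoid(<z_i, z_j>)
    from an adjacency matrix A by projected gradient descent.

    Illustrative sketch only: the link function, projection set, and step size
    are assumptions, not the specification used in the paper.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Z = 0.1 * rng.standard_normal((n, d))  # random initialization

    for _ in range(n_iter):
        P = sigmoid(Z @ Z.T)          # current edge probabilities
        grad = 2.0 * (P - A) @ Z      # gradient of the logistic loss in Z (symmetric Theta = Z Z^T)
        Z = Z - step * grad

        # Project back onto a Frobenius-norm ball to keep the iterates bounded.
        norm = np.linalg.norm(Z, "fro")
        if norm > radius:
            Z = Z * (radius / norm)

    return Z


if __name__ == "__main__":
    # Tiny usage example on a synthetic network (hypothetical data, for illustration).
    rng = np.random.default_rng(1)
    n, d = 50, 2
    Z_true = rng.standard_normal((n, d))
    P_true = sigmoid(Z_true @ Z_true.T)
    A = (rng.uniform(size=(n, n)) < P_true).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                       # symmetric, no self-loops

    Z_hat = projected_gradient_descent(A, d)
    print("Estimated latent positions shape:", Z_hat.shape)
```

In this sketch, each source or target network would be fitted with an update of this form; the paper's method additionally shares information across the source and target fits and handles differing node counts, which is omitted here.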