# Reading Report: Attributed Network Embedding for Learning in a Dynamic Environment

I first wrote a LaTeX version, but formulas are hard to edit in Markdown, so I pasted screenshots of the formulas directly; most of them are inline. If you want the complete version, please contact me at [email protected]

# Introduction and Contribution

Network structure often evolves over time as links and nodes are added or deleted; real networks are dynamic rather than static, so learning embeddings of dynamic networks is both valuable and necessary. However, most well-known network embedding approaches are limited to plain, static networks. We therefore need an effective embedding representation that captures how both the network structure and the node attributes evolve.

In the paper, the authors propose DANE, a novel unsupervised framework for dynamic attributed network embedding that combines node attributes and network structure information in a dynamic environment. They are, to their knowledge, the first to tackle the following two challenges:
(1) the information in the network may be noisy and incomplete, which necessitates a robust consensus representation to learn its characteristics; (2) the embedding should adapt to change over time, i.e., it should support online updates.

# Main Algorithm

The algorithm contains two key steps:

1. An offline method that learns a consensus embedding by maximizing the correlation between the embeddings of node attributes and network structure.
2. An online method that uses matrix perturbation theory to update the embedding results efficiently.

## Offline Model

Network structure and node attributes live in two different representations. In this section, the authors learn a robust consensus embedding of the two representations, which also helps mitigate the data sparsity problem.

Now the problem is to seek a consensus embedding. The authors propose to maximize the correlation between the two views. They seek two projection vectors $p_A^{(t)}$ and $p_X^{(t)}$ such that the correlation between $Y_A^{(t)}$ and $Y_X^{(t)}$ is maximized after projection. After some derivation involving eigenvectors, the final consensus embedding is computed from the top eigenvectors of the resulting generalized eigenproblem (the formula is given as an image in the original report).
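Since the formula image is not reproduced here, the correlation-maximization step can be sketched as a standard CCA-style objective (my own notation, which may differ slightly from the paper's):

$$
\max_{p_A^{(t)},\, p_X^{(t)}} \; p_A^{(t)\top} Y_A^{(t)\top} Y_X^{(t)} p_X^{(t)}
\quad \text{s.t.} \quad
p_A^{(t)\top} Y_A^{(t)\top} Y_A^{(t)} p_A^{(t)} = 1,\;\;
p_X^{(t)\top} Y_X^{(t)\top} Y_X^{(t)} p_X^{(t)} = 1 .
$$

This objective is solved as a generalized eigenproblem, and the projections obtained from its top eigenvectors yield the consensus embedding.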

## Online Model

Real-world networks often evolve smoothly along the temporal dimension between two consecutive time steps, therefore the authors use $\Delta A$ and $\Delta X$ to denote the perturbations of the network structure and node attributes between consecutive time steps $t$ and $t+1$.

The offline model finds the top eigenvectors, i.e., those corresponding to the smallest eigenvalues of the generalized eigenproblems. The key idea behind the online model is therefore to update these top eigenvectors and eigenvalues efficiently. Because the derivation is long, I skip it here and simply present the pseudo code and the final formulas.
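As a quick numerical illustration of why first-order updates suffice for small perturbations, the following self-contained NumPy sketch (not the paper's code; it uses a plain symmetric eigenproblem rather than the paper's generalized one) compares the first-order eigenvalue update $\Delta\lambda_i \approx a_i^\top \Delta A\, a_i$ against exact recomputation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A symmetric matrix standing in for the network matrix at time step t.
M = rng.standard_normal((n, n))
A = M @ M.T  # symmetric positive semi-definite

# A small symmetric perturbation between time steps t and t+1.
P = 1e-4 * rng.standard_normal((n, n))
dA = (P + P.T) / 2

# Eigenpairs at time t (eigh returns eigenvalues in ascending order).
lam, V = np.linalg.eigh(A)

# First-order update of each eigenvalue: dlambda_i = v_i^T dA v_i.
dlam = np.array([V[:, i] @ dA @ V[:, i] for i in range(n)])
lam_pred = lam + dlam

# Exact eigenvalues at time t+1, for comparison.
lam_exact = np.linalg.eigvalsh(A + dA)

# The discrepancy is second order in ||dA||, hence tiny here.
err = np.max(np.abs(lam_pred - lam_exact))
print(err)
```

The error is on the order of $\|\Delta A\|^2$, which is why the online model can avoid a full eigendecomposition at every time step.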

The variation of the eigenvalue is as follows:
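Substituting for the omitted image, the standard first-order perturbation result for the generalized eigenproblem $L a_i = \lambda_i D a_i$ (normalized so that $a_i^\top D a_i = 1$; my notation, which may differ from the paper's) is:

$$
\Delta\lambda_i \approx a_i^\top \,\Delta L\, a_i \;-\; \lambda_i\, a_i^\top \,\Delta D\, a_i .
$$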

The perturbation of the eigenvector $a_i$ is as follows:
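Again substituting the standard matrix perturbation result for the omitted image (expanded in the basis of the other eigenvectors; in practice the sum is restricted to the top eigenvectors that are actually computed):

$$
\Delta a_i \approx \sum_{j \neq i} \frac{a_j^\top \,\Delta L\, a_i \;-\; \lambda_i\, a_j^\top \,\Delta D\, a_i}{\lambda_i - \lambda_j}\, a_j \;-\; \frac{1}{2}\left(a_i^\top \,\Delta D\, a_i\right) a_i ,
$$

where the last term comes from maintaining the normalization $a_i^\top D a_i = 1$ after the update.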

# Experiment

The authors use four datasets, BlogCatalog, Flickr, Epinions, and DBLP, for the experimental evaluation. The baseline methods are DeepWalk, LINE, DANE-N, DANE-A, CCA, LCMF, and LANE, applied to different tasks. The proposed DANE algorithm outperforms the other methods.

# My Idea

The offline model in this paper mostly focuses on finding the top eigenvectors to solve the problem, a technique I am not familiar with. Besides, the proposed method has two inherent constraints: (1) the node attributes must be related to the formation of the network; (2) the network links and node attributes must change only slightly between consecutive time steps.

This algorithm is well suited to real-time risk identification, since it can rapidly update vector representations of high-dimensional information.

# Summary

The authors first introduce the offline and online models of their algorithm, then analyze its computational complexity. The experimental part follows, with tasks including network clustering and node classification.