Basics of graph neural networks, and two more advanced algorithms: DeepWalk and GraphSage

Graph neural networks (GNNs) have recently been gaining popularity in a variety of fields, including social networks, knowledge graphs, recommender systems, and even the life sciences. Thanks to their strong ability to model the dependencies between nodes in a graph, GNNs have enabled breakthroughs in research on graph analysis. This article introduces the basics of graph neural networks, along with two more advanced algorithms: DeepWalk and GraphSage.

Graph

Before discussing GNNs, let us first look at what a graph is. In computer science, a graph is a data structure consisting of two components: vertices and edges. A graph G can be described by the set of vertices V and the set of edges E it contains. Edges can be either directed or undirected, depending on whether there are directional dependencies between vertices. Vertices are also commonly called nodes; in this article the two terms are interchangeable.
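As a concrete illustration, here is a minimal Python sketch (not from the article) that stores a small, hypothetical directed graph as an adjacency list; the node names are arbitrary.

    # A directed graph G = (V, E) as an adjacency list (plain dict).
    graph = {
        "A": ["B", "C"],   # edges A->B and A->C
        "B": ["C"],        # edge  B->C
        "C": ["D"],        # edge  C->D
        "D": [],           # D has no outgoing edges
    }

    vertices = list(graph)                              # the set V
    edges = [(u, v) for u in graph for v in graph[u]]   # the set E
    print(edges)   # [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]

For an undirected graph, each edge would simply be stored in the adjacency lists of both of its endpoints.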
Graph Neural Networks

A graph neural network is a neural network that operates directly on the graph structure. A typical application of GNNs is node classification: essentially, every node in the graph is associated with a label, and we want to predict the labels of nodes that have no ground truth. This section describes the algorithm of "The graph neural network model" (Scarselli et al., 2009) [1], the first paper to propose GNNs, which is therefore often regarded as the original GNN.

In the node classification setting, each node v is characterized by its feature x_v and associated with a ground-truth label t_v. Given a partially labeled graph G, the goal is to use these labeled nodes to predict the labels of the unlabeled ones. The model learns, for each node, a d-dimensional vector (state) h_v that contains the information of its neighborhood:

    h_v = f(x_v, x_{co[v]}, h_{ne[v]}, x_{ne[v]})

where x_{co[v]} denotes the features of the edges connected to v, h_{ne[v]} denotes the embeddings of the neighbors of v, and x_{ne[v]} denotes the features of the neighbors of v. The function f is the transition function that projects these inputs onto the d-dimensional space. Since we are seeking a unique solution for h_v, we can apply the Banach fixed-point theorem and rewrite the equation above as an iterative update process:

    H^{t+1} = F(H^t, X)

where H and X denote the concatenation of all the h and all the x, respectively. The output of the GNN is computed by passing the state h_v as well as the feature x_v to an output function g:

    o_v = g(h_v, x_v)

Both f and g here can be interpreted as fully connected feed-forward neural networks. The L1 loss can be straightforwardly formulated as

    \text{loss} = \sum_{i=1}^{p} (t_i - o_i)

where p is the number of supervised nodes, and it can be optimized via gradient descent. However, the original GNN has three main limitations:

1. It is inefficient to update the hidden states of nodes iteratively to reach the fixed point. If we relax the "fixed point" assumption, we can instead leverage a multi-layer perceptron to learn a more stable representation and remove the iterative update process. This is because, in the original paper, different iterations use the same parameters of the transition function f, whereas different parameters in different layers of an MLP allow hierarchical feature extraction.

2. It cannot process edge information (for example, different edges in a knowledge graph may indicate different relationships between nodes).

3. The fixed point can discourage the diversification of node distributions, and may therefore not be suitable for learning to represent nodes.

Several variants of the GNN have been proposed to address these problems; they are, however, not the focus of this article. A minimal sketch of the iterative update follows.
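To make this concrete, below is a minimal NumPy sketch of the fixed-point iteration, assuming a toy graph, random node features, and a simple sum-of-neighbors transition function f kept contractive by small weights; the graph, sizes, and names are illustrative, not from the paper.

    import numpy as np

    # Toy undirected graph as an adjacency list (hypothetical example).
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    n, d = len(neighbors), 4                # number of nodes, state size

    rng = np.random.default_rng(0)
    X = rng.normal(size=(n, d))             # node features x_v
    W = 0.1 * rng.normal(size=(2 * d, d))   # small weights keep f a contraction
    H = np.zeros((n, d))                    # states h_v, iterated to a fixed point

    def f(H):
        """Transition: h_v <- tanh([x_v ; sum of neighbor states] W)."""
        agg = np.stack([H[neighbors[v]].sum(axis=0) for v in range(n)])
        return np.tanh(np.concatenate([X, agg], axis=1) @ W)

    # Iterate H^{t+1} = F(H^t, X) until convergence (Banach fixed point).
    for t in range(100):
        H_new = f(H)
        if np.abs(H_new - H).max() < 1e-6:
            break
        H = H_new
    print("converged after", t, "iterations")

Because the weights are scaled down, the map is a contraction, so the loop converges to the unique fixed point guaranteed by the theorem; in the original model, f would be a learned feed-forward network constrained to be contractive.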
DeepWalk: the first unsupervised algorithm for node embeddings

DeepWalk [2] was the first algorithm proposing to learn node embeddings in an unsupervised manner. Its training process closely resembles word embedding, motivated by the fact that both the nodes in a graph and the words in a corpus follow a power-law distribution.

The algorithm consists of two steps (a sketch of both follows this section):

1. Perform random walks on the nodes of the graph to generate node sequences.
2. Run skip-gram to learn an embedding for each node, based on the node sequences generated in step 1.

At each time step of a random walk, the next node is sampled uniformly from the neighbors of the previous node. Each sequence is then truncated into sub-sequences of length 2|w| + 1, where w denotes the window size of skip-gram.

The paper adopts hierarchical softmax to address the cost of computing softmax over the large number of nodes. To compute the softmax value of an individual output element, we must compute e^{x_k} for all k elements:

    \sigma(x)_i = \frac{e^{x_i}}{\sum_k e^{x_k}}

The computation time of the plain softmax is therefore O(|V|), where V denotes the set of vertices in the graph. Hierarchical softmax tackles this with a binary tree in which every leaf represents a vertex of the graph. At each internal node, a binary classifier decides which path to take; to compute the probability of a given vertex v_k, one simply multiplies the probabilities along the single path from the root down to the leaf v_k. Since the probabilities of the children of each node sum to 1, the property that the probabilities of all vertices sum to 1 is preserved. Because the longest path in a binary tree has length O(log n), where n is the number of leaves, the computation time per element is reduced to O(log |V|).

After a DeepWalk model is trained, it has learned a good representation of each node: in the output (two-dimensional) embedding, nodes with the same label are grouped together, while most nodes with different labels are properly separated.

The main problem with DeepWalk, however, is its lack of generalization ability. Whenever a new node appears, the model has to be retrained in order to represent it, so DeepWalk is not suitable for dynamic graphs whose nodes change constantly.
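A minimal sketch of both steps, assuming gensim (version 4 or later) provides the skip-gram implementation; the toy graph, walk length, and number of walks per node are arbitrary illustrative choices.

    import random
    from gensim.models import Word2Vec   # assumed dependency: gensim >= 4

    # Toy undirected graph as an adjacency list (hypothetical example).
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

    def random_walk(start, length):
        """Step 1: walk `length` nodes, sampling each successor uniformly."""
        walk = [start]
        for _ in range(length - 1):
            walk.append(random.choice(neighbors[walk[-1]]))
        return [str(v) for v in walk]    # Word2Vec expects string tokens

    walks = [random_walk(v, 10) for v in neighbors for _ in range(20)]

    # Step 2: skip-gram (sg=1) with hierarchical softmax (hs=1), as in DeepWalk.
    model = Word2Vec(walks, vector_size=8, window=3, min_count=0, sg=1, hs=1)
    print(model.wv["0"])                 # learned embedding of node 0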
GraphSage: learning an embedding for each node

GraphSage [3] provides a solution to the problem above by learning an embedding for each node in an inductive way. Concretely, each node is represented by an aggregation of its neighborhood, so even a node unseen during training can still be properly represented by its neighbors. In the GraphSage algorithm, the outer loop indicates the number of update iterations, and h^k_v denotes the latent vector of node v at update iteration k. At each iteration, h^k_v is updated based on an aggregation function, the latent vectors of v and of its neighborhood from the previous iteration, and a weight matrix W^k:

    h^k_v = \sigma\left( W^k \cdot \text{CONCAT}\left( h^{k-1}_v, \text{AGGREGATE}_k(\{ h^{k-1}_u, \forall u \in N(v) \}) \right) \right)

The paper proposes three aggregation functions (a sketch of the first follows this list):

1. Mean aggregator: takes the average of the latent vectors of a node and all its neighbors. Compared with the update equation above, it drops the concatenation operation (line 5 of the paper's pseudocode). This operation can be viewed as a "skip connection", which later sections of the paper show to improve performance to a large extent.

2. LSTM aggregator: since the nodes in a graph have no natural order, an order is assigned by randomly permuting the nodes before feeding them to an LSTM.

3. Pooling aggregator: applies an element-wise pooling function to the neighbor set. Max-pooling, for example, is

    \text{AGGREGATE}^{\text{pool}}_k = \max\left( \{ \sigma( W_{\text{pool}} h^k_{u_i} + b ), \forall u_i \in N(v) \} \right)

and the paper uses max-pooling as the default aggregation function.
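Below is a minimal NumPy sketch of one update iteration with the mean aggregator, assuming a toy graph, random previous-iteration states, ReLU as the nonlinearity, and the L2 normalization the paper applies after each iteration; names and dimensions are illustrative, not the paper's reference implementation.

    import numpy as np

    # Toy graph and previous-iteration states (hypothetical example).
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    n, d = 4, 8
    rng = np.random.default_rng(0)
    H = rng.normal(size=(n, d))      # h^{k-1}_v for every node v
    W = rng.normal(size=(d, d))      # weight matrix W^k

    def mean_aggregate_update(H, W):
        """h^k_v = ReLU(W^k . mean({h^{k-1}_v} U {h^{k-1}_u : u in N(v)}))"""
        H_new = np.empty_like(H)
        for v, nbrs in neighbors.items():
            mean = H[[v] + nbrs].mean(axis=0)      # average self + neighbors
            H_new[v] = np.maximum(0.0, W @ mean)   # ReLU nonlinearity
        # L2-normalize each state, as the paper does after every iteration.
        return H_new / (np.linalg.norm(H_new, axis=1, keepdims=True) + 1e-12)

    H = mean_aggregate_update(H, W)
    print(H.shape)   # (4, 8)

Stacking K such updates lets every node absorb information from its K-hop neighborhood, which is what makes the representation inductive: a new node only needs its neighbors' states, not a retrained model.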
The loss function of unsupervised GraphSage is defined as

    J_G(z_u) = -\log\left( \sigma( z_u^\top z_v ) \right) - Q \cdot \mathbb{E}_{v_n \sim P_n(v)} \log\left( \sigma( -z_u^\top z_{v_n} ) \right)

where u and v co-occur on a fixed-length random walk, while the v_n are negative samples that do not co-occur with u. Such a loss function encourages nodes that are close to have similar embeddings, while nodes that are far apart are pushed away from each other in the projected space; in this way, the nodes gain more and more information about their neighborhoods. A sketch of this loss appears below.

By aggregating the nodes in its vicinity, GraphSage can generate representations even for unseen nodes. It allows node embeddings to be applied to domains with dynamic graphs, in which the structure of the graph changes constantly. Pinterest, for example, has adopted PinSage, an extended version of GraphSage, as the core of its content discovery system.
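A minimal NumPy sketch of this loss for one positive pair (u, v) and Q sampled negatives, approximating the expectation by the average over the samples; the embeddings here are random placeholders.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def unsupervised_loss(z_u, z_v, z_neg, Q=5):
        """J_G(z_u) for one co-occurring pair (u, v) and Q negatives v_n."""
        positive = -np.log(sigmoid(z_u @ z_v))                  # pull u, v together
        negative = -Q * np.mean(np.log(sigmoid(-z_neg @ z_u)))  # push negatives away
        return positive + negative

    rng = np.random.default_rng(0)
    z_u, z_v = rng.normal(size=8), rng.normal(size=8)
    z_neg = rng.normal(size=(5, 8))   # Q = 5 negative samples
    print(unsupervised_loss(z_u, z_v, z_neg))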
Summary

In this article, we covered the basics of graph neural networks, DeepWalk, and GraphSage. The power of GNNs in modeling complex graph structures is truly astonishing. Given their effectiveness, I believe GNNs will play an important role in the development of AI in the near future.

References

[1] Scarselli, Franco, et al. "The graph neural network model." http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1015.7227&rep=rep1&type=pdf
[2] Perozzi, Bryan, Rami Al-Rfou, and Steven Skiena. "DeepWalk: Online learning of social representations." http://www.perozzi.net/publications/14_kdd_deepwalk.pdf
[3] Hamilton, Will, Zhitao Ying, and Jure Leskovec. "Inductive representation learning on large graphs." https://www-cs-faculty.stanford.edu/people/jure/pubs/graphsage-nips17.pdf

Original title: "Mastering the basics of graph neural networks (GNN): this article is enough". Source: WeChat official account Xin Zhiyuan (AI_ERA).