Table of Contents
1. Introduction to graph neural networks
  1. Why study graphs?
  2. Graph structured data is everywhere
  3. Recent trends in graph machine learning
  4. A brief history of graph neural networks
2. Application of graph neural network in computer vision
3. Application of graph neural network in natural language processing
4. Application of graph neural network in program analysis

The foundation, frontier and application of GNN

Apr 11, 2023 pm 11:40 PM
machine learning Neural Networks

In recent years, graph neural networks (GNNs) have made rapid, remarkable progress. Graph neural networks, also known as graph deep learning, graph representation learning, or geometric deep learning, are among the fastest-growing research topics in machine learning, and in deep learning in particular. This talk, titled "Basics, Frontiers and Applications of GNN", mainly introduces the comprehensive book "Basics, Frontiers and Applications of Graph Neural Networks" compiled by the scholars Wu Lingfei, Cui Peng, Pei Jian, and Zhao Liang.

1. Introduction to graph neural networks

1. Why study graphs?

Graphs are a universal language for describing and modeling complex systems. A graph itself is not complicated; it consists mainly of nodes and edges. Nodes can represent any objects we want to model, and edges can represent the relationships or similarities between two nodes. What we call graph neural networks or graph machine learning typically takes the graph structure, together with the edge and node information, as the input of an algorithm and outputs the desired results. For example, in a search engine, when we enter a query, the engine returns personalized search results based on the query information, user information, and some contextual information; all of this information can be naturally organized in a graph.

2. Graph structured data is everywhere

Graph-structured data can be found everywhere: the Internet, social networks, and so on. In the currently very popular field of protein discovery, graphs are used to describe and model existing proteins, and new graphs are generated to help discover new drugs. Graphs can also be used for complex program analysis and for high-level reasoning in computer vision.

3. Recent trends in graph machine learning

Graph machine learning is not a new topic; the research direction has existed for about 20 years, though it long remained relatively niche. Since 2016, with the emergence of papers on modern graph neural networks, graph machine learning has become a popular research direction. It has been found that this new generation of methods can better learn both the data itself and the relationships between data points, producing better representations and ultimately better performance on important tasks.

4. A brief history of graph neural networks

The earliest papers on graph neural networks appeared in 2009, before deep learning became popular. Papers on modern graph neural networks appeared in 2016 as improvements over the early models. The emergence of GCN then drove the rapid development of graph neural networks, and since 2017 a large number of new algorithms have appeared. As these algorithms matured, industry began applying them to practical problems from 2019 onward, and many open-source tools were developed to make problem-solving more efficient. Since 2021, many books on graph neural networks have been written, including, of course, "Basics, Frontiers and Applications of Graph Neural Networks".

The book "Basics, Frontiers and Applications of Graph Neural Networks" systematically introduces the core concepts and techniques of graph neural networks, along with cutting-edge research and applications in different fields. Readers from both academia and industry can benefit from it.

2. The basics of graph neural networks

1. The life cycle of machine learning

The figure above shows the life cycle of machine learning, in which feature learning is a very important step: its main task is to transform raw data into structured data. Before the emergence of deep learning, this task was mainly done through feature engineering; after deep learning emerged, this end-to-end approach to machine learning became mainstream.

2. Feature learning in the graph

Feature learning in graphs is very similar to deep learning: the goal is to design an effective, task-dependent or task-independent feature learning method that maps the nodes of the original graph into a high-dimensional space to obtain an embedding representation of each node, which is then used to complete downstream tasks.

3. The basis of graph neural network

There are two types of representations that need to be learned in graph neural networks:

  • Representation of graph nodes

This requires a filter operation, which takes the graph's matrix and the nodes' vector representations as input and iteratively learns and updates the nodes' vector representations. Common filter operations are spectral-based, spatial-based, attention-based, and recurrent-based.
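
As a concrete illustration, one step of a spatial filter can be sketched in plain NumPy. The self-loop trick, symmetric normalization, and ReLU below follow the GCN-style filter; the function and variable names are illustrative, not from the talk:

```python
import numpy as np

def gcn_filter(A, H, W):
    """One spatial filter step: aggregate neighbor features through the
    self-loop-augmented, symmetrically normalized adjacency matrix,
    then apply a learnable linear map and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(0.0, A_norm @ H @ W)    # ReLU(A_norm H W)

# toy graph: 3 nodes on a path 0-1-2, 2-dim input features, 2-dim output
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3, 2)          # initial node representations
W = np.ones((2, 2))       # the learnable filter weights
H_next = gcn_filter(A, H, W)
```

Stacking several such steps, each with its own `W`, gives the layered filtering described above.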

  • Representation of the graph

This requires a pool operation, which takes the graph's matrix and the nodes' vector representations as input and iteratively learns a coarser graph with fewer nodes and their vector representations, finally obtaining a graph-level vector representation of the entire graph. Common pool operations include flat graph pooling (such as max, average, and min) and hierarchical graph pooling (such as DiffPool).
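
Flat graph pooling is the simplest case and can be shown directly; the snippet below (illustrative names) collapses an n x d node-feature matrix into one d-dimensional graph vector:

```python
import numpy as np

def flat_pool(H, mode="mean"):
    """Flat graph pooling: collapse the node-feature matrix H (n x d)
    into a single d-dimensional graph-level representation."""
    ops = {"mean": np.mean, "max": np.max, "min": np.min}
    return ops[mode](H, axis=0)

# toy graph with 3 nodes and 2-dim node representations
H = np.array([[1., 4.],
              [3., 2.],
              [5., 0.]])
g_mean = flat_pool(H, "mean")
g_max = flat_pool(H, "max")
```

Hierarchical pooling such as DiffPool instead learns a soft cluster assignment to produce the intermediate, smaller graphs.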

4. Basic model of graph neural network

There is a concept of context learning in machine learning. In a graph neural network, the context of a node is its neighbor nodes: we can use a node's neighbors to learn that node's vector representation.

In this way, each node defines its own computation graph.

We can organize the computation graph into layers. The first layer holds the original input information, and information is passed and aggregated layer by layer to learn vector representations of all nodes.

The figure above outlines the four main steps of training a graph neural network model:

  • Define an aggregation function;
  • Define the loss function according to the task;
  • Train on a batch of nodes, for example a batch of computation graphs at a time;
  • Produce the required vector representation for each node, including nodes that were never trained on (what is learned is the aggregation function, so a new node's vector representation can be obtained by applying the learned aggregation function to the representations that have already been trained).
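
The four steps above can be sketched end to end on a toy graph. The concrete choices below, a mean aggregator, a squared-error node-regression loss, and finite-difference gradient steps, are illustrative assumptions, not the talk's setup; a real implementation would use an autograd framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy path graph 0-1-2 with self-loops; each row of A_norm averages a
# node's neighborhood, i.e. rows implement mean aggregation
A_norm = np.array([[1/2, 1/2, 0.0],
                   [1/3, 1/3, 1/3],
                   [0.0, 1/2, 1/2]])
X = np.eye(3)                          # one-hot input node features
y = np.array([1.0, 0.0, -1.0])         # per-node regression targets

def loss(W):
    # step 1: the aggregation function (mean aggregation + linear map)
    pred = (A_norm @ X @ W).ravel()
    # step 2: the task-specific loss
    return np.mean((pred - y) ** 2)

W = rng.normal(size=(3, 1))
lr, eps = 0.5, 1e-6
loss_start = loss(W)
for _ in range(1000):                  # step 3: train on the batch of nodes
    grad = np.zeros_like(W)
    for i in range(W.size):            # crude finite-difference gradient
        Wp = W.copy()
        Wp.flat[i] += eps
        grad.flat[i] = (loss(Wp) - loss(W)) / eps
    W -= lr * grad
loss_end = loss(W)
# step 4: the learned parameters produce a representation for any node,
# including nodes never seen during training (the inductive setting)
```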

The figure above shows an example that uses the average as the aggregation function: the vector representation of node v at layer k depends on the average of its neighbors' vector representations at the previous layer, together with its own vector representation at the previous layer.
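
In code, that update rule looks roughly as follows (ReLU is an assumed nonlinearity, and the names are illustrative):

```python
import numpy as np

def mean_layer(h_prev, neighbors, v, W, B):
    """h_v at layer k = ReLU(W @ (mean of neighbors' layer k-1 vectors)
                             + B @ (v's own layer k-1 vector))."""
    nbr_mean = h_prev[neighbors[v]].mean(axis=0)
    return np.maximum(0.0, W @ nbr_mean + B @ h_prev[v])

# toy example: node 0 has neighbors 1 and 2
h_prev = np.array([[1.0, 0.0],
                   [0.0, 2.0],
                   [0.0, 4.0]])
neighbors = {0: [1, 2]}
W = np.eye(2)   # weights on the neighbor average
B = np.eye(2)   # weights on the node's own previous representation
h0_next = mean_layer(h_prev, neighbors, 0, W, B)
```

Here the neighbor average is [0, 3], the self term is [1, 0], so node 0's new representation is [1, 3].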

To summarize: the main idea of a graph neural network is to generate a target node's vector representation by aggregating information from its neighbor nodes. This design shares parameters in the encoder and also allows inductive learning on unseen nodes.

5. Popular models of graph neural networks

In essence, the classic and popular graph neural network algorithms differ in the aggregation function or filter function they use, and they can be divided into supervised and unsupervised graph neural networks.

GCN

GCN is one of the most classic algorithms. It acts directly on the graph and exploits its structural information. As shown in the figure above, GCN went through several iterations focused on improving model speed, practicality, and stability. The GCN paper was of epoch-making significance and laid the foundation for graph neural networks.

MPNN

Its core idea is to cast graph convolution as a message-passing process. It defines two functions, an aggregation (message) function and an update function. The algorithm is simple and general, but not very efficient.
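
A minimal sketch of that message/update decomposition is below; the two lambdas in the toy run are placeholder functions (real MPNNs use learned neural networks for both):

```python
import numpy as np

def mpnn_layer(edges, h, message_fn, update_fn):
    """One message-passing step: sum the incoming messages for each node
    (the aggregation), then combine them with the node's old state
    (the update)."""
    msgs = {v: np.zeros_like(vec) for v, vec in h.items()}
    for u, v in edges:                       # directed edge u -> v
        msgs[v] = msgs[v] + message_fn(h[u], h[v])
    return {v: update_fn(h[v], msgs[v]) for v in h}

# toy run: the message is the sender's state, the update adds messages on
h = {0: np.array([1.0]), 1: np.array([2.0]), 2: np.array([0.0])}
edges = [(0, 2), (1, 2)]
h_next = mpnn_layer(edges, h, lambda hu, hv: hu, lambda hv, m: hv + m)
```

Node 2 receives messages from nodes 0 and 1, so its state becomes 0 + 1 + 2 = 3, while nodes with no incoming edges keep their state.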

GraphSage

GraphSage is an industrial-grade algorithm. It uses sampling to obtain a fixed number of neighbor nodes and aggregates them to learn the node's vector representation.
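
The sampling idea can be sketched as follows (illustrative only; the full GraphSAGE also specifies aggregators and minibatch training):

```python
import numpy as np

def sample_neighbors(adj_list, v, k, rng):
    """Draw a fixed budget of k neighbors for node v, sampling with
    replacement when the node has fewer than k neighbors, so every
    node contributes the same amount of work per layer."""
    nbrs = adj_list[v]
    return rng.choice(nbrs, size=k, replace=len(nbrs) < k)

adj_list = {0: [1, 2, 3, 4], 1: [0]}
rng = np.random.default_rng(0)
sampled = sample_neighbors(adj_list, 0, 2, rng)  # 2 of node 0's 4 neighbors
```

This fixed budget is what makes the method practical on industrial-scale graphs, where hub nodes may have millions of neighbors.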

GAT

GAT introduces the idea of attention. Its core point is to dynamically learn edge weights during message passing.
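
Those dynamic edge weights amount to a softmax over learned edge scores. The plain dot-product scorer below is a simplified stand-in for GAT's LeakyReLU(a^T [W h_i || W h_j]) scoring, and the names are illustrative:

```python
import numpy as np

def attention_weights(h, v, neighbors, a):
    """Score each edge (v, u) by applying a learned vector a to the
    concatenated endpoint features, then normalize with a softmax so
    the weights over v's neighborhood sum to one."""
    scores = np.array([a @ np.concatenate([h[v], h[u]]) for u in neighbors])
    e = np.exp(scores - scores.max())      # numerically stable softmax
    return e / e.sum()

h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
a = np.array([0.5, 0.5, 1.0, -1.0])       # a toy learned scoring vector
alpha = attention_weights(h, 0, [1, 2], a)  # weights over node 0's neighbors
```

The neighbor average is then replaced by the weighted sum of neighbor representations under these weights.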

In addition to the algorithms introduced above, there is also GGNN, whose characteristic is that the output can be multiple nodes. Interested readers can consult the related papers.

Chapters 5 through 8 of "Basics, Frontiers and Applications of Graph Neural Networks" also cover, respectively, the evaluation of graph neural networks, their scalability, their interpretability, and their adversarial robustness. If you are interested, you can read the corresponding chapters of the book.

3. The frontiers of graph neural networks

1. Graph Structure Learning

A graph neural network requires graph-structured data, but whether a given graph structure is optimal is questionable: it may contain a lot of noise, and many applications may have no graph-structured data at all, perhaps only raw features.

So we need graph structure learning to jointly learn the optimal graph structure and the node representations.

We transform learning the graph into learning similarities between nodes, use regularization to control properties such as smoothness, sparsity, and connectivity, and iteratively refine the graph structure and the node vector representations.
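
One common instantiation of "learning the graph as node similarities" is a k-nearest-neighbor graph over node features, re-sparsified on each refinement step. The sketch below is a minimal, assumed version of that idea (cosine similarity and top-k sparsification are our choices, not necessarily the talk's):

```python
import numpy as np

def similarity_graph(X, k=2):
    """Build an adjacency matrix from node features: cosine similarity
    between every pair of nodes, keep each node's top-k most similar
    neighbors (sparsification), then symmetrize."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)           # forbid self-edges
    A = np.zeros_like(S)
    for i in range(S.shape[0]):
        A[i, np.argsort(S[i])[-k:]] = 1.0  # keep the k most similar nodes
    return np.maximum(A, A.T)              # symmetrize

# two pairs of similar nodes: {0, 1} and {2, 3}
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0],
              [0.1, 0.9]])
A = similarity_graph(X, k=1)
```

In an iterative scheme, the learned node embeddings replace the raw features `X` on the next round, so the graph and the representations refine each other.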

Experimental results demonstrate the advantages of this method.

Visualizations of the learned graphs show that they tend to cluster similar objects together and offer a certain degree of interpretability.

2. Other Frontiers

The book "Basics, Frontiers and Applications of Graph Neural Networks" also introduces the following frontier research directions, which have important applications in many scenarios:

  • Graph classification;
  • Link prediction;
  • Graph generation;
  • Graph transformation;
  • Graph matching;
  • Dynamic graph neural networks;
  • Heterogeneous graph neural networks;
  • AutoML for graph neural networks;
  • Self-supervised learning of graph neural networks.

4. Applications of graph neural networks

1. Application of graph neural network in recommendation system

We can use session information to construct a heterogeneous global graph, learn vector representations of users and items with a graph neural network, and use these representations for personalized recommendation.

2. Application of graph neural network in computer vision

We can track how objects change dynamically over time and use graph neural networks to deepen the understanding of video.

3. Application of graph neural network in natural language processing

We can use graph neural networks to understand high-level information in natural language.

4. Application of graph neural network in program analysis

5. Application of graph neural network in smart cities

5. Q&A session

Q1: Is GNN an important method for the next generation of deep learning?

A1: Graph neural networks are a very important branch; the approach that keeps pace with them is the Transformer. Given the flexibility of graph neural networks, the two can be combined to play to even greater advantage.

Q2: Can GNN and causal learning be combined? How to combine?

A2: A key element of causal learning is the causal graph, and causal graphs can be naturally combined with GNNs. The difficulty of causal learning is that its datasets are small; we can use the power of GNNs to learn causal graphs better.

Q3: What is the difference and connection between the interpretability of GNN and the interpretability of traditional machine learning?

A3: There is a detailed discussion of this in the book "Basics, Frontiers and Applications of Graph Neural Networks".

Q4: How to train and infer GNN directly based on the graph database and using the capabilities of graph computing?

A4: At present there is no well-established practice for a unified graph computing platform. Some startups and research teams are exploring this direction, and it will be a very valuable and challenging research area. A more feasible approach is to divide the work by application domain.

The above is the detailed content of The foundation, frontier and application of GNN. For more information, please follow other related articles on the PHP Chinese website!
