
To solve the problem of VAE representation learning, Hokkaido University proposed a new generative model GWAE

Apr 07, 2023, 05:47 PM

Learning low-dimensional representations of high-dimensional data is a fundamental task in unsupervised learning: such representations succinctly capture the essence of the data and make it possible to run downstream tasks on low-dimensional inputs. The variational autoencoder (VAE) is an important representation learning method, yet controlling what it learns remains difficult because of its objective. Although the evidence lower bound (ELBO) objective of the VAE is formulated for generative modeling, it does not directly target representation learning, so the representation learning task requires specific modifications to the objective, such as disentanglement terms. These modifications sometimes lead to implicit and undesirable changes in the model, making controlled representation learning a challenging task.

To solve the representation learning problem in variational autoencoders, this paper proposes a new generative model called the Gromov-Wasserstein Autoencoder (GWAE). GWAE provides a new framework for representation learning based on the VAE model architecture. Unlike conventional VAE-based methods, which learn representations as a by-product of generative modeling of the data variables, GWAE obtains beneficial representations through optimal transport between the data and latent variables. The Gromov-Wasserstein (GW) metric enables such optimal transport between incomparable variables (e.g., variables of different dimensionality) by focusing on the distance structure within each variable. By replacing the ELBO objective with the GW metric, GWAE performs a comparison between the data space and the latent space, directly targeting representation learning in variational autoencoders (Figure 1). This formulation allows the learned representations to be endowed with specific properties considered beneficial (e.g., disentanglement), known as meta-priors.


Figure 1: The difference between VAE and GWAE

This study has been accepted by ICLR 2023.

  • Paper link: https://arxiv.org/abs/2209.07007
  • Code link: https://github.com/ganmodokix/gwae

Method introduction

The GW objective between the data distribution and the latent prior distribution is defined as an optimal transport cost that compares the distance structures of the two spaces.

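The equation itself appears only as an image in the source. A standard form of the Gromov-Wasserstein objective between a data distribution p(x) and a latent prior p(z) can be sketched as follows; the notation (distance functions d_X, d_Z and the coupling set Π) is a conventional assumption, not taken verbatim from the paper:

```latex
\mathrm{GW}\bigl(p(x),\, p(z)\bigr)
  = \inf_{\pi \in \Pi(p(x),\, p(z))}
    \mathbb{E}_{(x,z)\sim\pi}\,\mathbb{E}_{(x',z')\sim\pi}
    \Bigl[\, \bigl|\, d_{\mathcal{X}}(x, x') - d_{\mathcal{Z}}(z, z') \,\bigr|^{2} \Bigr]
```

The infimum runs over all couplings π of the two distributions; minimizing it aligns the pairwise-distance structure of the data space with that of the latent space, without ever comparing a data point to a latent point directly.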

This optimal transport cost can measure the discrepancy between distributions on incomparable spaces; however, for continuous distributions it is impractical to compute the exact GW value, because it requires taking an infimum over all couplings. To solve this problem, GWAE solves a relaxed optimization problem, estimating and minimizing a GW estimator whose gradient can be computed by automatic differentiation. The relaxed objective is the sum of the main estimated GW loss and three regularization losses: a WAE-based reconstruction loss, a merged sufficient-condition loss, and an entropy regularization loss, all of which can be implemented in a differentiable programming framework such as PyTorch.
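As a rough illustration of the estimated GW term (not the paper's implementation), a minibatch estimate can be written in PyTorch by comparing the pairwise-distance matrices computed in the two spaces; the function name and the choice of Euclidean distances here are assumptions:

```python
import torch

def minibatch_gw_loss(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Minibatch estimate of the squared mismatch between the distance
    structure of a data batch x and its paired latent codes z.

    x: (B, ...) data batch; z: (B, latent_dim) latents, paired row-wise.
    """
    d_x = torch.cdist(x.flatten(1), x.flatten(1))  # (B, B) data-space distances
    d_z = torch.cdist(z, z)                        # (B, B) latent-space distances
    # Penalize mismatch between the two pairwise-distance matrices.
    return ((d_x - d_z) ** 2).mean()
```

Because `torch.cdist` and the squared difference are differentiable, this estimate can be minimized by gradient descent through both the encoder and the prior sampler, which is what makes the relaxed objective trainable end to end.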

This scheme can also flexibly customize the prior distribution to introduce beneficial features into the low-dimensional representation. Specifically, the paper introduces three families of priors:

Neural Prior (NP) In GWAEs with an NP, a fully connected neural network is used to construct the prior sampler. This family of priors makes few assumptions about the latent variables and is suitable for general situations.
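A minimal sketch of such a sampler, assuming (the class name, layer sizes, and noise dimensionality are illustrative choices, not the paper's): standard Gaussian noise pushed through a small MLP, so that prior samples remain differentiable with respect to the network parameters.

```python
import torch

class NeuralPrior(torch.nn.Module):
    """Prior sampler: push standard Gaussian noise through a small MLP."""

    def __init__(self, noise_dim: int, latent_dim: int, hidden: int = 64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = torch.nn.Sequential(
            torch.nn.Linear(noise_dim, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, latent_dim),
        )

    def sample(self, n: int) -> torch.Tensor:
        # Samples are differentiable w.r.t. the network parameters.
        return self.net(torch.randn(n, self.noise_dim))
```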

Factorized Neural Prior (FNP) In GWAEs with an FNP, a locally connected neural network builds the sampler, in which the entry for each latent variable is generated independently. This sampler produces a factorized prior and an entry-wise independent representation, a prominent approach to disentanglement, which is a representative meta-prior.
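One way to realize such entry-wise independence, sketched here under assumed names and sizes, is to give each latent entry its own small network fed by its own scalar noise, so the prior factorizes across dimensions by construction:

```python
import torch

class FactorizedNeuralPrior(torch.nn.Module):
    """Prior sampler in which each latent entry is generated independently
    by its own small network, yielding a factorized prior."""

    def __init__(self, latent_dim: int, hidden: int = 32):
        super().__init__()
        self.nets = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Linear(1, hidden),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden, 1),
            )
            for _ in range(latent_dim)
        )

    def sample(self, n: int) -> torch.Tensor:
        # Independent scalar noise per latent entry; no cross-entry mixing.
        noise = torch.randn(n, len(self.nets), 1)
        cols = [net(noise[:, i]) for i, net in enumerate(self.nets)]
        return torch.cat(cols, dim=1)
```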

Gaussian Mixture Prior (GMP) A GMP is defined as a mixture of several Gaussian distributions, and its sampler can be implemented with the reparameterization trick and the Gumbel-Max trick. GMP allows clusters to be hypothesized in the representation, where each Gaussian component of the prior is expected to capture one cluster.
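A minimal sketch of such a sampler (function name and parameterization are assumptions): the Gumbel-Max trick picks a mixture component per sample, and the reparameterization trick draws from the chosen Gaussian. Note that the `argmax` step is not differentiable with respect to the mixture logits; a softmax relaxation may be used in practice.

```python
import torch

def sample_gmp(logits: torch.Tensor, means: torch.Tensor,
               log_stds: torch.Tensor, n: int) -> torch.Tensor:
    """Draw n samples from a Gaussian mixture prior.

    logits: (K,) unnormalized log mixture weights
    means, log_stds: (K, D) parameters of the K Gaussian components
    """
    # Gumbel-Max trick: pick a component index for each sample.
    gumbel = -torch.log(-torch.log(torch.rand(n, logits.shape[0])))
    comp = (logits + gumbel).argmax(dim=1)                  # (n,)
    # Reparameterization trick: mean + std * eps for the chosen component.
    eps = torch.randn(n, means.shape[1])
    return means[comp] + log_stds.exp()[comp] * eps
```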

Experiments and results

This study conducted empirical evaluations of GWAE on two main meta-priors: disentanglement and clustering.

Disentanglement The study used the 3D Shapes dataset and the DCI metric to measure the disentanglement ability of GWAE. The results show that a GWAE with an FNP is able to learn the object-hue factor on a single axis, demonstrating its disentanglement capability. Quantitative evaluation also confirms the disentanglement performance of GWAE.


Clustering To evaluate representations obtained under the clustering meta-prior, the study conducted an Out-of-Distribution (OoD) detection experiment. The MNIST dataset was used as In-Distribution (ID) data and the Omniglot dataset as OoD data: MNIST contains handwritten digits, while Omniglot contains handwritten characters from various alphabets, so the two datasets share the domain of handwritten images but contain different characters. Models were trained on the ID data, and their learned representations were then used to classify inputs as ID or OoD. In VAE and DAGMM, the variable used for OoD detection is the prior log-likelihood, while in GWAE it is the Kantorovich potential. The prior for GWAE was constructed with a GMP to capture the clusters in MNIST. The ROC curves show the OoD detection performance of the models; all three achieve near-perfect performance, but the GMP-based GWAE performs best in terms of area under the curve (AUC).


In addition, this study evaluated the generative ability of GWAE.

Performance as an autoencoder-based generative model To evaluate the ability of GWAE to handle the general case without specific meta-priors, generative performance was evaluated on the CelebA dataset. The experiment used FID to evaluate generative performance and PSNR to evaluate autoencoding performance. GWAE with an NP achieved the second-best generative performance and the best autoencoding performance, demonstrating its ability to capture the data distribution in its model and the data information in its representation.


Summary

  • GWAE is a variational-autoencoder-style generative model built on the Gromov-Wasserstein metric, aiming at direct representation learning.
  • Since the prior only needs to provide differentiable samples, various prior distributions can be constructed to assume meta-priors (desirable properties of the representation).
  • Experiments on primary meta-priors and performance evaluation as a variational autoencoder demonstrate the flexibility of the GWAE formulation and its representation learning capability.
  • First author Nao Nakagawa’s personal homepage: https://ganmodokix.com/note/cv
  • Hokkaido University Multimedia Laboratory homepage: https://www-lmd.ist.hokudai.ac.jp/
