Stanford CS236: Deep Generative Models | 2023 | Lecture 2 - Background

Generative vs. Discriminative Models

  • Generative models aim to learn the joint distribution over inputs and labels, P(X, Y), while discriminative models only learn to predict the output variable given the input, P(Y|X).
  • Generative models can be used for tasks such as sampling new data, density estimation, anomaly detection, and unsupervised feature learning.
  • Discriminative models are often simpler and more efficient to train, but they capture less information about the underlying data distribution (see the sketch just after this list).
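
As an illustration (a minimal sketch, not from the lecture; scikit-learn and the synthetic dataset are assumptions), a generative classifier such as Gaussian Naive Bayes fits p(x|y) and p(y), so it can both classify and sample new inputs, while logistic regression fits p(y|x) alone:

```python
# A minimal sketch contrasting a generative classifier (Gaussian Naive Bayes,
# which models p(x|y) and p(y)) with a discriminative one (logistic regression,
# which models p(y|x) directly). The synthetic dataset is illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

gen = GaussianNB().fit(X, y)           # learns p(x|y) and p(y), i.e. the joint
disc = LogisticRegression().fit(X, y)  # learns p(y|x) only

print("accuracy:", gen.score(X, y), disc.score(X, y))  # both can classify

# Only the generative model can sample new inputs: draw a class from p(y),
# then draw features from the fitted class-conditional Gaussian p(x|y).
rng = np.random.default_rng(0)
k = rng.choice(len(gen.classes_), p=gen.class_prior_)
x_new = rng.normal(gen.theta_[k], np.sqrt(gen.var_[k]))
print("sampled class:", gen.classes_[k], "sampled features:", x_new)
```

The generative model pays for the extra capability with stronger assumptions about the form of p(x|y).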

Challenges in Generative Modeling

  • The curse of dimensionality: the number of possible outcomes grows exponentially with the number of random variables (see the parameter-count sketch after this list).
  • Conditional independence assumptions: simplifying assumptions are often necessary to make the problem tractable.
  • Complex conditional distributions: even after factorization, both generative and discriminative models must represent conditional distributions that can be highly complex.
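
To make the curse of dimensionality concrete, consider n binary variables: a full joint table needs 2^n − 1 parameters, while assuming full independence needs only n. A back-of-the-envelope sketch (the numbers below are just this arithmetic):

```python
# Parameter counts for a distribution over n binary variables: a full joint
# table has one entry per outcome (minus the sum-to-one constraint), while
# full independence needs just one Bernoulli parameter per variable.
for n in [10, 20, 30, 100]:
    full_joint = 2**n - 1
    independent = n
    print(f"n={n}: full joint = {full_joint:,} params, independent = {independent}")
```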

Deep Generative Models

  • Deep generative models use neural networks to represent conditional distributions and can be used to generate new data.
  • Autoregressive models factor the joint distribution via the chain rule and use a neural network to predict each element of a sequence given the previous elements (first sketch below).
  • Mixture models represent data as a weighted combination of simpler component distributions (second sketch below).
  • Variational autoencoders (VAEs) use neural networks to parameterize a complex conditional distribution over the data given latent variables.
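
To make the autoregressive idea concrete, here is a minimal sketch (PyTorch is an assumption; this is not the lecture's code) of the chain-rule factorization p(x_1, ..., x_n) = ∏_i p(x_i | x_1, ..., x_{i-1}) over binary variables, with one shared network for every conditional:

```python
# A toy autoregressive model over n binary variables: one shared MLP reads the
# (zero-padded) prefix x_{<i} and outputs the logit of p(x_i = 1 | x_{<i}).
import torch
import torch.nn as nn
import torch.nn.functional as F

n = 8
net = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, 1))

def log_prob(x):
    """Log-likelihood sum_i log p(x_i | x_{<i}) for a batch x of shape [B, n]."""
    lp = torch.zeros(x.shape[0])
    for i in range(n):
        prefix = torch.cat([x[:, :i], torch.zeros(x.shape[0], n - i)], dim=1)
        logit = net(prefix).squeeze(-1)
        lp = lp - F.binary_cross_entropy_with_logits(logit, x[:, i], reduction="none")
    return lp

def sample(batch_size=4):
    """Generate new data one variable at a time, feeding samples back in."""
    x = torch.zeros(batch_size, n)
    for i in range(n):
        prefix = torch.cat([x[:, :i], torch.zeros(batch_size, n - i)], dim=1)
        p = torch.sigmoid(net(prefix).squeeze(-1))
        x[:, i] = torch.bernoulli(p)
    return x

x = torch.bernoulli(torch.full((4, n), 0.5))  # toy batch of "data"
print(log_prob(x))  # differentiable, so it can be maximized by gradient descent
print(sample())
```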
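Similarly, a mixture model can be sampled in two stages (a NumPy sketch with made-up toy parameters): pick a component from the mixing weights, then sample from that component's distribution.

```python
# Sampling from a two-component Gaussian mixture: z ~ Categorical(weights),
# then x ~ Normal(means[z], stds[z]). All parameters here are illustrative.
import numpy as np

weights = np.array([0.3, 0.7])  # mixing proportions p(z)
means = np.array([-2.0, 3.0])   # per-component means
stds = np.array([0.5, 1.0])     # per-component standard deviations

rng = np.random.default_rng(0)
z = rng.choice(2, size=1000, p=weights)  # latent component for each sample
x = rng.normal(means[z], stds[z])        # x ~ p(x | z)
print(f"sample mean {x.mean():.2f}, expected {weights @ means:.2f}")
```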
