Alexander Amini
MIT 6.S191: Deep Generative Modeling
MIT Introduction to Deep Learning 6.S191: Lecture 4
Deep Generative Modeling
Lecturer: Ava Amini
*New 2024 Edition*
For all lectures, slides, and lab materials: introtodeeplearning.com
Lecture Outline
0:00 - Introduction
6:10 - Why care about generative models?
8:16 - Latent variable models
10:50 - Autoencoders
17:02 - Variational autoencoders
23:25 - Priors on the latent distribution
32:31 - Reparameterization trick
34:36 - Latent perturbation and disentanglement
37:40 - Debiasing with VAEs
39:37 - Generative adversarial networks
42:09 - Intuitions behind GANs
44:57 - Training GANs
48:28 - GANs: Recent advances
50:57 - CycleGAN for unpaired translation
55:03 - Diffusion model sneak peek
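As a companion to the outline, the reparameterization trick (32:31) can be sketched in a few lines. This is a minimal plain-Python illustration, not the course's lab code; the diagonal-Gaussian parameters `mu` and `log_var` stand in for the encoder's outputs:

```python
import math
import random

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, with eps ~ N(0, 1).

    The randomness is moved into the auxiliary noise eps, so the path
    from the encoder outputs (mu, log_var) to z stays differentiable."""
    eps = [random.gauss(0.0, 1.0) for _ in mu]
    sigma = [math.exp(0.5 * lv) for lv in log_var]
    return [m + s * e for m, s, e in zip(mu, sigma, eps)]

# Example: sample a 3-dimensional latent with unit variance
z = reparameterize([0.0, 1.0, -1.0], [0.0, 0.0, 0.0])
```

In a real VAE, `mu` and `log_var` would be tensors produced by the encoder network and `eps` would be drawn by the framework, but the algebra is exactly this.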
Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
4,443 views

Videos

MIT 6.S191: Convolutional Neural Networks
17K views · 1 day ago
MIT Introduction to Deep Learning 6.S191: Lecture 3 Convolutional Neural Networks for Computer Vision Lecturer: Alexander Amini *New 2024 Edition* For all lectures, slides, and lab materials: introtodeeplearning.com Lecture Outline 0:00 - Introduction 2:45 - Amazing applications of vision 4:56 - What computers "see" 13:09 - Learning visual features 18:53 - Feature extraction and convolutio...
MIT 6.S191: Recurrent Neural Networks, Transformers, and Attention
49K views · 14 days ago
MIT Introduction to Deep Learning 6.S191: Lecture 2 Recurrent Neural Networks Lecturer: Ava Amini New 2024 Edition For all lectures, slides, and lab materials: introtodeeplearning.com Lecture Outline 0:00​ - Introduction 3:42​ - Sequence modeling 5:30​ - Neurons with recurrence 12:20 - Recurrent neural networks 14:08 - RNN intuition 17:14​ - Unfolding RNNs 19:54 - RNNs from scratch 22:41 - Desi...
MIT Introduction to Deep Learning | 6.S191
152K views · 21 days ago
MIT Introduction to Deep Learning 6.S191: Lecture 1 *New 2024 Edition* Foundations of Deep Learning Lecturer: Alexander Amini For all lectures, slides, and lab materials: introtodeeplearning.com/ Lecture Outline 0:00 - Introduction 7:25 - Course information 13:37 - Why deep learning? 17:20 - The perceptron 24:30 - Perceptron example 31:16 - From perceptrons to neural networks 37:51 - App...
MIT 6.S191 (2023): The Future of Robot Learning
43K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 10 The Future of Robot Learning Lecturer: Daniela Rus 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com​ Lecture Outline - coming soon! Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
MIT 6.S191 (2023): The Modern Era of Statistics
78K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 9 The Modern Era of Statistics Lecturer: Ramin Hasani 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com​ Lecture Outline - coming soon! Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
MIT 6.S191 (2023): Text-to-Image Generation
43K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 8 Text-to-Image Generation Lecturer: Dilip Krishnan 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com Lecture Outline - coming soon! Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
MIT 6.S191 (2023): Deep Learning New Frontiers
80K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 7 Deep Learning Limitations and New Frontiers Lecturer: Ava Amini 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com​ Lecture Outline - coming soon! Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
MIT 6.S191 (2023): Reinforcement Learning
119K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 5 Deep Reinforcement Learning Lecturer: Alexander Amini 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com Lecture Outline: 0:00 - Introduction 3:49 - Classes of learning problems 6:48 - Definitions 12:24 - The Q function 17:06 - Deeper into the Q function 21:32 - Deep Q Networks 29:15 - Atari results and limitatio...
MIT 6.S191 (2023): Robust and Trustworthy Deep Learning
84K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 5 Robust and Trustworthy Deep Learning Lecturer: Sadhana Lolla (Themis AI, themisai.io) 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com​ Lecture Outline 0:00 - Introduction and Themis AI 3:46 - Background 7:29 - Challenges for Robust Deep Learning 8:24 - What is Algorithmic Bias? 14:13 - Class imbalance 16:25 - ...
MIT 6.S191 (2023): Deep Generative Modeling
293K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 4 Deep Generative Modeling Lecturer: Ava Amini 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com​ Lecture Outline 0:00​ - Introduction 5:48 - Why care about generative models? 7:33​ - Latent variable models 9:30​ - Autoencoders 15:03​ - Variational autoencoders 21:45 - Priors on the latent distribution 28:16​ - Re...
MIT 6.S191 (2023): Convolutional Neural Networks
239K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 3 Convolutional Neural Networks for Computer Vision Lecturer: Alexander Amini 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com Lecture Outline 0:00 - Introduction 2:37 - Amazing applications of vision 5:35 - What computers "see" 12:38 - Learning visual features 17:51 - Feature extraction and convolution 22:23 ...
MIT 6.S191 (2023): Recurrent Neural Networks, Transformers, and Attention
646K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 2 Recurrent Neural Networks Lecturer: Ava Amini 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com Lecture Outline 0:00​ - Introduction 3:07​ - Sequence modeling 5:09​ - Neurons with recurrence 12:05 - Recurrent neural networks 13:47 - RNN intuition 15:03​ - Unfolding RNNs 18:57 - RNNs from scratch 21:50 - Design c...
MIT Introduction to Deep Learning (2023) | 6.S191
1.9M views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 1 Foundations of Deep Learning Lecturer: Alexander Amini 2023 Edition For all lectures, slides, and lab materials: introtodeeplearning.com/ Lecture Outline 0:00 - Introduction 8:14 - Course information 11:33 - Why deep learning? 14:48 - The perceptron 20:06 - Perceptron example 23:14 - From perceptrons to neural networks 29:34 - Applying...
MIT 6.S191: Uncertainty in Deep Learning
30K views · 1 year ago
MIT Introduction to Deep Learning 6.S191: Lecture 10 Uncertainty in Deep Learning Lecturer: Jasper Snoek (Research Scientist, Google Brain) Google Brain January 2022 For all lectures, slides, and lab materials: introtodeeplearning.com​​ Lecture Outline - coming soon! Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to st...
MIT 6.S191: AI for Science
36K views · 2 years ago
MIT 6.S191: Automatic Speech Recognition
28K views · 2 years ago
MIT 6.S191: LiDAR for Autonomous Driving
29K views · 2 years ago
MIT 6.S191 (2022): Deep Learning New Frontiers
36K views · 2 years ago
MIT 6.S191 (2022): Reinforcement Learning
83K views · 2 years ago
MIT 6.S191 (2022): Deep Generative Modeling
79K views · 2 years ago
MIT 6.S191 (2022): Convolutional Neural Networks
119K views · 2 years ago
MIT 6.S191 (2022): Recurrent Neural Networks and Transformers
253K views · 2 years ago
MIT Introduction to Deep Learning (2022) | 6.S191
615K views · 2 years ago
LiDAR-Based End-to-End Navigation | ICRA 2021
15K views · 2 years ago
MIT 6.S191: AI in Healthcare
31K views · 3 years ago
MIT 6.S191: Towards AI for 3D Content Creation
21K views · 3 years ago
MIT 6.S191: Taming Dataset Bias via Domain Adaptation
22K views · 3 years ago
MIT 6.S191: Deep CPCFG for Information Extraction
20K views · 3 years ago
MIT 6.S191: AI Bias and Fairness
46K views · 3 years ago

COMMENTS

  • @civilengineeringonlinecour7143 · 17 hours ago

    Awesome lecture. 🎉

  • @mariuspy · 17 hours ago

    this is an "intro" class, yet it is attended by experts 😅

  • @ahmadkhadra9766 · 19 hours ago

    Would you get in touch with me?

  • @ghaithal-refai4550 · 20 hours ago

    Thanks for the video, but StatQuest did a better video on this topic with more details.

  • @geoffreyporto · 20 hours ago

    I have a dataset of 120 images of cell phone photographs of the skin of dogs sick with 12 types of skin diseases, with a distribution of 10 images for each dog. What type of Generative Adversarial Network (GAN) is most suitable to increase my dataset with quality and be able to train my DL model? DcGAN, ACGAN, StyleGAN3, CGAN?

  • @mukeshodhano4094 · 23 hours ago

    If tf.keras.layers.Dense(units=2) does not work, try this instead:
        import tensorflow as tf
        from tensorflow.python.keras.layers import Dense
        layer = Dense(units=2)

    • @mukeshodhano4094 · 23 hours ago

      BTW, I am using TensorFlow 2.16.1.

  • @wingsoftechnology5302 · 1 day ago

    can you please share the Lab session or codes as well to try out?

  • @gapcreator726 · 1 day ago

    Nice amini teaching❤ and your curly hair nice😮

  • @freddybrou405 · 1 day ago

    Thank you so much for the course. So interesting.

  • @genkideska4486 · 1 day ago

    Who's here for the curly hair lady 🥰 ?

  • @genkideska4486 · 1 day ago

    5 mins more let's gooooo

  • @catalinmanea1560 · 1 day ago

    awesome, many thanks for your initiative ! keep up the great work

  • @omartariqmuhammed · 1 day ago

    It's finally out!! 🤗🤗

  • @SurprisedDivingBoard-vu9rz · 1 day ago

    Please send me a degree certificate to get a job in US. Some degree certificate is okay. God bless USA. Or great Britain. All AI only. Some degree tag is enough. I ask because Bill gates really needs a degree for a job offer for me.

  • @arhabhasankhan8465 · 1 day ago

    Thank you!

  • @aurabless7552 · 1 day ago

    when gpt 4o lectures :D

  • @yasirfarooq2256 · 1 day ago

    TOP G

  • @erikkim4739 · 2 days ago

    so excited for this!

  • @mrkshsbwiwow3734 · 2 days ago

    what an awesome lecture, thank you!

  • @bobo-lc4yi · 2 days ago

    I'm confused, is the 2024 version a short form of this course? Because you mentioned it's just 1 week, or am I missing something?

  • @martinriveros3470 · 3 days ago

    Excellent video! Just a minor comment: at about 27:00 I think you should state clearly that (1 + 3x1 - 2x2) = z and include the "hat" on y (in the graph)...🖖
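For readers following along with this comment, the perceptron step it refers to can be written out directly. A minimal sketch, assuming the slide's weights (3, -2), bias 1, and a sigmoid activation (the exact activation in the lecture may differ):

```python
import math

def sigmoid(z):
    # Logistic activation: maps any real z to (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def perceptron(x1, x2):
    # Pre-activation: z = 1 + 3*x1 - 2*x2 (bias plus weighted inputs)
    z = 1 + 3 * x1 - 2 * x2
    # Predicted output y_hat = g(z), here with g = sigmoid
    return sigmoid(z)

y_hat = perceptron(x1=0.0, x2=0.5)  # z = 0, so y_hat = 0.5
```

Writing z and y_hat as separate steps is exactly the distinction the comment asks the slide to make explicit.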

  • @arjitc12 · 3 days ago

    🔥

  • @Hustler0109 · 3 days ago

    Sir's explanation is better than any Udemy or Coursera course out there, fr 😮

  • @robsoft_gt · 3 days ago

    So basically what Meta has done with Llama 3 is give the community the weights for each perceptron?

  • @chezhian4747 · 4 days ago

    Dear Alex and Ava, thank you so much for the insightful sessions on deep learning, which are the best I've come across on YouTube. I have a query and would appreciate a response. If we want to translate a sentence from English to French using an encoder-decoder transformer architecture, then based on the context vector generated by the encoder, the decoder predicts the translated words one by one. My question is: for the logits generated by the decoder output, does the transformer model provide a weight for every word available in French? For example, if there are N words in French, and the softmax function is applied to the logits generated by the decoder, does softmax predict a probability for all N of those words?
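On the question above: yes, at each decoding step the output layer produces one logit per target-vocabulary entry, and softmax normalizes all of them, so every one of the N words receives a probability and the probabilities sum to 1. A minimal sketch with a toy vocabulary and made-up logits (not the lecture's code):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy target vocabulary of N = 4 French tokens with made-up decoder logits
vocab = ["le", "chat", "dort", "</s>"]
logits = [2.0, 0.5, 0.1, -1.0]
probs = softmax(logits)

# Every vocabulary word gets a probability, and they sum to 1.
assert len(probs) == len(vocab)
assert abs(sum(probs) - 1.0) < 1e-9
```

In a real model N is the full vocabulary size (tens of thousands of tokens), but the normalization works the same way.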

  • @Priyanshuc2425 · 4 days ago

    Hey, if possible please upload how you implement these things practically in the labs. Theory is important, and so is practical work.

  • @AreshaBasirSpriha · 4 days ago

    I loved this, it's my major course... It's extremely helpful. Love from Bangladesh.

  • @bloodline39 · 4 days ago

    Why TensorFlow, why not PyTorch? 🥲

  • @enisten · 4 days ago

    How can we be sure that our predicted output vector will always correspond to a word? There are an infinite number of vectors in any vector space but only a finite number of words in the dictionary. We can always compute the training loss as long as every word is mapped to a vector, but what use is the resulting calibrated model if its predictions will not necessarily correspond to a word?
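On this point: the model does not predict an arbitrary vector and then search for a nearby word. Its output layer has exactly one logit per dictionary entry, so the prediction is always a distribution over the finite vocabulary, and decoding picks a word from it (e.g. the argmax). A minimal sketch with a toy vocabulary and made-up logits:

```python
# The output layer emits one logit per dictionary word, so the
# prediction is always a choice among the finite vocabulary.
vocab = ["cat", "dog", "mat", "sat"]
logits = [0.2, -1.3, 2.7, 0.5]   # made-up decoder outputs

# Greedy decoding: pick the word whose logit is largest.
predicted = vocab[max(range(len(vocab)), key=lambda i: logits[i])]
assert predicted == "mat"
```

Embeddings map words into a continuous space on the input side, but on the output side the softmax layer guarantees every prediction lands on a real word.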

  • @karterel4562 · 4 days ago

    Thanks for sharing this course, it's so useful!

  • @user-to1qv7oz6g · 5 days ago

    how many clocks can go off in a school

  • @user-to1qv7oz6g · 5 days ago

    look for the countdowns, what does ai look for and if it can understand you training with it. what were the biggest fixes.

  • @RazWorld.. · 5 days ago

    Thanks Amini- such a great introduction 🙋🏾‍♂️👌🏾

  • @abdelazizeabdullahelsouday8118 · 5 days ago

    Thank you for sharing. Please, I need help; I sent an email to you but got no response. Could you please help me? Thanks in advance.

  • @gominboda_go · 6 days ago

    This is amazing work. Thank you for sharing.

  • @tmcgraw · 6 days ago

    right?

  • @a0z9 · 6 days ago

    I wish everyone were this competent. It's a pleasure to learn from people who have clear ideas.

  • @ghaithal-refai4550 · 6 days ago

    Thank you very much, it is a great lecture. I hope you develop the lectures over the years, as the contents seem to stay the same. Topics like pretrained models, transfer learning, and YOLO might be good additions to the CNN lecture.

  • @opalpearl3051 · 6 days ago

    Thank you for sharing this wonderfully put-together course for the general public's benefit. I would love to get some insight into the lab work the students go through as an adjunct to the lectures. Will that be possible in the future?

  • @mdidris7719 · 6 days ago

    Excellent, so great. Idris, Italy.

  • @faridsaud6567 · 7 days ago

    Amazing, top content! Out of curiosity: why TensorFlow instead of PyTorch?

  • @tonyndiritu · 7 days ago

    🔥🔥🔥

  • @crazy.vlog369 · 7 days ago

    Finally I watched your session. Please post the next session!

  • @kyhines1060 · 7 days ago

    You make it so understandable

  • @TheNewton · 7 days ago

    51:52 Position Encoding - isn't this just the same as giving everything a number/timestep, but with a different name (order, sequence, time, etc.)? So we're still kind of stuck with discrete steps. If everything is coded by position in a stream of data, won't parts at the end of the stream be further and further away in the space from the beginning? So if a long sentence started with a pronoun but then ended with a noun, the pronoun representing the noun would be harder and harder to relate to it: 'it woke me early this morning, time to walk the cat'
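For context on this question: in the original Transformer formulation the positional encoding is not a single step counter but a vector of sinusoids at different frequencies, which lets attention pick up relative offsets rather than raw distance, and attention itself connects any two positions directly regardless of how far apart they are. A minimal sketch of the standard sin/cos formula (with a tiny d_model = 4 for illustration):

```python
import math

def positional_encoding(pos, d_model=4):
    """PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
       PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))"""
    pe = []
    for i in range(d_model // 2):
        angle = pos / (10000 ** (2 * i / d_model))
        pe.append(math.sin(angle))  # even dimension
        pe.append(math.cos(angle))  # odd dimension
    return pe

# Position 0 encodes as alternating sin(0) = 0 and cos(0) = 1.
assert positional_encoding(0) == [0.0, 1.0, 0.0, 1.0]
```

Because each frequency wraps around periodically, the encoding is not a monotonically growing scalar, so distant positions do not simply drift apart in the embedding space.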

  • @DreamBuilders-rq6km · 7 days ago

    Thanks for sharing this knowledge. Be blessed

  • @AmarVashishth · 7 days ago

    I attended deep learning lectures at a top college in my country; here he clearly explained in a single lecture everything that took tens of lectures there.

  • @husseinekeita8909 · 8 days ago

    Thank you for sharing quality content like this for free for several years.

  • @4threich166 · 8 days ago

    Where is the software lab?