BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//5.981//EN
X-WR-TIMEZONE:Asia/Jerusalem
BEGIN:VEVENT
UID:95@web.iem.technion.ac.il
DTSTART;TZID=Asia/Jerusalem:20210706T123000
DTEND;TZID=Asia/Jerusalem:20210706T133000
DTSTAMP:20210630T095448Z
URL:https://web.iem.technion.ac.il/site/iemevents/variational-data-augment
ation-with-deep-learning-generative-models-for-imbalanced-learning/
SUMMARY:Variational Data Augmentation with Deep Learning Generative Models
  for Imbalanced Learning
DESCRIPTION:By: M.Sc. Yanay Manheim\n Advisor: Prof. Tamir Hazan\n Whe
 re: ZOOM\n From: Technion\nAbstract: Deep Learning models are known fo
 r their ability to successfully solve Machine Learning problems invol
 ving large amounts of data with evenly distributed classes. In contra
 st\, imbalanced learning problems\, where some classes are underrepre
 sented\, remain a challenge and often require creative solutions to b
 e solved effectively. This work focuses on balancing the class distri
 bution by generating examples from the underrepresented class\, know
 n as oversampling\, using deep generative models based on Generativ
 e Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs). D
 eep generative models are designed to work with high-dimensional (pos
 sibly structured) data\, as opposed to other methods such as the Synt
 hetic Minority Over-sampling Technique (SMOTE) and the Adaptive Synth
 etic Sampling Approach (ADASYN). This thesis proposes a direct variat
 ional generative discriminative network that is able to generate inst
 ances conditioned on additional data\, such as a class label. Our met
 hod models an instance as a composition of discrete and continuous la
 tent features in a probabilistic model. The discrete and continuous v
 ariables are sampled using Gumbel-max and standard Gaussian reparamet
 erization\, respectively. The resulting loss objective still include
 s an argmax operation\, which can be estimated with softmax-based rel
 axations. Instead\, the discrete loss is optimized directly by applyi
 ng the direct-optimization-through-argmax approach for discrete VAE
 s. We use the encoding of data into a discrete component as a classif
 ier\, which means that once training is done\, we are left with bot
 h a generator and a classifier. By augmenting the data\, the classifi
 er is able to learn from a larger\, balanced\, and diversified traini
 ng set. During training\, synthetic data is generated and added to ev
 ery mini-batch in order to improve classification performance and gen
 eralization\, which is measured by several evaluation metrics. The ef
 fectiveness of our proposed method is demonstrated on two data sets a
 nd compared to the latest state-of-the-art deep-learning-based genera
 tive-discriminative algorithms.\n\nZoom Link\n\nhttps://technion.zoom
 .us/j/97933825886
CATEGORIES:Computational Data Science (CDS) Seminar,Seminars
END:VEVENT
BEGIN:VTIMEZONE
TZID:Asia/Jerusalem
X-LIC-LOCATION:Asia/Jerusalem
BEGIN:DAYLIGHT
DTSTART:20210326T030000
TZOFFSETFROM:+0200
TZOFFSETTO:+0300
TZNAME:IDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20211031T020000
TZOFFSETFROM:+0300
TZOFFSETTO:+0200
TZNAME:IST
END:STANDARD
END:VTIMEZONE
END:VCALENDAR