Earlier today we heard a bit about practical things in deep learning and applying deep learning. Mustafa talked about a lot of different things you can do in models to make them train better or generalize better: things like batch normalization, dropout, different kinds of weight regularization techniques, and issues related to overfitting and underfitting.
Joel was also talking about a lot of interesting stuff, so I think, if you haven't already, you can try to implement some of these ideas with the examples here. In particular, I would strongly recommend at this point looking at the overfitting and underfitting notebook.
I do encourage you to look into this, and it's going to have you train some models. First you train a small one, a medium one, and a big one on the same data, all in one go, and then it compares the results. Then it introduces some regularization techniques, and you look at the results again.
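The small/medium/big comparison the notebook does can be sketched roughly like this, assuming the tf.keras Sequential API; the layer sizes and the MNIST-style 784-feature input are illustrative, not the notebook's exact architecture:

```python
from tensorflow.keras import layers, models

def make_model(hidden_units):
    """Build a simple dense classifier; hidden_units controls capacity."""
    model = models.Sequential([
        layers.Dense(hidden_units, activation="relu", input_shape=(784,)),
        layers.Dense(hidden_units, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Small, medium, and big models trained on the same data can then be
# compared by plotting their training vs. validation curves side by side.
small, medium, big = make_model(16), make_model(64), make_model(512)
```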
Take a look at the final results and see what you think, because I'm going to ask you about it a little bit later. I want you to think about these questions. You've seen this a bunch of times, you've seen these kinds of plots; just make sure it's sunk in: how do you know whether you're underfitting? How do you know whether you're overfitting? What are the real diagnostics, and then what do you do if it's overfitting?
What do you think, based on that example, seems to be the most promising thing to try? This is a little bit of a trick question: what is the most ideal way to improve it? Think about that, and then think about things you can do if you don't have the ideal situation. On underfitting, there's not too much here; people don't talk about it quite as much, but it's worth thinking about. And then take a look at this case here.
All of these are actually overfitting, in the sense that the training and validation curves always diverge, so I'm curious to see if you can actually build a model that just does not overfit at all in this case. I didn't try it myself, so I don't even know how hard that is, but I think this is very worthwhile to go through. So do that. And then one other thing that I added: did everybody go through the convolutional neural networks one?
The dataset is, of course, very small, but you can do things like now try to add data augmentation. There's documentation here with some examples of this ImageDataGenerator thing in Keras, and this will also work in the TensorFlow Keras. Notice I'm on the keras.io page here, but, like you were told before, everything that's on keras.io is just an API spec, so you can also use just tf.keras for all of this.
This thing in particular is very convenient for applying a lot of different kinds of transformations you might care about on image datasets, to augment the training data and help your model learn things more easily: learn more symmetries, or things that shouldn't affect the mapping of input to output. So translations in the images, and various image-specific changes that, if you're an image expert, you probably understand, like ZCA whitening (with an epsilon), some normalization, some flipping, rescaling, or even, I'm pretty sure I saw, rotation.
Yeah, there's some kind of rotation thing you can do, and you don't have to change too much if you do this. This example actually uses the CIFAR-10 dataset; this might work a little better on CIFAR-10. So one thing you could do, if you want, is just copy the notebook, make a new notebook, and instead of doing the MNIST problem you could use CIFAR-10.
A
Fart
n
is
very
easy,
so
you
just
have
this:
you
just
import
the
sigh
farts
anthing,
so
this
doesn't
let
you
show
where
you
import
so
I,
fart
n.
It
must
be
in
some
other
example,
but
you
could
Google.
How
do
you
get
so
far?
10
it's
just
like
from
cara's
data
sets,
I
think
imports
I,
fart
n.
Of
course
there
should
be
from
tensorflow.
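For reference, a minimal sketch of loading CIFAR-10 through tf.keras (this downloads the data on first use):

```python
from tensorflow.keras.datasets import cifar10

# Download (on first use) and load CIFAR-10 as numpy arrays.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# x_train has shape (50000, 32, 32, 3); labels are integers 0-9.
```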
A
Actually
it
might
not
even
matter
where
you
import
that
one
from
from
which
cares,
but
this
is
a
numpy
data
set
on
the
the
Emmis
one
we
had
was
a
numpy
data
set.
So
basically
you
just
have
a
way
to
kind
of
wrap
it
in
this
image.
Data
generator-
and
one
thing
to
note,
is
that
instead
of
calling
model
fit,
we
call
model
fit
generator.
If
you
do
this,
so
fit
generator
just
means
we're
gonna,
pass
it
some
kind
of
python
generator
or
generator
like
object.
It's something that's just going to produce batches of data. So if you were using pure Keras and you had your dataset in many different files that you need to be able to open and close, you would have to do it with some kind of generator. But in this case we just use it to generate batches from this existing numpy array, which fits in memory. So you can just call flow on the dataset and it will generate batches, and you can pass that as the generator.
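Putting those pieces together, a sketch of the flow/fit_generator pattern; the random arrays here are hypothetical stand-ins for the notebook's data:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical in-memory arrays standing in for the notebook's dataset.
x_train = np.random.rand(100, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=(100,))

datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1)

# flow() wraps the in-memory numpy arrays in a batch generator...
batches = datagen.flow(x_train, y_train, batch_size=32)

# ...which is then passed to fit_generator instead of fit, e.g.:
# model.fit_generator(batches, steps_per_epoch=len(x_train) // 32, epochs=5)
# (In recent tf.keras versions, model.fit accepts the generator directly.)
```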
So this should be pretty easy for you to try. If you try it on the MNIST thing, you may or may not see any kind of improvement in the model, since it's pretty simple. But if you do see some improvements and they're good, then I'd be curious to know. So that's something you can try, and I encourage you to do it.
You could try things like adding batch normalization. You may have seen snippets of batch normalization code in Keras so far; I'm not sure, it probably flew by in a slide or something, but it's very easy to find the documentation. It's really just a layer. Again, if you managed to add another convolutional layer to your model, it's pretty straightforward to instead add in a normalization layer. Okay, so see if you can do that, and see if you see any difference.
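Since batch normalization really is just a layer, adding it looks something like this; the small CNN here is an illustrative sketch, not the notebook's model:

```python
from tensorflow.keras import layers, models

# A small CNN showing where a BatchNormalization layer typically goes:
# between a convolution and its activation.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), input_shape=(28, 28, 1)),
    layers.BatchNormalization(),   # normalize activations batch-wise
    layers.Activation("relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
```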
A
You
may
not
an
in
this,
but
when
you
think
about
applying
this
to
your
own
scientific,
datasets,
Bachelor
mobilization
matron
out
to
be
important.
Okay,
so
yeah
that
that's
really
it
in
terms
of
what
I
would
guide
you
to
do
again.
The
the
advanced
examples
are
still
here.
If
you
haven't
worked
through
some
of
those,
you
can
go
ahead
and
do
that
the
code
can
be
a
little
more
complicated.
It's
using,
you
know,
I
think
pretty
much.
A
If
you
have
questions,
if
you
have
issues
raise
your
hand,
we'll
try
to
come
to
you
and
help
you
or
ask
on
slack,
we
only
we're
only
going
to
be
here
doing
this
for
a
little
bit,
though
only
another
30
minutes,
and
then
we
have
a
talk,
so
do
what
you
can
and
you
can
still
do
stuff
on
Thursday
and
the
self-guided
hands-on.
That
will
be
part
of
the
working
lunch
yeah
and
you
can.
Folks were having issues logging on to Jupyter, getting these 500 errors, and our Jupyter expert Rawlin is able to see some of that activity and can clear out the issue with certain accounts. So if you had an issue earlier, try again; it might be fixed now. If you see it now, let us know. Since there's about 15 minutes before the next talk, I'm just curious if anybody noticed this thing. This overfit and underfit notebook certainly shows a bit of interesting stuff.
A
It
shows
how
you
can
implement
these
things,
but
did
anybody
have
any
kind
of
thoughts
on
the
final?
The
final
result,
any
yeah
I,
think
I.
Think
you
get
it
I
guess
it's
just
not
a
great
example,
but
if
you
look
at
this
plot,
so
there's
the
blue
lines
and
then
there's
the
yellow
lines
right
and
actually
they
don't
have
the
base
line
on
here.
But
this
one
is
baseline
with
l/2.
Yellow seems better; certainly there's less of a generalization gap over here, but is this where we would take our model from?
It's probably just noise; they're basically equivalent. So it indicates that you're improving the gap a little bit, but in this case it's just a bad example: it doesn't seem to actually be any better. And here it was the same, because this is your best result right here. So I think it's just a little bit too much of an academic example.
I think there are people who would contest that philosophy, who would say it's better to start simple, where you understand the models, and then progressively get more complex. I'm a little more in that camp, but I think it does depend on the final outcome. Maybe if you really, really want to get state of the art, then this is what makes sense.
But if you're just starting to do research, maybe it makes more sense to start slow. So there's something now you can do, because it's actually pretty easy to put early stopping into these models. There's just another Keras callback for that, so you can look up how to do Keras early stopping; there are plenty of links, most of them blog posts, it seems. This one is TF 1.14, but that should be the same. So there should be an example: you can create an early stopping callback.
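Creating that callback can be sketched like this; the monitored quantity and patience value are illustrative choices:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop training when the validation loss stops improving.
early_stop = EarlyStopping(
    monitor="val_loss",         # quantity to watch
    patience=5,                 # epochs with no improvement before stopping
    restore_best_weights=True,  # roll back to the best epoch's weights
)

# Passed to training via the callbacks argument, e.g.:
# model.fit(x_train, y_train, validation_split=0.2, callbacks=[early_stop])
```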
Well, there's not a huge number of things that maybe you haven't played with or thought about yet. There's checkpointing, which was shown in one of the examples, and stuff for doing things with the learning rate. So Mustafa talked a little bit about how sometimes it's useful to decay the learning rate at various stages of the training. One of these things, like LearningRateScheduler, lets you do that, or ReduceLROnPlateau: this would check, has the training kind of stagnated? Okay, let's decay the learning rate.
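The plateau-based decay being described can be sketched with the ReduceLROnPlateau callback; the factor and patience here are illustrative:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate when the validation loss has stagnated.
reduce_lr = ReduceLROnPlateau(
    monitor="val_loss",  # quantity to watch for a plateau
    factor=0.5,          # multiply the learning rate by this factor
    patience=3,          # epochs with no improvement before reducing
    min_lr=1e-6,         # lower bound on the learning rate
)

# model.fit(x_train, y_train, validation_split=0.2, callbacks=[reduce_lr])
```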