A
Okay, I think we're going to get started again. So this is the practical hands-on session. I see you've all got laptops at the ready; I'm sure you're not doing other work, you're just anxiously setting yourselves up to do these exercises. So, Steve's leading this work in the Data Analytics group. He's our deep learning expert for things like running tutorials like this, and also for supporting Python and machine learning. He's also co-chair of the machine learning working group.
B
So now we're going to actually try to play with some of the things that we've been talking about today, particularly some of the stuff that Josh was just talking to you about. What we have are basically the exact tutorials that are the official TensorFlow ones, the TF 2.0 beta ones, the ones that he was pointing you to on the website.
B
Looking through all of these, we've grabbed a number of them and basically just dropped them into a repository here for convenience, with some minor updates so that they should work out of the box on Cori GPU: changing them to load the right Python, and to not install TensorFlow, because we already have it installed, that kind of thing. Okay. So that's basically what you're going to do.
B
Probably the easiest way to get to the stuff is to go to the agenda page, this one right here. We're here on Monday, at the end of the day: "Building ANNs using Keras." If you click that, it should pull up the GitHub repo. You can also just find it by googling, probably, or you can go to github.com/NERSC and just browse the repositories and find it. It might even be the top one right now, I'm kind of curious... well, it's third, okay.
B
So what we have here are these notebooks and a little bit of instruction, some README information to kind of guide you through it. It's going to be fairly self-guided, though. I have some suggestions on what you should do: you should start with the basic stuff. Some of you might be bored with the basic stuff; feel free to skip those and go right on to the advanced things. I'll get into making sure you get set up on Cori GPU in just a second.
B
But let me just give a quick overview here first. In the README, as you scroll down, you see I have some contents here: there's setup stuff, then there are introductory examples and advanced examples, particularly for those of you who are new to deep learning, new to Keras, new to TensorFlow 2.0.
B
So, for example, we can click on the basic classification one, and you see these are basically the web pages that Josh was showing you. This is basically a rendered version of the notebook, and you have the buttons that he showed, so you can click and run in Google Colab. So this is one option for you: you can run all of these in Colab.
B
If you like, or if you're feeling too lazy to fill out the form and get access to our training accounts, that's up to you; everything should run fine in Colab. Did everybody get the accounts? Well, maybe not everybody, but okay... you said we gave out almost all the accounts, right? Yeah. So presumably nearly all of you have done that. But if you have some issues, you can always use Colab as a backup.
B
Okay, for example, if your training account doesn't work, or if for some reason all the GPUs are gone. We have 120 GPUs on the Cori GPU partition available for us right now; we might have more people than that trying to use them. So there might be a handful of people for whom there's just no GPU available, and you might have to use Colab.
B
Okay. Basically, if you have any issues running the stuff, raise your hand and somebody will come and help you; I'll get to who the TAs are in a moment. But back to where we were: I have some stuff for all the notebooks. I do recommend that you start with the basic ones, at least if that's applicable to you. So, basic classification: I think we'll look at it a little bit together, just to give you a sense before you're thrown in the deep end, and then there's convolutional neural networks.
B
So it's really at the same very introductory level, but it will show you how to do CNNs, and Josh showed a little of that already as well. Beyond those first two, you can kind of pick and choose what you want to do next. Josh mentioned the classify-structured-data one a little bit. That's basically for when you have tabular data: data that sits in something like a pandas DataFrame, where you have names for the columns, and you want to build into your model...
B
...some feature-wise kinds of transformations, like the bucketizing of features that he talked about. It basically shows you how to do all that. Then there's overfitting and underfitting; that one is a lot of practical stuff. I do actually encourage you to look at it, because it's going to show you: if you have a model that's very overfit, what do you do? Let's try adding regularization, and let's see, does it get better?
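The regularization idea is easy to see outside of Keras. Below is a minimal, framework-free sketch (not the notebook's code, just an illustration): an L2 penalty adds a `lam * w**2` term to the loss, so its gradient pulls every weight toward zero, which is the mechanism that combats overfitting.

```python
# Toy illustration of L2 (weight-decay) regularization on a
# one-parameter least-squares problem: loss(w) = (w*x - y)^2 + lam*w^2.
def grad_step(w, x, y, lam, lr=0.1):
    grad = 2 * x * (w * x - y) + 2 * lam * w  # data gradient + L2 gradient
    return w - lr * grad

w_plain, w_reg = 1.0, 1.0
for _ in range(200):
    w_plain = grad_step(w_plain, x=1.0, y=1.0, lam=0.0)
    w_reg = grad_step(w_reg, x=1.0, y=1.0, lam=0.5)

print(round(w_plain, 3))  # 1.0, fits the data exactly
print(round(w_reg, 3))    # 0.667, shrunk toward zero by the penalty
```

The regularized weight settles at 2/3 instead of 1: the penalty trades a little training-set fit for smaller weights, which is exactly the knob the overfitting notebook has you turn.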
B
How do I actually use it to do science after I've trained it? So yeah, these are very much practical things. And then it gets pretty interesting as you go down to the advanced notebooks. There, depending on the time that you have, you can look into these according to your preference, and you don't only have tonight. First of all, sorry, I'm going a bit out of order in telling you things, but we're going through hands-on today, 3:30 to 5:00 p.m., right at the end of the day; but notice that tomorrow also...
B
...we have this training session here in the afternoon. So that's a chance for you to run things some more: to go back and try more notebooks, or to tweak them more, customize them more, on things you didn't get to before. And then there's yet another one: on Thursday there's a self-guided hands-on as part of this working lunch, though you might just want to eat.
B
We may have something like a hyperparameter-optimization example you can run then, and some other stuff. So yeah, you don't have to get through it all today. You don't just have to click every cell as fast as you can and try to get done; I do encourage you to take your time, try to understand things, and read through it. So, the advanced ones: there's a lot of fancy stuff there, Josh pointed you to some of these a little bit, like defining custom layers.
B
There are some that focus a little bit on how you do the data pipeline, the processing and pre-processing of data, but built into TensorFlow 2.0. So "loading and pre-processing images" has a lot of different kinds of examples of how you can use tf.data to load your images, do some pre-processing, and so on. It covers some pitfalls, what might be slow and what might be fast, and it does some timing, so you can actually see: oh, this is fast.
B
This is slow, to give you a sense of what you should do when you write your own tf.data pipeline. So it's very practical, but also getting a little bit more technical. If you don't care about that so much, if you don't care about performance too much, you can probably skip that one, and you might be more interested in playing with things like generative models. So there's a DCGAN example and a variational autoencoder example, and I did recently put in the CycleGAN example...
B
...basically because I saw that Josh was, I think, going to talk about it. With these more advanced ones, there's only so much you can do in something like an hour and a half on a single GPU. Obviously you'll be able to run these, but they might take a while. You might want to decrease the epochs if it looks like it's going to take a long time. I already did decrease the epochs for some of them so that they would finish a little bit faster, which means you might not get really state-of-the-art results when you run them.
B
So don't expect really amazing translation of horses to zebras; it's going to look a little bit messy. Or the image-captioning one: I did play with that once and saw a pretty bizarre example. I won't show it to you, but it went a little crazy. So yeah, you can work through those; image captioning was one of them. There's also an example here...
B
...if you want to try to get TensorBoard working in the notebooks. We did manage to get this to work; I'm not sure it's really production-ready yet, so you might run into issues, and you can let us know if you do. Okay, so I think what we'll do now is really go back to making sure everybody can get set up and start running these.
B
So hopefully everybody filled out the form and got a training account. We actually don't have any more training accounts now, so if you don't have a training account yet, I think you probably just have to use Colab at this point; there might still be a few. Does anybody still need a training account? Everybody's filled out the form and got a training account if they wanted one? Okay. The training accounts will last throughout this week, and then they will be purged and reset at some point on the weekend or early next week.
B
I think you could probably use the give command, actually. If somebody's interested in doing that, we can show you how to do it. There's a give command, so you can give something to another user on NERSC and it'll cache it in scratch; then they get an email and use take to pick it up. So I think that's probably the easiest.
B
So if anybody has a NERSC account and wants to know how to do that, just ask us and I'll show you; I expect that's probably the minority of people, but okay. So, you have your account, and now you need to log in. I've actually seen that there are a number of training accounts already logged in on Cori GPU. So how many people have already started up Jupyter, kind of worked ahead in the instructions, and got a GPU node? Most of the people?
B
Okay, that's great. But for the rest, who haven't done it yet, who weren't just eager or who were, you know, paying close attention to the lectures, I'll just briefly show you what to do. So first of all... okay, we'll start with Cori GPU and then I can mention Colab. For Colab, you see here, you just click that button and you get a Colab runtime, as Josh showed. Now, Cori GPU:
B
You need to use the URL that he mentioned earlier to get to JupyterHub; that's jupyter-dl.nersc.gov, okay. If you're already logged in and you try to come in again, you might see something like this. I think what I'll do now is a little bit risky: I'm going to log out and show you from scratch.
B
That didn't kill it, I just logged out; I'll remind you about that pitfall later. Don't just log out, don't just close your browser: you do actually need to kill your job manually. It's just a minor quirk of our setup that we hope to make better in the future. So here you're going to put your training account name, train123, and you're going to put the password here. You're not going to put anything for OTP. And even if there's a third column on the sheet, which is just the training account index, don't put that in there.
B
Okay, and then you're going to get those buttons we showed earlier: GPU node, CPU node. I will stress again: for training, we want you to use a GPU node. The next thing is, if you try to use a CPU node and you start to run the notebooks, I think you will get some error message right away, because it'll try to use CUDA libraries, and the TensorFlow build has CUDA, and I think it won't work.
B
I could be wrong, but you definitely want a GPU node, because it's faster, it's kind of the fancy stuff that we have, and that's what this is for, for you to run on; so you won't be hogging resources on a shared thing. The GPU nodes are actually shared nodes, but you get a fixed subset of a node, and you get one of the GPUs on that. Okay, that's a technical detail...
B
...you don't really need to worry about. But once you've clicked that button to start up your server on a GPU node, you should land on a page that looks a little bit like this. It might look a little bit different, because I have a bunch of these custom kernels; you might only see a few, okay. But then what you're going to want to do is clone the repository...
B
...so you have all the notebooks ready to go. The easiest way to do this is to start a terminal, and this is all in the instructions, by the way, under "getting started on Cori GPU," right here around the second part: start a terminal. That means going to the bottom and finding the terminal, okay. So these at the top are basically notebook kernels, to run things like Python in a Jupyter notebook; there's also this console stuff, which is just a Python console kind of thing, not in a notebook; and then down at the bottom...
B
...you can do things like open a text file or open a terminal. So this is what we want to do: we want to open a terminal, because we're going to use that to clone the repository with git. Okay, so you should see a nice, regular Linux terminal, and now you can copy and paste this. Depending on your setup, if you have a Windows laptop or possibly a different browser, sometimes copy and paste doesn't work. Apologies for that; that's very annoying for those who have to type it out.
B
But for those of you, you know, one of us folks with MacBooks, as the vast majority of us are, it's very easy: you should just be able to copy/paste the git clone command. Now it should be in your home directory, and of course you won't have so much stuff here, because this is my own account.
B
But now, when we look at the file browser on the left, we see we have this dl4sci-tf-tutorials folder; that's the one we just checked out. We can double-click that and open it, and now we see all the notebooks here. We also see the README; this README corresponds to what you're looking at on the web page.
B
So what you can do is either have the README open here, just kind of browse through it, and then click back to the notebooks that you want to run. But what I actually think is probably best is to keep this repository open in your browser as a tab, so you can kind of work back and forth between the instructions here, guiding you through which notebook you want to do next, and Jupyter, where you're actually running the notebooks. Okay.
B
Right, luckily it's not the worst possible thing to have to type, so you might have to flip back and forth a couple of times to get it, but you know. You just need to know it's github.com, NERSC, dl4sci-tf-tutorials, okay. It might take you a couple of flips back and forth, but it should be doable, hopefully.
B
Yeah, okay, really fancy techniques now, two browsers side by side, yeah. Okay, however you want to do it: just figure out how to type in that git command, and then after that things should be easier for you. Okay, so we're going to run notebooks, and you're going to kind of choose which ones you want to do. Hopefully enough people are interested in the introductory ones that this is interesting.
B
I have things like little quiz questions. After you run the notebooks, try to think about these questions and see if you understand, see if you can find the answer. They're not very complex; they're just some pretty basic things. On a couple of notebooks I have suggestions of challenges: things to try now that you've finished the notebook, like tweaking things, changing the network architectures, stuff like that.
B
So I really only have those for the first three notebooks, but you can probably come up with your own ways to tweak things in the later examples. And as you run into things, or if you just have questions, questions about the machine learning, the code, the models, whether something works, whatever, what you can do is raise your hand, and one of us will try to get to you. It's going to be very hard to reach the folks who are in the middle; just bear with us.
B
We'll try to do our best to get to where you are; you might need to kind of meet us halfway or something like that. We'll play it by ear and try to do our best. This is not the best room for this kind of hands-on thing, but it was the best that we could get for the size, for the number of people we were going to have. So we have kind of an army of secret agents in here, basically TAs, who are going to be able to help you, and I'm going to...
B
...ask them all to stand up right now. So, if you'd agreed to help people with technical or machine learning issues, please stand up. These are all the people whose pictures you saw in the slide earlier today. He made it sound like you needed to memorize those photos, so this is your chance: now you don't necessarily need to memorize the photos, since you can see what they're actually wearing today and what their hairstyle is today.
B
So these are the folks who will hopefully be able to help you; if they can't help you, maybe they'll try to flag somebody else who can. Hopefully we won't have any issues and everything will just run out of the box, and then you might just have interesting machine learning questions, and we're happy to talk about those too. Okay, any questions before we start going through the first notebook together?
B
So again, if you go through this, you see we want to do the first one, basic classification; there's the name of the notebook. I probably could have renamed things so that they would show up in order in the file system here, but I didn't do that. It's alphabetical, so you have to look through and find it; but in this case it actually is the first one, because it starts with "b."
B
Okay, so basically you should see that all of these are already using this kernel, this installation up here that we have set up for Cori GPU: tensorflow-gpu 2.0 beta, okay. If you do want to try another notebook that I didn't include in here, that's one thing you'll have to change. I didn't put every advanced example in this repository; I just grabbed some that I thought people might be interested in trying.
B
But if you want to try another one, like one of the ones Josh mentioned that I don't have in here, you can always download it from the TensorFlow web page, like this: "Download notebook," okay. And then in Jupyter you can upload a notebook, so you can upload it and it'll just appear right there, along with all the other notebooks. But when you run it, it's not going to be using the GPU-enabled software out of the box; in fact, it's going to default to one of our standard Python 3 kernels, okay.
B
So what you can do then is just sort of click up here and change the kernel; again, you won't have this many in your list, but find the tensorflow-gpu 2.0 one, okay. So that's just something to think about if that's something you want to try. Of course, if you have issues, we can fix them as they come up. Okay, but really, these should all just run out of the box.
B
Okay, so we had changed the kernel; then the only other thing that we changed is that we removed the pip install commands. Okay, oh, I guess I did also add this thing. Well, he'd mentioned that you might want to kill old notebooks when you're trying new ones, because you don't want a whole bunch of notebooks all trying to use the same GPU and consuming its memory. But actually we've at least got it so you can run all the basic ones without any issue.
B
You wouldn't necessarily need to shut down the previous ones, and that's basically because we added this option here, this "allow growth." It just means: by default, when you set up TensorFlow, you import it and start talking to the GPU, it'll just snatch up all the memory so it has it available. But with an option like this, you can say: only use as much memory as you need, and then expand that as you need it. So...
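For reference, here is a minimal sketch of requesting that behavior in TF 2.0 through the `tf.config.experimental` API (the notebooks in the repo may phrase it slightly differently; on a machine with no GPU the loop simply does nothing):

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand ("allow growth")
# instead of reserving the whole GPU up front at import time.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print("visible GPUs:", len(gpus))
```

This has to run before the first operation touches the GPU, which is why it sits near the top of each notebook.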
B
So then I think maybe we'll just fly through this and focus on what we think are the important things. Sometimes the import can take a second, but it's usually pretty quick. Here we see, okay, good, we're running the correct version, 2.0.0 beta. So the notebook is going to talk a little bit about the dataset. You may have heard of MNIST, which is the handwritten-digit classification dataset.
B
It's this really canonical, very easy kind of dataset for tutorial-like things, and this is basically just a slightly more interesting version of that, depending on what your interest is; it's basically a drop-in replacement for MNIST. It's called Fashion-MNIST, and it's called that because it's so much like MNIST: it's the exact same size of data, pretty much the same (I think it's also grayscale), but instead of digits, it's pieces of clothing.
B
So yeah, Josh talked a little bit about the availability of datasets through libraries like TensorFlow Datasets; Keras also has keras.datasets, and that's what we'll use here to download Fashion-MNIST. The first time you run this cell, it will download the dataset (it's not very large) and put it under the home directory of the training account; the next time you run the notebooks, it'll just already be there and won't download again, okay. Then the notebook talks a little bit about...
B
...what's in the dataset. It's not super crucial to go through that at this stage, but you can sort of see they have labels, and each label corresponds to a kind of picture, like a coat or an ankle boot. Then there's a little bit of stuff here that looks at the data and prepares these labels. So we can look at the shape of the data: in the training image dataset, we have 60,000 examples. So usually data is structured...
B
...this way in machine learning and deep learning: you have some large array that represents, you know, a whole dataset or a subset of a dataset. The first dimension is the number of samples, or the number of samples in a mini-batch, and then you have the dimensions of the data itself. So this is an image with a single channel, and we have 28 by 28: those are the dimensions of the image, 28 pixels high, 28 pixels wide. Okay. We have the same number of training labels; that should make sense to you.
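That samples-first convention is worth internalizing, so here is a tiny numpy sketch of it (illustrative only, with zero-filled placeholder arrays in the Fashion-MNIST shapes):

```python
import numpy as np

# Samples-first layout: (num_samples, height, width), one label per image.
train_images = np.zeros((60000, 28, 28), dtype=np.uint8)
train_labels = np.zeros(60000, dtype=np.int64)

print(train_images.shape[0] == train_labels.shape[0])  # True

# Slicing along the first axis gives a mini-batch; trailing dims are kept.
batch = train_images[:32]
print(batch.shape)  # (32, 28, 28)
```

Every array the notebook hands to the model follows this pattern, which is why batching is just a slice along axis 0.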
B
So now we do some pre-processing here. First we actually look at an image, and there's a little bit of scaling of the image. This is actually one of the quiz questions, but we might as well just ask it now: does somebody know why we take this data, these image arrays, and divide them by 255?
B
Yeah, it's just normalizing. Pixels in an image can range from 0 up to 255, and it's usually good practice, very, very strongly recommended, to always normalize your data in some way. It depends a little bit on the case what exactly you need to do; here we're just normalizing them all, so every pixel is somewhere between 0 and 1, right.
B
That's usually fine. People will also say you may need to normalize things so the data has mean 0 and standard deviation 1, so roughly between -1 and 1. The important point is just to get the overall scale of your data down to something consistent, maybe around order 1, and potentially matching how you're going to normalize the weights in the model.
B
You can think of it like this: if your data is somehow all numbers in the millions or tens of millions, those kinds of values, then the first thing your model has to do as you're optimizing it is learn what that scale is, and start adjusting weights a lot just to map the important scale of the inputs into something reasonable. You can basically do away with that by pre-processing your data, so it's very important to always do something like that.
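The scaling step itself is one line; a minimal numpy sketch of the divide-by-255 (with random placeholder data instead of the real download):

```python
import numpy as np

# Fake a batch of 8-bit grayscale images shaped like Fashion-MNIST:
# (num_samples, height, width), values 0..255.
images = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Normalize to floats in [0, 1], exactly as the tutorial does for the
# train and test arrays.
scaled = images.astype(np.float32) / 255.0

print(scaled.min() >= 0.0 and scaled.max() <= 1.0)  # True
```

The same divisor must be applied to the training and test sets, otherwise the model sees the two splits at different scales.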
B
Then it'll make some plots, so this is now the normalized images, I believe, and we're just seeing them with their labels, so we can see what each one is. Then the notebook gets into how you build the model. You've already seen some code thanks to Josh, so you can kind of see what we're doing here: we're using the Sequential model API that Josh talked about. He said this was the one that you should generally try first; it's the easiest, the fewest lines of code.
B
He didn't tell you what this flatten is, though. Basically, we want these dense layers to sort of look at every pixel of the input, so we just flatten it into an array; we don't really need the spatial structure for a fully connected network. That's what's going on here: we're flattening it, and we're running it through two fully connected layers, or dense layers, as they're called in Keras.
B
As Josh said, there are often multiple terms for the same thing, just to make deep learning more fun. "Fully connected" is probably what most people say when they mean this kind of layer, and in Keras it happens to be called a dense layer, but it just means that every neuron in a layer is connected to every single one of the neurons in the previous layer. So, fully connected: everything's connected to everything, in contrast to a convolutional neural network, where we have local connections because we're applying a filter at different regions. Okay.
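To make "flatten, then fully connected" concrete, here is a numpy sketch of what those layers compute. This is illustrative only; the notebook builds this with Keras `Sequential`, `Flatten`, and `Dense`, and the layer sizes 128 and 10 below are assumptions matching the usual tutorial setup.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random((28, 28))              # one grayscale image
h = x.reshape(-1)                     # Flatten: (28, 28) -> (784,)

# Dense layer: every output neuron sees every input value (w @ h + b).
w1, b1 = rng.normal(size=(128, 784)), np.zeros(128)
h1 = np.maximum(0, w1 @ h + b1)       # ReLU activation

w2, b2 = rng.normal(size=(10, 128)), np.zeros(10)
logits = w2 @ h1 + b2                 # 10 outputs, one per clothing class

probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax: scores -> probabilities
print(probs.shape)                    # (10,)
```

The "everything connected to everything" is visible in the weight shapes: `w1` has 128 x 784 entries, one per (output neuron, input pixel) pair, where a convolution would reuse one small filter instead.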
B
So we go through defining the model and calling this compile step, Josh showed some of that earlier, and then we just call fit. So it's really just a few lines that actually define the model and do the training. This is mainly why we chose to do Keras for this kind of hands-on: because it's the easiest to teach, and it's the least amount of code...
B
...by far to do anything. PyTorch is nice too, if you like PyTorch; I personally do like PyTorch, but Keras is the easiest to teach. So this should train very quickly, because it's a small problem and a small model; you don't even really need a GPU for this. If you run on Colab without a GPU, it might take like five or six seconds instead of three or four; it's not a huge speed-up.
B
So that is why it's faster at all, yeah, but it's only maybe 20% faster in this case. And data transfer is part of it; that's usually a major culprit, the data ingestion into the GPU. In this case, yeah, that might be a bit of it. I didn't profile exactly, but that's a potential contributor. But also it's just a small model and a small dataset; it doesn't parallelize very well. So even if you had all the data sort of pinned onto the GPU, you probably wouldn't have really great utilization.
B
There's no real reason why you can't get it to very high utilization with bigger problems, and hence you see really large speed-ups on GPUs. Yeah, so you have access to a lot of very advanced features, and if you're interested in that, I think a bit of it is covered in, let's say, the loading-images one; I mean, there's also the data pre-processing, so you'll see a little bit of how to use tf.data in a couple of these. So there's, let's go this way.
B
"Loading and pre-processing images" is one; it's the most advanced treatment of this, but it probably doesn't have everything. There is actually a TensorFlow guide to using the tf.data Dataset API; I do encourage you to look through that and look at examples that are optimized, to sort of see how they put it together.
B
But yes, there are options to read things in parallel, to do some prefetching, and to put things on the GPU so that they're ready to go as soon as you're ready to start processing that image. Yeah, there's quite a bit of complicated stuff. Even with plain Keras you can do some of this: with the regular Keras way to specify a data pipeline, you can have a Python generator kind of function, or a Keras Sequence class that you subclass, which allows you to do some parallelization and, in some sense, prefetching.
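The payoff of prefetching is easy to see in a toy, framework-free sketch. This is an illustration of the idea, not tf.data itself: load the next batch on a background thread while the "GPU" is busy with the current one.

```python
import queue
import threading
import time

def load_batch(i):
    time.sleep(0.05)          # pretend I/O + pre-processing
    return i

def train_step(batch):
    time.sleep(0.05)          # pretend GPU compute

def naive(n):
    for i in range(n):
        train_step(load_batch(i))     # load and compute take turns

def prefetched(n):
    q = queue.Queue(maxsize=2)        # small buffer, like prefetch(2)
    def producer():
        for i in range(n):
            q.put(load_batch(i))      # loads overlap with compute
        q.put(None)                   # sentinel: no more batches
    threading.Thread(target=producer, daemon=True).start()
    while (batch := q.get()) is not None:
        train_step(batch)

t0 = time.perf_counter(); naive(10);      t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); prefetched(10); t_pre   = time.perf_counter() - t0
print(t_pre < t_naive)  # True: overlapping I/O with compute is faster
```

The naive loop pays load time plus compute time per batch; the prefetched loop pays roughly the larger of the two, which is the speed-up the tf.data notebook's timing cells are measuring.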
B
I don't think I'm running anything at the moment, it probably finished training by now, but you can run this nvidia-smi command, and it will show you some details about the GPU. So here, this is the GPU I have: a Tesla V100, so Volta. It shows a little bit about the temperature and the power, and the amount of memory that's being consumed; that's what you see right here, so we're using about half a gigabyte out of 16 gigabytes. And this percentage number here, this is your utilization.
B
So if you're training a really heavy model, you'll hopefully see that at around 98 to 100 percent. At the moment it's zero, because nothing's running on the GPU, but I am still consuming memory, because there's still a process there. Okay, so nvidia-smi is a very useful command to know. gpustat is just something that makes it look a little bit nicer. So what can you do? If you really want to see that, you can load the Python module that's being used in the notebook.
B
I don't really have detailed instructions for this, but if you want me to show it to you, I can. So you can do a module load of that thing, and I have gpustat installed there, and now you get just a prettier version of that output: it's a little more concise, a little bit easier to read, and you have nice colors. So here I can see...
B
...actually, my username is the one running a process, and if you have multiple ones, you see additional columns there. So, a couple of ways at least, yeah. Okay, so there are ways to, say, run the whole notebook at once, in some of these dropdowns here. Under Run, you can say things like "run all cells," "run all above this cell," etc. You can go to Kernel and say "restart and run all cells." Okay, but if you just want to work through them, you can do shift-enter. So yeah, sorry, this...
B
We didn't give you a great overview of how to use Jupyter notebooks, but this is what I was doing. I like to run cells one at a time and kind of understand what's going on, so I select the cell, hold shift, and push return, and that executes the cell, okay; or you can click the play button up here.
B
And we got a result: we saw our training loss, and hopefully it's going down. Just after the end of the first epoch it was one, and it came down to about 0.4, so that's good; we had some improvement. Then we evaluate on a test dataset, and we see the accuracy we get: you know, 88% ish. It's not too bad; you might be able to do better if you tweak the model. And then, did I run these yet? Actually, well, that I did.
B
So, model.predict. This is something I think Josh mentioned, but I don't know if he showed it to you in much detail. This is just for when you want to run inference: your model is trained, and now you have some data and you just want to produce its outputs, make its predictions, and then do whatever you want with them, do science or whatever.
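After `predictions = model.predict(test_images)`, you get one row of class scores per image, and picking the winning class is just an argmax over each row. A numpy sketch with made-up scores (assuming the tutorial's 10 classes):

```python
import numpy as np

# Pretend output of model.predict on 3 test images with 10 classes:
# each row holds the model's softmax score for every class.
predictions = np.array([
    [0.01, 0.02, 0.01, 0.00, 0.00, 0.30, 0.01, 0.10, 0.05, 0.50],
    [0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02],
    [0.05, 0.05, 0.60, 0.05, 0.05, 0.05, 0.05, 0.05, 0.02, 0.03],
])

predicted_classes = predictions.argmax(axis=1)
print(predicted_classes)  # [9 0 2]
```

Comparing `predicted_classes` against the integer test labels is all the accuracy computation amounts to.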
B
So that's a one-hot representation of the class labels. Frequently what you might do is use some utility function that can just convert integers into this representation, or you can do it by hand; it's not terribly difficult. And then in your model, you just have a deterministic mapping from integer to label: say the second element of that vector means that it's that class, right. And then in our model we just have a...
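The by-hand version really is short; with numpy, indexing rows of an identity matrix does it (Keras also ships a utility for this, which I believe is `tf.keras.utils.to_categorical`, but the sketch below needs nothing beyond numpy):

```python
import numpy as np

labels = np.array([2, 0, 9])     # integer class labels

# Row i of the 10x10 identity matrix is the one-hot vector for class i.
one_hot = np.eye(10)[labels]

print(one_hot.shape)             # (3, 10)
print(one_hot.argmax(axis=1))    # [2 0 9], round-trips back to integers
```

The argmax round-trip is the deterministic integer-to-label mapping being described: position in the vector is the class.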
B
That's about it. The only other thing this notebook will do is look at some specific examples. So we got all the predictions for our test set, which is now this big array, and you can print out some values; you can print out the overall shape of this, predictions.shape. So you know we have 10,000 examples in the test set, and for each one of these we have 10 output values. So this is that class vector, and then what it does is...
B
It shows you, for the first image, what that looks like, and you can even kind of see that these are all very, very small numbers close to zero, except one of them is more like a third, that's this one, and this one is about 50%. So you can kind of see it thinks, you know, maybe this picture is in one of those classes, and it just sort of compares. So the largest one in this case happened to be the ninth dimension, or class number nine, and that actually corresponds to the label.
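Picking out that largest score is just an argmax over the class vector. Here's a sketch with one hypothetical row of the predictions array:

```python
import numpy as np

# One hypothetical row of the (10000, 10) predictions array:
# mostly near-zero scores, one around a third, one around a half.
scores = np.array([0.01, 0.01, 0.01, 0.02, 0.01,
                   0.33, 0.02, 0.08, 0.01, 0.50])

# Index of the largest score is the predicted class.
predicted_class = int(np.argmax(scores))
```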
B
So this one was classified correctly, and then it does some plots that you can just sort of play with. This will show you, yeah, maybe this is that same one, zero, this is the same one, yeah. So we saw there were a few classes where it had fairly nonzero results, and you can kind of even see it there. Okay, so that's about it. So you can work through this one on your own and think about the questions that I had there. Where was it? Right here.
B
So we already talked about number one. You can think about the activation functions, and you can try to see if you can modify the architecture, see what kinds of things you can try, see what works. Try changing the optimizer algorithm; see if you can improve on the test set accuracy. That's just kind of for fun.
B
You can play with it and see how good an accuracy you can get. I do encourage you to go on to the convolutional one. It's a bit redundant because it's very much the same kind of thing; I think it might actually be regular MNIST, not Fashion-MNIST, but otherwise it's very similar to the example we had here, except it uses convolutional neural networks. You can actually get practice with instantiating those kinds of models, and then you can go back and figure out how to customize that.
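Instantiating a convolutional model in the same Sequential style looks roughly like this (a sketch assuming tf.keras 2.x; the layer sizes are illustrative, not the tutorial's exact ones):

```python
import tensorflow as tf

# A small convolutional classifier for 28x28 single-channel images.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```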
B
I think that's about it. I already mentioned the kind of interesting stuff in the Classify Structured Data one, for sort of tabular data, and Overfitting and Underfitting. That one will show you how to add L2 regularization to a model that's very much overfitting, how to add dropout, things like that.
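Adding L2 regularization and dropout to a Keras model looks roughly like this (a sketch assuming tf.keras 2.x; the coefficients 0.001 and 0.5 are illustrative, and you would tune them for a real overfitting model):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # L2 penalty on this layer's weights, added to the training loss.
    tf.keras.layers.Dense(
        128, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(0.001),
        input_shape=(784,)),
    # Randomly zeroes 50% of activations during training only.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```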
B
Yeah, so what I like to do, because this is IPython, is you can just print out the documentation very easily right in here. So let's go up here. Tab completion, also, is very nice. It's below here, where is it, the layers. Okay, let's say I want to find the arguments to that Dense layer, what they mean. So I can type keras.layers.Dense and, if I just press Shift-Tab, you're going to see something pop up here. Oftentimes,
B
this is all you need. This is literally just the code documentation that's been rendered here, like this, so you can see what the arguments are, and then you can see the docstring, which should hopefully tell you what all the arguments actually do. Very useful, and it doesn't clutter up your notebook at all. Okay, IPython also lets you do things like, well, sorry, Python lets you do some of these, but IPython
B
lets you do some more things, like just putting a question mark at the end, and then you can see the documentation right there as the output of the cell. You can put two question marks and it will show you the source code, the whole source code, so you may or may not want to look at that. Usually, hopefully, you don't need that; if I get to that point I usually just go to GitHub and find the code. But I guess very rarely
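Outside the notebook shortcuts, the same signature and docstring information that Shift-Tab and `?` display can be pulled with Python's standard inspect module (a sketch assuming tf.keras 2.x):

```python
import inspect
import tensorflow as tf

# Signature of the Dense layer constructor, as Shift-Tab would show it.
sig = inspect.signature(tf.keras.layers.Dense.__init__)
params = list(sig.parameters)

# The docstring, as `?` would render it.
doc = inspect.getdoc(tf.keras.layers.Dense)
```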
B
I do just want to look at the source, very good, and that's a way to do it. Okay. Or it's very easy, just with Google, to find the documentation for something in Keras or tf.keras. So I can search "tf keras dense" and I bet you the first hit is going to be the API docs. Unfortunately it's not 2.0; 2.0 is still in beta. But, you know, I think actually the API is pretty much equivalent, pretty much the same, at least for these Keras layers.
B
If you're worried, you might want to make sure you're on a 2.0 version of this page. I don't think this will keep me on the same page, it doesn't, but you could probably add 2.0 to the Google search. And yeah, then you see, well, you should see the arguments down here. There you go, the arguments. So, activation function: you can change the activation function, and now you know how to find it. Okay, any other questions?
B
I think it might get killed at 6:00, but I think you could probably still try to get a job after that. Yeah, I think you should still be able to log in; you might even be able to start a server, but it wouldn't be in the reservation, and you'd be part of the normal queue, so you may or may not get one after 6:00 p.m. But I do think, if you have a job that's running, it would get killed.
B
Let me just show you one more time, I think we showed it, but how to close, how to stop your server. Okay, because, just like how, if I just close this tab, oh okay, it doesn't matter, if I close this tab it doesn't end that process; it's still down here running. Similarly, if I just close this tab it doesn't end my server, so I'm still consuming resources on the reservation.
B
The hub, so, I think, the hub, because I'm trying to control this at the level of the hub. This is JupyterHub, the service itself. If I click Logout, that still will not kill my process; that only logs you out of the service, but your things still run, and you can log back in and check the status of the notebooks.