From YouTube: Use Case: Transfer Learning

Description

Setup: https://www.youtube.com/watch?v=y6bz-9U4hJg

Make sure you're looking at the version of the notebook corresponding to the version of DFFML you have! (dffml version)

The following is the link to the notebook from this video: https://github.com/intel/dffml/blob/master/examples/notebooks/transferlearning.ipynb
Hi everyone, welcome to DFFML videos. In this video, we are going to go through transfer learning using PyTorch pre-trained models and the DFFML Python API. Transfer learning is a technique in which a machine learning model developed for one task is used as the starting point to build a machine learning model for another task.
Feature extraction is a method where we use all the layers of the model as they are, except for the last one, which we replace according to our task. To do this, we freeze all the weights of the pre-trained model, which is done by setting the trainable flag to False in the Python module. After that, we just add our own classifier on top of these layers and train it. Fine-tuning is a method where we retrain some of the last layers according to our task, hence the name fine-tuning.
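The feature-extraction recipe described above can be sketched in plain PyTorch. The tiny `backbone` here is a stand-in for a real pre-trained network such as AlexNet; the layer sizes and names are illustrative, not taken from the notebook.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone (in the video this is AlexNet).
backbone = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4096),
)

# Feature extraction: freeze every pre-trained weight...
for param in backbone.parameters():
    param.requires_grad = False

# ...then add a task-specific head on top and train only that.
head = nn.Linear(4096, 3)  # 3 classes, e.g. rock/paper/scissors
model = nn.Sequential(backbone, head)

# Only the head's weight and bias remain trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))  # 2
```

Fine-tuning differs only in that `requires_grad` is left as True on some of the last backbone layers as well.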
So, let's get into the setup. We won't be discussing the setup in detail; you can go through that in the other video we have on the topic, which will be linked in the description. Let's import all the packages, and I'm setting up logging so that we can see what's going on in our model. We'll be using the cached download and unpack archive utilities to download our dataset, rock paper scissors, which is already split three ways. So we download the three datasets individually and then we load them into a DirectorySource.
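The download step above boils down to "fetch the archive once, cache it, unpack it". A minimal stdlib-only sketch of that idea is below; the function name and signature are mine, not DFFML's actual helper, and `url`/`dest` are placeholders.

```python
import shutil
import urllib.request
from pathlib import Path

def cached_download_unpack(url: str, dest: Path) -> Path:
    """Download an archive once and unpack it under `dest`.

    Mirrors the idea behind DFFML's cached-download utilities; the real
    helper's name and signature may differ across versions.
    """
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / url.rsplit("/", 1)[-1]
    if not archive.exists():  # cache hit: skip re-download
        urllib.request.urlretrieve(url, archive)
    unpacked = dest / archive.stem
    if not unpacked.exists():
        shutil.unpack_archive(str(archive), str(unpacked))
    return unpacked
```

In the notebook this is done once per split (train, validation, test), and each unpacked folder of images becomes one source.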
Okay, now we define our additional layers for the model. We imported this package already, and we will be using nn.Module from this package to define the class. In the init method, we define all the layers for the model. This is the first linear layer, and it has 4096 features in and 256 features out.
We define a ReLU activation, we define a dropout layer, and this is the second linear layer: it has 256 features in and three features out. Three, because we have three labels. And we add a LogSoftmax layer, because we have a multi-class classification, and then we define a forward function.
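The class described above can be sketched as follows. The class name and the dropout probability are my assumptions; the layer sizes (4096 in, 256 hidden, 3 out) and the ReLU/Dropout/LogSoftmax structure are the ones mentioned in the video.

```python
import torch
import torch.nn as nn

class CustomHead(nn.Module):
    """Replacement classifier head described in the video (name is mine)."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4096, 256)          # 4096 features in, 256 out
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.2)           # dropout rate is an assumption
        self.fc2 = nn.Linear(256, 3)             # 3 out: one per label
        self.log_softmax = nn.LogSoftmax(dim=1)  # multi-class log-probabilities

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.dropout(x)
        return self.log_softmax(self.fc2(x))

head = CustomHead()
out = head(torch.randn(2, 4096))
print(tuple(out.shape))  # (2, 3)
```

Because of the LogSoftmax, each output row is a vector of log-probabilities over the three labels.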
Okay, and then we instantiate this whole class. All right, now it's time to instantiate our model. You can find all the models and their required arguments at the model plugins page. For this one, we are going to use AlexNet, which is a pre-trained model.
Then we provide the predict feature and the location. Note that pretrained is True, and we set the trainable flag to False because we are going to perform feature extraction rather than fine-tuning. We set add_layers to True because we want to replace the last layers, since this is a different task than the one AlexNet was trained on.
So we define the last layers, our custom class of layers, and we set that as the layers right here. And then we add all the hyperparameters for the model: epochs 20, batch size 32, the Adam optimizer, you get the idea. After that, we can pretty much train the model.
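A plain-PyTorch sketch of those training settings is below. The learning rate and dropout probability are my assumptions (the video doesn't state them); the 20 epochs, batch size 32, Adam, and the NLLLoss/LogSoftmax pairing follow from what the video describes.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the notebook's custom head (frozen backbone omitted).
model = nn.Sequential(
    nn.Linear(4096, 256),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(256, 3),
    nn.LogSoftmax(dim=1),
)

# Hyperparameters mentioned in the video: 20 epochs, batch size 32, Adam.
EPOCHS, BATCH_SIZE = 20, 32
optimizer = torch.optim.Adam(
    # Only parameters still requiring grad; a frozen backbone would be excluded.
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,  # learning rate is an assumption
)
loss_fn = nn.NLLLoss()  # negative log-likelihood pairs with LogSoftmax output

# One illustrative step on random data standing in for a real image batch.
x = torch.randn(BATCH_SIZE, 4096)
y = torch.randint(0, 3, (BATCH_SIZE,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The real notebook passes these settings to the DFFML model config and lets the high-level train function run the loop.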
So let's train the model. This will take some time, so we're probably going to skip ahead. All right, the model is trained, and after just one epoch we get an accuracy of 1.0 on the validation set and 0.96 on the training set, which means that our model is generalizing well, so that makes sense. All right, we can also check the accuracy by loading the classification accuracy scorer ourselves and calling the high-level accuracy function.
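What that accuracy call computes is simple to state: the fraction of records whose predicted class (the argmax of the model's log-probabilities) matches the label. A small self-contained sketch, with made-up numbers:

```python
import torch

def accuracy(log_probs: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of rows whose argmax class matches the label."""
    predicted = log_probs.argmax(dim=1)
    return (predicted == labels).float().mean().item()

# Illustrative log-probabilities for 4 records over 3 labels.
log_probs = torch.log(torch.tensor([
    [0.8, 0.1, 0.1],  # predicts class 0
    [0.2, 0.7, 0.1],  # predicts class 1
    [0.3, 0.3, 0.4],  # predicts class 2
    [0.6, 0.3, 0.1],  # predicts class 0
]))
labels = torch.tensor([0, 1, 2, 1])  # last prediction is wrong
print(accuracy(log_probs, labels))  # 0.75
```

The exponential of the winning log-probability is the confidence shown alongside each prediction later in the video.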
We get an accuracy of 0.946 on the test set, and we can make predictions and display them with the high-level predict function. We define this display image predictions function to do that, and now we call it with the predictions, the predict source, and the model. And here we have it: the predictions seem all right, with good confidences on each of them. So there it is; that's it for this video.
If you didn't understand anything or had trouble with something, you can always open up GitHub issues on our GitHub, or you can even reach out to us on the Gitter channel. Thank you.