From YouTube: Use Case: Ensemble by Stacking
Description
Setup: https://www.youtube.com/watch?v=y6bz-9U4hJg
Make sure you're looking at the version of the notebook corresponding to the version of dffml you have! (dffml version)
The following is the link to the notebook from this video: https://github.com/intel/dffml/blob/master/examples/notebooks/ensemble_by_stacking.ipynb
We will be experimenting by ensembling a classifier and a regressor to see whether the ensemble performs better than either model does individually.
If you would like a detailed description of downloading the datasets, loading them into sources, and setting up dffml for different tasks, you can watch the video on that topic, which is also linked in the description of this video. Here we are going to focus on the ensembling part of the task, so let the ensembling begin.
You can choose the base models and find their entry points on the Models plugin page of the documentation. As you can see, there are different models listed there, and you can find their hyperparameters and their entry points as well. Once you have the entry points, you can pass them to the Model.load method.
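As a rough illustration of the pattern (this is a made-up registry standing in for DFFML's plugin machinery, not its actual internals), loading by entry point just resolves a string name to a model class:

```python
# Sketch of entry-point-style loading: a string key resolves to a model
# class. DFFML's Model.load() does this via Python package entry points
# registered by installed plugin packages; the registry and class names
# below are hypothetical stand-ins for illustration only.

class HypotheticalClassifier:
    """Stand-in for a classifier plugin class."""

class HypotheticalRegressor:
    """Stand-in for a regressor plugin class."""

MODEL_ENTRY_POINTS = {
    "hypothetical-classifier": HypotheticalClassifier,
    "hypothetical-regressor": HypotheticalRegressor,
}

def model_load(entry_point: str):
    """Resolve an entry point name to its model class."""
    return MODEL_ENTRY_POINTS[entry_point]

Classifier = model_load("hypothetical-classifier")
```

The real entry point strings for each model are the ones listed on the Models plugin page.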
As usual, we instantiate our base models with the features, the target feature, and the location of the model. You can also pass in any hyperparameters if you wish; in this case we are not passing any, so the default values are used.
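The configuration shape described above can be sketched with plain-Python stand-ins (these classes and the `learning_rate` hyperparameter are made up for illustration; they are not dffml's classes):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Feature:
    """Stand-in for a named feature."""
    name: str
    dtype: type = float

@dataclass
class StandInModel:
    """Stand-in base model: input features, target feature, save location.
    Hyperparameters fall back to defaults when not passed, as in the video."""
    features: List[Feature]
    predict: Feature
    location: str
    learning_rate: float = 0.01  # hypothetical hyperparameter with a default

model = StandInModel(
    features=[Feature("x1"), Feature("x2")],
    predict=Feature("y"),
    location="base_model_dir",
)
```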
After that, you can go about tuning and testing your model as you like. In this case, you can see that the classifier performs better than the regressor: the classifier gets up to 0.55 accuracy, whereas the regressor gets 0.47.
You want to do this because, in the second step, we will be getting the predictions of the base models we trained on both the validation set and the test set. Once we have the two sets, we are ready to get the predictions in the usual way, by calling the predict method once on the validation set and once on the test set.
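Collecting each base model's predictions on both held-out sets can be sketched in plain Python (the toy "models" below are just functions standing in for the trained classifier and regressor; the data is made up):

```python
# Toy trained base models: each maps a record (dict of features) to a
# prediction, standing in for the classifier and regressor trained above.
def toy_classifier(record):
    return 1 if record["x"] > 0.5 else 0

def toy_regressor(record):
    return 2.0 * record["x"]

base_models = {"clf": toy_classifier, "reg": toy_regressor}

validation_set = [{"x": 0.2, "y": 0}, {"x": 0.9, "y": 1}]
test_set = [{"x": 0.7, "y": 1}]

def predict_all(models, records):
    """Return {model_name: [prediction per record]} for each base model."""
    return {name: [m(r) for r in records] for name, m in models.items()}

# Predict once on the validation set and once on the test set.
val_preds = predict_all(base_models, validation_set)
test_preds = predict_all(base_models, test_set)
```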
We do this for all our base models, and once we have the predictions we are all set for the third step, which is to stack the predictions. We will be stacking all the validation predictions together and all the test predictions together in the form of dictionaries, along with the true label. Note that we are actually creating another dataset for our level-2 model, the meta model: the validation predictions will be used to train the meta model, and the test predictions will be used to test it.
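Stacking amounts to building one record per row whose features are the base models' outputs, with the true label alongside. A minimal plain-Python sketch with made-up values:

```python
def stack(preds_by_model, true_labels):
    """Combine per-model prediction lists into one dict per row, keyed by
    model name, with the true label stored under "y"."""
    records = []
    for i, label in enumerate(true_labels):
        record = {name: preds[i] for name, preds in preds_by_model.items()}
        record["y"] = label
        records.append(record)
    return records

# Predictions from two base models on a 3-row validation set, plus labels.
val_preds = {"clf": [0, 1, 1], "reg": [0.1, 0.8, 0.4]}
val_labels = [0, 1, 0]

# This stacked dataset is what the meta model trains on; the test
# predictions would be stacked the same way for evaluation.
meta_train = stack(val_preds, val_labels)
# meta_train[0] == {"clf": 0, "reg": 0.1, "y": 0}
```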
Once we have the stacked predictions on both datasets for all our models, we are ready to train the meta model. So, let's begin step four and train our meta model.
Note that we are using the same features that we stacked into our dataset earlier, and the target is set to "y", the predict feature.
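The meta model simply treats the stacked predictions as ordinary input features with "y" as the target. As a minimal stand-in (a fixed threshold on the average of the base predictions, which is not the model the notebook uses; the data is made up):

```python
def meta_predict(record, threshold=0.5):
    """Toy meta model: average the base-model prediction features and
    threshold. A real meta model would be trained on the stacked
    validation records rather than using a fixed rule."""
    features = [v for k, v in record.items() if k != "y"]
    return 1 if sum(features) / len(features) > threshold else 0

# Stacked test records: base-model predictions plus the true label "y".
stacked_test = [
    {"clf": 1, "reg": 0.9, "y": 1},
    {"clf": 0, "reg": 0.2, "y": 0},
]

# Evaluate the meta model on the stacked test set.
accuracy = sum(meta_predict(r) == r["y"] for r in stacked_test) / len(stacked_test)
```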
We can say that our experiment was a success because, generally, ensemble models are considered successful only if they perform better on unseen data, which in this case our model does. As an exercise, I suggest that you download the notebook and try to get better accuracy.
A good place to start would be changing the base models' hyperparameters, or even adding a new base model that might provide unique information to the ensemble. I hope everything was clear. If you have any queries, you can always go ahead and open issues on our GitHub, or you can reach out on our Gitter community channel. Thank you.