From YouTube: Weekly Sync 2021-07-13
A: All right, so what do we have on the docket today? So I saw that you've got this — the rename PR — up; let's see, great. All right, so, let's see. So, Sahil — what would you like to talk about today?
B: Renaming this one here directly to location, and—
A: Okay, yeah — and I think — I think, let's see — well, we'll talk about that in a second. Okay, so, Hashim — anything else from—
C: Okay, let's see. Other than that, I'm starting to work on support for multi-output models, and I wanted to discuss that as well.
C: And also, can we merge the other PRs? I know you mentioned Saksham for review — are we getting them reviewed as well?
A: Yeah, we were waiting for—
A: Okay, all right — so let's just go take a look at these right now, since these are hopefully quick. So—
A: I would say that we should link to the — I think we should link to the docs page for this, because there is — oh.
C: Yeah, I linked it in the transfer learning PR, but I forgot to link it on the other ones. Okay, cool.
A: Oh yeah, I think there's this curl command that shows you how to calculate the sha sum. That's probably going to be helpful, because, you know, people will need that. So rather than put that in — yeah, I think it's definitely a good idea to explain more about this, but we could probably just link to it. Let's just make a note of that.
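What that checksum calculation amounts to can be sketched in Python; the SHA-384 choice and the function name here are assumptions — the point is just hashing a downloaded file so users can verify it:

```python
import hashlib

def file_sha384(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-384 digest of a file, reading it in chunks so
    large downloads don't have to fit in memory."""
    digest = hashlib.sha384()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The same result can be obtained on the command line with a tool like `sha384sum`, which is presumably what the curl command in the docs pipes into.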
A: Okay — and I think — can you use the cross references in this? The RST cross references — or is that not available? Will—
A: Okay, let's see — yeah, that'll be fun.
A: Does it support top-level async? I wonder — I thought I'd seen something about it. Let's just see what happens.
A: Maybe we should do that. I wonder: will that work in the—
A: Is this more intuitive than the other one, or is the other — because this load will return a generator? So what do you guys think: do we do the full function, or do we do this?
A: So this split stuff — so—
A: Okay, yeah — so the issue here was essentially this "sources". Sources is like this list thing. Okay — what have we got here?
A: Yeah, so sources is just like a list right now, and so we can't pass — there's that — let's see, where's — we have this Sources class, right, which sort of wraps a collection of sources together — and where is it?
A: Sources, source, source — this is the regular source stuff, and this is the Sources class, yeah. So this stuff is not easily configured, because it's this async-context-manager list, and so ideally we would make it an object with a config. So this is one of the things that doesn't follow the config pattern, because it's essentially a list. So, let's see — and how did this relate back to this? That was — oh, the splitting, right.
A: So eventually we were going to try to implement the splitting within this sources context class, because this sort of wraps — we usually use this to wrap any sources, and I believe it gets used in load to wrap the sources as well. So ideally we could pass arguments to load, or something, to just do the split within the load call — but right now, obviously, we cannot. So anyways, just wanted to call that out in case anybody wants to tackle it. So, okay, so yeah. So what do we do?
A: What should we do — which one makes more sense? So I think that this makes sense, because, you know, there's an async call here and we show people asyncio.run now. I think this is fewer lines, right. So what would be more friendly from the standpoint of a person coming in who is not familiar — which one of these do we think would be easier for a first-time user to use?
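The two styles being weighed might look like this sketch; `load` here is a stand-in async generator, not DFFML's actual signature:

```python
import asyncio
from typing import AsyncIterator, Iterable, List

async def load(records: Iterable[int]) -> AsyncIterator[int]:
    # Stand-in async loader: yields each record, like a load() that
    # returns an async generator.
    for record in records:
        yield record

async def collect(records: Iterable[int]) -> List[int]:
    # Async style: callers iterate with `async for` inside an event loop.
    return [record async for record in load(records)]

def load_sync(records: Iterable[int]) -> List[int]:
    # "No async" convenience wrapper: drives the event loop internally,
    # so a first-time user never has to see asyncio.run.
    return asyncio.run(collect(records))
```

Showing both in the tutorial lets a first-time user pick the wrapper while still seeing the async form that the library actually uses.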
A: All right, that's a good point. Okay! So let's see — okay, yeah — and feel free to try out the no-async version of load. Okay, perfect, all right! So where do we—
A: All right, whatever — so here's the async, and then also: feel free to try out the no-async version of load. Okay, perfect. Okay, good idea: why not have both — and I know we have some of this stuff in the other ones, but this way we have at least shown it, right.
A: Okay, let's just try and split — all right — and trim. For this we'll be using a stacking technique and a simpler model. "The following are the steps to ensemble by stacking: train the first-level base models on the train data; use the first-level base models to make predictions on the validation data and test data; stack all the validation predictions into lists consisting of stacked validation predictions and stacked test predictions; build and train the level-two meta model — the stacked validation predictions will serve as features to train our level-2 meta model" — and so on, ready to predict; training for several models. Okay.
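The stacking steps being read out can be sketched with placeholder base models; a real notebook would train classifiers on the train split, so the fixed threshold rules below are stand-ins only:

```python
from typing import Callable, List, Sequence

Model = Callable[[float], int]

def stack_predictions(models: Sequence[Model], samples: Sequence[float]) -> List[List[int]]:
    # Steps 2-3: run every first-level base model on the split and stack
    # the predictions: one row per sample, one column per base model.
    return [[model(x) for model in models] for x in samples]

def meta_model(row: Sequence[int]) -> int:
    # Step 4: a trivial level-2 meta model (majority vote); the real one
    # would be trained on the stacked validation predictions.
    return int(sum(row) > len(row) / 2)

# Step 1 would train these on the train split; here they are fixed rules.
base_models: List[Model] = [lambda x: int(x > 0.3), lambda x: int(x > 0.7)]

validation = [0.1, 0.5, 0.9]
stacked_validation = stack_predictions(base_models, validation)
final_predictions = [meta_model(row) for row in stacked_validation]
```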
A: Let's see — what should we call this? So: "model entry points argument — the name passed to load can be found on the models plugin page."
A: I don't see that — Jupyter, raw cells, and save changes. Am I doing something obviously wrong here? I'm not seeing this.
D: Do you have a node? I believe so — let's check, because I can see it in my notebook.
A: Okay — perhaps we're running a different version of Python here: jupyter lab, 3.7, site-packages.
A: Notebooks, links, nbsphinx — oh yeah, okay, this is nbsphinx. Would you look at that. All right.
A: I wonder — okay, well, we won't focus on it too much; but if you have it, then see if you can add it, right — when you're editing these, try to do the cross reference, and let's see if it shows up. Let's see — all right — and yeah, okay: "or via the dffml list models command."
A: I was thinking we should add the JSON or config loader output to this thing, because this is the SLR model — yeah, let's see, what do we got here? Okay, these are by class name, yeah. We need to update this to do entry point. And so — I realized this the other day, looking at that output: there are the operations that you'd written, Sahil, and then we only have list models right now, and services, sources — but they aren't very helpful, because they just list the class name.
A: Where was that? Thank you for finding this, okay. And then we need to update the list command.
A: So: support output and config loader — okay, all right! So let's run this. All right, so we've got our models, and we train our models and assess the accuracy — great; visualize the accuracies — and I don't have matplotlib.
A: Why don't I have matplotlib — or wait, no, it's just the wrong version. Okay: first of all, test — all right — and then, okay, and predict: prediction quality, value, okay, and a lot of predictions. Okay, so: for loop — that looks good; validation prediction 2, which is the level 2 — or no, model 2, model 2 — and this is the record.
A: All right, great — and then we have the actual — the predictions, the test data, meta model.
A: Oh, it stops there — I see. Okay, whatever, not a big deal.
A: Okay, so we show the no-async version of load.
A: All right, so that was that. Did we have any other things on that? Let's see — yeah: ref to cached_download, and then we'll be good. And we also linked to the models page for entry points.
A: That way people know where to find the other models — or the list command, which, you know, we'll have to find a way to make a little more manageable.
A: All right — and then, saving and loading.
A: Same thing with cached_download; let's see — same thing with model load.
A: I wonder — okay, so this is another thing: when we do — let's see — yeah, we were gonna save the config too, right. Remember, we talked about saving the entire config of the model into the directory. And so in that case, I guess — now, this is not for right now, but, you know, it would be something like load saved, and then the directory.
A: Is that what we're gonna want? Let's see how we have to do it. This would be the location that's passed, right — so we would do model — we'd need an instance of the — let's see. Yeah, so when we save the config — so, this is right: we have to instantiate the config ourselves; but then, when we are saving the config to wherever the location is — to the directory — ideally we could provide—
A: —you know, a way to load the model just from that, and not need to override the config. So, you know — we talked about what takes precedence. So in the case that we do define it, right, then we would load the saved data, right, but no changes to the config, right.
A: So in the case that we don't define any of this, and we just have the directory, and we want to load the same thing from the directory — or from the location — then we would pass — you know, we'd have to have some method that just loads from the location and figures out the model within it. So essentially that would be — if we're looking at this new location thing.
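A minimal sketch of the kind of method being discussed, assuming the config is serialized as JSON into the save directory; the names `save`, `load_saved`, and `config.json` are hypothetical, not DFFML's actual layout:

```python
import json
import pathlib
from dataclasses import asdict, dataclass

@dataclass
class ModelConfig:
    name: str
    predict: str

@dataclass
class Model:
    config: ModelConfig

    def save(self, location: str) -> None:
        # Persist the entire config alongside the trained model files, so
        # the directory alone is enough to reconstruct the model later.
        directory = pathlib.Path(location)
        directory.mkdir(parents=True, exist_ok=True)
        (directory / "config.json").write_text(json.dumps(asdict(self.config)))

    @classmethod
    def load_saved(cls, location: str) -> "Model":
        # Load purely from the directory: the caller never re-supplies
        # or overrides the config.
        data = json.loads((pathlib.Path(location) / "config.json").read_text())
        return cls(config=ModelConfig(**data))
```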
A: All right — we have location: str, location.
A: Actually, I think we'd have — yeah, because this location really becomes, like, an optional at this point. Look here — let's see: load.
A: I think that maps to what we talked about, Sahil, right. So we would take — you know, this model-saved would take either the string or the dataflow, right, because you'll be creating a dataflow based on the location; and so a user could provide either a string, which we auto-create into a dataflow, or a dataflow itself, and we'd load the saved model. And so that would be—
C: Have a high-level class method, load, already—
A: Yeah, that sounds good to me. So, let's see — so, model loads, okay. So, yeah: does load_saved seem like a good name, or should we do — because we have Model.load for the—
A: Yeah, we wouldn't need a save method, right — is that what you're saying? Or — oh, no.
A: load_saved — oh yeah, yeah — that is a bit of an overloaded method name in there. Okay, so maybe — yeah, let's see — so maybe we should actually repurpose this whole load class method, because this is a little bit — let's see — this is a little bit overloaded, as it were.
C: Another alternative could be to have two high-level functions: one load_source, and another load_model.
A: Yeah, that could be good — let's see. Yeah: one load_source.
A: They also support this thing where it converts — if it's a string or a path, it goes ahead and does BaseSource.load based on that. Which is also really — okay, it's loading based on the suffix of the file path. So if you do a CSV source, then it loads the CSV; or if you say .csv, then it loads the CSVSource; .json loads a JSONSource. Okay.
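The suffix-based dispatch being described could be sketched like this; the class names and the hard-coded map are placeholders for what would really come from the plugin registry:

```python
import pathlib
from typing import Dict, Type

class BaseSource:
    def __init__(self, filename: str) -> None:
        self.filename = filename

class CSVSource(BaseSource):
    pass

class JSONSource(BaseSource):
    pass

# Map file suffixes to source classes; a real registry would be built
# from installed entry points rather than hard-coded.
SUFFIX_TO_SOURCE: Dict[str, Type[BaseSource]] = {
    ".csv": CSVSource,
    ".json": JSONSource,
}

def load_source(filename: str) -> BaseSource:
    """Pick the source class from the file path's suffix."""
    suffix = pathlib.Path(filename).suffix
    try:
        return SUFFIX_TO_SOURCE[suffix](filename)
    except KeyError:
        raise ValueError(f"No source registered for {suffix!r}")
```

This also illustrates the ambiguity raised next: a single overloaded loader can't reliably tell a model location from a source path using only the string.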
A: So if we were to overload this, then — you know, yeah — and we had a model — it wouldn't really reliably be able to determine it, I don't think, because we're already sort of looking at this string parameter here. So — separate functions may be the way to go there. But also — also, I think moving this right into the class might be good. So: high level, or move it into the class.
A: Okay — so, any other thoughts? Anybody got more thoughts on this? So we've basically determined that we need to change that load function to not — not — I think this is probably what we need to go with here. And, I mean, "entry point" may not be the best term. So — any thoughts? Because I think load definitely needs to be—
A: Let's see — okay, here's the other thing, actually: the — okay, yeah, let's see that. So: Model.create, yeah. So, let's see.
A: We should implement the create class method, which takes the model entry point—
A: —and then either the config for the model as the second argument, or keyword arguments that will be passed to the model.
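A sketch of such a `create` class method; a plain dict registry stands in for the real entry-point lookup, and every name here is illustrative rather than DFFML's actual API:

```python
from typing import Any, Dict, Type

class Model:
    # Stand-in registry; the real lookup would go through the installed
    # entry points listed on the models plugin page.
    REGISTRY: Dict[str, Type["Model"]] = {}

    def __init__(self, **config: Any) -> None:
        self.config = config

    @classmethod
    def register(cls, name: str):
        def decorator(subclass: Type["Model"]) -> Type["Model"]:
            cls.REGISTRY[name] = subclass
            return subclass
        return decorator

    @classmethod
    def create(cls, entry_point: str, **kwargs: Any) -> "Model":
        # Look up the model class by its entry point name and
        # instantiate it, passing keyword arguments through as config.
        return cls.REGISTRY[entry_point](**kwargs)

@Model.register("slr")
class SLRModel(Model):
    pass
```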
A: The issue here becomes the same thing that you just ran into, Sahil, with the high-level run function and importing that from high_level — because if you call run from model, and then model implements — and then model calls into high_level, then, yeah, we have a circular import situation. So I was thinking the best way to handle that is probably to move this stuff — let's see — we should probably move, to model—
A: We have ml and dataflow, so we should probably move high_level into its own directory and then split it out into dataflow and ml. I believe that will solve our problem. Let's see.
A: Yes, yes — and that way — because I think you'd run into that, right.
A: Hopefully that solves that — I believe that would solve that issue. So, Model.create — so we decided we want to do this; and then we decide for loading. So this is our loading — because this is a loading code block we wanted to do. Okay, so we're going to need to do an await on this.
A: This load would have to turn into an await call, and then this would be the location or the dataflow; and then it would use the high-level dataflow — or high-level run — function if it is a dataflow. Otherwise — what does it do? Well, it creates a dataflow. It's the same sort of flow that you're dealing with right now, where — and we'll address that in a second. So essentially we could actually just go and implement this, okay.
A: So let's finish up the — this. This looks good to me. Let's just make the same tweaks that we made to the last one, with referencing cached_download; and same thing on the load-dataset one: let's show both versions here, and then let's link to the entry points page.
C: About the train output — I get some messy outputs on my notebook, right — like some CUDA warnings — and it also shows my local paths. So I decided to—
A: All right, let's see — let me read this: "I don't get any outputs on train at all, other than warnings about CUDA and my local paths." So, okay — and that's what you just said. "Wasn't sure if these loader outputs are to be expected at all; should the train output be here? Also, the feature extraction example doesn't have any layers replaced to accommodate rock-paper-scissors."
A: "I wasn't sure what the default is" — and about the other one: on Gitter I saw you replied to my queries. "I had assumed we're adjusting layers to follow — and then add layers is false — and classifications to define it; but the answer was to have the model's layers adjusted to the number of classifications by default, rather than setting them to a thousand or whatever." Yeah — that seems like something we should change. "Similarly, fine-tune: in the example, the out_features should be three and not seventeen." Okay.
A: Okay — fine_tune's init, okay. So this is basically where Saksham said, let's add some layers. So if we're adding layers to things, then trainable is true, I believe, right — or, what's features, predictor — why are we — okay.
C: Basically, he's saying that the difference between the two use cases is just setting the trainable bool to true or false. So we can just, you know, showcase one of the use cases.
A: Okey-dokey — so, yeah, it could be good, I think; might as well. Let's see — wait a minute — so, of course, okay. So: in this demo we'll be using the rock-paper-scissors image classification dataset.
A: Let's see — "CNN for feature extraction: using the representations learned by a previous network to extract meaningful features from new samples; you simply add a classifier, which will be trained from scratch, on top of the pre-trained model."
A: "You simply add a new classifier, which will be trained from scratch, on top of the pre-trained model. In this approach, we generally freeze all the weights of the layers except the final layers, and in DFFML, all the weights of the other layers are frozen by setting trainable equals false." Okay, okay, okay, all right. Let's see — fine-tuning the CNN. I'm wondering, you know, about the word "trainable" and whether that's the best word for this. So, fine-tune: it's "unfreezing the weights of the top layers of a frozen model base and jointly training."
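The feature-extraction versus fine-tuning split being debated (everything frozen versus top layers unfrozen) can be sketched without any framework; `trainable` mirrors the config flag under discussion, and real code would flip `requires_grad` on the corresponding PyTorch parameters instead of a plain attribute:

```python
from typing import List

class Layer:
    def __init__(self, name: str) -> None:
        self.name = name
        self.trainable = True

def configure_transfer_learning(layers: List[Layer], trainable: bool, unfreeze_top: int = 1) -> None:
    # Freeze the whole pre-trained base; if `trainable` is True
    # (fine-tuning), also unfreeze the top `unfreeze_top` layers so they
    # train jointly with the newly added classifier.
    for layer in layers:
        layer.trainable = False
    if trainable:
        for layer in layers[-unfreeze_top:]:
            layer.trainable = True

base = [Layer("conv1"), Layer("conv2"), Layer("fc")]
configure_transfer_learning(base, trainable=False)   # feature extraction
extraction_flags = [layer.trainable for layer in base]
configure_transfer_learning(base, trainable=True)    # fine-tuning
finetune_flags = [layer.trainable for layer in base]
```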
A: Model — PyTorch — update. Okay, a help message for "trainable".
A: Transfer learning — or what is it? Is this a transfer—
A: Great, okay — wow, creating issues got a lot faster. All right. So: "fine-tuning: unfreezing the weights of the top layers of a frozen model base and jointly training both the newly added classifier layers and the last layers of the base model. It's done by setting the trainable equals true parameter. This part is to customize the model before performing the fine-tuning of the CNN." All right, great — cross-entropy loss function, okay. And I think we're supposed to get entry points for these as well.
A: So — oops, matplotlib, jeez — okay. So: build our dataset. Now let me actually reload this thing — discard.
A: I mean, it won't matter if you're doing — yeah, it won't matter if you're doing the reference. So let's just make a note of that. So—
A: For — let's link to—
A: Dir — yeah, there's, like, no docs on this, so we need to make sure we have docs on that. I'm thinking about this as maybe an opportunity to explain it a little more. So: "feature" — "folder name is what folder it is"; "labels: rock, paper, scissors" — because, you know, it really doesn't tell us much, right. Let's see.
A: Let's just add a little note. So "feature" is the feature name — the loaded data, or the — okay. So "folder name" — and this should really be "directory", if we're doing a dir source — all right: "is the directory we are loading from". "Labels"—
A: "Folder name" — yeah, okay. These will be — we'll take this, and we'll update these, and we'll link to this. So let's link to this dir source — let's link to the dir source and the plugins page, which — let's see, where's the — there's no "view source" on this, yeah. So there should be a tag.
A: The tag that we need to link to — or the reference, I'm wondering.
A: Okay — this is the label that's assigned, okay, yeah. We need better documentation on this directory source thing. So basically, it's going to iterate through all these; it's going to assign the feature to be the image, and it's going to assign the label to be rock, paper, or scissors based on the directory name.
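What that iteration amounts to, sketched directly: the label comes from the name of the directory each file sits in. The `image` feature name and the record shape here are assumptions for illustration:

```python
import pathlib
from typing import Dict

def label_by_directory(root: str, feature: str = "image") -> Dict[str, dict]:
    # Each file under root becomes a record: the feature holds the file
    # path, and the label is the name of its parent directory
    # (e.g. rock/, paper/, scissors/).
    records = {}
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            records[path.name] = {feature: str(path), "label": path.parent.name}
    return records
```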
A: "Feature" is the name of the feature within the record that will contain the image data — the data, or — let's see — the labels: rock, paper, scissors, yeah. Okay, so we need — let's just get Saksham to explain this some more, because this is, like: where will the label go? You know: feature, predict, label — okay, yeah, so that needs to be configurable as well. So that's a problem.
A: So, let's see: "the label feature will be set to"—
A: —"corresponding to the directory the image was found in." Is that correct? Yeah? Okay. So this is not predict — or, no: predict, label; classifications: rock, paper, scissors; features.
A: All right, so we load the model. Okay — you added your layers, you loaded the model, and then now we leave—
A: Let's actually set these values here, and then mention that they're the defaults — because explicit is better than implicit, especially for a tutorial. So let's actually set these, right: we set trainable, and then we can say "but it is also the default value"; and similarly, we're leaving out — or, we set pretrained even though it's the default value, because we're about to change it, right — we're about to use a different value down here, right. So: trainable equals true, and pretrained.
C: Yeah — because pretrained is true for both of them.
C: I was saying you actually have to make this bool value true if you're dealing with pre-trained models.
A: Okay, all right, okay — and you have the testing. Perfect — looks good. Yes, so let's just set trainable to false here, just so that there's, like, a — you know — a difference. And then, what else did we say? Then — yeah, let's get some explanation from Saksham for the directory. So let's actually just — we can just create an issue for that, and if you link to it, that'll be enough, because then we can update it later, right. Okay — looks good. So—
C: —doesn't happen, yeah. If we make the use case for tuning — for feature extraction — then we can say that if you wanted to do it for tuning the CNN, you can, you know, set the bool value for trainable to the other one.
C: Yeah, we are — because it's essentially the same; we are using the same layers. Oh—
C: Oh — sorry, yeah: Saksham said that you still have to use the last layers.
A: Yeah, yeah — so, like, what do we really gain out of that? And it does — it looks like the results — I mean, the results aren't any different, right. So, let's see: the accuracy is 90 versus the accuracy is 97. So at this point, yeah, I don't know if we get much out of doing that last portion and freezing. I—
C: The one in the cell that's training the model — "so let's train" — yeah. So — oh, this, yeah — so I've been clearing the outputs for all the training.
A: About the more detailed — okay. No, we also need — so, source: str.
A: Oh, okay — okay, great! So, let's see — so we gotta — okay, we gotta move on; we don't have a ton of time. So let's talk about this. So, Sudhanshu — what did you want to talk about today?
A: Okay, let's see — yep, okay, okay, yeah, okay — crap, I forgot about this. Sorry. Okay — I'm thinking — this is already a long meeting, so—
A: Yeah, that's just a helper test for us — so, all right, okay! So let's just make a note of that. All tests are passing, okay; only the lines check on that message is failing — okay, but verified, okay. Okay: implement dataflows for save/load, and then the create command — okay, so—
A: The dataflows for save/load, and the create command — all right. So: the dataflows for save/load of location, okay. Let's just talk about the create command real quick first, and then we'll talk about that. So — what did you do? Do you have anything you wanted to show us, or—
E: So, previously, what we discussed was that — so, we had something like this, right, where we discussed that if you want to perform some operation on some feature—
E: Yeah — so we had to, like, perform a specific operation on one of these source features, right.
E: We had to provide where the data should go — the operations.
E: So we're actually now doing it with the help of a dataflow. So what I have created here — this is the create-shorthand-command dataflow. So — what I had to discuss, like — doing the pre-processing thing — so I'm thinking of providing the input something like this. So I've taken that same example, but this is the operation that you want to perform, and the inputs array, which should give it the value from the source features; and the denominator should be provided from something here — the seed value.
E: So this was my initial thinking — like, how we can provide the values — but I'm also unable to figure out — like — so I have a dataflow here; like, when I do pre-processing here — so let's suppose we added another feature, like another entry here, "pre-processing", and we provide it the operations, the features which need the processing.
E: So — so we have a dataflow, right.
A: So in this case it was — let's see — so we have this image feature, and we wanted to normalize the image. So: search image, source — okay; features: image — yeah. Basically, we're just taking every image and we're running the array of the image data through this array-normalize operation, and then we're saying that the features' image feature is "array".
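That normalize step, sketched as a plain function; the operation and feature names stand in for the real dataflow operation being configured:

```python
from typing import List

def array_normalize(array: List[float], denominator: float) -> List[float]:
    # The normalize operation under discussion: divide every element of
    # the image array by a fixed denominator (255 for 8-bit pixels).
    return [value / denominator for value in array]

record = {"image": [0.0, 51.0, 255.0]}
# The feature's value feeds the `array` input; 255 is the seeded denominator.
record["image"] = array_normalize(record["image"], 255)
```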
A: Let's see — the feature — the value of the feature goes to the array input, right, because there's "array" and "denominator". Okay — so I think — let's see.
A: Let's look at this, and look at how we might, you know, do a shorthand-create version of this, right. And so what we want to do here is: we want to take — okay — city, month, and state, and put these into the lookup population and lookup temperature, right. So this might be something like — here, let's see — okay, so: inputs, get_single, spec — so we want—
A: —and then that would eliminate — because we're just trying to think about, like, how do we shorten this command, right — that would eliminate the need to pass the get_single spec stuff. For the flow, we're saying: okay, we want the city to go to the city input, month to go to the month input, city to go to city, and state to go to state.
A: So let's see what we could do here. So this is about the inputs that we're passing to the network — so this is kind of that feature stuff, where it's like: okay, where does that end up? So—
E: So it's very similar to the create command — this is actually similar to the create command — but the only part which I was, like, thinking about was having some pre-processing.
A: I think that — so, what we did with the pre-processing sources: we essentially said, we know what input data we have, right. The input data that we have is any features from the records, right, and the definitions become the record — the feature names, right. And so then it was really a matter of saying: okay—
A
Well,
for
each
we
we
can
assign
one
operation
to
to
change
the
data,
which
was
the
pre-processing
of
each
feature
right
so
for
the
shorthand
create
command,
we
could
take
a
similar
approach
and
we
could
say
you
know
we
could
basically
say
because,
because
here
in
the
long
version
we
have,
you
know
we're
saying
city
months
and
state
right
and
we're
telling
it
where
those
should
go
right.
A: Yeah — let's see — you know, I think that copying that syntax that we had might actually still be a good way to go, and implementing the dictionary — the config dictionary stuff — because this — yeah, yeah, right. So, essentially, though, the point of this was really — you know, we had this; this was the — let me go back; let me flip to this issue as well, so I can have it open — this one, yeah. This is from a while ago — let's see — yeah, we talked about it.
A
Semi
recently
didn't
we,
okay
choose
all
right
yeah,
and
this
is
when
we
talked
about
the
shorthand
command
so,
and
this
is
from
january
of
2020,
okay,
so
shorthand
for
oh-
and
this
is
the
issue
now,
that's
why
it's
been
renamed
already.
Okay,
I
was
searching
for
the
old
issue
title
all
right,
so
the
main
thing
here
that
needed
to
happen
is
is
the.
A: If you do that, then you can — if you implement that support for the dictionary, right, then you can begin to traverse that dictionary and link up the inputs and the outputs of the various operations in your dataflow based on those dictionaries, right — which is what we were sort of showing down below, right.
A: So, if you were to provide the command line flags that say — so, for example, if we go down, right, and we're looking at your example, right, with the — maybe we should — here, let me sort of pop open — let me present, and then I'll show what I was thinking, with that example that you have with ice cream sales.
A: Okay — and where did we have it — or, no, it's within the demo, yeah. Okay, here's the create command, all right. So: what if we were to take this and do a shorthand of it? That would be—
E: Sorry — the shorthand — actually, the main idea behind shorthand creation is that we can actually modify the records while in the flow.
E: Was that in the ice cream — in the ice cream demo?
E: We had, like, data points, right — city — so, while going through the dataflow, like, we had to bring in the population and the temperature, and so we actually did that using the merge command.
E: Similarly, like, we had the idea of the chain command — yes — where we take the data — pre-processed data — from one data—
E: All right — no, we don't have any.
A: If we follow this pattern that we had here, just for the sake of example — dataflow create short, right — we could say we have our input data — and we'll just, you know — this is not what we'd want to call things right now, but — right, so: city—
A: —month, and state, right. And so this would be our input data, and then, you know, the flow would basically be — you know, let's see — well, what did we have here? So, this—
A: City op would be lookup temperature, and then the city—
A: —just — so the city — the data of the — or, the city value: so, city operation goes to lookup temperature; the value of the city goes to the city input; and then the inputs: lookup temperature, month — okay, yeah. So this is where — awesome.
A: Now this breaks down — so, month op — yeah, you really have to define this by — let's see — can you do it by operation?
A: Op: lookup temperature — you know, so city is city, and then month — I'm trying to play with what the syntax would be, right. Because if we did it this way, right, now all of a sudden when I do month, it's like: well, okay, is this the same instance of lookup temperature? Is it a different instance of lookup temperature? So the operation for month is lookup temperature — because — but now you need to put month in two places.
A: Yeah — you see what I'm saying, like — because the previous approach that we prototyped here was by feature data, right; but if you're looking at, for example, this one that you had just been messing with — like the one that we just had as an example here — it becomes clear that you want to define the dataflow — the shorthand dataflow — by operation, right, because you're going to have multiple pieces of data going to different operations, right. So you could do another—
A: —is 255. So then it's like: okay, well, is this a hard-coded value, or is this — okay, well, yeah — okay: so if it doesn't appear here, then it's a hard-coded value. But also, you know, what if you have a feature name that is the same as the hard-coded value, right? So I don't know — these are options, right. I think that if you want to do a short command, it seems like you probably should — denominator, yeah: so array comes from image; denominator is 255. So—
A: Yeah, you could do something like input: image, or value: 255 — but that's not clean, I would say. Array, yeah — because you have to think about — the other thing you have to think about is: how is this going to look as, like, a YAML representation or a JSON representation? Because we still have to support — we're going to try to support the config files eventually, right — and that's hopefully soon. So, yeah — your trick here is really: how do you — how do you—
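One hedged guess at what such a shorthand could serialize to — keyed by operation, with each input either naming a record feature or carrying a seeded literal. This is purely illustrative, not a settled format:

```python
import json

# Hypothetical shorthand config: one entry per operation instance; each
# input either pulls from a record feature or is a seeded literal value,
# which sidesteps the "is 255 a feature name or a constant?" ambiguity.
shorthand = {
    "operations": {
        "multiply": {
            "inputs": {
                "array": {"feature": "image"},
                "denominator": {"seed": 255},
            },
        },
    },
}

# The structure round-trips through JSON, so the same shape would work
# for YAML config files as well.
restored = json.loads(json.dumps(shorthand))
```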
A: All right, sweet — and then the last thing would be — okay: so then you would have, you know, input, or normalize. So you would have to implement the dictionary type on the config to do that; and then you'd have to implement the same — seed would also be a dictionary type. And what do we have here? So this is inputs — so: array, normalize, image; array comes from image.
A: CLI — dataflow create config — inputs.
A: The seed section should really be inputs, and then origin should be set as "seed" for each of them — because the whole concept with seed is that it allows you to clearly know where your input data came from, right. And so, if you're dealing with trusted versus untrusted input, then you know right away that the seed stuff is coming from—
A
You
know
predefined
and
if
you
said
maybe
origin
equals
untrusted,
then
you
would
know
that
you
can't
trust
this
data
right
and
you
wouldn't
feed
it
to
operations
that
need
only
trusted
data,
so
anyways.
Okay.
Does
that
give
you
something
to
play
with?
I
think
I
think
the
moral
of
the
stories
you
probably
need
to
implement
the
dick
stuff
on
on
config.
If
you
want
to
do
this,
because
I
don't
I
mean
I
don't
see
a
path
forward
that
doesn't
involve
that.
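The origin idea can be sketched as a tag carried on each input; the origin names here are examples only:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Input:
    value: Any
    origin: str  # e.g. "seed" for predefined data, "untrusted" for user data

def trusted_only(inputs: List[Input]) -> List[Input]:
    # Operations that require only trusted data would be fed from this
    # view, so untrusted input never reaches them.
    return [item for item in inputs if item.origin == "seed"]

inputs = [Input(255, "seed"), Input("user-supplied", "untrusted")]
trusted = trusted_only(inputs)
```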
A: So this is config dict! All of this stuff — I mean, all this stuff is kind of a mess, because we're still in the middle of transitioning it to one unified thing, right. So this config, right — so here's our config class. Didn't we just change this?
A
Maybe it's in a PR, crap, okay, whatever. So these config classes, right. So if we go in, if we write a class that's like the one you just showed, which is, you know, this stuff here, right? Yes, so the implementation trick here is: what do you do when you see, right, and then what do you do when you see a data type here, right? Because what we're doing is in this convert value. Let's see, this isn't convert value, let's see, yeah.
A
So when we get into convert value, we say: okay, what's the argument and what's the value? And the args come from mkarg, this function here, right, and this is basically inspecting the data types of the fields on the config class, right. So you're going to get something that represents features. It'll say the data type is dict, right, and then you're going to need to go in, and I believe it's field annotation, so, and see arg annotation, right. So you'll come in and you'll basically say.
A
Okay,
if
it's
instance,
value
stir
type
class,
is
not
stir
dick
today
class.
So
this
is
saying
this
basically
says:
okay,
if
that,
if
the
field
is,
if
this
is
a
data
class
right,
then
you're
going
to
go
through
and
change
the
dictionary
to
the
data
class.
Okay-
and
this
is
probably
around
where
you
might
want
to
do
it
in
here-
is:
let's
see
you
know
what
is
this
so
type
in
arg
yeah,
so
convert
equals
true.
A
Okay,
so
this
does
things
like
unions
and
stuff
trying
to
convert
the
type
to
the
arc.
Let's
see
okay,
yeah,
and
it
also
handles
things
that
are
already
there.
A
So
you'll
probably
end
up
in
this
thing
here
where
you're
saying
okay,
this
you'll
probably
add
another
statement
here
where
you
say:
if
it's
a
dictionary
and
the
annotate
or
well
maybe
you'll
end
up
in
here.
If
it's
a
dictionary
and
the
annotation
is
of
type
dict,
then
you
need
to
go
in
and
then
you
need
to
recursively
call
this
convert
value
function
on
each
on
each
value.
In
that
dictionary
right
and
then
you
need
to
return
the
then
you'll
return.
A
You'll
return
a
dictionary
with
each
value,
so
the
keys
stay
the
same
and
the
values
get
passed
recursively
through
this
function
again.
I
believe
that
that
is
what
will
happen
here
for
you
now
now
you
you'll
find
that
this
is
much
trickier
than
that,
because
the
config
code
is
hard,
so
all
right.
Okay,
so
are
we
good
on
that?
Then?.
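The recursion described above can be sketched roughly as follows. This is a simplified stand-in for the convert-value logic, not the project's actual function, and FeatureConfig is a made-up config class: when the annotation is `Dict[K, V]`, the keys stay the same and each value is passed back through the converter.

```python
import dataclasses
import typing

def convert_value(annotation, value):
    # Simplified sketch of the conversion step described above.
    origin = typing.get_origin(annotation)
    if origin is dict:
        # Dict[K, V]: keep the keys, recurse on each value with the
        # value-type annotation.
        _, value_type = typing.get_args(annotation)
        return {k: convert_value(value_type, v) for k, v in value.items()}
    if dataclasses.is_dataclass(annotation) and isinstance(value, dict):
        # Plain dict destined for a dataclass field: build the dataclass.
        return annotation(**value)
    return value

@dataclasses.dataclass
class FeatureConfig:
    dtype: str
    length: int

converted = convert_value(
    typing.Dict[str, FeatureConfig],
    {"age": {"dtype": "int", "length": 1}},
)
print(converted)
```

The real implementation has to handle unions, nested containers, and values that are already converted, which is where the extra trickiness mentioned above comes in.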
E
A
A big thing to have, so that would be great if you feel like doing that. All right, so then the location rename. I'm gonna just double check that before I merge it. So: implement data flow save/load for location. I hope, I think, this is a pretty good segue. We talked about saving and loading, we talked about data flows, and now we're talking about saving and loading data flows. So forgive the long timeline here, but at least we are on topic.
A
So let's take a look at that stuff we were just doing with the create class method. So yeah, we talked about, you know, create the model, and our previous code had, like, the union and stuff. So if we go and we look at model.
A
We talked about basically taking the load method and making a create method. We talked about splitting out the. Let me capture this in the notes, I talked about.
A
Making, we'll play with config code to implement predict support, okay. So, oops, so: talked about making a model.create class method, and talked about moving the high-level code into a directory and splitting it out into files such as high level/data flow, similar to cli.
A
What if we had this way of saving and loading models, right? Okay. And this is obviously feeding off, is this coming from the directory-to-location stuff? So if we change directory to location, and we talked about, okay, so this here is clearly modifying the config parameter, and I think we talked about it a little bit in the issue comment, that we probably want to move to something like, you know, just setting the property, right?
A
So that way, you know, because this is something that's dynamic for the lifetime of the class. So say we set the property rather than changing the config parameter, right. And so, okay. So then what we want to do here is basically say, on enter.
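The "set a property instead of mutating the config" pattern being discussed might look like this. All class and attribute names here are illustrative, not the project's real classes; the point is that per-lifetime state lives on the instance while the config object stays untouched (freezing the config makes accidental mutation an error).

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)  # frozen: any attempt to mutate the config raises
class ModelConfig:
    location: Path

class Model:
    def __init__(self, config: ModelConfig):
        self.config = config
        # Per-instance, mutable copy; dynamic for the lifetime of the object.
        self.location = Path(config.location)

    async def __aenter__(self):
        # On enter, only instance state is consulted; the config object
        # is never modified.
        if self.location.is_file():
            pass  # load from the saved location
        return self

model = Model(ModelConfig(location=Path("model.tar.gz")))
model.location = Path("elsewhere.tar.gz")  # fine: instance attribute
print(model.config.location)               # config is unchanged
```

Mutating `model.config.location` directly would raise, which is exactly the guarantee wanted here.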
A
You know: what can we load? Can we load this thing? And we wanted to create a data flow based off of the location parameter, which is, you know, the directory, which will be the location. And let me just make sure, yeah, so.
B
It is 11.55 only, but it was a previous checkout, so the previous checkout.
A
B
A
A
B
A
Okay, perfect, all right. So yeah, so self.location is file, so yeah. So we said basically we'll make config.location just be location, and then let's, you know, we'll just only call this when we're, you know, there.
B
Actually, I did it because it was going to be called twice, once on enter and once on exit. So I just made it.
A
I see, yeah, okay, perfect. So, and actually, you know, this is going to be self. We will just leave this as self.config.location is file, because we won't change it, right. And that way we don't modify the config object itself, which means that we should probably be marking, we'll probably mark, yeah.
A
Let's
see
yeah,
okay,
that's
that
looks
good
because
because
we
talked
about
the
mutability
stuff
too,
and
so
if
somebody
mutates
the
location
parameter
in
between
saving
and
loading,
we
would
want
to
make
sure
that
we
we
we
save
out
to
the
to
the
new
to
the
newly
modified
place
right
here:
okay
and
if
you're
using
config
location
it'll,
do
that
all
right.
So
then
we
dump
the
config.
A
And I think we want .export here.
A
We can change that, though. So: export the thing, output location, config file location, run operation, okay. So, and this was the run operation stuff, okay, and create data flow. Okay, create data flow, yeah. Okay, and this made me think, let's see.
A
What was the one that we merged, the models? Let's see, or the operations. I mean operation archive, this guy.
A
B
Yes, but they will be dropped later on, as per your feedback, something.
A
B
A
Okay, so I think, this is probably, okay, so I think we need to make outputs on these, because what I realized was, and on the compression ones themselves, so yeah. I think we need to make outputs, because, all right, this is still like this, okay. So I think we need to make outputs, because if we don't have outputs, you know, everything is event based.
A
Yeah, maybe these should be like. I think this was sort of, because, looking at the code, where is.
E
A
Equals, you know, model.tar.gz. You know, we load the, let's say we take suffixes, and we'd say, you know, if there's two extensions.
B
So
the
thing
is,
like
the
extensions
may
be
written
differently,
dot,
dot,
dot
g
may
even
be
written
as
t
g
z,.
D
B
Similarly. So the best way to know if a file is an archive or not is using the libraries' is_zipfile or is_tarfile functions, and then getting the tar file info or something like that, to check if it is an archive or not. That is much more reliable.
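The content-based check suggested here uses real standard-library calls (`zipfile.is_zipfile` and `tarfile.is_tarfile`), which inspect the file's contents rather than trusting its extension. A minimal sketch:

```python
import tarfile
import zipfile

def archive_kind(path: str) -> str:
    # Content-based detection: works regardless of whether the file is
    # named .tar.gz, .tgz, or something else entirely.
    if zipfile.is_zipfile(path):
        return "zip"
    if tarfile.is_tarfile(path):
        return "tar"
    return "unknown"
```

`is_tarfile` also recognizes compressed tars (gzip, bzip2, xz), so a `.tgz` file is reported as "tar" even with an unusual name.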
A
Yes,
yeah
you
could
you
could
do
so.
You
could
definitely
do
that
right,
but
then
you're
gonna
end
up
so
in
the
case
where
okay,
so
we
so
you
can
def
yeah.
You
can
definitely
do
that.
We
need
the.
The
point
of
this,
though,
is,
is
really
to
create
the
data
flows
right,
so
it
we.
We
really
just
need
to
get
like
a
minimum.
A
Thing
working
right
and
yeah,
okay,
so
yeah
you
can
go
and
you
can
do
zip
file
tar
file.
The
the
point
is
to
to
build
to
build
the
data
flow
right.
So
I
ideally
what
you
would
do.
Is
you
figure
out
what
extensions
you
need
right
and
then
you
load
the
operations
that
you
need
based
on
the
extensions
that
you
have
right.
A
So,
for
example,
if
you
had,
if
you
had
like
you,
said
so
tgz
right,
then
you
would
map
this
to
tar
and
then
gz
right
and
then
you
would
load
the
archive
operation
for
tar
and
the
compression
operation
for
gzip
right.
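The suffix-to-operations mapping just described can be sketched like this. The operation names and the alias table are illustrative stand-ins, not the project's real registries; the real operation objects would come from the archive and compression plugins.

```python
from pathlib import Path
from typing import List

# Hypothetical operation names, for illustration only.
ARCHIVE_OPS = {".tar": "tar_extract", ".zip": "zip_extract"}
COMPRESSION_OPS = {".gz": "gzip_decompress", ".xz": "xz_decompress", ".bz2": "bz2_decompress"}
# Combined extensions expand to their parts: .tgz -> .tar + .gz
ALIASES = {".tgz": [".tar", ".gz"], ".txz": [".tar", ".xz"]}

def operations_for(path: str) -> List[str]:
    # Expand aliases, then map each suffix to the operation that handles it.
    suffixes: List[str] = []
    for suffix in Path(path).suffixes:
        suffixes.extend(ALIASES.get(suffix, [suffix]))
    ops = []
    for suffix in suffixes:
        if suffix in ARCHIVE_OPS:
            ops.append(ARCHIVE_OPS[suffix])
        elif suffix in COMPRESSION_OPS:
            ops.append(COMPRESSION_OPS[suffix])
    return ops

print(operations_for("model.tgz"))
print(operations_for("model.tar.gz"))
```

With the alias expansion, `model.tgz` and `model.tar.gz` resolve to the same pair of operations, which is exactly the spelled-differently problem raised above.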
B
A
A
B
For example, I created one with gzip, but didn't name it as .tar or .gz, just named it plainly. It will end up being in a different bucket of data flows, which it shouldn't be in.
A
B
A
A
I don't know, I mean, I guess. So this is something to consider, right, but sort of the point of this is, it can get arbitrarily complex very quickly, right? Because I think zipfile also incorporates its own compression algorithms.
A
I
believe
they
have
some
internal
support
for
that
tar
file.
I
let's
see
tar
file,
I
don't
know
okay
yeah,
so
they
have
lcma
compression.
Do
they
have
it
built
in
yeah?
They
do
have
it
built
in
all
right,
so
yeah,
so
they
have
it
built
in,
in
which
case
we
really
don't
even
need
to.
We
didn't
really
need
to
do
the
compression
algorithm
separately,
anyways
all
right,
so
it
seems
like
we
don't
even
need
to
do
the
compression
algorithms
separately
if
they
already.
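For reference, `tarfile` does handle gzip, bzip2, and LZMA (xz) itself: the `"r:*"` mode transparently detects the compression, so no separate decompression operation is needed. A minimal sketch:

```python
import tarfile

def extract_all(archive_path: str, dest: str) -> list:
    # "r:*" lets tarfile auto-detect gzip/bzip2/xz compression,
    # so .tar, .tar.gz, .tar.bz2, and .tar.xz all work here.
    with tarfile.open(archive_path, "r:*") as archive:
        archive.extractall(dest)
        return archive.getnames()
```

This is why the extraction data flow only needs the archive operation; the compression step comes for free.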
B
A
Yeah, okay. So for now, let's just focus on zip and tar files, right, and let's create two data flows, right. So let's only focus on zip and tar files, and, you know, as if they're named properly, right. And let's create two data flows, right: so create one data flow on extraction that just runs the, yeah, so you have, like, make zip archive and extract zip archive, right.
A
So
on
a
inter,
if
you
see
a
zip
extension,
then
or
or
you
could
do
you
know
one
yeah,
why
don't
you
do
the
zip
file
right?
So
if
you
see,
if
you
see
you
know,
self.config
lock,
location
yeah,
so
if
you
see
the
location
is
a
zip
file,
then
run
the
zip
arc,
the
zip
extraction
right
that
operation
and,
let's
see
we'll,
create
the
data
flow.
So,
let's
see
we
want
to
return,
create
data
flow
of
input,
directory
path,
output,
directory
path,
operations,
file,
type
action,
operation,
zip,
extract.
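The on-enter dispatch just described might be sketched as follows. The returned dict and the operation names are hypothetical placeholders for whatever the real create-data-flow helper takes; only the `is_zipfile`/`is_tarfile` checks are real standard-library calls.

```python
import tarfile
import zipfile

def create_extraction_dataflow(location: str) -> dict:
    # Sketch: pick the extraction operation for the archive at `location`
    # and describe the data flow that would run it.
    if zipfile.is_zipfile(location):
        operation = "zip_extract"
    elif tarfile.is_tarfile(location):
        operation = "tar_extract"
    else:
        raise ValueError(f"{location} is not a supported archive")
    return {
        "input": location,
        "output_directory": location + ".extracted",
        "operations": [operation],
    }
```

On enter, the model would build this flow from its location and run it before loading.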
A
B
A
Okay, great. Okay, so yeah, this simplifies things. I was thinking that we were going to involve the compression operations as well, which made it more complicated, because they didn't have outputs. And so I think the fix that we need is: we need to make sure that the compression operations and the archive operations both have outputs, because otherwise you can't really chain them together with other operations. What.
A
You
can
output
the
same
file
path.
It
just
has
to
be
a
different
definition,
because,
or
else
it'll
have
a
you
know,
you'll
end
up
with
an
infinite
loop
there.
So
we
just
need
to
output.
We
need
an
output
from
each
of
those
operations
so
that
if
you
were
to
hook
them
up
to
another
operation,
then
you
know
they
would
trigger
it.
So,
let's.
A
Yes,
definitely,
let's
do
it
so,
let's
see
and
then
I
think
yeah
it
looks
like
you're
under
the
the
right.
You
you've
got
it
down
there
perfect
all
right.
So.
A
All
right,
so,
let's
do
operation
operation
archive.
A
Yeah,
so
operations
need
an
output.
This
is
so
the
completion
can
trigger,
because
all
the
all
the
data
flow
stuff
is
based
on
like
events
right
and
the
output
of
one,
the
completion
of
one
operation
generates
an
output
which
is
an
event.
That's
an
input
to
other
things,
so
vision
can
trigger
other
operations.
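A toy illustration of why every operation needs an output in an event-based flow: the output event of one stage is the only thing that can fire the next stage. Names here are illustrative, not the framework's API.

```python
def run_chain(stages, value):
    # Each stage consumes the most recent event and must emit a new
    # output event, otherwise nothing downstream ever fires.
    events = [("start", value)]
    for name, func in stages:
        _, payload = events[-1]
        output = func(payload)
        events.append((name + ".output", output))
    return events

events = run_chain(
    [
        ("gzip_decompress", lambda p: p[: -len(".gz")]),    # model.tar.gz -> model.tar
        ("tar_extract", lambda p: p[: -len(".tar")] + "/"), # model.tar -> model/
    ],
    "model.tar.gz",
)
print(events)
```

If `gzip_decompress` had no output event, `tar_extract` would have no input to react to, which is exactly the chaining problem described above.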
A
The
definition
name
is
not
the
same
as
the
output
file
name,
input,
definition,
name:
okay,
does
that
make
sense?
A
Yes, okay, great. And then let's do another one for compression. So great, perfect, all right. This greatly simplifies things, without the, well, okay. So it doesn't greatly simplify things, but you don't.
A
Doing these right now, because, as you said, the archive operations support the compression algorithms anyway, so that solves that. Great, okay, perfect. So I'll take a look at that, and then we can stop the meeting now and we'll get that merged. So, perfect. All right, anything else from anyone?
A
Let's see. Oh, in our one-on-one chat or here?
A
C
A
Oh, multi-output models, let's see, yeah. Well, what did you want to discuss?
C
A
I think, yeah, that makes sense. Okay, and I would say, you know, I would probably go ahead and make predict a union of a list of features and a single feature.
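The union type being suggested can be sketched like this. The Feature class and helper names are illustrative stand-ins: predict accepts either one feature or a list of features, and is normalized to a list internally, which is the single condition the code needs.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Feature:
    name: str

# Hypothetical sketch: predict is either a single feature (single-output
# model) or a list of features (multi-output model).
PredictInput = Union[Feature, List[Feature]]

def normalize_predict(predict: PredictInput) -> List[Feature]:
    if isinstance(predict, Feature):
        return [predict]    # single-output: wrap in a one-element list
    return list(predict)    # multi-output: already a list of features

print(normalize_predict(Feature("price")))
print(normalize_predict([Feature("price"), Feature("volume")]))
```

Downstream code then only ever deals with a list, regardless of which form the user passed.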
A
Yeah,
so
I
think
I
I.
I
think
that
if
you
went
in
and
made
like
you
know,
I
think
if
you
went
in
and
made
predict.
A
Yeah,
okay,
perfect
and
eventually
we
need
to
drop
that
features
class.
I
don't
even
think
it's
doing
anything
at
this
point,
so
all
right,
great
cool
yeah.
So,
let's
just
let's
implement
within
psychic
base
for
native
progressors
awesome.
C
Sweet. So will we be doing the same for classifiers, no?
A
Yeah, whatever natively supports it, you know, without wrapping in the multi-output wrapper.
C
The
classifiers
don't
natively
support
multi-output,
but
you
know
it's
essentially
just
calling
the
rapper
so.
A
C
We just have a condition for, you know, when predict is a list, or if it's a single feature.