From YouTube: Weekly Sync 2020-06-16
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.hsgtjmur9itz
B
C
Let's see — I haven't seen that; let's see. Oh — did you maybe get — did you have an extension installed or something? Because I know they have an extension.
D
C
This looks like it's working, so — but we'll just leave it up. Just — just double check — [unclear] — good to recheck that. Okay, so that's settled there, that's great — nice! Okay! This will be great. That was one of the key ones we needed to get done before the next release, so.
B
E
C
Okay, so, all right now, let's go through and get everybody. So I'm not sure — do you want to go next? Just — I.
C
C
F
A
A
E
C
A
C
Alright, alright, sorry. So yeah, let's — let's talk about the question answering model.
D
Yes, so I was able to do the training part, but when I was using the evaluation method, it got really, really messy in TensorFlow. So then — yeah, I mean, I tried; I spent two or three days doing that, but it got really messy and it was quite difficult. So I — I switched to PyTorch, and now it's coming along nicely so far. Nope.
D
Yeah, the problem is the SQuAD metric. So there is a particular way of checking whether the model is performing well or not; that is the SQuAD metric that is given with the latest [unclear]. That is quite a complex problem — using that is quite challenging if you use TensorFlow, and TensorFlow is really — I mean, if we are doing everything from scratch, then it gets really dirty also. So yeah, that was the problem. Okay.
D
D
C
E
C
C
C
F
Casual conversation, man. So both of them require some specific digest, and, like, from what I understand, what they try to do is they have, like, multiple models chained together. Well, one model tries to find what the intent of the input is, one tries to find what the context is, and, like, after this they pass this message to some back-end, and, like, whatever you want to do with that data, you do it in the back-end and the answer is reported back. Okay.
D
C
F
C
F
C
Could do. I mean, so the point here is to show how you configure it — you know, the configuration parameters — and also, sort of, like, something that's real-world right away. A useful example to people is, like — if you can, the chatbot — I like the chatbot example, because, you know — basically, if you wanted to demo something and you —
A
C
C
And then you'd say, like, you know, calc — your — let's see, like, we could do something like, you know — like, what is that one?
C
C
You know — four, or five — trust, 0.4 — and then, you know, the chatbot reads this as: the first colon separator is the thing to do; the next ones are this value, this value, this value, right. And then we, you know, pump that into a model and it, you know, spits out the output, right. So now you have an example where you're showing how to do configuration and how to use prediction in a model, you know — and so now people can basically take this and, you know, demo their — download their trained models quickly. Right.
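The colon-separated command idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not project code: the message format (`action:value:value:...`) and the numeric coercion are assumptions.

```python
def parse_command(message):
    """Split a chatbot message like 'calc:4:5:0.4' on colons.
    The field before the first colon is the thing to do; the
    remaining fields are its values, coerced to float when numeric."""
    action, *fields = message.split(":")
    values = []
    for field in fields:
        try:
            values.append(float(field))  # numeric config / prediction value
        except ValueError:
            values.append(field)         # leave non-numeric fields as strings
    return action, values
```

The parsed action would then select which model or operation receives the values.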
C
Basically, what you're gonna do is — you're just — yeah, you basically — well, the chatbot is just, like, you know — it's just — that's the whole idea here: the input mechanism for stuff is abstracted, and then the dataflow tells you how you actually get things executed.
F
F
C
Just make this — yeah, exactly. So, if you have an NLP model, or several NLP models, capable of doing that stuff, you could take unstructured data and convert it into structured data and then pass it in, right. So at that point it's all just about, you know, links. Once you have the model, you just link it together, you know, using this dataflow as a starting point, right.
F
G
C
C
C
C
I mean, ideally, this thing is an asynchronous iterator that can sort of say, like, this was either a message to the channel or a message to itself — or — well, no, it should just, like, yield every single message, and then, you know, you can forward that to another operation, and that can decide what to do with it. So: an asynchronous generator operation that yields every message to the channel, and that way, you know, you just forward it to the next operation.
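A minimal sketch of that shape, with hypothetical names: an asynchronous generator yields every message it sees, and a separate downstream coroutine decides what to do with each one.

```python
import asyncio

async def recv_messages(messages):
    """Asynchronous generator operation: yield every message seen on
    the channel so a downstream operation can decide what to do."""
    for msg in messages:
        await asyncio.sleep(0)  # stand-in for waiting on the network
        yield msg

async def handle(msg):
    """Downstream operation: act only on messages that look like commands."""
    return msg if msg.startswith("!") else None

async def main(messages):
    handled = []
    async for msg in recv_messages(messages):
        result = await handle(msg)
        if result is not None:
            handled.append(result)
    return handled
```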
C
It decides what to do with it there, yeah — and then the rest of the operations. So — so this one should be right in operations/IRC, but none of the rest are gonna be specific to IRC, really. So the rest of the operations for this demo should just — should be in a single file, similar to — or, let's see — the parsing.
C
You know, the string parsing — the parsing operation for this demo — should be done similar to ffmpeg, and then, you know, you link it just to the — to the — so, yeah: three operations, right, basically. The one that connects up and yields things; the next one that parses the string — if it matches, you know, then it — it returns.
C
C
C
C
All right, so basically we can do this stuff, or we — like, I don't know. There's that one, there's that one — that's just the other day. No, that was the database stuff. Well, this is an okay example, all right. So, basically, like, you know, we can do — because we do the decorator — the operations are really double-context — the decorator just creates two classes that wrap a function. So where's the one — yeah, yeah.
C
All right, so this is an operation implementation context, and this is a method that does — sort of — this is run. And so usually, when you use the op decorator, that's where it just puts this — you know, it just puts the function within this method. And then this is the main operation implementation. So this is the context; this is the run method of the context.
C
This is where, when you wrap something with op, it goes in here, and then this is the main operation implementation, which is just the thing that gets instantiated when you instantiate the dataflow, right. And so for this one, we basically create this thread pool executor, so that when we instantiate this operation, we create the thread pool. In that way —
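A simplified, hypothetical sketch of the pattern being described — a decorator wraps a plain function in a class whose `run()` method invokes it, and the thread pool is created once, at instantiation time. The class and method names are illustrative, not the project's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

class OperationImplementation:
    """Instantiated once per dataflow; heavyweight resources such as
    the thread pool are created here, at instantiation time."""
    def __init__(self, func):
        self.func = func
        self.pool = ThreadPoolExecutor(max_workers=2)

    def run(self, *args):
        """The wrapped function is placed inside this method by the
        decorator and executed on the shared thread pool."""
        return self.pool.submit(self.func, *args).result()

def op(func):
    """Decorator: wrap a plain function in a class with a run() method."""
    return OperationImplementation(func)

@op
def add(x, y):
    return x + y
```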
C
Every time we create an operation context — so every time you go to use an operation, a context is created — but every time you go to, you know, instantiate the operation for a specific dataflow, then, you know, this guy does an __aenter__. So if you had an __aenter__ here, every time you went to run, right before you ran the operation, you could have, you know, an __aenter__ and an __aexit__ around it — what would happen before and after the run.
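The `__aenter__`/`__aexit__` idea maps onto a plain Python async context manager: whatever is in `__aenter__` happens right before the run, and `__aexit__` right after. The class below is a made-up example, not project code.

```python
import asyncio

class RunContext:
    """Async context manager: __aenter__ runs right before the
    operation and __aexit__ right after, bracketing run()."""
    def __init__(self):
        self.events = []

    async def __aenter__(self):
        self.events.append("enter")  # setup before the run
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.events.append("exit")   # teardown after the run

    async def run(self, x):
        self.events.append("run")
        return x * 2

async def main():
    ctx = RunContext()
    async with ctx:
        result = await ctx.run(21)
    return ctx.events, result
```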
C
Actually, this is way easier than I thought it was. We just need to add this — like, okay, this is what we need to do. We have, you know — we have the redundancy checker, the input network, the operation network, the operation implementation network, and the lock network — but you'll probably just add another one there and pass it through here.
C
I think that works, but basically it would just be, like, sort of — almost like, you know, just a dictionary that everything within a dataflow — any operation within a dataflow — can access. This is — this is not ideal. This is — I don't know. The thing is, we only want one connection together, right. It is almost better — actually, it's almost better to have the operation that's yielding responses also waiting for inputs to send back, together, because it should be able to.
C
E
C
Okay, no, I don't think it works like that, but I think this might be sort of similar to the input forwarding that you already did, right. So with input forwarding, we were looking at a subflow, right — but you can — like, I think we — right, so yeah. So I guess yes — I guess, yes, yeah, so — well, you did it with run dataflow, where, when you run the dataflow and you register it as a subflow, you can, you know, then forward any applicable inputs.
C
C
Okay, so — but the thing with that was — the thing with that is that, if you had multiple database operations within the same dataflow, if you didn't initialize them from Python, you couldn't point them at the same database object, right.
C
C
C
Yeah, okay, so yeah — we actually, like, instantiate this actual, like, database object, and we need some sort of way I could — this is another — this is sort of part of it. Yeah, this is sort of part of it. This is why, I guess — whoo — I went down that system-local resource management path: because what we really need is a way for — to config, right. So if I — if I have some kind of — that's it; how do I explain this?
C
H
C
C
C
H
H
G
C
Okay, I just want to sort of mark enough things that we did talk about, okay — and then, let's see. So, okay, let's just look at this real quick. This issue.
C
Do we, like — how do we — how do we make sure that they end up with the actual same instance, right? Because they need to have the same instance of this thing for this to work, right. Like, if we instantiate — if I ran this dataflow from the command line, this wouldn't work, because I'm going to open the SQLite database twice. Does that make sense?
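One hedged way to get the same-instance guarantee being discussed: a process-level registry keyed by a config value, so a second caller gets back the object the first caller created instead of, say, opening the SQLite database twice. The function and key names here are illustrative assumptions.

```python
_shared = {}

def shared_instance(key, factory):
    """Return one shared instance per key: the first caller creates it
    via factory(), and later callers get the exact same object — so a
    database handle is only opened once per process."""
    if key not in _shared:
        _shared[key] = factory()
    return _shared[key]
```

Two operations that both ask for `shared_instance("db", open_db)` then share one connection.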
F
C
Yeah, that's — that's for them! So that's what I'm thinking here: basically, we'll have, you know, a shared config or global config or something, and then, if you put it in here and then you reference it from configs or something, then, you know — that's how you would — that's how you would, you know, say it. Basically, that's —
C
C
— is because, when you go to do this — your project, right, and with the NATS thing — how do you know which one, you know? Like, how do you make sure that this thing — you need some more information about this now, right. You need to know whether you can actually instantiate this thing on two different — like, where this thing can be scheduled, right — if they have to be scheduled on the same machine.
C
The issue here is that we risk breaking, like, you know — breaking what we're doing with dataflows. Because then it's like, you know — then, if you start doing this config, you start having these, just like, networked objects everywhere, and then it's like, well, should we really be doing — should it — you know, should it really work like that?
C
C
C
C
F
C
H
C
C
C
C
C
G
H
I
I
I
I
C
So, I mean, we have two sorts of options here. Like — so, one thing is we can figure out: okay, how do we split out that array value into, you know, every single thing as its own feature? And I think the answer there is basically, like — you have a, you know — you use the dataflow pre-processing source and you split it out. So the other thing is — or, the other —
C
The other solution is — so, you can use the dataflow pre-processing source and create a dataflow that basically results in every single value — or, you know, the exploding of that into, you know, like, every single feature name — like image_0, image_1, image_2, all the way up to image_X, for each pixel in the array. Or we can do — I mean, then, that may be a valuable thing to show how to do, too. Or we can do —
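The exploding step can be sketched as a small helper. The `image_0 ... image_N` naming follows the discussion; everything else (function name, 2D-flattening behavior) is an assumption.

```python
def explode_features(name, values):
    """Explode an array feature into one named feature per element,
    e.g. a 28x28 image becomes image_0 ... image_783."""
    # Flatten one level of nesting if the value is a 2D array (an image)
    if values and isinstance(values[0], (list, tuple)):
        values = [v for row in values for v in row]
    return {f"{name}_{i}": v for i, v in enumerate(values)}
```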
C
You know, fixing — just fixing the image arrays stuff, which probably needs to be done anyway. So, let's see — so, like — but it also depends: is that what you want, specifically? Like, do you just want it to work with arrays, or do you want — do you actually want it to be splitting them into their each value? You know, I — I —
I
C
C
So we — so what we could do is: if you see that you can split it out, you know, for the user, right — if you see something that's basically got a length, you just split it out for the user, right, and you make each — each — you know, you just do that for them within the scikit model, right. That's actually something that I think I recently did with — can't remember what it was — yeah, that — just this — this — yeah, this was recently done by somebody else; they split it out.
C
C
C
C
C
G
C
C
At this point, that was augmented to do — I think the right solution is to modify the model, because the model should basically — I mean — yeah, I think the right solution is to modify the model here, because people aren't going to expect — like, people are gonna expect to be able to feed each model the same data without modifying the source, right. We shouldn't be modifying the data source.
C
It should be the model that's handling it differently, and if the model can't handle it, then we should use the dataflow pre-processing source to handle it. And I think that — I mean, you could use the dataflow pre-processing source to explode this into — into, sort of, like, image_0 through image_N — but I think that it's probably better — okay, so, yeah. So.
C
It's more okay that it's not related — yeah, but isn't this also sort of related, in that, if you passed it the — I mean, doesn't this have directly to do with the fact that — wait, explain what this is; I don't understand what this is, then. Because isn't this you saying that, you know, if I passed in a, you know — a feature with, you know — yeah, your feature — yes, you made —
C
C
C
C
H
H
E
C
C
Because I just saw it again this morning — oh yeah, this — this — OH. So this is related to what Yash said. So, yeah — he said that you had talked about — basically, there are issues with running the .sh files on Windows, and so you were gonna read the contents and then run them, you know, as CLI commands via Python, right? Was that what I was understanding from that?
C
I guess you should — yeah, you just — that would be good. I mean, we probably just want a little helper function to help do that, then, to — to save time. But yeah, that's a good plan. And then — so, basically, what I was saying here is: with this create-dataflow, you have two commands in one file, so let's split it into another file, so that, when Yash does that, you know, he doesn't run into this weird — this one case that's an edge case, right. And then the rest of this looks great.
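A sketch of the little helper function mentioned: read a shell script's lines and run each one as its own subprocess, so the same file also works on Windows without a POSIX shell. The function name and the one-command-per-line assumption are hypothetical.

```python
import shlex
import subprocess
from pathlib import Path

def run_script_lines(path):
    """Run each non-blank, non-comment line of a shell script as its
    own subprocess instead of invoking a POSIX shell, for Windows
    portability. Returns the list of CompletedProcess results."""
    results = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        # shlex.split handles quoted arguments the way a shell would
        results.append(subprocess.run(shlex.split(line), check=True))
    return results
```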
D
A
A
C
Yeah, so you might need — yeah, at the — so it depends, you know. It depends on whether this is something that's ever going to be an input — or whether — you know, that's sort of the way you need — you want to think about it, right: a config is — is typically something that's — you know, a config is something that needs to be around at the time of operation instantiation, right. So when I, you know — when I instantiate a dataflow —
C
— the config needs to be around, right. So, like, with the Gitter operation, you know, we clearly need, you know, the username or password or whatever to be in the — in the config, right, because it needs to be there upon instantiation. So if it needs to be there upon instantiation and it's never gonna change, then that's where you want it — in my config. If you — you know — they —
A
A
C
C
A
C
C
C
C
So I hesitate — the reason why I'm hesitating on default values is because I think that — I think that the fact that we were doing the permutations creates — I'm not sure how it's going to interact with the permutations. Like, if — if someone's sure of how that's going to interact with the fact that we're permuting inputs, then I say go for it. If we're not sure of — of how that might be — I —
C
C
You know, the question is — so, the question is: the optional values have to be stored, so they have to be stored some — somewhere, right. Because the core concept here is — we have to decouple the — the — the — the core concept here of the dataflows is that we have to decouple the definitions and the operation — you know, the — the declaration of what the operation is — from the implementation, right. And so, as soon as now we have default values, it's kind of like, well — it's kind of like the spec —
C
C
C
C
There's a — see, this is the thing. The thing is, it becomes — it becomes sort of non-trivial quickly; it's not sort of just "add default values". Maybe it is, and I'm just not quite thinking about it correctly — but then — then I need — I need somebody to chime in here if I'm — not — if I'm missing something, right. But so, I think: if we have specs, and specs have defaults, then it stands to reason that we could just have, like, the whole value have a default.
C
But then the question comes in, like: where? So it's somewhere in DF memory — that — it's somewhere in DF memory that we'll need to check — it's something in gather_inputs, pulled from the DF memory — gathered inputs. So — because, in here, what we do is we check if we can — we, you know — if we have an input for each thing, right. And, obviously — okay, we're not going to do conditions; we're just going to do inputs here, right. So we need to say — what else do we have on our agenda here? Because this is —
G
C
C
And we would basically be saying: okay, input flow — so we go through each of the inputs in the input flow, and we check all the origins, and we check out the existence by origin. Okay — so for each by origin — okay, those continues are at that for-loop level. Okay, so gathered[input_name].append — all right, this might be pretty easy, actually. If not gathered[input_name] — okay, so here's — here's the deal.
C
So we bail. Okay — so, if operation.inputs[input_name] — okay, so this is tricky too, because we need to say: okay, where did we do that? Alternate definitions — okay. So we need to go through — so, if alternate definitions, and the definition name not in alternate definitions — because we also need to check alternate definitions. So first we check —
G
C
C
So, if you can find an example, then — then let us know and we'll do it. But it is likely — I mean, if there is, you know, an equals in the declaration, right, it's gonna tell you what — it'll give you — it'll — it'll have a default value that we can use. And if not, then, you know — you wouldn't be able to call the function without knowing what the — what the thing should be anyway. So, alternate definitions — we're just gonna go and — okay. What level is — not so much definitions — defined as — okay.
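The "equals in the declaration" point maps directly onto Python's `inspect` module, which exposes exactly those declared default values. This is a generic sketch, not the project's implementation:

```python
import inspect

def declaration_defaults(func):
    """Collect the default values declared with '=' in a function's
    signature. These could seed an operation's optional inputs when
    no input was gathered for them."""
    return {
        name: param.default
        for name, param in inspect.signature(func).parameters.items()
        if param.default is not inspect.Parameter.empty
    }
```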
C
C
C
— to do, right, actually — and that doesn't seem too bad. I think you're basically just going to need to copy sort of this structure — like, basically, it's going to be, like, you know: grab the alternate definitions here, right, and then do this check here — which — this is very unhelpful, because it's not actually highlighting the lines for some reason — and then, you know, loop through that — you know, you loop through — you loop through that array here, that also now includes this. So it's basically —
C
C
You know — there you go. So, although now, what this is going to end up with is the fact that, you know, one of these — you know, multiple of these — may have a default value, in which case only one of them is going to get used, so — and I don't know if there's really a way around that. Well, let's see: gathered — dot — append — parameter.
C
C
C
So you want to do — support default values for image operations — like, my mouse, that's slowly drifting — okay! So, let's see — that's a start on that. And is that basically all you wanted to talk about with image operations? Or do you have more things you might talk about there? Yeah.
A
For — yeah, the other thing — like, I have a question. Like, if I — like, I'll be adding, like, three more operations to extract features — like, I'll add an operation that will extract features from the same image, for once — and then they have to be merged into a single feature vector, to train or not. So how will that work?
A
D
G
A
C
A
C
So I would sort of — I mean — and I think — so, we've got a lot of other people on here that — that have some machine learning experience here — but I think the way that I might approach this is to split those. So, say — you know, obviously we have the issue with the scikit model, but — so say, for example, you're using the TensorFlow model: I would have those each be their own input feature. I feel like that —
C
— might give you the greatest success here. Because, if you each give them — you know, assign them as their own input feature — you know, TensorFlow would treat them as, you know, each like a matrix value, and, you know — I don't know exactly what it does internally, but, you know, it — it still gives it some sort of, like, you know, "these are all correlated" — like, those values are — I mean, at the end of the day, I think we're —
C
You know, everything just goes into a matrix, right. So, yeah — well, at least for neural networks it becomes, like, you know, essentially a giant matrix multiply, right. So it's like, you could flatten it out, and you could, you know, sort of — basically, you know, like — put them into — by, you know, making them one big feature — and you're probably going to get the same — I think you're probably gonna end up with the same results either way at this point. Like, what are you guys thinking — Himanshu, Yash?
C
C
F
C
I mean, I think that's sort of the open issue here, right. So I think — I think — I think we might be — we might be a little too in the weeds on this. I think we kind of just need to try it, right. So I think you need to, basically, just — you know, what you can do is: you can — you can have them each be their own, I think —
C
C
You know, if we do that — that — what we just talked about, right — and we split them each into their own feature, then you're gonna end up with — you're gonna end up with, you know, essentially a bunch of columns, right. Like — input — well, like — so, if you're looking at it — you know, if you looked at this like you had some sort of input — you know, like a CSV file, right — and each one —
C
— each value for the column was, you know — say it was, like, the normalized images, right — and so all of these were values between 0 and 1, right — you're just gonna have, like, a million columns. You know — or, let's see — like, you know, whatever: if your pixels are, like, 28 times 28, it's like 784, right. So say you have 784 —
C
— pixels, right — zero-through-one values — and now — well, I guess we'll just do 0-through-255 values, right — and you have 784 of them, and now you're gonna run each of these operations on them. So you're gonna end up with, like, you know, 784 times 3, 0-through-255 values. And at the end of the day — like, as far as scikit is concerned — like, if we do this thing where we split it out, it's just going to treat them each like their own —
C
Let's see — it's going to treat them each as if they are their own — what is it — an ndarray, right. You know, when we pass the first — the first parameter to fit, it's basically just, like, an array of arrays, right — so it's just going to have more arrays in it, no matter what you do here. I understand.
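Merging several extractors' outputs into one flat feature vector per sample, so that `fit` still receives a plain array of arrays, can be sketched like this (the helper names are hypothetical):

```python
def merge_feature_vectors(feature_outputs):
    """Concatenate the outputs of several feature-extraction
    operations into one flat feature vector for a single sample."""
    return [value for output in feature_outputs for value in output]

def build_rows(samples):
    """One merged row per sample: the array-of-arrays shape that
    fit() expects as its first parameter."""
    return [merge_feature_vectors(outputs) for outputs in samples]
```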
C
A
C
Cool. Is there anything you wanted to sort of talk about specifically there — just, like — it's coming?
A
C
A
C
The video? Yes — okay, let's see where that's at. Okay — hopefully that's coming soon; here, I needed to install ffmpeg, which is apparently still installing. So, let's see: SQLite insertion — the automating classification dataflow — okay. So, again — basically, this is the one — we had this working at some point; it sounds like it's not working right now, right?
A
F
C
C
C
We can't make it complicated, right. And so the heavy lifting to this tutorial is — we sort of — we gloss over how we do the get operations, pretty much — but the point is, sort of: you can gather up, right — you know, you can gather data — and then the other point should be: you can, you know — then — then you can, you know, run that data through some machine learning and into your database, right. And we don't — I think we do too much, like — we —
C
Yeah, exactly. I mean, this is the thing where it's like, you know — the — the dataflow stuff can be very — like, especially when you look at that get-operations flow, and you're, like, you know, collecting all that data — like, that — that's — that's where it's very useful. But it's also not entirely straightforward, that's for sure, because you have to, like, think — you have to think differently, and that doesn't make for fun —
C
— sometimes. So, let's see — so, let's — so, make that a — let's scrap the flow and make it so that we — and you can basically just call, like —
C
E
F
F
C
C
C
C
F
C
It's with the import — yeah, I think this is correct. I —
C
C
C
We must do a nicer setup at some point. I mean, that could be a separate PR, but —
C
This next release is gonna be huge. I gotta do all the compliance tests — we've attracted a bunch of attention recently, so I really have to do all the compliance stuff quickly. Because there was, like, this person — who's, like, some senior management person — at this webcast last week, and they said, "Tell me and my manager if you guys have ideas that are basically in the space of [unclear]," and I was like, well —
C
C
Okay — like, luckily I have — I've got the [unclear] guys; I've been talking to them. And then I was talking to people over at — there's this EdgeX project; they want to use this for basically the automatic classification demo — they want to do the same thing. So now there are multiple people within Intel that actually —
C
C
F
C
C
F
C
C
C
Yeah, we had to do this because, if you put "seed", then "seed" ends up being a string, which is then made the only entry in the list — that's why it had to be like that. So it's basically a list of, you know, places it can come from — and so you could say, you know, like, the places it can come from — and each — each entry is an object where it's a key-value —
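The string-versus-list gotcha here is a general Python one: a bare string is itself iterable, so without wrapping it in a list, "seed" would be treated as four one-character origins. A hedged sketch of the normalization being described (the function name is an assumption):

```python
def normalize_origins(origins):
    """Ensure origins is a list of origin names. A bare string such as
    "seed" would otherwise be iterated character by character, so it is
    wrapped to become the only entry in the list."""
    if isinstance(origins, str):
        return [origins]
    return list(origins)
```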
C
C
You can override — that's — basically, it allows you to override. So if you — if the definition isn't the same, this is what's allowing you to override. Because when you do — when you — when it creates the auto flow — uh-huh — this is what happens: you did auto flow, which means it's gonna wipe out all of this stuff. So that's what happened there. So, basically, yeah: if you call auto flow, it's gonna wipe out everything and it's gonna rebuild .flow.
C
F
C
— updates the — this is — so, the thing was, I couldn't figure out how to hook in. So when you call — maybe we should make it a context manager, actually — but when you call — when you modify .flow, there's that whole by_origin structure. And maybe we should just be calling .update right — right before we enter the dataflow — sure, that's what we should do. So when you call .update, it updates the by_origin structure within the dataflow, to figure it out.
C
Basically, it's a shorthand, so we don't have to calculate all of, like, where things are coming from when we're in the — when we're in the main loop there. So, yeah — so, basically, that's why we end up calling update. But we can actually — we probably want to remove this .update call and have it be — we have to call this right now, but it's not intuitive, you know. Like, it's —
F
C
F
F
Yeah — so, can you scroll up? Yeah, yeah. So here — [unclear] — "maintained", yeah. So when you are specifying the input — oh yeah, value, "maintained" — yeah. So we specify those — [unclear]. So now: does this input go to any definition — any operation which is expecting this definition — or does it only go to something which we specified it to?
C
I
C
So, if I didn't put an origin here, then here the default would be "seed", right — and both of these definitions are the same. Which means that, if we didn't modify the flow and we didn't modify the origins, then, you know, "maintained" and "key" would both be valid, you know, permutations that it should do, right. And so, therefore, we need to say specifically: hey, I need you to get this from seed.maintained.
C
C
F
H
C
C
C
C
Right — is this — oh my god, this is Windows; it's just — just — just install ffmpeg! It's been doing this for, like, all day. Good stuff to add to videos: we talked about — [unclear] — the May 12th, 2020 meeting — thanks. All right, cool — I think we're good for the day, then. We definitely went over on time — sorry about that; there was a bunch of — which — this stuff was using — you know, it's always hard. We maybe should do — I don't know if we should do another meeting. What do you guys think? Should we space this out more?
A
C
I just mean, like: should we do another meeting? Then we can have more — you know — then, you know — sometimes, then, if there are multiple times you can join, then you don't have to join every time, right. So then, maybe, we end up with it more spaced out, and we don't end up going for, you know, like, two and a half hours.
C
C
So then, if we — if we get to the end of the hour and we haven't — yeah. Like, maybe — maybe at the beginning of the week, like, everybody send out stuff that they know that they want to talk about, and I'll try to come up with some kind of formula for this, and then, basically, we can know ahead of time how long we're gonna be in here. So — 'cause I'm — I meant to do a break, so we could all have a break here, but I — I forgot about that. I'm — so sorry about that. Let's see.