From YouTube: Weekly Sync 2020-08-04
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.4d3vzsyk7afc
A: Okay, and I still need to do the layer support, okay.

A: All right, okay, so we're gonna go over the review on the distributed orchestrator, and I still need to do this. What else do we have for you, Argen?
B: Yeah, that's it, like... I have finished that.
A: Oh yeah, where'd that go? Oh, you had some just... questions around it, or... oh yeah.

A: They're just ready, okay. Saksham, so you're still waiting for the layer support example, and what else for you?

A: Would be good, okay. Anything else?
E: Yeah, so I am working on those two examples, and I have things to discuss related to operations. Okay, so which two examples? Yeah, so one is the Python example that we discussed, where we don't have to use the CLI, just a Python example, and the other one is the spam one.
A: All right, great. Anything... am I missing anything on this list, from anyone?
A: So, and then, other business: CI is still broken. There was an issue with the numpy dependency.

A: I thought I had fixed this, and then Himachu... we thought he had fixed this, and it's still not fixed, for some reason. Wonderful. But I did see...

A: I did see that... okay, this is all I want, all right. I did see that they said... I'm thinking that this might be the next course of action here: they recommend that we use this new dependency resolver, yeah, yeah.

A: So I'm thinking maybe there's some kind of environment variable that we can set, so that we don't have to add that flag everywhere, because clearly this is a mess. So.
A: Yeah, because it should be installing everything... everything should be installing in test... when we test the main package for 3.7, this should be the same set of installs. So that's weird. How's it going, Yash?
A: ...and then, issues with shouldi. So there was an issue with the shouldi stuff, where I found that, basically, we're having this Rust package. So shouldi just runs whatever the applicable static analysis tool is with the use command, and so the test case for Rust ended up actually testing JavaScript, and so we weren't actually picking up... basically, there were some modifications required there, so I had tried to fix that one as a part of tackling the various things with the CI this weekend.
E: Yeah, I have actually... I faced one more issue this morning. I was trying... I don't know if anyone else is facing it: when I run the docs, it says there is a version conflict in PyTorch, also torchvision. So torchvision requires torch to be 1.6.0, but we have 1.5.1.

A: All right, so, let's see... this is with... what were the two packages?
A: This is the new least favorite thing of mine: this stupid ContextualVersionConflict. I feel like we keep running into this, and so the issue here is basically, like... I think the main issue here, and I'm hoping that this will be solved by that new dependency resolver...

A: But it seems to be that, you know, when you specify a version range which is greater-than, it just installs the greatest version, and then it looks later at all the other packages and says: oh, this one actually is restricted. But then it just goes and says: oops, I already installed the one with the latest version, and then it just fails with ContextualVersionConflict. That's my current understanding of what's happening here, basically, yeah.
A: Yeah, it's like: why are you installing the latest version? If you have these ones that say a version range, then you should be... hopefully... I mean, you should be resolving it to install within that range, if you see one that says a range and one that says just greater than x, right. But obviously that's not happening, so maybe there's a way we can, you know, set this with an environment variable or something. So, okay. This is like new stuff.

A: Okay, and this is gonna be the default in a couple months, so we should definitely figure out how to test it. Okay, let's see, what does it say?
A: It will reduce inconsistency: no longer install a combination of packages that is mutually inconsistent...

A: We'd have to modify all the damn commands. Okay, well, there's the feedback, right. There are 23 questions, so I have one... one piece of feedback: make it an environment variable. All right, okay, so let me just put this link here, and we will investigate this.
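For reference, a hedged sketch of what that environment variable could look like: pip 20.2 exposes the new resolver behind the `--use-feature=2020-resolver` flag, and pip maps its long options to `PIP_*` environment variables, so setting one variable once in CI should avoid adding the flag to every command. The requirements file name here is just an illustration.

```shell
# On the command line (pip >= 20.2):
#   pip install -r requirements.txt --use-feature=2020-resolver
# pip maps long options to environment variables, so the equivalent
# one-time setting for a whole CI job is:
export PIP_USE_FEATURE=2020-resolver
```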
A: Okay, and then: issues with shouldi. Was that Rust test issue causing CI to fail? So, basically, it started failing because it was detecting more vulnerabilities. So I went in to update the fact that it now has more vulnerabilities, and then I found that it's actually testing the Rust package for JavaScript vulnerabilities, and not detecting that it needs to be scanning for Rust, and yeah. So, all right.

A: And so it's still broken, but I'm hoping to fix that one. I want the CI to be clean, obviously, before I take off. I'll probably still have my laptop, but... all right, so: distributed orchestrator. Okay, now I've got this on two computers, so... all right... nope, wrong one. Okay, we'll take care of that one too.
A: All right, thank you for the notes. I appreciate the notes. So the one thing was that... okay, so, first of all, notes: let's try to move this into...

A: Let's see... oh yeah, so this. But okay, yeah! So let's take that out, since... just, yeah.

A: Okay, let's see what else we have over here. All right, so.
A: I don't think there's anything else at the moment, other than the fact that we should test with multiple worker nodes, just, you know, to make sure that this is, in fact, running with multiple workers, right. You should set up the tests for that, right. So I would recommend, like, three of them or something.

A: Have some of them overlap, and some only for specific... only, or in some...

A: So, basically, you know, validate that you see the number of nodes that you expect in each one, for each operation, and you could just go through and you could use, like, operation.load, and then pick at random, yeah, and just make sure you have...
A: You could use this, or you could just have a set of operations... just make sure you have... actually, that might be overkill. Just make sure you have enough, right, where you're having some of the nodes with overlapping operations, right.

A: Yeah, so make sure you have multiple nodes with 'check if repository valid', okay, and then maybe, you know, ones that... yeah, ones that have operations that other nodes are going to have, and then also make sure that nodes have operations that only they have, and then make sure that the numbers and the circular fuse is correct. Because, yeah, that's going to probably save you headache later on. Yes, yeah.
A: Yeah, yeah: this is the way to go. And I'm sure you probably found... because I saw you... I remember when you went through it first, and you were having to restart the NATS server. This is probably very helpful, right.

A: All right, okay, so I think... yes, all right. So, anything else that you wanted to hear, specifically, on that?

A: All right, I'm realizing I may not have caught this.
B: Oh, so, like, earlier we were giving back a code, so we had what context it was. But since we are returning dictionaries now, we lost... when you are running multiple contexts, we lost the context input. So I thought it would be better with the context as the key.
A: Yeah, let's just do... okay, this is like... okay, so the problem with this is... okay. The reason why this is the way it is, is because... so the idea was that you would have a context, and the context might be some object that lives in memory, right. The context is some object that lives in memory, maybe on a node, or maybe... yeah, maybe on different machines, and it might reside...

A: You would just have the context, and you would want to, like, dynamically... you would want to pull, and this is where, like... I don't know if this made sense, and this may need to change, but the idea was that... so there's the context and there's the handle, and the handle is what you would use to actually access the data within, like, Redis or something, right. So maybe it's, like, a unique key or something. And why was it async?
A: You know, I should have commented it, but basically there was this... I was trying to explain why this looks so horrible, and the point is: there was a reason, and it probably should change, because, you know... at this point, who knows what it was, and we're not using it. So, basically, for now, until we change it everywhere, we should probably make it the same everywhere, in case we end up using it or changing it.

A: ...You guys know what I was saying there, right? Basically, yeah. Okay: this is ugly, there was a reason for it, it's not currently being used, but until we change it everywhere, let's just be consistent. Yeah, that's the point. So, okay, so that, and then that... and that's... this should be... let's see... that's that on this one. So I'll just do this.
A: In base... okay, yeah, no, I believe that's it, yeah. The only thing... the other thing that we've done is we use str() on the context in some places, and that's probably... like, we shouldn't be doing that either, but I think that's also places where we know that it's an input set context or a string input set context. Oh yeah, that's in root c... that shouldn't be done either, and as string.

A: Okay, yeah: this is one of those premature optimization things; I probably should not have been... all right. Okay, so, let's merge this guy.
A: All right, all right, so: anything else on your end? All right, okay: waiting on the layer support example. I need to do this today, especially if I'm going to leave. So... these are my goals: the layer support example, and make sure the CI is clean for everyone before I take off. I will still be available-ish, but, you know, not as much, until next Tuesday. So, okay, okay... so we'll see, we'll see if... oh yeah, Yash is here.

A: Yash, do you think you have bandwidth to run the meeting Friday? No pressure if not... but, yeah, I can, I can... okay, sweet. So I will make... let's see: run meeting on Friday, great.
A: Okay, so, yeah, I'll... I mean, you know, you can just make a meeting link and post it. Let's see, because, yeah, the one that's on the calendar... obviously, that's... I haven't been able to figure out how to make it so that everybody can just join. Okay, so: working on adding custom models using the API, for PyTorch. So you wanted to talk about that, right?
A: Okay, do you want to, you know, run through any of that with us, or do you just want to... is that just sort of an update?

A: Anything you wanted to... oh, I wanted to run through what's going on with the pre-processing.

A: Interesting, all right! Okay!

A: So the length is going to be different for each one. That's interesting, okay, yeah. So, I mean, isn't the...
A: Okay, yeah, well, yeah, because I don't think we're doing any validation on that. I'm just wondering: is the point of giving the length... if I recall, like, when I had initially done the TensorFlow one, it wanted to know what the dimensions of... or, like, what...

A: ...what the length of the feature was. When you create the numeric column, it wanted to know what the length was there. But I would assume that that stuff has maybe been updated with... I mean, Himachu, you updated it to the TF2 stuff. So I don't know if that still matters at all, and I guess we're not using TensorFlow here, are we? So, let's see... sorry, which one, when we did?
A: Okay, yep, sounds good, thanks, yeah. Thank you. So the reason why we have the length on our features is because... let's see where... let me just pull up the code, yeah.

A: So, yeah, and that's making me think, you know, like... because we want to make sure everything is consistent here, so, yeah, okay: yeah, shape, feature.length. So, let's see.
E: And there is just one more thing I found. I don't know if someone got stuck there too. What happens is, when we are reading the columns, we add one more dimension: if there is two-dimensional data, we write np.array and then we put the data inside it, so it makes it three-dimensional data. So that is something we need to keep in mind while giving the input.

E: ...array, yes. So we are converting to np.array, and there is X, the features, so that is already two-dimensional, and when we are converting it again into an array, it adds one more dimension to it. All right, yeah. So that is something... if the code is not visible, then that is something difficult to guess.
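The extra-dimension surprise described here can be shown in a couple of lines (the shapes are made up for illustration):

```python
import numpy as np

# 10 rows x 4 feature columns: already two-dimensional
features = np.zeros((10, 4))

# Wrapping the existing 2-D array in a list before calling np.array
# adds a leading dimension, which is the surprise described above.
wrapped = np.array([features])

print(features.shape)  # (10, 4)
print(wrapped.shape)   # (1, 10, 4)
```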
E: So I was just thinking the other day that we should be able to guess what the first dimension will be. What TensorFlow does is, it does not take the first dimension; it takes it as None, okay. So that means the number of examples that we can have... that may vary. Nobody knows how many are in the file... I mean, you can count, but that is not friendly, right? Yeah. But we need to specify it here, because we don't do it automatically.

E: So I need to go check in the file that there are 50 examples that I am putting in. So I need to specify that the dimension of the data will be 50 by whatever the column size is. So that is something, yeah. So, actually, I needed to talk about these things today, and I have one more problem with this, okay.
A: What's going on... okay, we'll run the merge command, and then we'll... yeah, we'll hit that one we're talking about. Okay.

A: Oh yeah, obviously we need some... we need some...

J: Should I paste the command in? Yeah, if you want to paste the command, that would be...
A: It seems to be slow. I wonder... we probably need to do a thing where... I have a branch open right now where, basically, you can control the number of executing contexts, and I guess... I mean, it's not critical that we do this, but the idea here would be... you know, we have two approaches. We talked about this before, but we have two approaches here that we could take. We could do... where is the dataflow source... source df.
A: So, yeah, here we use the orchestrator context.run, and we do it for each record. So we could do here something like this, where we do, like, record context, and then we say record, and this way... the thing is, we would get the records out of order, but it shouldn't necessarily matter, and we would end up with it running all the contexts at the same time. The other thing is that... oh, here, this is why... oh yeah: because these are all pretty... these are all CPU-intensive operations.
A: So we need to get the other part of this, which is running non-async operations in threads. I think there's an issue for that too. So that would probably speed this up. So the two of those things would probably speed this up, but that's just sort of for the future.
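A minimal sketch of the idea of running blocking, CPU-bound operation bodies in threads so the asyncio event loop stays free (the `cpu_intensive` function is a made-up stand-in, not the project's actual operation):

```python
import asyncio
import concurrent.futures
import hashlib

def cpu_intensive(data: bytes) -> str:
    # Stand-in for a blocking, CPU-bound operation body
    return hashlib.sha256(data).hexdigest()

async def main():
    loop = asyncio.get_running_loop()
    # Offload blocking calls to a thread pool so the event loop can
    # keep scheduling the other operations concurrently.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_intensive, d)
              for d in [b"a", b"b", b"c"])
        )
    return results

results = asyncio.run(main())
print(len(results))  # 3
```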
A: We need more logging in here, if this is not... okay, so, jump off the...

E: I think that it is writing it to JSON, but it is taking a very long time, because there are, like, more than 100 records. Yeah, if we run it with 20-30 records, it takes a minute, but it completes.

E: The JSON dumping is very slow; that's what I infer from this.
A: Yeah, yeah, it just seems like it shouldn't be that slow, especially if you only had, like, a 3.7 megabyte file like this.

J: ...records, that makes...

A: ...a lot more sense now. Okay, that's...
E: I think the calculate-histogram method gives some sparse matrix. Okay, because, if it is sparse, then JSON may not be the perfect thing to use, because then we have the NPZ format, and then we have scipy, which handles sparse... everything will speed up, and especially the memory usage will...

E: ...go down. So, if that optimization is possible... because I see a lot of zeros here, so, yeah: if we can convert it to scipy.sparse, then everything will speed up. Maybe that will help, all right.
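A small sketch of that optimization, assuming the histogram really is mostly zeros (the shape and filename here are made up): converting the dense array to a `scipy.sparse` matrix stores only the non-zero entries, and `save_npz`/`load_npz` serialize it far more compactly than dumping the dense array to JSON.

```python
import numpy as np
from scipy import sparse

# Mostly-zero "histogram" output, as described above
dense = np.zeros((1000, 1000))
dense[0, 0] = 1.0
dense[10, 20] = 3.5

# CSR keeps only the non-zero entries
s = sparse.csr_matrix(dense)
print(s.nnz)  # 2

# Compact binary serialization instead of a huge JSON dump
sparse.save_npz("histogram.npz", s)
loaded = sparse.load_npz("histogram.npz")
```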
E: In the features.yaml file, we are also saving the original image data in JSON, because, when you see the record, there are five columns: one variable for the features and one for the image data.

A: This type of thing... but that's sort of... that's something we can do as a separate thing. So, let's see, yeah.

A: All right, okay, so: merged. So we merged the... okay: export numpy fixes.

A: And the .json file... not really... instead, we should continue doing the on-the-fly processing.
A: Oh no, okay, looks like we have changelog conflicts, right, yeah. So, I guess... can you push the... it's probably because I just merged your other branch. So can you push an update to the changelog here, and then we'll be able to do that? Okay, so.

H: Source example.

A: Okay, so, all right. So, Himachu, so... okay: you wanna... just... we can talk about these first, and then we can talk about TensorFlow, or is one of them related?
E: Using the dataflow source, what happens is: one record gets transferred, and the operations work on it, right? So, what I want is... because we have the TF-IDF vectorizer, it needs to see the whole data at once, not single records at a time.

E: No, no. Let's say I have 10 sentences. So, what will happen is: the first sentence will go in, and all the operations will work on it; then the second sentence will go in; then the third, then the fourth, like this. But what I want is all ten of them to go in at once, not a single record at a time.
A: The one problem you're going to have is figuring out when it's done. Yeah... so that's... that is where I'm stuck, yeah. Okay, so, let's see... yeah, it might be good to implement...

A: I talked about this before, where it might be good to implement a method on the source to say, you know, what the count is. And the reason why this wasn't done is because sometimes you might know the count, and sometimes you might not. For example, like... I mean, this was the original thing when we changed the predict method recently, to not take the iterator and now take the sources.
A: You know, if you're predicting on something that's a stream, you may not know what the count is, but we could just return, like, negative one or something in that case. So, because, yeah: you need to know how many records exist within the source, so you need the source to implement some sort of count method.
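A hypothetical sketch of that count method on a source (the class and method names here are made up, not the project's actual API): a source backed by a file or list can report its length, while a streaming source returns -1 to signal "unknown", as suggested above.

```python
import asyncio

class ListSource:
    """Source backed by an in-memory list: the count is known."""
    def __init__(self, records):
        self.records = records

    async def length(self) -> int:
        return len(self.records)

class StreamSource:
    """Source backed by a stream: the count is unknown up front."""
    async def length(self) -> int:
        return -1  # sentinel: records arrive over time

async def main():
    src = ListSource(["a", "b", "c"])
    return await src.length(), await StreamSource().length()

print(asyncio.run(main()))  # (3, -1)
```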
A: Okay, so: needs a way to collect all the feature data. So you need a way to collect all the feature data.

A: To know the size of... oh, the datasets, okay, so.

E: So, to go into specifics: my first operation is remove_stopwords. It takes a single sentence, and it removes the stopwords. Then it takes a second one. And on top of it, I have the new operation, which is the vectorizer.
E: So, what I want is to insert an operation in between these two, so that the output of the upstream operation is accumulated using that particular operation, and, when everything is exhausted, it will create a list of the outputs of remove_stopwords and then send it to the vectorizer. Yeah, yeah. Okay.
A: So, right now, the orchestrator context that's running is going to... I mean, we complete... we need to run... like, everything has to be run differently, because... with this, what we talked about earlier, right... I think that's when this came up... we could also do it like this, right. So, basically...

A: So all of the operations for every record are running at the same time, right. And so remove_stopwords would be the first thing that runs, and then the next thing that runs is this accumulator, and so the accumulators would need to communicate between each other.
A: Well, there's going to be one... let's see... there'll be one instance per dataflow. Yeah, there's one instance per dataflow, so you could create an operation that has, like, a lock, and then a list, and then every time it... okay. So it would look something like... so, yeah, we need to change it, first of all. So, first off, we need to change it to be like this, and then, second off, we would say something like: async def...

A: So this would be, like... what are we calling this? Well, it doesn't really matter; we'll just call it 'example'. All right, so: self, and then it gets the stop words.

A: So, I see: we need a way to initialize the parent context. That's not... okay... to do it, or modify... so we need to modify.
A: Okay, so we need to modify... okay, so the inner one is gonna set something on the parent. So, basically, there's... and it might be more... actually, it might be more straightforward if I just do that. All right, so, where's this?

A: Okay, so there's two ways... I mean, there's multiple ways we can define this, right, and so we could have defined it like that, but that's probably going to be a little bit less than straightforward. So.

A: And when we instantiate the dataflow... so, when we instantiate the orchestrator context, we can create a lock, because we want to create locks within the __aenter__ methods, and there's basically some bugs with the file source right now, but I'm working on that.
A: All right, so, when the orchestrator context is created, right... and that's basically when we... yeah, it's here. So, when we create a dataflow source context... so, for the lifetime of this dataflow source context, this operation, like our example operation, will have this list, right. So, basically, if we run the records method here, it's going to be the same thing for the whole...

A: ...I guess, body of this method. In this case, that actually may not be exactly what we want here, because we really do want it to be just for the body of this method, and not for the lifetime of every other thing.
A: Plus blank... if not self.length, or... let's see.

A: 'count', maybe, or... I don't know: 'length'? Length or count: what do you guys think makes more sense for the source method?

A: All right, length it is, okay.

A: It's gonna be some primitive... right. So, basically, we'll add this config parameter to the source, to say, you know: length: str =
A: None. All right. So: 'feature/definition name to add as source length', right. So, basically, if someone specifies length, they should give the definition name that they want to be added to the context, and it will contain the length of the source, right. And so we'd come in here, and we would... let's see, so there's length, right. So we'd come in here, and we'd have, let's see, like, first, stop words and length.

A: And the output is the... I'm just going to put all...

A: We need to just basically wait here, so... or self.parent.list, so that's what we're returning: async with self.parent.lock, append the inputs' stop words.
A: So, if length... so we've got to figure out: how do we make it so that we wait for everything?

A: So: inputs, source length. Okay, so, if self.length is None, then length is the source length, and we append our one, and we say if...

A: notify_all... okay, it might be... yeah, this might be better. Okay, yeah, okay, so this is better, yeah. We want this: wake up all tasks waiting on this condition. All right, so, yeah, let's make a condition. Okay: parent dot...

A: Okay, yeah, I believe this is what we want.

A: This might be a problem. Okay, let's see... maybe we don't want this.
A: All right, I hope this doesn't blow up. I don't think it will; I think this is the intended way that this stuff is supposed to be used. It's been a while, obviously, since I've written something like this, because this is, like, largely what the dataflow stuff is.

A: And then it should wake up everyone else. So: we grab the lock, we add our list to the list, and then we check: okay, if the length is correct, then we set the event, and everyone else... so, basically, only the last one is going to hit this, and everyone else should be here, waiting, right, and then the rest of the operations in the dataflow will complete when that event is set.
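The lock-plus-event pattern just described can be sketched as a small self-contained accumulator (the class and names are made up for illustration, not the project's actual operation): each record's task appends under the lock, the last arrival sets the event, and every waiting task then sees the full list.

```python
import asyncio

class Accumulator:
    """Collects one item per record; the last arrival releases everyone."""
    def __init__(self, expected: int):
        self.expected = expected
        self.items = []
        self.lock = asyncio.Lock()
        self.done = asyncio.Event()

    async def add(self, item):
        async with self.lock:
            self.items.append(item)
            # Only the final record's task sees the full length and
            # sets the event, waking all the tasks waiting below.
            if len(self.items) == self.expected:
                self.done.set()
        await self.done.wait()
        return list(self.items)

async def main():
    acc = Accumulator(expected=3)
    # One task per record, all running concurrently
    return await asyncio.gather(*(acc.add(s) for s in ["a", "b", "c"]))

results = asyncio.run(main())
# Every task sees the same accumulated list of all three records
print(all(sorted(r) == ["a", "b", "c"] for r in results))  # True
```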
A: Okay: multiple asyncio tests. I believe that this is how this is supposed to work, so I think that should be what you want here. Now, the one other thing that we thought about was that this orchestrator context is being created on the __aenter__ method, but it's really only relevant within records, and the other problem is that, if you get a single record, that's not going to work, right.
A: Oh geez: 167 megabytes for the gzipped one. I mean, did it complete? Did the...

A: Complete? It did; it said 170 records were saved, and it was 167 megabytes, so... and that was it, yeah. That's... that's a lot. Let's see... no wonder that didn't work. Okay, so: source... I'm just thinking, like, this is an abstract base method here, the record... and obviously it didn't... the base source context record is abstract, but this must not instantiate or inherit from ABC, yeah. It's not, because, yeah: if we call record, there is no record method on the dataflow source context, so that's kind of a problem.
A: So that's an issue, but also... the reason that I was mentioning this is because this works so long as you're grabbing every single one. It doesn't work... it wouldn't work...

A: ...if you grabbed a single record, right: you need to process every record to do this, right. So you also need, essentially, an option on the dataflow pre-processing source to say... okay, so you need... this record method needs to be implemented... implement this record method.
A: We forgot to implement that. So I didn't catch this when we did the code review on this one, because... I should have seen this, but I also would have thought that the abstract base method stuff would have caught it. So that's another problem there. So we forgot to implement it when we initially added the dataflow source context, and... so, basically, what's gonna happen is that you would have to either... you'd have to have two options, so, like: all of the records, right, and if...

A: All right, so you'd have to have this... you'd have to have some option that says, like, you know, self.config, or self.parent.config...

A: I guess that would... here, you'd have to have some option that says, like: do I have to run everything to get this record, or can I just run the dataflow on the record itself, right? Okay, so.
A: Or something like this, right. So you need to be able to say: do I run all the records, right? Because, if someone were to call .record()... if someone were to call .record(), you would need to run every single record through, because you can't know what one record's output is without running all of the records' dataflows, right, because you have to run this operation that accumulates everything, and so, therefore, you need...

A: ...an option on the dataflow pre-processing source that says 'run everything', and otherwise you just do the one, right. So, let's see.

A: Key... something like this, okay. So I'm just gonna post this patch.
A: And hopefully... and if you run into trouble, just let me know. Otherwise, hopefully the recording and this patch will be helpful. Do you have any other immediate questions on this?
E: I was just thinking... I was just thinking, yes: now, why don't we have loops in a flow? Like, we can modify it to have, like... there can be a backward flow also, and we can base that on a condition. Then we can just have this type of thing as a universal thing everywhere we can.

E: I mean, so, like, we have the flow feature, right: we can flow from one operation to another; data can flow using the flow, right.
A: Okay, and, yeah, I mean... so the logic behind... the reasoning behind the way it is, is because it's completely defined as, like, event-driven, you know, right: because, essentially, every time a new definition is produced, that's an event, right. And so it may be possible to introduce a concept of loops in a more user-friendly way.
A: It does give you this... it does let you define this, you know, event-driven approach to things. And so, when you're defining... you know, if you want a loop, then you end up needing to say, like: okay, well, there's that definition again... and then you would see, when you visualize, when you make the diagram, you'd see that... where... I swear I had an example of this somewhere.

A: You would see that... oh, this was the dep tree command, but that's not finished, yeah. You would basically see the diagram feeding the definition back to itself. And there was one... basically... there was an issue... there's an issue up there to create the dependency tree, right, for Python projects, where we would go figure out the version numbers of each package and sort of create a tree, right. And that is essentially, like... it's kind of a loop, but it's kind of like a recursive...
A: The way it ended up working was that you have a dataflow, and the dataflow has this... essentially, there's a top-level dataflow, and then there's the subflow, and the subflow does the thing where it just says: okay, you know, start with a package and find all the dependencies, right. And then the output of that dataflow, which is the subflow, is a bunch of packages. And so then you just say: okay, that's that!

A: The subflow is running as an operation, right: so the run-dataflow operation runs a dataflow, right. So now it takes a package and outputs packages, and, when it sees packages being output, it immediately reruns this operation, right. So it inherently creates this tree until there's no more. And, I don't know... I mean, it essentially ends up being, like... it's kind of like a loop, but it's kind of not, you know, right, because the whole thing is... the idea is...
A: ...that it's event-driven. So, basically, I just wanted to give you some background on that, right. And if you can come up with a way where you maintain the event-driven nature of things, while making loops more user-friendly in the way that you would declare that this is something that has a feedback loop, right... then I'm all for it.
A: It may not be, you know, immediate... it's probably going to take some thinking about, yeah, just because it's kind of... it's wacky stuff, right. So, let's see, okay! Actually, now I know what's going on here. All right, okay, so let me post this to... create a gist, because it's kind of long. Okay, so: df source... and the other thing to say on that... I know we're getting really long here.
A: I'm sorry about this, but the other thing to say on that is that, yeah, the syntax of declaring dataflows, especially within Python, could definitely be improved: you know, the way that we connect operations and stuff, yes, and...

A: Yeah, it's confusing, right. And so, if anybody ever wants to tackle that, that's definitely... I mean, I'm all for that. The only thought that I had was: basically, you could maybe take, like, a function... you could use...
A: You could take, like, a Python function and decorate it with something... you know, make some kind of decorator, and then have the inputs be the operations that you want to run, and then, when you call this function, it would actually... it would sort of... okay. So, yeah: you imagine you have this function, and the arguments are all the operations you want to run. Now, the body of the function shows, you know...

A: You would basically call the functions, right... call the operations and take the outputs and pass them to the other operations, right. And so, basically, you have this regular-style, you know, syntax for what's going on here, right: it's very obvious to the casual observer. Now, the trick becomes... what you would do is, within the decorator, you make it so that, when the function's called, it's not actually calling those operations; it creates...
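The decorator idea sketched here can be illustrated with a toy version (everything below is made up for illustration, not the project's API): calling the decorated function doesn't execute the operations; stand-in proxies record which operation feeds which, building a small graph of the wiring instead.

```python
class _Node:
    """A recorded call to an operation, with the nodes that fed it."""
    def __init__(self, name, inputs):
        self.name, self.inputs = name, inputs

class _OpProxy:
    """Stands in for an operation: calling it records an edge, nothing runs."""
    def __init__(self, name, graph):
        self.name, self.graph = name, graph

    def __call__(self, *args):
        node = _Node(self.name,
                     [a.name for a in args if isinstance(a, _Node)])
        self.graph.append(node)
        return node

def dataflow(func):
    """Decorator: replace the operation arguments with recording proxies."""
    def build(*op_names):
        graph = []
        func(*[_OpProxy(n, graph) for n in op_names])
        return [(n.name, n.inputs) for n in graph]
    return build

@dataflow
def my_flow(remove_stopwords, vectorizer):
    # Reads like normal Python, but only records the wiring
    cleaned = remove_stopwords()
    vectorizer(cleaned)

edges = my_flow("remove_stopwords", "vectorizer")
print(edges)  # [('remove_stopwords', []), ('vectorizer', ['remove_stopwords'])]
```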
A: ...if that makes sense. That was the only sort of preliminary thought I had on making this better, because then you get, you know, the regular syntax that people are used to. I don't know, did...

A: Yeah, yeah, I mean, there's a lot... there's some weirdness in here, for sure. So: dataflow source... I mean, I think any ideas to make it more clear... I'm all for it; I just wanted to share my only thing.
A: Yeah, because it's not a walk in the park to write that. So: dataflow source accumulator operation, with partial modifications to dataflow source for record.

A: Okay, and then... okay, so, what else do you want... the spam detection? Is that sort of closely related, or do both of them have the same problem?

E: Yeah, okay, so, TensorFlow... actually, I will have to see it once again, because it's not creating a problem as of now, but it is indeed a problem... but not something that you're facing, so.
A: That sounds good, because, yeah, I was having a little bit of trouble understanding... let's see... it's been so long since I've messed with those files.

A: All right, so: TensorFlow... will provide us an update in the future on the possible-slash-maybe-present issue with arrays, right. It has to do with numpy arrays, yeah, yeah. Okay, sweet, all right! Well, thanks, everyone. Is there anything from anyone else?
A: Okay, yes, I think we are... yeah, we're getting pretty close. The main thing is we have to go... let's see... yeah, we can just check that real quick.

A: Okay: the Docker container is broken, the CLI needs to be upgraded to use flow, and then the dam integration demo... that one needs to be fixed. Actually, I've had multiple people ask me about that use case, and I'd have to be like: that demo is actually broken. So that's still mine to go fix, all right. Well, thank you, everyone. Anybody got anything else, or are we good?
A: Yes, the layer support thing, yes. And so that is my... let me just... I will write that down right now; I have, like, 20 lists going. Actually, let's see... no, that is not on this list. Okay: layer support... layer support, all right.

A: Okay, thank you, guys. Have a great... I'll talk to you, maybe, on Gitter, and you might see me online a little bit, but probably next Tuesday. So, all right: thank you, and have a good one. Thank you. Bye.