From YouTube: Weekly Sync 2020-09-22
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.42d8l2yvuvo3
Sorry for the long gap in recordings! We had some issues with the audio not recording for a while and various other recording issues. Hoping those are cleared up with this!
A: Okay, I want to find out where these methods are. So, are you talking about get_parents here?
C: Oh yeah, that one. So it iterates, and it needs all the parents in the iterator, but...
C: We are transporting parent ids, and only the main orchestrator has, yeah, the inputs.
C: This is just to... what? Sorry, sorry, what did you see?
A
Well,
this
really,
I
mean
it's
for
it's,
for
you
need
it
for
locking,
let's
see
yeah,
I
mean
this
yeah
this.
This
tells
you
which
one
comes
from,
which
right
because
of
the
set
of
parents
but.
A: The thing is, this is... oh, damn it. Why did... how did this end up happening? So, what the idea here was supposed to be was that you would have the... okay, where's the input set?
C: Yeah, it's a single iterator, of course. You are calling it parents in inputs, and parents recursive in parameter set, and you are yielding from there.
A: Yeah, I think the thing was that get_parents is supposed to... there's an open issue for this, isn't there? Yeah, there is. Let's see, there might be some notes in this issue.
C: Oh, there's a lot of places. Let's see.
A: Well, let's read the rest of this real quick, because I think... obviously I'd come back to it several times over the span of, you know, the past year. Apparently I was thinking about this all year, from March till November. So, okay, I'm no longer sure how critical this is.
A: I remember that the reason I did it this way is because, yeah, because input sets could be created by an input network, and they could take incoming input sets and make them input sets of their own specific type. Okay, so, yeah. So the idea here was that maybe, if you had a distributed setting, then you might have like a redis input set, and then the inputs method would end up iterating over, you know, the database call or something, and that's why this is probably bad.
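The pattern being described, an input set whose inputs method has to go through an external store rather than iterating objects already held in memory, might look roughly like this sketch. All names here (StoreBackedInputSet, backing_store) are hypothetical, and a plain dict plus `asyncio.sleep(0)` stands in for the redis round trip:

```python
import asyncio


class MemoryInputSet:
    """Plain input set: everything is already in local memory."""

    def __init__(self, inputs):
        self._inputs = list(inputs)

    async def inputs(self):
        for item in self._inputs:
            yield item


class StoreBackedInputSet:
    """Hypothetical input set whose items live in an external store.

    The dict stands in for a database; a real implementation would
    await a redis or NATS call here, which is why inputs() is async.
    """

    def __init__(self, backing_store, uids):
        self.backing_store = backing_store
        self.uids = uids

    async def inputs(self):
        for uid in self.uids:
            # Pretend this round trip goes over the network
            await asyncio.sleep(0)
            yield self.backing_store[uid]


async def collect(input_set):
    return [item async for item in input_set.inputs()]


store = {"u1": "feed", "u2": "face"}
local = asyncio.run(collect(MemoryInputSet(["feed", "face"])))
remote = asyncio.run(collect(StoreBackedInputSet(store, ["u1", "u2"])))
```

Both variants expose the same async iteration interface, which is what would let a distributed input set slot in behind the existing callers.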
A: So what really needs to happen here is that get_parents becomes an async function, so then it could do the same thing. parameter_set.inputs returns the inputs and all the... this is likely not what it should be doing, because it's sort of an overloaded, bad term. Now we should audit where it's being called and figure out what's up. Right, let's go check that out too before we make a decision here. So: base parameter set inputs.
A: Okay! Oh, that's why! Okay, this is all of those three types of methods. That's what's going on here. Okay, yeah, okay! So here's what I'm thinking about this: yeah, okay, I like the idea of storing all of the ids, that's good. Because... okay, so, yeah, okay! So let's take some notes here on this.
A: There's lots of back-burner things here. What is it... I think it's labeled xl? Okay, I think we can actually... hopefully we can knock this out pretty easily, but we'll see how it goes. So, can you replace the title here, please? Yes. Now, let's try that again, all right. So, let's summarize: input.get_parents is not an async function, because... one time... why did I not make it an async function? That was really dumb.
A
I'm
obviously
I
make
everything
an
async
function,
so
this
one
was
slip.
The
judgment
all
right
input.
Good
parents
is
not
an
async
function.
This
leads
to
a
situation
where,
when
we're
in
a
distributed
setting,
we
don't
have
all
the
other
all
the
parents,
memory,
okay,
but.
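A minimal sketch of the change being discussed, assuming nothing about the real codebase's signatures: a synchronous get_parents only works when every parent object is already local, while an async version gives a distributed subclass a place to await a network fetch before returning.

```python
import asyncio


class Input:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = list(parents)

    def get_parents(self):
        # Synchronous: only works if every parent is already in memory.
        return self.parents


class AsyncInput(Input):
    async def get_parents(self):
        # Async: a distributed subclass can await a network call here
        # before returning; callers must now `await input.get_parents()`.
        await asyncio.sleep(0)  # stand-in for the fetch
        return self.parents


parent = Input("repo")
child = AsyncInput("commit", parents=[parent])
parents = asyncio.run(child.get_parents())
```

The cost is that every call site has to become `await`-aware, which is the audit mentioned above.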
A: Go grab them, and so, yeah. So basically we would set up some sort of system to where we can go get those inputs, and that probably just ends up being another channel, right, within NATS, to say: hey, where's this input? And then somebody responds, and...
A: Right, yeah, so as soon as you have them, you'd cache them, right. So basically the worker nodes would only end up with whatever inputs they need. So if they never access it, then they'd never... we'd never send it.
A: Let's see, what is that saying? There's like... there's three hard problems in computer science, like... or two hard problems in computer science, like caching, and I can't remember what the other one is. Let's see... no, yeah, there's two hard problems, or what was it? No, yeah, there's three hard problems: caching and off-by-one errors.
A: Oh, it means you've calculated the array index out of bounds. And then the joke is that there's three... there's three problems, and there's...
A: Anyways, dumb joke. Okay, so this leads to a situation where, when we're in a distributed setting, we don't have all the parent input objects in memory.
A: Okay, so, possible solutions. Okay: store all of the parent uids within each input.
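The proposed solution, storing only the parent uids inside each input so that export and from_dict never have to serialize parent objects, might look like this hypothetical sketch (the class and field names are illustrative, not the project's real ones):

```python
class Input:
    def __init__(self, value, uid, parent_uids=()):
        self.value = value
        self.uid = uid
        self.parent_uids = list(parent_uids)

    def export(self):
        # Only uids cross the wire, never the parent objects themselves.
        return {
            "value": self.value,
            "uid": self.uid,
            "parent_uids": self.parent_uids,
        }

    @classmethod
    def from_dict(cls, data):
        # Round-trips cleanly because parents are just ids here.
        return cls(data["value"], data["uid"], data["parent_uids"])


original = Input("img.png", "u3", parent_uids=["u1", "u2"])
restored = Input.from_dict(original.export())
```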
B: Which input objects do we need to cache locally to perform...
A: ...this input? So they can all be requested at the same time, instead of getting one parent, then requesting its parent, et cetera. Yeah, because otherwise we'd do the call-response a bunch of times until we get all of them. So: cons.
A: All right, so, yeah. So we can store all the parent uids, and then we could just grab them when we need them. And then the other thing, though, is the question of: okay, why do we need this? So obviously with the output operations there's sort of no choice here, you know, if they need... you know, looking at why this is necessary: with the output operations, they're looking at the whole object, right? Like, they want...
A: They want to know about the whole thing. I think that... let's see... the parameter set, I believe that has to do with locking, and that is just ids. So let's investigate that further. Because, obviously... so, what I'm thinking here is basically: if we cache things locally, there's obviously a finite limit on the amount of things that we can cache, right, because the worker nodes, right...
A: So if you have some sort of large setting, and say you had, like, a bunch of... say you had a Raspberry Pi. Okay, so this is something that I want to do eventually. I don't think I'd said... I think I had told Yash and Sudhanshu in the last year, but I wanted... it would be really cool to use this for some kind of IoT demo, right? Because basically, right, yeah: you've got your worker node, and eventually we would add something...
A: So, for example... so, okay, so here's the demo. The demo is, you know: there's a camera, and then there's some inference done, right? That's basically what we call the demo, right, right. So you want to do... so there's two machines: there's a machine with maybe a GPU, and there's a machine with a camera, right. Now...
A: The goal here is: we define one workflow that says take a picture, give me inference, right, and we deploy to two machines, right, using the distributed orchestrator, right. And so, you know, the one machine has a GPU and the other machine has a camera.
A: It says: okay, well, this operation goes on the one with the GPU, because it requires a GPU, and this operation goes on the one with the camera, because it requires a camera. And so you basically, you know, you spin up both worker nodes, and from your laptop, you know, or maybe from the machine with the GPU or whatever, you run the command to orchestrate... the distributed orchestrator, and it deploys the... you know, it instantiates...
A: So eventually, right, we might not have... you know, you might be able to run those on a worker node or something, right. And when you choose the worker node you want to run it on...
A
You
wouldn't
choose
the
raspberry
pi
right,
you
would
choose,
maybe
the
gpu
machine
or
the
laptop
or
something
right,
because
to
get
all
those
you
know
you
might
send
one
image
at
a
time
on
raspberry
pi
right,
but
then
you
want
to
send
that
image
and
free
it
from
memory
right,
because
you
don't
have
that
much
memory
yeah,
so
so
that's
sort
of
the
the
thing
here
is
to
sort
of
think
about.
Okay.
Well!
A: Well, you know, in that case, with the output operations that require that we actually have the whole thing in memory, then... oh, let's see, I'm just reading the code, okay, sorry. With the output operations, the ones that require that we have the whole thing in memory, we still want to preserve this same behavior, where we actually go and get all the, you know, all the input objects.
A: But if the locking network truly only requires the...
A: ...yeah, the ids to distinguish them, then we're good to go, right. We can basically... now, I mean, then we can basically make this inputs-and-parents-recursive just be, you know, give me the ids recursively, right, like ancestors ids or something, and then we can call it good. So...
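An "ancestors ids" helper of the kind proposed, returning uids recursively without ever materializing parent input objects, might look like the following. The `parent_uids_of` mapping and the name `ancestors_uids` are hypothetical:

```python
def ancestors_uids(uid, parent_uids_of, seen=None):
    """Yield every ancestor uid of `uid`, depth-first, without cycles.

    `parent_uids_of` maps uid -> list of immediate parent uids, which
    is all the locking layer needs; no input objects are loaded.
    """
    if seen is None:
        seen = set()
    for parent in parent_uids_of.get(uid, []):
        if parent not in seen:
            seen.add(parent)
            yield parent
            yield from ancestors_uids(parent, parent_uids_of, seen)


graph = {"u3": ["u2"], "u2": ["u1"], "u1": []}
lineage = list(ancestors_uids("u3", graph))
```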
A: Okay, so let's see if this explains it. So this is issue 51, funny enough. Okay, so these...
A: ...have been around for a while. So the current implementation is such that it locks all the parents and the input itself.
A: So what we should do is look at the parents, and let descendants of a parent all operate on a descendant input at the same time, but not let anyone operate on the parent until all the operations working on the descendant inputs have completed. Effectively, all operations working on a descendant input share the lock of the parent it descended from. Okay, so... and this is where we could have a diagram.
A
I
wish
I
wanted
to
fill
with
this
giant
thing:
okay,
whoa.
They
reskinned
everything
all
right.
Okay,
let's
see
if
we
have
anything.
A: Okay, perfect. So, okay, so let me... I'll just do the same thing that we had here. So, what this is like...
A: Two things to explain this. Okay, so, let's see.
A: And run black... let's see.
A: Okay, so say we have some Python source code, and so... or, let's see... now, the git repos.
A: Wait, what was that? Okay, sorry, it's been a little bit, okay. What was the deal with this, okay? The issue here is that you end up with... okay, where is that damn flow?
A: Okay, I think it has to do with something under checkout, and there's more operations, but this got simplified or something, I think. Okay. So, for example, like, say you checked out a git repo, okay. So, yeah, okay. So, if you... if you...
A: All right. So when you have a git repo... and we did a bunch of these, let's see, we did a bunch of these... so within this example, where we do a bunch of git analysis stuff, we lock a git repo instance... it says that it's locked, because if you do multiple... if you do multiple git calls onto an object, like a repo...
A
So
if
you
run
the
git
command
more
than
one
more
than
one
time
like
in
parallel
it,
it
creates
this
index
file
and
everything
gets
screwed
up
and
it
basically
says
you
could
only
run
one
one
command
at
a
time,
and
so
that's
why
the
github!
You
know
the
get
definition
is
locked,
and
so
so.
A: It goes up... okay, so it goes... it should go up until it finds a lock; that's basically what it's supposed to be doing. But it goes and it just grabs all the locks. Why is it grabbing all the locks, yeah? What it's supposed to do is go up and say: where's the first lock? And then I think that's basically what this issue is saying: that this is all sort of... let's see, but...
A: Yeah, yeah, exactly. Well, that's... and that's what this issue is saying: basically, all of the descendants should share the lock, right? So, if it's a descendant... so, if we have two operations running, like, if we have checked-out git repo, and we have run black and run safety, these two should be able to run in parallel, because it's a parent, like... if these guys, you know... checked-out git repo produces, like... the checkout git repo operation produces...
C: So if we don't lock it, both of them will run at the same time, right?
A: Yeah, yeah. But the idea is to lock it, and keep it locked while both of them are running, and then unlock it once both of them complete, right. So...
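The locking behavior just described, all descendant operations sharing the parent's lock, which only releases once every one of them completes, can be sketched with a counted async lock. This is an illustration of the idea, not the project's lock implementation; the class and method names are made up:

```python
import asyncio


class SharedParentLock:
    """Parent lock shared by all operations on its descendant inputs.

    The parent stays locked while any descendant operation is active,
    and is released only when the count of active operations hits zero.
    """

    def __init__(self):
        self._count = 0
        self._idle = asyncio.Event()
        self._idle.set()

    async def __aenter__(self):
        self._count += 1
        self._idle.clear()
        return self

    async def __aexit__(self, *exc):
        self._count -= 1
        if self._count == 0:
            self._idle.set()

    async def wait_unlocked(self):
        # Anyone wanting the parent itself waits here.
        await self._idle.wait()


async def main():
    lock = SharedParentLock()
    order = []

    async def descendant(name):
        async with lock:  # run black / run safety share the parent lock
            await asyncio.sleep(0)
            order.append(name)

    await asyncio.gather(descendant("run_black"), descendant("run_safety"))
    await lock.wait_unlocked()  # parent only usable once both complete
    order.append("parent_free")
    return order


order = asyncio.run(main())
```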
A: There's a little bit more... there's more to this than that, though, so I'm not quite... not quite hitting on it all right now. I can't remember all of it, unfortunately, but I will... oh, man.
A: Yeah, okay. So we use the uids, and we use the lock-required-or-not flag. But the other problem is that this is sort of a cheap-shot way to do this, right, because we're supposed to be looking... this basically just goes and does everything. The correct implementation would look at the actual tree, instead of just locking everything, right. So...
A
Should
be
calling
inputs
and
then
calling
the
you
know,
looking
at
the
parent
like
and
then
deciding
right,
we
go
all
the
way
up
to
true.
We
actually
like
walk
the
tree
instead
of
just
walking
a
flattened
version
of
the
tree,
so
so
all
right,
but
but
just
to
get
to
the
immediate.
How
are
we
gonna
fix
this
right
now?
I
think.
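Walking the actual tree up to the first lockable ancestor, rather than locking a flattened list of every ancestor, might look like this sketch (the graph and the `lockable` set are made-up examples):

```python
def locks_to_acquire(uid, parent_uids_of, lockable):
    """Walk up from `uid` and return the nearest lockable ancestors.

    Instead of grabbing every lock on a flattened ancestor list, each
    branch of the walk stops at the first ancestor marked lockable.
    """
    found = []
    for parent in parent_uids_of.get(uid, []):
        if parent in lockable:
            found.append(parent)  # first lock on this branch: stop here
        else:
            found.extend(locks_to_acquire(parent, parent_uids_of, lockable))
    return found


# run_black -> checkout -> repo; only the repo definition is lockable
graph = {"run_black": ["checkout"], "checkout": ["repo"], "repo": []}
needed = locks_to_acquire("run_black", graph, lockable={"repo"})
```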
A: I think, though, that the thing where we need access to the inputs, you know, across nodes, is kind of key here. Because you can write an output operation that accesses other stuff, but you could also write, you know, any other operation that accesses all the inputs, right. And to deny access to the inputs because you're distributed... we could raise an error, and then you can't do it. But I think it's something that we can solve pretty easily here.
C: So, currently the parameter set and inputs have the get_parents method, right? So we can have another class which inherits from Input and replaces the get_parents method. So, say... let's call it DistributedInput.
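The subclassing idea could be sketched as below. DistributedInput and its `network` argument are hypothetical names; a dict stands in for whatever resolves uids over the wire, and, as discussed a moment later, `isinstance(x, Input)` checks still pass for the subclass:

```python
import asyncio


class Input:
    def __init__(self, value, uid, parent_uids=()):
        self.value = value
        self.uid = uid
        self.parent_uids = list(parent_uids)

    async def get_parents(self):
        raise NotImplementedError("parents not held in memory")


class DistributedInput(Input):
    """Hypothetical subclass: resolves parent uids through a network."""

    def __init__(self, *args, network, **kwargs):
        super().__init__(*args, **kwargs)
        self.network = network

    async def get_parents(self):
        # The dict stands in for the orchestrator's input network;
        # a real version would await a request/reply round trip.
        await asyncio.sleep(0)
        return [self.network[uid] for uid in self.parent_uids]


network = {}
parent = DistributedInput("repo", "u1", network=network)
network["u1"] = parent
child = DistributedInput("commit", "u2", parent_uids=["u1"], network=network)
parents = asyncio.run(child.get_parents())
```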
A: Okay, we're not checking if anything is an instance of Input, so that's good, I just realized. Well... isinstance... yeah, okay, isinstance will do subclasses too, so we should be okay. Yeah, I think that is the correct way to go here. So basically, yeah: take the df types, yeah, make this async, right. So: async get_parents on the input, and...
A: Okay, so it looks like the from_dict here... the issue with from_dict, I guess, is: what are we going to do when it gets uids?
C: ...some kind of distributed uid in the worker node.
A: ...be tied to the input set, right? Because the input... so, the reason... yeah, I think something must have happened, because the input should always be either... okay.
A
So
the
thing
is
that
the
input
sets
with
an
input
set.
You
can
create
an
input
set.
That's
a
reference
to
you
know
wherever,
wherever
your
input
network
is
right,
like
let's
see
right
and
and
if
you
have
an
input
site,
then.
A: Yeah, the input set should be able to be, like, you know... it would be, like, you know, a redis input set or something, because it was like: okay, my inputs are actually in redis, right. In this case it would be, like, you know, the NATS input set, right, and the inputs are cached locally in memory, but they're actually, you know, accessed... you know, they're going to be streamed over NATS. How's it going, Sudhanshu? ...See, but...
A: Yeah, I'm just... I'm concerned, because we could get into this situation... well, first off, it looks like, I guess... first off, it looks like from_dict right now doesn't even deal with the parents at all. So, yeah. So I guess, in that case, we haven't yet hit a situation where... we would have hit this by now, right, because it would have been called, and we would have had an input that doesn't have any parents.
C: Currently it has the parent list, but the parent list... it just accepts the ids, yeah.
A: Yeah. And so basically what we need here, right, is a way to lazy-load those ids, right. And so lazy loading, right, is the term for when we just have, you know, the reference to it, and then, when we actually go to use it, we need to pull it over the network or whatever, right. But we can't do that unless the input has a reference to the network, like the input network.
C: But the only input network which has all the ids is the orchestrator network, yeah, but...
A: Yeah, the only thing is, I don't even know if this really even solves the locking problem, quite honestly. Because I think the locking problem needs to be solved on its own, in sort of another distributed way, right. Because this is all... yeah, this is all... and then the locking is sort of just, like... it's not entirely accurate at this point anyways.
A: Okay, so, let's see. Issues: we are currently...
A: Okay, so: we are currently storing the immediate parent uids, and this is because from_dict doesn't do any conversion into Input objects, and export only exports uids. All right, so we're currently storing the immediate parent uids. The input object has no reference to any input network, or input set which might have a reference to an input network, right. And so the reason, just to recap: the reason why an input set might have a reference to an input network is because, when we do...
A: ...add it... yeah, okay, added will return... So basically we might have that, because you could take it, and you could make it so that, okay, someone adds an input set, right. So here, this is the input set... or the memory input network context, and so one can add input sets to it, right. And then, on the other end, basically, is the added method, right. And so you could add, right: like, you could add from the clients, right, or from the workers, right?
A
They
they
add
their
inputs
right
and
then,
on
the
other
end
in
the
orchestrator.
It
comes
out
as
added
right
and
then,
when
we
send
back
to
the
clients,
I
guess
when
we
dispatch
to
the
clients
right
now
we
are
when
we
dispatch
to
the
clients
right
now
we're
doing
it
within
the
the
the
gnats
sort
of
the
specialized
worker
node
class
right.
Is
that
correct,
where's
that.
A: I mean, it's in... well, I mean, the orchestrator... like, what is the worker... the worker's sitting here, it looks like. So the...

C: Orchestrator node looks through the ids and finds what worker is free which can do that operation, and pushes that input to that channel.
A: Operations have access to the orchestrator context. The orchestrator context holds the input network.
A
But
this
okay,
so
we're
in
we're
in
the
space
of
the
the
locking
you
know
what
yeah
so
locking
is
not
going
to
get
solved
okay,
so
I
think
this
is
the
thing.
Is
that
locking
is
not
going
to
get
solved
right
now
in
a
meaningful
way?
Anyways.
It
sounds
like
right
because
of
the
whole
thing
where
it
basically
locks.
It
locks
anything,
that's
apparent,
so
so
we
might
just
yeah.
We
might
want
to
table
this.
A
Yeah,
I
think
this
needs
certain,
but
more
that's
why
that
thing
says
deep
analysis
or
whatever
it
says,
it's
a:
what
is
it
audit
and
audit
yeah?
That's
why
so.
A: Yeah, exactly, right. Because what we need, I mean, essentially, is some kind of distributed locking infrastructure set up here as well, right. You're gonna need... sorry, you need, like, signaling, and... yeah, you need a way to do distributed locking, and we don't currently... that's another thing that we'd have to develop, right, or leverage. I couldn't find anything. I don't...
A
If
I
remember
correctly,
there
was
some
I
looked
for
stuff,
but
I
hadn't
been
able
to
find
anything
sort
of
that
was
sort
of
drop
in
yeah,
so
we
need
locking
so
it
might.
This
might
be
a
case
where
you
do
okay,
so
we
had
another
case
where
we
rejected
data
flows
for
a
certain
reason
when
they're,
when
you're
running
on
the
distributed
orchestrator.
Do
you
remember
what
it
was.
A: Okay, well, we should have some input validation in here for it, yeah. We had some case where we were saying: well, we can't run this right now. Anyways, it doesn't really matter. Basically, you can just throw an exception in the run method if you detect locks in any of the definitions, right. Okay, so let's just... okay, so...
A
Oh,
the
cleanup,
yeah,
okay
yeah,
so
right,
so
if
you're,
detecting
any
cleanup
or
any
locks,
you
can
just
say:
hey,
not
implemented,
error
and,
and
then
we'll
I
mean
that
gives
that
gives
that
gives
it
enough
functionality
right,
like
there's,
there's
there's
uses
for
this
without
those
things
right.
So,
okay,
let's
see
okay.
So
let's
just
sort
of
summary
resolution
we're
going
to
table
this
for
now
locking
should
be
fixed
first.
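The interim fix mentioned, refusing up front rather than misbehaving at runtime, can be sketched as a validation step that raises NotImplementedError when a dataflow uses features the distributed orchestrator doesn't support yet. The class and function names here are illustrative, not the project's:

```python
class Definition:
    def __init__(self, name, lock=False):
        self.name = name
        self.lock = lock


def validate_for_distributed(definitions, cleanup_stage=()):
    """Reject dataflows the distributed orchestrator can't run yet.

    Distributed locking and the cleanup stage aren't implemented,
    so refuse them in the run method instead of misbehaving later.
    """
    locked = [d.name for d in definitions if d.lock]
    if locked:
        raise NotImplementedError(f"locked definitions unsupported: {locked}")
    if cleanup_stage:
        raise NotImplementedError("cleanup stage unsupported")


defs = [Definition("image"), Definition("git_repo", lock=True)]
try:
    validate_for_distributed(defs)
    outcome = "accepted"
except NotImplementedError as error:
    outcome = str(error)
```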
A: Okay: raise NotImplementedError... orchestrator and previous... association would be... okay. So we didn't... this was because we don't have a way to track which worker node they should be run on. The node they should be run on would be the node which produced the input in question... the node which produced a matching input.
A: Okay, all right.

A: All right, so... all right, sounds good. It took us a whole hour to figure that one out; thanks for hanging in there. Let's see... all right, yeah, that'll be interesting. Let me think some more... sort of larger thinking in this space. Because that damn... that damn get_parents method... I don't know what the hell made me do that. Well, I was probably up against a deadline.
A
It's
probably
what
so
yeah,
okay,
yeah
sort
of
rational
thinking
goes
out
the
window
and
you're
like
oh,
oh,
oh,
this
works!
No!
No!
No!
That
only
works
for
now
it
doesn't
work
two
years
from
now.
Damn
it
all
right.
Okay,
so,
let's
see
okay,
all
right
well,.
A: Okay... oh, the two files, yeah.

C: Like, I think Sudhanshu pointed me to the files, but I don't know where to make the change, actually.
A: Okay, let's see... and I think...
A: Okay, so, for you to get NATS installed... okay, I believe it should just be deps.sh.
A: We should probably be testing with... oh, yeah, okay. So basically, here's the deal: there's a new... there's a new CI thing. Let me just write this down too. Oh, and then we didn't get... so: new version of the container, okay.
A: All right, so it's basically going to be ci, and then deps.sh, I believe. So, and then, yeah: global dependencies, test dependencies. Okay, so I think it goes in here... this is where you're going to be looking. And this is, I think... I think it's since been updated.
A: All right, so, two files. Okay, I think it's just... I think, if you just need to install NATS, it's just the one. But when you rebase, you're also gonna want... let's see... or... you don't need to install in the container, actually. So, yeah, you're good, just put...
A: And then I think you're good to go. And that brings me to: is there anything else, then?
A: Yeah, right, yeah, exactly, yeah. And so, yeah... yeah, I mean, it's... this is... it's interesting stuff.
A
Okay?
So,
let's
see
okay
so
suit?
Oh,
and
let
me
just
talk
about
this
stuff,
real,
quick
and
then
we'll
go
to
suit
hunter.
So
I
wanted
to
ask
you
guys,
so
I
was
thinking
about
this
okay.
So
basically
I'm
I
told
again,
but
I'm
I'm
and
I
said
I
didn't
get
her,
but
I'm
I'm
working
on
the
console,
basically
testing,
all
of
the
tutorials
and
making
it
so
that
we
can
have
automated
tests
of
all
the
tutorials.
A: So, like, running the console commands and stuff. And in the process, I'm now ending up sort of re... uh, refactoring the MySQL source a little bit. So, here's the deal: I was thinking about it, and I'm thinking about it as, like...
A: Does it make sense to just store... you know, we have record.prediction, right, and it gives you back, you know, the value and the confidence as a dictionary. But does it really make sense to do that? Or, sorry... or does it make sense to just, when we make a prediction, add that to the set of feature data for that record? And so my thoughts on that are, basically, you know: the reason why it might make sense is that, you know, the output of one prediction...
A: So, from that point, it could be helpful. Now, the cons: it might be... it might be less clear what's ground truth and what's, you know, predicted features.
A: Okay: have them separate. And then, I guess, the other thing was... and then there's sort of a side note on this, and this could have happened as a part of that, or it could happen separately: should we just, you know... should we separate out that confidence, right? Because right now it's like: okay, I get this dictionary where I've got the value, and then I have to separate the value and the confidence; it's stored together, right. We could have... right now we have sort of three...
A: We have two sub-dictionaries within rec... or, well, we have three sub-dictionaries within record. We have extra, which is not relevant to this. We have features, which is, you know, where the feature data is: ground truth, right. Then we have predictions, which is predictions, and then there are sub-dictionaries of value and confidence, right. So the idea is to split it out and have another dictionary called confidence, where we store confidences for each prediction, right. So it's another key-value mapping of, you know, prediction name to confidence value, whereas now predictions becomes just prediction name to value. So, I don't know, that's sort of the main thing that I was thinking about right now. And that one is, I guess, sort of like just an idea, and I don't know whether you guys think that makes things more usable or clear, or less usable or clear. What are you guys' opinions on, like...
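The restructuring being proposed, splitting the nested `{name: {value, confidence}}` predictions dictionary into flat `predictions` and `confidence` mappings, would be a transformation like this (the record layout shown is a simplified stand-in for the real record format):

```python
def split_confidence(record):
    """Restructure: predictions {name: {value, confidence}} becomes two
    flat mappings, predictions {name: value} and confidence {name: c}."""
    new = {
        "features": record["features"],  # ground truth stays as-is
        "predictions": {},
        "confidence": {},
    }
    for name, result in record["predictions"].items():
        new["predictions"][name] = result["value"]
        new["confidence"][name] = result["confidence"]
    return new


before = {
    "features": {"petal_length": 1.4},
    "predictions": {"species": {"value": "setosa", "confidence": 0.97}},
}
after = split_confidence(before)
```

The upside is that callers wanting only the value no longer have to dig through the per-prediction sub-dictionary; the downside, as noted below, is one more top-level key.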
A: ...features. So, prediction returns this confidence-value dictionary, right? So now, basically, we'd just return the value, right, and then we'd have... yeah, we'd have another method called confidence, and it would just return the confidence. And this is... no, this is... yeah, anyways, I don't know. What's the thoughts on that? Does that make sense? I guess it might be sort of a... it's either...
D: I think, like, if we do it this way, then, like, it will be more useful for, like, time-series kinds of data sets. Okay.
A: Okay. So, if there's no solid... no one seems to have very strong feelings about this, so I'm going to throw this idea out the window. We have a million other things to do. So: it lives in the meeting minutes, and it may die in the meeting minutes, but otherwise we can come resurrect it if we want to. This could be helpful with time series, but let's try some time-series stuff...
A: ...and then decide how helpful it might be. So I guess, you know, I think that's kind of the resolution here: it could be helpful with time series, but if we're gonna assess this, we should go try some time-series stuff and see how helpful that is.
A: All right, okay! So now, let's get to: how are things going with you?
A: Okay, great, all right, fantastic. All right, so, yeah: just let me know how things are going. Like I said, I've now... I'm back to having some more time for the moment here, so I'm trying to ramp up and get more things done. And I'm still thinking... I think, with the current trajectory of what you've got, and then what the rest of the cleanup we've got, I think it still makes sense to target the accuracy stuff that you're doing for the beta release.
A
Nice
all
right:
well,
it
was
good
talking
to
you
and
is
there
anything
else
you
had
for
me?
You
wanted
to
talk
about,
or
just
this.