From YouTube: Weekly Sync 2020-06-26
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.ydka57v9c0fn
A: Okay, all right, so I was working on this thing, like I told you guys: I have to do all these compliance tests, right. So I was working on this thing to do the compliance tests, and basically I built this command-line utility to interact with all these various web apps that we have, so that I don't have to go to all the web apps and I can basically script some of the compliance tasks and stuff. For example, there are some things where I have to go and set one task as not applicable every single time, because I'm not really shipping any binary, right, we're releasing only source code, and Intel has this one task that we have to do for releasing binaries, but we're not releasing binaries. So basically I have to go through, for however many plugins we have in DFFML, and click through several screens of the web app and hit N/A, right. So I'm ready to script this stuff.
A: So in the process of doing that, now we have these command-line clients, or command-line code, and so I'm obviously using the command-line framework that we have within DFFML, and I realized that we sort of have a gap. And then I wanted to write some scripts where I was basically using the command-line clients, so I'd have to instantiate the class and call the run method. But, you know, each of these command-line...
A: Well, I'm going to need to have an __aenter__ method for this thing, using the client session, just kind of like the PyPI operations have. You know, just like how the PyPI operations have an implementation enter and a context enter, and we enter the client session; and the database operations do an enter on the database and then a context enter on the database context.
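The enter-then-context-enter pattern being described here (a client whose `__aenter__` sets up a shared resource, plus contexts created from it that do their own `__aenter__` per unit of work) can be sketched roughly as below. The class names and the list standing in for a real session or database connection are illustrative, not DFFML's actual API:

```python
import asyncio


class Client:
    """Entered once; holds a shared resource for all of its contexts."""

    async def __aenter__(self):
        # In the real code this might open an HTTP client session or a
        # database connection; a plain list stands in for it here.
        self.session = []
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.session = None

    def __call__(self):
        # Contexts are created from an already-entered client.
        return ClientContext(self)


class ClientContext:
    """Entered once per logical unit of work against the shared resource."""

    def __init__(self, parent):
        self.parent = parent

    async def __aenter__(self):
        self.parent.session.append("context opened")
        return self

    async def __aexit__(self, exc_type, exc, tb):
        pass

    async def run(self):
        return len(self.parent.session)


async def main():
    async with Client() as client:
        async with client() as ctx:
            return await ctx.run()


result = asyncio.run(main())  # exactly one context was opened on the session
```

The point of the two-level enter is that expensive setup happens once on the client, while each context only does cheap per-use setup.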
A
It's
sort
of
the
same
thing
we
ran
through
when
we
did
the
high
level
stuff
issues
we
had
to
pull
out
or
I
had
to
pull
out
the
code
from
the
CLI
I
had
to
pull
out
the
code
from
the
CLI
classes
into
high
level
right
and
then
we
end
up
with
these
functions
in
high
level,
and
it's
like
okay,
great
now
we
have
functions,
and
now
we
need
the
CLI
wrapper
around
them.
Okay,
so
and
then
okay.
Well,
what?
If
I
want
to
use
them
as
operations?
A: So then I have to write an operation that wraps them, all right. So basically what I realized, as we've all realized, is that everything needs to be an operation. So this led me down this path of trying to figure out, okay, let's try to figure out how to make an operation into something that we can set as a command, all right. Because we go into our CLI dataflow code, right, so if we go in here and we're like, okay, what are these...
A: What are the subcommands here? Well, we have to point them at another class, which is a CLI class, right, which has a run method. Well, what if we could just point them at an operation, right? Then what would have been the config is now the operation inputs, so yeah.
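The idea of pointing a subcommand at an operation, so that the operation's inputs become the command's arguments, might look something like this sketch. The `op` marker, `run_bandit`, and `cli_from_operation` are all invented for illustration, not DFFML's real API:

```python
import argparse
import inspect


def op(func):
    """Hypothetical marker: in the real framework this would register
    the function as an operation with typed inputs and outputs."""
    func.is_operation = True
    return func


@op
def run_bandit(pkg: str, severity: str = "low") -> dict:
    """Pretend static-analysis operation; its inputs become CLI flags."""
    return {"pkg": pkg, "severity": severity, "issues": 0}


def cli_from_operation(operation, argv):
    """Build an argparse parser straight from the operation's signature,
    so no hand-written CLI wrapper class is needed."""
    parser = argparse.ArgumentParser(prog=operation.__name__)
    for name, param in inspect.signature(operation).parameters.items():
        if param.default is inspect.Parameter.empty:
            parser.add_argument(f"-{name}", required=True)
        else:
            parser.add_argument(f"-{name}", default=param.default)
    return operation(**vars(parser.parse_args(argv)))


result = cli_from_operation(run_bandit, ["-pkg", "insecure-package"])
```

Because the parser is derived from the signature, defaults on the operation's parameters automatically become defaults on the command line, which is the gap discussed next.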
A: For some of these, right, if we're going to redo what we did here, then we're going to need the ability to set default values on operations, because obviously command-line things have default values. And then command-line things also, you know, like I was talking about: if a command has got, like, an __aenter__ method, or one of these commands...
A
Well,
how
is
that
going
to
work?
Well,
that
ends
up
becoming
essentially
the
operations
config,
and
then
you
can
use
the
operations
config
to
do
the
a
enter
there,
and
so
essentially,
I
got
down
this
rabbit
hole
that
sort
of
it
sounded
like
and
then,
as
is
what's
going
to
lead
to,
where
we're
at
with
your
thing,
Augen
is
that
I
ended
up
needing
some
of
what
sock
Shaam
you
were
doing
and
then
I
ended
up
needing
or
I
ended
up,
hitting
the
same
issue
that
you
were
hitting
again
with.
A: You know, parsing these configs, as well as what base calls the config dict, right, which is the plugin plus the config. And so I think what I realized was that we really need to be storing... it's like you were saying: how do I get the type information, right? Well, the type information is usually stored in the Arg structure, which would come from the config, right, because we take these... we used to have the args and config methods, and then basically Saksham went in and made it all config.
A: This convert_value function takes an arg and a value, and then it decides how to convert it, right. But this won't help you load your plugin if you don't know what the type of your plugin is, right. So basically what I realized, and this is pretty simple: we just need to be including the type of the plugin in with these config dicts, because if we have the type, then we can just go through the entry points.
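Storing the plugin's type alongside its config, and resolving it again on load, might look roughly like this. The registry dict stands in for real setuptools entry points (which you would scan with `importlib.metadata.entry_points`), and all the names here are invented for the sketch:

```python
import json


class SLRModel:
    """Stand-in plugin class; real code would discover plugin classes
    through importlib.metadata entry points rather than a dict."""

    def __init__(self, **config):
        self.config = config


# plugin kind -> plugin name -> class, mimicking entry-point groups
ENTRY_POINTS = {"dffml.model": {"slr": SLRModel}}


def export(kind, name, config):
    """Serialize a plugin reference with its type stored next to the config."""
    return {"kind": kind, "plugin": name, "config": config}


def load_config_dict(config_dict):
    """Re-instantiate a plugin from an exported dict. Because the type
    travels with the config, nothing else is needed to load it."""
    cls = ENTRY_POINTS[config_dict["kind"]][config_dict["plugin"]]
    return cls(**config_dict["config"])


exported = export("dffml.model", "slr", {"predict": "y"})
# The dict survives a JSON round trip, so it can come from a file or the CLI.
restored = load_config_dict(json.loads(json.dumps(exported)))
```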
A: If we store everything like this, now we've got all the information we need at any point in time to load anything, right, so that's sweet. And now the other part of this is that we need to do that thing that we talked about, where basically any time we get anything (yes, I am recording), okay, any time we get anything, like, say the dataflow comes in and we get the configs, or, you know, we get something in from the command line, right away...
A: Because otherwise we end up with this mess, right, yeah, exactly right. We end up with this mess where it's like: okay, well, you pass a value to something, and then it's not loaded, right. So another thing that we might want to do is actually just apply this load_config_dict on top of whatever needs it, everywhere. Yeah, it needs to go everywhere, it pretty much needs to go everywhere, I mean, unless... the only other thing is like...
A: The other place that I think it's important that we do this is on top of the make_config, so on top of the dataclass that we create: basically wrap the __init__ method or something to intercept any types that are base configurable types, and if they are base configurable types, then attempt to config-load what you see as the input, if the input is like a dictionary. So there's a couple of places. But, you know, you guys have all experienced this problem.
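The "intercept at `__init__`" idea could be sketched like this: wrap the generated dataclass `__init__` so that any field annotated as a configurable type accepts a plain dict and loads it into an instance. `BaseConfigurable`, `Scorer`, and this `make_config` are illustrative stand-ins, not the real DFFML helpers:

```python
import dataclasses


class BaseConfigurable:
    """Marker base for plugin-style objects constructed from a config dict."""

    def __init__(self, **config):
        self.config = config


class Scorer(BaseConfigurable):
    pass


def make_config(cls):
    """Turn a class into a dataclass whose __init__ intercepts fields
    annotated as BaseConfigurable subclasses: a plain dict passed for
    such a field gets loaded into an instance before init runs."""
    cls = dataclasses.dataclass(cls)
    original_init = cls.__init__

    def __init__(self, **kwargs):
        for field in dataclasses.fields(cls):
            value = kwargs.get(field.name)
            if (
                isinstance(field.type, type)
                and issubclass(field.type, BaseConfigurable)
                and isinstance(value, dict)
            ):
                kwargs[field.name] = field.type(**value)
        original_init(self, **kwargs)

    cls.__init__ = __init__
    return cls


@make_config
class ModelConfig:
    scorer: Scorer
    name: str = "demo"


# The dict for `scorer` is transparently loaded into a Scorer instance.
config = ModelConfig(scorer={"threshold": 0.5})
```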
F: See, because, like in image processing, there are so many operations you need to perform on some image to get two or three features, and that will be a very long config file, and we need to add a very long dataflow create command. So it would be very helpful if you can explain more on this.
A: So basically, what I'm saying is that when I say we can use an operation as a command, from the command line, I'm saying that if we had this thing that was, like, the shouldi CLI class, right... and this is, I mean, you guys probably haven't seen this much, other than Saksham. But if you were to say, you know, Python equals Python...
A: This is the body of the CLI command, right. And so what I'm saying is... and actually, let me redo that. So then, here, when we've got, like... so when we have predict, we have, you know, record equals predict record, and all goes to predict all. And so when we have these, you know, then... so this is what happens when you say `dffml predict all`: it runs this command here, the one whose body we're looking at right now. That's what happens, it runs this, right.
A
So
what
I'm
saying
is
that,
instead
of
doing
that,
kind
of
way
where
we're
defining
command
line
commands
like
that,
we
could
just
do
run
bandit
and
then,
when
I
type
you
know
should
I
python
bandit.
It
runs
this
right
here.
Right
and
I
would
pass
it.
You
know
PKG
whatever
right
and
if
it
had
a
config
I
would
do
config.
A
You
know
clients
or
a
client
session,
for
example,
with
the
HP
stuff,
that
timeout
is
42
right.
So
these
are
going
to
be
arguments
and
then
things
prefix
what's
config
are
gonna,
be
you
know
things
will
go
into
the
configuration
of
the
operations
so
basically
you're
just
going
to
be
able
to
take
any
opera
build
the
command
links
like
if
you
had
operations,
just
single
operations
that
you
wanted
to
run
from
the
command
line.
A
You
could
build
a
command
line
client
using
this
syntax,
where
it
just
runs
that
operation
now
the
nice
part
of
that
is,
of
course
it's
going
to
you.
Can
you
can
just
call
those
Python
functions
as
regular
functions
of
their
places
right?
So,
if
you're
building
a
command
line,
client,
that's
also
a
library,
then
you
have
the
function,
that's
just
your
regular
function
and
then
you
can
call
it
you
can
you
can?
Have
you
basically
a
command
line?
This
is
you
decline?
A
What
your
command
line
looks
like
and
I
don't
have
to
write
any
wrappers
around.
What
are
the
argument?
Abstraction?
Like
I
just
say,
here's
this
operation,
and
now
my
command
line
knows
how
to
run
that
operation
right
and
then
the
other
nice
part
about.
This
is,
of
course,
basically
anything
you
write
now.
You
can
also
just
throw
it
in
a
data
flow
and
run
it
in
a
data
flow
too.
A: I mean, you could do that, so yeah. The sort of idea is, basically, you know, no matter what you're doing, whether you're writing dataflows, or you're writing a command-line client, or you're writing just some Python functions that you're using as a library, you can use them all the same way.
A
You
don't
have
to
do
anything
different
like
you,
don't
have
any
right,
any
extra
code
to
wrap
anything,
whether
whatever
you're
doing
right
right
so
the
whole
idea
is
we
make
it
so
that
we're
always
writing
less
code
and
not
any
wrappers
that
change,
because
it's
all
just
the
same
stuff
right.
It's
just
has
to
do
with
what
are
the
arguments
to
this
thing
and
what
is
if
it's
config,
so
anyways
yeah,
so,
okay,
so
I'm
going
to
let's
see,
let's.
A: Okay, so yeah, so basically I'm going to try to fix the whole config thing, and hopefully it'll be good. I think this was sort of the last step that we needed when we talked about unifying that config stuff. We sort of glossed over this as something that should be done at some point. There are these objects, and they're not always... as I mentioned, we have all these config objects, and we have configs of various types, right. Some things are dictionaries.
A
Some
things
they're
strings
numbers
like
whatever
some
things
are
dictionaries
that
are
not
configure
objects
right.
So
what
we
really
need
to
make
sure
is
that
we're
looking
through
everything
every
time
we
load
any
kind
of
dictionary
in
memory
we
look
through
it
and
load
any
config
objects
right
that
we
anytime
it's
you
know
coming
through
the
CLI
or
the
data
flow
through
configs
right.
It's
something
that
might
contain
configure
objects
that
we're
going
to
need
to
instantiate.
A: All right, yes, these are the ones, okay, great! Yes, so these are the examples. I used the question-answering model, and this is question answering with... oh yeah, this is the classifier, wait. Let's see, yeah... oh yeah, here's the QA model. Okay, yes, and this is the one with context that we had been talking about, so sweet, right, very cool. Well, we've got a bunch of NLP stuff in here now, nice job.
A: Okay, yeah, and then, please take notes on your perceptions of the documentation and where it's lacking. I know, we know it's lacking; especially, we know we need, like, a page on dataflows, and just a bunch of random things you can do with dataflows, and sort of the various syntaxes for things. So as you think of things while you're reading through it, please write them down, and then we can really focus on it. Great, thank you.
A: Yes, and, I mean, yeah, I saw your code there, and I didn't see your features code, so I didn't think anything of it. I just thought: okay, he's onto that step now. So sorry, I should have thought more into it. Okay, so I'm glad we got that, though. Is there anything else that you're sort of thinking about on this?
A: Let's not have that, yeah. So I think Sudarsan and I went through recently and removed all the default directories, so I think we're currently in a good spot where we don't have any default directories. I don't know if we got all the feature hashing that happened; we need to go through and re-check that. So we need to re-check before the next release that we removed any places where...
A
Model
directory
was
being
determined
from
feature,
hashes,
etc,
because
you
know
how
we
were
doing.
We
were
doing
this
thing
where
we
were
trying
to
be
clever,
and
you
know
putting
the
directory
in
the
cache
and
then
trying
to
figure
out
based
on
what
features
the
user
used,
what
model
to
load
from
the
cache.
Well,
we
all
had
run
into
that
issue
right
where
that
was
actually
tripping
us
all
up.
A
So
yeah
we're
gonna
want
to
make
sure
that
we
got
rid
of
that
before
the
next
release
and
so
we're
gonna
we're
gonna,
make
it
so
that
there's
no
default
directory
now,
let's
see
now
part
of
this
I
think
this
is
something
we
probably
need
to
talk
about
is
so,
let's
see
okay
and
then
you're
going
to
need
so
you're
going
to
need
to
load.
I
think
this
is
another
thing:
is
that
we're
gonna
need
to
probably
load
and
save
these
things
so
load
model
yeah,
okay,
job
load,
that
load
okay,
self-taught
path?
A: All right, so now we end up with the scikit SLR model, right, and this is what Sudarsan had recently done: she went through and made it so that it wasn't sort of a hash of the features, it's just going to say "model" within that directory, right. And then we're storing the JSON in there, because this SLR model is the basic one: it just stores its config to JSON. So now the issue becomes... and for yours too... so this is the most simplistic model, right, so this is not...
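Persisting the model's config next to its learned parameters, so a reload cannot silently fall back to different defaults, could look like this. The `SimpleModel` class and the file names are made up for illustration:

```python
import json
import pathlib
import tempfile


class SimpleModel:
    """Toy model that persists its config next to its learned parameters,
    so a later load restores the same hyperparameters it was trained with."""

    def __init__(self, directory, **config):
        self.directory = pathlib.Path(directory)
        self.config = config
        self.weights = None

    def save(self):
        # Store the learned parameters AND the config that produced them.
        self.directory.mkdir(parents=True, exist_ok=True)
        (self.directory / "model.json").write_text(json.dumps(self.weights))
        (self.directory / "config.json").write_text(json.dumps(self.config))

    @classmethod
    def load(cls, directory):
        # Read the config back first, so defaults are never silently used.
        directory = pathlib.Path(directory)
        config = json.loads((directory / "config.json").read_text())
        model = cls(directory, **config)
        model.weights = json.loads((directory / "model.json").read_text())
        return model


with tempfile.TemporaryDirectory() as tmp:
    model = SimpleModel(tmp, window=42)
    model.weights = {"m": 2.0, "b": 0.5}
    model.save()
    restored = SimpleModel.load(tmp)
```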
A: We don't pass in this, you know, sample size value, right; if we don't pass that value into the config next time... this is part of why we were doing that hashing: to ensure that you end up with the same model parameters in memory, right. So if we don't pass that in, and we just pass the directory...
A
It's
not
going
to
know
what
to
do
right,
so
it's
just
going
to
use
the
default
value
from
before
and
the
default
values
not
going
to
be
the
same
default
value
that
you
used
or
it's
not
gonna,
be
the
same
as
the
one
you
specified
right
so
so
yeah.
This
SLR
models,
not
a
great
example,
but
we
should
probably
do
something.
We
probably
need
to
do
something,
and
it's
probably
needs
to
be
like
within
model
or
something
to
say.
You
know:
load
save
to
directory
load
to
directory
right
Oh.
A
Actually
we
were
going
to
do
this
hope
now,
I'm,
remembering
there
was
a
whole
nother
thing
we
were
going
to
do,
but
basically
we
were
going
to
do
this.
We
need.
We
need
to
basically
say
first
thing:
when
you
get
into
the
model
you
instantiate
that
config
right
or
well,
we
probably
do
the
double
aunt
or
on
let's
see,
yeah.
We
need
to
make
sure
that
we're
loading
things
into
the
self
dot,
parent,
config
right
and-
and
so
we
probably
can
do
that
in
each
model-
could
do
that.
A: ...it restores the default, since it wasn't specified the next time, right. Because this was the whole thing that we were trying to avoid by doing the feature hashing: that people would specify different things. And mainly it was because, when I had done this at one point with TensorFlow, I realized that the TensorFlow model blows up when you give it different feature names and different parameters and stuff, and so it was like: all right, okay, well, we don't...
A
We
want
to
the
tensor
flow
if
you
guys
have
seen
the
tensor
for
error
messages,
but
they're
gnarly,
looking
things,
and
so
the
idea
was
was
saved
people
from
ever
having
to
look
at
those
so
yeah,
but
this
is
this
is
what
we
really
need
to
do
here.
So
this
is
sort
of
just
will
create
this
issue.
Don't
worry
about
this
now
for
SK
learn
stuff,
just
though
what
I
meant
to
say
on
this
is
that
so.
A: Write the tests... when you write the tests, so we have a good example of this, actually, yeah. So when you write the tests, you are probably going to be sufficiently writing the test if you do something like how you did this one here. So basically... I mean, we talked about how we're going to need to write that sort of config parser and stuff to validate documentation within Sphinx and things, but that's not going to be a part of this; this is sort of just generic.
A: And this stuff is not... so this was, you know, actually writing the CLI commands within the Python file, but what you're really going to want is also a test run, so, actually, with the CSV. So what you're going to want to focus on here would be, you know, something like this.
A: When you write those tests (and this is something that we should add to the documentation... well, this is something we should make some sort of standard method to do for us, because we keep having to do this, and I know it's probably annoying; I know that when I've done it, it's been like: damn, why don't we have a function that does this), when you write the tests, duplicate this behavior, where the test is reading the .sh files.
A
So
the
test
is
basically
just
reading
the
dot
sh
files
in
and
then
calling
them
from
the
you
know
the
pythonic
interface
of
the
command
line,
and
that
way
that
way,
you
have
written
the
examples
for
the
shell
and
you've
tested
them
from
from
python
right.
So
you
have
a
test
and
your
test
is
the
examples
essentially
and
you're
testing
the
examples
all
in
one,
so
it
saves
it
saves,
saves
time.
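The test-reads-the-shell-example pattern might be sketched like this: the documented `.sh` one-liner is parsed and dispatched through the Python interface of the CLI, so docs and tests stay in sync. The `cli_main` dispatcher and the file contents are invented for the sketch, not DFFML's real entry point:

```python
import pathlib
import shlex
import tempfile


def cli_main(argv):
    """Tiny stand-in for a real CLI entry point."""
    if argv[:2] == ["predict", "all"]:
        return {"command": "predict all", "args": argv[2:]}
    raise SystemExit(f"unknown command: {argv}")


def run_sh_example(path):
    """Read a documented shell one-liner and dispatch it through Python,
    so the docs example and the test are the same artifact."""
    tokens = shlex.split(pathlib.Path(path).read_text())
    assert tokens[0] == "dffml", "example must invoke the dffml CLI"
    return cli_main(tokens[1:])


with tempfile.TemporaryDirectory() as tmp:
    example = pathlib.Path(tmp) / "predict.sh"
    example.write_text("dffml predict all -model slr\n")
    result = run_sh_example(example)
```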
A: Sweet, sweet, that would be cool, and there's more to be done. I know Hashim sort of started the daal4py stuff, but I've been in contact... oh, I forgot to tell you guys, yeah, so I've been in contact with the team within Intel that does daal4py, and they had some interesting things to say, basically, about how they did... if you have an... I pushed a patch for this. So if you have daal4py installed...
A: So, you guys... I don't know, I guess, actually, you may not be familiar with this, but basically Intel does a lot of work here. You know, everybody always wants a faster processor, but the thing is, you can only make them go so fast, because it just gets very hard to make them go faster, right.
A
So,
instead
of
making
it
go
faster,
what
they
do
is
they
may
they
try
to
make
certain
things
faster
and
the
best
way
to
make
certain
things
faster,
as
we
all
know,
is
to
to
paralyze
them,
and
so
what
they've
done
is
just
like.
You
know
why
we
use
GPUs
for
a
lot
of
machine
learning.
Stuff
is
because
you
know
we
can
we
have
this
sim
D
instruction
single
instruction,
multiple
data,
where
we
can
do
you
know
lots
of
the
same
type
of
you
know.
A: All you have to do is call this daal4py scikit-learn patch, patch_sklearn, and it will, you know, basically speed up your scikit-learn stuff. So, after they told me about it, I just added this to the scikit model. So basically, if you're running the scikit models and you have daal4py installed... and you pretty much have to install it through Conda, or try to build it, as you figured out, or as Hashim figured out...
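The opportunistic patch being described (speed up scikit-learn only when daal4py happens to be installed, and otherwise run stock scikit-learn) is typically done with a guarded import. `daal4py.sklearn.patch_sklearn()` is the call daal4py documents, though the exact module path is worth verifying against the installed version:

```python
# Opportunistically accelerate scikit-learn with Intel's daal4py when it
# is installed, and fall back to stock scikit-learn when it is not.
try:
    import daal4py.sklearn

    daal4py.sklearn.patch_sklearn()
    PATCHED = True
except ImportError:
    PATCHED = False
```

Callers never need to know which path was taken; the scikit-learn API stays the same either way.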
A
You
were
the
one
who
had
to
figure
out
Conda
in
the
first
place,
but
yeah.
Then
that
was
a
pain.
So
if
but
but
you
know,
if
you
got
Conda
installed
and
if
you
run
the
CI
locally,
you
have
Conda
installed
and
I
think
it
will
install
I,
think
it
will
install
tail
4-ply
in
there
when
you
run
the
all
of
them.
A: You know, it'll patch your stuff and make scikit faster, which is sweet, and that's all we have to do. When they told me about this, I was like: sweet, done, great, thank you. So I put up an issue where we need to document that. And then the other thing that they do, what they provide... it sounds like Intel's going to have some more, like, they're maybe going to do some...
A: You know, they have their accelerated compute sticks and stuff, right, like the Movidius and things like that, so they're going to have more stuff like that. And it sounds like they might have some that are, you know, not on a USB stick but, you know, probably, I would assume, connected via PCI or something: more just machine-learning chip stuff. They call them accelerators. So basically they're going to transparently use those.
A: If you have them. And then they also have this really nice thing, which is basically... we've talked about this before, but the fact that, you know, all of our models right now pretty much pull everything into memory and then do stuff on it, right. And all of DFFML is built to be asynchronous, so that essentially you wouldn't have to do that, right: you could stream everything. And, well, obviously I've had no communication with these guys until now, and I come to find out...
A: So that is pretty cool, and I think we're using that right now; I think Hashim's implementation used that, but I'm not exactly sure, actually, it may not have. So there's some more work to be done there too, if you're interested: streaming and GPU support, just because we already have that model in there in the code base, so it might be good to go do some improvements on it. But if you want to do the plot ML stuff, that's awesome too.
A: So if you wanted to sort of, you know, just sort of go and get more out of the one that we already have, sort of as a break from writing a lot of this scaffolding code... you know, because there can be a lot of scaffolding code. So if you need a break from the scaffolding code, I wanted to offer that as an option, or you can, you know, yeah.
A: Okay, cool, yeah, so I'll just put that out there as an option, and then, yeah, otherwise, you know, whatever you want to do; I just wanted to give you sort of fun ideas. So, and then, okay, so let's see. So, Saksham, let's go look at the image operations... actually, let's look at the locking PR, because I saw that has a green check, so if Augen wants to get out of here, he can.
A: So, without a locked object, let's see, yeah, okay, right, yeah. So... or, well, for the last one, right. And then, because we don't want to test locking, we want to test... let's see. Well, the order, you said the orders are... yeah, the order is always going to be different, right. Or, well, not necessarily always, but it might be.
A: There you go, yeah. So do something about the second one, right, just to make sure that we're getting the right output there, right. So yeah, make sure we have the right number of lines, right, and, you know, the right number of lines that say "set", and then the right number of numbers, right. Okay, and then I think we'll be good. Is there anything else you wanted to say on that one?
A: Okay, and then, Saksham, on to your image operations PR. So yeah, and then, as usual... I don't know, the image operations, they're very confusing. So if anybody wants to drop, just feel free to drop, and, just let me reiterate, as always: if you want to drop off the call, feel free, just drop on, drop off. So, okay, new image processing operations.
F: So it was copying the no-default value, the no-default object, of that dataflow, so it was not giving a correct dataflow in the HTTP service test. So I changed it: in the definition in types.py, I changed it to, like, the missing type that you used in the dataclass.
F: It's taking three arrays for now, so it was not a good operation; I just added it temporarily to work. So it was working before, like, it was giving me the single feature vector I wanted. But what if there are two or four operations we are running at once, and we need to convert them into a single feature vector?
A: Oh, I don't think... let's see, wait. Okay, so you're saying, yeah, there's an arbitrary number of feature vectors, like, how do we combine them? Well, okay, so... oh, okay, so I believe that we added these array operations in the integration usage example PR. I don't think we've merged them yet, and actually we may have taken them out, I'm not sure, we probably...
A: That's what I'm looking at, the flow, right now, let's see. And then we basically want the outputs of those to become our new... you know... okay, this is going to map, yeah, this is going to map image to these guys. So what we really need is something that's more like... okay, we really need something that's more like...
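Combining the outputs of an arbitrary number of preprocessing operations into one feature vector, rather than hard-coding exactly three inputs, is essentially a flatten-and-concatenate step. A generic sketch, with names invented for illustration:

```python
def flatten(value):
    """Recursively flatten nested lists/tuples into a flat list of scalars."""
    if isinstance(value, (list, tuple)):
        flat = []
        for item in value:
            flat.extend(flatten(item))
        return flat
    return [value]


def combine_features(*outputs):
    """Concatenate any number of operation outputs into one feature vector,
    regardless of whether each output is a scalar, a list, or nested."""
    vector = []
    for output in outputs:
        vector.extend(flatten(output))
    return vector


# e.g. outputs of a resize, a histogram, and a scalar brightness score
vector = combine_features([[1, 2], [3, 4]], [5, 6], 7)  # → [1, 2, 3, 4, 5, 6, 7]
```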