From YouTube: Weekly Sync 2020-05-29
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.w92x8uaeu3am
A: Oh, meeting minutes — oh, my brain is a little fried. I was up late last night trying to do some stuff in KVM, and it's a mess.
A: Yeah, basically I was up late last night trying to figure out this thing, but I told my boss I was going to have it done by today, and I have a meeting with her at 11, and, well — I'm currently stumped by it.
A: It has to do with nested virtualization, and this thing that I'm working on that's like a virtualization-based security feature. It's very confusing, and I don't know what's going on — well, I kind of know what's going on, but it's a mess. And then you're on two levels of VMs, which is actually one level of VMs, but it pretends to be multiple levels. I'm confused right now, but we'll see — hopefully I'll be able to figure it out.
A: Sometimes it looks like it isn't even going to be possible, and sometimes it looks like it is, and I've been working on it in some way or another since, like, the end of October, so it's a bit of a mess, yeah. I really need to finish it up, because I just have too many projects going. But oh my gosh, yeah — Himanshu, did you do this?
E: So I did, but I have one problem there: what to do when we are not using the simple mode, because—
A: Let's see. So yeah, you've got simple — well, I would connect valid to — oh yeah. There's no other — we're not listing the conditions, are we now? I see. I guess I would say: what's the best way to do this?
A: A diamond shape — that might be too big. Oh no, it'll probably be fine. I'm just thinking we could put a diamond shape — "conditions" or something — in this box here. Like, if we had a little diamond shape right here that said "conditions", and then "valid" went into "conditions", and then "conditions" went into "clone repo".
A: Okay, all right, let's do that then. So I would just say, yeah: if an operation has conditions, then put that little conditions statement there. So let's see — all right, great. I'm excited to see this happen, because I remember there was one time Aghin and I were doing something, and I said, oh well, we can just visualize it — and then: oh, we can't just visualize it, because it doesn't make any sense. I think it was the—
A: Talked about diagram condition linking.
A: This is supposed to be here — okay, moved I/O usage, nice.
A: Where is — aha, okay. So everybody's started to do a good job of this in their pull request titles, but I noticed that the titles of the commits are still not quite doing that. And I've also noticed that mostly everyone is also not capitalizing the first word — it doesn't really matter, but it's nice to have everything be consistent. So this is perfect — the body there, because basically—
A: The ideal situation here is that I can come and just hit rebase-and-merge, as if you're committing to master. So if you're thinking about what your commit message looks like: does it look like the commit messages on master? So, for example — well, this one I guess I just merged without the thing, because it was close enough and I wanted to give Sudhanshu points for formatting—
A: —his pull request almost exactly the way we're talking about. So yeah, the capitalization here — but basically, the way that everybody's titling their pull requests: if we can title our commits that way, and then do "Fixes", then a colon, and then the issue number with the pound sign — like this one — and then there's a blank line here. Let me show you guys some git.
A: All right, so yeah — it's always: title, blank line, "Fixes:", and then "Signed-off-by". The Signed-off-by doesn't really matter, but I've been adding it. Let's see — yeah, like this guy: well, he's from Linux, so basically this is sort of like us.
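The convention just described — title line, blank line, a "Fixes" tag with the pound-sign issue number, then "Signed-off-by" — would produce a commit message shaped like this (the title, issue number, and author here are made up for illustration):

```
operations: io: Add example usage

Fixes: #123

Signed-off-by: Jane Developer <jane@example.com>
```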
A
You
see
this
style
and
a
lot
of
like
larger
projects
and
since
we're
getting
larger
and
larger,
then
we
want
to
keep
this
style,
but
basically
the
idea
is
that
if
okay,
so
I'm
going
to
make,
I'm
gonna
make
a
thing
about
this
pretty
soon,
but
there's
this
whole
there's
this
whole.
Actually,
where
is
that
link?
This
might
be
helpful.
A: You guys don't have to care too much about this, but if you can title the pull requests, that's helpful. I can change it, no problem, but basically, from a mentoring perspective, this is something good for you guys to learn. Because — let's see, where is it — the Linux kernel is a prime example of this, because they're a huge community.
A: Where is the — I'll just post this.
A: I've been trying to do this, and it's a good thing to try to do. Most of the time a single pull request will fall into this sort of paradigm, but sometimes it doesn't, in which case — then this is the kind of thing that, long term, we should be thinking about when you're working with git and stuff. But basically I'm going to send this out, or I'll put it in the meeting minutes.
A: Okay. So that's a lot of hand waving and not a lot of explaining. So the idea here is that if we have a pull request — let's see, actually, let me take — I'll use this project.
A: Let's see — that's a good example, and maybe we can find a bad example too. Okay, okay, for example: this is one pull request, but there's three separate commits, and let's—
A: All right, so this is what it looks like when you apply each commit one by one. We apply — or wait, which one is it? Yeah, this is the first one. So if you're thinking about your commits as a series: you want to think about your pull request, and then you want to think about — okay.
A: Yeah, you guys might have noticed that I've done this sometimes. All right, so we had this pull request, and it added that pre-processing source. Saksham wrote this. So we look at the whole thing: Saksham had written this associate-definition output operation, and then he'd written all of this dataflow source, and then he wrote the test cases for each one. And then we also had this change that introduced the new syntax for getting definitions—
A: If you're grabbing definitions from a specific origin, then you can list which definitions in that origin are allowed for the single input. So there are basically three separate changes in here, but it all shows as one pull request, and usually we just merge things as one pull request. Actually, what I'll do is — I will take — okay, so we'll make a demo video.
A: I might not ask you guys to do this — by the end of the summer I might start asking you to do this, because it's good practice. The Linux kernel does this heavily, the TPM2 software project does this heavily — projects with large communities that really stick to their standards will require you to do this. So it's a good thing to practice and know how to do. And so, yeah, okay — I'll make a video showing how this works.
A: But basically, the thing is that we're taking this one consolidated patch and we're going to split it into multiple logical patches. And the way we split it — we're trying to think about it as: if I applied one patch, and then the next patch, and then the next patch, then after each application of a patch, the CI would still pass, or all the tests would still pass.
A: So for this one, the logical way to split it up was: first we add this operation, because this operation doesn't depend on any of the other changes. So we take that and make it the first commit. And then we add the ability to override the definitions, because we're going to need that for the dataflow pre-processing stuff, so we know that — this is a nice example, because there are three of them, and one of them is unrelated.
A: Then we take the rest of the changes and we say: okay, which parts of these changes are dependent on other parts of these changes? The changes that are dependent on other changes go last. And then you just keep repeating that process.
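The process described — independent changes first, dependent changes last, repeated until everything is ordered — is a topological sort. A minimal sketch with Python's standard library (the change names and their dependencies here are invented for illustration):

```python
from graphlib import TopologicalSorter

# One consolidated patch, broken into named changes; each change maps
# to the set of changes it depends on.
changes = {
    "add-output-operation": set(),
    "allow-definition-override": set(),
    "dataflow-preprocessing-source": {
        "add-output-operation",
        "allow-definition-override",
    },
}

# static_order() yields every change after all of its dependencies,
# which is exactly the order the commits should be applied in so that
# the tests keep passing after each one.
order = list(TopologicalSorter(changes).static_order())
print(order)
```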
A: Eventually you have a series of commits where you can apply each one cleanly — "cleanly" being: all the tests pass, without build errors or whatever — and you end up with this nice log of what all the changes were. And the reason we do that is — I don't know if you guys do this much, but I find this extremely helpful when I'm asking "what happened?": I do `git log -p` and you can see exactly what happened.
A: Well, it just lists all the changes — it's going through each commit one by one, whereas GitHub doesn't have a great interface to do this, because you have to click into each commit; the command-line—
A: —client of git does. And where this is really, really helpful is when you're wondering what on earth happened to this file, because you can do `git log -p --` and then, for example, we can do operation output, and we can say: okay, what is going on in here since the beginning of time? Well, okay — so it looks like we added the initial operations—
A: We added some context management, and then — oh, we switched everything to that standardized pattern way long ago — and then we started adding more operations, we changed some stuff, we formatted things with black. So you can see, for one given file, if you're wondering what happened — things aren't working the way that I thought they were — you can start—
A
It
gives
you
better
visibility
into
what
what
the
bug
is,
because
you
can
look
at
the
log
of
the
code
and
it's
structured
in
the
right
way
right,
because
if
we
had
committed
all
of
this
stuff
under
one
single
giant
commit,
then
we
might
not
understand
why
these
changes
right.
A
I'm
going
to
see
this
commit
and
it
would
have
instead
said
you
know,
data
flow-based,
pre-processing
source
and
I'd
be
looking
at
it
going
well
what
the
happened.
You
know
wait
a
minute.
That's
this
isn't.
Does
this
have
to
do
with
the
data
for
pre-processing
sources,
because
it's
not
and
in
reality
it's
really
a
separate
thing
right.
So
we
want
to
make
it
a
separate
change
and
then
it
helps
us
debug
when
we
find
problems.
A
So
sorry
this
was
wow.
That
was
a
very
long-winded
explanation,
but
this
is
important
so
well,
what
would
you
guys
know?
I've
got
some
long-winded
explanations,
okay,
so
anyways
I'll
make
a
video
about
that,
and
I
have
a
bunch
of
videos
that
I
need
to
make.
Don't
I,
but
that
one
I'll
do
soon
so,
okay,
yeah
great
job
making
an
issue
too
anyway.
A
So,
basically,
if
you
do
that,
if
you
split
things
up
great
but
beyond
splitting
things,
so
splitting
things
up
is
something
that
I'll
probably
ask
you
guys
to
do
like
later,
because
you
know
it
takes
some
time
to
start
take
some
time
to
start
thinking
about
just
thinking
about
it.
This
way
and
then,
but
for
now,
if
we
can
format
the
titles
the
same
way,
then.
Basically,
when
we
get
to
the
point
where
we're
doing
that,
I
can
just
hit
this
rebase
and
merge
button
and
everything
goes
through
and
applies
cleanly.
A: All right, cool! Well, I'm glad — I'm glad that was interesting, because this is something that you're going to see a lot as you go through your careers, so it's a good thing to get some practice on. Awesome, great job. Let's see.
A: All right, great. And let's see — oh, do we have Sudhanshu? Oh, we did not have Sudhanshu. Okay, no, it looks like — well, if we don't have Sudhanshu, then I will go over this later.
A: Okay — merged moving dataflow I/O to the dataflows tutorial.
A: Okay — let's actually not add this, because since the Vowpal Wabbit models got added within this past change, then — oh, we don't need to say that they also got example usage.
A: Oh no, we lost Himanshu. Let's see — all right, sweet. And you did it so they could be tested. Awesome.
A: Okay, what's going on?
A: Oh yeah — oh, so you basically just — oh, you added — I think I missed that. You added some logic to say: if it's prefixed with an underscore, then it gets a double dash.
A: Okay, nice — this is really exciting. Let's see — so yeah, let's just make them single dash. I think we'll just do single dash everywhere. I mean, single dash.
A: All right, yeah — now I can't find them, but yeah, if you could just make them single dash, then — was it — yeah, there we go.
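The tweak being asked for — always generating single-dash flags from config field names, rather than a double dash for underscore-prefixed ones — boils down to a tiny helper. This is an illustrative sketch; the helper name is made up and is not the project's actual function:

```python
def mkflag(name: str) -> str:
    # Drop any leading underscores, turn the remaining underscores into
    # dashes, and always use a single leading dash.
    return "-" + name.lstrip("_").replace("_", "-")

print(mkflag("_log_dir"))  # -log-dir
print(mkflag("source"))    # -source
```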
A: Yeah, good job trying to preserve the existing behavior, though — that's always the right thing to do on the first go-around. Okay, so — and then let me just make a note.
A: Did we go through all the notifications? Okay, okay — CB pen tool, okay. So just one minor change needed, and then — oh, and I think you needed to rebase, because we just merged something that changed the changelog. Or you need to merge master, or whatever.
A: Just one minor change needed, then we'll merge. All right: plus master and update changelog.
A: Okay — is there anything else on your end, Saksham?
E: Yeah — we talked about the MNIST normalization, adding a config-loadable to the dataflow, and then doing the config loading stuff in base.py.
A: Oh yeah, that's right. So you're going to — yeah. Basically, if you see a type that says it can be loaded with a config loader — because we're going to say—
A: Nice — yeah, the importing issues can get tricky. Okay, well, I think the solution at this point is probably to split that stuff into util — probably into util/config, where we put fields, you know, like mkarg and so on. This stuff — there's really no reason for it to be in here. This could definitely go in util. Well, let's see — convert_value — yeah, I feel like this is stuff that we should put into util/config.
A: So, let's see — and then, because I don't think any of this stuff is subclass-based configurable — okay, well, that's a bit of an issue, but it doesn't really matter, because we could always just check hasattr on the entry point name. Yeah, I think most of this stuff we should just split into—
A: We should take this out of base, because if it's not dependent on anything in here, it shouldn't be in here, right? Because that helps us avoid import problems. And I think in this case, what you're going to be looking at is convert_value.
A: Yeah — and so if we split that into dffml/util/config — I don't know where this should go in there; maybe it's just config.py for now, unless we have a better name for it, like field or something. It's probably just config — all of this is just config, yeah. Anyway, it could go in that directory, and that way we can import base. Or — let's see — well, then it's dependent on config loaders. Well, no, no — see, that still doesn't work, because base imports—
A: Well, yeah — we need to split it out in some way so that it doesn't become a circular dependency, and I think the way to do that is what we were talking about: we get rid of the classmethods args and config, and with_config, because that's what's creating the circular dependency right now.
A: The question becomes: how do you get a new instance easily? So, yeah — base configurable.
A: Okay, yeah, I don't know — I think we might want to take this one offline, so we don't take up too much time with this. But for now, as a stopgap — what—
E: Last time I gave an idea about doing the config loading stuff in DataFlowSource itself. If—
A: Yeah, yeah — exactly, I would say yeah. If you get a dataflow that's a string, then you use the config loader. Is that what you're saying?
A: Just do that — let's just do that. So: circular dependency issue with the idea of trying to have convert_value—
A: Okay, so: workaround — we'll have to do this; we want to do this eventually, because we're just going to keep running into this type of thing — slash hack of just importing config loaders in DataFlowSource. All right, there we go — sweet.
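The stopgap agreed on here — when the dataflow handed to the source is a string, run it through a config loader instead of requiring an in-memory object — can be sketched like this. Plain JSON stands in for the config loader, and the function name is made up; this is not DFFML's actual API:

```python
import json
import pathlib


def resolve_dataflow(dataflow):
    """Return a dataflow dict, loading it from disk when given a path."""
    if isinstance(dataflow, (str, pathlib.Path)):
        # A string/path: defer to a config loader (plain JSON here).
        return json.loads(pathlib.Path(dataflow).read_text())
    # Already an in-memory dataflow; pass it straight through.
    return dataflow


in_memory = {"operations": {}}
print(resolve_dataflow(in_memory) is in_memory)  # True
```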
A: All right! That solves that, then, for now.
A: Nice, yeah — good, good workaround. We'll have to find a way to split this out eventually, because, obviously, loading files into data classes is a very—
A: So, Hashim — the issue that we're talking about, this guy — right, we don't see that in the CI, and we didn't see it one time. Well, let's see — was it just not as far through on that CI log?
C: Yeah, I think so. Okay — like you said, the installation is corrupted.
A: Okay, okay. So then, what I was going to do about this — that was my guess. Basically, what I was thinking is that we should have another container image, like the one we have on Docker Hub. We have the one container image, but it's got basically just the built — I was just here, come on.
A: So we have this image, but let's see — what's the Dockerfile? Yeah, the Dockerfile is just, like, the master branch of just the base repo.
A: So I was thinking we could probably add another tag, like "dev", and then we basically just throw everything that we need for development into that container, and then we could also use it for the local CI runs. That way, if it builds on Docker Hub — because I think a lot of the time what happens is that conda downloads a bunch of things in various different ways, and maybe some of them time out, and it doesn't error-check on them if there are connection issues, and then it doesn't verify the files after it downloads them all.
A
So
I
think
I
think
that's
why
we've
sometimes
ended
up
with
like
corrupted
installations
and
I'm
hoping
that
if
we
just
have
it
built
in
the
docker
hub
ci,
then
you
know
docker
hubs
wherever
their
servers
are
is,
is
a
good
enough
connection
that
it'll
it'll
always
give
us
the
right
image,
but
I'm
not
I'm
not
sure
of
that.
A
But
I'm
just
thinking
you
know,
that's
that's
if
we
can
get
an
image
that
works
and
we
put
it
in
docker
hub
then,
if
we
just
keep
pulling
down
that
image,
then
we
know
that
it's
not
going
to
be
a
problem
with
like
that,
writing
to
the
local
cache
right,
because
that
local
cache
is
kind
of
dubious
anyways
mounting
it
as
a
volume,
and
everything
means
that
it's
going
to
change,
and
so
what's
the
point
of
a
container,
if
the
stuff
in
there
is
actually
not
we're,
not
sure
that
that's
what
we
want.
A
So
we
should
probably
do
that
or
we
should
find
some
way
to
verify
the
conda
installation,
which
would
be
actually
a
better
solution,
but
I'm
not
sure
if
that
exists.
So
I
guess
we
could
just
look
for
that.
First.
So.
A: Are you — are you able to run the model daal4py stuff locally?
C: Yeah, it's actually the same error that I posted — the one without the cache. Now that we have the Vowpal Wabbit issue fixed, with the library being downloaded, it still shows the same error.
A: Oh, this is interesting. So what I'm looking at now is — basically, right, it complains when it says daal4py is listed as a dependency, but daal4py is only available on Anaconda, or through conda. And then — so what I'm thinking is—
A: See, what we really need is some way for, when you do pip install, it actually does conda install. Or maybe — I guess we could just run conda install. Does that work? Let's find out.
A: Oh yeah — this is something that we need to do, all right. So this is sort of a notice for everyone: I ran into this the other day, and it's important that we fix this.
A: We talked about this, where we would log an error message, and I realized we should really just raise an exception. Because, for example, with the TensorFlow models and stuff: if you try to grab records from sources using the with_features method and it gives you no records, you're just going to immediately fail anyway, because you're not going to have any records to train on, or to do accuracy assessment on, or to do prediction on. And so I realized—
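The change itself is small: instead of logging when a source yields nothing, fail loudly before training, accuracy, or prediction can fail cryptically later. A generic sketch with plain async iterators (these are stand-ins, not DFFML's actual classes):

```python
import asyncio


async def collect_records(records):
    """Drain an async iterator of records, raising if it was empty."""
    collected = [record async for record in records]
    if not collected:
        # Raising here gives a clear error instead of a cryptic failure
        # deep inside train/accuracy/predict with zero records.
        raise ValueError("no records were retrieved from the source")
    return collected


async def one_record():
    yield {"sepal_length": 5.1}


print(asyncio.run(collect_records(one_record())))
```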
A: We should really just be raising in here. And so I actually hooked into the unit test stuff to get it to do the nice little diff. Let's see — well, yeah: basically, if you do that, it'll give you a diff. What I realized, though, was that the current way that models work is—
A: —that predict just takes a bunch of records, but there's no guarantee that those records actually have the features. So if you just provide it with some kind of — like, there's no guarantee that those records have the right features — you might get cryptic error messages, for example from the scikit models. It tries to drop the predict — what does it do? It builds—
A: It builds giant pandas data structures, I think, and then it drops the prediction one from the X and only keeps the prediction one from the y, or something. What happens here? Here it is — let's just look at it; it's right here.
A: Yeah, so it grabs all the feature data, and then — okay. Oh, that's what happened: record.features returns an empty dict unless the record has all of those features. Which is why we use sources.with_features — because it'll always give us only records that have all of those features. Well, since we're not using sources.with_features, there's no validation here.
A: So if you happen to pass in a source where the records don't have those features, you're just going to end up with a giant cryptic error message. So we really should be passing sources to this. Basically, we just need to go through and change every single model so that it accepts sources for predict. The predict is going to look pretty much like every other one — like train and accuracy, mostly. The signature on that function is going to look pretty similar.
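The guarantee with_features gives can be shown generically: filter the stream down to records that carry every feature the model needs, so predict never sees a partial record. Plain dicts stand in for records here; this is an illustration, not the project's real classes:

```python
def with_features(records, features):
    # Yield only the records that have a value for every required
    # feature; anything missing a feature is skipped up front.
    for record in records:
        if all(name in record for name in features):
            yield record


records = [
    {"sepal_length": 5.1, "sepal_width": 3.5},
    {"sepal_length": 4.9},  # missing sepal_width, so filtered out
]
usable = list(with_features(records, ["sepal_length", "sepal_width"]))
print(len(usable))  # 1
```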
A: I'm not sure if we're going to make our release deadline next week, but that's okay — it's better to have things that are really working than to have things that are maybe not fully working. I mean, we can always just throw in another release, but I think things are kind of—
A: So, all right — Hashim, did you figure that out?
C: Okay — it was working before, but I just wanted to be sure.
A: All right. So if someone wants to do that, that'd be great — just comment on there and pick it up. It shouldn't be too much of a heavy lift; I mean, we don't have a million models. It's really just: change the function signature, and then change the loop to do sources.with_features. And then, if anywhere is actually passing records into the predict method — which I don't think we have any places doing — we'll change that.
A: But everything should be pretty much using the high-level APIs, and if they aren't, then we can simplify test cases by doing that. So let's just check in, like, the model TensorFlow tests, just to see. Okay, so — oh, nope, this one's already using the high-level API. So, the next model's tests—
A: Oh, but these ones are all derived from the same test case, so that's an easy fix, yeah. Okay, so that's an easy fix, because basically, for the ones that aren't, you just delete that, and then that's the fix. So this should not be such a heavy lift, but it does need to get done. So let's see — and then this one is a pretty heavy lift, but I think Sudarsana is going to do this. All right, so yeah — I guess, Hashim, on that one, if the—
A: Otherwise, you can always sort of push things up and see what happens. These guys — actually, I don't think I got a chance to tell you, but the people on that project reached out to me and said: hey, what are you doing? What's up? Let's see if we can collaborate on things. And I emailed them back, but I sent them a very long email, so they may not have read the whole thing yet. So yeah, we'll see what happens there.
A: I basically told them: you're working on this plug-in; we've got a bunch of plug-ins, and we've got this command-line interface and stuff. So basically they can expose the work they're doing through all of the ways that we expose models, and you're going to—
A: It would be good if you try to just pass model daal4py locally, and if we can get your debugging cycle working locally and you can get this guy going, then we can probably figure out the rest — we'll cross the rest of those bridges when we get there, if we're still seeing issues after this one works with the main models. Because I don't think you're changing anything in the main package, right?
A
So
if
we're
seeing
issues
there,
it's
probably
just
wacky
stuff
that
we'll
figure
out
so.
D: Okay, because, like, it's using the—
A: So if we subclass from — when we create a file secrets, it will — well, okay, let's see. It's going to get mad because we don't have that stuff in the config, isn't it? Let's see what happens.
A: In-memory source context — well, yeah, that's something you could do too here, right? So you could say — this is that other thing I was saying: you could just go — and usually we return None — so you could do this kind of thing, where you just do record = await load(self.source, ...).
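The record = await load(self.source, ...) pattern being pointed at is a load-or-generate shape: ask the backing source first, and only generate (and store) when the load comes back None. A hedged sketch with a dict standing in for the source:

```python
import asyncio


class GeneratingLookup:
    """Return a stored value if present, otherwise generate and cache it."""

    def __init__(self):
        self.source = {}

    async def load(self, key):
        # Stand-in for: record = await load(self.source, key)
        record = self.source.get(key)
        if record is None:
            record = f"generated-{key}"
            self.source[key] = record
        return record


lookup = GeneratingLookup()
first = asyncio.run(lookup.load("token"))
second = asyncio.run(lookup.load("token"))
print(first == second)  # True
```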
A: Right, and then — let's see — what would you do if—
A: Is this what you were saying — so you make a source, basically? Yeah, exactly, yeah. So that's definitely something you could do here too. I thought about suggesting that as well, where you just say, you know, load this name, and then you have a feature, and you say record.feature(...) — right, and then this—
A: —not something anyone ever should use, because they should be using something that's actually secure, but it is something that we sort of need for testing, and for ease of use. So I think that — right, so yeah. So it's like this for the purpose of the—
A: It doesn't really matter how we implement it, because it's not going to get used very much. Yes — and I think you were right: this is probably the correct — probably the path of least resistance. So: data = features, and then data.value, and this should give us what we're looking for. Of course, you've got to go and implement source, and that's probably just self.parent.
A: And then you're probably done. So yeah, that's probably the whole thing if you go that route — it's probably a good way to do it. I'll just leave this like this, and then you can do the rest here. Does that sound good? All right, cool. Yeah — and then you probably just want — well, it's probably not self.parent.config; you probably just create a source in the __init__ method here, and then you do it, right?
A: It's not going to be something that gets used much, and I am pretty sure that the reading and writing of a file, in a thing that's not going to get used, is not going to be a performance impact that we need to care about. So, all right, great — good, good, good. Thank you, man! So let's see — oh: modify — well, "use high-level load and save APIs", yeah. You guys have got to get my head out of the way sometimes. So let's see: load, high level, for file secret.
A: And then, where you had it be file_secret or whatever it was, let's just make it file — like how, if you look at dffml/source/file.py, this will be dffml/secret/file.py — because I think the prefix is probably just redundant. All right, great. Is there anything else?
D
That
you're
after
this
yeah
I
have
changed
and
like
can
I
just
now
make
like
a
function
and
just
go
into
that
function?
Does
that
work
now?
I.
D
Like
is
the
definition
created
automatically
now
like,
if
I
just
have
a
normal
python
function
in
a
file.
A
Does
it
work,
oh
yeah,
so
that
would
be
okay
so,
and
this
is
something
that
we
should
add
to
the
data
flows
tutorial
stuff
that
we
just
now
have,
but
I
think
this
silently
went
in
a
while
ago,
oh
yeah.
So
if
you
look
in
this
file
now-
and
I
believe
you
added
this
file
a
while
ago-
but
so
this
is
what
this
file
looks
like
now,
and
it's
checking
that
it
does
dataflow
create
and
then
it
does
and
then
it
runs
the
data
flow
and
so
I've.
A
I
chopped
it
up
a
little
bit
and
I
added
basically,
I
took
the
one
that
you
had.
I
split
out
the
thing
that
grabs
that
creates
the
data
flow
and
then
I
made
another
test,
so
we
have
one
that
just
is
a
regular
operation
and
then
one
that
is
an
asynchronous
generator
operation,
and
so
yes,
it
does
work
and
here's
where
the
test
cases
are.
So
if
you
need
a
reference
cool,
what
are
you?
What
were
you
gonna
do
with
that?
Are
you
just
you're
gonna?
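Creating an operation's definitions automatically from a plain Python function, as discussed here, essentially reads the function's signature. A generic inspect-based sketch (illustrative only; DFFML's real implementation differs):

```python
import inspect


def definitions_for(func):
    """Derive input/output definitions from a function's type hints."""
    sig = inspect.signature(func)
    inputs = {
        name: param.annotation.__name__
        for name, param in sig.parameters.items()
    }
    return {
        "name": func.__name__,
        "inputs": inputs,
        "output": sig.return_annotation.__name__,
    }


def shorten_url(url: str) -> str:
    return url[:16]


print(definitions_for(shorten_url))
```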
A: A function — yeah, awesome, awesome, that'll be great. And then I guess you won't even need to define the dataflow in that file, because you'll change the create command to modify the input — oh, this would be great, yeah. Now we can just throw Python functions in files.
A: Yeah — you just throw Python functions in files, and all of a sudden they're running in HTTP servers and getting auto-redeployed. This is great. All right, sweet. Okay — so, anything else on your front, Aghin?
A: Cool, yeah — well, you had that presentation, right? So I'm going to tackle FFmpeg next — so operations, or functions, are getting auto-op'd.
A: This is another example: I took all of these and I split them up. That was the one — when I did this was when I was thinking, man, I should really record how I'm doing this splitting of one commit into a bunch of commits, because all of these, I think, were the same commit that had to get split.
A: Good, yeah — it's really nice later, when you're going through and you realize each commit is just a couple of lines, and it makes a lot of sense when you're asking: what did I do, and where's my bug now? All right, sweet — are we good?
A: Nice — oh, this is going to be great. Okay, I want to get to Sudhanshu, because I think we've gotten to everybody else so far. So how's it going, Sudhanshu?
A: It's been good, yeah. We've had some beautiful weather here especially, so I lugged my monitor outside, and I've been sitting out in the backyard with my monitor, enjoying the 80-degree weather. So it's been nice. How about you — what have you been up to?
B: Yeah, so actually I was working on the cleanup stuff.
B
In
that
one
of
the
like
in
the
http
deployment
thing
like
one
of
the
commands,
was
failing.
B: So below — I have posted it, like, below, yeah. Let's see — so, one — yeah, that one. So that one was failing.
A: Okay, right — and that's obviously — okay. Oh, and you know what, I think the other thing is that — well, great job on this. This was what we thought was not going to be — right, I labeled the whole issue as medium, and it quickly blew up into a giant change set. So: great.
C
A
Working on this, yeah, this was not for the faint of heart. So, okay, I think that actually this is a great time to use the stuff that Algon added recently with the HTTP channel config.
A
So, let's see. I think... oh, and then we do need to still re-add these SVGs. So, let's see, and you might have to add them with git add -f. That might be why they're not going in, because I've noticed that I've accidentally deleted things sometimes doing that.
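The reason `git add -f` is needed: a path matched by `.gitignore` is refused by a plain `git add`. A throwaway demo of the behavior (the `diagram.svg` name is illustrative):

```shell
# Throwaway repo: a path matched by .gitignore is skipped by plain "git add"
cd "$(mktemp -d)"
git init -q .
echo '*.svg' > .gitignore
echo '<svg/>' > diagram.svg
git add diagram.svg 2>/dev/null || true   # refused: path is ignored
git ls-files                              # nothing staged yet
git add -f diagram.svg                    # --force stages it anyway
git ls-files                              # now lists diagram.svg
```

`git check-ignore -v <path>` will also print which ignore rule matched, which helps explain why a file "isn't going" into a commit.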
A
A
A
This should be the patch that fixes this. So basically what this does... oh, and I found this was a dumb bug. So what we should have here is that when we auto-create the name on these operations, it should check if it was already registered as a given entry point. So I'm wondering... let me pull this down, and let's see what happens here.
A
A
A
A
A
This one came out correctly. Let's see... the definitions are not coming out correctly. Let's see... yeah, and these are all the definitions. Okay, so my guess is maybe... well, no, that shouldn't make any difference, because you should have had the pie play. Let's run through the tutorial real quick.
B
A
A
All right, okay. So then the next thing we should check here is... okay. So this is also making me think that, because otherwise we end up with these long, long names, and we want the ability for people to shorten their names if they're going to go through the trouble of registering them under their entry point, right?
A
Otherwise, we give them this nice, quote unquote predefined, long but very descriptive name, so that they don't have collisions, right? But if they're registering them as entry points, they're basically saying, okay, I'm choosing this short name for something, and if it has a collision, so be it, I chose a shorter name. So I'm thinking maybe what we need to do here is...
A
Oops, let's see.
A
Because, yeah, I mean, these diagrams are huge, right? So we could probably shorten this down to the point where...
A
We just say... we could do the same thing for the inputs, right? And we could say, you know, it could just be safety check... like, trying to hold up my hand to the screen.
A
That's not gonna help anybody, but basically we could cut off this part, right, and say: okay, you registered... only because they registered the operation name as safety check, right. And so, instead of doing this full path, we just say, what's the operation name. So, df base create definition... so we want to grab... or, let's see.
A
Oh yeah, this is where... so, if it's already been registered as another name, we can just grab this, and we can say def create definition, okay. And then... the thing, though, is when we're doing the create definition, we're doing name list, and the name list is getting populated from...
A
So if we just do name... or wait, let's see, opt name, right. So maybe we can do this as: if "name" is not in kwargs, then we should just set kwargs["name"], right. And then now we're actually respecting that it's a function, module... because now it'll auto-generate the name, right. It's auto-generated the long, descriptive function name for us already.
A
So therefore we can probably... we're probably safe if we just do this.
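The fix being talked through reduces to a guard like the following. This is a sketch with an illustrative helper name (`operation_kwargs`), not the project's exact code: only auto-generate the long name when the caller did not pass one.

```python
# Sketch of the discussed fix: respect a caller-supplied name, and only
# auto-generate the long descriptive one when "name" is absent.
def operation_kwargs(func, **kwargs):
    if "name" not in kwargs:
        kwargs["name"] = f"{func.__module__}.{func.__qualname__}"
    return kwargs
```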
A
inspect.getmodule... what happens if that's __main__? But we'll find out, right? So let's see what happens now. Okay, yeah, now I chose safetycheck.inputs.package. So that's probably more manageable there. Yeah, that's probably more manageable, cool. So this is something we're obviously going to have to test, this change, but well, let's just test it and see.
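The `__main__` concern above is real: `inspect.getmodule` reports the module a function was defined in, and for a function defined at the top level of a script that module is `__main__`, which would make the auto-generated name start with `__main__.` rather than a package path. A small check (the `safety_check` function is illustrative):

```python
# inspect.getmodule tells you which module a function was defined in.
import inspect

def safety_check(package):
    return package

module = inspect.getmodule(safety_check)
# For a top-level function in a script, module.__name__ is "__main__"
print(module.__name__)
```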
A
A
A
B
A
All right, okay. It looks like this kind of throws things off. Okay, well... okay, and that is why, because it's doing this. So... yeah, okay, so we have to take that...
A
A
A
A
That one's interesting, and this is just in the test that we were just talking about. So, test_df... test_df.
A
A
A
A
So, let's push this change. Is this going to break any documentation that we have? Did we change any documentation, Ogin, when we did this?
A
A
Okay, this will solve our long, long, long, long name problem. Okay.
F
A
...definitions follow entry point style for definition names. And that's probably something we needed to do anyway, because we had operations referenced by entry points, and definitions were referenced sort of by... the way that you reference them within the docs.
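Entry point style references use the `"package.module:name"` shape that setuptools entry points use (e.g. for `console_scripts`). A minimal split helper, with a hypothetical definition name in the usage line:

```python
# Split an entry-point-style reference "package.module:name" into its
# module path and the object name within that module.
def parse_entry_point_ref(ref):
    module, _, name = ref.partition(":")
    return module, name

# Hypothetical definition reference, for illustration only:
parse_entry_point_ref("mypkg.definitions:SafetyCheckInput")
```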
A
So this is probably good that we do this anyway. Okay, and then, well, let's just look at that.
A
Commit: entry point style loading operations. Okay.
A
Okay, yeah, so we support it, but we didn't update the documentation yet. Okay, great, that's perfect. Yes, you are right. Sorry, I just wanted to check. And then, see, this is also where...
A
A
A
Okay, and then... see, this is the other thing about the commit structuring that we were talking about, Sutonto. I think you might have missed this, because this is like the very first thing that we talked about. We talked about sort of how we can split apart large changes, like a lot of large code base communities do.
A
...what's happened to that file, like, what are all the relevant things that have happened since then. So he introduced it, and it looks like... we made the operation names auto-generated, and then I came in and I auto-applied the decorator, and then I came in and I made this...
A
I simplified this function so that we could simplify this method, or that for loop there, and then we did the names situation. So, all right, I think we're good to push this. So I'm going to push this up, and hopefully it won't break any other tests. But we'll know if it was me that broke all of your stuff. And if not, then we'll know, and we can just regenerate those graphs.
A
A
So, let's see. And actually... actually, I have that thing right here. So let me just regenerate that while we're at it, and I'll just post it up in there, and that way you don't have to do this one. Save a few seconds while we're here, you know.
A
A
A
Since we're here: download SVG, and then I'll just throw those SVGs in an issue comment.
A
A
A
This is a trick... this is a fun trick that you can use. You've probably already discovered this, but GitHub doesn't actually validate file types.
A
A
B
Maybe I can generate the images and upload them.
A
B
A
The other thing, since we're here... so the other thing is that with the... okay, so the HTTP channel config.
A
The examples... should I deploy this... okay. So again, what was the... I guess we can read it from ffmpeg.
A
A
A
Okay, so we can do this input mode, and then we can just say, like, what is the definition. So what is this? Right, again, it's like JSON, and then whatever we're taking as the...
D
A
Can't remember. That's okay, let's see, let's just do this. I'm pretty sure it's just the definition name. Oh, I guess this is why we were doing it this way: it was because we could do two at a time. But I don't think anyone really...
A
A
It's still not beautiful, but... probably, short of doing a different sort of thing to the input modes, like making it take a list or something... then we're just blowing up what we're doing with input modes, and at that point, yeah, I don't know if we really want to be doing that. So let's just do it like this, and just, you know, lop off this first part, and we should be good there.
A
All right, sweet. I think we're almost done here, then. Yes, that should be the last of it, right?
A
All right, this is great. Okay, so, provided I didn't break anything with that pull request... or that push. So hopefully that's fine.
B
If something breaks, I will try to fix it. Okay, cool, sure, thanks.
B
A
A
A
A
E
It's working. I can see it's normalizing to between zero and one now.
A
Yeah, yeah, watch out. Well, I mean, the thing is... it's not recommended to train on your laptop if... what?
A
Oh, if it's not high-end. Oh, yeah, yeah, we'll let people take that risk. Yeah, no, I've gotten... this is something you guys will probably experience eventually, but I don't know if you guys have ever run a compilation with make -j. -j says use all the cores you can, and periodically I run this... you use all the cores you can for compilation, and Linux just hangs, and there's no way out of it. It's a mess.
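For what it's worth, the hang-prone form is `make -j` with no number, which spawns unlimited parallel jobs; bounding it to the core count is the usual mitigation. A demo with a trivial Makefile:

```shell
# "make -j" alone spawns unlimited jobs; bound it to the core count.
cd "$(mktemp -d)"
printf 'all:\n\t@echo done\n' > Makefile
make -s -j"$(nproc)"              # one job per CPU core
# make also accepts -l to additionally cap the system load average:
#   make -j"$(nproc)" -l"$(nproc)"
```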
A
There's some kind of thread deadlock problems in Linux that still are not figured out. Well, let's merge this guy, and then I think we'll call it a day for this meeting. Sorry we ran over as usual, but I think we covered everything. So that's good.