From YouTube: Weekly Sync 2021-06-29
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.9x182y16bnvh
A
So let's talk — so, Sudhanshu, I wanted to say it's done; I went and checked out some of the stuff we talked about on the —
B
A
Yeah, so — yes, so we have one — it looks like there's one bug left there. Yes, I do. I am a little — let's see, there is — you know, I think there's one issue right now. Actually, okay, crap, I just realized this.
A
The Flower example — and then also, where did that — where did — okay, now I can't do two computers, all right. Okay, I'm looking at it up here. Okay.
A
So, where's what I wanted —
A
Test docstring — okay, crap, okay, and this is the one. So basically, I think this is a problem, because we may not be testing the docstrings right now. I just realized — because there was this that got messed up, okay, and then this, for some reason, isn't what the source is expecting — okay, yeah. Okay, so I think, if you want to take a look at that — so you'll take a look at that issue. So, phase eight merged — okay, so again, great job with that. So, yeah.
A
Okay, so let's — we need to fix Flower 17.
A
So — accuracy staging, making sure the doc tests pass.
So we need to figure these out. And this is — okay, so this is the main problem: the doc tests are not active on the master branch, and so they may be failing within them, because — yeah, this line got — I refactored this line incorrectly.
A
Somehow. So, let's see — now we tried to put it back, and then it didn't work when it got put back. So I'm not sure why, especially because this URL with this archive should not have changed at all — so, curious. But I'll have to take a look at this and then I'll let you know. But hopefully you're still in progress on Flower 17, so I'll try to do that after this meeting.
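The doctest setup being discussed isn't shown in the recording. As a general, hedged illustration of how docstring examples can be made to run as part of a test suite (so a bad refactor like the one above gets caught), the standard-library doctest module can be used; the module and function names below are purely illustrative, not the project's actual test harness.

```python
# Hypothetical example: not the project's actual test setup.
# Running docstring examples with the standard-library doctest module.
import doctest

def add(a: int, b: int) -> int:
    """Add two numbers.

    >>> add(2, 3)
    5
    """
    return a + b

if __name__ == "__main__":
    # Reports a non-zero failure count if any docstring example breaks,
    # which is how a CI job would catch an incorrectly refactored line.
    results = doctest.testmod(verbose=True)
    raise SystemExit(1 if results.failed else 0)
```

The same effect can be had in CI by collecting docstring examples through the test runner (for example, pytest's `--doctest-modules` flag), so the examples fail loudly on the default branch instead of silently drifting.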
B
A
All right, so did I capture everything there? Yep, great. All right — so, Sahil, what do you want to talk about today?
C
My operations for archives and compression are ready — like, they are almost ready for merge. I just need to finally review that, okay, and the lint commit issue — I tried a lot but haven't been able to fix that; it fails with the child watcher error. Other than that, I was done with it. There's one more thing to implement, and it would be ready for merge. Okay, cool.
C
So, actually, you said to change that get-extension call to use rglob instead of getting this. But actually what happens in that case is we track a lot fewer files than what rglob would give us in paths, so we would end up slowing down the process — okay, by a considerable amount of time. Okay, it would be very slow.
A
Permutations — yeah. So can we just go through and cut off the — like, can we just — can we change the way the permutation — like, so your validation — the way that your validation works right now is, you know, the way that the functions work to validate is tied to these extensions, right, because you're taking every part of the path and adding every possible extension to it?
A
So could you just change the validation to, you know, look at the — you know, you're splitting it on colons, and then you verify that that path exists?
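A minimal sketch of the validation change being suggested — split the colon-separated value and check that each piece exists on disk, rather than enumerating every possible extension. The function and field names here are illustrative assumptions, not the project's actual ones.

```python
# Illustrative sketch only: validate a colon-separated list of paths by
# checking each path exists, instead of trying every possible extension.
import pathlib

def validate_paths(value: str) -> str:
    for part in value.split(":"):
        if not pathlib.Path(part).exists():
            raise ValueError(f"{part!r} does not exist")
    return value
```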
A
Well, it looks like — it looks like, right now — here, let me just pull it up.
A
E
I made the changes to the ensemble notebook and I think it's good now. Okay, great.
A
Okay, great. Anything else?
E
A
So — the accuracy staging Flower 17 fix — so you're going to work on that. You just saw that, right? So, no comments on that.
A
Okay, so: scikit clustering model questions — okay, okay, let's see. So let's start with Hashem's stuff today, so.
E
I think you should solve it.
D
A
E
So I ended up ensembling a classifier and a regressor, okay — you know, just as an experiment, and I think that works with our tutorial as well. And the real issue was that I was using a regressor as the meta model, and when I used a classifier the accuracy went up, because the dataset we're using can be used for both regressors and classifiers, but classifiers work better on it.
A
Following the steps to ensemble by stacking: train first-level base models on the train data; use the first-level base models to get predictions on the validation data and test data — okay, the dogs are going to freak out here — we simply use the high-level predict function to get the predictions and store those predictions in lists; stack all the validation predictions together and then stack the test predictions together; after this we'll have two arrays, consisting of the stacked validation predictions and the stacked test predictions; build and train — okay. So, let's see. So this is —
A
Let's see — for each model, so you predict, you say validation prediction one. So this is your — when you say "stack all" — or, sorry, "simply use the high-level predict functions to get predictions and store these predictions in lists", "this will have two arrays containing the stacked validation predictions and the stacked test predictions" —
A
So, stacked — I'm trying to make sure — I'm just trying to verify that our bullet points map logically. So, "use the first base-level models to get predictions" — okay. So maybe we should turn these into a little numbered list here, too.
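As a rough illustration of the stacking steps being read out above — train base models, predict on validation and test data, stack those predictions, then train a meta model on the stacked validation predictions. The dataset and model choices below are placeholders, not necessarily what the ensemble notebook uses.

```python
# Rough sketch of the stacking steps read out above; models and data here
# are placeholders, not necessarily what the ensemble notebook uses.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_valid, X_test, y_valid, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# 1. Train first-level base models on the train data.
base_models = [DecisionTreeClassifier(), KNeighborsClassifier()]
for model in base_models:
    model.fit(X_train, y_train)

# 2. Use the base models to get predictions on validation and test data.
valid_preds = [model.predict(X_valid) for model in base_models]
test_preds = [model.predict(X_test) for model in base_models]

# 3. Stack the validation predictions together, and the test predictions together.
stacked_valid = np.column_stack(valid_preds)
stacked_test = np.column_stack(test_preds)

# 4. Build and train the meta model (a classifier, per the discussion above).
meta_model = LogisticRegression(max_iter=1000)
meta_model.fit(stacked_valid, y_valid)
print("meta model accuracy:", meta_model.score(stacked_test, y_test))
```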
B
A
So — and this is the — okay, so six times two models, just one meta model, right. Since the dataset can be used for both classification and regression, we'll use both to make an ensemble model and show how it performs.
A
D
E
I think you can remove the os import, because I was only using it for that reason, I think.
A
Alright, so now this is what we're looking at here: remove os, make that numbered list, and then — okay, so this guy was missing an end paren, okay. The missing end paren is concerning, because that means that the test cases may not be working as intended.
A
All right, so let's double check — or else maybe the test cases are coming in a different PR here. So that's probably what's going on, right? So, yeah, okay, so we probably need to — yeah, we probably should have done these test things as their own PR. So, the evaluating model performance one — what is this one? What's the deal with this one?
A
All right, okay — so, yeah, I'm getting the same error about the lack of — or, let's see, wait a minute. Okay. So, all right — so yeah, we were missing a paren there.
A
Let's take the — so, let's see — so, let's see. Okay, where is this ensemble notebook — here; looks good, need to make sure it's being tested before we merge. So we'll merge the — which one was this — the evaluating model performance PR, with tests.
A
First. Okay, any other comments on this one while we're at it, from anybody?
A
All right, great — so we'll push that up, and then, okay, so what's next here? Great — so, the saving and loading models PR. So, how are we doing?
A
E
Yeah, I just, you know, created a model and trained it, and then restarted the kernel to, you know, show that if you load it again, the model was trained before and you can use it.
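The notebook itself isn't shown here; as a generic illustration of the save/restart/load pattern being described, here is a hedged sketch using joblib and scikit-learn. The project's own model save/load API may well differ — this only shows the shape of the demonstration (train, persist, restart, reload, predict without retraining).

```python
# Generic illustration of the save/restart/load pattern described above;
# the project's own model API may differ, this just uses joblib + scikit-learn.
import joblib
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression

X, y = load_wine(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model.joblib")

# ... restart the kernel / start a new process ...

restored = joblib.load("model.joblib")
print(restored.predict(X[:5]))  # usable without retraining
```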
A
Great, okay, cool. Let's see — so the one thing is, okay, so we're using this wine quality dataset again.
A
Saving and loading — it's pretty straightforward, so, great; nice job figuring out how to get that working within the IPython notebook.
A
Okay, and then how are we proceeding on the mutable config? Okay, so — anything else on your end here? I — let's see, I saw one more PR.
D
E
Okay — so the other testing PR passed all the tests, but it fails on macOS due to some dependency of XGBoost, I think — I don't know.
A
E
A
D
A
1141 — let's go ahead and throw it in there, actually.
A
All right, so let's see what happens.
A
All right, so how are we proceeding with mutable config? So I think Saksham said he was gonna take a look. It looks like — okay. So you haven't heard from Saksham on this one? No? Okay, so let's ping him, because he said he was gonna take a look at that. So I think we're waiting on him, so.
A
Okay, cool, good — so Saksham was going to take a look at the patch and Hashem's comment, and then communicate to Hashem and Sahil from there. Okay, so yeah, we'll ping Saksham on this.
A
E
A
Okay, and — got him, all right, great; so, followed up on that. All right, so — okay, so, mutable config, all right. Is that everything on your side, Hashem?
A
All right, great — thank you. All right, so — Sahil, let's go over operations for archives and compression. Okay, so now, the lint commit PR, all right. So I think that — if you can't — we really shouldn't need to touch — okay, so I think that —
C
A
Yeah, you could — let's see. So this was caused by OS X, something with the child watcher; just unittest-skip it for now on OS X.
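A minimal sketch of the suggested skip — marking the failing test as skipped on macOS for now; the test class and method names here are illustrative, not the project's actual ones.

```python
# Sketch of skipping the failing test on macOS for now, as suggested above;
# the test names are illustrative placeholders.
import platform
import unittest

class TestArchiveOperations(unittest.TestCase):
    @unittest.skipIf(
        platform.system() == "Darwin",
        "Fails on macOS due to the asyncio child watcher issue",
    )
    def test_subprocess_operation(self):
        ...
```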
C
A
Yeah, yeah, yep — all right, great. So then, operations for archives and —
A
So I'm wondering if we should just be using pack and unpack archive here.
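"Pack and unpack archive" here most likely refers to the standard-library shutil helpers; a minimal sketch of those calls, with illustrative paths:

```python
# Sketch of the standard-library helpers likely being referred to above;
# paths are illustrative.
import shutil

# Create example.tar.gz from the contents of some_directory/.
shutil.make_archive("example", "gztar", root_dir="some_directory")

# Extract it into an output directory (format inferred from the extension).
shutil.unpack_archive("example.tar.gz", extract_dir="output_directory")
```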
C
A
Exactly. So, let's see — so the main thing here that I'm thinking about is, sort of, file names versus streams, and the same with the compression — where, I don't think I saw the compression yet, okay, so —
A
All of this is very file based, which is good. It's just, if you're going to do compression as a part of it, you would ideally do it just using a stream, right, and then you would stream the data from the — like, if you're compressing an archive, if you're creating —
A
— an archive, ideally you create the archive and then compress it and then write it to disk. So, let's see.
C
A
— complicate this for now. So, I'm trying to — let's try to decide the trade-off here. So the one thing is — so we have —
A
We have output file — so we have a bunch of file paths involved here: input file path and output file path. Okay, so — and these are also too generic, because, you know, we want to make sure — input file path is just any input file, right, at this point. So this is probably something like, you know, a compressed — this is like a compressed file, right. Let's see, and I'm wondering —
A
You want to think about it like — what pieces. So, if you were to throw — so this is what happens: basically, if you take a bunch of these operations and you put them together into a dataflow, it'll try to automatically connect any of their inputs and their outputs, and it does that by looking at the definition names.
A
So if anything produces an input file path object — or, if anything produces or provides an input of type input file path, of definition input file path, then it's going to be used there and auto-linked there. So we want to tread — so this is where you try to think about, like, the granularity at which you need to define things.
A
So
if
you
were
to
put
a
bunch
of
things
in
a
network
right
and
they
all
took
input
file
path,
then
you're
going
to
end
up
with
a
automatically
connected
network
of
you
know.
Okay,
so
what
provides
you
might
have
a
you
know,
you
might
give
the
network
a
input,
file,
file,
path
right,
and
do
you
really
want
it
to
go
to
every
single
operation
that
takes
input
file
like
that
takes
a
file?
A
A
Compressed
file
and
uncompressed
file
yeah,
so
in
this
case
yeah
in
this
case,
where
you're
having
the
format
as
another
object.
That's
probably
you
know,
that's
probably
that's
that's
the
right
way
to
go.
Yeah,
compress
file
and
decompress
file
right
because
then
you've
you've
clarified
that
this
is
is,
is
compressed
right,
compressed
data.
So
let's
see
decompress
format
because
yeah.
A
Yes, you should likely return the output file path, right. And so what you're going to find, as you change it to compressed file path and decompressed file path, is that your inputs to the decompression will become, you know, compressed file path, and your outputs from the compression will become — decompressed file path, right. And that way, if you threw an input and an output together, they would automatically be linked together into, sort of, you know, a little infinite loop there, right.
A
Well, you just disconnect them in the event that an infinite loop is automatically created. This is just to route things — the routing of things based on definitions is a helper utility, so you aren't always manually defining everything, and the purpose is, you know: if we correctly scope things, then operations that work on the same data, put together, get automatically connected for you, as an ease-of-use mechanism for the end user.
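The auto-linking by definition name described above is the project's dataflow behavior; the snippet below is only a self-contained toy illustration of that idea — matching one operation's output definition to another's input definition — and is not the project's actual API.

```python
# Toy illustration of linking operations by matching definition names;
# this is NOT the project's actual API, just the idea being described above.
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str
    inputs: dict = field(default_factory=dict)   # input name -> definition name
    outputs: dict = field(default_factory=dict)  # output name -> definition name

def auto_link(operations):
    """Connect any output to any input that shares the same definition name."""
    links = []
    for producer in operations:
        for out_name, out_def in producer.outputs.items():
            for consumer in operations:
                for in_name, in_def in consumer.inputs.items():
                    if out_def == in_def:
                        links.append((producer.name, out_name, consumer.name, in_name))
    return links

compress = Operation(
    "gz_compress",
    inputs={"path": "decompressed_file_path"},
    outputs={"path": "compressed_file_path"},
)
decompress = Operation(
    "gz_decompress",
    inputs={"path": "compressed_file_path"},
    outputs={"path": "decompressed_file_path"},
)

# The two operations get wired both ways, which is the "little infinite loop"
# mentioned above when inputs and outputs are scoped too symmetrically.
print(auto_link([compress, decompress]))
```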
A
D
A
This is the automating classification demo dataflow, which is editable in the browser — so you can take the things and you can move them to other places and stuff, so you can reconnect things. And so the reason why we try to define the dataflows with these particularly scoped definitions is — so, say you wanted to add a new operation to this (and obviously I'm not done with that yet, I just did this yesterday) — say you wanted to add a new operation to this, right, and it has something to do with — okay, it takes a branch. So if I dropped it onto — if I added it, right, with some kind of drag-and-drop thing over here, right, and I drop it into the graph, then it would automatically route anything that produces a branch to my new operation, right.
A
So it'll automatically connect it for me, and then I can reconnect it if I feel so inclined, right. And so that's sort of the motivation behind this: the way that we're scoping and naming definitions is to provide an ease-of-use mechanism for people to drop these new operations onto the graph here. Let's see — because this is sort of one of the end goals, right: to be able to provide a visual way to knit these things together.
A
So anyways, that's the motivation here. Let me go back to this — so, resume presenting. So I think, yeah, let's try to change it to compressed file path and decompressed file path, and let's make that an output as well.
A
So now, the one thing is — okay, so if we're making the input — this is the problem with using file paths, okay: the input file path and the output file path — and you want to have the option to specify the output file path.
C
A
We need to try to create — you know, the goal is to create the operations that will lead to as much reuse as possible, right. So my question here, in my head, is: do we do this, like —
A
— as I was saying, with streams, or do we do it with files? And I think that the best approach here might be the stream-based approach, because that also solves our issue of the optional file path names and the fact that those would be linked back in automatically — because if you said, you know, decompressed file path, and you took that as an input and you produced it as an output, then you're going to have that called again for you, because everything with a different permutation gets called right now.
A
Definitely the compression part; the archiving part, let's see — yeah, it would be great if we did this as a stream-based approach as well, but the input — the output path — if you do the output path of this as a stream, let's see — yeah, because you could access —
A
If you want to extract an archive, you have to point it at a directory, though, so we'll have to take an output directory path. So, okay, an input zip file path — okay, so that would be — now, okay, with that, what happens if that's a stream?
A
Okay, let's see here, so —
C
A
Yeah, yeah — let's do that for now here, because we can always have some intermediary operations to convert streams to files or vice versa, or we'll probably just create file objects and accept them as streams. So I would go with — I would say that IO[bytes] is probably the type — I think the type hint, let's see, is there —
A
Remember — oh yeah, BinaryIO. Okay, that's what it was: generic IO[AnyStr].
A
All right, whatever. Okay, so compression: assume you're taking a stream for the input and you stream to the output — so yeah, input stream, compressed input stream, decompressed output stream. And then you can create, like — yeah, you can create the io Python objects and stuff in your tests to create the streams. Okay, great, so yeah, let's do that. And then — the mock calls, what was going on here?
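A minimal sketch of the stream-in/stream-out shape being described, using gzip over in-memory io.BytesIO objects the way a test might; the function names are illustrative, not the project's.

```python
# Sketch of stream-in/stream-out compression, using in-memory streams the way
# a test might; function names are illustrative placeholders.
import gzip
import io
import shutil

def compress_stream(input_stream, output_stream):
    with gzip.GzipFile(fileobj=output_stream, mode="wb") as gz:
        shutil.copyfileobj(input_stream, gz)

def decompress_stream(input_stream, output_stream):
    with gzip.GzipFile(fileobj=input_stream, mode="rb") as gz:
        shutil.copyfileobj(gz, output_stream)

# In a test, io.BytesIO stands in for real files on either end.
original = io.BytesIO(b"example data")
compressed = io.BytesIO()
compress_stream(original, compressed)

compressed.seek(0)
restored = io.BytesIO()
decompress_stream(compressed, restored)
assert restored.getvalue() == b"example data"
```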
B
A
Okay, it looks like — okay, so I think the one thing would be — it looks like, okay, so this is on result, okay, and where's the dataflow — create their file —
A
Okay, great. Okay, let's see, so then let's just double check this. So you've got your — say you have the location with the zip file — that's, okay, well, let's do tar.gz — a tar.gz comes in; in the next phase you'll make the dataflow, and you'll — okay, so you make the dataflow and you do the decompression of the gz.
A
And, okay, how's that gonna work, though? Okay, so if we take streams — say we have two streams, right, and two operations. So the input stream and the output stream — and the output stream of the decompress goes to the input stream of, like, the tar file extract, which is taking a file path, so you have to write the stream to disk.
A
If the tar extraction operation took a stream, then it would read from the stream, which is also accepted as an input by the decompression operation. So those operations would get kicked off at the same time, and it would just sit and wait and do an asynchronous read.
A
I don't know if you guys are following this, but what I'm trying to do here is figure out — the way that the operations are executed, right, they get kicked off when an input of the correct data type is present. And so, if you have two inputs — so if you have this stream input that's being decompressed, like you're writing the decompressed data to the stream, and then the stream is also being used —
A
Yeah, okay — so you might want to take — it should really be an asynchronous stream, because if we kick off both of these — so here's the other thing: what if, say, you had these two operations running on two different machines, right? You can't just pass an fd between the two of them, right, when you create that stream object.
A
So one's executing the decompression, one's executing the archive extraction, right. And so you need the stream — so it has to be asynchronous, because of the event loops that are running — you know, we can't block the event loop on an I/O operation. So we'd create the object at some sort of orchestrator scope and feed it to each operation, and they have to write to it asynchronously — or write and read — they have to read asynchronously. So, in this case —
A
Okay, so the main thing, though, becomes: okay, how do you integrate with the existing APIs? Let's see — so we're using input file and — okay, so the compression class uses copy-file-object — okay, so this one's easy, you just do a write to the output stream. All right, so decompress taking a stream — yeah, I mean, we just remove the file —
A
— the copy-file-object things, because now you have some kind of asynchronous stream to deal with, all right. But the problem is, now we have to introduce this concept of asynchronous streams, and this probably over-complicates this project too much. So, sorry — this is — we do need to flesh all this stuff out, because that's sort of the point of it, all right.
A
I think it probably sounds too complicated to do the stream thing, because we don't want to do the stream thing wrong — we don't want to do it in a way that needs to be changed later. Because I think what we found out here is that we need some kind of support for these — we need some sort of concept of this asynchronous stream, right. And like you said, you know, what that would be is — well, let me bring up the page. What is that? What is this?
A
No — what's the page for it?
A
Yeah, so just like you just said — the transports, those transports and protocols — where's that — yeah, okay, great. Yeah, so reader.read, writer.write, yeah, okay, and then you await — we'll await the flush, okay. So, what is this object that's returned here?
A
Yeah, it returns a transport-protocol pair — okay, so this is likely what we'd end up with here; there's the happy_eyeballs_delay. Okay, so —
A
Okay, so yeah — so transport and then protocol, is that correct? Yeah — so a transport-protocol tuple is returned on success, so transport, protocol, so yeah. And then you await the reads. Okay — oh, it's drain, not flush, that's right. Okay, so yeah, the type hints would basically be transport and protocol.
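The page being read off appears to be the asyncio streams/transports documentation (reader.read, writer.write, drain, happy_eyeballs_delay). A minimal sketch of that reader/writer API; the host and port are illustrative and assume something is listening locally.

```python
# Minimal asyncio streams sketch matching the API read off the docs above:
# reader.read(), writer.write(), await writer.drain().  Assumes something is
# listening on 127.0.0.1:8888 (e.g. a local echo server); purely illustrative.
import asyncio

async def send_and_receive(payload: bytes) -> bytes:
    reader, writer = await asyncio.open_connection("127.0.0.1", 8888)
    writer.write(payload)
    await writer.drain()          # the "drain, not flush" from the discussion
    data = await reader.read(100)
    writer.close()
    await writer.wait_closed()
    return data

# asyncio.run(send_and_receive(b"hello"))
```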
C
A
Right, I think what we'll do here is — and I really would like this to be — it would be great if we can get this sort of, you know, closer to the ideal on the first shot here, because this is going to be the foundation for, you know, sort of these arbitrary connections.
A
If you were to — okay, if you were to deal with these transport objects, I think that would solve our issue here. So —
A
C
B
A
So, let's see — let's just keep it as — okay. So I really would like this to be streams. Okay: so, input file stream and output file stream.
C
A
C
A
Okay — no, it's probably both of us, because I know my internet has been kind of wacky. So, all right: if you feel comfortable with asyncio, then attempt a stream-based approach, and I'll put the relevant links in here.
A
Okay, so — I know, I'm writing this down just so that — so, input file path —
C
So, regarding the streams thing, I have a few questions here. Maybe, like, compression and decompression can be done with streams, but what about, like, creating archives — would it be, I mean, streams as well?
A
If you go with streams — if you go with streams, accept streams for the inputs to unpack and the outputs — or inputs — to pack. And, let's see — so for archives, if you go with streams: streams for the archive to unpack and for the stream to pack data into; use paths for the directories to extract data into and to pack data from.
A
C
Okay, I would need to, like, think a bit on it and then implement it, because —
C
A
C
A
It is not straightforward, right, and that's why we just spent so much time talking about it. But, you know, it's worth trying to figure this out, because we sort of uncovered a new concept here, of the streams. And yeah, we uncovered this concept of streams, and I'm not sure if I love this whole stream thing, because it introduces all sorts of funkiness.
A
But I think it's fine — it's probably a necessary evil long term. So I think it's probably something that's gonna be necessary long term, so then this would be at least a good foray into figuring it out — so, agreed, and different ones for compressed — yeah. And so you could do this as a stop-gap and come back to this later.
A
If you wanted to — or you could just sort of leave this and we can figure it out at some other point, right. So, at a minimum, we'll submit an issue for this and go with this. But if you want to go for it, then great — that would be interesting, right.
D
A
Great thinking. So let's submit an issue for this — let's submit an issue to track this, and if you feel like you have time to come back to it, then do so.
A
All right, perfect. So, anything else you want to talk about on these?
A
So, let's see — yeah, that's a good question. So, no, we need to do this first. So let's make sure our definitions are specific enough, and then we can merge it, all right. And then, let's see — this is another linter that we need: if it's already under the "Added" section, then you don't need to say "add" again — so, "operations for compression and archive".
A
Because you've added this — so, "Added consolidate test cases tutorial" — or else, see, this one, we needed to remove the "added". So, all right. Okay, where is — okay, so now, okay, wait, no — I was gonna comment that; that's why I had that.
A
All right, so you want to talk about scikit clustering model questions, and questions on cleanup operations. So what questions do you have there?
B
So I would like to share my screen. Okay.
B
So the thing here is: if we take a look here, the scikit model scorer will not work with the clustering models, because they don't have the score method. Okay, so what I was planning is — in the scikit tests, this will actually raise an exception.
A
B
Sounds good, yep. So I was thinking — maybe should I remove this test?
A
B
D
A
Great — no need for accuracy in —
B
And, like, there are two scorers — like, the scikit model scorer and the scikit scorers — so should I make the pull request for both of them in the same PR?
B
There are actually two scorers: one which uses the model and one which uses the scikit metrics method. So should I make the pull request for both of these in the same PR?
A
B
A
Yes — okay, so yeah, if it's already set up like that, let's just do it as one PR, and then we can split it into two if need be. But if it's already two, submit it as two; if it's one, then let's just do it as one, and then we might decide that we need to split it later.
A
Yep, all right, great. Are we done, then?
A
Flower — so I thought you might fix it during this — all right, great. So let's go see that.
A
Should — it is doing something dumb — so, great, all right, awesome. Okay, so I'll take a look at Flower 17, all right, and then, okay, so this is the next thing I need to do, and then we'll get that to you. And hopefully then the doc tests — because I think there are two things right now with possible doctest issues. Actually — oh, we wanted to make sure that the doctests pass on that one PR that's been open a long time, and then we'll merge that, okay.
A
B
Right — and we have to talk about operations, yeah, cleanup.
A
So: accuracy staging, and then the PR that's been open a long time — all right, great. So, questions about cleanup operations — all right, shoot.
B
Yes, so the thing is, in the cleanup operations — let's suppose we have a transformation operation.
D
B
In that, we have a lot of operations, like log transformation — or there are, like, many transformations and normalizations which we can do. So what I was thinking: should I create a single operation for each of these, or should I create a single operation and, like, give some kind of config to the user, so the user can choose what specific operation they want to perform?
A
E
B
Let's say that — I saw, like, some of the operations in the cleanup operations, and, like, there are many operations under a specific type of task that you want to perform.
A
Okay, can you give me a concrete example here?
A
B
A
B
...of the particular rows and columns, to make the scale of the values be on the same scale — so they may want to perform, like, square operations.
B
So these are actually the same kind of operations, right — transformation operations.
B
So should these operations be different — like, should log transformation be a different operation and the squaring operation be a different operation — or should we keep them in a single operation and give the user some kind of option for what they want to perform when they're using the operation?
A
B
A
Same question — all right, I think — I mean, it sounds to me like these need to be separate operations, because if you're thinking about it in terms of, like — okay, for example, you know, you're doing some preprocessing, right, and where are we going to do that? Like, the create command and stuff like that?
A
So in that case, it seems like you would want to have these be, you know, separate operations defined during the create command or something like that, right. I mean, I would think about it in the way where it's like: okay, what is most convenient, syntactically, for an end user to define?
A
It kind of begs the same question about those compression operations and decompression operations, because in that case we were accepting the file format as — we were accepting the extension as the compression algorithm — like, we were choosing the compression algorithm based off the extension, and perhaps we should not be doing that, because then, you know — yeah, let's see.
A
And if this is the way that we're treating this, then probably, right — because if you were to go define, for example — if you're going to make that dataflow, you would either add an input that would be that file type that it needs to switch on, like that extension that it needs to switch on, or you would just add the operation itself, right, and you wouldn't need to add that separate input for what the file type is. So those should probably each be their own operation.
A
Yeah, I think — let's see, let me go open that up. I mean, and you can also — I mean, you can always, you know, create these things in a for loop, right. They are functions, but you can spit these things out in an automated fashion if you wanted to. So, let's see: format for the compressed output file.
A
Yeah, I would do — open compression class — you're already doing this get-compression-class thing, so you could basically just have some kind of loop that creates these compress and decompress functions, right, and then decorate them with op. So yeah, I think you could do that.
A
Let's see — I wonder what this would look like. Let me go back to this. Yeah, so I would say that also says something about — so, let's make these into different — all right: let's make compression and decompression operations for each file type, or for each algorithm.
A
So you could create the functions, like, via a loop, so you don't — do not handwrite each function.
A
So what we'll do is we're going to create a decompress for — let's see, let's just do our supported compression formats.
D
A
All right, so — so we now, okay — and compress, decompress — okay, great.
A
So then we just get rid of the file format here, and we get rid of that get compression class, okay. So: op, inputs, outputs — okay, great. So you can basically say now — so say you wanted to do — let's see, actually, compression class — so you've got to do — yes, we have to have one of these. Okay, so there are closure issues here, so we need to make a function that says make compress.
A
And then this returns decompress, and now, in this way, we can go through and we can say — let's see, so this would be — where's that stuff from, scikit model, scikit —
A
Where is this — so we go through our file formats, so we have our make compress and make decompress, okay. My screen is slightly too small, so sorry — make compress and make decompress, right. And we need to go through each one, and we can say decompress equals make decompress, and then we pass this — so now we have a decompress function.
A
A compress function — so we have a compress function and we have a decompress function. In this loop we create a compress function and a decompress function for each file format, and then we decorate them with op, and we say, you know, the function name is the extension.
A
And so the result of this is that, at the global —
A
— scope of this file, gz compress gets set to that — come on, now.
A
And we change the outputs, you know, appropriately — so, input file path, right; input file path to make decompress, output — so what were we saying? I think we said that we were going to put the output here, so output file path — where we're going to put the output, there's really no point, because it is an input already.
A
So if you had a — yeah, let's not do these as outputs. Let's see what we said in the notes, because if you do them as outputs — so if you had, for example, an operation that created a temporary file, you would have a cleanup operation that removes the temporary file. But you wouldn't want to create a cleanup —
A
You wouldn't want to write a cleanup operation that removes a decompressed file, because then you're just going to remove all your decompressed files, right, and, you know, maybe you don't want to do that to all of them. But you definitely do want to remove all your temporary files, right. I think we wrote something contradictory to this — I want to make sure that we don't have that, so.
A
Input is a compressed file path and output is a decompressed file path, and then we have extension-compress and extension-decompress, right. So this is how we create them in a loop.
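A sketch of the factory-function-plus-loop pattern described above: the factory avoids the closure issue, and setattr registers, for example, gz_compress and gz_decompress at the global scope of the file. The supported-formats mapping, function names, and definitions below are placeholders; the project's actual op decorator would be applied where the comments indicate.

```python
# Sketch of the loop described above: factory functions avoid the closure
# issue, and setattr puts e.g. gz_compress / gz_decompress at module scope.
# The names and formats below are placeholders for the project's real ones;
# this is where the op() decorator with input/output definitions would go.
import bz2
import gzip
import lzma
import shutil
import sys

SUPPORTED_FORMATS = {"gz": gzip, "bz2": bz2, "xz": lzma}

def make_compress(extension, module):
    def compress(decompressed_file_path: str, output_file_path: str) -> str:
        with open(decompressed_file_path, "rb") as src:
            with module.open(output_file_path, "wb") as dst:
                shutil.copyfileobj(src, dst)
        return output_file_path
    compress.__name__ = f"{extension}_compress"
    return compress

def make_decompress(extension, module):
    def decompress(compressed_file_path: str, output_file_path: str) -> str:
        with module.open(compressed_file_path, "rb") as src:
            with open(output_file_path, "wb") as dst:
                shutil.copyfileobj(src, dst)
        return output_file_path
    decompress.__name__ = f"{extension}_decompress"
    return decompress

# One compress and one decompress function per supported format, set at the
# global scope of this file (the op() decorator would wrap each of these).
for extension, module in SUPPORTED_FORMATS.items():
    setattr(sys.modules[__name__], f"{extension}_compress", make_compress(extension, module))
    setattr(sys.modules[__name__], f"{extension}_decompress", make_decompress(extension, module))
```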
A
No problem — yeah, the setattr, that's a good one. So — because then we know all the supported compression formats ahead of time, so we will do this.
A
Okay, great — supported compression formats.
A
Okay, oh.
A
What PR are we on here? Okay, 1129 — jhpo — comment.
D
A
Right — and this is the example for creating operations for each format with a loop, okay, and then we also posted — just —
A
Okay, let's just make sure these showed up here. And then — so we decided that we're gonna do that for the other ones, or for the ones that originally started this discussion, and we're gonna do them for this tar archive thing — okay, great, and it showed up in the comments. So, all right, cool — anything else from anybody today? I know we are over on time again, so.
A
Okay, so let's merge this guy, then — evaluating model performance. Okay, that one — and we just took a look at this, okay.
A
All right, and still — oh yeah, okay, we're gonna keep that one separate, okay. What are we doing here? Remove that, okay — require pinning this, remove single tests? Okay, let me clean this one up real quick here, and then I won't take any more of you guys' time.
A
So, all right, great — I'm just gonna rebase the commits so that we remove, you know, this type of stuff, and then I'll push it to master, so, great. And let me note here that we did that today. Great work, guys. Thank you all.
A
Right, everyone, and I'll talk to you soon. Oh — do you guys want to meet on Friday, or no?
A
Right, cool, all right — well, okay, wait, actually, no — I said — so, why don't we try to handle things asynchronously, because I just realized I probably — I forgot — okay, so I probably won't be able to be here on Friday. I might be, but let's try to bring things up — try to shoot things over Gitter, and then I'll try to handle them, you know, as time allows. And if we do find that we really need it —
A
— then we'll — I'll see, but I mean, I just may not be able to. So, all right, thanks everyone, and have a good one.