From YouTube: Weekly Sync 2021-08-10
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.5c13eb36goty
C
B
B
B
If you want to keep writing code, obviously that's great, but as far as GSoC is concerned... oh, come on, nothing wants to work for me today. Okay, as far as GSoC is concerned, we need to try to get that stuff done by the 23rd.
C
Okay, so by then, do we mean merged?
B
C
The other one was tuning models, and I think I'm pretty much done with it. I also added a parameter grid as part of the PR. I didn't PR it yet; my remote is messed up. I'll do it as soon as that...
B
Gets fixed, yeah. GitHub is all wonky right now. Okay, so, and then multi-output, right? So, yeah.
B
C
Or, yeah, it's ready for a review. There was an error there as well that I didn't know much about. Okay, so let's get that.
A
A
B
B
C
B
Let's see... okay, all right, so we'll just leave that open.
A
B
All right, and wait... this was... oh, this was your question about the model, so we're going to have to do something there. Okay, let's just make a note of that.
B
A
Interact with immutable config properties, okay. And then this guy.
A
B
"Fake model context has no attribute config." Okay, yeah, so this is sort of the risk that we run with that approach we were talking about. You know, we talked about: do we move this thing?
A
B
A
B
B
C
A
Yeah, features.
B
B
So how would it be supported otherwise, though? Because if our score function only takes one feature, you know, then it's only going to take one feature across all of them, right?
B
A
B
Score... it would be nice to have it in the function.
B
B
This is, you know, it's not ideal, right? I think you either need to take like a list of features here, or... let's see.
A
C
B
B
B
On the other hand, it sort of complicates things if we end up with multiple features that a scorer might score based off of.
B
B
We can take it as an array, and then maybe most people take the first index. Or we can take it as a variable set of arguments; most people take the first index, or, you know, just define their own feature here and then the rest trail off to nothing. So.
B
Right, the other accuracy context wants two features, right? Or maybe it can take a variable number of features. Right, then if this one gets called with more than one feature, it'll just throw an error right on call. And if this one gets called with more than one feature, it'll just grab them all and use them. Right, yeah. I think that might be the way to go, if we're going to try to put this in the method signature. That would solve your problem now.
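A rough sketch of the variadic-features idea being discussed: scorers take features as `*args` in the method signature, and each scorer enforces its own arity. The class and method names here are illustrative stand-ins, not the project's actual API.

```python
class SingleFeatureScorer:
    """Accepts exactly one feature; errors on any other count."""

    def score(self, *features):
        if len(features) != 1:
            raise ValueError("this scorer takes exactly one feature")
        return f"scoring {features[0]}"


class MultiFeatureScorer:
    """Accepts any number of features and uses them all."""

    def score(self, *features):
        if len(features) < 1:
            raise ValueError("need at least one feature")
        return f"scoring {', '.join(features)}"
```

With this shape, the single-feature scorer throws immediately when called with two features, while the multi-feature one just grabs them all, matching the behavior described above.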
B
The question is... so, we use the config for a lot of things, right? And it's like: where do you draw the line between something that goes in the config and something that goes in the method signature, for these various classes?
B
B
So, and the reason why it's contained within the config for the model is because, if you serialize the model to disk, then the model that's been trained is tied to those features. The accuracy score's not serializable, but at the same time... yeah. So for the accuracy score, once again, the feature really should reside in the method signature, because it's not something that's ever going to be tied to the specific scorer, right? Say, if you were to serialize it to disk, does it make... does it?
B
B
B
The one thing that concerns me here is... okay, so let's see. The thing is, with the accuracy score, it's all sort of dynamic. So, at what point... you know? Okay, so, because... I'm thinking about that double context entry pattern, right, and so, with the accuracy scorers.
B
You know, you configure the class and you... okay, you enter the context. I'm trying to figure out whether there's a reason to use the double context entry pattern there, and I think the answer is probably, you know: the reason why we use it is because we don't know if there's a reason in the future. So we should probably just keep those configs. So we'll keep the actual scorer. Let's just move features to the end of this, because it's highly dependent on the model context.
B
That's given. Yeah, okay, that's perfect. So features is dependent... the features here in the signature sort of go hand in hand with the stuff that's being passed also in the method. Therefore, the three arguments that we have are very closely related to each other, which means that they all belong in the method signature. Nothing belongs in the config for the accuracy context there. And if it did, then, you know, the qualification for moving something from a config to a method...
B
Signature is: is it related to the other things for the method call, right? And I think the answer is... you know, if we had, for example (this is a dumb example), but if we wanted to make this, too, configurable, that would not necessarily be related to model context, sources, and feature.
B
I was just trying to make sure that we have that right there. Okay, so let's just move this, you know, to the end, and then you can... "invalid number of features" there.
C
B
So this is not mean squared error; this is in general, if we're implementing an accuracy score, right? So this other accuracy context down here, this would be our next implementation, right? This is some other implementation of an accuracy score. Right, yeah. So in that case, say this one accepts multiple features.
B
Then all we have to do is make the method signature star-args, and then, you know, all the features will get passed here. And then we could say, you know, if it's less than two, then we throw an error. Right, yeah, cool. Okay, so I think that sort of fixes the...
B
Yeah, that sort of fixes the method signature there. So, let's...
A
B
All right, so we decided that the MSE scorer... or, we decided that the accuracy scorers should take the... okay, should take the...
B
Method in this way, if a scorer wanted to implement, or to work on, multiple features.
B
All right, great. So, let's see... so then this error was about... okay, yeah, that's right!
B
So this error was about config, and I think this is actually... let's see, actx.score.
B
Let's see... so I was like, okay, this is probably going to get complicated when we hit the HTTP service, and this makes sense. Because, yeah, now it's no longer exactly straightforward to do the modifications to the HTTP service, unfortunately. So, let's see... okay, yeah, I think what you're going to have to do is something similar to what you did to high level.
B
B
B
B
Yeah, unfortunately, I think this stuff makes like no sense right now. Let's see... yeah, because the way it had been originally written was to basically instantiate the sources and then post the sources as the body.
B
B
B
A
B
Okay, well, we don't have an issue for this. But for now, let's do exactly what you did... but let's just move it, you know, one over. Because, yeah, see, the thing is... this is what I was getting at, this is what I was left with: this, right here, is going to be a pain to add features to. I mean, it's not that bad; it's basically the same code.
B
You did it... the changes you did in high level. But it's just going to become this giant mess, you know, of changes that are going to cascade through, of things that we're going to need to refactor later. So let's just sort of stop-gap it, just like you did, and mctx.
B
B
Okay, I think maybe we just need to modify the fake model config here, so, until...
A
B
I didn't do this, okay, so, yeah. So if we add this with a, you know... if we add this with a feature to predict, then the question is: what the hell were we asking it to predict? What is this... test score... test_score... score m service? Okay.
B
So if we do this, then this will be a good stop gap; we'll refactor the HTTP service. So you'll move the parameter... you'll move the argument to the end, and then propagate that change through, and then, you know, propagate that change through your high-level changes.
B
Let me write this down. Okay, so, all right. This is mainly because of the need to refactor, okay. So let's move the feature to score to the end of the score arguments.
A
To refactor would be...
B
Too much work, because then you have to update all the docs and all the other test cases and stuff. So we're just going to move this; we're just going to do exactly what you did, for now. That's...
A
Work. All right... you chose to use the request body right now. We'll put this as a to-do for the HTTP service refactor, okay.
A
A
A
B
All right, yeah, do it.
A
Okay, 917.
B
A
B
This is because this is not in order. Okay, great, okay. So now this is all ordered correctly. Okay, so hopefully these are at least the changes you need to get the HTTP service working now... "feature has no attribute features", or the... yeah.
A
B
To get scorers working with the HTTP service, we need to add a to-do to the HTTP service refactor: accept the features to score from the body of the request.
B
B
C
Yeah, yeah, okay. You haven't yet seen the multi-output...
B
Oh, there's a multi-output notebook. Look at that. All right, great, okay. So let me review that offline here, because I think, you know... so let's go through, and so: reviewed in meeting... so, we reviewed partially.
B
We need to move the score feature to the end of the arguments. So once you've done that, then ping me for a review, and then I'll tell you, you know, if... like, you know, I mean, you know how this goes, right? So, let's see... but let's just make sure that it's all as cleaned up as possible. Right, if it all looks good, then I'll just merge it. But, you know, I'm assuming that you probably made a logical tutorial, so, yeah.
B
Let's just try to make sure that we captured the other types of things, you know: if you use the cache download, link to it... that type of stuff that we had in the last ones. Let's see... so, and then, obviously, hopefully you've mentioned, you know, here's where you can find the list of all the models that support multi-output. Right? Did you? Yeah, yeah? Okay?
B
So then you mentioned that, I assume, and you said anything in scikit. So, great, yeah, that should be fine then. So: parameter grid, tuning models. So the parameter grid is non-GSoC-related, right? This is sort of just an extra spin-off that happened.
C
I was working on the tuning models notebook, and I thought we could, you know, have some sort of optimizer to optimize the hyperparameters, like grid search, etc. So I didn't really implement grid search exactly... grid search involves a CV as well. So I just created a parameter grid to search through the parameters and look through their accuracies, to see which one works better with the model.
B
B
Thing... all right, yeah. I feel like I'm used to getting two warnings; I only got one. Let's see... so: working on tuning models notebook, and implementing almost-grid-search. So, what else... I wanted to capture some more notes on this. So, you said... can you give me a little more detail?
C
B
C
And, yeah... it sets the model to the best parameters and returns the highest accuracy.
B
B
Do I need to see the notebook? I mean, we can review it now, but is there anything that's blocked on it? You know, we can work through a blocked issue, but otherwise, you know, I would assume it's probably pretty close to done, if not done.
C
Yeah, I made it work with XGBoost models, because they were actually mutating the config inside the train function. But when it comes to the scikit functions, they don't really do that inside train; they do it in the __aenter__ function, and that results in...
B
And that's what we were talking about with... okay, so: how can we support mutable configs with scikit models? Right, yeah, okay. So, okay, and so...
B
A
B
Okay, we need to make sure that we get those scikit models working, though. Yeah, and that's essentially it. It looks like we have open questions around, you know, make config numpy, make config inspect, make config tensorflow... all those... to, you know, integrate those with the mutable config stuff a little more. And we can talk about that in a future meeting, so we don't take up more time with that. But I think, you know, what it'll come down to is...
A
B
Inspect is probably more friendly to look at. What it'll come down to is: okay, so we're looking at the parameters, right? So we can imagine this is kind of like the numpy thing, and then, you know, we'll have field, and we'll need to say mutable equals true or mutable equals false, right? So... my config, so...
B
My brain just gave out. Yeah, we'll need to set mutable equals true or mutable equals false, right? And we talked about how we might want to, you know, set those for integer parameters, or, you know, et cetera. And actually, that kind of fits with the way that it works right now... with the way that the mutable config patch works right now.
B
It creates a specific setter and getter method for each config property. But we could sort of make that runtime-configurable, so that after the config is created, we could set specific properties to be enforcing immutability, or not enforcing immutability, at the property level. That could be a really good way to go there, yeah.
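A minimal sketch of that idea, as an assumption about how it might look (this is not the project's actual mutable-config patch): a config object keeps a per-property set of mutable names, toggleable at run time, and its setter enforces immutability for everything else.

```python
class MutableConfig:
    """Illustrative config with per-property, run-time mutability."""

    def __init__(self, **values):
        # Bypass our own __setattr__ while wiring up internal state.
        object.__setattr__(self, "_values", dict(values))
        object.__setattr__(self, "_mutable", set())

    def set_mutable(self, name, mutable=True):
        # Toggle immutability enforcement for one property.
        if mutable:
            self._mutable.add(name)
        else:
            self._mutable.discard(name)

    def __getattr__(self, name):
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name not in self._mutable:
            raise AttributeError(f"config property {name!r} is immutable")
        self._values[name] = value
```

Attempting `cfg.learning_rate = 0.2` raises unless `cfg.set_mutable("learning_rate")` was called first, which matches the "enforcing immutability at the property level" idea above.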
B
B
Yeah, okay, so I'll just write that down as an idea, and then let's try to, you know... if anybody else has any ideas, shout out on Gitter or now, and we can figure that out. There's definitely, you know, a bit of a performance hit there.
C
Can I share... just for the sake of the code snippet, can I share my screen? I just wanted to make sure I'm on the right track.
C
So, basically, I don't know if it was meant to be like this, but I came up with this to mutate...
B
C
A
B
C
We got an output of the same accuracy as the one above, okay. And I reran this one after that. Okay, let's see... so basically, setting the learning rate to 0.2 didn't change the accuracy at all. Okay.
B
C
B
C
I guess it could be the same problem, because it doesn't really change it... it doesn't really change the config stuff. So it could be the same problem that resulted in me going for this route.
C
All right, I'll change it to... okay, sweet, the other way.
A
B
All right, so, let's see... so where are we at? So this is all your GSoC-related stuff, right, Hashem?
B
D
D
A
Great, great, okay. Yeah, cool, good... operations.
D
So, actually, instead of passing the action from the model class, we are just deducing it, because the action can be deduced pretty simply: if the input is a directory and the output is a file, then it is a compression action... archive, I mean, archive action... and extraction otherwise. Okay, so what you're looking at here is actually three cases.
D
It's like: a file can exist or it cannot exist; the other variable that can change is whether the path is a file or a directory; and the third variable is whether it is an input or an output. So, taking the cross product of all these three possibilities, we end up with two times two times two, which is eight possibilities. Out of those, three are invalid, and the five remaining possibilities can be grouped into the three cases which have been shown here, right.
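The case analysis above can be sketched by enumerating the cross product directly. The grouping rules in `classify` are only an illustrative reconstruction of the kind of dispatch being described (compress/archive vs. extract, with some invalid combinations), not the project's actual logic.

```python
import itertools

# Three binary variables: 2 x 2 x 2 = 8 combinations.
combos = list(itertools.product(
    ("exists", "missing"),      # does the path exist?
    ("file", "directory"),      # is the path a file or a directory?
    ("input", "output"),        # is it an input or an output?
))

def classify(exists, kind, role):
    # Hypothetical grouping: inputs must exist; a directory input
    # suggests archiving/compression, a file input suggests
    # extraction/decompression; outputs may or may not exist yet.
    if role == "input" and exists == "missing":
        return "invalid"
    if role == "input" and kind == "directory":
        return "archive"
    if role == "input" and kind == "file":
        return "extract-or-decompress"
    return "output-target"

cases = {c: classify(*c) for c in combos}
```

Enumerating like this makes it easy to verify the counting claim: eight total combinations, with the invalid ones filtered out and the rest grouped into a handful of cases.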
D
So, like, I did all the math... it's...
D
So I spotted that those data flows, where we were adding a compression operation after an archive operation, or an archive operation after a decompression operation, were actually very much the same, just with some minor changes. So I made it a chained operation, where I can switch the order of the operations and it will work in an operation-agnostic manner.
B
Okay, so, let's see. The one thing I would say: we have this tempdir. That should probably be something that needs to be set at run time.
B
Okay, I mean at run time of the data flow, though. So, let's see... seed, input path... okay, so, yeah, let's check that out. Okay: create archive data flow, create chained archive... okay.
D
Yes, so, that part about loading stuff... so, actually, if it is already in memory, I don't need to mutate it. But when I need to add new properties to the config, how will I do that?
D
Yes, for example: there is a config, and I only pass a few of the fields and I don't pass the rest of them, and then I load it from a file. So it has a config file, and that has the missing properties. But how will I add those properties to the existing config in memory?
B
Okay, so... okay, yeah, this isn't necessarily what I'm confused about now. Okay, so this is a great flow chart; this is great. So I'm thinking more about that config.json right now.
B
Trying to figure out what's happening there. I didn't get a chance to respond to this, unfortunately, but I'm glad you went through with your changes, because that was the right course of action.
B
Oh, that's config.json. Okay, okay, great, okay, that's perfect. Yeah, okay.
D
D
So, ideally it should not break anything, because we have not added any archive tests yet in any of the models. But all of the models are giving a similar error, with the same error message, which has a very similar trace path. So if you can check the second-to-last run... if the latest one hasn't completed yet... they all have something wrong with the high-level stuff being called. Let's...
B
And then: "type object is not callable". Usually someone just forgot to return from __aenter__. This is a common thing, so if we just jump into this...
B
I think... I think that's it: just the "return self" is missing, because __aenter__ needs it.
B
Mostly... most of the time you return self. So, let's see... that's probably going to fix that.
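The bug being described, as a minimal sketch: an async context manager whose `__aenter__` forgets `return self` hands the caller `None`, which then blows up on the first attribute access. The class name here is illustrative.

```python
import asyncio

class ModelContext:
    async def __aenter__(self):
        # ... set up resources ...
        return self  # forgetting this line leaves the caller with None

    async def __aexit__(self, exc_type, exc, tb):
        pass

    def predict(self):
        return "prediction"

async def main():
    async with ModelContext() as mctx:
        # Without "return self" above, mctx would be None here and
        # mctx.predict() would raise AttributeError.
        return mctx.predict()

result = asyncio.run(main())
```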
D
B
D
B
D
D
B
D
D
D
B
Oh, oh, okay. I see where you're confused. Okay, okay, yeah, yeah, okay, yeah, yeah. That makes perfect sense, why you would be confused here. Okay, so... can you... let's see... let me show that code again. So, okay, so you're saying...
B
If we want to add... so why would we want to add a property? I guess, you know... so the reason why we're doing that getattr with the default of None is just in case... because this is the model base class, right? So we're implementing some helpers and stuff in the base class, for if the subclass defines these properties, like...
B
So if the subclass defines the location property, then we're going to do this stuff, right? But we don't necessarily...
B
It's not a given that the subclass will define the location property, right? So that's why we're checking with getattr. So... why would we want to set a property? Does this have to do with the config.json loading? Is that where your concern is revolving around?
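A sketch of the base-class pattern just described: helpers in the base class use `getattr` with a default of `None`, because a subclass is not guaranteed to define the property. The class names are illustrative, not the project's actual hierarchy.

```python
class BaseModel:
    """Base class providing helpers that tolerate missing properties."""

    def describe_location(self):
        # Subclasses may or may not define "location"; default to None
        # rather than raising AttributeError.
        location = getattr(self, "location", None)
        if location is None:
            return "no location configured"
        return f"model stored at {location}"


class DiskModel(BaseModel):
    location = "/tmp/model"  # hypothetical path for illustration


class InMemoryModel(BaseModel):
    pass  # does not define "location"
```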
B
D
B
Okay, let's see... so, in that case, we'd need to load from the location, right? And then we'd instantiate the model after we've loaded it from its location, right?
B
Okay, because we would take a location, we'd load the location into memory, then we'd load the config.json, and once we had the config.json, then we could instantiate the model, right? But the model calls this part, right? Well, that's what I'm saying: we're going to have to change that.
B
So, let's see... and I think this is sort of going off that create discussion that we'd had, you know: creating a specific model, modifying our helpers there. Right, because we talked about that... like, Model.create or something like that, or the entrypoint-dot-create, where you...
A
B
Yeah, so, exactly, yeah. So this is sort of like an extension of that, right? Where you'd be doing... and I think we even talked about, perhaps, the need to rename the load, or to repurpose the load function, or load classmethod, because I think what we end up with here is really like...
B
Yeah, yeah. And I think this is going to be generic code. I mean, this isn't even going to be tied to models at this point, right? It's really just going to be, like, you know: this is how you load any of these data flow facilitator objects. Because, yeah, this would work for anything that has a config.
A
B
B
Yes, exactly. Well... no, actually, I think... so, let's drill down into it a little bit further. So, okay, let's see, let me write some code here.
B
B
Oh, okay, great, okay, yeah, okay: 11:56.
A
A
B
B
Saving and loading models... here... and loading... we need to define... "This document outlines our plan to implement a generic saving and loading mechanism for all base configurable objects."
B
Okay, so, and...
B
It's "configurable"... so "base configurable" is basically anything that takes a config, right? Yeah, okay, so, yeah. So basically we can use the stuff that you implemented here to save and load anything, right? Let's see... so, all of a sudden, your project is applicable to more than just models.
B
We don't need this. Okay, so where's your code here? Let's use that as our guide. So, essentially, what's our flow? Our flow is...
B
We've got, you know... start with... or, well, okay, so we're gonna... do we have... let's just think about this: how do we think about this? Well, we know we have two async functions, right? We know we have...
B
B
B
B
So the dataflow is given the plugin type... plugin.
B
So this is a tricky thing; we've talked about this many times before. You need to give people control somewhat, but you can't give this information on disk too much control. Because if you give it too much control, then all of a sudden you open yourself up to, you know, unintentional side effects. And unintentional side effects are basically like, you know...
B
For example, the yaml safe load... if you've seen the vulnerability around that... the yaml full load CVE. So, basically, you know, the argument here is: where is this stuff...
B
Okay, so, basically, essentially, the YAML spec says that you can sort of... it's very similar to what we've got here, where it says you can sort of take a Python path, right? Like, you know: module dot submodule dot submodule, you know, colon class, type of thing... or I think it just uses dots, but it doesn't really matter. And then you can instantiate that class, or you can even call... like, you know, the thing was, they could call os.system, right, and run commands.
B
So we need to avoid that type of situation, and the way that we can avoid it is, you know, putting some filtering on what we load. And so in this case, right... so say we're loading, or say we're saving, an object.
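One common way to do the filtering being described, sketched under the assumption that loadable classes are registered up front: instead of letting on-disk data name an arbitrary Python path (the `yaml.load` / `os.system` class of problem), only instantiate classes from a known allowlist keyed by the type we already expect. The registry contents here are hypothetical.

```python
class LinearModel:
    pass

class TreeModel:
    pass

# Only entries registered here can ever be instantiated from
# untrusted on-disk data.
MODEL_REGISTRY = {"linear": LinearModel, "tree": TreeModel}

def load_model(name):
    """Instantiate a model by name, refusing anything unregistered."""
    try:
        cls = MODEL_REGISTRY[name]
    except KeyError:
        raise ValueError(f"refusing to load unknown model type {name!r}")
    return cls()
```

Because the lookup goes through the registry rather than an import mechanism, a serialized file saying `"os.system"` simply fails instead of executing anything.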
B
We know the type, right? So the type in this case is a model. So we know that when we load the model back in... like, we know that we're loading using the model... so, think about this from the standpoint of a config, right? So if I have a config, and it has, you know, the properties and then the data types...
B
So if I were to load features, and I were to load it from a location... then, if I were to load features from a location, I...
B
I know that it's of the type... so features isn't a grid... well, so if I were to load features from a location, then I know that the data type is features, right? So I know I'm going to load a features object. So say it wasn't one of our plugins, right? Or, better yet, you know, say... let's see... say we had a config, where... let me just...
B
So if we look at, for example, one of these operations...
B
So the model... yeah, model predict, right? So say we have this model predict config, and it takes, you know, a type of model, right? So we know that we're loading a model; then we just need to know which model we're loading. So if we open it up and we say, okay, you can load anything, then all of a sudden we open ourselves to that libyaml-style CVE, right, and we don't want to do that.
B
B
B
A
A
B
All right, so, let's see... what else do you have in your code here? So... does this sound good to you so far?
D
B
Just going to put this in another function, though... that is really the deal, right? So all you're doing is: this __aexit__ method is just going to call a function where you've made the body this, right, and you're passing self.config and the location. Right. So this stuff here... let's see, 11.74... so: gh pr checkout.
B
B
B
Yeah, exactly, yeah. So let's just file this as something to do later. Good point... because this should be relatively simple, though, so it's just good to know, right? Because...
B
Yeah, let's put this after GSoC, but it's good to know. I think we're almost there. So, and then the main thing was, you know, what happens with the load. So I think this is also going to sort of fix the load problem. Right now... because that was how this discussion started: you come in and you need to know...
B
Well, you can load... if you instantiate a model with parameters, you can load the model, right? And that would load it, you know, and give it the correct tempdir and everything, with the loaded model contents. It just wouldn't load the config properly right now. Right? Yes, okay, so, yeah. So, let's see... and then, from that perspective, yeah, I think let's just do...
B
Let's maybe do... so, let's leave this as a to-do for now. But within this loop body, instead of pass, let's do a comparison and log a self.logger warning that the config differed.
B
B
So: if the value from the loaded config is the same type as what it should be in the config class, then override.
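The loading rule just described can be sketched as a small merge helper. This is an illustrative reconstruction with dict-shaped configs, not the project's actual implementation: for each loaded value, overwrite when the type matches the in-memory value, otherwise log a warning that the config differed and keep the original.

```python
import logging

logger = logging.getLogger("config_load")

def merge_loaded_config(current, loaded):
    """Merge a loaded config into the current one, type-checking values."""
    merged = dict(current)
    for key, value in loaded.items():
        if key in current and not isinstance(value, type(current[key])):
            # Type mismatch: warn and keep the in-memory value.
            logger.warning("config differed for %r: %r vs %r",
                           key, current[key], value)
            continue
        merged[key] = value
    return merged
```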
D
We overwrite, right? Because it's just... that is... well, yeah.
B
Yeah, that is what the ADR says. Yeah, so, okay. So then it would... okay, so, yeah, the whole thing... okay, yeah. The whole thing is essentially to do that. Yeah, okay.
B
B
So, let's see... so we have a to-do on config loading, which we need to figure out how to...
B
...get the entry point associated, and then pass the config loaded from the data flow to the loader... or the config loaded from the data flow... to instantiate the entry point.
B
Got too many usages of the word "loaded" there... overloaded? Okay, so you are good for next steps, then? Is there anything else you need here, or...?
D
B
Great, all right, great, great. Okay, great work; looking good. So, let's see... so, Sudhanshu, where are you at here?
E
E
Like, working on creating an example, okay, for the cleanup operations. Okay, yeah. So I haven't pushed the changes yet, and, like, I'm facing some issues with the output layer. Okay... I would like to share my screen. That sounds good.
E
So, is my screen visible? Yeah, we can see. So, right now, what I have done up until now is: I have created the input layer, which will take all the inputs, get them into a matrix, and then perform the cleanup operations.
E
So this was one of the operations which I performed on the dataset, and it gave out, like, a normalized data set, which you can use for training and testing. Okay.
E
And after that, I have this output layer. So, in the output layer, what I am trying to achieve is to get, like... this is in the form of a matrix, right, this thing. So I was actually trying to get, like, a single row of this matrix, and return it, so that it can again be converted into the form of records.
E
Yeah, so, what I was actually doing here is, like: I created an output layer class, and in that I have this index value, so it will tell me, like, which row we want to output. And for the context, I have written this code. But the problem in this part here is, like: suppose we have 19 features, and let's say we have 20 data points... we have 20 rows in the CSV file. Then this code actually runs both of those multiplied... that many times.
B
E
That's actually the problem, because I'm not able to do it like that.
B
So your approach right now is: you had those scikit operations, and they take... so, did those operations that you'd implemented before this, the ones that wrap the scikit stuff... did they take the entire data set as a list, and then do cleanup on that?
E
B
E
Like decomposition operations, which require the whole data.
B
B
So, okay... so, if an operation requires the entire data set, then what did we do for this previously? Haven't we done this before? Let's see... where did we do this before?
B
E
E
Let's see... in the collect output... actually, in the NLP example, what we have is: we have sentences, and we just take all the sentences and then we create a matrix out of it. And for the output, how it is done is: we take each of the input strings, we find what its index is in the collected output, and then whatever the corresponding value in the list is, we actually return that. Okay.
E
The tutorial name is "using NLP operations". It's under data flows... oh, under data flows.
E
B
A
B
A
B
What the hell is happening here? So I thought we'd implemented that all for single, specifically, before this, because you need to run everything through...
B
A
E
B
Okay, and previously we could just index in easily, because we just needed to... we could use this string, right? And now we can't just pull from the string; we need to know the specific index that it entered at, right? Yep. Okay... I mean, I think you could return... can't you return something from your input layer operation that is the index it was input at, or something?
B
B
B
Okay, so: you take all your records in the data flow and you put them in a list, and then you run them through your scikit operation, you know? Yeah. And then you need to figure out which index in that output list maps to which record.
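The index-mapping step just described can be sketched like this. The whole-dataset operation here is a hand-rolled stand-in for a scikit-style transform (it just subtracts column means), and the record shapes are illustrative: collect records into a matrix, remember the index each record entered at, and map each output row back to its record.

```python
def whole_dataset_op(rows):
    # Stand-in for e.g. a scaler: subtract each column's mean.
    n = len(rows)
    means = [sum(col) / n for col in zip(*rows)]
    return [[v - m for v, m in zip(row, means)] for row in rows]

def run_flow(records):
    """records: list of (record_id, feature_list) pairs."""
    # Remember the index each record entered the matrix at.
    index_of = {rec_id: i for i, (rec_id, _) in enumerate(records)}
    matrix = [features for _, features in records]
    processed = whole_dataset_op(matrix)
    # Map each output row back to its record by stored index.
    return {rec_id: processed[i] for rec_id, i in index_of.items()}
```

This is the same shape as the NLP collect-output example mentioned earlier: the operation sees the whole dataset at once, but each record can still recover its own row afterwards.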
E
Not necessarily... what we really need to do is, like: we have this preprocessed matrix, and we have to just return each of the rows, so that it will come out in the CSV format when we do the merge command. Or, even if we don't do the merge command, if we try to train on something, then at least we have the column names, so that we can do training on top of them.
B
B
E
Let
me
start
with
the
so
this
is
the
data
flow
create
command
which
I
have
created
so
the
so
so
I
have
a
data
set
with
me.
It.
B
E
...is very small. But, like, it is a very large data set; I actually kept it small so that we can run through it multiple times. So this is the data set we have, and it is in CSV format. Okay, yes. So now, what we are doing here in the flow is... we have three things. So this is the output layer result; we want to get it into single spec.
E
This is the source length, which is actually how many rows we have in the CSV file, and the feature length is, like, in each of the rows, what the number of features that we have is, right? So, in the seed, what I am doing here is: I am providing all these values to the input layer as input data points.
E
Okay, the input layer also has another input, which is the source length and feature length. So it will take all these values and it will convert them into matrix form.
E
So, after that, the input layer will return the data as this output, which will then go to one of the standard scaler operations, which will remove the mean and transform the data so that it has unit variance. So this is the operation which we are performing on top of this matrix. And when we are done performing it on the matrix, what I am doing here is taking whatever result we get out of this operation.
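The standard scaler step (remove the mean, scale to unit variance) is the transform scikit-learn's `StandardScaler` applies by default; a minimal column-wise version in plain Python, for illustration only:

```python
import math

# Column-wise standardization: subtract each column's mean and divide
# by its population standard deviation, as StandardScaler does.
def standardize(matrix):
    n_rows = len(matrix)
    n_cols = len(matrix[0])
    result = [row[:] for row in matrix]
    for j in range(n_cols):
        col = [row[j] for row in matrix]
        mean = sum(col) / n_rows
        var = sum((x - mean) ** 2 for x in col) / n_rows
        std = math.sqrt(var) or 1.0  # avoid dividing by zero on constant columns
        for i in range(n_rows):
            result[i][j] = (matrix[i][j] - mean) / std
    return result

scaled = standardize([[1.0, 10.0], [3.0, 30.0]])
# each column now has mean 0 and unit variance: [[-1.0, -1.0], [1.0, 1.0]]
```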
E
We will give it to the input of this output layer. So this is the output layer, which has the input data, and this output layer has an output result which it will give to, like, the single spec.
E
So that was the dataflow creation operation, and then we have the merge command. In the merge command, what I'm doing is taking whatever dataflow we got in the JSON format, running it over the dataset that we have, and creating another dataset, which is the preprocessed dataset.
B
Okay, so we're having to do a lot of extra stuff here, right, to get this whole dataset flat: to load in the whole dataset and then run it through these operations that take the whole dataset. And that's essentially because the way that we're currently running it is to pass each record to a dataflow one by one.
B
Now, if we had another way of doing it, if we had a source, for example, that output records not necessarily based on a one-to-one mapping of input. So if we had a source where, when you ran it, the results were yielded from the output. Basically, it runs a dataflow, and within the dataflow...
B
We load all the data from a sub-source, right? So we want to load data from that CSV file, that kc house data. Say we had a dataflow that ran some operation that loaded every single record from the kc house data CSV file, and now it takes all of the records and passes them on. Now you have an operation that loaded all of your stuff, so now it is flat, right?
B
So now you can go and pass that to other operations within this dataflow, and then you just need a way to yield one record for each row. Because in this new setup, we don't necessarily know that we're doing a set of operations on each row; we just know that we're running a dataflow and it's going to yield us a bunch of records. Yes, so this is almost like a streaming use case here, right?
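The "run a dataflow, yield a bunch of records" shape described here is essentially a generator over the dataflow's output. A hypothetical sketch; the real dffml run API and the preprocessing step shown differ:

```python
import csv
import io

# Sketch of the streaming idea: one operation loads every row from a
# CSV, later operations transform the whole batch, and then one record
# is yielded per output row.
def run_dataflow(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Stand-in for the dataflow's preprocessing operations.
    processed = [{k: float(v) * 2 for k, v in row.items()} for row in rows]
    for row in processed:  # yield a variable number of records
        yield row

data = "sqft,price\n1000,250000\n1500,340000\n"
records = list(run_dataflow(data))
# two records, e.g. records[0]["sqft"] == 2000.0
```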
B
This is like: run a dataflow and it will return a variable amount of records, right? It just happens that we know how many records it's going to return, because it's doing that from a CSV file that we know of. But in general, it doesn't. This would essentially be our first streaming source, right?
B
So the question is, I mean, that should make things a lot easier for you, wouldn't it? Yes? Okay, so then we would need to figure out what that source looks like, and how we grab the records out of it. Do you return some giant object? Does it return,
B
In the results dictionary, for one context, does it return something that you convert into records and yield each one? That could probably be the easiest way to go about this for now. Yep. You see what I'm saying, though, right?
B
I think that might be a really easy path forward for you, because part of what you're doing here is implementing cleanup operations, but you're also trying to discover what is the best way to implement cleanup operations. And I think what we've seen so far is that we know there are operations that take the whole dataset, and so the existing approach that we have...
B
We can have multiple dataflow sources, or sources based on dataflows, because the current one may just need to be renamed to, you know, "run dataflow on each record" or something like that, and this one is just "run the dataflow and the output is each record". So I would say, do you feel like that would give you a clearer path forward here?
B
Yeah, I would say you could implement a source. Let's take a look at what the dataflow source looks like right now.
B
Yeah, this stuff will all get consolidated. And let me just hit on this again, because I don't think we quite did: like with the location stuff, we now have another place. We have the dataflow source, we have the dataflow running stuff, then we have the run-dataflow operation, and we have the model saving and loading. All of these are places where we are taking a dataflow and executing it.
B
I think what we'll probably get to is going through and consolidating all of those places to eventually call through the run-dataflow operation, so that anytime you run a dataflow, you're leveraging the same code. Because what's going to end up happening pretty quickly here, and we've already seen it in the existing dataflow source, in the existing command line interface code, and I think in the HTTP service code, is that we have to, it's like:
B
Oh okay, you're running a dataflow; you need to make sure that you provide inputs to each context, or know which contexts you want to run. All of that stuff probably should sit behind the run-dataflow operation.
B
Eventually. And then, like we talked about, everything is sort of a plugin, and this will have to do with the unified config: once we can get the config a little more unified, to where individual functions and operations and things like that look more like the rest of our classes, then anytime we have something that needs to take a dataflow, it will take a configured run-dataflow instance, because that configured run...
B
You probably want to use the existing dataflow source as a guide, and you'll probably end up copy-pasting a lot of code, but I think you may end up with less stuff, because essentially you don't have to worry about the source as an input. You'll have to write a new operation that outputs all the records from a source.
B
That would be something that you would do here, and in your configuration of the dataflow, you would then point it at that CSV source.
E
Okay, so we are going to use the dataflow source, right?
B
Yeah, I think you need to write a new dataflow source is what I'm saying. I think you can borrow some of the code, but you're essentially writing a new dataflow source. So the way you should think about this: you're writing a new source, so you're implementing the record method and the records method, and you can probably leave off the update method for now.
B
Actually, in fact, I would just say raise NotImplementedError in the record method and in the update method, and then only implement the records method right now, because that's the only thing that matters for your use case at this point. So, for a path forward here on cleanup operations: implement a new source, raise NotImplementedError in .record and .update, and implement .records to run a dataflow provided via the config.
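A minimal sketch of that plan. The base shape here mirrors what's described in the conversation, but the class, its constructor, and the dataflow stand-in are all hypothetical; the real dffml source API differs in its details (contexts, config plumbing, entrypoints):

```python
import asyncio

# Hypothetical stand-in for the new source being discussed: only
# .records() is implemented; .record() and .update() raise.
class DataFlowPreprocessSource:
    def __init__(self, records_in, dataflow):
        self.records_in = records_in
        self.dataflow = dataflow  # callable standing in for a dataflow run

    async def record(self, key):
        # Single-record lookup is not needed for this use case yet.
        raise NotImplementedError

    async def update(self, record):
        # Writing back is not needed for this use case yet.
        raise NotImplementedError

    async def records(self):
        # Run the dataflow over the whole batch, then yield each
        # resulting row as its own record.
        for row in self.dataflow(self.records_in):
            yield row

async def collect(source):
    return [record async for record in source.records()]

double_all = lambda rows: [{k: v * 2 for k, v in r.items()} for r in rows]
source = DataFlowPreprocessSource([{"sqft": 1000}], double_all)
out = asyncio.run(collect(source))
# out == [{"sqft": 2000}]
```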
B
Then you should be able to index easily and stuff. At this point, you would have loaded all your stuff into a giant array of arrays, or an array of objects, once you implement that operation and you're running it from this dataflow, which you're running from your new source. Yes. And so you need to figure out, okay:
B
When you're implementing the .records method within your new source, when you get output from the dataflow being run, somehow turn that output into records which you yield. Does that make sense?
E
Maybe we do not have to do indexing then, because we can save the preprocessed data into the same records, and we can directly train from that.
E
We have this source, and we will take all the data from the source, do preprocessing on it, and save the preprocessed data in the same source format. Then, I'm thinking, we don't have to do indexing or anything like that; we can just take the data from the source and do the training on that.
B
Well, you don't have to do that either way. I think, for the sake of a tutorial, it's nice to show the merge command, because then you can cat the output of the CSV file and show them that the data changed, and then you can show them that they train on it, the modified data.
B
So if you implement this dataflow and this new source that uses the dataflow, then you would run the merge command using this new source, not the existing dataflow source, and still save it to the intermediary CSV file. Yep.
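The tutorial flow being described, read from one source, preprocess the whole batch, and write the modified rows to an intermediary CSV, can be sketched like this. It is a plain-Python stand-in, not the project's actual merge command:

```python
import csv
import io

# Stand-in for the merge step: read rows, run a preprocessing step
# over the whole batch, and write the modified rows back out as CSV.
def merge(csv_text, preprocess):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    processed = preprocess(rows)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(processed[0].keys()))
    writer.writeheader()
    writer.writerows(processed)
    return out.getvalue()

# Example preprocessing: double every numeric value.
double = lambda rows: [{k: str(int(v) * 2) for k, v in r.items()} for r in rows]
result = merge("sqft\n1000\n", double)
# result contains a header row "sqft" followed by "2000"
```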
B
Okay, because I think your output layer might be similar here. You're probably not going to have that input-layer type of thing; that input layer becomes this operation that loads all the data from the source. And then the output layer is still: okay, now the output layer is, how do I select, how do I turn these things into records now?
B
Or, like, how do I return, probably, dictionaries, so that in the source itself I now interpret each dictionary in some kind of list that gets output as a record, which I then yield. Okay, cool. So let's have another meeting on Friday, just because I want to make sure that we're all making progress here and nobody gets stuck. I want to make sure everybody can finish.
B
You know, all the stuff that they set out to. Obviously we can do stuff after GSoC, that's always good, but let's try to get this stuff done here while we're at it. Okay, so please reach out to me, and also please ping, you know, Saksham and Yash and Himachu if you guys need extra input. Okay.