From YouTube: Weekly Sync 2021-01-26
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.1yi7z06woyxn
B: So actually, the issue here is that when we actually create a model, we create the context of it and we configure it.

B: For the scorer, we also configure it, and you also create a context for that as well. But when we actually have to call the score method, in this score method we will have to pass both these labels.

A: Sorry, can you recap real quick? I just want to make sure I'm getting all this in the notes. So when we create the model, we create a context for the model, and we create a context for the source, right?
B: Yes. So when we actually have to use a particular model context with a particular scorer context, we have to pass both of these labels in the API request.

B: So yeah, in the actual api.js there is only one.

B: So the problem was this: we actually have to pass the model context label, as well as the label for the scorer context.
A: Let's just look here for a second, because it looks like when we train... okay. So let's look at how the model stuff used to work. So train takes just the model label, and model takes just a model label. Okay, yeah, let's see: context, model label, ctx label.

A: I think it's because the data for the sources is in the POST request.
A: Yes, and part of the reason is that you might just throw the raw records in there; with the scorer you're never going to throw the raw records in there. Let's see... well, so we could reference it by the label, or we could put it in the POST body, and the benefit of putting it in the POST body is that we could...

A: ...maybe instantiate it if it doesn't exist, if we provide a config or something. But I don't know if we want to do that. I guess, what do you think the pros and cons of that are?
A: Yeah, okay. And then we have: the body is the source context names, so it's context.label.

A: Let's see... wait a minute, I swear you could just post the data itself.

A: Yeah, because we need to add that next. Okay, so you're saying this wasn't working?
B: It works, okay, but without this label it actually doesn't work. And by working, I mean: the model, which is the fake model we are actually testing here, its accuracy method was actually just a count of how many records there were.

B: We had an accuracy method which was just counting the number of records we had and returning that value, so that you can test whether it is getting the correct accuracy or not. But for my testing I'm actually using an MSE scorer.
B: The scorer actually doesn't work for that, because it tries to find a predict value.

A: I see, okay. So we probably need to be doing some validation on the fact that it has the predict method.

A: Yeah, okay. Let's see... that's interesting. Okay, so I'm thinking...
A: So, for the purposes... all right, so first off: does it matter, with the predict method, what results you're getting here, as far as MSE is concerned? I mean, can you just compare?

B: What I was actually thinking is, maybe I can create a fake accuracy scorer which will just mimic what was actually being done.
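A minimal sketch of what such a fake scorer could look like (the class and method names here are hypothetical, not the project's actual scorer interface): it mirrors the fake model's old accuracy method, which simply counted the records it was given.

```python
class FakeAccuracyScorer:
    """Test double: 'accuracy' is just the number of records seen,
    so tests can verify the plumbing without needing a real metric
    or a model that implements predict()."""

    def score(self, model_ctx, records):
        # Count the records instead of computing a real metric.
        return len(records)
```

Used in a test, `FakeAccuracyScorer().score(ctx, records)` returns `len(records)`, which the test can assert against directly.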
A: So we need to pass the model context label. Okay, yeah. I'm weighing the pros and cons of this whole thing, and I'm thinking...

A: Okay, so here's my thought process. We have all these URLs that take the main label of whatever thing we're working with, right? And what I'm realizing is, we have the ability to instantiate...
A: They necessitate that we've instantiated a context. Now, the con to this is that we can't instantiate a temporary object on the fly if we wanted to, because we're only passing the data in the URL. And originally, I built this trying to mimic exactly the Python API, and we didn't have those high-level functions at the time. So now I'm sort of getting...

A: I haven't looked at the HTTP service in a long time, and so this is giving me a fresh look at it. And I'm thinking, now that we're thinking about adding another label: does it really make sense to be using labels in the URLs at all? Given that we have to post some data, maybe we should just post all...
A: ...all the data, right? Because then we could configure temporary sources, models, and accuracy scorers on the fly. We could provide their context, or we could provide their whole config if the user wanted it, just for that one operation. The upside of this is that you can have temporary objects. The downside is...

A: The downside is that the body of the request becomes more complicated, which I don't think is a huge deal. So, sort of looking forward from that perspective, I think maybe we want to just include it in the body, and then we'll go refactor this stuff later, so as not to mix so much data between URL and body.
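As a rough illustration of the idea (the field names here are hypothetical, not the service's actual request schema), the POST body could carry both context labels alongside the source context names, instead of encoding them in the URL:

```python
import json

# Hypothetical request body: both the model context label and the
# scorer context label travel in the POST body rather than the URL.
# Later, a full config could be sent in place of a label to build a
# temporary object on the fly for just that one operation.
body = {
    "model_ctx_label": "my_model_ctx",
    "scorer_ctx_label": "my_scorer_ctx",
    "source_context_names": ["my_source_ctx"],
}
payload = json.dumps(body)
```

The design trade-off discussed above: a label-only body references already-instantiated objects, while allowing a config dict in those fields would let the server instantiate temporary ones.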
A: ...that we're growing the amount of information in the URL, and maybe it didn't make sense to have that information in the URL in the first place. So, to head off the fact that we might refactor it, let's just go ahead and include the label in the body of the request. Let's see... okay, now that does complicate your pull request further.

B: Okay, so what do I do now?
A: "Label" on its own sort of makes it seem like that's the score label, and the convention would be model and then model label; so for score, we'll want the score label immediately after that. So let's maybe do `mlabel` or something. So we do the score...

A: ...the score label first, and then the next parameter would be the model label, so `mlabel`, just for convention's sake. All right, cool. And then let's take a look at that score accuracy.
A: All right, so we have the model context: we have that `mctx_route` decorator on there, and that'll add that model context. And I believe we have, let's see, get source context. So I think what we'll want here is... do we have any routes where we're decorating with both? I don't think we do.

A: Oh, I see. Okay, I see what's going on; now I understand. So what we're going to need to do is make this decorator a function itself, like how the `op` decorator can take parameters.
A: Yeah, a parameterized decorator. So that's what we're going to do here. Let's see, we'll just make it an optional parameter, and that way we won't have to... oh, let's see. Yeah, I think we did that within... that should work, because I think we have an example of this within `op`. So let's go take a look at `op`.

A: dffml... df... and then, let's see, yeah, below config loader. There we go, df, and then I believe it's in base.
A: All right, here we go. So let's just scroll through this a little bit. I believe there was some stuff to say, you know, if there were no arguments, or what was it. Oh yeah, okay, scroll down a little more, and keep going until we get to the main body of the function. All right, `def wrap`. Okay, so here's the wrapper function; keep scrolling to the end of this wrap.

A: All right, here we go. So if there are any arguments, then we call... okay. So if there are arguments to this function and no keyword arguments, then that means the decorator was called without parentheses. So we can basically keep that behavior; we won't have to change all those existing `@mctx_route` definitions to be calling the decorator.
A: We can call the decorator based on whether it was passed any arguments or not, and then pass the label in the keyword arguments. Because, since we're only using keyword arguments within the decorator, if there is any positional argument to the decorator, we know that it's the function itself that's being wrapped; and if there are no positional arguments, only keyword arguments...

A: ...then we know that we'll return the wrapper itself, because that will be used to then wrap the function.
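The arguments-versus-keyword-arguments dispatch described above can be sketched like this. This is a generic version of the pattern, not the actual `op` or `mctx_route` code; the names and the label-passing behavior are illustrative only.

```python
import functools

def mctx_route(func=None, *, label="default"):
    """Decorator usable both bare (@mctx_route) and
    parameterized (@mctx_route(label="other")).

    Bare usage: Python passes the wrapped function as the single
    positional argument, so func is not None and we wrap it right
    away (existing @mctx_route definitions keep working unchanged).
    Parameterized usage: only keyword arguments are given, func is
    None, and we return the inner decorator for Python to apply."""
    def wrap(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Illustrative: hand the chosen label to the handler; a
            # real route would use it to look up the right context.
            return f(label, *args, **kwargs)
        return wrapper
    if func is not None:
        # Called bare: decorate directly.
        return wrap(func)
    # Called with arguments: return the real decorator.
    return wrap
```

Both `@mctx_route` and `@mctx_route(label="scorer_ctx")` then work, which is exactly the property that lets existing routes stay untouched while new routes opt in to a label.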
A: We can go through this. Okay, yeah, I mean, do you want to do a quick run-through of that, or does that make sense?

B: Okay, so I will actually try how it works, and then I will get back to it.

A: If you want a little example to distill this down to a slightly clearer place, because I know `op` is a mess of a function, then just ping me on Gitter and I'll send you a little example code.
A: Cool. So let's see, we'll... so you will modify `mctx_route` to make it a... let's see. Oops, I forgot to let everybody else in. Hey, sorry everyone, I was not looking at the list of people who wanted to be let in. All right, so we're going to modify the `mctx_route` to make it a...

A: ...parameterized decorator, like `op`. And then we'll copy and paste the `mctx_route` method, or decorator, and modify it so that it's a scorer context route. All right, cool. Okay, and then we'll use the modified version. So: add a label parameter to the modified route.
A: Add a label keyword argument to the kwargs of `mctx_route`; use it in score accuracy. What was the route name of the route we added? Score accuracy... let's see, yeah, score accuracy, all right, great. So are we good there? And then...
A: Yeah, sorry, I was early; I was thinking that was at 8:30. I don't know what happened. I think I had another meeting on my calendar and I got confused, and I had this big presentation this morning that was throwing me off. But yes, now we've started, and then I forgot to let you guys into the meeting because I was taking notes in the other window. All right, so let me just go back; I feel like there was some stuff I wanted to follow up on.

A: Okay, so Nitesh, I still have that LightGBM PR that I've got to do the pinning on. I think you've been following that pinning discussion that's been going on. And just to update everyone on that: this is basically the current blocker here on the release. So Josh and I went through...
A: Essentially, we did a lot of work to put stuff in the requirements.txt files, and that was because we need to scan. As part of some compliance, I have to do scanning for vulnerabilities, and so you guys might have seen Snyk; it's just this public tool that we use to scan.

A: We also use the public Coverity to scan. And so essentially what happened is that Snyk requires the requirements to be in requirements.txt files to be able to scan them. And then, when I scanned it, I thought: I know it's going to create issues, but great, I have to do all this work anyway. So we scanned it, it found issues, and most of them were with TensorFlow, so essentially the remediation here is: upgrade to TensorFlow 2.4.
A: And all of this is described in the issue too, so I'm not going to take notes on it. So we went to upgrade to TensorFlow 2.4 and found out that this breaks the NLP transformers module that Himanshu wrote for us, because there was an API break going from 2.3 to 2.4: transformers needs to be upgraded to something like 3.5.1, and despite that being a minor version change, it's got API-breaking changes.

A: I think we talked about version numbering schemes in a different recording, so I'm not going to go over that again. But, let's see, make sure we've got everybody. So essentially, this also ties into something that's happening in the Python packaging ecosystem. There are a few Python Enhancement Proposals, called PEPs, and these are basically how Python works.
A: So if you're ever curious about some core Python functionality, you're going to go check the PEPs on their website, and I think we have links here. So, for example, they're trying to get away from using setuptools, which is the backbone of those setup.py files and was, for a long time, the Python packaging ecosystem, because we would build what are called eggs, and I think they were supposed to be, you know...

A: Everything is a play on words with Python, so they're like python eggs. I googled for that and I was disappointed to see snake eggs all over the page; I sometimes forget that not everybody is programming. So anyway, now we're trying to build wheels, and the main motivation for this is: wheels are essentially a zip file, similar to the eggs, but they get structured and used differently, and they support binary...
A: ...you know, binary compilations. And if you guys are familiar with Conda: Conda became this place to host all your binary packages, because it's hard to distribute compiled packages that work on many systems. When you're compiling something, you have to be compatible with...

A: ...various versions of the C library, which is pretty much included everywhere, and various other libraries that are going to be present on the system, and different systems have different versions of things. So you switch from Debian to Ubuntu to Fedora, you might have incompatibilities, and Conda solved a lot of that by providing those libraries in a nicely packaged way.
A: That's consistent, from my understanding. So eventually, the Python ecosystem's support for compiled packages, and for helping the package maintainers compile those better, has matured, and they came up with this thing called the manylinux wheel format. And manylinux basically says: if you compile using these specific environment and settings, then stuff is going to work back to CentOS 6, which is a very widely supported enterprise version of Linux.

A: And if you guys aren't familiar with CentOS, it's basically the free version of Red Hat Enterprise Linux, so a lot of companies use that when they don't want to pay Red Hat. So that was chosen as the baseline of support.
A: I just kind of want to give you guys some background because, as you do more Python stuff in packaging, you're going to be having headaches about this. So anyway: we're trying to move away from setuptools; wheels support compiled packages; there's lots of machine learning stuff that's compiled; and as we've grown more and more into machine learning, we get more and more compiled packages, because we need performance...

A: ...and underlying hardware features, to get the parallel processing that's required. And so, enter PEP 517, because setuptools does a really bad job here. If you've ever tried to make a compiled package, you will find out that it is a huge pain, because with setuptools you'll run into this problem where you may try to import dependencies within the setup.py file that are required to compile your code.
A: So, for example, Cython. If you guys have seen Cython, it's a binding generator: it'll take these .pyx files, which are a mix of C and Python syntax, and generate pure C code using the Python APIs. And things like Cython will require that you import them in your setup.py file, but your setup.py file will require that you have Cython; you have to list it in this field called `setup_requires`.

A: So now you have this chicken-and-egg, or in this case python-and-egg, problem, where you don't know what the setup file is going to import until you import it, and so it's going to fail as soon as you import it. And this is why some modules, like Prophet and other things, will say: hey, you have to install Cython before. I think PyStan is a good example of this.
A: You have to install Cython; you have to pip install it before you pip install our package. So there are different build systems other than setuptools being introduced, like this one called flit. And so, basically, we need this non-Python format, so they're using the pyproject.toml file to now define...

A: ...basically, what packages you're going to need before running the setup.py file, or to build this project. And so essentially, this format is sort of the newer format, and it's the format that we have to move to eventually.
A: So we're basically going to take the opportunity to move to this format now. And the main reason was the way we were combining setup.py and requirements.txt: we would read the requirements.txt file into setup.py, because we used to statically declare the dependencies as an array within the `install_requires` variable.

A: Now, the problem came when we went to go pin all of these dependencies. Because, you guys remember, we did the 3.7 release, there were issues with TensorFlow, and so the versions mismatched for a while; numpy and TensorFlow had incompatible versions. And so for a while, the release packages were broken, because the user would install the package...
A: ...it would install the latest version of all these packages, because we have basically greater-than-or-equal on all the package versions, and then their install would break because the API changes were incompatible. So to combat this, there are these things called environment markers, which you can specify in your requirements.txt file, and they end up looking like... let's see, I think I have this branch here. Okay, sorry.

A: Oh yeah, so these environment markers. I don't think we have an example in this one. I think the one that we cared about was... okay, let's just look at where our pin tool is. That's one of these.
A: So you can say: if the version of Python, for example, is greater than 3.4, then install this module. And so this is useful, for example, if you want to dynamically support new functionality when the dependency supports that Python version, or if you're using backported packages. And this is going to be something that we run into shortly here, that you guys will see: there's this package, importlib.metadata.

A: So this is a package that was introduced in Python 3.8, I think, and since our minimum supported version is 3.7, we're going to eventually switch to using this importlib.metadata package. But because it doesn't exist within 3.7 (it does exist in 3.8), when we do the import call, it's going to fail...
A: ...unless we install this third-party package on PyPI that was, you know, released to support older versions of Python where it hadn't been added to the standard library yet. So we're going to add a line that says: hey, if the Python version is less than 3.8, then you need to install importlib_metadata. And the other thing that this comes into play for is as we're pinning these versions.
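In code, the backport described above is usually paired with a version-gated import like the following (the matching requirements.txt line uses an environment marker: `importlib_metadata; python_version < "3.8"`):

```python
import sys

# On Python 3.8+ importlib.metadata is in the standard library; on
# 3.7 the importlib_metadata backport from PyPI provides the same
# API, installed via the environment marker in requirements.txt.
if sys.version_info >= (3, 8):
    from importlib import metadata
else:
    import importlib_metadata as metadata  # PyPI backport package

# Either way, callers use the same interface from here on,
# e.g. metadata.version("pip") or metadata.entry_points().
```

This is the standard pattern for consuming a stdlib module that was backported to PyPI for older interpreters.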
A: What we noticed is, as we pin different versions across platforms (Linux, Mac, Windows), you'll end up with different versions being required on different platforms. So when we go to pin, this requirements.txt file becomes all equals-equals whatever was installed in the CI environment. And so this is going to ensure that the users always get a reproducible install, so they always get this environment where all these machine learning packages work together, which is our goal. And so the problem was that we can't...

A: We can't define these version markers in the requirements that are in setup.py `install_requires`. They're supported in requirements.txt if you use that with pip, and some of them are supported in setup.py if you list them within the array of dependencies with their versions, but not all of them; specifically not the one that's related to the platform system, which says: install this if you're on this specific system.
A: Now, it is supported, like I said, in requirements.txt and in this new PEP 517 format, which leverages this; there's another part of it, which is the setup.cfg file. So our option is essentially to move to this new format, which supports the environment markers, which we now require to pin the dependencies across platforms. That was an incredibly long explanation, but it's very annoying.

A: So that's the gist of it, and I felt like it was kind of hard to explain in an issue, so I wanted to make sure that there was enough background for everyone; also because packaging in Python is important, and it's good to have background. Anyway, I'm sure you guys don't want to hear me talk more about this, but: any questions on this?
A: All right, cool, great. So that's the current blocker on release, and I think we just wanted to cover that. How did we get on that? Oh, we were talking about... I needed to get back to you on the LightGBM. So basically, we're going to wait until at least after that issue to do this, and we'll just see how it goes; it shouldn't be too much work to do after that issue is solved.

A: We could get it in the release. All right, anyway, let's get back to you, Nitesh. What did you want to talk about, other than that you had investigated some source stuff?
E: Yep, I'm working on an HDF5 source, and so I just followed the SQLite example in the tutorial. And there is some issue while running the test case, so I wanted to show it.

E: All right... is it visible? Yep, okay. So that's the file structure. So basically, "dir" is like a group, and inside a group, "feature", "key", and "prediction": these three are the datasets that are stored in the HDF5 file. So users have to just pass that group name and the file name; the config contains the file name and then the group.
E: This is the `__aenter__`, where I have to open the file and then browse to the particular group name, and then in exit, I just need to close it. And there is a records function, and this function converts the values into the record. Okay, so I think I'm missing something to make a key; there is some kind of... oh.

A: Oh yeah, you want to get rid of that key. So basically, your modified record structure here.

A: Yeah, so you need, I think... data, you see, data equals modified record... wait a minute. Okay, what's the error?
A: What was the error that you were getting? Oh, I see; I think I see what's going on. "NoneType is..." I think this is because we didn't return self from `__aenter__`. This specific issue here, yeah.

A: Any time you have an `__aenter__` method, you need to return self, because the `as` is basically saying: okay, whatever the return value was is now this.
A: All right, yeah. Okay, and I think this is probably a holdover from the fact that this is no longer a subclass of the memory context. So, let's see, yeah: I think that method will just need to be converted there to modify the HDF5 file directly.

A: Yeah, just like you're reading from it directly, right, because you weren't putting those in memory. You're reading from the file directly in records and record, and so you're going to want to write to the file directly, rather than how you had it. Because you converted this from the file source example, which was backed by memory, and so now you're just not backing it with memory.
A: I think this looks great. So I am a little concerned about... I think the only thing here that I would recommend is this features/prediction/key structure. That will mean that our HDF5 file must have those things within the group, right?

A: Yes, and this particular...

A: Yeah, right. So by doing that, we're sort of boxing in the user to needing to structure their HDF5 files like that, right?
A: Yeah. So I think we need to, one, like I said, probably look at a few examples of HDF5 files out there, to have some data to back whatever decision we make here, because I'm sure those aren't going to be standard. That structure may be standard, but those keys may not be. So, at a minimum, we need to make those keys configurable: the features, prediction, and key.

A: We need to make those configurable variables. Maybe we default them to those values, but we need to allow the user to override them. And then we also need to do some more looking into making sure that some example files out there, maybe some popular datasets, really do follow this format. I think you said that this is basically just how it works, but we need some...

A: We need some kind of hard data on that, just to be sure.
E: I have created an HDF5 dataset which follows this particular format, for testing, as an example.

A: Yeah, cool. And I mean, I'm just saying: if we can just have one link to a public dataset that we could point to, maybe in this format, then we can make sure that that's the way other people are doing it. I mean, I think it sounds...

A: It sounds like it's standard, right? But I feel like we ran into something like this before, and I can't remember what it was. It may have been when we did the IDX source.
A: It's basically the MNIST data, but we had that file and it was the example file, and it's just good to have a specific dataset that uses that format, as a live dataset, just to make sure that we're in tune with the rest of the community.

A: And you don't have to make a test case with the link. It would just be good to have on hand in case somebody asks us for the reasoning, because we need to have documentation on why we make decisions. Okay, so it would be good to have a link to some dataset that uses this...
E: ...HDF5 file. And the feature is like a numpy array, and the prediction, the keys: all these things are numpy arrays. So is it possible that a user may create this kind of format where there is no group? In that case, I just need to browse from the root of the HDF5 file.

A: Cool, yeah. So we'll just make group optional to that. So we're going to make group optional, and we're going to make features, prediction, and key optional.
A: And then, let's see. So, the way you made that file: within the test cases, did you dynamically create the file, and include the code to create it within the test cases? Okay, great, then we're good there. Yeah, I think this looks great. And then the only other thing is that readMode; I think we need to go from camel case to underscore.

A: Oh, okay, yep, all right, sweet. Anything else you want to talk about? All right, great.
E: Actually, I have also started working on that model: the h2o AutoML.

E: Yep, because I have worked on it before, so it will be easy for me. So I just started; maybe next week I will make a couple of pull requests.
D: When we do the `load_file`, in dataflow run we have the code written as `loader.loadb`, so it's just using the `loadb` function, all right, and not going through the code you've written in the config loader.

D: `loadb`: so the `loadb` function is in the JSON, YAML, and everywhere, and when we write a new config loader, then we write a new `loadb` for it, right?
A: Let's see... it can figure out what kind of config it is. Okay, source, yeah. Okay, where was... I think there was more CLI stuff, yeah. Okay, we have it in utils, CLI, command.

A: So, let's see, and we have it here. I'm thinking that this is already instantiated within the command before we call the run function, isn't it? Let's see, where was it?

A: So I think maybe we just go ahead... I think we should probably just go ahead, and let's see, we have a...
A: Instantiate the class, yeah, we instantiate... wait, command `do_run`. `do_run` is the class method, and where was the class? So command equals class, yeah. I mean, I think let's just go ahead and assign config loaders as a, you know, a...

A: ...class-local variable, right. So, you know, we can even just do like `args.command.config_loaders = config_loaders`. And then, well, I think probably the correct way to do this is to override the `__aenter__` method of command here, yeah. Oh okay, we have an `__aenter__` method, so maybe we just throw it in here.
A: Let's see, what are we doing here with the big letters?

A: Yeah, so we specify the config loader; was that what was going on? Let's see, config loader, yeah. Okay, so now we just modify... this would be config loaders; it's already instantiated. We just need to change it to use the config loader's interface right here, because I think we do, basically...
A: ...the flow will need to be modified here, but I don't think you're going to... yeah, we may see some changes, but I don't think we're going to have massive changes to the files, because the level of indentation here that's relevant is basically these four lines; they're probably going to change to a different level of indentation. I think it's going to be pretty minimal file changes. I believe we can still specify... so, config loaders until config.
A
Cool, great. I just want to make sure the method... okay, load file. Okay, this takes a file path, so I think... okay. So the difference here, I believe, is going to be that this looks at the file's extension. We have a lot of examples that pipe in and look at /dev/stdin, so we may want to... let's see, yeah, we may want to go here and, let's see, load file... we may need to basically add the ability to, like, specify the extension or something. There's load single file, yeah.
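The extension-based dispatch being described could look roughly like the sketch below. The parser registry and the toy key=value format are made up for illustration; the real project registers ConfigLoader plugins per format.

```python
import json
from pathlib import Path


def parse_json(text):
    return json.loads(text)


def parse_kv(text):
    # Toy key=value format, just to show a second registered loader.
    return dict(line.split("=", 1) for line in text.splitlines() if line)


# Extension -> parser. A simplified stand-in for a config-loader registry.
LOADERS = {".json": parse_json, ".kv": parse_kv}


def load_file(path):
    """Pick a parser based on the file's extension, as discussed above."""
    p = Path(path)
    try:
        parser = LOADERS[p.suffix]
    except KeyError:
        raise ValueError(f"no config loader registered for {p.suffix!r}")
    return parser(p.read_text())
```

Note that piped input via /dev/stdin has no usable extension, which is exactly the gap raised next in the discussion.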
A
I think we're going to need to sort of, like, implement support for: what if there's no extension on the file? And, you know, provide a parameter that says, you know... so if we are looking at /dev/stdin, for example, base dir is going to be the current working directory. Or I guess it already probably defaults to that, doesn't it? Or does it... base dir equals... base dir.
C
So yeah.
D
A
Support piping. Or: currently, we specify config loader format or do auto-detection via file type within the dataflow CLI. That's dataflow.
A
Then refactor dataflow code to use that added property, and make sure taking input from stdin or /dev/stdin still works. All right, great. Do you think that accurately describes what we're doing here? Yeah? Yes, that's perfect. All right, great. I want to make sure that we have enough info here. So, okay, yeah. I've noticed it has a little bit of a weird UI thing, the GitHub command line tool, since they did an update on it. All right. This is important.
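The action item just dictated (an explicit format property, with /dev/stdin still working) might be sketched as follows. The parameter names `fmt` and `base_dir` are hypothetical, not the project's actual signature.

```python
import json
from pathlib import Path

# Stand-in registry; the real CLI would expose its ConfigLoader plugins here.
LOADERS = {"json": json.loads}


def load_single_file(path, fmt=None, base_dir=None):
    """Load one config file.

    fmt: explicit format name for paths with no usable extension
         (e.g. /dev/stdin when piping); otherwise inferred from
         the file's extension.
    base_dir: base for relative paths; defaults to the current
              working directory, as discussed above.
    """
    p = Path(path)
    if not p.is_absolute():
        p = (Path(base_dir) if base_dir is not None else Path.cwd()) / p
    if fmt is None:
        fmt = p.suffix.lstrip(".")
        if not fmt:
            raise ValueError(f"{path} has no extension; pass fmt explicitly")
    # /dev/stdin is an absolute path, so on Linux read_text() consumes
    # the pipe and piping keeps working unchanged.
    return LOADERS[fmt](p.read_text())
```

With this shape, `cat config | dffml ... /dev/stdin` would only need the explicit format argument added, and extension-bearing paths behave exactly as before.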
A
And let's just go ahead and target the next release. You know, just... no pressure.
A
So I think, you know, just to recap on this, because I think I've explained it at one point, but a long time ago: basically, the way that I was doing the project management of this is essentially, as we get close, what we want to do is have this defined release schedule of, like, you know, ideally shoot for every two weeks. But the thing is, you know, I got bogged down in compliance stuff, and then recently they simplified the compliance process.
A
So now I'm sort of unblocked on that, and so we're ready to go, pending, you know, this... our main pinning issue. So then what we do is, basically, I try to get all the things that we think we can get done in, and tag them on that milestone, and then, as we go to the next release... you know, we're going to have a 0.4.x series right before we get to 0.5.0.
A
Most things are tagged 0.5, and then, as we do this next release, we're gonna think about, you know, the next two-week cycle and what everybody's working on, and we're going to create a new 0.4.1 milestone and start thinking about which issues we want to tag for that milestone and try to be working to get done before that release. And we're just going to try to stay on that cycle.
A
Now that this compliance process is faster. So, does that make sense? Oh yeah, I see, all right.
A
Great, yeah.
A
Yeah, it's more of a staging thing, right? Sort of like, you know, we'd like to get all these things done by then, but, you know, we don't know exactly what we're going to prioritize until we get through this, you know, this next release. So then we start looking... we pick at that as our, like, staging list. At least, that's how I'm thinking, you know. I think it's been pretty successful for the past few releases. So, obviously, this release has been a large debacle.
A
You know, on my part, because of this compliance struggle. But luckily, things are smooth sailing now, it seems like, knock on wood. All right. Okay, so, yeah. And I'm gonna work to get this pinning thing done, and then we can throw that out the door. All right.
C
A
Let's see, and then let me make sure that we get this issue in there, in the body here. All right. Well, thank you guys, and... oh, wait, there is one last thing. I put it at the top; I almost forgot it. So, GSoC 2021: I'm thinking of, you know, the project ideas list. Saksham, I was wondering... I think I have... yeah, I wanted to talk to you one-on-one sometime.
A
Right, okay, great. So, yeah, we'll figure out a time to do that, and yeah, if you'd be interested in helping mentor this year, that would be awesome, and we can just talk more.
A
So, yeah, let's see if we can do that. And let's see... so, you have hyperparameter tuning, preprocessing, Jupyter notebooks for examples. So we wanna try to get those things done, and that sounds good. All right, cool. And then, anybody who has... so, these are possible projects that Yash and I brainstormed.
A
So if you guys have any possible project ideas, you know, go ahead and post them as an issue, or add them... you know, you can just run them by me.
D
A
D
We should focus more on fixing everything up and adding other preprocessing and stuff, rather than adding models.
A
Yeah, yeah. So, let's see... so, and then, let's see, what's...
A
Yeah, so, I don't believe that's within scope now... now I think there's a bit of... yeah, it's not in scope, which is unfortunate, because obviously, you know, our documentation is heavily code-based, right? You know, the documentation involves a lot of programming to get it all right. So, I don't know, yeah. But I think, you know... yeah, I think it might be a no.
A
So I think let's try to stay away from that. Although, you know, with the way that we have things structured, I think we're in a space where we can pretty easily turn a lot of... you know, maybe examples: we can work on examples and then pretty easily turn those into docs. So, let's see, let's try to stay away from adding models as a part of...
B
A
Yeah. Now, the web UI... I think the web UI is... well, you know, last year CVE Bin Tool had a project where one of the students did a web UI, and I think we'd stayed away from that last year because, well, we're part of the Python org, right? But, you know, Terri runs the Python org and she runs CVE Bin Tool. So, you know, a student on her team did a web UI that was mostly, you know, JavaScript, HTML.
A
Okay, let's see. Yeah, because there is, you know... I think I got a pretty good way into this. Obviously, I'm not incredibly fluent in React, but, you know, there's definitely a base to start from here. So, all right. And yeah, anything else, just let me know; we can try to post those. So, cool, all right.
A
Well, thanks, thanks everyone. And, you know, just ping me; make sure you bug me if I owe you something, because I obviously have many, many irons in the fire right now, so it's hard to juggle things and remember. So, especially, you know, quick stuff: I want to make sure that I can help unblock, right? Cool, all right. Thanks everyone, and have a great rest of your week.