From YouTube: Weekly Sync 2020-12-15
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.qca2n3wt2b0x
A
I've gotta— I picked up another meeting right before this one, so unfortunately I'm starting a couple minutes late. So, let's see — all right, let's go ahead. The pull request.
A
So, let's see — the one thing I did notice — so yeah, I guess I haven't, I didn't get a chance to review the whole thing yet, but I did notice that you have an example through Python, but we need an example through the command line as well.
A
So, oh — and I think, you know what, there's a good opportunity here. Maybe we'll just do this all together and I'll show you guys how to do the console testing, yeah.
A
This is a great opportunity for that, and then we'll have a video clip of it. I was just talking to Yash last week about how YouTube videos are very useful for people learning things. I personally don't watch a lot of YouTube videos, but we should have some, because they're useful. So, let's see — yeah, I never think to watch a YouTube video.
A
And so we're going to go over that. And then, do you have anything else you wanted to talk about?
B
Yeah, actually, I found an issue: the default values in a model are not actually the defaults present in the library. Let's say, for example, XGBRegressor — the actual default value is not the same as our default value in XGBRegressor, so that should be corrected.
A
I think I'm going to need a little demo of this one, because I'm not following exactly — but okay, so we'll get to that. Okay, so then, Shaw, what do we have on the docket for you today?
B
I'm done with the test cases for the anomaly detection model. I'm hopefully going to make a pull request by tomorrow. Sweet — yeah, there's actually a couple of small things I want to talk to you about, regarding one of the test cases, and I was hoping we could discuss that. All right, great.
A
Great, all right, good stuff. All right, let's just dive in then. First we're going to do — well, you know, we'll wait for anybody to trickle in before we do the classifier, because I want to show how we're going to add the console examples. So let's jump into the anomaly detection model first. So — you're done with the test cases, and you want to talk about one of them. Could you share maybe some code with that, or a branch, or a screen share?
B
All right, so this is the test case I want to talk about. I use this—
A
Code from — sorry, one second, I gotta—
B
Yeah, so this test, test_02_predict. Basically, I used the code from the simple linear regression test case to sort of — but for anomaly detection, all we have are the labels zero or one. So this doesn't make sense.
A
Okay, so what is our test data? So — test one CSV, okay. And, all right, I take it the test one CSV is within the root of this directory here. Can you open — so we're writing a test for the anomaly detection model, and we have A through K as our features, and then Y is our label. And so we want to modify the prediction method — or the prediction test — from the SLR model to make it so that, instead of seeing if the output value is within 10, we just do sort of a yes/no, because this is essentially a classification problem, right, with what we're looking at. Okay.
A
So, let's flip back to the test cases. What we're going to do here is — so the predict method, or the predict function, yields three things: the index and the feature—
A
Actually, I think that's the key — yeah, the key, the record key, the features, and the predictions. And so you can see here we're getting — yeah, so features, Y is the correct one, prediction, Y value. And so basically what you're going to want to do is — you can either compare directly and say, you know, assert equal: you can do self.assertEqual(correct, prediction), and that will make it so that the test case will go through every single one and check if they're exactly correct.
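The exact-match check described above can be sketched in plain Python. This is a hypothetical illustration of the pattern, not DFFML's actual predict API — the function name, the `(key, features, prediction)` tuples, and the `"Y"` label name are assumptions from the discussion:

```python
# Hypothetical sketch: iterate over (key, features, prediction) results and
# compare each prediction to the correct label stored in the features.
def check_exact_predictions(results, label_name="Y"):
    """Assert every prediction matches its correct label exactly."""
    for key, features, prediction in results:
        correct = features[label_name]
        # The equivalent of self.assertEqual(correct, prediction) in a TestCase
        assert correct == prediction, f"record {key}: {prediction!r} != {correct!r}"

# Example: two records, both predicted correctly, so this passes silently
check_exact_predictions([
    ("0", {"Y": 1}, 1),
    ("1", {"Y": 0}, 0),
])
```

Inside a real `unittest.TestCase`, the `assert` line would be `self.assertEqual(correct, prediction)`, which reports every mismatched record.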
A
Yeah, so I think — so you're — yeah, okay. So what you would do here is — okay, you're already doing that with accuracy, I see what you're saying. Yeah, I mean, in this case it sounds like you've implemented your accuracy method to be what the body of this predict test would be. But yeah, the predict test here would basically be: you know, sum up how many you got correct, and then check that the number you got correct was within some acceptable range, right. So—
B
What I actually did was, I calculated the number of false positives and false negatives and used that to calculate the F1 score of the model.
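For reference, the F1 score B describes is the harmonic mean of precision and recall, both of which come straight from the true-positive, false-positive, and false-negative counts:

```python
def f1_score(true_positives, false_positives, false_negatives):
    """F1 from raw counts: harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# 8 true positives, 2 false positives, 2 false negatives:
# precision = 0.8, recall = 0.8, so F1 = 0.8
print(f1_score(8, 2, 2))  # → 0.8
```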
A
Okay, okay, yeah. So what your accuracy method returns is the F1 score — is what you're saying. Yep, okay. And just to confirm: you have some comments in the code saying that that's what it is, right?
A
Cool, just checking. But yeah, so—
A
I
would
just
say
why
don't
I
would
just
do
for
the
at
the
risk
of
re-implementing
it
and
and
I'll
at
the
risk
of
re-implementing
similar
functionality.
I
would
just
go
here
and
basically
sum
up
the
number
you
got
correct
and
you
know
say
you
know
correct
divided
by
total
and
say:
what's
the
percentage
you
got
correct
and
you
know
make
that,
like
you
know,
90
is
what
we're
shooting
for
to
get
correct
and
then
do
you
know
a
cert
greater
the
percent.
A
—you got correct, 90. And the reason why you want to do that in this case is because we're going to change the way that the accuracy method works — Sudhanshu has been working on that — and so, when that happens, we're going to want to have these predict methods lying around to make sure that, you know, we're transitioning everything the correct way. Because if we remove accuracy methods, then it's helpful to have another test case that replicates similar functionality.
B
Yeah, I'm sorry, I didn't quite catch the last part — I think my connection just broke.
A
Yeah, so basically we want to implement — in this case, you know, it seems a bit redundant, but we're going to want to implement it because we're in the middle — and this is sort of, like, a you know, an all-of-us-working-together, teamwork sort of situation more than a code situation, but yeah. And so, because we're doing this change to accuracy and the way accuracy is working—
A
We're
going
to
want
to
have
a
second
test
in
here
it'll
be
helpful
to
have
this
second
test,
which
is
a
you
know,
a
similar
implementation
of
the
one
above
it,
but
it'll
help
us
when
we
go
to
change
the
accuracy
stuff,
because
we'll
now
we'll
know
that
things
are
still
functioning
correctly,
because
that's
going
to
be
a
large
code
change
and
the
more
passing
tests
we
have,
the
more
it
can
help
us
find
the
ones
that
are
really
failing.
B
Right, so what I'll do is something like self.assertGreater(prediction, the number of examples I got correct) — something like that, right?
A
X percent of the number of examples in the tests. All right, great, cool. Sorry — do you have—
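The percent-correct check A describes can be sketched as a unittest method. The class name, data, and 90% bar here are illustrative (the bar comes from the meeting; the rest is a stand-in for the real model's predictions):

```python
import unittest

class PredictPercentTest(unittest.TestCase):
    # Hypothetical (correct_label, prediction) pairs; 9 of 10 are correct
    RESULTS = [(1, 1), (0, 0), (1, 1), (0, 1), (1, 1),
               (0, 0), (1, 1), (0, 0), (1, 1), (0, 0)]

    def test_predict(self):
        correct = sum(1 for label, pred in self.RESULTS if label == pred)
        percent = correct / len(self.RESULTS)
        # 90% is the acceptable-range bar mentioned in the meeting
        self.assertGreaterEqual(percent, 0.9)
```

This keeps the predict test independent of the accuracy method, which is the point: when the accuracy API changes, this test still exercises prediction end to end.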
B
Something else? Yeah, one more thing: I couldn't quite figure out how to use config to take a float as an input.
B
I used grep to get some insights on that. It didn't work. So what we'll do is — I think — I spoke to Saksham about this. He said that he's gonna help me, so I think I'll sort this out.
A
Well,
so,
let's
well
I'd
like
to
I'd
like
to
just
look
at
it
one
second,
so
you
wanted
to
take
one
of
the
config
parameters
as
a
as
a
float
right.
A
Yeah, you know, actually, you just hit on a really good point. We need documentation — a whole page, at least.
A
Yeah, okay, so one second. Okay, okay, yeah. We need a whole documentation page on config. And you were having an issue taking config as a float — so, yeah, obviously there's no documentation page on this; so this is why you're having an issue doing this. So, all right, but let's just do it real quick. Can you keep presenting? Because we'll just do it real—
A
—float, and yeah. Actually, that should be the end of it, quite honestly, but yeah, I think that's—
A
Yeah, I believe this is it. Now, you may need it before those equals-field, but you should be doing equals-field to give it a description anyway. So probably, you know, `k: float` equals field, and then whatever you want its description to be — and if you wanted it to have a default value, then you say comma, default, and then whatever its value should be.
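DFFML config classes are built on dataclasses, so the pattern A is describing — an annotated field with a description and an optional default — can be sketched with the standard library alone. The helper and config names below are hypothetical; in real DFFML code you would use its own `field()` helper instead:

```python
from dataclasses import dataclass, field

# Plain-dataclass analogue of the pattern discussed: DFFML's field() helper
# wraps dataclasses.field() and attaches a description; the shape is the same.
def described_field(description, default):
    return field(default=default, metadata={"description": description})

@dataclass
class AnomalyModelConfig:  # hypothetical config name
    k: float = described_field("Hypothetical float parameter", default=0.8)

config = AnomalyModelConfig()       # no argument: the default is used
custom = AnomalyModelConfig(k=0.5)  # k=whatever overrides it
print(config.k, custom.k)
```

Without a `default=`, instantiating the config without `k` would raise, which matches the behavior A describes next.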
A
Oh, well, that is — so, basically, everything — so it's a field within this config structure. And actually that's a good thing that we'll explain when we write the documentation — explain that field is a—
A
Okay, yeah, all right. So now we should be able to say — so yeah, so it will now throw an error: since you didn't give a default value, it will throw an exception if you don't provide one.
B
Yeah, so can I set a default value here, or should I just set it directly in the function?
B
I can actually help with this, because I wanted to, like, understand how this works. So, okay.
A
Yeah
that'd
be
awesome.
That
would
be
awesome.
I
wonder
where
we
should
put
that
that's
a
okay!
So
now,
if
we're
thinking
about
where
should
we
put
this,
let's
see
until.
B
Right where we have the documentation for features and everything else — I think that would be a nice place.
A
Yeah, okay, yeah. We can probably put it under the usage tutorial section, and then — we have a section talking about the config, and we can link from the example config to the full documentation about how to do config. Let's see: put this under usage tutorials, make sure to link from the model's config.
A
All right, sweet, yeah. And so this should now be accessible as self.parent.config.k, and then, when you do — let's see — with your tests, when you instantiate the model — if you could flip over to the test real quick, thanks — yeah, so self.model, yeah. If you just say, you know, k equals whatever, that'll change k.
A
Sweet, all right. Anything else on this?
B
Not on this, but you told me to ask you how documentation is written once I'm done with the test cases and everything. Aha.
A
Okay, well, that's actually a perfect segue into the next thing, because we're gonna talk about how to write the examples for the XGBoost classifier. So, all right, great. And then I also want to say — Sudhanshu, do you have any — I saw you have — we talked about phase four or phase five; we're almost done with that.
A
I
think
there
were
just
two
test
cases
within
the
main
module
that
are
failing
right
now
and
that's
I
mean
obviously
we're
having
a
hard
time
seeing
that
in
the
ci
because
of
that
unit
test
issue.
But
do
you
do
have
you
gotten
a
chance
to
look
at
that
yet
and
do
you
have
anything
else
you
want
to
talk
about.
A
All right, okay! Well then, while he's doing that — Saksham, how's it going? Do you have anything you want to talk about today?
C
Hey, no, I just joined the meeting to see what everyone's up to. Oh, and I left some comments in the chat, and also on my pull request.
D
Yay, okay — so I feel like there's something wrong with the asynchronous part. Okay, so I will have to, like, look into it.
A
When you say the asyncio part — oh yeah, oh, the __aenter__ / __aexit__. Oh, that's right!
A
Okay, so let's — where's that PR? Okay, I want to do this real quick, and then I want to talk about — or no, let's talk about — okay, so we were just talking about — let's cover this; remind me to cover this in case I forget. But we were going to talk about how do we write — and then — so we were talking about Shaw's anomaly detection model, and then he said that, you know, the next step—
A
There
is
writing
the
documentation,
and
so
we
we're
also
going
to
cover
how
to
do
the
console
examples
with
the
xcg
xgboost
classifier
model,
so
we're
going
to
do
okay,
so
how
to
write
examples
or
how
to
write
doc
string
examples:
okay,.
C
Okay, okay — so I have another meeting. Okay, can we talk first? What did you want to talk about? Yeah, I left some comments on the—
A
Let's see — oh yeah, so this is with regards to some of Saksham's stuff, so I will take a look at this — or, sorry, no, with regard to Sudhanshu's stuff — and so we're going to take a look at this and see if that might fix the issue with the accuracy importing. So thanks for getting back on that. And then, let's see — and then the other thing was — oh, I think I'm just going to merge this. I think I just hadn't merged it yet. So, unless — wait a minute, did I write—
A
Oh
yeah,
I
think
I
responded
with
oh,
it
did
the
thing
where
I
forgot
to
respo
hit
the
wrong
button:
okay,
yeah!
I
just
wanted
to
say
yes
on
the
logger
info
and
then
can
you
also
create
an
issue
so
that
we
can
track
that
thing
in
df,
because
we
should
do
that
since
we're
here.
Can
you
change
yeah?
Can
you
change
it
to
logger
info,
because
I
don't
think
I
can
make
a
a
suggestion
to
do
that.
A
All right, thanks, Drum — have a good one. Bye. Wait — okay, so, all right, we're gonna dive into the console test stuff. So—
A
So I'm going to add your branch as a remote here.
A
So this is basically the gist of testing the docstring here with the console test plugin. So we're gonna grab this, and we're gonna look at — let's see — test classifier model, and I'm just going to dump this in here for a second.
A
Because we don't need any of that setup stuff, and all we have to do is pass it the class that we want to test the docstring of. So we're going to pass it — actually, the classifier model. All right, all right. Let's actually move this to the top here — oh no, we'll keep it at the bottom. Okay, so — and then this one you had — okay, you were running the example. Okay, so yeah — okay, so this is, yeah, that's good! We'll leave that there, because we can safely—
A
We
can
safely
do
that.
We
don't
need
to.
We
don't
need
to
change
that
since
we're
already
running
it,
but
all
right.
Okay!
So
now
we'll
go
to
the
dock
string.
A
You
have
this
example
usage
here,
which
is
basically
run
this
file
all
right,
so
we're
going
to
put
command
line
usage
and
we're
gonna
basically
figure
out
okay,
so
you
don't
have.
Let's
see.
A
Okay,
so
first
off,
I
guess
we
have.
This
should
be
iris.
A
And
then,
secondly,
because
we're
doing
command
line
stuff
we're
going
to
need
the
the
iris
data
set.
So
where
did
the
irs
data
set?
Go.
A
Download
the
training
data
download
test
data,
and
then
this
is
basically
this
is
sort
of
the
the
magic
part
of
this.
Is
we
just
put
test,
and
now
it's
going
to
run
that
and
it's
just
going
to
check
the
return
codes.
So
those
commands
you
need
to
make
sure
the
commands
are
running
exit
with
a
with
a
bad
return
code
if
they
fail
so,
for
example,
curl
I
found
out
recently
unless
you
patch
dash
lowercase
f,
it
doesn't
actually
exit
if
there's
a
bad
something
bad
happens.
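The check A describes — run the command, fail only on a non-zero return code — is the same contract `subprocess.run(..., check=True)` enforces, so the plugin's behavior can be sketched like this (the commands here are stand-ins, run against the Python interpreter so the sketch is self-contained):

```python
import subprocess
import sys

# The console-test-style check boils down to: run the command and fail
# if it exits with a non-zero return code.
def run_checked(cmd):
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

run_checked([sys.executable, "-c", "print('ok')"])  # exits 0: passes silently

failed = False
try:
    # Simulate a failing command (22 is curl's exit code for -f on HTTP errors)
    run_checked([sys.executable, "-c", "raise SystemExit(22)"])
except subprocess.CalledProcessError as err:
    failed = True
    assert err.returncode == 22
assert failed, "the failing command should have raised"
```

This is why `curl -f` matters: without `-f`, curl exits 0 even on an HTTP error, and a return-code-only check like this one never notices.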
A
Okay, yeah — I think that was to—
A
Oh yeah, okay — changing that, yeah. Whatever format they had it in, it's not the same as the format that we like it in. So, all right: we change the format, download the training and test files, change the headers to—
A
Yeah, I'm not sure what the deal was, but they don't actually name — yeah, they don't — oh, the first row — I believe the first row with this dataset is the encodings of the classifications, but we obviously want real CSV headers. For some reason they don't do it that way, so we just change those — change the headers to DFFML format.
A
Is this running? Does this—
B
Yeah, I have run the Python files, yeah. It's working fine, okay.
B
Yeah, there are some warnings.
A
Okay, yeah, that's fine! So what were the warnings? I didn't even read them — no, yeah. This has been going on; I don't think this is our issue. This is something underlying — yeah, scikit-learn. Unless it is — let's see.
A
Yeah,
that's
something
in
an
underlying
library.
So
that's
not
our
issue,
nothing!
We
can
fix
all
right.
So,
let's
see
okay,
the
the
thing
that
I'm
concerned
about
here
is
is
features
is
just
taking
data
and
I
think
that
this
works
fine,
for
this
example,
data
set
that
you
have
here.
However,
that
is
not
the
okay,
I
guess
yeah
okay,
so
this
works
fine.
A
For
your
example,
data
set
it's
not
going
to
work
fine
for
our
for
our
training
data
set,
okay,
never
mind
it's
just
because
that's
the
format
that
it's
in
also,
though,
let's
see
you're
saying
that
the
feature
length
is
10.
When
you
do
this-
and
I
my
guess,
is
you
know
this
this-
this
obviously
works
because
you're
not
validating
the
the
length
of
the
feature
here.
So
if
we
were
to
look
at
these
data
frames,.
A
Let's
look
at
the
x
data
all
right.
Let
me
just
do.
A
All right, so this is our x data, yeah. And so there's not 10 — there are three, which is fine, you know, it works, but we should probably have some validation on this. So, okay — `if np.isscalar(feature)` ... else feature, okay.
A
Yeah
yeah
so
well,
if
you
specify
so,
if
you
specify,
if
one
were
to
specify
10
here,
that
would
mean
that
we
should
have
10
columns,
because
you're
saying
that
this
is
a
you
know,
a
vector,
that's
of
link,
10.,
yeah,
yeah,
and
so
in
this
case,
obviously
we're
not
we're
not
doing
validation.
I
mean
the
only.
A
I
think
the
only
one
of
the
only
places
that
we're
doing
validation
is
in
model
tensorflow,
but
in
this
case
let's
we'll
just
put
four
because
that's
the
correct
value
and
we
can,
we
can
add
a
to
do
for
validation
so
or
well.
We
can
just
let's
just
do
the
validation
so
since
we're
here
all
right
so
feature-
and
this
is
the
featured
data
so
future,
let's
see
okay,
so
self.features.
A
All right, so we create this — extend data — and then — power just flickered — and then we say: if the feature length is not equal to the length of the extended data, then we raise an issue.
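The validation being added can be sketched as a small check: the feature's declared length must match the number of columns in the data extended for it. The function and argument names here are illustrative, not the exact DFFML internals:

```python
# Sketch of the validation discussed: a feature that declares a length of 4
# must be backed by rows with exactly 4 columns.
def validate_feature_length(feature_name, declared_length, row):
    if declared_length != len(row):
        raise ValueError(
            f"Feature {feature_name!r} declared length {declared_length}, "
            f"but the data has {len(row)} columns"
        )

# An iris row has 4 feature columns, so a declared length of 4 passes...
validate_feature_length("data", 4, [5.1, 3.5, 1.4, 0.2])

# ...and the declared length of 10 from the example raises, as intended.
try:
    validate_feature_length("data", 10, [5.1, 3.5, 1.4, 0.2])
except ValueError as err:
    print(err)
```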
A
All right, let's check that out — so 10 should give us an issue.
A
So we don't want to add this part, but we do want to add this part — so just the stuff we did, the validation. And then, yes, we do — or, we're not gonna do that right now. So—
A
But
wait
where's
our
change
to
four.
Why
didn't
I
pick
that
up.
A
We
just
want
the
change
to
four.
Oh
because
I
took
q.
No,
yes,
all
right!
So
now.
If
we
look
at
the
stage
changes,
then
we
can
see
that
you
know
all
only
the
assertion.
Error
changes
are
ready
to
be
to
be
committed,
and
if
we
do
a
diff,
we
can
see
the
rest
of
them
are
still
still
sitting
there.
So
all
right,
so
let's
commit
that
model.
A
Xg
boost
classifier.
A
Add
feature
length,
validation,
all
right
great.
So
now,
back
to
the
example.
A
Okay, and we just go and we copy all these things over here — so, max depth — and unfortunately, this is one of the open issues that we have right now: we need to modify some of the config code to convert these underscores to dashes, because that's not ideal.
B
All right, yeah — I tried to cover all those parameters that are present in the XGBoost classifier, but there are still some of them left. So I'm working on that, so that I will cover all the parameters.
A
Yeah, okay. And so one thing to check when you're doing something like this is whether the docstring is a numpy docstring, because we have something that will parse numpy docstrings and generate a config class for anything that we're wrapping that has a numpy docstring.
A
Okay, all right. So this is the train command, and then—
B
I think, in the train command, we also need to define the model — XGBClassifier.
A
Okay, great. Okay, and then we need our sources, obviously — XGBClassifier.
A
Okay, so training.csv, and I think we're good to go there. So we're in the train command — and this is another thing, right: we specified all of that stuff on the command line, right. Let's see — or, let's see how this is written. So, yeah, it looks like you have train, you create the classifier, and you dump it with joblib — and so you don't need — yeah, we don't need to specify any of these things again, except for the directory.
A
Gotta register the entry point, and apparently update — let's see.
A
Is
the
same
so
when
you're
looking
at
the
install
or
not
yeah?
So
this
is
where
is
contributing
and
getting
set
up
to
work
on
dffml?
So
when
you're
looking
at
these
docs
here,
it
tells
you
to
do
dash
e
and
dash
e
is
to
install
let's
see
yeah.
Maybe
it
doesn't
explain
it,
but
that
is
to
install
it
in
development
mode
is
the
dash
e
flag
and
that
basically
just
says
that
that
that
just
means
don't
make
a
copy?
A
Don't
because
what
it'll
do
is
it'll
build
as
like
a
zip
file
or
tar
file
of
the
code
and
then
install
that
and
extract
that
site
packages
and
then
it'll
be
where
the
rest
of
the
python
modules
are
installed.
And
so,
if
you
do
that,
if
you
don't
do
it
with
dashi
it'll
it'll,
you
know
basically
make
a
copy
and
then
your
changes
won't
won't,
be
you
know
it
won't.
A
Python
won't
pick
up
your
changes
as
you
as
you're,
changing
your
local
files
so-
and
this
is
how
you,
for
example,
yeah
reinstall,
just
one
thing
so:
python
dash
and
pivot.
A
Oh, all right — this move — oh god, yeah. Okay, this created a giant mess — this stupid packaging stuff, freaking pain in the ass. Okay, yeah — that decided to reinstall DFFML as the production version of DFFML, which is not what we wanted.
A
Resolver,
luckily
we
gotta
do.
We
have
another
open
issue
where
we
need
to
go
through
and
take
out
all
those
use
feature
2020
resolvers,
because
we
had
to
change
everything
to
use
it
because
of
all
these
machine
learning
packages
depend
on
a
bunch
of
different
versions
of
each
other
and
when
you
install
them
all
at
once
in
one
environment
which
is
part
of
the
goal
of
dfml,
is
to
be
able
to
do
that.
A
You
get
massive
massive
issues,
and
so
luckily
pip
has
released
this
new,
what
they
call
a
dependency
resolver,
which
makes
sure
that
all
the
versions
are
what
they
should
be
so
that
everybody
can
use.
You
know,
finds
the
most
compatible
version
for
everybody
and
and
so
now
now
that
is
the
default.
So
we
can
take
away
all
of
those
okay.
There
we
go.
A
I think this is the start of this test. Okay, so here's the start of this test: it's going to dump out every command that it runs, with some context around that — so, basically, you know, where is it running — and it's running in this temporary directory. So it downloads — we tell it to download the files, so it runs those wget commands, and then it runs the sed command.
A
So
it's
just
going
to
run
every
command
that
we
told
it
to
run
and
in
this
temporary
directory
and
spit
out
the
results
to
the
console.
It
won't
do
any
comparison
of
the
console
stuff.
So
if
you
want
to
do
comparison,
then
you've
got
to
read
the
documentation
page
here,
which
is
on
contributing
console
tests
and
I'll.
Tell
you
how
to
do
comparison
to
gate
things
on
comparing
the
output
but
yeah.
So
basically
that
that
is
how
you
add
the
examples,
and
now
this
will
show
up
under.
A
Let's
see,
plugins
xg
boost,
so
really
it
didn't
jump
right.
There.
A
So
yeah,
so
this
is
okay.
It
looks
like
we
have
python
example
for
this
one
and
so
yeah
it'll
show
up.
You
know
more
like
this
right.
It'll,
look
like
this
all
right
cool,
so
any
yeah.
Any
questions
on
this
or.
A
You
don't
need
to
do
that
for
the
regressor
within
the
scope
of
your
pull
request,
but
you,
but
you
know
if
you
wanted
to
go
create.
Actually
that
would
be
a
good
thing
to
create
an
issue
about,
because
I
now
you
know,
as
we
just
saw
we,
we
don't
have
that
in
there
either.
So
it
would
be
good
to
go.
Add
example,
usage
from
python
for
the
regress
or
from
the
command
line
for
the
regressor.
So
so,
if
you
could
add
an
issue
for
that,
that
would
be
great.
A
So
that
we
can
track
that,
so
we'll
add
an
issue
to
track
lack
of
console
tests
in
regressor
or
lots
of
console
examples
and
regressor.
Okay
and
let
me
just
commit
this
stuff.
So
let's
look
at
what
we've
got
here.
A
So this, I believe, is from this — and the weird part about this is that you should be in a — oh, it's because we're overriding the setUpClass method. Okay, yeah, so we just need to make sure that we call—
A
We
call
this
super
method
here
because
it
should
be
putting
us
in
our
own
temporary
directory
and
it
was
not
because
we
weren't
calling
the
the
the
parent
parent
classes
set
up
and
tear
down
methods.
So
let's
try
running
that.
A
So we'll add this stuff here, which was "model xgboost classifier: command line examples," and then we will — let's see, yeah — we'll add this stuff. And I'm just gonna actually put this stuff under the rest of the work that you did, so I'm gonna rebase it and—
A
All of this is going to get squashed into one commit, so it doesn't really matter, but just from a general pull-request review and style standpoint: I think we're missing a space in front of "model" here — not a big deal — and then we're also missing capitalization on the first letter of everything after the colon. And then, yeah—
A
If
you
do
these
and
then
so,
if
you
make
commits-
and
then
you
you-
you
know
you-
you
fix
things
right,
so
you
you
fix
things
along
the
way.
What
is
helpful
is,
as
you
get
closer
to
the
point
where
you're
looking
for
a
review
is
to
merge
the
merge
merge
any
related
commits
into
the
same,
commit
right
or
split
things
out
into
different,
commits,
and
and
really
I
mean
you,
you
looks
like
you're
doing
a
good
job
of
of
creating
commits
for
various
things.
A
Yeah
so
yeah
you're
committing
as
you
go
along
and
then,
if
you
go
and
you
go
and
you're
getting
ready
to
submit
the
pull
request
right
and
submit
it
for
final
review,
each
of
each
of
these
yeah
it
looks
like
most
of
these
would
would
be
good
fits
for
just
meshing
into
the
the
the
previous
commit.
So
you
just
go
through
and
you
you
melt
them
into
the
previous,
commit
by
doing
a
a
fix
up.
A
So
let's
see
yeah
okay,
and
that
was
what
I
did
here
by
doing,
you
know
f,
and
that
basically
takes
this
commit
and
it
makes
it
a
part
of
this
commit
before
it.
So
and
it's
not
a
big
deal
like
especially
with
this
one.
You
know
it's
just
sort
of
more
more
for
for
your
own
practice.
I
was
telling
people
a
while
ago.
You
know
this
is
you
know
most
most
of
the
things
we
do
here?
A
We
can
squash
before
we
do
the
merge,
but
this
is
sort
of
like
just
just
just
so
that
we're
all
doing
good
good.
It's
it's
good
practice
for
contributing
to
larger
projects
because,
especially
as
as
you
get
more
complex
changes
with
more
more
more
complex,
yeah,
more
complex,
stuff
and
and
just
more
detailed
and
the
reviews
take
longer,
it
helps
to
have
the
reviews.
A
It
helps
reviewers
to
have
your
changes
as
as
organized
as
possible
and
and
that
that
has
to
do
with
you
know
the
commits
themselves,
so
anyways
yeah.
So
just
some
just
background
on
that.
Okay,
so
I'm
gonna
push
this
stuff
up
and
then
and
then
we
should
be.
Then
it
looks
like.
Maybe
what
else
was
there
anything
else
here
that
we
talked
about
in
terms
of
this
pull
request,
or
I
think
we
may
be
missing,
get
diff
stat.
A
Let's
see,
where
did
you
split
off
from
master
here
so
get
div
stat?
This
will
show
us
all
the
files
that
were
changed,
so
I
think
yeah,
okay,
xg
boost
this
should
be
good.
I
I
think,
let's,
let's
double
check,
we'll
look
at
the.
Was
there
anything
else
that
we
noted.
Oh
yeah.
We
wanted
to
talk
about
the
make
config
numpy.
A
So
if,
for
example,
we
ran
into
this
with
the
auto
sk
learn
recently,
but
so
I
can
show
you
for
auto
sklearn.
A
Yeah, no worries — and that's, you know, that's part of why I had that one assigned to myself, because it seemed like a little bit tricky, and I didn't — you know, if you want to dig on that one, you know, more power to you. That's awesome! We will all be grateful to you, but yeah — I don't—
A
I
don't
know
what
the
hell
is
going
on
there,
that's
a
really
weird
thing
and-
and
it
may
be
something
to
do
with
the
console
test
stuff,
even
I
don't
know
what's
going
on
there,
but
it
seems
like
when
I
added
the
last
lines
about
running
the
final
test
file.
That's
when
it
started
to
blow
up
in
time
the
final
python
test
and
running
that.
A
So
you
might
try
taking
that
that
last
little
section
there
out
where
it
runs
that
python
test
file
and
doing
something
more
like
what
you've
done
here,
where
you
actually
yeah,
where
you
actually
ran
the
test
file
just
by
itself
within
the
module
tests.
That
might
be
the
that
might
be
the
solution
here.
A
For
that
one
cool.
Actually,
that's
a
good
good
related
point
there.
So,
let's
see
in
this
just
to
go
talk
about
this
numpy
doctrine
stuff.
Basically,
what
we're
talking
about
here
is
when
we
have
to
create
these
config
objects
like
this,
and
we
have
to
go
and
usually
we're
wrapping
some
class
right,
and
so,
in
this
case
we're
wrapping
the
instantiation
of
of
xg
boost
well
in
the
auto
sklearn
classifier
in
the
auto
sk
learn
case.
A
We're
wrapping
essentially
their
subclasses
of
this
class
here,
which
is
this
estimator
class,
and
so
you
know
just
like
with
xgboost
we'd,
be
passing
all
of
these
parameters
with
us
autoscaler
we're
passing
all
these
parameters,
so
they're
documented
in
the
numpy
style,
docstring
format,
which
is
this,
and
so
what
we
can
do
is
we
can
take.
A
—the change was this. So, basically, we went and did some digging, found out that this __init__ method is what we want to look at, and we can just basically call this function. Just like we have the XGBClassifier model config, we can say: auto-sklearn config equals make config numpy, pass it the name of the class, and then the next argument is the function that has that docstring — or, if it's the class that has the docstring, the class.
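The helper A describes works off the numpy-style `Parameters` section of a docstring. A minimal, self-contained sketch of the parsing idea (DFFML's real implementation does much more — defaults, descriptions, actual config-class generation — so this only shows the core parse, and the sample docstring is invented):

```python
import re

# Sketch: recover each parameter's name and type line from a numpy-style
# docstring's Parameters section.
def parse_numpy_params(docstring):
    params = {}
    lines = iter(docstring.splitlines())
    for line in lines:
        if line.strip() == "Parameters":
            next(lines)  # skip the ---------- underline
            break
    for line in lines:
        match = re.match(r"^(\w+)\s*:\s*(.+)$", line.strip())
        if match:
            params[match.group(1)] = match.group(2)
        elif not line.strip():
            break  # a blank line ends the section (simplified)
    return params

doc = """Hypothetical estimator.

Parameters
----------
max_depth : int, default=6
n_estimators : int, default=100
"""
print(parse_numpy_params(doc))
# → {'max_depth': 'int, default=6', 'n_estimators': 'int, default=100'}
```

From a mapping like this, a config class can be generated mechanically, which is why a class documented with `**kwargs` (as comes up later in the meeting) defeats the approach: the parameters aren't in the signature or the docstring to parse.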
A
Where is it — XGBClassifier, aptly named — let's see, API reference. Perfect.
A
I think this is our last thing that we need to talk about. Oh, but the last thing was the default values. Is this related to the default values — what they should be — because you said the default values are not what they should be, with something related to the classifier?
A
Okay, well, we can't find it, so I'm not sure — so we don't really need to talk about that anymore, but you guys get the picture. Basically, if you find that something has a numpy-style docstring — if you can find where it is, and you can look at the docstring; I guess we could look at the docstring by pulling it up in Python, but we won't spend more time on that — then you can just call that make config numpy, and I'll just put the example.
A
Example: https://github.com/intel/dffml — search—
A
Commit,
let's
make
sure
that's
correct,
okay,
yeah,
so
this
is
the
example
for
this
one-
and
I
think
oh
this-
this
don't
worry
about
this,
but
it's
this
change
here.
So
if
you
find
that,
then
you
can
use
that
function,
all
right
so
and
then
the
last
thing
we
need
to
cover
is
okay.
We
also
need
to
cover
that
async
io
thing,
but
so
we
found
out
the
default
values
are
not
what
they
should
be.
What
do
you
mean
by
that.
B
Oh
actually,
just
just
search
for
the
exit
boost,
craigslist
regressor,
let's
see
okay
and
yeah.
Okay
with
just.
B
Yeah
yeah
and
all
these
things,
the
parameters
are
their
max
depth
and
the
gamma
and
jobs
and
all
these
parameters,
the
default.
The
default
values
of
this
parameter
are
not
actual
default.
That
is
found
on
the
xz
boost,
regressors
library.
B
I
just
found
that
issue
with
the
exhibits
regressor,
but
maybe
that's
also
been
issue
for
the
other
models
as
well.
A
Yeah, and so this is where using that function, make_config_numpy, allows us to just grab the defaults out of whatever the class is, right. So that's why it's useful. So let's just try looking at it and see. XGBoost, since this is actually the problem.
A
C
A
A
A
Okay, let's instantiate it and see what's up. Okay, so it only got objective. What's up with that? Let's see, where's objective? For some reason, this is the only thing that it picked up. That's weird, let's see, yeah, okay. So what the heck? That's really odd. That must be the only thing. Oh, it's all in **kwargs: "Keyword arguments for the XGBoost Booster object. Full documentation found here."
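What's happening can be reproduced with a few lines of standard-library Python. The Regressor class below is a made-up stand-in, not the real xgboost class, but its __init__ has the same shape: one explicit keyword argument plus **kwargs.

```python
import inspect

# Made-up stand-in: one explicit keyword argument, everything else
# swallowed by **kwargs, like XGBRegressor's __init__.
class Regressor:
    def __init__(self, objective="reg:squarederror", **kwargs):
        self.objective = objective
        self.kwargs = kwargs

# Signature inspection only sees explicitly declared parameters,
# so max_depth, gamma, n_jobs, etc. are invisible here.
sig = inspect.signature(Regressor.__init__)
explicit = [
    name for name, param in sig.parameters.items()
    if name != "self" and param.kind is not inspect.Parameter.VAR_KEYWORD
]
print(explicit)  # ['objective']
```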
A
Okay, general parameters, okay. What class is this? Yeah, this is what we're looking for here. And I guess the question is: XGBRegressor, and they're saying Booster.
A
A
A
And in the auto-sklearn one, all of these are spelled out here and there's no **kwargs. When they do **kwargs, we can't inspect it and see what's going on. So it might be, yeah, this is a trickier one, unfortunately. Yeah, if we can find... I think the thing is, this is another one where, as with auto-sklearn, I had to find the base class and then point it at the base class.
A
So if we can find the base class, we could point it at the base class. We just need to find it, so we'll make a note of that: so, tried doing... so, we tried doing this, and this is what happened.
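A sketch of that base-class trick, again with made-up stand-in classes (Booster and Regressor here are not the real xgboost classes): walk the MRO until we hit an __init__ without **kwargs, then read the defaults from that base class instead.

```python
import inspect

class Booster:
    # Hypothetical base class with the real parameters spelled out.
    def __init__(self, max_depth=6, gamma=0.0, n_jobs=1):
        ...

class Regressor(Booster):
    # Subclass whose own signature hides everything behind **kwargs.
    def __init__(self, objective="reg:squarederror", **kwargs):
        super().__init__(**kwargs)

def find_inspectable_base(cls):
    # Walk the MRO until a class's __init__ has no **kwargs.
    for base in cls.__mro__:
        params = inspect.signature(base.__init__).parameters.values()
        if not any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params):
            return base

base = find_inspectable_base(Regressor)
defaults = {
    name: param.default
    for name, param in inspect.signature(base.__init__).parameters.items()
    if name != "self"
}
print(base.__name__, defaults)
# Booster {'max_depth': 6, 'gamma': 0.0, 'n_jobs': 1}
```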
C
A
If we want to keep the defaults in sync with upstream, and upstream is wherever we're getting this from, so the actual XGBoost project's recommendations.
A
So, similarly to what was done with auto-sklearn. All right, okay, all right. So those were kind of related. All right, and then our last thing. Do you have anything else on the XGBoost stuff you want to talk about?
C
A
Please remind me, because I will do it as soon as we... you know, I'm going to open YouTube right now, and then, well, okay, so, yeah, I'll do it... it's going to suck up tab memory. Okay, so I'm gonna do it as soon as we're done here. So, all right, and then, so.
A
The last thing was that we have an issue with the asyncio part of the accuracy scorers, so let's pull that down and look at that.
A
D
So, hello, John.
A
D
Yeah, so about that, I've actually fixed that issue. Oh.
A
D
A
That was the easiest? You just fixed that while I was talking? Well, easy... easy for me, comparatively.
D
Yeah, so now there's like another problem with the transformers.
C
D
Hadn't provided that score or a thing, so I actually did that, and like some other tests, I provided that score.
D
A
A
D
Wrong? I'm not aware, like, what's the question, yeah.
A
Let's see, oh, and I think somebody pointed out, I think Natasha pointed out that, okay, yeah, we closed that issue about that failure to connect there. Oh, and I should also say that I was using this label. So, since there are so many random CI failures that are not our fault, like, for example, this one happened the other day: they had this issue where they published a new release, and then they screwed up something and it was breaking everybody's stuff.
A
So I'm trying to tag things that are not our fault with that kind of "ci failing" label, because there are so many things where, you know, we go do a build and then we're like, oh no, man, what happened here, yeah. And so I'm trying to... if you find an issue, or if you find something like that, file a new issue and then I'll try to label it like that. And actually, let's see, to go along with that, the.
A
The stupid examples... npm audit failure. Okay, that one happens intermittently, all right, but what we are talking about is.
D
So
actually
the
model
is
not
getting
trained.
It's.
D
Yes, and that's why train...
A
A
You know, this looks like an issue where transformers got updated to a newer version, and now we're accessing things that are no longer present in the newer version. So we may just need to pin the.
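Pinning would be a one-line change in requirements.txt. The exact bound is a guess here; transformers 4.0 had just shipped with breaking changes around this time, so capping below it is the likely shape of the fix:

```
# requirements.txt: hold transformers below the breaking 4.x line (version guess)
transformers<4.0.0
```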
D
A
Okay, yeah, and this one is... let's see? Well, that's okay. So here's... that's a good question. When was this run? Five days ago. That's, yeah, maybe that might be overlapping here, or not. TensorFlow, transformers, yeah, okay. So, yeah, we'll rerun here, and then... but I think we may just end up pinning it, because I don't think there have been any changes made. So git fetch origin, then git diff origin/master -- model/transformers. Okay, wait a minute.
A
It looks like transformers... accuracy, transformers, okay. This is all yours, yeah, you're adding accuracy stuff, classification accuracy, okay, this is all, yeah. I don't think we've made any changes to transformers that you.
A
Okay, yeah, so it must be a version update thing, because obviously we haven't touched transformers. Let's see, okay, and there is, let's see, the requirements.txt. Okay, we got that one, okay, but this is restricting the version range, which is interesting.
A
Okay, well, we re-ran master, and let's just make a note. Can you comment in the pull request and say: we are seeing an issue with transformers, not sure if it's a versioning issue or not, we are re-running the CI on master to double check. Does that sound good?
A
All right, okay, so that's the plan, then. We're gonna rerun the CI on master and we're just gonna validate whether the problem shows up in both places, basically. And if it doesn't, then we'll just scratch our heads and move on from there. But my guess is, yeah, if there's still a problem here and it's not showing up in master, then I think, you know, we'll call this good. What else? We needed one more thing, and then we're going to... okay.
A
Oh yeah, I mean, so we'll merge this into... we'll throw all these commits into the accuracy staging branch, and then we'll move on to doing the rebase stuff. And if we're still seeing issues here with transformers that we aren't seeing in master, hopefully the rebase fixes that, although I foresee there being problems everywhere. So, yeah, all right. Well, thanks guys, it was great talking to you today. Is there anything else?
A
Great, thank you both very much, and I will talk to you next week. Well, I may talk to you this weekend, but I'll probably be busy for the next couple days, so, just so you know. All right, thanks guys, have a...