From YouTube: Weekly Sync 2020-11-03
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#
A
All right, so let's just go through here. We'll do introductions in a second, but the way we usually go around and do this, or at least the way we're supposed to, is we go through everybody, and everybody says what they would like to talk about during the meeting. Then some people will inevitably have some really long topics that are maybe only applicable to one person, and in that case we'll put those last, in case anybody needs to drop from the call, because this is sort of a drop-in, drop-out situation.
A
You know, if you have time some week and you want to jump on the call, great, we'll be here, and if you don't, then that's fine. You can drop in to say something, you can drop out, but basically you're guaranteed that someone will be around from nine to ten Pacific time. So, let's see, you wanted to talk, and it's Coco, right?
A
Yes, all right, Coco, okay. I just want to make sure I say your name correctly. Okay, so you've been going through the installation steps and you hit the Google Colab situation, which is unfortunate.
C
And I also went through it yesterday, so I encountered a problem.
A
All right, and then you threw up a PR here, yes, to help us raise an error, which is good. So we'll just put that down; in a case like this we would probably say, okay, so: review.
A
We
got
sudhans
great
review,
pr
for
which
throws
and
error
if
python
version
is
less
than
3.7
and
then
we
do
this
and
then,
when
we're
done,
reviewing
we'll
check
it
off.
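A minimal sketch of the kind of guard that PR adds; the function name and error message here are illustrative, not the actual DFFML code:

```python
import sys


def check_python_version(version_info=None):
    """Raise if the running Python is older than 3.7.

    version_info defaults to the interpreter's own version; it is a
    parameter here only so the check is easy to exercise in isolation.
    """
    version_info = version_info or sys.version_info
    if tuple(version_info[:2]) < (3, 7):
        raise RuntimeError(
            "Python 3.7 or newer is required, got "
            f"{version_info[0]}.{version_info[1]}"
        )
```

Putting this early in `setup.py` turns a confusing mid-install failure into an immediate, clear error message.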
So let's see... and so, Shoko, or wait, no, sorry, remind me again.
A
Yeah, well, how do you pronounce your name again? Si? Okay, cool, sorry, I just want to make sure I'm getting it right here. I might ask you a few more times, sorry. Si... Coco. So what's your... we'll do a little round of introductions, but what's your background?
A
Yes, very cool. So do you have much experience with machine learning at this point, or...?
A
Okay, cool, well, that's great, yeah. And that's kind of our target area: people who are maybe new to machine learning. So your insights and your feedback are much appreciated.
A
You
know.
So
if
you,
if
you
notice
things,
if
you
you
just
you
know
you
just
sort
of
create
issues
or
maybe
just
put
them
in
getter
whenever
you
run
into
anything
like
like
you
did
where
you
know,
oops
like
this
is
not.
This
is
was
not
clear
right
now.
We
want
to
make
sure
everything
is
very
clear,
and
this
is
a
good.
Your
first
pr
is
a
good
step
towards
making
sure
that
that
that
our
our
installation
instructions
are
clear.
So
all
right
so
and
then
I'll
just
do
brief.
Introduction
here.
A
So I'm John, I'm at Intel, and I mostly do security stuff, and I'm the maintainer of this project, which is machine learning with some security involved as well. So, Gash, do you want to go next?
D
Yeah, so I don't have much to talk about. I just wanted to ask what triggers we should use for the Windows tests. And when you say triggers, what do you mean? Do you want to run them on every commit, something like that?
A
So won't that fail master every time? Well, okay, but we should skip the tests; remember we were going to do the unittest skip on any tests that are failing right now.
A
Yeah, I mean, okay, yeah. So let me just... we'll write this down and then we'll discuss.
D
Because,
like
recently
on
29th
october,
only
like
github
released
that
workflow
dispatch
trigger.
So
it's
basically.
A
Oh, let me just note this down: so we had the workflow_dispatch trigger button released recently by GitHub. It runs tests whenever you hit a button, rather than on every PR or push to a branch. All right.
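For reference, the general shape of a GitHub Actions workflow using that trigger looks roughly like this; the file path, job name, and steps are illustrative, not the project's actual workflow:

```yaml
# .github/workflows/windows.yml (illustrative path)
name: Windows Tests

on:
  # Run only when someone presses the "Run workflow" button in the
  # Actions tab, instead of on every push or pull request.
  workflow_dispatch:

jobs:
  test:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.7"
      - run: python setup.py test
```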
A
Let's see, okay, so yeah: let's talk about how to run the Windows tests.
A
If we're on Windows, then we would know if other tests start failing, and we would work to fix the ones that are being skipped, right? And then, as we fix them, we unskip them: we remove the skip part. That way we're always testing the ones we know work.
A
We'd also find out if all of a sudden something was not going to work for some reason. Because if you just end up with a giant red X, chances are we're not going to go read those logs all the time, because we're like, oh well, it's just a red X again. But this would...
A
This would trigger us to know if something else that we didn't know about is breaking, or was broken and is breaking. For example, it would help make sure that maybe we don't add more tests that deal with files and temporary directories having multiple handles open, which I think is one of the common problems on Windows; you had one or two there that you were going to fix related to that.
A
But if someone had added one of those tests, it would show up right away, because the CI for that would fail, right? Does that make sense? Okay, yeah, it does. All right, so let me just write this down. Does that sound like a good course of action, then, or do you have any concerns or thoughts?
A
I think you can just add it under... let's see, I think you can add it as another section here, so, for example, under workflows, testing. Each one of these ends up as its own job, or its own line item. So if you...
A
So, let's see, let's make a new line item within the checks by copying the tests job.
A
And trim it down until we are just running the tests for the main Python package, and maybe leave in the Python versions.
A
Within the matrix, so that we test both 3.7 and 3.8, then change runs-on to whatever it should be for Windows, and then use the unittest.skipIf decorator to mark any tests that are failing, then start fixing the failing ones.
A
Let's make this run on every push or PR, so that we know if a non-skipped test starts failing. Okay, does that all sound good?
A
Let's see, so, Coco, on this one: all I noticed was that there was a greater-than-or-equals where it should just be less-than, and then I think we're good on this one. You've changed it, and I don't know if these tests should be failing; if they are, they're probably a false positive. The changelog check should be failing, though. Well, let's see, did it run? It ran the 3.8 docs.
A
Which means it ran the setup.py, so this stuff got tested. So I think we can merge this now.
A
"Install exception": great. And then, let's see, it has Black, I believe, right? Passed style, great, all right, sweet. So: "instead of notifying on component installation", okay, great.
A
Wonderful, wonderful. All right, the only other thing is usually we'd capitalize the first letter of the first word, but that's not critical, since we're all just brain-dumping everything; that's the only thing. So I think... let's see, and there's one comment. Great, I'll rebase this.
A
All right, yay, thank you. And then, let's see, is there anything else that you were thinking of? Next, I mean, like we talked about, if you want to dive straight in, what are you thinking of doing next? Are you interested in looking at more models? Because we've got basically three things. I don't know how much of the docs you've read, but we've got three things: there's models, data sources, and then there's dataset modification and generation.
A
So do you have any specific one, or are you just going to sort of poke around and explore things?
A
Cool, yeah. So from an AutoML perspective, we've got auto-sklearn in there, and you'll probably want to look at... so, we should be doing (and everybody's going to laugh when they hear this) a release soon.
A
I swear. Unfortunately, a bunch of people just left my work and have piled this giant hot-potato project on me, so that has slowed me down. But let's see, what was I going to say? Yes: I'm not sure if you found this yet, but this is where the models are, and from an AutoML perspective, sort of find something that tunes the hyperparameters for you. Wait, where is it?
A
Oh well, that's a problem. We have an auto-sklearn library, which is a wrapper around a hyperparameter-tuning version of scikit-learn, and then we've got the more neural-network things with TensorFlow and such. So that might be your best bet if you want to go more neural-network but tune the hyperparameters yourself; then yeah, TensorFlow and PyTorch, those are good.
A
Saksham just implemented the PyTorch one recently, and that could be good, because you can define the layers as YAML if you wanted to, which is nice, and he can answer your questions on that.
A
So, but let's just see here, in the docs scripts... I think we just may not have... yeah, I think we forgot to add auto-sklearn here. Well then, let's make an issue for that: we have auto-sklearn, but apparently it's not listed in our docs. There's another issue that is supposed to be fixing...
A
...the fact that we have to add it at all, but it's good to know. All right, yeah, so I would say poke around at the various tutorials. Let's see, the neural networks tutorial might be helpful for you here, and then there's also the model tutorials, but the other two are about writing a custom model.
A
And when you just want to train a model, then the plugins page for the models is probably going to be very helpful for you. I think there were also a few things under the examples: the MNIST and flowers stuff, the classification of flowers; that may also be helpful.
A
And then you can also ask any of us. I think Saksham may be your best person to ask if you run into things with PyTorch. You can ask any of us, but he just did the PyTorch stuff, so if you use that, then you can ask him.
A
All right, so let's see, and I'll just put here: a model. Let's see, so yes, and then I'll put it in the notes here.
A
Ask for help if needed, and then let us know anything that was unclear along the way, or whether we could organize the documentation better to make it more clear what you needed to find, et cetera.
A
If you notice anything that could have been easier to find, or something that could have been clearer, just shoot us a note on Gitter, and that way we can track it. And then, let's see... we also found that auto-sklearn is not in the scripts.
E
So I implemented the dataflow stuff and the model is training, but the problem again comes down to this: there are about 4000 images, and after three or four minutes of preprocessing the process is just killed.
A
All right, that's definitely going to be a memory error, I would assume. Sorry, my mic... that's probably a memory error. Where are you doing this, your laptop or Colab?
E
Cloud; I'm doing this on Google Cloud.
A
Okay, cool, good to know.
A
Yeah, I think I have a patch set. The way it works right now, obviously, is it's going to try to preprocess all of them at the same time, so it's loading in all the images and making all those arrays and everything, and that's probably creating the memory issues.
A
I have a patch, part of this set of stuff, to add threading support for the non-async operations, to run them in their own threads. I haven't quite gotten the whole patch set working, but I think I have the part working where we can cap the number of running contexts.
A
So you could cap the number of images that are being preprocessed at a time, which should fix your memory issue. I'll look into getting you that set of patches, and we'll try to apply those to master and hopefully fix that issue.
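The capping idea can be sketched with a standard asyncio.Semaphore; this is an analogy to the patch being described, not the actual DFFML code, and the preprocessing coroutine is a stand-in:

```python
import asyncio

MAX_CONCURRENT = 4  # cap on images being "preprocessed" at once


async def preprocess(image_id, semaphore, tracker):
    # Only MAX_CONCURRENT coroutines can hold the semaphore at a time,
    # which bounds peak memory use from images loaded simultaneously.
    async with semaphore:
        tracker["current"] += 1
        tracker["peak"] = max(tracker["peak"], tracker["current"])
        await asyncio.sleep(0)  # stand-in for real decoding / I/O work
        tracker["current"] -= 1
        return image_id


async def preprocess_all(n_images):
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)
    tracker = {"current": 0, "peak": 0}
    results = await asyncio.gather(
        *(preprocess(i, semaphore, tracker) for i in range(n_images))
    )
    return results, tracker["peak"]
```

All 4000 coroutines can still be scheduled up front; the semaphore just ensures only a handful are mid-flight at any moment.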
A
Okay, that's fantastic news, nice. So hopefully we can get this done. This was the improved colorization one with the neural network, right?
E
There were a few, yeah. It gave really great results, but I also wanted to say that there were a few lines of code that were giving an error, in python_net.py.
A
Okay, note: John to go find the patch that caps the number of executing contexts.
A
Yeah, I have a feeling I ran into this issue whenever I did this thing too, and that was why I had to do that. So, okay, anything else you want to talk about, Saksham?
A
No, that is all from my end? Sweet, that's great news, good stuff.
A
Well, I'm still underwater, yeah. The good news is I've got some stuff that I can offload to one of my co-workers. The bad news is two of my co-workers, who were carrying a lot of the burden on this other project, just decided to announce that they're leaving the company, so now that stuff is unfortunately falling to me.
A
So
so,
yes,
that's
how
things
are
going
well,
we'll
hope
that
it
all
works
out,
but
yeah,
okay,
so,
but-
and
then
I
really
want
to
get
this
out
because,
like
I've
got,
my
I've
got
my
yeah
we
want
to.
We
want
to
get
this
release
out
here.
Yes,
that's
how
things
are
okay,
so
I
think
okay,
when
did
I
last
see
this,
because
I
thought
I
saw.
B
And that is the clustering model of scikit. Oh...
A
Okay, let's see. Unfortunately, I think Himanshu got that full-time job, and he's probably been swamped as well. So let's just take a look at this real quick. Does anybody have anything else that they wanted to talk about this week?
A
All right, well, if you end up with something, then we can circle back at the end. But let's just try to debug this now. So let me pull it down and we'll see what's going on here. Can everybody see, resolution-wise?
A
All right then. For those of you who haven't seen it, this is my favorite thing: it's called nodemon, and it reruns all your tests for you; well, you can make it rerun all your tests whenever you save a file. So this is what I do: I say cd models, oops, cd model/scikit, we don't need this coverage command here, and python setup.py test.
A
Okay, and we'll pick a clustering model after we know which one is failing here.
A
So, all right: "no records with matching features".
A
So we're looking at model/scikit/tests, and we'll check out the tests first. Here's the... okay, so: items within the first set, but not in the second. Okay, so basically it was saying...
B
Yes, the main problem is it is actually trying to find "cluster", but the name is actually "X".
A
Okay, config fields, tcluster... okay, here's where we're actually creating this. So we're doing this funky creation of config classes. So usually... just for the sake of a recap:
A
SLR looks something like this, where we decorate it with that config, but we have also got this special make_config method, which allows us to create one of these classes dynamically, without writing it out like we normally would write a class. We can just pass it, I believe, a dictionary and some other things, and it'll create the class for us.
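The standard library's dataclasses.make_dataclass does a very similar thing, so it serves as a safe illustration of the general idea; the field names below are made up for the clustering discussion and are not DFFML's actual make_config API:

```python
from dataclasses import make_dataclass, field

# Build a config class at runtime from (name, type, default) entries,
# instead of writing the class out by hand.
KMeansConfig = make_dataclass(
    "KMeansConfig",
    [
        ("n_clusters", int, field(default=8)),
        ("predict", str, field(default="cluster")),
        ("tcluster", str, field(default=None)),
    ],
)

config = KMeansConfig(n_clusters=3)
```

Because the resulting class is a plain dataclass, its instances are easy to serialize and reconstruct elsewhere, which matters later in the async discussion.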
A
So we're checking to see if we're a supervised estimator or an unsupervised estimator; if it's supervised, then we get the scikit context, otherwise we get the scikit context unsupervised, which is where our current issues with the clustering models (which are unsupervised) are. So config fields tcluster defaults to None, and the issue here might just be... let's see, yeah, tcluster, so we probably want to look at the way tcluster... oh, here we go: predict. Yeah, so here predict is called "cluster".
B
Yes, I feel like that is happening because, to get the accuracy, we first call the prediction, and the prediction is actually asking for "X".
B
So
now,
when
we
have
changed
it
to
x,
then
then
that's
why
it's
working
fine.
B
Let's
see
yeah
because
in
the
accuracy
scorer,
what
we
actually
have
done
is
we
first
call
the
predict
method
and
then
we
get
the
ground
truth
value
and
the
predicted
value.
And
then
we
like
calculate
the
accuracy
yes,
but
in
the
predict
method,
it's
actually
trying
to
call
this
cluster,
okay
and
but
actually
it
is
x,
value.
A
But okay, so... I meant, have we moved it so that it's the accuracy scorer context doing score?
A
Yeah, oh, that was phase five, okay, yeah, because I'm thinking that may also sort of help us here a little bit. Okay, so let's take a look. All right, so: dffml model...
B
Predict, actually... but actually there are two types of clustering models. One of the clustering models actually has the ground-truth value, but there is another clustering model which does not have the ground-truth value, and in that case, if we are trying to find the ground-truth value, we won't be able to find it.
A
Oh, okay, great, yay, all right, wow. This was really hard to figure out; I mean, we looked at this for a long time, so this has been several weeks now. Okay, yeah, I frequently confuse myself with the whole transductive versus non-transductive thing that Kamacho had talked about. Okay, good job. So I guess this is your fix here now, and I'll just paste that into Gitter. Well, let's... that's...
A
Actually, let's not speak too soon, because I did this this morning and then all my tests failed. So let's run this test suite again... oh yeah, we're getting a couple of errors here.
B
So actually this scikit clustering model requires its own accuracy scorer.
B
It
has
like
it
has
like
a
scorer
which
does
not
takes
the
ground
truth
value.
That's
a
scorer
like
that.
That's.
A
And it basically is saying: if tcluster is None, otherwise tcluster. Okay, so in this case... the way I read this is that there was another feature name added to say, use this as the true cluster value, and then also label whatever I'm predicting with the predict feature, right?
A
And then I think what he did here was saying: okay, well, the true cluster value may not have the same name as whatever you asked me to predict on, so let me accept that as a separate feature; and then, if you give me that as a separate feature, I'll use that name, and I'll still make the prediction under whatever feature name you told me to use as the prediction. So it may be good enough to...
A
I think it may be sufficient to basically turn this tcluster field into some kind of boolean value for whenever you do the accuracy score. Because right now you're always saying: okay, you can't tell me that the true cluster value is under a different feature name...
A
If
you
tell
me
you're
going
to
predict
this,
then
you
better
put
the
true
cluster
value
in
there
too,
which
is
what
we
do
with
every
single
other
model
and
but
you
could
just
say:
okay,
well,
you
could
have
a
flag
to
the
accuracy.
I'm
not
yeah
it'll
depend
on
how
you
implement
the
accuracy,
but
you
could
basically
just
turn
this
into
some
kind
of
boolean
flag
in
the
config
right
and
then
you
could
say.
Okay,
if
t
cluster
is
true,
then
actually
use
that
value
for
the
accuracy
prediction.
A
Otherwise, don't. Does that sound like what you were thinking there?
A
Okay, yeah. I just wanted to get that down, since we are recording, because I'm assuming we're going to get confused by this again later.
A
So where you think you are going to go is then to implement this as an accuracy scorer, right? Oh, this is when you were going to go wrap all of the scikit accuracy scorers, right?
A
Okay, cool. So let's see, I mean, do you want to just take out tcluster, then, or...?
A
Yeah, I agree. Okay, let's just take it out and we'll find out, because I'm with you on that; that sounds like a questionable proposition, and I wouldn't say yes to that either. Okay, let's just take it out and see what happens. Because if the true cluster is present... because now, without that, this would basically be the determining factor, okay.
A
Okay, and without the label, so, to do: this is the case where tcluster would be false.
A
There are test cases where it needs to be false. So if you see, basically, if we look at this here, this is the case where we want tcluster to be set to false. Since on the command line we have a boolean thing, it would show up kind of like, you know, tcluster...
A
You could do "tcluster off" or something, and then that would end up being false, but it'll basically default one way, right? So actually we may run into issues with this, because for the test data right now we're passing it, and it has access to that data either way; but when we actually run an example, it may not have access to that data.
A
So that's a bit problematic, but I think that's something the accuracy scorer is going to hit or not hit based on how you wrap that predict method. Right now, I don't think you're going to have to wrap the predict method anymore once you have that phase five, where you're changing the calls: making the accuracy scorer call predict itself, instead of the model context calling the accuracy scorer calling predict. So yeah.
A
You should have greater control in there, but I just want to get this all on the recording. You'll deal with it when you get there, but you'll probably have to play with how you get what you want out of the predict method, and with the boolean value here as a reasonable command-line flag.
A
That works. Let's see, we've got Shaw. How's it going, Shaw?
A
Okay, and then let's just make sure we got tcluster.
A
tcluster: it's not needed now that we've removed the accuracy method. "It used to be used"... no, that's not a great sentence. "It previously was used to decide if we should..."
A
All right, so let's just leave it as that: tcluster is not needed now that we removed the accuracy method; it was previously used to decide whether we should use mutual_info_score or not in the scikit unsupervised accuracy method, if a true clustering value had been provided.
A
Phase five of the accuracy scoring refactor, okay. All right, I'll push these guys up, and hopefully that's good. Okay, well, I'm glad we figured that out. Obviously, I think we've pushed your problems until later a little bit, but hopefully it'll sort itself out with the phase five stuff. Do you have anything else you want to talk about here, or things to think about?
A
Great, great. I think that's it; the issue was with tcluster.
A
So, ready to merge. Wow, this is a big one. Phase four: why are you doing a major refactor here? Phase four, then on to phase five. All right, and then we have Shaw.
A
Right, all right, so let's see. So nothing else, then, Sudhanshu; we'll merge it once we double-check there.
A
Hey, thank you, very nice. All right, so, Shaw: what have you been up to? What do you want to talk about today?
F
I've been making progress on that anomaly detection model, and there's a couple of things I wanted to ask you.
F
So yeah, the first one is: instead of accuracy, I've used F1 score as the evaluation metric. Is that fine, or should we just go back to accuracy?
A
What
do
you
mean
you
mean
like
the
so?
What
do
you
mean
by
accuracy
in
this
situation.
F
Accuracy in this case would be: say you have a data set of 1000 examples, and you have 10 anomalies and 990 normal examples, right?
F
So the reason I used F1 score is I felt it would be a better evaluation metric than accuracy, because, say, in this case you have an algorithm that outputs everything as not being an anomaly: its accuracy would be around 99 percent, but its F1 score would be pretty low.
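Shaw's point can be checked numerically with plain Python, using the 1000-example, 10-anomaly scenario and a degenerate classifier that predicts "not an anomaly" for everything:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def f1_score(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0  # no true positives: precision and recall are both zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# 990 normal examples (0) and 10 anomalies (1)
y_true = [0] * 990 + [1] * 10
# Degenerate model: everything predicted "not an anomaly"
y_pred = [0] * 1000

acc = accuracy(y_true, y_pred)  # 0.99: looks great
f1 = f1_score(y_true, y_pred)   # 0.0: reveals the model is useless
```

Accuracy rewards the model for the 990 easy negatives, while F1 focuses on the rare positive class, which is exactly the behavior wanted here.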
A
I think you've made the right decision there; that's definitely the right decision for now. Obviously, we were just talking about Sudhanshu's accuracy stuff; in the future we'll be switching things around with the accuracy a little bit, and we'll just need to make sure that we maintain...
A
This is a good case where we need to think about it: okay, so for some models, some things are more appropriate than others, and we need to have a way to have the models recommend...
A
We need to keep this in mind, right? Because we were talking about regression, classification, NLP stuff, where these are all very different when it comes to what accuracy scorers you actually want to use. So this is maybe another case where it's like...
F
Like, yeah, the place where this usually happens, as far as I've seen, is where you have a really skewed data set.
A
Okay, yeah, cool; "inverse" is not the right word there. So let me just make a note that we should keep this in mind: should we use F1 score or accuracy?
A
We should continue to use F1 score, because that does a better job here.
A
Okay, so: this model is used on highly skewed data sets; we want to keep this in mind, and maybe I'll add a tag like "accuracy" or something, so hopefully we can come back through and look at these for accuracy purposes.
A
We want to keep this in mind when we figure out how accuracy scorers should interact with models, to ensure users use the correct scorers on the correct models. All right, anything else you want to say on this?
F
And the format in which the predictions were made was that for each record we output the correct label. So my question to you is: do you want me to continue with that format, or is it fine if I display a list for each example depicting whether it is an anomaly or not, one if it's an anomaly and zero if it's not an anomaly?
A
Okay, so... wow, I can really not spell "anomaly". Okay, so what are you currently doing, I guess?
A
If you've only got a few anomalies, then you would only want to output a few things. But from the perspective of wanting to make sure everything fits under this standard banner of one way of doing things, I think what you're saying is right: we should probably use the config property of predict, and whatever that feature name is. When you're training, I assume you have a predict feature, and then you use that as whether it's a zero or a one. Okay, so then I think that's the correct thing to do here too, yeah.
A
I think that's right, because that maintains the standard way of doing things, and then, if we feed into something else later, it can consume it in a similar way.
F
Right, this is just something I wanted to ask: why do we have asynchronous functions?
A
Why do we have asynchronous functions? I'm glad you asked; oh wow, I'm so glad you asked. Okay, so the beauty of all these asynchronous functions is that we can take all of this stuff and... okay, so right now you may be doing all of this very synchronously.
A
It may all feel very synchronous to you to run these things on the command line at this moment, or to run the tests, right? But when we get into the dataflows... have you looked at the dataflows at all?
F
No, not much. I get why we use the asynchronous approach. My concern, or rather (I'm pretty sure my thought process is somewhat screwed up here): doesn't having every function be asynchronous mess with the order in which things are supposed to be executed?
A
That also has the ability... but, as you will also find if you start using the threading and multiprocessing modules, with their error handling you can lose errors very easily when you start getting into hundreds or thousands of processes or threads, or just instances running over time; it'll start losing exceptions. Basically, the best way to combat that is to move to the asyncio approach.
A
asyncio does a much better job of ensuring that we always handle errors appropriately when they come up in this concurrent or multiprocessing environment, so that's basically why everything is async throughout. The other reason is that, for example, most of the models in here right now pull the entire data set into memory, which is not ideal.
A
So, for example, if you have a data source, and that data source is a database, you're going to be interacting with it over the network, and those network calls are something you really would like to have over something like asyncio, because then you could be doing multiple of them at the same time. For example, maybe you're training a model and responding to an HTTP request at the same time, and when you get more records in from your database, you incrementally train the model more.
A
And maybe you have a WebSocket going from an HTTP request, and once you've trained your model more, now you're using that model for predictions, and every time you get a new piece of data over the WebSocket, you make a new prediction. So that's why everything is asynchronous: the farther you go into real implementation space, the more it becomes a giant headache for it not to be.
A
It definitely is, at times, a bit of a headache to deal with, and at first to wrap one's head around, but it greatly simplifies a lot of our problems later, essentially.
A
Yeah, and there is another limitation of it, which is that within a coroutine you don't want to run things that block on CPU.
A
Now, this can become a problem with some of the models, because obviously models are CPU-intensive. So what we do (and this is part of why we have all these config classes) is that the config class is completely serializable, so you can pass the config class into another thread, instantiate models within other threads, and then actually run the CPU-intensive stuff within its own...
A
...CPU or thread, while at the same time you're still able to use the asynchronous features, right? So yes. Sorry, this is getting a little long-winded, but I'll wrap it up here: this is something that we need to get the last mile on, and that is part of that patch set I was talking about earlier.
A
That has the stuff about capping the number of contexts, but basically there's a little bit more work to get it all together. So if you have a CPU-bound model, for that CPU-bound model to be able to get the benefits of an asynchronous loop accessing a database, or whatever, over the network...
A
...it all needs to be written with async code, but that coroutine actually needs to run in another thread so as not to block the other coroutines. So we'd schedule it out to another thread, but we still write everything in async, so that we can access the database with async code, and so that you can use the model within that thread.
A
On demand, along with the rest of the network operations that you might be doing related to that model. But yeah, anyways: if you can't tell, I love the async stuff. So if you guys ever have any async questions, even random non-DFFML ones...
A
Ask me questions about stuff that's not DFFML too, if you ever run into anything; if you run into questions with that, just let me know. But anyways, is everybody good? Shaw, do you have any other questions on that, or anything?
F
Currently I've set that to 10% of the training set; later we can make it user-defined.
A
Yeah, and I would just say add that as a config property. You can do it later, but this is kind of how you can get practice getting comfortable with the config structures, if you wanted to make that user-defined.
A
Yep, so I'll just make a note here that currently... And so, yep: good to see you, Shaw, and good to see you, Sudhanshu, Saksham, and it was Sico, right?
A
Yes, okay, yes, and correct me if I get it wrong next time; it's taken me a couple of times to get "Shaw" right too. So, yes, all right. Well, thank you, everyone, and I hope you all have a great week; just let me know, I'll be on Gitter. So thanks, everyone, have a...