From YouTube: Weekly Sync: 2021-08-03
Description
Hashim points out the need for accuracy scorers!!! This was an important meeting! Thank you Hashim!!!!
B
No, there are some issues with that, so I want to get some input on those. Okay.
A
Okay, yeah, let's make sure I have access to that while I'm doing this here, and then we can... and you could maybe get ready to present it too. Yep, great. Oh my gosh, what the hell! This is the world's most ridiculous-looking merge conflict. Okay, come on now.
A
That's right, we needed to finish the... thank you. Oh yeah, okay, so we needed to finish the thing where... okay, this is not going to get done in this meeting then, because that was... okay, I'm going to have to do this afterwards. Yeah, okay, I think you tried it, Saksham tried it, so now we're back to me trying it for the second time. Okay, all right, yeah. This thing was the getter and setter thing, right?
C
I said I thought you had tried it, so I didn't say anything. No.
A
I'm sorry... GSoC is wrapping up next week. Let's see.
A
Yeah, let's make sure we get enough input on your guys's stuff. So we've got basically three weeks. Okay, you guys are all on track, so I don't think we'll have any major hiccups here, but I want to make sure that we get everybody anything that they need. I'll make sure to get that done today. If I don't get it done, ping me in the morning and we'll make sure I get it done within the next two days.
A
I think once I get this demo done I should be a lot more free up here. Things have gotten extremely hectic, so let's see, we're doing some modernization thing. Okay, all right, let me put a reminder on my calendar: config setters and getters.
C
So the implementation of multi-output support on the scorers that we implemented ourselves in DFFML isn't really convincing, and I don't think that's the way to go. We're using a mean of the accuracies of the different predictions to calculate the multi-output accuracy, and if we use the same scorer from the scikit implementations, they give out different scores than ours do. And I can't really follow their multi-output implementation; they're using different scorers inside scorers and it was getting messy.
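For context on the discrepancy described here (a minimal sketch, not from the meeting): for 2D multilabel targets, scikit-learn's `accuracy_score` computes subset (exact-match) accuracy, which will generally disagree with a hand-rolled mean of per-output accuracies.

```python
# Sketch of the mismatch under discussion: mean of per-output
# accuracies vs scikit-learn's accuracy_score, which for 2D
# multilabel targets computes subset (exact-match) accuracy.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([[1, 0], [1, 1], [0, 1]])
y_pred = np.array([[1, 0], [1, 0], [0, 1]])

# Mean of per-output accuracies: output 0 is 3/3, output 1 is 2/3.
per_output = [
    accuracy_score(y_true[:, i], y_pred[:, i])
    for i in range(y_true.shape[1])
]
mean_acc = float(np.mean(per_output))  # 5/6

# Subset accuracy: only rows 0 and 2 match exactly.
subset_acc = accuracy_score(y_true, y_pred)  # 2/3
```

The two numbers differ (5/6 vs 2/3 on this data), which is one plausible reading of "they give out different scores than ours."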
C
I was going to ask if we could consider just using the scikit scorers.
A
Yeah, I wanted to talk about that too, because I'm not convinced on the way that we have this right now. I'm concerned, for the scorers in general, that we don't specify what the feature is to score; we're just inspecting the model's predict. I think we maybe need to specify the feature to the scorer itself, because it seems like this is going to be prone to issues, especially now that we have multi-output and the predict feature is all of a sudden a list.
A
If a model wants to do something else with features, right? We should probably separate the connection between the model and the scorer a little more by specifying to the scorer what we want it to score, right? What predicted feature we want it to score. Does that sound reasonable to you guys? I mean, Sutachu, you have more experience with this than the rest of us. So do you think we should leave it with the introspection for any strong reason, or are you...?
A
Are you open to the idea of adding a config parameter to the model scorers? Do you guys know what I'm saying?
C
I'm not sure what the idea is behind specifying the feature.
A
I'm just concerned that we're grabbing predict from the parent model's config, for what essentially amounts to something that I think would change, right? Basically, what happens with your patch, Hashim... where was that... the multi-output, right? So your changes sort of make...
A
...it apparent that the scorers and the models are too intertwined right now, because you now begin to inspect the model's predict to see whether it's a list of features or a singular feature, and all this extra code just explodes. So basically, the code in each scorer is too tied into the models, is what I'm saying.
A
The change that you made resulted in quite a few lines added in the scorer, and it showed that the implementation of the scorers is too deeply tied to the implementation of the models, since we're introspecting the predict config of the model.
A
So what I'm suggesting is that we do something like this, essentially, where we just make this self.parent, and so now the only dependency on the model becomes the prediction. Let's see... okay, maybe we need to say label equals... yeah, so I don't know, predict or feature. We should probably just say feature.
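A hypothetical sketch of the decoupling being proposed (all names here are invented, not the project's actual API): the feature to score lives in the scorer's own config, so the scorer's only dependency on the model is the prediction it produces.

```python
# Hypothetical sketch: the scorer is configured with the feature to
# score instead of introspecting the model's predict config.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AccuracyScorerConfig:
    # The predicted feature this scorer should evaluate.
    feature: str


class AccuracyScorer:
    def __init__(self, config: AccuracyScorerConfig):
        self.config = config

    def score(self, records: List[Dict[str, Dict]]) -> float:
        """Compare each record's true value against the model's
        prediction for the configured feature only."""
        name = self.config.feature
        correct = sum(
            1 for r in records
            if r["prediction"][name] == r["features"][name]
        )
        return correct / len(records)


# Usage: the scorer never looks at the model or its config.
scorer = AccuracyScorer(AccuracyScorerConfig(feature="label"))
records = [
    {"features": {"label": 1}, "prediction": {"label": 1}},
    {"features": {"label": 0}, "prediction": {"label": 1}},
]
acc = scorer.score(records)  # 0.5
```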
A
That's good. Now, I'm not sure what terminology to use here, but you see how now the functionality isn't so tied together, right? So now, if you end up with a multi-output model, you could verify, right? Like, feature... nothing works... let's see.
A
Then it would kick them out and not let them instantiate that classification accuracy, so they would have to use a multi-output-specific classification accuracy scorer, right? And that minimizes the possibilities for issues with this scorer here, because it's no longer tied into every single model and having to deal with "oh well, what if this model does this?" It doesn't matter, because we're just calling predict and then we're looking at the feature data.
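The "kick them out at instantiation" idea might look like the following (a sketch with invented names, not the project's code): a single-output scorer refuses to be constructed with multiple features, forcing the caller to pick a multi-output-specific scorer.

```python
# Hypothetical sketch: reject multi-output configuration at
# instantiation time for a single-output scorer.
from typing import List, Union


class ClassificationAccuracy:
    def __init__(self, feature: Union[str, List[str]]):
        if isinstance(feature, list) and len(feature) > 1:
            raise ValueError(
                "ClassificationAccuracy scores a single feature; "
                "use a multi-output scorer for multiple features"
            )
        self.feature = feature if isinstance(feature, str) else feature[0]


# Single feature: fine.
ClassificationAccuracy("label")

# Multi-output: rejected before any scoring happens.
try:
    ClassificationAccuracy(["label_a", "label_b"])
    raised = False
except ValueError:
    raised = True
```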
A
Makes sense. Okay, cool, great! So, while we're talking about the scorers, you wanted to talk about renaming the accuracy function to something else, and you'd suggested score. Oh, and then the theme thing, but that's not a big deal. So basically, we'd looked at a few themes and I liked the...
A
...one that you'd posted, so yeah, go check that out if you get a chance. So, score or evaluate: I'm not in favor of evaluate personally, because I think we already have the record evaluation function, and I'm not sure I love that name either, but score seems fine to me. Is there a better name than score? Should we change it to score, or should we not?
A
Okay, because yeah, the high level... and I mean we can always... yeah, let's see. So, "I love score with regard to different scorers selected."
A
Let's see, I didn't understand that last sentence: "these could also be a resultant value of error." What is that? Like, if it throws an error, or if there's some kind of... actually...
C
There are algorithms that calculate the error of your model, right, and they give out the value of the error that's present in the prediction. I see what you're saying.
A
Is there a better name here than score? I mean, score is what we've implemented the method to be already, so I don't think it strays too far from that, and that's the term that scikit uses. So let's go ahead and change this.
A
Thank you. Let's see... oh, I can just edit it. So, let's see.
A
Rename high level... let's just rename the high-level function, unless we should rename the whole thing to, I guess, score. We could... let's rename the high-level function right now, and then we can focus on everything else later, because we could move the accuracy directory to score, and that would keep the command-line flags consistent.
A
It doesn't really matter too much right now, but the main interface to this is the high-level function, so rename accuracy to score. Does that sound good?
A
Okay, great. Oh, and it's already pinned, all right. So let's try to get that done.
A
Anything else? So, reviewing this I can't do right now, not enough time, but I'll try to get to it. Let me know; I think there's some comments and stuff in here. I thought I saw some stuff that looked like it wasn't cleaned up.
C
Yeah, the comments were just in the native implementations, and I thought I'd ask before... yeah, okay, that makes sense, I'll remove it. It's the last comment.
A
That sounds great, okay. So then, what is your path forward on this? Are you just going to score multiple...?
A
Is your plan to show the score now that we've decided that... and correct me if I'm wrong, but it sounded like you were saying let's not try to implement, or let's either implement separately or not implement right now, scorers for multi-output, right?
C
I said that we can just use the scikit scorers and not have this implemented into this one.
A
Okay, and with the scikit ones: are you going to use different scikit scorers to score different features, or can you use the same scorer to score different features? Are you going to use one scorer to score multiple features, or how is that going to work?
C
Yeah, we're using one scorer for multiple features.
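The "one scorer for multiple features" approach might look like this (a minimal sketch with invented data layout, not the project's API): apply a single scikit-learn metric per feature column.

```python
# Sketch: reuse one scikit-learn scorer (accuracy_score) across
# several predicted features by applying it per feature.
from sklearn.metrics import accuracy_score

y_true = {"size": [1, 0, 1], "color": [2, 2, 0]}
y_pred = {"size": [1, 1, 1], "color": [2, 2, 0]}

# One scorer used for every feature, one score per feature out.
scores = {
    feature: accuracy_score(y_true[feature], y_pred[feature])
    for feature in y_true
}
# scores["size"] == 2/3, scores["color"] == 1.0
```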
C
In just a second, let me move... your voice is breaking up. I think it's my internet.
A
All right, does that require... can you repeat that? Does that require changes to the scorers to make them support multi-output?
A
Great, okay, cool. And then we decided that we need to add some kind of config parameter to the scorers.
A
I'll just review it when we get time for it. I was just trying to understand what your plan was, but it sounds like you have a plan, and I'll see it when it's ready. I was just trying to ask if you were planning on using multiple scorers, but it sounds like you're going to use the scikit scorers, and you're going to use one scorer to do scoring on all of the predicted features, right?
C
Yeah, yeah, that's what I'm doing so far.
A
All right, that sounds good. Okay, so yeah, sorry, continue.
C
Yeah, I meant that's what I've done already. Okay.
C
I think we were also using the config predict, other than the multi-output...
B
So I have actually created this operation, an input layer, which will take all the data points and the length of the source, and it will turn them into a matrix. So I have created a matrix here: it will take all the inputs and add them into the list. And the way I'm doing it here is, in the seed, I'm actually providing all the values to the input layer.
B
So basically, the cleanup operations actually work on matrix data, right? So for that I have created this input layer, which will take all the feature data points and convert them into a matrix so that we can perform operations on it. For example, if you have this operation, it actually takes its input data in the form of a list of lists.
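The input layer being described might be sketched like this (hypothetical names, not the actual operation): gather per-feature columns of data points into a row-major matrix, i.e. a list of lists, for downstream cleanup operations.

```python
# Sketch: convert {feature_name: column_of_values} into a matrix
# (list of rows), one row per data point.
from typing import Dict, List


def input_layer(features: Dict[str, List[float]]) -> List[List[float]]:
    """Build row-major matrix form from feature columns."""
    columns = list(features.values())
    length = len(columns[0])  # all columns assumed the same length
    return [[col[i] for col in columns] for i in range(length)]


matrix = input_layer({"x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0]})
# matrix == [[1.0, 4.0], [2.0, 5.0], [3.0, 6.0]]
```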
B
And the error I'm getting here is that it is showing the operation is not instantiable.
A
Okay, so if it's showing the current error, I would say it looks like an entry point registration issue. First, you need to make sure that the operation has a name associated with it; second, you need to make sure that the entry point is registered; and third, that you've installed the package.
A
And this will just show you all the registered operations so that you can verify... okay, entry points, list... if... oh, no.
A
Operation, just without the s on the end of operations... no, leave it on entry points. No, it's entry points with an s and operation without an s, or it's... it's dffml.operation.
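The check being performed here can be done from Python with `importlib.metadata` (a sketch of the general technique; the group name `dffml.operation` follows the discussion above and may differ from the project's exact setup).

```python
# Sketch: list registered entry points to verify an operation's
# registration. Swap "console_scripts" for your project's group,
# e.g. "dffml.operation" per the discussion above (an assumption).
import importlib.metadata as md

try:
    # Python 3.10+ supports filtering by group keyword.
    eps = md.entry_points(group="console_scripts")
except TypeError:
    # Older Pythons return a dict keyed by group name.
    eps = md.entry_points()["console_scripts"]

names = sorted(ep.name for ep in eps)
# If your operation's name is missing from its group, the usual
# causes are: the entry point not declared in setup.py or
# pyproject.toml, or the package not reinstalled after adding it.
```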
A
We have one operation, a dataflow, and the flow is... okay, so the flow says get everything from seed. Okay, it says "operation not instantiable". So...
A
It attempts to instantiate it, so I think it's having a problem instantiating that operation. Oh, I think I ran into this recently. Let's see... okay, it's not instantiable. So what does your file that has the operations in it look like? Oh wait, so yeah, so that's None, but it's going to try to load it from the entry point. So, let's see... "was not found in"... blank. Okay.
A
If something happens when it tries to import this file... can you run other operations from this file?
A
Okay, great. Can you send me a Gitter message with that and I'll take a look at it? Great, all right. Do we have any final things? Otherwise I'm going to drop off now.
C
Yeah, so basically, what I'm going to do with the scorers is just make it so that while we are instantiating the scorers, we send in the features, like we instantiate models with the predict features. Yep, right, yep.