From YouTube: Applied ML weekly team meeting Aug 19, 2021
C
We had a discussion about the GraphQL API. We've got two bugs right now, so I added some workarounds so that we can skip the two issues I created, at least if we cannot solve them right now.
A
We should fix them, yeah, at least that's what I think. I think all fields are optional... sorry, not optional. Almost all fields have null: false, which means that not all of them are optional. This particular one has null: true, so it is optional, but I'm not sure why that is the case. I mean, I think we should store the merged-at timestamp in the database.
B
We were discussing options to compare UnReview and Reviewer Roulette. Why don't you walk us through that?
C
Right now, UnReview supports these two metrics out of the box, so when we train the model we get these metrics at the end. The main problem is that, as far as I know, Reviewer Roulette doesn't support these metrics; I mean, it doesn't support any metrics. That means we would need to implement them, but to implement that we need to extract a data set in the form that Reviewer Roulette supports.
C
So it's hard to predict how much time we would need for that. Another option is to integrate UnReview, like Reviewer Roulette, and make recommendations via comments, similar to what the Stan models do right now. I saw that Stan sent me a link about how we can do that through the Danger bot, or something like that. There are some Danger files, so I need to read more about it, but as I understood, that's the easiest way right now. It means we integrate UnReview and make the recommendations via comments. And finally, we can track the choice of each developer and compare the recommendations of UnReview and Reviewer Roulette.
C
That is another way we can calculate the same mean reciprocal rank and top-k accuracy.
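As a rough illustration, the two metrics discussed here could be computed over logged recommendations like this. A minimal sketch: the function names, data shapes, and reviewer names are mine, not from any GitLab codebase.

```python
def mean_reciprocal_rank(ranked_lists, chosen):
    """Average of 1/rank of the reviewer actually chosen (0 if not recommended)."""
    scores = []
    for ranked, actual in zip(ranked_lists, chosen):
        rank = next((i for i, name in enumerate(ranked, start=1) if name == actual), None)
        scores.append(1.0 / rank if rank else 0.0)
    return sum(scores) / len(scores)

def top_k_accuracy(ranked_lists, chosen, k):
    """Fraction of MRs where the chosen reviewer appears in the top-k recommendations."""
    pairs = list(zip(ranked_lists, chosen))
    return sum(actual in ranked[:k] for ranked, actual in pairs) / len(pairs)

# Hypothetical logged data: ranked recommendations per MR, and who was actually picked.
recs = [["alice", "bob", "carol"], ["bob", "alice", "carol"]]
picked = ["bob", "dave"]
mrr = mean_reciprocal_rank(recs, picked)   # (1/2 + 0) / 2 = 0.25
top3 = top_k_accuracy(recs, picked, k=3)   # 1 hit out of 2 = 0.5
```

Both metrics only need the ranked list the recommender produced and the reviewer the developer actually chose, which is why tracking each developer's choice is enough to compare the two systems.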
B
I like that approach. I think we originally wrote it as "compare with Reviewer Roulette", but that was as a way of measuring how effective UnReview is at recommending reviewers and maintainers, and that's not the only way. We do want to dogfood our own solutions: we want to use the things that we're going to be recommending or making available to customers ourselves. But that doesn't mean we need to do that in milestone one.
B
So I think what you proposed, Alexander, was two options. Option two is just looking at how UnReview is working in milestone one, and then in milestone two comparing against Reviewer Roulette, because we do want to dogfood it and start using it ourselves.
C
So that's why, yeah, it would be nice. It's not that we have to integrate it in the first milestone, but we can collect those metrics in the second milestone, because in the second milestone we need to integrate some other features anyway, and we have already created an issue for things like time-zone availability.
D
Yeah, I would just say nothing says we have to turn Reviewer Roulette off. We could let both run, let them both post comments, and see what people choose. We need a feedback mechanism into the algorithm anyway. We already do this today with Tanuki Stan, with the issue labeler: if people want to change the labels, we have an "ml wrong" label that people can apply, which is sort of a feedback signal to us. So I think there are lots of options here. I agree.
E
Maybe we don't have the data in our database or whatever already, but would it be possible to look backwards and extract what Reviewer Roulette decided, and then work with that data? Or are we missing key data?
C
That's also a question for me, because we need data to test Reviewer Roulette: we need some past commits, and we need to understand who was available at some point in the past. I'm not sure that we have this kind of data right now, because, okay, at this moment we know the pool of reviewers who can be assigned, but for, say, two weeks ago, I'm not sure that we track this information anywhere.
E
Yeah, I'm not sure how Reviewer Roulette does it. I mean, we would have the PTO data; we have a PTO database. It might be a bit hard to match the data up, but we can get reports out of PTO Ninja. I'm just wondering if there's some way we can do this without having to wait for some cycles to go through.
C
That applies to both the first and the second option, because I'm pretty sure, and I found, that there can be cases where Reviewer Roulette or UnReview recommends someone and the developer chooses another person, but the recommendation of Reviewer Roulette or UnReview was correct, and we need to understand why.
E
We might also have that data in the data warehouse; I don't know what's actually in there. But just before we leave this point, to answer your question as well, Alexander: I think many times developers, particularly when it's the end of the cycle, are going to pick the reviewer they know is going to help them get it through fast, right? It might be someone in their own group, for example.
F
We can have many ways of comparing both of those ideas, both the models and everything. But the key point is that they're both trying to solve the same problem: they're both trying to make it quicker to notify who the reviewers are, whether with the roulette or with UnReview.
F
There are some metrics for that, for example MAP@k, which is mean average precision at k, that could be used in this scenario. But it doesn't matter; there's always a metric for that. We need to define what we want to consider better, because we can find many different definitions of what is better, but we need to come up with one, otherwise we're going to stay on this forever, right?
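For reference, the MAP@k metric mentioned here can be sketched as follows. This is an illustrative implementation, not from any of the tools discussed; "relevant" stands for the set of reviewers actually assigned to each MR, and the names are made up.

```python
def average_precision_at_k(ranked, relevant, k):
    """AP@k for one MR: precision at each hit position, normalised."""
    relevant = set(relevant)
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i          # precision at this hit position
    denom = min(len(relevant), k)
    return score / denom if denom else 0.0

def map_at_k(all_ranked, all_relevant, k):
    """MAP@k: AP@k averaged over every MR in the evaluation set."""
    aps = [average_precision_at_k(r, rel, k)
           for r, rel in zip(all_ranked, all_relevant)]
    return sum(aps) / len(aps)
```

Unlike MRR, MAP@k handles the case where an MR has several correct reviewers, which fits merge requests that get both a reviewer and a maintainer assigned.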
G
I think what key metric you are comparing on is exactly the crucial point, as everyone is mentioning. Having done maybe hundreds of MRs myself, there are conflicting metrics there. An author usually wants time-to-merge to be quicker, so that's why they choose based on time zone, or on already working with someone.
G
Time-to-merge is the metric in the development group. But on the other side, if you want a metric for the quality of review, which you could measure based on the suggestions made, or I don't know, those are speculative, or qualitative and subjective in that case. Or stability, which is the number of reverts we are facing, you know, stuff like that, or follow-ups; those can lead to different results. And the way the Danger roulette works right now, it just tries to divide the load equally among people, which doesn't map to any metric.
G
There's also the developer discussion: maintainers and reviewers want an even distribution of the load among themselves. That was raised a few weeks ago; some people get too much. Let's say we have a perfect algorithm which always chooses the right person based on quality, say, or on mean time-to-merge. Even that will not satisfy the requirements, because people will want a well-balanced load.
G
Also, in the reviewer Danger setup we have several levels. If you set yourself as the blue circle, you have three times more chance of getting a review; if you set yourself as orange, you have three times less chance than normal. So we have these ways to tweak it, and none of them is tied to any metric, which is what particularly worries me about comparing the Danger roulette and UnReview.
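The availability weighting described here (blue circle roughly three times normal, orange roughly one third) amounts to weighted random sampling. A minimal sketch, assuming those weights and using made-up reviewer names:

```python
import random

# Assumed weights mirroring the behaviour described above: "blue" triples a
# reviewer's chance of being picked, "orange" cuts it to one third of normal.
AVAILABILITY_WEIGHT = {"blue": 3.0, "normal": 1.0, "orange": 1.0 / 3.0}

def pick_reviewer(reviewers, rng=random):
    """reviewers: list of (name, availability) pairs; returns one weighted pick."""
    names = [name for name, _ in reviewers]
    weights = [AVAILABILITY_WEIGHT[status] for _, status in reviewers]
    return rng.choices(names, weights=weights, k=1)[0]

pool = [("alice", "blue"), ("bob", "normal"), ("carol", "orange")]
```

Note the point being made in the discussion: the weights shape the sampling distribution, but nothing in a scheme like this optimises for any outcome metric such as time-to-merge or review quality.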
G
Right, yeah, it's totally unbalanced, and anyway there's no target. As I said, the author has a different target: maintainers usually want to have less load; they don't want to have, like, 14 reviews per day when they could have five. So a lot of different things come into play.
G
For example, I'm contributing a lot in different parts, so I would appear too often just because of the commit history, but that would then overload me. I might try to reduce my load by making myself unavailable, or orange, which is one third of normal in our Danger terms.
E
Could quality be considered? I mean, if an MR needs to be reverted, which thankfully we don't have a huge amount of data on because it doesn't happen that often, but we do do it: would that be considered a quality measure?
G
I'm not sure about that. On one side the author wants it merged as quickly as possible, and the review process has multiple goals: whether the MR is fundamentally doing the right thing, then security, then stability, and endless other concerns like style. The reviewer checks all of them, and a revert captures only stability, and only what is fatal, something too bad to let through.
G
A revert is, let's say, the ultimate scenario, but we also have things like a follow-up MR which fixes something on top of the current commit. That is also a kind of improvement, and it could be claimed to be a deficiency of the review or of the original MR, but that's how we work, because we do iterations. On the other side, okay, I'm not drawing any conclusion; I just understand that there are trade-offs.
B
We could stay on this topic, and it's a very important topic, but are we ready to move on to the next one, or is there more to say on this one?
F
I've worked a lot with this specific problem of defining metrics and defining experiments, and things like that. I could run a session later on; I have some exercises that help with defining the goals, how to get there, and which metrics you can look at. We can do this async, or sync at another time; I can schedule that for us, if that's of interest.
C
Just a quick update: almost everything is finished. I am currently working on the wrapper for the recommendation engine that serves gRPC requests. I think that once I finish this task, we can integrate the recommendation model into GitLab through the comments, as I said before, and then finally we need to update the user interface, and that's all for the first milestone.
B
Yeah, just curious how the work is coming along to change the various open-source library usages. I know we talked about changing the libraries with that undesirable license, the DW GFL, I think I'm getting the acronym wrong, and a couple of others. I know we said that if we need to push that to milestone two we can, but we're also going to try to do it in milestone one if we can.
C
Because I couldn't wait.
C
Okay, so I found that we use two ways to close issues. The first: we just close the issue. But I saw that sometimes we move issues to the verification stage instead, so we add a label like "verification" or something like that. So which option should we select when we work? Because I see that sometimes we also forget to change the labels on the issues, as I found.
A
Well, the flow we use is that you move an issue to the verification stage, and a different engineer is supposed to verify the work you have done. But I don't think that's going to apply in many cases here.
D
Yeah, I mean, this is something that all of the groups do. We have workflow labels; we call them out specifically on our Applied ML handbook page. If you need more details about that workflow, there's a whole, very long product page about how we transition between all of the workflow labels.
B
Once we actually have code that customers or users depend on, getting the labels right is even more important, because we want to know, you know: is this actually in production or not, can customers benefit from it? But since we're in proof-of-concept mode, it's still important to communicate amongst ourselves, but it's less important right now. I'm glad you asked, though, Alexander. Number six.
A
So, quick update: I created an issue on GitLab to gauge the interest in adopting Kafka, or some other queue system, at GitLab. The too-long-didn't-read version is that we already have Google Pub/Sub on GitLab.com, and we can leverage it for our proof of concept. But whether or not we should use Kafka remains to be seen.
A
If we are interested in putting UnReview in Omnibus, then we definitely need a way for our on-premises customers to use it, and we would need either Kafka, another Redis instance, RabbitMQ, or something else. We have to think of something if we want to put it into Omnibus. But for GitLab.com and the PoC, we can definitely leverage the Google Pub/Sub we already have in place.
B
Especially since we're planning, at least for Applied ML so far, to do most of our work in the cloud, in our cloud, it's a good solution. And I'd say I'm very impressed by the collaboration across GitLab on this issue.
D
Yeah, this is more of just an FYI: there's an ML working group now. I was a little surprised at how this came about, but I see this largely as: we now have a lot of groups interested in where we're going with ML, and we need to get them all on the same page. And all of this work is happening at an interesting time for us as a company, with us focusing on a lot of performance and scalability issues.
F
Yeah, I just have a question about why we're picking up the Jupyter diff on GitLab at the beginning. First, it's an old ticket that has been there forever, but it's just such a core part of the ML and data science experience. So I'll be doing that in the beginning, trying to sync with the other teams right now, but I will also be on the working group, looking at what's going on in the other parts of the company.
F
I think there's a lot there, especially what Alexander mentioned, and Wayne as well: the concept of ML applied internally on the public gitlab.org projects, and now applied to the self-managed instances, and things like that. That will be a common pattern that we're going to see everywhere, so we need to come up with some processes for how to handle that specific case. And yeah, that's it for me.