From YouTube: 2021-08-18 Create:Code Review Weekly Sync
A
Okay, ready? Right, so I have my first point for the code review weekly. We'll take the first point, rescheduling the call, over to Slack. Moving on to my second point: I shared with some of you the procedure for scoring the Q2 OKR and the large MRs, and I wanted to take the opportunity to discuss that a little bit.
A
Last week we were discussing how to weight each page, because we had three pages and two measurements for each of those pages. That's six numbers, but the OKR needs one number — how do I digest all of it? We came up with an approach to the weights using the timings of each page. I think that's somewhere in the agenda down below, and I've now executed on that.
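The weighting described here could be sketched roughly like this — the page names, timings, and per-measurement scores below are made-up placeholders, not the actual OKR data:

```javascript
// Rough sketch of the weighting approach discussed above: each page has
// two measurement scores (fraction of its target achieved), and pages
// are weighted by their timings, so slower pages count for more.
// All names and numbers are hypothetical.
function okrScore(pages) {
  const totalTiming = pages.reduce((sum, p) => sum + p.timing, 0);
  let score = 0;
  for (const p of pages) {
    const weight = p.timing / totalTiming;          // timing-based weight
    const pageScore = (p.scores[0] + p.scores[1]) / 2; // two measurements -> one
    score += weight * pageScore;
  }
  return score; // one number, 0..1, for the whole OKR
}

// Three pages, two measurements each: six numbers in, one number out.
const pages = [
  { name: "changes",  timing: 4000, scores: [0.8, 0.6] },
  { name: "overview", timing: 2000, scores: [0.7, 0.7] },
  { name: "commits",  timing: 1000, scores: [0.5, 0.9] },
];
console.log((okrScore(pages) * 100).toFixed(0) + "% accomplished");
```

The design choice is just that a page you spend more time on (or that is slower) pulls the final score towards its own result, so the single OKR number isn't dominated by the least important page.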
A
What that means is we arrive at 69 percent accomplished for the OKR, which is a good result. Overall — and I'm basing this on the numbers I'm sharing — I still think it's not truly representative of the impact we had, but that's another story. I'm happy to score it using the LCP rather than the fully loaded time, because it has at least a bigger impact on the users.
B
I'm fine with that. I think you're right: 69 is pretty good. For most OKRs, I think they say you should achieve somewhere around 70 or 80 — otherwise you made it too easy. So landing right at 70, with as little time as we had and as much as we got done, feels like a pretty impressive result.
B
So I'm fine with that as a methodology, and with that as a result — I think it makes sense to me.
A
Okay, thank you. I'm going to share the learnings on the Ally OKR as well. I've shared some of what went wrong and what went well — feel free to add more thoughts onto that epic, because I feel that's the central point everybody's looking at. Overall, I feel we did a very good job. I think we prioritized the worst offender's changes over the others, like the commits.
A
The
commits
tab
wasn't
very
worked
on
and
when
we
pick
it
up,
we
have
one
the
previous
milestone
142,
where
the
delays,
the
the
total
blocking
time
and
everything
were
outside
of
the
tab.
It
wasn't
specific
to
the
commits
page
and
we
were
talking
about
like
300
millisecond
difference.
So
all
in
all,
I
feel
like
we
did
a
good
job
at
focusing
on
the
worst
offender.
A
I
don't
think
we
did
a
very
good
job
at
picking
the
right
metrics
at
the
beginning,
maya
cooper
as
well.
I
should
we
should
have
discussed
that
a
little
bit
further
about
using
a
better
metric,
regardless
of
that,
since
we're
losing
the
10k
reference
architecture
pipeline
artifacts.
A
They're only kept for a couple of days, so I've already asked Tommy to start archiving the reports on a monthly basis. That will at least give us one monthly snapshot going forward, so we can always go back in history and have the full site speed report. We do get the top measurements — the LCP, the TBT — in the pipeline output, but we don't get the full metrics. We also asked for the JS heap used to be added to the historical dashboard.
A
So I feel that's going to be meaningful for us too: monitoring the memory usage for the pages we're tracking. There's space there. So yeah, good stuff overall — and I guess that's it. I'll just commit it to Ally, and 70 sounds good. Any thoughts?
C
Yeah, just for Ally purposes, from the back-end side: I wondered what to put in Ally either way. I ended up with the score that you had, but there was confusion — maybe it's a good retro item for that epic — confusion about how we measure it and how to track progress. We were asked to track it different ways, including putting all the tasks there and then tracking progress against that, which isn't right, since our KR was just based on a number; we can't say, well, we made progress towards that.
C
It's hard to measure, but yeah, maybe I'll add some information there. I put something to a similar effect in Ally, but then it's weird, because I have all these subtasks that are 100 percent done, and that adds up to 80 or 90 or something — but then the score is 70. So I think...
A
Your sound is coming through with a little echo, so there might be something with your connection or something. All right — we can still...
A
Okay, so going back to your point, Matt: it might be worth capturing that on the epic as one of the things we can improve, because you're right. Throughout the quarter we're asked for progress, and that's on the subtasks part; then, when we're measuring the success of the OKR, are we measuring the impact it had? There's no easy answer there — if we track one or the other, we'll always lose the other one. Yeah, lay out your thoughts there and we'll see what we can do, from your perspective, on how you can score it.
A
I think it's fair to measure the impact, because even though we have these numbers and a lot of it was front end, it wasn't all front end. There were collaborations and movements — even though you have the caching being rolled out, the goal was still the metrics and the results. So same scoring, I would say. Yep. Right, next point: I wanted to take the opportunity to discuss a little bit the ongoing work for infradev and engineering allocations, especially since the front end is not highly focused on that.
A
We
have
our
work
and
trying
to
try
to
finalize
14
3
today,
but
I
just
want
to
take
the
opportunity
to
see
if
there's
anything,
that's
coming
up
that
feels
like
we
might
be
able
to
help
matt.
Is
there
any
update
on
progress
for
kaizo.
C
We're hoping this impacts the budget, although so far it's hard to tell. I'll say that we've seen some improvements in the timings on the ones that we've rolled out, which is good. We've got the diffs batch on Friday, then the merge request show — that was Monday — and the discussions earlier today. So we're rolling those out, and we'll see.
C
So
that's
what
will
be
our
next
steps
and
then
I'm
not
sure
andre
you
mentioned
pronouns
hell.
Maybe
the
that
merge,
merge,
requests
the
show
controller
they
will
they
could
they
could
do
it
more
honestly
or
something
something
just
definitely
an
option.
The
problem
is,
I
just
we
just
don't
know
quite
yet
of
what
what's
going
to
be
did
so.
A
Yeah, on that: my decision was to label it as front end with a 0.1 weight, to have it as a standby, because for that one in particular I feel there are a lot of opportunities and options we might take that will require some adjustments on the front end. Say the show page is going over the threshold very often — we might want to move some of those queries to an Ajax request made later, and that will inherently require some front-end work.
A
So I don't want you to be blocked by not having a front-end engineer — you'll have one available. The other one I wanted to bring to your attention, maybe not for this milestone but for the future: like this one, there might be an opportunity to move data from the batch diffs, or the diffs metadata, to the moment where it's needed. One of the ones we've already identified and talked about on the issue is the suggestions.
C
Yeah, that's good — the diffs batch, yeah. So the error budget really is kicking us: it's requests that take longer than a second.
C
There's
a
couple
api
ones
that
are,
for
example,
get
like
like
get
after.
Like
a
list
of
merge,
merge
requests
it
we
get
trying
to
think
like
in
a
day
in
a
day
I'm
trying
to
look
at
the
numbers.
While
I
talk,
maybe
80
000
cases
where
it's
more
than
a
second,
but
that's
out
of
10
million
or
something
so
it's
low
percentage,
but
we're
not
counting
percent
percentages
against
the
air.
It's
individual.
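As a quick sanity check on that framing (80,000 and 10 million are the approximate figures quoted here, not exact counts):

```javascript
// Rough arithmetic on the figures quoted above: ~80,000 requests over
// one second, out of ~10 million in a day, is a low percentage — but
// the error budget counts individual slow requests, not percentages.
const slowRequests = 80_000;
const totalRequests = 10_000_000;
const slowShare = slowRequests / totalRequests;
console.log(`${(slowShare * 100).toFixed(1)}% over one second`); // prints "0.8% over one second"
```

That gap is the whole tension in the discussion: 0.8% looks healthy as a rate, while 80,000 daily slow requests still burns the budget.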
C
So
that's
a
problem,
dispatch
dispatch,
fewer
instances
but
a
higher
percentage.
So
that
should
be
a
focus
of
it's.
Almost
ten
percent
of
the
requests
are
take
longer
than
a
sec.
Second,
so
it's
what
we
need
to
focus,
probably
but
but
yeah,
it's
tricky
so
so
yeah.
But
I
appreciate
that
appreciate
that
help
and
yeah.
A
I
had
one
question
then:
okay
on
that
topic
that
we're
chatting
yesterday
about
the
numbers
of
the
so
the
context
is:
should
we
invest
time
in
optimizing,
the
user
scenario
where
you
follow
a
link
to
a
diff
file
or
a
diff
line,
or
a
note
on
the
merge
request,
changes
tab
where
we
could,
instead
of
calling
all
of
the
batch
diffs,
we
could
call
just
a
batch
tip
for
that
file
render
it
then
the
user
would
just
request
to
render
the
rest
of
the
page.
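As a rough sketch of that flow — the endpoint path, the `paths[]` parameter, and the response shape here are invented for illustration, not the actual API:

```javascript
// Hypothetical sketch of the scenario described above: when the user
// follows a link to one diff file, fetch and render just that file's
// diff first, then load the remaining batch diffs afterwards.
// Endpoint path, parameter, and response shape are assumptions.
async function loadDiffs(fetchJson, render, linkedFilePath) {
  if (linkedFilePath) {
    // Fetch only the diff for the file the link points at.
    const single = await fetchJson(
      `/diffs_batch.json?paths[]=${encodeURIComponent(linkedFilePath)}`
    );
    render(single.diff_files);
  }
  // Then request the rest of the page's diffs.
  const rest = await fetchJson("/diffs_batch.json");
  render(rest.diff_files);
}
```

The design question is just sequencing: render the one file the user linked to first, then fill in the rest of the page in the background.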
A
So
the
question
is
this:
okay,
should
we
schedule
something
for
this
muscle
to
come
up
with
the
numbers
of
that,
or
should
we
just
do
it
amongst
ourselves
and
find
some
engineer
to
help
with
the
production
access
to
get
those
numbers
from
the
from
somewhere,
or
I
don't
know
how
you
want
to
go
about
it.
A
Right, and yeah: it's not about scheduling implementation for this milestone — just getting it ready for the next one, or whenever possible, and having the discussions around it knowing the usage. So, cool. Right — any other points?