From YouTube: 2021-09-15 Create:Code Review Weekly Sync
A
All right, Matt, you get the first one up.
B
All right, sounds good. Just a weekly update on the engineering allocation: for the last few weeks I've been putting the status in the epic, so you can always follow along there. The highlights now are that we're down to one infradev issue, and there's an MR for that, so we're getting pretty close to closing that one out. And then, just in the last hour or so, our error budget switched over to green, which is great. We were pretty close, like I said in Slack.
B
So for now we're green and good. Coming up, how we measure it is going to be changing again. I can share the epic around that, but the idea is that each endpoint would be configured with how fast we expect it to be. I don't know the timing on that or anything, but that'll be kind of the next iteration of error budget work that we'll do.
B
Yeah, yep, so we'll have to look, I think, yeah. The idea is that we wanted to first switch everyone to five seconds, so we can really focus as a company on which areas really need the most attention, and then start tuning it. Then each team can kind of iterate and slowly improve the slowest queries and make those better.
B
The problem with five seconds now is that, while most of our requests happen much faster than that, if a lot of them start taking four seconds we probably won't notice, because our threshold is so high. But that's kind of a temporary problem that we'll have to watch for.
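The per-endpoint idea above can be sketched roughly like this. This is a minimal, hypothetical sketch: the endpoint names, thresholds, and method names are illustrative assumptions, not GitLab's actual error-budget code.

```ruby
# Hypothetical sketch of per-endpoint error-budget thresholds: instead of
# one global 5-second threshold, each endpoint declares how fast we expect
# it to be. All names and numbers here are illustrative assumptions.
THRESHOLDS = {
  "GET /merge_requests" => 1.0, # seconds we expect this endpoint to take
  "POST /merge"         => 5.0,
}.freeze
DEFAULT_THRESHOLD = 5.0 # today's global threshold as the fallback

# A request stays within budget if it succeeded and met its endpoint's
# configured expectation.
def within_budget?(endpoint, duration_s, error: false)
  return false if error
  duration_s <= THRESHOLDS.fetch(endpoint, DEFAULT_THRESHOLD)
end

# Fraction of requests in a window that met expectations; the budget stays
# "green" while this ratio is above some agreed target.
def budget_ratio(requests)
  ok = requests.count do |r|
    within_budget?(r[:endpoint], r[:duration], error: r.fetch(:error, false))
  end
  ok.to_f / requests.size
end
```

This also illustrates the blind spot described: under the single global threshold a 4-second merge request page load still counts as fine, while a 1-second expectation configured for that endpoint would make it eat into the budget.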
A
In theory, we will come out of engineering allocation soonish, right? Because our error budget will be good and we have no infradev issues, so supposedly we would come out of engineering allocation. Is the thought that we would end up back in engineering allocation once they start changing these thresholds, versus letting teams sort of naturally address them? Have they not really decided yet? Are they not going to let people out of engineering allocation until they've got the new version in place?
B
No, I don't think... sorry, my dog is going crazy. I don't think that us being in engineering allocation is dependent on this new change rolling out. So yes, in theory, we should be able to get out of our current engineering allocation shortly.
B
But then, right, we're still trying to figure it out. I know the question will be: when will that happen? And I'm trying to figure that out. The other complicating thing around the engineering allocation is that pretty much all of the backend engineers are either in an engineering allocation or a headcount reset.
B
The new term for that is "swarm". So we want to be careful: if we come out of an engineering allocation, I don't want the whole team to have to swarm over to some other team and work on their stuff. But that kind of gets to my next point about planning and 14.4, so I'll just roll into that and we can cover them both at the same time. I tried to prioritize performance, security, and some of the reliability stuff, because we have...
B
Not all of it is high priority, but we have a lot to do in that area. So I think, as long as we keep working on our own performance and things... that's kind of why I prioritized some of those, to keep us focused, because the goal for all of this is reliability and improving it. And I think the mergeability stuff falls into that category, which is something we wanted to do anyway.
B
I don't want to be, you know, totally selfish and say, nope, we can't work on anyone else's high-priority work, which is what a lot of other teams are having to do right now. But I think we have enough high-priority performance and security fixes of our own, not necessarily related to the current engineering allocation, that I'm hoping we can focus on those.
A
No, I think that makes sense. It's probably a valid concern that if we came out of an allocation, our engineers would go somewhere else.
B
Yeah, that's what we've seen with the editor team, I think, which is working on the managed plan; I think the ecosystem backend engineers are helping on different teams. So if you look at the engineering allocation handbook page, the table of which teams are impacted is pretty much every team.
B
Everyone is impacted at least somewhat by this now, I think. If we can continue to focus on reliability and these quality issues, but do it for our own team, I think that would be ideal.
B
So basically, we already had twice as much in 14.4 as we had capacity for, so I just had to try to prioritize things and then cut everything below that off somehow. But if we need to rearrange things, I'm certainly open to that. That's why, in the planning issue, I also listed out all the ones that were in there that I moved out, so you can review those and switch any of them, and we can certainly discuss that.
B
Yeah, I think at some point we should probably go through our whole backlog of performance issues and make sure they have the right priority and severity labels on them, so we can think about those as well. I think there are 110 or so, and then there are 10 or 15 security issues in our backlog. We should make sure those are accurate and labeled accordingly.
C
Hey, sorry about that. I had my point as the first one in.
F
I was at the gym, I think, and then I was in the subway, but I'm here now. Anyway, I wanted to talk about this because we have been going on and on about it in the issue, and I wanted to reach a decision quickly on whether it's something we're going to do soon or not. For context, for anyone not familiar with this issue: the idea is to display a prompt in the product on the merge request page.
F
Measuring the experience is something that we already do at least quarterly, with the SUS surveys that capture users' perception of usability and their experience. But that's for the whole product, and this would be much more targeted: just the merge request experience. We would ask them specifically about merge request performance and the merge request experience in general. These kinds of in-product prompts have had good results in the past at other companies.
F
It's a very common approach used by Google, for example, not only when they release new products but also when they keep improving products, or are about to make big changes or add big new features. They have the baseline, and then they compare user sentiment: whether it went up or down, whether people are more or less satisfied with the experience.
F
The prompt could, you know, be dismissed. Ideally that banner would contain some data about the users, so that they wouldn't have to self-report on, for example, how long they have had a GitLab account, whether they are on a paid plan or a free plan, or how often they use merge requests. Having people self-report on that is not ideal, but it would, I think, reduce effort significantly if we just say: hey, here's a link to the survey, and then ask people those questions there and have them self-report on those characteristics.
A
And we've talked about this a lot on the issue. I don't disagree with wanting the data or doing it. I think at this point it's a question of effort: whether or not we have the required parameters we're looking for, to be able to construct a survey that is pre-populated with the data you want, versus trying to get people to self-select. And I think the question is how big of a deal that is, because if it's a big deal, I don't know if we could do it in 14.4.
F
Yeah, I asked Ann about that, one of our UX researchers, and she said that she would rather not ask people those questions about how long they've been using GitLab or how often they use merge requests, because they're more difficult to answer. So it would be more difficult for us to read the data and make something out of it.
F
But I think it's still worth doing, and we can iterate our way there. The most basic thing would be, as you said, just to display a link to a survey, and users can dismiss it.
F
If they dismiss it, we won't show that link again for, I don't know how long, but I would say at least a year, perhaps, or something like that. And it would have to be behind a feature flag so that we could enable it on SaaS.
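The gating just described could look roughly like this. This is a minimal sketch under stated assumptions: the method name, flag argument, and one-year cool-off constant are hypothetical, not GitLab's actual implementation.

```ruby
# Hypothetical sketch of the survey banner gating: behind a feature flag
# (so it can be enabled on SaaS only), dismissible, and not shown again
# for roughly a year after dismissal. All names here are illustrative.
REDISPLAY_AFTER_SECONDS = 365 * 24 * 60 * 60 # ~1 year

def show_survey_banner?(feature_enabled:, dismissed_at:, now: Time.now)
  return false unless feature_enabled # flag off: never show the banner
  return true if dismissed_at.nil?    # never dismissed: show the link
  (now - dismissed_at) > REDISPLAY_AFTER_SECONDS # wait out the cool-off
end
```

The feature flag check comes first on purpose: it lets the banner ship dark and be turned on only where it is wanted, independent of any per-user dismissal state.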
A
Even if we get, you know, the interactions anyway, it seems like getting this data, and then drawing the conclusions we need from it, would be a challenge, so I think we just need to push for what we can get there. I will work with Andre on that. We'll have to finalize front-end planning, I guess, before the end of this week.
A
Now
that
I
look
at
a
calendar,
so
I
will
put
a
note
to
like
put
this
in
and
maybe
what
we
do
during
14
4
is.
A
Maybe we just turn it into a spike for someone to investigate what data we already have, and what it would take to get all of the required data. Then we can come back and make a more informed decision, late in 14.4 or in 14.5, about actually going through with it. I honestly don't know... based on what I know about fulfillment and subscriptions...
A
I actually don't know if any of that information is readily available in the GitLab codebase in an easy-to-consume way, so some of it may be hard to get, and we may need to rethink some of that. But yeah, I think that's...
A
...a month plus, right, before starting anything. So we may need to look at the effect on how we were thinking about UX and research for the quarter anyway.
F
Yeah, I think that's fair, and a spike is reasonable.
F
That would already make me happy, knowing that we're looking into it. For example, this kind of data would have been tremendously helpful for driving home the results we've achieved with the performance improvements: the numbers say we made huge improvements, but what do the users feel about it? Anyway, that would have required us to have implemented it already, but going forward, yes, as soon as we can have something like this...
F
It
would
be
good
and
also
to
understand
if
it's,
if
it's
worth
investing
more
in
this
kind
of
approach,
depending
on
the
data
that
we
have,
but
if
the
initial
data,
as
you
say
it's
already
a
bit
hard
to
interpret,
it-
would
make
us
make
it
even
harder
for
us
to
make
a
case
to
continue
investing
in
it,
so
I'm
I'm
it
would
be
okay
to
to
do
a
spike,
I'm
comfortable
with
that.
F
But
again,
I
I
just
want
to
bring
this
up
to
set
some
expectations,
because
this
is
something
that
I
want
us
to
do,
but
I'm
aware
of
all
of
the
constraints
that
we
have
at
the
moment
so
yeah,
it's
just
balancing
priorities,
kai.
I
think
it's
what
you
do
best.
A
Yeah, I think we can do that, so I created an action item in the agenda, and we'll do that. Thank you.
A
So, enjoy your trip. I'm jealous, but enjoy your trip.
E
Months ago I said, I don't care what it costs, give me... so I'll be going for a while. Pedro, I put that in there because of the discussion you were just having. If you need me to do a language edit, I'm available up until that Friday, and then I go away. Awesome. Thank you, Amy Cole. All right, that's it for me.
A
I do want to jump back up, Matt. You mentioned the performance issues and the security issues, and I pulled the list of performance issues. You're right, it's 110, and many of them are three- and four-year-old issues. One in particular is three years old and just says "slow front-end rendering of merge requests"; I feel like I could probably confidently close it, given all of the improvements we've made.
A
I guess the question is: do you think it makes sense to have every engineer take some number of these per week and just go through and close them? I imagine in many cases we could close them, or they're duplicates of other things (duplicates will be harder to find), but we could probably close a tremendous number of these. Or they don't even make sense anymore; one, for example, was just about caching one particular thing.
B
Yeah, we could try to figure out how to coordinate that, but I think it makes sense, because, right, I figured there were a lot of duplicates, and I saw some of those really old ones. That's probably true not just for performance issues: our backlog probably has a lot of issues that are old and could be closed, which is a whole separate topic. So that might be a good thing to do, to go through there and see.
B
It
would
be
great
if
we
had
a
list
that
we
were
confident
was
accurate
and
prioritized,
and
then
that
would
make
it
so
much
easier
to
go
through
like
a
planning
process
and
to
share
with
external
stakeholders
of
like
here's
all
the
performance
things
we
we
know
we
have
to
do
and
yeah.
So
we
can
coordinate
that
and
figure
that
out
async.
I
think,
but
I
don't
know
how
we
would
do
that,
but
I'm
sure
I'm
sure
we
can
figure
out
some
lightweight
way
to
arrange
that.
A
Yeah,
I
think
the
key
would
be
making
sure
we
don't
investigate
or
try
and
fix
anything.
It's
purely
like
glance
at
the
issue.
Description
does
this
seem
reasonable
or
not
reasonable.
If
it
seems
unreasonable,
then
just
let's
err
on
the
side
of
closing,
like
let's
just
sort
of
more
optimistically
close,
I
think
is
probably
the
way.
I
would
say
that
and
then,
if
it
is,
you
know
something
that
we
should
do
or
could
revisit
then,
like
at
least
we
still
have
it
that
way.
A
We
cut
this
list
pretty
quickly,
because
I
think
I
mean
just
in
looking.
The
bulk
of
them
are
three
plus
years
old,
so
we've
clearly
addressed
like
I
think,
recent
items
and
honestly
the
last
touch
date
on
a
lot
of
these
is
nine
months
ago,
when
we
bulk
relabeled
and
that's
the
reason
they
show
an
updated
date
and
like
within
the
last
nine
months,
otherwise
they
weren't
even
touched
prior
to
that.
So
is
the
bulk
relabel
from
source
code
to
code
review
that
actually
even
triggers
some
of
these
being
remotely
relevant.
A
So
I
think
that
might
be
a
useful
thing
to
do
during
fourteen
four.
If
we
can
just
just
have
people
try
and
prune
that
list
sure
that
makes
sense
all
right
yeah.
That
sounds
good
awesome
cool!
Well,
thanks!
Everyone
enjoy
the
rest
of
your
week
and
enjoy
the
rest
of
your
wednesday
and
yeah
have
a
good.