From YouTube: Create:Code Review Weekly Prod/Eng/UX Sync - 2020-12-16
A
I put a few high-level things down; I just wanted to start some feeler conversations and get a feeling for how people feel about these topics, what we're doing, and how we're managing them. The first one up is feature flags. I put a couple of small comments there, and then Michelle, so thoughtfully, roped me into commenting on a working-group merge request about feature flags, which is going over about as well as I expected it would with that group.
A
So there's a much longer explanation there. I would just say that my general feeling is this.
A
I have seen the wins of feature flags, right: they are valuable, because we had them and we were able to turn things off. I have also seen feature flags compounded on each other; complicated, intertwined, spaghetti feature flags. I've seen feature flags used as a crutch to decide not to turn something on for customers, because: could it be better? Sure. But is it better than what exists today?
A
Probably, as well. My perspective is that the negative side of feature flags, the things I've mentioned, is much more visible than the positives we get, because the positives are the natural things you would expect them to do. So maybe there's a bias where the negatives stand out, because you don't hear about the cases where feature flags did what they were supposed to.
A
But I don't know; I just wanted to see how this group was thinking about them. I will also say that this group uses significantly more feature flags than we did in our other group, just as a comparison.
B
I had a few more thoughts, right, and thanks. I can skip over a couple, because they're basically just reinforcing what you said about the positives, and you've already seen the examples. I will just highlight that the compounded case is rare. The one particular example you mentioned in that merge request comment, particularly after the refactor we've done, where we have developed features depending on that feature flag: that's rare.
B
What we've usually done in the past, by rule, is use the feature flag to toggle between two behaviors, or just isolate what they affect. Large features like reviewers usually have several merge requests under one umbrella feature flag, and the other case was the refactor, which is rare; we try to avoid that as much as we can. The other thing I think we need to understand is that the documentation recommends flags by default.
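The toggle-between-two-behaviors pattern described here can be sketched as follows. This is a minimal illustration only: the registry, the flag name, and the two flows are hypothetical, not GitLab's actual Feature API.

```python
# Minimal sketch of a feature flag toggling between two behaviors.
# The registry, flag name, and flows below are hypothetical
# illustrations, not GitLab's actual Feature API.

FLAGS = {"merge_request_reviewers": False}  # off by default

def flag_enabled(name: str) -> bool:
    """Return whether a flag is currently turned on."""
    return FLAGS.get(name, False)

def assign_people(mr: str) -> str:
    # The flag isolates exactly one toggle: new flow vs. old flow.
    if flag_enabled("merge_request_reviewers"):
        return f"reviewers assigned to {mr}"   # new behavior
    return f"approvers assigned to {mr}"       # old behavior
```

Flipping the registry entry switches behavior without a deploy, which is the mitigation value mentioned later in the conversation; the hazard being discussed is when other features start depending on the same flag instead of it isolating one toggle.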
B
I think the maturity of the stage does have something to do with that, in that we've had cases in the past where, if we didn't have the feature flags, we wouldn't have been able to mitigate situations that were basically breaking the workflow without a deploy. That affects the margin of where we're comfortable shipping, and I think that's the bigger conversation inside this topic.
B
I'd also like to highlight that the process is improving. I added a link there to a spreadsheet I prepared a couple of days ago to give an overview of the current in-flight flags, and I noticed that the flags created before the YAML files are a lot more lost than the ones created after. So I think the process is improving incrementally, and we will continue to improve. That's also something to keep in mind.
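The YAML files referred to here are per-flag definition files, in the style GitLab introduced for tracking each flag. The fragment below is only illustrative of that shape; the flag name, URLs, and milestone are hypothetical placeholders.

```yaml
# Illustrative feature flag definition file, in the style of
# GitLab's config/feature_flags/*.yml entries.
# The name, URLs, and milestone are hypothetical placeholders.
name: merge_request_reviewers
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/12345
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/67890
milestone: '13.7'
type: development
group: group::code review
default_enabled: false
```

Because each flag gets a file like this, an in-flight-flags overview (such as the spreadsheet mentioned above) can be generated from them, which is why flags created before these files exist are "a lot more lost".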
C
This is in order to help us track the feature flags that get introduced, removed, defaulted on, and so on. My opinion on this whole thing is much more black and white, crystal clear: until the documentation changes, the engineers will continue to follow it, and it says two things that I think Kai is going to hate, which is that most things need a flag, and those flags should be off by default. I'm not saying that shouldn't change.
C
That's my biggest problem with feature flags. It is rare, but it's crippling when it happens, and it makes you ask: when will we learn to be better? I'm not saying we're awful, because it is rare, but the side effects of those compounded feature flags are so confusing and ambiguous, and there are gaps that we missed, things like that.
A
When you say the docs are clear, are you saying the engineering-specific docs are clear? Because there's actually the point above, about all of the feature flags (and we can debate this all day; I think this is why this working group exists): Sid's comment is basically right above that, and it says most features should be developed without a feature flag. If you've ever watched Sid talk about feature flags, one of the things he will say is that using a feature flag slows down all these other pieces, and it's sort of a sign. And I think this is Pedro's point G, and the third thing we want to talk about.
A
It's a sign that you didn't scope the work right when you need the feature flag, because you're not delivering iteratively enough. To me, that's maybe the debate I want to have about feature flags, and maybe we just get into it in G, and in three and four later, as we start talking about scoping better. Because I would say I've seen this group use feature flags for things that probably should have just been shipped; that's maybe the better way to phrase it. I think we have the points that Pedro has below, so maybe I'll just let him go. But I'm curious how you rationalize those two things. I get that developers get to make a call when they think it's needed; it just seems like we over-rotate to "need", because those other things are higher on our list.
D
There are a lot of good things that feature flags allow us to do, and sometimes it's not possible to break things down into an even smaller piece of work. It's better to break it up into smaller merge requests, so it's easier to review, and things have a certain cadence, so perhaps feature flags make sense in those cases and others. But I think what you're hinting at, and what we will talk about next, is just the lack of confidence that we have in solutions in general. That is due to a lot of different things, including issues sometimes being too large: in the beginning we think they're not that big, so we start working on them, and then we realize, oof, this is actually going to be more work than we thought.
D
Okay, so let's just save what we have behind the feature flag and continue working on it in other merge requests. I think this has become a habit in this team, and maybe others as well, and that is a sign that we're not doing our planning work as well as we should be. That's my comment.
B
One of the reasons for the lack of confidence is the incredible amount of variation we have in production data. The number of people using this, the flows they have learned, and the expectations on our product area are significantly more demanding, and we just don't have the ability to test all scenarios until it reaches production. Even on small features that seem trivial, we're sometimes surprised when something hits production, either by the complexity or by limits of the application that are not testable in local development. The other thing, about iteration, is separating iterating from the code perspective versus the customer perspective; there's still iteration, but we can talk more about that with the other topic.
B
I've seen, at other companies, setups where you have an anonymized version of production data, but that raises a bunch of questions about legality and all that. I think we haven't focused on having that because it's a lot of cost to keep up to date, and the real data is in production, and the process for feature flags allows us to use it.
B
I had tested, in the past, having one instance per branch to properly test things, but the overhead of having that, and then having to load the data, is just too much for one feature. If we need that, we need to focus on doing it as a side project for architecture, or as an environment.
A
Practically, I'm wondering if we can do a couple of things to help make this clearer. One of the things I'm curious about is weight. When a weight is provided for an issue, are we factoring in the life cycle of a feature flag as part of that? What I mean is (and this is the semantic debate I've been having with the people in the working group): inevitably, or almost always, when you use a feature flag, it takes two milestones to deliver anything. We can argue about what the word "deliver" means; in product terms I'll say deliver to a customer, because ultimately nobody cares if you put it in the code base and they can't use it.
A
What I will say is that it isn't fully accounted for. You know, we get a weight-3 issue, we look at that, and we say: okay, we can make that work and fit it. The reality is that there's probably another point, some amount of time someone has to spend toggling the feature flag, plus an additional weight-1 issue that magically shows up in the next milestone that says:
A
"I need to remove that feature flag and deal with something." So what I'm wondering is: could we look at things from the beginning and say, we're going to do this, it is going to use a feature flag, and that actually increases the total capacity cost (you know, someone's going to have to dedicate time to this) to five? Could we more realistically talk about the impact that feature flags have on the other work that engineers can deliver? Because there is one.
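The accounting being proposed above amounts to simple arithmetic; the sketch below just makes it explicit. The particular weights (1 point for rollout, 1 point for removal) are assumed numbers for illustration, not a team policy.

```python
# Sketch of the issue-weight accounting discussed above: an issue
# estimated at weight 3 also carries flag rollout and flag removal
# work that usually lands in the next milestone. The extra weights
# here are assumed, illustrative numbers.

def effective_weight(feature_weight: int,
                     rollout_weight: int = 1,
                     removal_weight: int = 1) -> int:
    """Total capacity consumed once the flag life cycle is counted."""
    return feature_weight + rollout_weight + removal_weight
```

Under these assumptions, the weight-3 issue really consumes 5 points of capacity, which is the kind of up-front adjustment being suggested.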
B
Sorry, that's me. I just want to say that we do account for adding the feature flag detection, but we don't account for the rollout of the feature flag to test it; that's outside the issue. As soon as the code is delivered in master, we assume the issue has been completed.
B
I think we can improve by scheduling the rollout issues together with the features that are behind the feature flag. What's harder to cover is when the decision to add a feature flag happens during development, and you don't have that in planning.
C
Ours is similar. If we know it's going to be in the next milestone, we'll pre-schedule it: when we add the flag, we add the rollout issue, pre-schedule it, and add the weight. But most of the back-end feature flags (and I have no data to support this) just happen behind the scenes, I think. Pretty much every performance fix we've put out in the past six months had a feature flag.
A
Okay, I mean, I think that's fine. The other one would be on the merge request side. I know I have seen rollout issues get created on the fly during this milestone, as I've watched issues roll in where clearly a feature flag was introduced, and we do create those.
A
Could we just be more explicit about rollout issues being introduced? And then another crazy thought: if you submit a merge request adding a feature flag, you would also immediately submit a merge request that removes the feature flag, even if you're not ready to remove it. That has a cost of its own, in that the code base might be changing and removing that feature flag might not work the same later, but it would surface the fact that you're compounding things inside of a feature flag.
A
That is, if you can't immediately back that merge request, or that feature flag, out within the next milestone. The intent would be that you could. We actually have a two-step process: you default it on and then you have to remove it, which is two merge requests to do one thing. But I'm curious if we could make that work.
A
I mean, it obviously does, right? It has to. It requires a merge request, and it requires people to type stuff in Slack. We just don't have a good measure of it, because it's probably happening outside of planning.
B
Then, like you said, opening the merge request ahead of time could leave it lingering for two milestones, depending on the feature, and keeping it up to date might be a cost, and it messes with the metrics. And actually, opening the merge request to do the change is not what's taking our time; the testing is.
B
Having an owner (sorry, sorry to interrupt), having an owner will have a huge impact there, because one of the things we've seen is that we casually create the rollout issue and that's that, but we haven't specifically said: all right, now you own this. Something Kai pushed us on in the merge request was "give me two owners", front end and back end, and I think that changes things significantly: you two are in charge of rolling this out to completion.
B
I think that's a good step that we have started, and on the spreadsheet I noticed that the ones that don't have an owner specified are the older feature flags, not the recent ones.
C
Group splits, yeah. I have nothing else to add. I would just say that what Andre said is right about the rollout issue being part of the process now, automatically, but also, this has always been the process here. The process also dictates that the feature flag info has to be added to the description of the issue, and not after you create the feature flag, but as soon as you, as an engineer, review that issue and think: I'm going to have to release this with a feature flag.
A
We can save these topics for the new year. I think we've talked a lot about feature flags. I'm thankful for the conversation there, and I think you know the goal.
A
We'll see what we can do, and I'm glad everyone was receptive to my complaining about them. (Oh, absolutely.)
C
As a next step, now that I have this really cool dashboard of how many feature flags are out there, and Andre, given what you mentioned about the DRIs: maybe you and I can refine some of the existing rollout issues and start associating a DRI with each, even if they're not being worked on right now. So let's do that.
A
Yeah, let's talk about iteration; I don't think we'll have a ton of time to get into it. One of the things that's interesting about iteration is that you can slice it a bunch of different ways, and there is certainly value in getting things into the code base even if they're not quite ready to be delivered. I would say that is sort of the bare minimum for iteration.
A
That way you don't have those long-lived merge requests, and we get those in. I'm curious, generally, about the other side of this (and again, maybe we should just get to number four): "let's not quite ship this yet, while we add two more things that we think are needed." So I'm curious how we as a group feel about adding additional scope or features, versus shipping much more incremental improvements, recognizing that we need to continue working on something but actually getting it in front of users faster.
B
Yeah, you can skip my point. I think it's just what we already talked about: separating code delivery from product delivery. But Pedro, I want to hear your point.
D
Yeah, to summarize, I think it's about increasing user value, even if it's a very, very small value that we're adding. But we also have to balance that. As you said, we are a group with a very old history (we're one of the oldest areas of GitLab), so we have to balance new work against the current value that we're already providing to users.
D
How can we balance that and keep the things we're making consistent with what we have today, with the current workflows? Above all, regardless of what we do, we shouldn't ship something that is broken, right? That goes back to the point about feature flags, but in general: if we're making something widely available, we should ship something that works, that is not broken, doesn't have bugs, and has been tested.
E
I actually have a clear example, Kai; this was right before you came on the team. We had received some complaints that reviewers were missing files because they were too large to show, and we just showed a little "this file has been collapsed" notice, maybe 30 pixels tall or something like that. Our fix was just to make it taller: a little bit taller, with an orange background.
E
So we shipped that, which was a positive improvement for the people who were missing files. But for people who collapse files to get them out of the view, those collapsed files are now taller, so they're seeing more on their screen despite having collapsed the file.
E
This is one of those things where we didn't ship anything broken, and we could have iterated to make it a better experience for both sides, but there was a pretty vocal reaction to it and we wound up rolling it back; that's this view that whoever is sharing (I guess Andre) has up. It's hard in our group to iterate on something, because we have such vocal users and so many people using it in such specific ways. We can write something really simple that's not broken; like Pedro is saying, this wasn't functionally broken, but it still broke some people's workflow. I don't really know what the conclusion here is, but it's an example of rolling out something really small, frankly just a CSS change, that still wound up being reverted, because we had to.
D
Yeah; thank you, Thomas, that's a clear example. The only thing I would say, also regarding the next points, which we won't have time to discuss, is that we discovered this as we went. We don't plan things to be broken, right? We plan the scope, the requirements, the design, and the solution so that it's not broken.
D
So I think in this case we were kind of set up for failure, because we were trying to iterate on something that wasn't possible to do in the first place, or we needed more time to do it properly, as Thomas then did with the second, third, and fourth merge requests. So I think it just comes down to planning things better.