From YouTube: 2022-08-24 Code Review Weekly Sync
A
All right, so in this sync I wanted to discuss the rollout of the in-product survey about the merge request experience. There's a link there to the rollout issue for the feature flag, and I wanted to discuss two things. The first point is how we're going to roll that out, in the context of the purpose of this survey, its objective, and what we want to get out of it. The second point is a more tactical one, about what is blocking us or not.
A
On the first point: my understanding, and the reason we started all of this and made many of the decisions about the survey itself and the way it's being tracked, is that this is first and foremost an experiment, to see how people respond to the survey, whether it is a good tool to use, whether it is reliable, and what we need to tweak.
A
If we want to continue doing it, for example, I don't know if we have really thought it through, not only in the tracking but also in how we visualize the data.
A
I don't know if we've thought: what if we actually keep this survey as it is for, let's imagine, two years, so the same user would see the survey prompt again and again? I don't think we've thought that through yet, because we would probably just keep this enabled for one instance of someone seeing it, which would be, I think, 90 days, or is it 180 days?
A
That means the 90 days will shift, of course, depending on when you see it and when that survey becomes available to you. Stanislav suggested some percentages, and I did some napkin math, and I think that would result in, after five days more or less, a hundred percent of SaaS users having it enabled for them. It wouldn't mean we would have a hundred percent of responses or dismissals,
A
but a hundred percent of all SaaS users would potentially see the survey if they visited a merge request. So yeah, I wanted to stop there and hear your thoughts, Stanislav and Ben, and anyone else who wants to chime in, to see what you think is reasonable here, given that the purpose, as I see it, is being more of an experiment.
B
Yeah, in talking this over with some other members of the research team, it was pointed out to me that 100 percent of SaaS users is potentially a whole lot of people, and I don't really know how this had escaped my grasp until now. But as a research principle, we generally shouldn't talk to, or bug, more people than we need to. So it depends on what we think is a reasonable amount of responses, and this will be a low-response-rate kind of thing.
B
I don't think we need to enable this for everybody on SaaS. That seems like a lot, especially without us pausing to make sure that everything is coming in as we expect. So I just have a concern there, especially if we're going to continue to do this kind of thing: survey people about MRs or various other things.
A
So I think what you're saying is we should try to figure out what is the right number to get statistical significance, or something like that, for what we want. Right, yeah. So this is based on Google's approach with their HaTS survey, the Happiness Tracking Survey, and they do this differently, in that they don't collect responses from 100 percent of people at the same time: they only show it to some people during a certain time period.
A
I think it was like eight percent of their user population every week, or one percent. So it was one percent, and then those people would no longer see it; then another one percent, and those people would not see it again. So it was rotating the user population. But here I don't think we have that capability to rotate; we just have incremental rollout, with no rotation of those buckets.
C
My question is: are you sure about the rotation stuff? Do we have it, or do we certainly not have it?
A
My understanding is that once you enable it for those 10 percent, the users in those 10 percent will be fixed. I don't think it will rotate, unless...
A
But I don't think it rotates by default; I don't think that would make a lot of sense. That's a great question, though, Stanislav. Maybe it does or maybe it doesn't, and we could hack that by doing what I was saying: enable it for ten percent, disable it, and then, once you enable it for 10 percent again, the code would pick another random 10 percent of the user population.
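Whether re-enabling the flag picks a different 10 percent depends on how the bucketing works. Percentage-of-actors rollouts are typically deterministic: a stable user id is hashed together with the flag name, so the same cohort is selected every time the same percentage is applied. A minimal sketch of that scheme, illustrative only, not GitLab's actual implementation:

```python
import hashlib

def in_rollout(user_id: int, flag_name: str, percentage: float) -> bool:
    """Deterministically bucket a user: same inputs always give the same answer."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # 0..9999, i.e. 0.01% granularity
    return bucket < percentage * 100

# The same users are selected every time the flag is enabled at 10%:
first = {u for u in range(1_000) if in_rollout(u, "mr_survey", 10)}
second = {u for u in range(1_000) if in_rollout(u, "mr_survey", 10)}
assert first == second  # sticky: disable/enable does not rotate the cohort
```

If the bucketing really works like this, the disable/re-enable trick would not rotate the cohort; only changing the hash input (for example, the flag name or a salt) would pick a new slice.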
A
Yeah, I think the way forward here is either to find out if there's a way for us to rotate those 10 percent, or whatever number it is (each time you enable or disable the feature flag, does it rotate the 10 percent?), or we decide on a maximum percentage that we will progressively roll out to, like the percentages you were suggesting, but cap it there.
C
If we leave it at 10 percent, we can just postpone the decision about how many users we actually want in this survey. So we leave it at 10 percent, wait for some time, collect some data, and then we can decide: do we need more data, or should we just keep it like that, and that's it.
A
Yeah, so doing the math of what Google did... sorry, before that: yeah, I agree with what you were suggesting, progressively rolling it out to 10 percent. So, you know, 0.1 percent or 0.01 percent, and then multiplying that by 10, and by 10 again, until we reach 10 percent. I wouldn't do that every day, though.
A
I would probably do that every week, because I think a day wouldn't be enough time for us to see people visiting MRs and then responding to the survey, unless we're expecting that percentage of people to always be online and looking at merge requests. So maybe a week would be a good interval before triggering the next rollout step, until we reach 10 percent. But just out of curiosity, how did Google do it? They divided by the number of weeks in a year,
A
so once you got to the end of the year, every week would have had a different rotation of users. That means that every week their surveys are only shown to around two percent of the user population, and then the next week it's a different two percent, and the next week a different two percent again. Which means that if they do that for a month, for example, around eight percent of the user population would see their surveys. So yeah, I think.
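Google's rotating approach, as described, can be sketched by hashing each user into one of 52 fixed weekly buckets, so a different roughly two percent slice (100/52) is prompted each week. This is a hypothetical illustration of the rotation idea, not something the current feature-flag tooling is said to support:

```python
import hashlib

WEEKS_PER_YEAR = 52

def weekly_bucket(user_id: int) -> int:
    """Hash each user into a stable bucket 0..51 (each bucket is a ~2% slice)."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return int(digest, 16) % WEEKS_PER_YEAR

def sees_survey(user_id: int, week_of_year: int) -> bool:
    """Show the survey only to the slice whose bucket matches the current week."""
    return weekly_bucket(user_id) == week_of_year % WEEKS_PER_YEAR

# Over four consecutive weeks, roughly 4/52 ≈ 8% of users are prompted,
# and each user falls into at most one week's slice.
users = range(10_000)
shown = {u for u in users for w in range(4) if sees_survey(u, w)}
```

Because the bucket assignment is fixed per user, nobody is prompted in two different weeks, which is the property the incremental-rollout-only flag lacks.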
B
And in our case we're doing an exponential scale, right, so the last week of the month, if we do it like that, will be when most people see it. So we won't be bothering most people for an entire month, and they only see it once anyway: if they dismiss it, they don't see it again. So yes.
A
Yes, so if we do it for a month, so every week we increase the rollout and we do it for four weeks, for example, we would have to start at 0.001 percent and then multiply that by 10 every week for four weeks, and that would get us to 10 percent by the end of the fourth week.
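The proposed schedule is just a geometric progression, which a few lines can sanity-check (the percentages are the ones floated in the call, not a committed plan):

```python
def rollout_schedule(start_pct: float, factor: int, weeks: int) -> list[float]:
    """Return the enabled percentage at the end of each week, growing geometrically."""
    schedule, pct = [], start_pct
    for _ in range(weeks):
        pct *= factor
        schedule.append(round(pct, 6))
    return schedule

# Starting at 0.001% and multiplying by 10 each week for four weeks
# lands on 10% at the end of week four:
print(rollout_schedule(0.001, 10, 4))  # → [0.01, 0.1, 1.0, 10.0]
```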
A
Yeah, does everyone agree with that? Also you, Yukai, because you will probably get feedback about this, positive and negative. People will annoy you.
D
This is an experiment that's been a long time in the making, in terms of both getting the survey in and figuring out how to roll it out, and I'm more anxious to get it out there and see how people respond than I am worried about the downsides. Are people going to be annoyed? Yeah, absolutely. Are some people going to provide good feedback? Yeah, I think so.
D
Let's just see how it goes. We may find out after a couple of months that nobody responds and we don't get anything useful, and then we turn it all off and try something else next time, and that's fine too.
A
Yeah, so what we're suggesting is a rollout over four weeks: start at 0.001 percent, and every week, at the end or the beginning of the week (we'll decide that later), multiply that by 10. By the end of four weeks we will reach 10 percent of the user population, and those 10 percent, which we believe are not rotating, will always be the same people: the same user accounts would see the prompt, and they can answer it or dismiss it.
A
But that's part of rolling this out, because we may get someone who hasn't visited GitLab in a long time, and they come back after two years of not using it, see the survey, and respond to it. So very infrequent users, but also very frequent users. I think that's part of the gamble and part of the study, to keep it unbiased.
A
So yeah, that would be Sid, right, and DZ and a few others. Let's see the survey.
B
So yeah, I think that's fine. I don't think we need to limit it. Sorry, Stanislav, to your next point: I don't think we need to limit it per account, just because we're collecting that info, right, and I would like to see if there's any significant difference there.
A
Okay, thanks, Stanislav, for also asking me about the rotation in Slack. I don't think that is a blocker to what we were discussing here, and whatever we find, please share it in the rollout issue. I'll be off the next two weeks, but I'm comfortable with what we've discussed here, so feel free to make any decisions based on that; I'm okay with that. I'm also okay
A
if everyone feels like they should wait for me to get back, but you don't need to wait. You can enable this, and I'll be back by the second week, when we're already at some percentage. But yeah, I'm not a blocker; my PTO is not a blocker for this, I think.
A
Okay, yeah, so the rotation is sticky, it seems. So disabling and enabling it again will always target the same users, is that correct?
A
Okay, okay, thanks. And finally, blocker or not: about tracking survey dismissals and renders, is that a blocker to enabling the feature flag and starting to get responses across SaaS?
C
Yeah, we can track the dismissals, that's really easy; I can actually create an MR for it today. But for renders it's not easy, because we have logic that hides the survey in all other tabs once you dismiss it.
C
And if we track plain renders, it will track all the tabs: if you have five tabs open, it will give you five tracking points, but the user will see the survey just once. So we'd have to actually track the visibility of it, and that's a bit complex, with IntersectionObserver and other kinds of stuff, so it has to be tested really well for it to be reliable.
C
I don't want it to give us false positives or false negatives, so it might take some time. So the first one is really easy; the second one, if we can live without it for some time, that would be okay.
A
Yeah, thank you for that context, that makes sense. I'm thinking it might not be an issue to keep sending the tracking events every time it is rendered.
A
Although, you know, it's not very performant, because we're repeating something that already happened, from a visualization standpoint we can de-duplicate the rendering events, so that each user's render is only counted once.
A
We can do that in Periscope and visualize it, so it would not skew or inflate the funnel in the wrong way. We can de-duplicate at the visualization level. But yeah, it might not be performant, and I understand if you wanted to do it right at the code level.
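The de-duplication described amounts to counting distinct users per event type rather than raw events; in Periscope that would be a `COUNT(DISTINCT user_id)` in SQL. A small Python sketch of the same idea, with a hypothetical event shape:

```python
from collections import Counter

# Hypothetical raw events: the same user can emit several "render" events,
# one per open tab, even though they only saw the survey once.
events = [
    {"user_id": 1, "action": "render"},
    {"user_id": 1, "action": "render"},  # duplicate: second tab
    {"user_id": 2, "action": "render"},
    {"user_id": 1, "action": "dismiss"},
]

def funnel_counts(events: list[dict]) -> Counter:
    """Count each (user, action) pair once, so multi-tab renders don't inflate the funnel."""
    seen = {(e["user_id"], e["action"]) for e in events}
    return Counter(action for _, action in seen)

print(funnel_counts(events))  # per-user counts: 2 renders, 1 dismissal
```

Deduplicating at query time keeps the client-side tracking simple, at the cost of storing redundant events, which is the trade-off being weighed here.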
A
Okay, let's roll with that. I'm comfortable as well; I don't have a strong opinion. So yeah, Stanislav, if you think that's enough to proceed with the dismissals, let's just do the dismissals now and think about the renders later. Yep, all right, cool. I think this was productive, and that's what I had to discuss. I don't know if anyone wants to discuss any other points, although we're already over time.
D
So thanks for leading the effort and getting it to this point. It's been, I think, much longer than any of us thought it would be when Pedro brought up this idea a while back.
A
Yes, yes, but yeah, thanks for prioritizing it as well. I'm hopeful it will be a good experiment; at least we will learn something: either continue doing it, not do it, or do it a bit differently.
A
What I'm afraid of, to be honest, is that, regardless of whether it's a positive or negative experiment, many other teams will want to do it immediately after they see this live. A lot of people and product managers will say: oh, I want this, how can I do it? Let's fork Stanislav's code and do it in our area now. That's what I'm afraid of. Please, no.
A
Yeah, for everyone's sake, no. But yeah, we will have to manage that, and I think that's also a good thing: once we start rolling this out more broadly, we should immediately say that this is an experiment, this is for merge requests only, and we're still figuring out how we can scale it to other areas if it is a success. Set the expectations that way, so people know what to expect, both as users and as people who improve GitLab, the product. Cool.
A
Thank you, everyone, thanks a lot for making it to this call. It was great seeing everyone, and have a great rest of your week.