From YouTube: Code Review Weekly Workshop - Sep 16, 2022
Description
In this session we discuss approaches to community contributions and identifying testing gaps. We also pair on a gitlab-ui MR review.
B
Thanks for joining us for the Code Review Weekly Workshop. Yannick's got something to show and tell, and/or a question. So what's going on, Yannick?
C
Yeah, hey everybody. You've probably just mentioned a little bit of it. What I'm bringing along today: in our last meetings we've been speaking about merge requests that include GraphQL queries, and how they're a little bit tricky to test and review, and all these sorts of things. The MR we're looking at here is a community contribution, and in my eye an astonishingly good one. On one hand, the changes are pretty much straightforward.
C
We have an already existing feature which uses the REST API, and it is now refactored to use GraphQL queries. No new features added, just a refactoring, pretty straightforward. So that's fantastic! I reviewed this the way I review things, which I'll tell you more about in a second, approved it, and gave it to another maintainer without any comments. I was just like: okay, this is how you do this, this is just perfect. Turns out:
C
Not quite. So there were other people involved, and I was very happy to have them around. They found no major things, but still a couple of them.
C
So I think this is something worth bringing to this meeting, and we could approach it in a couple of ways, whatever you feel comfortable with. Option one: we just start out giving this a review without looking at all the things others have found, and see how far we can take it. Or we can basically expand all the threads and see what is happening there.
B
So I'm really interested in your thought process: what were you looking at when you concluded this was okay, and then what would be the lessons learned after it? You've piqued my interest; the story sounded like just every other day, but then: darkness.
C
I'm very happy to do it this way. Full disclosure: these things that were mentioned in there, I have not yet fully understood, which was probably part of the reason why I didn't mention them. But I'm happy to tell everybody how I approached this. So, as a quick overview, we are looking at the issue analytics request for projects and groups.
C
If I remember correctly, it is pretty much straightforward. Speaking in REST terms: making the requests, getting the data, displaying it. GraphQL-wise it's kind of the same: no mutations, nothing crazy, pretty much getting data and showing it. So I dealt with that first. I did manual testing: everything's looking fine, all the features are there, and as far as I could tell this has been working. So that was pretty good. Checking that would be my very first go-to.
C
My second go-to would probably be the tests, and that is also probably the trap I fell into. I've been looking at the tests and checking what changed. Is there anything missing? Is there anything significant going on in there? What I first encountered is that the test suite really didn't change that much, other than, obviously, we used to feed a GET request into it and now we're mocking all the GraphQL things. But other than that:
C
Nothing has really changed in terms of functionality. This was still a full box of all the things that we had before. So I was kind of already happy with that, seeing that no tests are failing and no tests have been removed.
C
That was my bold statement. So, with the test suite covered, I was basically just looking at the actual implementation.
C
Just checking if I could see anything that makes me raise an eyebrow. But most of the things were the query itself, or the implementation code to actually trigger the query, keys that have been renamed, things like that; nothing to really pay too close attention to. So that was kind of my first approach: anything that makes you raise an eyebrow.
C
Okay, so yeah, hence my first comment: I was sold on this thing. Okay, nice, great, congratulations, Kate, great contribution, very impressed, all good. The fantastic Dave then took over, and here is where I felt like: okay, I need to work on my review skills. Minor suggestion, well, okay; praise, yeah, I probably should have done that as well. But other than that: a potential testing gap. Let's see what Dave has to say. And, disclaimer, this is the part I don't fully understand.
B
So here is likely the way I'm going to interpret this situation, and this is a lesson learned whether you're frontend or backend; something to just keep in mind. The intent here would have been a pure refactor.
B
No new features: we used to do REST, now we're doing GraphQL in this component, pure refactor. It's a big assumption that we have proper coverage to start with, and that's likely what Dave is catching on to. So, one way to test this: you can either look at the MR as an absolute,
B
what is the health of our code absolutely, or what's the health of the code delta because of this MR. If you just look absolutely, it'd be: okay, I'm touching these lines, let me just start removing lines, and do tests fail? That's my favorite way to test whether there's code coverage or not. Rather than looking through the tests like you described, maybe what Dave did was: okay, we're doing some sort of conditions here,
B
which is a little different, so let me just check if we've actually tested them, by removing them, running the tests, and seeing if they fail. That's how I know. And I've at times stumbled across a reduction in coverage that way, but then it brings up the big question: is that a blocking issue? Because did we have this coverage before?
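The probe B describes, delete a touched line, rerun the suite, expect a failure, can be sketched end to end in a throwaway shell session. Everything below is hypothetical: the directory, the tiny add function, and its guard clause are invented purely to illustrate the technique, not taken from the MR under discussion.

```shell
# Toy demo of the coverage probe: build a tiny "code base" with a passing
# test suite, then delete a guarded branch and rerun. All names are made up.
mkdir -p /tmp/coverage-probe && cd /tmp/coverage-probe

cat > calc.sh <<'EOF'
add() {
  [ "$1" -lt 0 ] && { echo 0; return; }  # guard: clamp negative input to 0
  echo $(( $1 + $2 ))
}
EOF

cat > test.sh <<'EOF'
. ./calc.sh
[ "$(add 2 3)" = "5" ] || { echo FAIL; exit 1; }
[ "$(add -1 5)" = "0" ] || { echo FAIL; exit 1; }
echo PASS
EOF

sh test.sh                      # prints PASS: both behaviors are covered

# The reviewer's probe: remove the guarded line and rerun the suite.
sed -i.bak '/guard/d' calc.sh
sh test.sh || echo "suite fails, so the deleted line was covered"
```

If the suite still passed after the deletion, that would flag a coverage gap on the very line the MR touches, which is exactly the situation being debated here.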
B
Is
it
in
scope
kind
of
right?
But
I
would
say
I
would
argue
we're
touching
it
so
whatever
we've
touched
and
changed
should
be
well
tested,
because
we
want
to
ensure
that
a
refactor
is
good
and
working
and
like
as
expected
and
continues
to
so
I
would,
I
think,
refactoring
does
imply
if
there's
no
test
for
something
you
get
to
you
get
to
add
tests
now.
B
But I think there are different levels of it, too. I'd love to hear Cassio or Drew or Andre: what do you think about this situation of refactoring and identifying pre-existing testing gaps? How would you handle that in a review?
D
From my perspective, it basically depends on many factors, one of which is what Yannick mentioned: a community contribution versus a GitLab employee contribution. In general, what we try to follow is the boy scout rule: you should leave the code base, or whatever you are touching, in a better shape than it used to be. So definitely, if there is a way to improve things that is not very cumbersome to do as part of the same merge request, then we should do it.
F
I reckon, when you say it's a community contribution, this is a contribution from someone from the community, not from GitLab?
F
Right, okay. I guess for GitLab people too, when they touch the code and there is no test coverage, there is a bit of a conversation. I mean, sorry, I'm very new to GitLab, just thinking out loud right now. So generally speaking, in my opinion: if it's a very crucial path and someone has the time to add the test, then we should definitely encourage people to add tests.
F
Generally speaking, I would still say we should add the tests, but I guess we can kind of let it slide to some degree, hoping that the person would come back when they have more time to add the tests. But for a community contribution, I would say, I don't know why, but I feel like I would be a little bit stricter, like they have all the time.
B
Love it. That's so funny, because that's usually not the perspective I have on community contributions. Usually it's: you've donated your time, thank you so much, I don't want to ask for more. But I love your perspective, like: what else are you doing?
B
Sometimes it's: things are broken on .com, and this is a very fragile piece of code; we know this fixes it, but we can't add a test because that's too complex, and we'd really like to fix .com right now, that's a big deal. So yeah, we should probably do that and defer figuring out how we can add regression testing, if that comes up. Those trade-off decisions definitely need to happen at times.
B
I will say, when something isn't urgent with MRs, it's worth a cycle of keeping that standard of quality as high as we can. It's worth one more cycle. And in this case, David catching that was great; that was a really good catch. And I'm glad Natalia jumped in, like: yeah, we don't want to use wrapper.vm. So this looks like awesome asynchronous collaboration: David identified a testing gap,
B
Natalia had a great idea for a better way to check it, and what David did then is my number one favorite way to help contributors: if we can provide a patch for them that they can review, study, and apply. Every time I've applied a patch, it's really easy to think, oh well, no one's learning anything, or whatever; I've heard some people comment about patches in that way. But I think well over 99% of the time I provide a patch, the person that applies it is now reviewing it, looking over it, and owning the solution, and that's a great way to communicate changes. It makes it easy for the contributor, so the way David applied the patch was awesome. Yeah.
C
And regarding the pictures we're getting—
F
I just wanted to ask: by patch, do you mean someone from GitLab jumps in and basically adds some code on top of the contributor's?
B
Sorry, can you go back to... can you go back to the... So what the contributor does is actually copy this to the clipboard and then, through the terminal, do something like pbpaste | git apply, and it'll apply the patch locally. So the change is still coming from the contributor, but the reviewer or the maintainer is kind of giving a large, well, not large, but a non-trivial code suggestion. It's just: here's a code suggestion. And so, yeah, using patches.
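The round-trip just described can be sketched in a throwaway repo. The file name, its content, and the patch name below are invented for the demo; on macOS the contributor's side would literally be pbpaste | git apply, with the clipboard standing in for the suggestion.patch file.

```shell
# Throwaway repo demoing the reviewer -> contributor patch round-trip.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
git init -q .
echo "const greeting = 'hallo';" > app.js
git add app.js
git -c user.email=demo@example.com -c user.name=demo commit -qm 'initial'

# Reviewer side: make the suggested edit, capture it as a patch, then
# restore the tree -- the suggestion now lives only in the patch text.
echo "const greeting = 'hello';" > app.js
git diff > suggestion.patch
git checkout -- app.js

# Contributor side: apply the pasted patch text.
# On macOS this is where `pbpaste | git apply` would run instead.
git apply suggestion.patch
grep hello app.js   # the reviewer's suggested line is now in the tree
```

The change lands as ordinary working-tree edits, so the contributor still reviews, commits, and owns it.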
B
I
highly
recommend
getting
familiar
with
generating
patches
and
applying
patches
as
we
review
code.
This
is
so
this
is
I
I
I
do
it
because
I
I
could
spend
an
hour
trying
to
word
something
just
right
or
I
could
spend
five
minutes
of
just
this.
Here's
the
code,
I'm
suggesting
what
do
you
think
and
now
we're
just
collaborating
like
on
the
code
and
that's
great.
D
Isn't that actually something that is already integrated into the review flow in GitLab?
B
Yeah, that's a good observation.
C
Be a little careful with those: they work fantastically for small changes, like one-liners and such, but if things get too complex, they are at least a little painful to use, and you could easily introduce things there was no intention of introducing. So that is something to keep in mind.
B
The suggestions don't use git patches under the hood; under the hood it's a range of lines and then the code we're going to place on top of that range, so they're somewhat limited in that sense. For one-liners like this: easy, and that's great. I actually end up just sending patches all the time because I like patches so much, and I'm a little bitter that we didn't use patches for suggested changes.
C
And you can easily have one patch for changes in multiple files, and things like that. So, depending on the context, it has a lot of benefits, yeah, but...
E
I've been on the contributor side of this problem. When I'm refactoring something somewhere in Verify and I come across a test gap, I almost always send the new test coverage in a separate merge request, because the reviews are so much faster. It's really easy to review one test added to a section of code, and it's also way easier to review a refactor when it is a pure refactor and I can link out to the tests and say: here's why this refactor is bulletproof.
E
These
are
all
of
our
outcomes
and
I'm
not
touching
them.
So
you
know
the
refactor
is
good.
Both
of
those
reviews
are
like
lightning
they're
great.
So
I
highly
like
I
haven't
had
this
with
a
contributor
yet,
but
if
I
did,
I
might
even
encourage
them
to
do
that,
because
I
think
the
the
diffs
and
the
conversations
are
super
clean.
That
way.
B
Yeah,
that's
a
good,
that's
a
really
good
option
to
consider
too
and
for
community
contributions.
I've
often
I'll
create
the
follow-up.
Mr
too,
of
like
okay,
there's
a
big
testing
thing
that
I'm
about
to
make
a
suggestion,
for
I
don't
want
to
ask
more
of
them.
I
know
this
will
take
me,
maybe
five
to
ten
minutes,
I'm
about
to
write
the
code
for
patch
anyways.
B
I'm
just
gonna
just
write
the
code
in
a
separate
branch
and
create
an
mr
for
it
and
that's.
I
think
that
is
definitely
a
good
option
too.
If
I
I
call
these
fast
follow-ups
and
if
it
has
to
be,
if
we
identify
testing
gap,
ideally
we
can
follow
up
on
it
right
away.
It's
not
a
deferred
46
000
drops
in
the
bucket
follow-up.
It
would
be
a
hopefully
a
faster
follow-up.
C
Good point, yeah. Thanks so much for the input, folks. I feel a little better about this now, and I'm starting to realize my take on it. Please take it with a grain of salt, because what I'm about to say is heavily opinionated and I absolutely might be wrong about it, and I also do think this is not a binary question. But regarding a community contribution, I kind of feel like:
C
I
heavily
agree
with
the
the
follow-up
issue
and
this
should
be
set,
and
I'm
I'm
super
happy
with
the
way
this
has
been
handled
in
in
this
smr,
with
basically
like
really
actively
helping
out
the
contributor.
But
I
would
also
say
if
our
code
base
already
has
a
testing
gap,
that's
kind
of
our
issue
and
if
they
are
dealing
with
what
they
have
and
solving
it
within
these
boundaries,
then
to
me
that
kind
of
implies
as
good
enough.
C
It
still
definitely
requires
a
follow-up
issue
and
it
is
something
we
should
be
we
should
be
taking
care
of,
but
regarding
a
blocking
comment
or
basically
having
this
enough
to
to
be
blocking,
I'm
not
so
sure
about
this,
and
therefore
I
also
that's
why
I
I
started
to
breathe
heavily
when
I
hear
the
divorce,
the
boy
scout
rule
term,
and
that
is
may
because
in
my
soul,
in
my
past
career,
I
noticed
that
this
thing
was
kind
of
also
being
misused
as
a
little
bit
of
a
feature
creep
as
well.
C
So
I'm
I'm
super
happy
with
getting
our
code
based
polish,
but
let's
be
very
sure
about
the
scope
we're
currently
speaking
about
and
if
we
encounter
any
further
problems
or
additional
problems
happy
to
fix
them.
Let's
keep
our
mrs
small
and
let's
open
another
issue.
Let's
do
all
of
these
things,
that's
kind
of
what
I
would
tend
to
decide
for,
but,
as
said
there's
more
than
one
opinion
on
this.
B
Well,
I
I
think
I
think
that
it
does.
It
does
depend
because,
like
if
you
are
refactoring
tests,
allow
us
to
make
changes
with
confidence
depending
on
the
testing
gap,
and
you
know
sometimes
you
do
have
contributors
just
stumble
upon.
This
is
actually
really
fragile,
and
so
we
really
for
us
to
actually
change
this.
We
need
to.
B
We
need
to
heart,
have
a
stricter
testing
harness
almost
first
before
we
change
it
because
of
all
the
context
and
stuff
going
on
so
like
I
do
agree,
you're
you're
putting
weight
on
prioritizing
more
than
than
just.
We
got
to
get
everything
absolutely
perfect,
but
I
I
I
do
think
even
if
we
want
to
keep
mrs
small
we
do.
B
Oh
testing
is
really
good
at
also
just
asserting
the
current
change
and
if,
if
there
hasn't
been
a
second
maintainer
mr
cycle
like
if
this
is
the
first
maintainer
in
our
cycle,
if
someone
catches
that,
like
that's
great,
if
this
is
like
the
third
cycle,
when
someone's
introducing
new
things,
that's
yeah,
I
would
agree,
that's
a
little,
maybe
a
little
nitpicky.
But
if
this
is
the
first
one,
it
sounds
sounds
good
catch
to
me,
but
I
don't
know,
but
it's
okay.
If
we
don't
agree
on
it,
we're
we
can
still.
F
I guess it was kind of a follow-up to what you were saying, that most of the time you would just create a follow-up PR with the test coverage. I was just wondering what our preference would be then. Is GitLab's preference: okay, let the contributor's PR merge, and then we follow up? What if we don't have the time or the capacity? Then there will still be a hole, or potentially a vulnerability.
B
Yeah, you ask a great question, and I would say the preference would probably be that we handle it in the same MR. But I think what Yannick and others are pointing at is: you do have to kind of judge the fatigue level of the MR and the fatigue level of the contributor, and if this MR has been stretched out by things that aren't critical, we don't want to just continue to add fatigue for everyone involved in the MR.
B
You
know
people
to
feel
efficient
with
the
way
mrs
are
being
handled
on
things.
So
we
don't
want
things
to
hang
around
for
a
long
time,
so
I
think
the
preference
would
be.
We
handle
things
in
the
mr
that
we
identify
them
unless
it's
just
very,
very
out
of
scope,
but
with
community
contributions,
especially
just
reading
the
contributor
and
reading,
where
how
long
this
is
stretched
out.
B
That
can
play
a
huge
factor
in
whether
we
decide
to
follow
up
ourselves,
create
a
follow-up
issue
and
there's
lots
of
competing
goals
because
one
we
want
to
fill
the
test
gap
but
or
we
could
be
inspired
to
create
a
follow-up
issue
that
another
community
contributor
can
pick
up
and
we
do
want
to
encourage
more
community
contributions
and
some
community
contributors
love
like
oh.
I
don't
really
know
what
to
do,
but
now
I'm
really
familiar
with
this
change.
B
I
just
made
this
I'd
love
to
follow
up
with
another
one,
because
I
like
making
more
changes
and
they'll
like
doing
that
if
mmrs
aren't
so
fatiguing
and
so
there's
a
lot
when
it
comes
to
community
contributions,
there's
a
lot
of
soft
things
to
kind
of
just
read
in
the
realm,
and
I
don't
think
you
can
be
over
prescriptive
about
it.
But
if,
if
you're,
really
not
certain,
I
would
highly
recommend
just
reaching
out
and
asking
a
question
like
david
did
of
here's
just
a
question.
B
What
do
we
think
and
I
love
how
he
labeled
a
question?
He
didn't
label
it
that
was
really
appropriate
for
for
what
this
was
of.
Like
he's
we're
just
asking
a
question:
if
it
wasn't
gonna
work
out,
then
it's
not
gonna
work
out
and
because
it
doesn't
seem
like
it
was
super
critical.
C
Do you agree? Awesome, yeah. Thanks everybody for sharing your thoughts on this; this was super insightful. That was basically all I had to say.
B
So this is actually the one we started pairing on last week, and it got merged this week, which is great. This is an MR on the gitlab-ui project, and the main takeaway I wanted to highlight: looking at the original set of changes, it was hard to tell why we were making this change, until in a comment below we saw a link to an issue.
B
When we followed the link to the issue, there were pictures of the actual bug that we were trying to fix, the one that shows up in the GitLab project with the upstream gitlab-ui change, but the original state of the MR didn't really convey that. In the gitlab-ui project we have these stories for components, where we'll actually visually screenshot the rendering of the component and do a visual regression test at that level, which is great. But the changes weren't actually testing these things, so it wasn't clear whether we were actually fixing the bugs, and the changes to our story were really confusing; it wasn't clear why we were making them. So I left some questions.
B
I
left
some
questions,
one
of
the
questions
being
here.
We
were
doing
this.
This
is
a
view.
This
is
a
view,
behavior
feature
that
we
don't
use
a
whole
lot
of
where
we
can
use
this
dot.
Sync
modifier,
and
that
was
really
that
was
strange
for
me
to
see.
So
I
just
left
the
question
about
it
and
then
seeing
some
other
things
that
are
just
like
these
are
little.
These
are
a
little
not
what
I'd
expect.
So
I
left
some
questions,
but
my
biggest
question
was.
B
Where
was
it,
I
can't
find
him.
Oh
yes,
yes,
I
wanted
to
confirm
these
things,
and
then
I
asked
the
question
if
we
should
add
a
story
to
actually
test
that
thing,
but
it
kind
of
ended
up
changing
what
all
of
our
stories
look
like.
Just
asking
these
questions
because
I
was
confused
and
I
just
want
to
encourage
everyone
when
you're
reading
through
code
and
if
it's,
if
you're
confused
on.
B
Why,
like
that's
a
really
good
signal,
and
it's
worth
asking
a
question
about
not
just
silencing
that
voice
because,
maybe
oh
well,
I
guess
I
guess
this
is
here
for
a
reason,
just
ask
a
question
about
it
and
it
could
pay
off
and
yeah.
This
wasn't
behaving
like
like,
I
thought
it
was
and
yeah
we
weren't
really
introducing
tests
for
what
we
were
fixing.
So
it
was
really.
B
Thanks for letting me share that. Well, that's it for show-and-tell and questions; now we're entering the code review pairing part of this workshop. Does anyone have any MRs that they'd like to get reviewed on the call?
B
I
always
have
mars.
I
always
do,
and
I
love
it
when
I
get
my
work
done
and
do
this
meeting,
but
I
I
feel
greedy
and
selfish
doing
that.
B
This one: it looks like there is a bug in one of our gitlab-ui components that glitches in Safari; that one, I'm hoping, might be somewhat trivial. This one is a review on the customers project, which is technically a private project, so I'm not going to
B
Do
that
one
right
now,
these
these
two
are
always
in
my
queue
so
that
we
can
ignore
these
two.
B
Oh,
this
is
also
customers.
So
what
do
you
think
we
got
this
version
bump
one?
We
have
this
ui
component
fix
one,
and
I
just
reviewed
this
one,
so
I
actually
need
to
unassign
myself
from
this
one.
B
Let
me
double
check
this.
One
was
interesting
and
there
are
definitely
interesting
finds,
but
I
had
already
reviewed
it.
I
can
go
over
the
the
finds
that
I
found.
There's
lots
of
lessons
to
be
cleaned
here
or
we
can
do
something
brand
new.
Someone
speak
up.
Tell
me
which
one
I
should
do.
The
version
bump
one
or
the
ui
component
fix
one.
E
I'm 100% going to start writing in issues that "things turn strange, and that's the problem I'm addressing". I absolutely love that, and you can all look forward to me sending that in my issues and work requests.
B
Is
really
good,
let
me
sign
in
over
here
and
see
if
we
can
recreate
this
issue,
I'm
just
curious,
because
that
seems
so.
Wild
props
for
I'm
gonna
pause
sharing.
My
screen
props
for.
B
I
was
about
to
resume
sharing
my
screen,
but
that's
after
typing,
in
my
after
having
one
password
all
open,
that
would
have
been
the
opposite
of.
B
Problem validation: is anyone experiencing any strangeness?
B
I'll have to review the recording to recover what the labels were; I don't remember what I just clicked on. I'm sorry, I think.
D
What I'd like to comment, if I may: I think it's really good that a comment is provided for some obscure CSS, because it really helps people understand why something non-obvious is there. However, in this particular case, it would probably be great to either link an issue in the comment or provide more specific context, like a version of Safari, for example; but the link, I think, would be best.
B
"Safari bug where box shadows were visible above the drawer when hovering interactive elements." My biggest question, getting technical with it: would this affect z-index stuff? The drawers need to stay in front of everything. Something like translateZ(0), I don't think it's going to affect z-index stuff, but it sounds like it could.
C
But still, I'd argue that, well, we have to deal with what we're given, but translateZ(0) is supposed to do nothing; it's basically just a reset. So this is clearly a browser bug, and if nobody can reproduce it, then we shouldn't merge this thing, because it would be a fix, or a hack, for a browser bug that no longer exists or is maybe already fixed.
G
Change this... wow, Simon. It just seems to... poof. Man, so weird.
B
Yeah, so I do want to ask a question: is this reproducible on the latest Safari? Maybe this is a browser bug.
D
Sorry, clarify what you mean by "browser bug"? That kind of implies that then we shouldn't even fix it, right?
B
Okay, maybe this is more of a...
B
It could already be fixed, and then I'd get that problem, yeah. All right, while I'm here I am going to throw in Andre's suggestion of "wow, Safari is so weird". I'm doing this in Safari, gosh, it's so weird. All right, suggestion: could we add a link to the originating issue? Oh, let me phrase it with praise: thanks so much for leaving the comment. Could we add a link to the originating issue too?
B
I
think
it's
good
yeah,
all
right,
yeah
that'd,
be
interesting,
that'd
be
interesting,
and
I
so
here's
some
things
that
make
me
feel
a
little
better
it'd
be
really
lame.
If
all
of
a
sudden
like
the
drawer
wasn't,
you
know
visibly
where
it
needed
to
be,
but
our
capybara
tests
do
test
for
the
visibility
of
things
it
clicks
on.
B
So
we
get
not
visual
regression
testing,
but
we
do
get
like
some
visibility
testing
on
things
like
this,
and
so
if,
under
certain
circumstances,
we
had
a
test
and
it
wasn't
able
to
click
on
the
drawer,
because
the
drawer
is
now
behind
other
things
that
those
tests
would
actually
fail.
I
would
still
be
really
interested
in
testing
this,
because
this
is
a
high
traffic
feature.
What's
the
worst
thing
that
could
happen
is
all
of
a
sudden.
This
isn't
visible
anymore.
B
That
would
be
not
great
one
of
the
reasons
for
addressing
this
is
I
mean
I
guess
this
is
opened
by.
I
don't
know
if
this
is
a
I
I
imagine
this
issue
probably
is
opened
by
a
a
member
of
the
community,
and
so
we
do
want
to
prioritize
those,
and
we
do
say
that
we
support
the
browsers,
even
if
they
have
an
internal
issue.
B
We
want
to
try
to
get
our
experience
working
nicely,
so
we
can't
just
excuse
it,
but
I
feel
good
about
the
question
we
asked
and
if
and
if
simon
comes
back
he's
like
oh
yeah,
there's
no
issues
with
it,
I'm
probably
going
to
want
to
just
smoke
test
it
locally
or
with
an
integration
branch.
B
So
that's
one
thing
with
these
get
lab
ui
projects.
I
would
want
to
test
out
the
gitlab
project
using
the
package
from
this.
Mr.
B
To
just
verify
that
an
integration
where
we're
we're
good
and
there's
no
issues
with
drawers
so
yeah,
that's
really
interesting.
Thanks
for
reviewing
that
one
that
turned
out
more
interesting
than
I
thought
it
would
be.
I
think
that's
I
think,
that's
all
we
got
does
anyone
have
anything
they
want
to
sneak
in.
B
Hey, thanks everybody for taking the time to share your thoughts, wisdom, expertise and questions. I think it's good; it's all generally valuable. You all have a great rest of the day, afternoon, evening. Adios.