From YouTube: Create:Code Review Weekly UX Sync - 2021-02-23
A
I don't know, maybe 20 more responses in the last week than the week before, but the data looks roughly the same; none of the averages dramatically shifted. So I think we should close it and then document. Either way, we're going to end up documenting that and then moving on from there.
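For readers following along: the kind of week-over-week check being described, closing a survey once more responses stop shifting the averages, could be sketched like this. The question names, scores, and threshold below are illustrative, not the actual survey data.

```python
# Hypothetical sketch: decide whether a survey's per-question averages
# moved enough week-over-week to keep the survey open.

def averages(responses):
    """Mean score per question across a list of {question: score} dicts."""
    totals = {}
    for response in responses:
        for question, score in response.items():
            totals.setdefault(question, []).append(score)
    return {q: sum(s) / len(s) for q, s in totals.items()}

def shifted(last_week, this_week, threshold=0.5):
    """True if any question's average moved by more than `threshold`."""
    prev, curr = averages(last_week), averages(this_week)
    return any(abs(curr[q] - prev[q]) > threshold for q in prev)

last_week = [{"know_reviewer": 3, "whats_changed": 5},
             {"know_reviewer": 2, "whats_changed": 4}]
# New responses with similar scores leave the averages roughly unchanged,
# i.e. it is safe to close the survey and document the results.
this_week = last_week + [{"know_reviewer": 3, "whats_changed": 5}]

print(shifted(last_week, this_week))
```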
A
I did have another thought: I wonder if we should run it externally. And, Pedro, to your question: what would be the goal?
A
I have a concern: if you look at the stack ranking, "knowing who should review my merge request" is in the middle of the pack; for GitLab it was rated as not important.
A
I think that's because we've broken the model. Yeah, it's even lower than that, right? It's near the bottom, and I think that's because we don't suffer that pain internally at all, because we have Danger.
A
And so I wonder, since we have Danger with our own homegrown reviewer roulette... and maybe it doesn't matter: I think we know there's a problem there and we're going to go solve it, so maybe we just continue down that path. But if we thought we might not continue working on surfacing who should review an MR, then it might be interesting to get external data. So I don't know what your thoughts on that are.
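For context on the Danger remark above: GitLab's reviewer roulette runs as part of its Danger bot and suggests a random reviewer on each merge request, which is why the pain of picking a reviewer isn't felt internally. A minimal sketch of the idea in Python; the roster, field names, and filtering are made up for illustration, and the real implementation is a Danger plugin in the GitLab repository:

```python
import random

# Illustrative team roster; the real roulette reads team data from
# GitLab's team page and filters by project, role, and availability.
TEAM = [
    {"name": "alice", "role": "maintainer", "available": True},
    {"name": "bob", "role": "reviewer", "available": True},
    {"name": "carol", "role": "reviewer", "available": False},
]

def spin(team, role, rng=random):
    """Pick a random available team member with the given role."""
    candidates = [m for m in team if m["role"] == role and m["available"]]
    if not candidates:
        return None
    return rng.choice(candidates)

# The Danger comment on a merge request would suggest one reviewer
# and one maintainer chosen this way.
reviewer = spin(TEAM, "reviewer")
maintainer = spin(TEAM, "maintainer")
print(reviewer["name"], maintainer["name"])
```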
B
Yeah, just quickly, some context for Amy. This is a survey that we ran internally, aimed primarily at engineers, and we're looking now at the results. One of the questions we had, the main question, was for them to stack-rank these actions or intentions in terms of how much of a problem or pain point each currently is in GitLab.
B
You convinced me, when you joined Code Review, that we should align the philosophies of our internal code review practices with what was most important for us in the Code Review group to build in GitLab, and I'm sold on that, not only because of dogfooding, but also because if we can make our internal code review practices better...
B
It won't just be better for us in the Code Review group; it will be better for all of GitLab, and that will have compounding effects: helping us ship faster and more effectively, with better communication, and so on. So I think having that as the primary goal is amazing, but I think it's also good that we don't remain in a bubble, and that we reach out and understand what others are thinking. Yeah, I think it would be interesting.
C
Yeah, that was going to be my thought as well. I definitely think there's no harm in doing it externally; it actually would be great to get more perspective on it. But I was going to ask a similar thing: whether you want the same ranking of problems, or something more specific to features, or having them prioritize what we should build next, or whatever the goal might be.
B
No opinions from me at this time. Okay, yeah, what I was going to say is that...
A
No, I think if you look at it, the number one was knowing what's changed since I last reviewed, right? I have it up too. I think that's number one.
A
Yeah, and so I think we validated that, right? We've talked a lot about how we deal with communicating the status of an MR, what's changed, and what needs to happen, and we've got a bunch of different ways we're tackling that: with reviewers and handoff, and we introduced the viewed-file checkbox thingy.
A
That's not a good name, but I think we're tackling that problem, and I think we all agree that it's probably the biggest problem we want to solve, so I think this further validates that.
A
My only concern would be that one of the things high on my list is replacing GitLab's homegrown reviewer roulette with our reviewers feature. I want to see reviewer roulette burned to the ground and never show up in another Danger comment ever again; to me, that is success for the reviewers feature.
A
We probably would never rank it high ourselves, and I guess I want external validation; it's another way to validate that that's a pain other people experience. We see that in issues, but I want to make sure we're not wrong on that one. Yeah, that makes sense.
B
I mean, if we reuse the survey and only add a couple more questions, like what's the size of your organization and what GitLab version are you using (like self-hosted), those are boilerplate, standard questions. It might be very cheap for us to get it out there, and also cheap to then analyze the responses, because there aren't a lot of them; the main question is the one about the problems, and the other ones just add more context.
B
Yeah, so I'd be up for that, I think. But I do think that if we're going to do it, we should do it as soon as possible, because that would allow us a better comparison of the data. And even so, it will be interesting, because...
B
We've been shipping: we shipped reviewers, we shipped the viewed checkbox and all of that, and I bet that many people answering the survey have never experienced those. That's both the strength of the ability to download and install GitLab on your own server, and the problem with getting good data.
C
Yeah, I would suggest that if you do just duplicate the survey and add in those questions, then you can create a recruiting request issue if you want more help getting the survey out there, and our research coordinator can help with that, in addition to social media or however else you want to distribute it.
C
Yeah, in the past it happened more often, but then marketing was more and more like, let's not have so many surveys coming out from our product team. So usually it would either go out to our research panel, First Look, which would just be sent from Qualtrics to the First Look mailing list; and if we're having trouble with that, then the coordinator might contact marketing and say, hey, we really need a lot of responses for this.
C
So you could just give a timeline, whether it's tight or there's some flexibility, and the number of participants you're targeting, things like that, and she'll be able to go from there.
C
Yes, no one complains about that, and if it's not a huge survey, like Seth's big one where we were targeting 200 people, it usually should be pretty manageable with First Look. And if not, we can either go the path of the data warehouse or the docs survey banner. Okay.
B
Given that we had a narrow range of responses, mostly about number of lines and number of files, maybe just look at the responses that we already have and turn it into a multiple-choice question, with an "other" option where they can enter whatever they want. I think that might be better to analyze; otherwise, if it's just open text, it would be more difficult to find out what it is.
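The analysis trade-off described here, fixed choices tallying directly while an "other" free-text bucket is set aside for manual review, might be sketched like this. The answer options and responses are invented for illustration:

```python
from collections import Counter

# Illustrative fixed answer options derived from earlier responses.
OPTIONS = {"number of lines", "number of files"}

# Fixed choices tally automatically; anything else falls into "other"
# and is kept for manual reading.
responses = ["number of lines", "number of files",
             "number of lines", "too many review rounds"]

counts = Counter(r if r in OPTIONS else "other" for r in responses)
other_answers = [r for r in responses if r not in OPTIONS]

print(counts["number of lines"], counts["other"])
```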
B
Okay, sounds good. Catherine, over to you.
C
Yep, I have a quick FYI here: I'm making progress on the MR widget research synthesis, so I just wanted to know if any other questions have come up along the way, since I know you've gotten a lot of work done on the merge button. If anything else has come up for consideration, just let me know. But just to give you an overview of what I'm doing in that doc that I linked:
C
I'm basically taking a look through time to understand the context of why certain things were introduced into the MR widget, when they were introduced, and who is responsible for them. My goal is to pull out the top, or at least intended, job to be done and the intended target audience for each of them, so that we can see whether we can create a study.
C
So that's the current goal. Some interesting things so far: I've seen that Verify owns, or at least introduced (I'm not really sure if they own it, introduced it, or collaborated on it), quite a lot of the sections in the widget; Create is in second place, and then Secure. So in terms of promoting more collaboration between the different stages, that'll be interesting, because a lot of the decisions have been made independently thus far, but it's growing a bit unwieldy in recent times.
C
So that's the main thing I wanted to give an update on. Just let me know if any other questions, or other things you want me to look into, have come up along the way.
A
Yeah, I added a comment inline. I assume that Code Review actually owns all the ones you previously might have said Source Code owns, since it's all about consuming them once they get to the merge request; we sort of delineated that, meaning we're the consumer and owner of those, not Source Code. But...
C
Yeah, that's actually a good point, because I wasn't sure if, for example, the branches part would be Source Code, I think, or is it Code Review? I don't know; maybe you're right, maybe they're all Code Review now, but that is one area that was a little tricky. And then there's code owner approvals in the approvers part; I guess that would be Code Review.
A
Yes, because that's about consuming it. The way to think about it, I guess, is: if you're creating rules or creating things like that, that's Source Code's technical responsibility, and when we consume it, we try to treat ourselves like an API consumer of those things. So the way it appears, and the way users interact with it, is how we, as the Code Review group, have decided to consume that feature in the widget space.
C
That's great, and I'm probably running into that with some other ones as well, like the differences between what Verify: Testing and Verify: Continuous Integration own; some of them just seem to cross lines, and I'm not sure where they land now. That's why it's interesting to go back into the issues that introduced them and then look at where they are now, to compare the two.
A
Cool, let's hop to number three. I put... so, I've watched your recording that... sorry.
B
On something about the previous thing, if that's okay? Yeah. Kathleen, first of all, thank you so much; this is looking great, and thank you for sharing what you're doing, this work in progress, as rough as it is. With all of those things, I'm concerned about the scale of the work, because this can go so many different ways, and we could be stuck in this forever.
B
We should prioritize those research initiatives, because we can communicate that they're blocking more improvements to one of the most important areas of the product. So yeah, I don't know; I don't think there's a definite answer right now, because you're still looking at everything that we have. But I'm concerned about your workload and how much this can take to complete, so keep that in mind as something we can do.
C
Yeah, that's a great callout, and it's probably similar to how I'm thinking about navigation and settings: there's going to be the groundwork to bring it all together, but then, once I've surfaced some key pain points from the research or survey data, it's about providing recommendations to the responsible group: this is something we need you to follow up on as part of our initiative to improve this whole page, or something like that.
B
Okay, let's now make a decision about what we're going to do with the other widgets. The merge one is definitely the one we can affect and influence today, because it's a very narrow scope. So, yeah, let's talk about that.
A
Yeah, I should say for this last one, I'll have a hard stop here in a couple of minutes. Thanks for the video last week; I watched the recording prior to this meeting, and I'm still trying to read through the very long thread in the issue, go through all the designs, and formulate things. I have two thoughts: one is tactical, like what are the next steps, and...
A
How should we be thinking about actioning some of this? And then I have another thought that's sort of...
B
Yeah, thank you so much for looking at the video. I think the video could be helpful because we go through all of that quickly, and there's a lot to digest; there are a lot of moving parts, and that's why I tried to...
B
Dissect things into must-haves, nice-to-haves, and could-haves, and that's why I wanted prioritized feedback on the must-haves. After looking at all of the states, doing all of that mapping, and looking at what our competitors are doing, I think there are a couple of things that stand out as high-confidence: things I feel we can change without having to do formal usability testing.
B
We
can
put
them
in
the
product
and
get
feedback
from
usage
because
we're
not
gonna
break
workflows
and
the
states
the
current
state
is
is
is
not
good.
So,
if
any
improvement
that
we
make
here
or
there,
I
think
it's
going
to
be
good,
so
my
plan,
based
also
on
the
feedback
that
people
have
been
giving
which
validates
some
of
my
assumptions
and
hypotheses
and
others.
The
other
way
around
is
to
create
small
issues
of
things
that
we
can
start
working
on
today
or
in
the
next
milestone.
B
Other things will probably have to be more involved, because we have to involve other groups, or maybe we would need to do a small usability test. But, to be honest, I think a lot of that is actionable in one shape or form, and it probably won't be in the first milestone that we get that new design.
A
So maybe think on that. And then my last one is: I like the new design, but I sort of wonder whether it solves the problem that the merge button still just gets lost between a bunch of stuff above it and a bunch of stuff below it. This new state model doesn't address that, and maybe our time would be better spent addressing that in a more holistic way versus investing in this.
A
And
so
I
don't
have
that
conversation
outside
really
do
need
to
jump,
but
I
would
say
think
about
it
and-
and
we
can
discuss
it
again,
we'll
move
it
to
next
week
too.