From YouTube: 2021-09-21 Create:Code Review UX Weekly
A: Okay, all right, so I have the first point, about the product design planning for 14.4. There are some things that I'd like to talk about, just to make sure that we are aligned and that you can raise any concerns. These could be harder to express through comments, or would take more time.
A: So let's go through them one by one. The first one is synthesizing prior research on the MR navigation user flow; that was almost completed in 14.3. Annabelle did the synthesis work there, and now she's working on a presentation that needs cleaning up. Then we also need to prioritize the problems that she found, and subsequently also prioritize the recommendations to address those problems.
A: And one of the things that we found, and that is under the follow-up research point, is that we did not answer one of the research questions, which was: what does the user's journey with merge requests look like? What are their goals? What do they value? Does that change over time? We were not able to answer that, so one of our recommendations is to do follow-up research focused just on MR user journeys.
A: Annabelle reviewed all of the prior research that we have linked in that first synthesis issue, including those very helpful recordings where engineers went through their review process with a real merge request, and she found those extremely helpful.
A: But it's not comprehensive enough, and it's unmoderated, so she doesn't have the ability to ask specific questions to understand the why behind certain things. So we feel that, although we already have some information, we need more of it about the user journeys so that we can then make better decisions about the problems, the most important problem being that it's not clear what users need at specific moments of the user journey; there's just a general sense of "hey, this is too much clutter."
A: I don't know if you saw the one that listed the priority, or how important certain metadata is for users. That is very helpful, but it doesn't take time into account, or moments, or roles, or different goals depending on what you're doing. Annabelle felt that we needed to raise our confidence around that question and answer it properly before we can dig deeper. I'll pause there for your reaction.
B: I think that makes sense. I think my question is more...
B: For the follow-up research, do we need to do that with internal and external people, or do we think we can just lean on engineers internally, sort of tag along on a review or borrow their time to do that? I ask because I saw, I don't remember from whom, that there are something like 20 open research studies all trying to recruit, and the recruiting pipelines are...
B: For research, we've also got the VS Code stuff that'll come up sort of soon, actually, which will involve internal people too. So I'm just trying to figure out, yeah, in order to...
A: ...time, yeah, exactly. I think we have to strike a balance here between perfection and the opposite of perfection. Of course it would be ideal if we had research with external people and so on, but I think with our engineers internally we could already sufficiently raise our confidence to acceptable levels that give us some security about where we should go, what we should do first, and what is important or not.
A: One of the things that Annabelle mentioned was that, other than external users from the wider community who use GitLab to review merge requests, it would also be interesting to see how people use competitors like GitHub, Bitbucket, and Gerrit. Those are very interesting inputs, but in the end I don't believe those...
A: I think evaluating competitors is something that we can do separately on our own, in a lightweight manner that just involves one person, one designer, looking through all of it, and it can certainly give us...
A: ...I don't know, a lot of things. So yeah, I'm just thinking about the 80/20 rule, the Pareto principle: you get 80 percent of the value from 20 percent of the effort. So I think that's exactly it, I agree. If we do just internal research, we might get 80 percent of the value there. Maybe less, maybe more, I don't know. So yeah, that's something that we can do internally. Cool, okay.
A: I was very happy that we could time-box this synthesis to just one milestone and that she could dig up all of those recommendations. Some of them we already knew, others we did not, but nonetheless it paints a comprehensive picture of where we are today and where we need to go now. The outstanding work is just the presentation, so it needs some cleaning up, plus the prioritization of the problems.
A: I'm very happy that we were able to do this in just over a milestone, especially taking into account that she onboarded during this milestone, that I was off as well, and that other people were on vacation. So I'm very happy that we were able to do this, and that's why I also think that if we are able to time-box this research to answer the question about the user journeys, and what users value at different moments and in different roles, we're also able to get just enough value.
A: Yeah, then we have machine learning to suggest reviewers. This is something that I hope Annabelle is able to take on herself. I was initially thinking that I would do it myself, but this is something more or less from scratch.
A: Of course, it would be enhancing what we already have today, but it would be something that I would have to gain context on, and she would have to gain context as well. I'm hoping that we can do a quick competitive evaluation to see how other similar tools suggest people. So not only Gerrit and GitHub and other tools that suggest reviewers, but also, a very common example, when you're writing in Gmail it suggests people that you should add as a CC or other...
A: ...other people to add to your email. So there are other tools that we can look at to gain some inspiration and see what works well and what doesn't, so that we can then proceed to solution design. Because this is focused just on showing and accepting reviewer recommendations, and we're building on what we already have today, I'm hopeful that we can get this done in 14.4, so that the machine learning team can then work on it and implement it in 14.5, so yeah.
A: I think I would like for Annabelle to take that on. You have a question: are there any issues with this? Have we looped in Taylor? Not yet, but yeah, we will basically loop him in and make sure that they're involved and that they can share the existing context that they have, because they've already been working on that.
B: I should make that work there, okay. Yeah, and then I just added a note that, based on the way the ML group is structured, they don't have front-end engineers, and so we just need to be mindful of the solution, because it will likely eat into our capacity.
A: Thanks for sharing, yeah. The next point: consolidate merge request main actions in one place. This is an issue that I've been dragging along, and essentially it's to continue the amazing work that Sanjong did on that consolidation, but taking out just parts of it, removing the sticky header from the equation and just focusing on moving the main actions to one place, even if they're not always visible, but at least having them at the top. So it's just...
A: ...and issues and epics, because that's something that will come up if we then need to propagate this design to other areas of GitLab. If we change the header of merge requests, it would be a bit odd if issues and epics had a different header, unless there's a very good reason for it. So that's part of the review of the design, and it will also then be validated with users; for that we can use unmoderated testing with usertesting.com, because this is a fairly... it's...
A: I'm not too concerned either, so okay, cool. Yeah, I just think it's important, even if we end up deciding to have different headers, that we at least show that we've considered it and that we involved them. I think that's the important part. Next, merge requests requiring attention. This is something that we discussed when prioritizing problem areas, as a secondary problem area that we could be working on in parallel with the work that Annabelle is doing on the MR navigation user flow.
A: I haven't found time for it, and there are basically two questions here. One is more urgent: how far away are we from starting the implementation of those designs that Sanjong validated?
B: It's a good question. I mean, at this point it'll be no earlier than 14.5, but much of that hinges on...
B: There's not much we can do here, I think. Our hands are sort of tied waiting on back-end engineering capacity, and then potentially that's 14.5, but then I think the risk is that we are committed to all the new mergeability work.
B: There are also still caching issues that are open, so some of the performance work that they're doing is still open and behind feature flags. So 14.5 would be the earliest, and that feels incredibly optimistic, just given how things have seemed to trend.
B: Because we're now starting to examine other backlogs that exist, in terms of other performance issues and security, things like that, that need to be looked at. So 14.5, but that feels optimistic; I'd say 14.6 is probably more realistic, unless we find an issue that we could do. I think one of the pieces here, though, is that we still haven't gotten a breakdown from engineering on this.
A: Yeah, and Andrea suggested something for the front end, a rough plan, but then you left a comment two months ago suggesting to have engineers from each side spend some time looking at this and developing issues.
A: Yes, that's what I would like for us to do. I'd like the engineers to just poke at the design to see what the known unknowns are and what the known knowns are.
A: Okay, okay, let's try to push, I mean, with the engineering allocations.
A: Okay, that's great, yeah. And as part of this, I was thinking, because we talked about it when we were prioritizing the problem areas, that this work is related to someday improving the MR dashboard and someday improving the batch comments feature.
A: I was thinking of investing a bit of time into creating a vision: rough mock-ups that tell a story of how things could be if we improved those aspects in the future, because this is something that we know we want to work on.
A: But I don't want us to be distracted. What do you think?
A: That would be it, but with the huge caveat that we would not commit to doing exactly what is in the vision, but we would commit to working on that problem space in the near future, which is something that we didn't do with real-time. We did all of that, and it was a great experience and it kind of energized everyone, but we were not committed to it and we were too spread out.
A: As you know, we didn't have any focus in the group. But if we have focus, and if we say we're going to commit to working on this, we're not necessarily implementing those mock-ups and that story as is, but something similar to them, with those goals in mind. Yeah, I don't know, but again...
B: I think it's valuable in terms of creating more vision content for the team to rally behind. I think it's valuable in terms of putting that content out for other people to see.
B
We
sort
of
did
like
the
prioritization
of
problem
areas.
We've
also
got
these
opportunity,
canvas
pieces
and
other
validation
work
coming
through
that
we're
supposed
to
be
doing.
I
guess
my
question
is
like
time
commitment,
plus
what
you
have
already
we
sort
of
already
decided.
We
wanted
to
go
work
on,
which
is.
B: Yeah, I think that's fine. I think it's something to continue to think about. I like the idea of doing it. I think it'd be interesting to go through that sort of experience as a PM, because I haven't done that. I thought that was interesting in terms of the way you and James had done that one. So...
A: Cool, yeah. And then I listed some smaller work; you can read through it. I don't know if you have any specific questions.
A: ...to give something to our customer who wants to help with this, some rationale for our stance, whatever that stance may be. I'm going to tell you that I'm going into this with the assumption that all of this can be solved if we have better default rules, or better rules period, for who can resolve a thread.
A: Because from what I've been reading, the thing that sticks out is that people have no control over the threads that they create; anyone can resolve them, the author resolves them, everything looks okay, and then it's ready to merge.
A: Yeah, so that's why I want to do a very quick problem validation on that, which is basically just looking through all of the comments and other community feedback and trying to dissect what is really behind that feedback, basically.
B: Yeah, I think it's worth a discussion, and I'm fine to wait on having that discussion. I don't think it's urgent.
B
And
we
can
talk
about
planning
issue,
I
think
it'd
be.
The
design
planning
issue
is
interesting.
I'm
curious
to
see
how
it
goes
for
the
quarter
or
for
the
month
for
the
milestone.
Whatever
period
of
time.
It
is.
A: Yeah, let's see how it goes. It might be interesting to have this experiment on the side first, to see how well it fares, and then we can think about transitioning and adapting some of the parts into that global planning issue. My main concern was polluting it and having many things that are specific to UX, or product design in this case, but maybe we can abstract some of the things, or hide them, or do some clever things, while keeping the same outcome.
B: Yeah, I think in the planning issue there's noise to it, but we ask engineers to provide updates, or talk about what's going on, or problems and things that they're running into, which at least helps me understand where things are at, so I can do all of that. And this is sort of that earlier content that I know some engineers have asked for, like: where are we going?
B: What are we working on? We don't talk about a lot of this stuff in forums where they're typically present, and so it might be a way to shine a light on it, like: here are all these things that are happening; we're not at the work stage yet, but this is what we're thinking about and talking about on a regular basis. Exactly.
A: Yeah, and maybe once we do that and we have everything in one planning issue, we can do a retro and just ask people: did they think it was a lot of noise? Was it valuable or not? And we can go from there. Yeah.
B
So
cool!
Well,
I
appreciate
you
trying
trying
it
out
this
milestone
it's.
This
is
already,
I
think,
more
insight
than
I
had
maybe
into
what
product
design
worked
out
on
a
regular
basis.
Anyways,
I
knew
in
broad
strokes
what
was
happening.
It's
nice
to
have
a
specific,
so
yeah.
A: Thanks, Kai, thanks for the feedback. It was nice seeing you, and good luck with your mic; throw it out, I think that's what's going to happen.