From YouTube: 2021-05-26 Create:Code Review Weekly Sync
Description
Weekly Sync for the Create:Code Review group
A: All right, I think the out-of-office one is a read-only, so Matt, you've got it.
B: Right, so we have this OKR around performance of large MRs; hopefully everyone knows about that by now. One of the things we're going to do here, since it's about the end of the month, is score these OKRs and give them a progress update, and I wanted to get people's feedback on how we should score this specific OKR.
B: Should we base it just on the performance times, or on how much work we've been putting into it? I don't know, just looking for some input.
A: I think, given that we have an incredibly objective OKR, which seems rare for some of our OKRs, this should be purely measurement-based. We have a before measure, and in theory we will have after measures, and I would say that's the progress. That also means, I think, we're at zero percent still, which is fine; I think we sort of knew we would have a lag leading into this and expected more of it to roll in at the end.
A: But because we're saying we have goals we want to hit, I don't think we should be accounting for work or thought or planning activities in this. We should be looking purely at the outcomes; that would be my suggestion.
A: So Grant moved the site speed dashboards and the TBT dashboards over to use the new agreed-upon MR last week. Okay, yep, I did see that. So that would be the baseline zero measure, because we have not shipped any improvements yet, so whatever those reported last week would be our zeros. We should probably document that.
A: That's probably something we need to get documented in there: here's where we're starting. Then we'd be able to easily tell what targets we have to achieve for both those pieces. Okay, memory is harder. Is memory in the site speed test folder? There is memory, so there.
C: And yeah, that's kind of what I feel too, Matt. It's important for us to not get ahead of ourselves. We do have some improvements already merged to master, but they're not enabled by default, so we shouldn't count those. We should only count the ones that our users are seeing directly.
C: So I think we might add a reference on the OKR as a comment that we have ongoing work. As for the metric, there's an important communication part of this: there are a lot of eyeballs on that OKR, so it's important to vocalize that there is shipped work, just not enabled for customers, but for the number of the metric, definitely not zero. Yeah, I'll leave a comment on it here.
A: And I guess, like, I tried to look at this tool and how it was set up, and we can do this somewhere else, but I'm not technically assigned any of these, because we've decided to flow these through Engineering and Product and not groups, which makes this a little bit more complicated.
A: So I don't actually think I can even update or do anything to these OKRs now, and I'm not sure if that's intended or not.
B: Yeah, I don't know, and I can try to figure that out, but right, each level has its own version of it; that's why you're seeing so many. There's a little work that still needs to be done in Ally, I think, to figure this out, because I have one assigned for our team, so if I update it, then all of the other ones above it will automatically get updated.
B: It kind of rolls up like that, so you only really need to update it in one place and then they should all follow. But I don't know if I should just not have this one and use the one that Christopher created at the top level, because that's kind of the main one people are looking at, I think.
D: So I think the key results, the deepest nested ones, are where you would update it, and then, yes, it cascades up, so you would not update the top-level one. We have the same thing for the UX OKRs: basically we're saying, hey, this is the goal for the whole department, and then what we did was duplicate those goals, with exactly the same names, for the specific managers.
D: So it's basically just a way of saying: this is the goal, and each person can individually update it if they want. But here, given that it's a key result just for the Code Review team, I would expect this to have what you're hovering over right now, maybe one for the backend and another for the frontend, or just one for the whole team, and nothing else.
B: Right, so we'll figure that out; I think that makes sense. We'll have to get used to this tool a little bit. In theory, how it should work would be: we don't have to talk about OKRs that much, but Christopher or someone would say, okay, we need to do this, and that might span multiple groups, and my key results here might be something specific about backend stuff that would flow into that.
B: There might be a couple, and then maybe Andre had a couple that apply to the frontend, and all of those combined would flow up into the one main OKR. But that's not really how we've set up OKRs in the past, especially some of these that we just copy and paste everywhere. So we'll figure that out, but that can be a topic for a different conversation.
C: Yeah, I'll have a look. I have been asked to create my OKRs too, so even though we have a joint OKR between Product, UX, backend, frontend, and Quality, we all have our own OKRs individually for the teams; they're just the same. So once the frontend one is created, it will be there. I think we'll have to find a way to work around how the numbers are reported up.
C: I don't know if they support that, because if you have 50% achieved and I have 50% achieved, since it's the same number and we're measuring the same thing, that will account for like 25% for Darva. It's not accurate; it shouldn't be calculated in the normal way. So we'll have to figure that out, but that's a limitation of the tool. We can skip it for now.
B: I think it's tricky because, yes, I was asked to keep Ally updated with the status and the scores, or percentage complete, or whatever. But now I'm confused whether we should be putting in links to issues. I mean, I have a link to the OKR issue, because we kind of moved it over, but should I be saying, oh, we did this issue and that issue, and fill in a bunch of details in Ally, or should that stay in GitLab?
C: I don't know; we still have to figure some of that out. That's why this is a trial, I guess. I think we can agree here that the hardest part of scoring an OKR is getting to the actual number, so I don't think it's that much of a hassle to keep the issues updated as well for this quarter, for everyone's benefit.
C: Cool, I have my next point; can I go? Right, so I'm trying to provide an update: we have shipped, behind a feature flag, the improvements for virtual scrolling. It still has some experience gaps, which is why it's not enabled for users, but we also allow you to enable the virtual scrolling mode with a parameter on the URL.
C: If you want to play around with it, all you have to do is enable it with ?virtual_scrolling=true and you'll have it to play around with in your browser. Feedback is welcome, of course. That also means we're able to hook it up with the dashboards that we have. These are not the 10k reference architecture ones that Quality uses; these are the ones we're tracking on our live data in production. So that's where we're measuring.
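[Editor's note: the query-parameter toggle described above can be sketched as a small helper. The exact parameter name is an assumption rendered from the spoken "question mark virtual scrolling equals true" and may differ in the real implementation.]

```javascript
// Check whether virtual scrolling was requested via the URL, e.g.
// https://gitlab.com/.../-/merge_requests/1/diffs?virtual_scrolling=true
// The parameter name "virtual_scrolling" is an assumption from the discussion.
function isVirtualScrollingEnabled(urlString) {
  const params = new URL(urlString).searchParams;
  return params.get('virtual_scrolling') === 'true';
}

// Browser-only usage (guarded so the sketch also runs outside a browser):
if (typeof window !== 'undefined') {
  console.log(isVirtualScrollingEnabled(window.location.href));
}
```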
C: We have one dashboard specific to virtual scrolling, and that allows us to compare with non-virtual scrolling. Just so it's clear: once we enable this by default, we'll be able to see the historical data without virtual scrolling and then the numbers coming down with virtual scrolling enabled. This is just a temporary entry so that we can see and compare the metrics as this rolls out over time. We might just remove this virtual scrolling one and keep the one that has been tracked over the past couple of months and years, I think, yeah.
C: We're addressing usability aspects this milestone, and we're looking to potentially enable it in production in 14.0, if there are no blockers from UX review, of course. I'm not sure if we'll be in time to enable it by default in 14.0.
A: Yeah, I mean, it looks like the only improvements are in total blocking time and not necessarily in everything. There are improvements, but they're not like, well, LCP is slightly lower, but only by two-tenths maybe; maybe that is significant, I don't know. Are we expecting that to get better, or is this more of a "the user can start interacting faster" sort of improvement? I guess I'm trying to figure out what this really does in the context of the OKR.
C: So there are many things there. The reason is that these measurements, like the Google Web Vitals, the FCP, the LCP, all of those, are generic measurements, and they weren't being fed much by the full render of the diff. As soon as we render the first part of the page, those numbers are taken. I do think they will potentially be lightened up, because the memory usage will be lighter, but not by a lot. The biggest wins are definitely on the TBT and the time to interactive.
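[Editor's note: for reference, these generic paint metrics can be read in the browser with the standard PerformanceObserver API. This is a minimal sketch of how LCP is observed, not how the GitLab dashboards actually collect it; the browser emits several LCP candidates and the last one is the final value.]

```javascript
// Reduce a list of largest-contentful-paint entries to the final LCP time:
// the browser may report several candidates, and the last one wins.
function finalLcp(entries) {
  return entries.length ? entries[entries.length - 1].startTime : null;
}

// Browser-only wiring (guarded so the sketch also runs outside a browser):
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    console.log('LCP candidate (ms):', finalLcp(list.getEntries()));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```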
C: Now, I do expect this to achieve one huge improvement on the time to render the whole page, which translates into users being able to interact quicker. But again, that depends on how we're measuring the timing improvement: if we're just looking at the LCP, this is probably not going to show it. But the LCP is already below target, so that might not be the best metric to focus on anyway.
C: But yes, it will definitely have a direct user impact, and it will have an impact on the memory too.
C: Yeah, so coming back to this: under this metrics section down at the bottom, we can see the JS heap ones, and these numbers are improved. I'll be honest, I'm not completely clear on the difference.
C: I think total size is the one that's used in total, but then there's some garbage collection; I'm guessing here. So this is the actual one at the end, which would bring us from 273 million bytes, which is roughly 273 megabytes, down to 184. It is showing some improvement that we can track. It's not the only strategy we have to nail that metric, but it is one of the biggest ones, so we'll try to keep track here, but also with some manual testing.
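[Editor's note: the heap numbers quoted above use decimal megabytes (273,000,000 bytes is roughly 273 MB); a small helper makes that conversion explicit. Reading the live heap via performance.memory is a Chrome-only, non-standard API, so the wiring below is an assumption and is guarded.]

```javascript
// Convert a raw byte count to decimal megabytes, matching the discussion
// above (273,000,000 bytes -> "273.0 MB", 184,000,000 bytes -> "184.0 MB").
function toMegabytes(bytes) {
  return `${(bytes / 1e6).toFixed(1)} MB`;
}

// Chrome exposes the JS heap on the non-standard performance.memory object;
// this branch simply does nothing in other environments.
if (typeof performance !== 'undefined' && performance.memory) {
  console.log('JS heap used:', toMegabytes(performance.memory.usedJSHeapSize));
}
```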
A: Thoughts: can we add "fully loaded" to one of the big boxes on the right? Only because, clearly, I did not see that it existed in the little one.
D: Yeah, I think it's really good; it's a very good improvement. For very large merge requests, at least when I was testing it, it had a very clear improvement. Of course it didn't improve everything, as we see, but it did improve some things, and I think it's a good improvement. So thanks, Phil, if you're watching this.
C: Yeah, so we're still waiting on you and Soon Young to provide us with some guidance on anything left that we might have missed. So keep your eyes open and do relay it to us. Open issues, that's the best way, so that we can immediately jump on them and fix them.
C: There's still the question of the search, which I think, once everything is in place, we might want to have a conversation about: whether it's something that we accept as a cost, or whether we want to talk about how we can mitigate the lack of it, like building it in the IDE, building our own search, building our own...
D: Anything else? Just to clarify: the feature flag, when we release it, would be off by default?
C: So our idea is just to follow the normal rollout. As soon as we are okay with the experience, we'll be turning it on for the gitlab project and then for www.gitlab.com, and then it's up to us to decide the rollout. We can spread it around gitlab.com before the end of 14.0, which would give us a lot of time to collect feedback, but given the big change, I think we need to be careful about rolling it out default-enabled.
D: Right, and I think what you mentioned about searching and using the browser find function, that's something that we need to solve before then, because we have a lot of feedback from users. We don't have quantitative data, but we have qualitative data that shows that people used it, especially in large merge requests, to find things.
C: So it might make sense to create an issue about that particular part, so that we can start brainstorming how we can get around it. We have a couple of ideas. I think enabling this in production will provide a lot of feedback, like real-world usage. Say we turn it on for a couple of weeks and nobody complains on gitlab.com.
C: That's a useful metric, I think, or a useful piece of feedback. If they come back immediately, on the same day, on Twitter saying "they broke my experience," then we can definitely take that feedback too. But we might want to start a conversation around that mitigation of the search, yeah.
D: I mean, I wouldn't expect it to break every single review session, but it would definitely break people's workflows now and then, and they would then have to say, okay, now I have to review everything in my IDE, which is something people already do when they want to find references and definitions, because we don't support that natively in merge requests. So yeah, I think that's a concern we need to address before enabling this.
C: Let's have a chat, because there are some solutions that revolve around disabling the virtual scrolling at that moment, but there's no really great cross-browser way of detecting that the user triggered the native search. For example, we won't be able to track that on mobile, but on normal laptops and devices we can detect the keystrokes.
C: So we could disable the virtual scrolling at that time, but we don't know what the experience would be; probably the browser would hang there. So we have to talk about that. So, yeah.
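[Editor's note: the keystroke detection mentioned above could look roughly like this. The shortcut check is a sketch (Ctrl+F on Windows/Linux, Cmd+F on macOS), the disable hook is hypothetical, and, as noted in the discussion, this would not cover mobile.]

```javascript
// True when a keydown event looks like the browser's native find shortcut:
// Ctrl+F on Windows/Linux, Cmd+F on macOS.
function isNativeFindShortcut(event) {
  return (event.ctrlKey || event.metaKey) && event.key.toLowerCase() === 'f';
}

// Browser-only wiring: fall back to full rendering before the find bar opens.
if (typeof document !== 'undefined') {
  document.addEventListener('keydown', (event) => {
    if (isNativeFindShortcut(event)) {
      // disableVirtualScrolling() is a hypothetical hook; the real one would
      // live in the diffs app.
      console.log('native find detected, disabling virtual scrolling');
    }
  });
}
```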
C: Cool, okay. You have your next point?
A: Yeah, one last one in here: we have been brainstorming ideas for increasing adoption of Workflow, which is the VS Code extension.
A: And then, if you promote GitLab things on your socials: another blog post is out, so please share it on Twitter and LinkedIn. That is part of our acquisition strategy for the extension. Those all help to get it out there, to help people download it and find it and all the great work that's in there. So, two things to bring up, since we don't...
A: We don't talk about it much on this call. But, Pedro, you have a question?
D: Yeah, I was wondering if there's a specific adoption goal, like this many monthly active users, or just an open brainstorming session to figure out things to, you know, increase adoption in general.
A: Well, let me see; there's not anything defined. What I will say is: when we started talking about this, in the context of not investing in the Web IDE as much, back when this was still an editor, we sort of made a bet that we would be able to gather more users than the Web IDE has, because there's a theory that VS Code holds 50% of the market share; VS Code has 2 million users per month, or something like that. And so...
A: If we got half of those, we should have a million users in VS Code, and we do not have a million users. The Web IDE, I think, has historically been somewhere in the neighborhood of, well, I was trying to find the dashboards for it, but I think it used to be in the 40s to 50s when we started talking about this in terms of users. So I'd expect us to be better than the Web IDE.
A: Beating the Web IDE would be my first benchmark goal, the threshold I want to cross, and we're not there yet. The Web IDE still has probably four or five times the number of users that the VS Code extension has, I think, so we're not there yet. That's the first goal, and then we need to figure out how to get closer to what we believe a 50% market share would look like. But the first goal is to cross that Web IDE line.
D: Is there a specific thing that you think is blocking our adoption? Like a specific hurdle where, if we do this or solve that, we would be able to have more adoption?
A: I mean, my primary hypothesis is that it's an awareness thing: people don't know that it exists. It's not in the GitLab UI, and if you're in VS Code and working, it's probably not likely that you would think about looking for a GitLab extension if you sort of had everything else you needed, because really you might think about GitLab as just git push, and then you open your merge request.
A: Then you go to GitLab; you don't think about all of the other things that we have in the extension that provide you value, or you might have other things there that deal with those needs. I think it's primarily an awareness thing, and so the question is: could we insert GitLab Workflow into the GitLab application in places that would help highlight that, hey, you can also do some of these things in VS Code? I think there's an idea there.
A: I think I floated this idea before: when you git push from the command line, we show you a link to open a merge request.
A: It would be amazing if somehow we detected that your push was from VS Code and then said, hey, how about GitLab Workflow? But I don't think we get a user agent on a git push that would allow us to return that in the message. So, sort of crazy ideas like that. I think it's mostly an awareness thing, because many people have VS Code and have lots of extensions; it's just a question of whether they knew this one even existed.
D: Yeah, that makes sense. When you were talking, I was thinking, as a joke: if we detect a very large and slow merge request in the UI, we display a message saying, hey, there's a better experience; don't use the GitLab UI, download our extension for VS Code.
C: You're joking, but we can; we have access to the timing. I don't know if you've ever seen this, but when you're loading Gmail, on the loading screen you have a little link: hey, if this is taking too long, you can go to the simple basic HTML version. So there are ways and moments to upsell the VS Code GitLab Workflow extension.
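[Editor's note: the Gmail-style "taking too long" link described above could be driven by a simple elapsed-time check. The threshold and the message are illustrative assumptions, not anything decided in this meeting.]

```javascript
// Decide whether to show a fallback/upsell hint after a load has been running
// for elapsedMs. The 5-second threshold is an arbitrary illustrative value.
const SLOW_LOAD_THRESHOLD_MS = 5000;

function shouldOfferUpsell(elapsedMs, thresholdMs = SLOW_LOAD_THRESHOLD_MS) {
  return elapsedMs >= thresholdMs;
}

// Browser-only wiring: start a timer when the diff begins loading and show
// the hint once the threshold passes, if the page is still loading.
if (typeof document !== 'undefined') {
  const start = Date.now();
  setTimeout(() => {
    if (shouldOfferUpsell(Date.now() - start)) {
      console.log('Taking too long? Try the GitLab Workflow extension for VS Code.');
    }
  }, SLOW_LOAD_THRESHOLD_MS);
}
```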
C: I think there are some challenges; the link that we have kind of forces us to clone the entire project. But we can follow the pattern that we have for the checkout branch on the merge request widget. It already has the "Open in Web IDE" option, if we rethink the way that area works. If you click today on the checkout branch, it will show you a popup, a dialog with instructions.
A: We just have to think through them. So if you've got those ideas, please feel free to drop them in the issue, and then we can figure out how we prioritize that work in conjunction with some of the other stuff going on. But I wanted to put it on the radar.