From YouTube: 2021-05-12 Code Review Weekly Sync
A
Good to have his perspective. Cool. A bunch of out-of-offices coming up; take a look. The only thing I'll highlight is that mine will be East Coast time for the next two-ish weeks, so it'll make me an hour closer to.
C
Sure, so I just wanted to quickly grab the opportunity to discuss a little bit of the ideas that came up on that brainstorming issue. The idea for that issue is just to capture any potential things that we might not have thought of, or just to promote deeper discussions, and I think we already have some good insight there, and a bit of validation for the ideas that we've had in the past.
C
I highlighted there the auto single-file mode that David suggested; it seems like we do have some support for that. Server-side rendering came up again, and that's always one of the things that comes up. But I wanted to drill deep on the one that Kerry brought up, because a lot of the things that we do today are heavy and have to be checked a bunch of times, and it takes time to render them in particular.
C
We could potentially get a really good quantum-leap improvement if we flip it on its head and do some pre-calculation, instead of only calculating certain things on page load. She goes deeper into that rabbit hole a little bit in her comment, but I do like that idea of reacting when something happens to a merge request.
C
We know what that affects. Somebody pushes a commit: that changes the diffs. Somebody adds a comment: that's just the discussions, or potentially the mergeability has changed. Now, this is far easier said than done, because just the mergeability is like thousands of checks, as Pedro very well showed in the mapping of the merge widget. But yeah, I wanted to see how we can go from here, especially on this particular topic of pre-calculating things that take long to calculate on page load.
C
My perspective is that when we're rendering the page, we would have that data already available. Is it mergeable? Boom, we just serve a true/false, instead of having to go and check a bunch of things. So it's faster to get that response.
C
What would happen is: on the events that change the state, or change the comments, the notes, or the resolvability of a note, we would do two things. One would be to update the state that needs to be updated. The other is to send it down the real-time pipeline, over WebSockets, saying "hey, a new comment was just posted, do whatever you want with it." That way, the next time that MR is rendered, it will already have that state pre-calculated.
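The two-step flow described here (update the stored state, then push a real-time event) could be sketched roughly like this. All names (`MergeRequestState`, `on_comment_posted`, `SUBSCRIBERS`) are hypothetical illustrations for the discussion, not GitLab's actual code:

```python
from dataclasses import dataclass, field

# Hypothetical in-memory stand-ins for the persisted MR state and the
# websocket pipeline; a real implementation would differ.
@dataclass
class MergeRequestState:
    mergeable: bool = True
    comments: list = field(default_factory=list)

STATE: dict[int, MergeRequestState] = {}
SUBSCRIBERS: list = []  # stand-in for connected websocket clients

def broadcast(event: dict) -> None:
    """Send an event down the real-time (websocket) pipeline."""
    for notify in SUBSCRIBERS:
        notify(event)

def on_comment_posted(mr_id: int, comment: str) -> None:
    """On an event that changes MR state, do two things:
    1) update the stored, pre-calculated state;
    2) tell live viewers a new comment was posted."""
    state = STATE.setdefault(mr_id, MergeRequestState())
    state.comments.append(comment)
    broadcast({"type": "new_comment", "mr_id": mr_id, "comment": comment})

def render_mr(mr_id: int) -> dict:
    """Page render just reads the pre-calculated values: no heavy checks."""
    state = STATE.setdefault(mr_id, MergeRequestState())
    return {"mergeable": state.mergeable, "comments": list(state.comments)}
```

A live viewer would register a callback in `SUBSCRIBERS`; on the next page load, `render_mr` reads the cached values directly instead of re-running the heavy checks.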
C
How hard is it? Yeah, that's what I think we need to get to the bottom of, especially what would be pre-calculated. The mergeable state is kind of the biggest one, because if I could just read a true/false, it's far quicker than having a spinner while we ask "is this mergeable?"
C
That seems like the quickest win to cover, but it could be extended to the diffs: if the diffs are cached and stored somewhere every time a commit is made, then we can benefit. But again, I'm clueless on what that would entail on the back end. So it sounds like at least a spike would be beneficial to dig into this, but I was wondering if we had more actionable things than just a spike.
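A minimal sketch of the diff-caching idea being floated here, under the assumption that diffs can be keyed by the base and head commit SHAs; `compute_diff` is a hypothetical placeholder for the expensive back-end work:

```python
# Hypothetical cache of rendered diffs, keyed by base..head commit SHAs.
DIFF_CACHE: dict[str, str] = {}

def compute_diff(base_sha: str, head_sha: str) -> str:
    """Placeholder for the expensive back-end diff computation."""
    return f"diff of {base_sha}..{head_sha}"

def cached_diff(base_sha: str, head_sha: str) -> str:
    """Compute the diff once per push; later page loads reuse the cache."""
    key = f"{base_sha}:{head_sha}"
    if key not in DIFF_CACHE:
        DIFF_CACHE[key] = compute_diff(base_sha, head_sha)
    return DIFF_CACHE[key]
```

On each push the cache would be warmed for the new head SHA, so a page load never pays the computation cost twice for the same pair of commits.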
C
I know this is a broad scope, and I think that's what I like about this feedback: it's very open-ended. But now we have to drill it down.
C
So unless somebody objects, I think I'll open an issue, bring this comment there, and we can get to the bottom of exactly what that would be. Tong talks about prioritizing the things that we really want to provide to the user every time they reach the MR, like the pipeline and the latest version of the diff. That crosses over to other stages, but it might make sense for us to test something within code review to see what the benefits would be of leveraging caching more, or something like that. It would be interesting to do during this quarter, because I think it would benefit the whole thing. I'll open an issue and share it on Slack so we can all follow along, but right now I'm just curious whether we have time to turn this into a deliverable for 14.0, or do we just schedule a spike? That's my biggest question.
C
All right, I'll take that action. I know there's a couple of other ideas there; I haven't converted everything into an issue or anything, but there are a couple of ideas there as well that are still worth pursuing, so we'll use this issue as a guide in the next couple of days and weeks.
A
Thanks, Andrew, for putting this together and soliciting the feedback. I think it's good to get people thinking, and it's nice to see people outside of our group thinking about it as well. The real key is going to be finding what we can do in two milestones to move the needle. So, yeah.
B
Is there an issue that we can follow, or a place that has the issues we're already working on to improve the performance? Are those the ones with the large MR label?
A
Yeah, anything with the label would be issues that we think are targeting this, and right now there's not really anything there. So we need to figure out what those pieces are that are actually going to move the needle.
A
I don't even know that there's anything in 14 yet that sort of says that. I think the closest we've got is the virtual scrolling effort, but we know there are some other things there that we need to test to figure out whether we can use that or not. We've got outstanding issues with Gitaly for some other things that could potentially be in this realm, but we haven't heard back there yet either.
C
Yeah, Phil created an epic for the virtual scrolling, which does include a couple of side effects that we would need to address. I don't think those issues are.
C
So Phil created this issue, this epic, and there's a couple of issues already. I expect this list to grow as we go through reviews and through experimentation with the virtual scrolling, but the things we've already identified as broken and needing work are those three issues.
C
The first is allowing us to enable it even if the feature flag is off. This will let us test with sitespeed by specifying a URL, so we'll have tracking already even though the feature flag is off. And then linking to a discussion and linking to a file is kind of broken at the moment; we need to fix that. Those are the things we already identified, but we expect this list to grow a little bit more as we go through reviews.
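The first item, enabling the feature for a test run even when the flag is off, might look something like this; the parameter name `virtual_scrolling` is a hypothetical illustration, not the actual flag mechanics:

```python
def virtual_scrolling_enabled(feature_flag_on: bool, query_params: dict) -> bool:
    # The feature is on if the flag is enabled, or if a test run
    # force-enables it through a (hypothetical) URL query parameter,
    # so sitespeed tracking can exercise the code path before rollout.
    return feature_flag_on or query_params.get("virtual_scrolling") == "true"
```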
B
Cool, Pedro, we have a question? Yeah, it's a quick question. I have others related to this, but I think this is a quick one. Can someone explain the difference between these results that we have in the handbook, which I think are Grafana dashboards, and the other results stored in that wiki of the quality performance project?
C
Sure. So basically the ones we have in the handbook are kind of historical; they precede the quality performance team and that work. This is basically tests running against production, on a couple of examples that we saw and tracked. The one you're pointing to is the 10k.
C
It's a 10k reference architecture, so that we get to experiment with really large data sets. It's something our performance team set up to run against; I think it's an environment they set up, picking the code from the nightly, but it's aimed specifically at testing the edge cases of large customers.
C
They come from different moments in our performance journey. The first one is older and, as you can see, the handbook is outdated; the last numbers we have there are from February 2020, because it's manual and we would have to go in there and take a snapshot.
C
We've eventually evolved to new ones where we just go to the dashboards and we have that history, which is far better; we didn't have that at the time, I think. So yeah, does that answer it?
B
That's in line with how I was interpreting this. Okay, cool.
C
There's a couple of other dashboards where we look at the User Timing API, which is basically events that we've added to our own applications.
C
It's a little bit buggy, because every time the page loads it also loads the snippets' user timing marks. So, the User Timing API dashboard: that's a long link, but you can play around with it to change the page that we're tracking, and you can see the merge request showing the file tree render, the first file starting to render, and the last file rendered. That gives you more information about how our apps are behaving.
B
Cool, yeah, that makes sense. One thing I wanted to share related to this, that I'm doing during this quarter, is evaluating the large merge requests; initially it was to identify opportunities to improve the perception of performance.
B
Just doing changes in the user interface, without much technical work. And it was, I think, interesting, because I ended up choosing the same merge requests that we identified as the large merge requests for this. Let me share the link to that.
B
So what I'm thinking about doing, and while we're here I'm just going to see if it makes sense from your end: in this issue I linked to these two different merge requests, and one of them is the one we selected.
B
So it's this one, the RSpec upgrade, and this merge request is great, but the problem is that if I want to go through the usual tasks of commenting, resolving, and all of that, it doesn't work that well, because it's already merged. And then there's this closed one, which is another merge request that we looked at, and my idea for this merge request, which is also a large merge request, is to reopen it and run some of the tasks using the reopened merge request.
B
I imagine that all of these threads would become unresolved. And then I'd do a similar thing for this merge request, which is to start a new merge request reverting the changes. So it won't cover the cases where we're commenting and all of that, but it would cover the case where we're creating a large merge request: it would take all of the changes here and revert them.
B
So I imagine, in theory, that it would be just the inverse, right? And I was able to successfully import this project into my local GDK, after a long time and a lot of trial and error, but it's there. So that's what I'm planning to do, so that we can have the timings and all of that machine-reported information, but also have me, and anyone else in the future, go through specific tasks to help identify opportunities to improve the perception of performance.
C
That's what I was explaining, yeah. I did bring it up with Grant on that issue, and his point is that it's just easier to grab something that already exists. So having a revert of that MR is helpful, and we can just import that instead. We should probably get that documented somewhere so that we can use it.
C
Maybe we can go into the OKR issue that we have open and document it there: don't use that one in particular for the open state, because it changed quite a bit, and I don't think that if you reopen that MR the discussions become unresolved, because they're legacy notes, so they're old.
C
So it's trickier than that. Opening the revert will be helpful, but then you don't have the comments, so there are still quite a few gaps. Yeah, we should probably get Tommy involved and get a very robust solution for this, because we need an iron-clad, robust example of an MR to reference every time we talk about this.
B
Okay, yeah, that's a good point. I didn't remember that these were very old comments. Let's see; I'll do my best with what we have, even if some of the tasks and flows are not perfect, and even if I have to manually create a comment just to resolve it later. I don't know; we'll probably hit some things. But thanks for that, and yeah.