From YouTube: 2023-01-19 Code Review Performance Round Table
A: And we're live. Welcome, everybody, to yet another weekly meeting of our code review performance roundtable. We'll get straight to the point. We have a couple of open topics; I'm going into the board and looking at the ones that are still being refined, so we can discuss them. If you're watching the recording and have topics you want looked into, make sure to add those to that column on the board.
A: So first is Apollo cache persist. We approached this in the last call we had, but nobody was really on top of things, and I think Stanislav got a little bit of an update on that issue. So Stas, could you do a quick summary?
B: Most of our heavy APIs are regular JSON APIs. The one that might benefit from it is labels, but I think Natalia just recently posted that labels already has this. So there is not much left for us to do here, I think.
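To make the idea concrete, here is a minimal, dependency-free sketch of what a cache-persist layer does: snapshot an in-memory client cache into a localStorage-like store on every write, and restore it on the next page load. The class and method names here are illustrative only, not the actual apollo3-cache-persist API (the real library exposes a `persistCache({ cache, storage })` entry point wired to the Apollo cache).

```javascript
// Illustrative cache-persist sketch: restore a snapshot on construction,
// write a new snapshot after every cache write, so a full page
// navigation keeps the previously fetched data.
class PersistedCache {
  constructor(storage, key = 'app-cache') {
    this.storage = storage;
    this.key = key;
    // Restore any snapshot left by a previous page view.
    const snapshot = storage.getItem(key);
    this.data = snapshot ? JSON.parse(snapshot) : {};
  }

  write(field, value) {
    this.data[field] = value;
    // Persist after every write so the data survives navigation.
    this.storage.setItem(this.key, JSON.stringify(this.data));
  }

  read(field) {
    return this.data[field];
  }
}
```

The trade-off mentioned on the call follows from this: persistence only pays off for data that is requested again across page views (like labels), not for one-off JSON payloads.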
A: Yeah, you're right, she's working on it. I'm not sure what the status of that is at the moment. But actually, let me just scroll. Oh.
A: Merged, yes. I did feel like it was a bit faster, and I was like: Natalia, is that you? So yeah, I think it was good stuff, and we're actually going to be seeing how that rolls out and what the feedback is. As I was pondering your comment last week, I think one of the places in our domain that could benefit would be user mentions (I'm not sure about reviewers), but, like you said, we don't really take much benefit from persisting that across multiple page views.
B: I don't think either is possible right now, because the autocomplete itself is not using Vue; it's using jQuery atwho, which is a very old library that we have to support.
B: We cannot simply go there and add GraphQL, or add some caching that reuses data from the page. That's one of the blockers, and I have an issue exactly for that, to at least try it. So this library is actually a big blocker for us.
A
Okay,
do
you
want
to
add
some?
You
want
to
add
a
comment
to
that
I'm,
going
to
add
a
comment
right
now
to
that
issue.
Maybe
you
can
add
that
it
feels
like
there's
a
little
bit.
There's
not
a
lot.
We
can
do
right
now
in
the
context
of
performance
right
here,
so
I
would
propose.
We
can
still
continue
to
explore
that
topic,
but
we
can
take
the
refinement
label,
there's
not
much
to
drill
further
on
the
stop
picking
the
upcoming
weeks
right.
So
we
can
take
it
off
the
board
right.
B: You can still work on it, but I just wanted to mention that it's not related to the GraphQL caching stuff, because we are not using GraphQL in autocomplete at all. Okay.
A: So the next topic is suppressing files in diffs to reduce overall payload. I'm wondering if there's still work to be done now that we've scheduled it for 15.9, if I'm not mistaken. We settled on an issue, didn't we, Kai and Matt, or are you spiking?
C: I don't know if it's been planned or not. Okay, I don't want to stay confused, so if someone can explain the connection to me: Kai and I have been talking in a different issue about using linguist to suppress generated files, that whole thing, and the issue that's linked here is more about doing server-side rendering, or at least that's the conversation that's happening. Are those related topics, or two different initiatives or ideas?
B: I think these two are different, because the original issue mentions just removing very large divs from the payload, while server-side rendering basically suggests removing the API itself and just outputting the ready markup, and that's it. So it should be done on the markup level, not on the SPA level, and these are different.
A
So
I
think
that's
why
the
issue
is
still
open.
We
left
it
because
of
the
server-side
server-side
rendering
discussion
so
that
discussion
that
you're
having
Carrie
definitely
still
makes
sense
and
makes
sense
in
that
context,
I
think
this
is
just
an
issue
that
has
the
wrong
title.
I
guess:
is
there
anything
else
to
be
discussed
on
the
context
of
server-side
rendering,
though
of
this
discussion
with
Patrick
and
stanislav
we're
having
on
this
issue,
is
there
anything
left
to
be
discussed,
or
are
we
just
waiting
for
prioritization
on
that
effort?
C: So maybe we can use 15.9 to discuss it, if there is any additional discussion that needs to happen, especially around what we need to implement, what we need to build, or even for a spike, and then we get into 15.10.
A
So
we
have
a
spike
for
stats
to
investigate
the
server-side
rendering.
Then
this
is
tied
with
some
of
the
work
that
he's
already
doing.
You
know
the
stuff,
but
there
will
be
some
investigation
on
the
front-end
side
of
things
related
to
service
side,
rendering
again
we're
not
aiming
for
full
server-side,
rendering
we're
aiming
for
partial
server,
side,
renderings
and
then
working
with
that
and
status
working
on
a
related
thing,
with
the
blame
page
that
we're
trying
to
get
some
lessons
out
of.
A
So
this
is
the
continuation
of
that
investigation
applied
to
the
diffs
so
but
yeah.
This
issue
feels
like
issue:
has
the
wrong
title.
A: Sorry, my brain; I was labeling something, but I wanted to unlabel. All right, cool, sorry for that. Any other things we should talk about at this point on either of these two topics? Otherwise we're going to move on to the lazy load.
A
All
right,
let's
move
on
to
the
laser.
This
is
a
new
topic.
I
think
it
hasn't
been
discussed
yet
here
in
the
context
of
the
of
the
call
so
status
you
open
this,
so
you
want
to
do
a
sort
of
a
short
pitch
in
what
you
were
thinking
about
this.
B
Yeah,
we
basically
have
an
HP
like
behavior
on
our
merge
request.
So
when
you
open
up
changes,
page
Pages,
for
example,
it
loads
Magic
request
description,
because
when
you
navigate
to
the
overview
you
have
to
have
that
there
and
that
leads
to
some
performance
issues.
If
you
have
very
large
descriptions
and
also
if
it
has
a
lot
of
markdown
stuff
in
it
like
some
tables
or
anything
else,
basically,
you
have
to
load
it
with
every
page.
B
Even
if
you
go,
for
example,
to
commits
page,
you
still
load
the
description
because
it
has
to
be
there.
So
the
idea
was
to
Lazy
load
it
when
we
are
on
any
other
page
except
for
overview,
so
it's
not
rendered
by
default.
But
if
you
go
to
the
overview
it
is
fetched
and
rendered
and
when
you
go
to
the
overview
initially
it's
already,
there
should
at
least
remove
the
HTML
size
and
you
won't
be
it
won't.
It
won't
be
necessary
to
parse
all
the
markdown,
etc,
etc.
B
It
does
that
in
the
background.
So
when
you
open
the
page
one
other
thing
one
of
the
things
it
does,
it
scans
the
whole
page
for
the
markdown
code
and
it
transforms
it
into
interactive
contents.
Whatever
it
is
like
the
code
blocks,
become,
became
actually
code
blocks
and
stuff
like
that.
So
even
if
you
don't
see
the
overview,
it
is
still
loading
it
is
still
executing
and
the
idea
is
to
do
it
only
when
you
actually
see
the
content.
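The "only when you actually see the content" part could be sketched like this: defer the markdown enhancement pass (turning raw markdown containers into interactive code blocks, tables, and so on) until the element scrolls into view. This is only an illustration of the idea, not GitLab's actual implementation; the visibility observer is injected so the logic runs outside a browser, whereas a real page would wrap `IntersectionObserver` for that role.

```javascript
// Enhance an element's markdown content exactly once, the first time
// it becomes visible. `observe(element, onVisible)` is an injected
// stand-in for an IntersectionObserver subscription.
function enhanceWhenVisible(element, enhance, observe) {
  let done = false;
  observe(element, (isVisible) => {
    if (isVisible && !done) {
      done = true;      // run the (potentially expensive) pass only once
      enhance(element); // e.g. highlight code blocks, hydrate tables
    }
  });
}
```

In a browser, `observe` would be something like `(el, cb) => new IntersectionObserver((entries) => cb(entries[0].isIntersecting)).observe(el)`, so off-screen tabs never pay the enhancement cost.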
B
Yeah,
so
you
also
asked
about
CEO.
This
won't
have
any
impact
on
zero,
because
when
we
go
to
the
overview
page
directly
it
it
shouldn't
be
any
different
from
what
it
is
right
now.
It
only
should
change
for
the
pages
that
are
not
the
overview
page.
D
Okay,
when
you
go
ahead,
okay,
this
is
it
when
you
say
we
won't
we'll
only
load
it
if
you
go
to
the
overview
page.
Does
that
mean
like
I?
Would
click
on
the
overview,
Tab
and
then
like
with
CA
spinner,
while
we
retrieve
and
then
parse
and
render
or
like
we'll,
have
the
data
we
just
won't
EXP
and
it.
B
We
can
do
this
in
different
ways.
For
example,
we
can
fetch
it
in
the
background,
fetch
it,
but
don't
insert
it
into
the
Dom
immediately
or
we
can
do
it
the
most
ways.
The
most
easy
way
is
to
shove,
a
spinner
wait
for
the
request,
the
insert
it,
but
that's
not
ideal.
So
fishing
is
the
background,
seems
like
the
most
compromised
way
of
doing
it.
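A minimal sketch of that compromise: start the request as soon as the page loads, but only await and expose the result when the overview tab is actually shown. `fetchDescription` is a placeholder for whatever API call would back this; it is an assumption, not an existing endpoint.

```javascript
// "Fetch in the background, render on tab switch" sketch.
function createLazyDescription(fetchDescription) {
  // Kick off the request immediately so the data is likely ready
  // by the time the user clicks the Overview tab...
  const pending = fetchDescription();
  let rendered = null;

  return {
    // ...but only resolve/insert the markup when the tab is shown.
    async show() {
      if (rendered === null) {
        rendered = await pending; // no spinner if the fetch already finished
      }
      return rendered;
    },
  };
}
```

The key property is that the network cost is paid eagerly while the parse/insert cost stays deferred, which is exactly the split discussed above.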
B: One of the things that actually blocks the start of the page is the amount of HTML we serve. So if we have a really large description in the issue or in the MR, that actually blocks us from starting the bundle. So if we don't have this content, we can start sooner and show the diffs, instead of showing the spinner, loading all the content, and then executing the bundle.
A: But we all know that there are some issues that get really long, and so do merge request descriptions. All right, I'd be willing to investigate this further. Would you please create an issue for... sorry, this is already an issue. Sorry.
A
So
what
I
think
we
can
do
is
put
this
in
the
pipeline
for
one
of
the
upcoming
Milestones,
so
we
can
consider
it
for
implementation
and
then
be
prioritized
against
other
all
the
things
that
we
have
on
the
pipeline,
but
yeah
I
think
it's
optimizing.
Basically,
the
loading
of
all
the
tabs
I
definitely
feel
like
it's
worthwhile.
A
It
doesn't
seem
like
a
big
effort.
Does
it
says.
A: Okay, I'm adding the label back in and removing the refinement one, and, if nobody objects, I'm going to put the milestone on for 15.10, so it pops up in the planning conversations. Is that okay?
A
We
discussed
face
on
our
call
within
the
app
for
consideration.
A
All
righty
thanks
next,
so
this
is
an
issue
I
created,
so
I'll
do
my
best
to
present
it.
A
So
this
came
up
in
the
conversation
of
the
the
school
year,
24
Vision
issue
that
I
created
and
Mark
Shaw
brought
up
the
topic
of
observability
and
increasing
visibility
of
the
performance
of
our
pages
and
stuff.
He
was
asking
whether
that
would
be
a
push
from
product
that
sort
of
thing,
so
I
don't
have
much
right
now.
I
I
just
created
it
a
while
ago,
a
few
minutes
so
I'm
going
to
be
inviting
Mark
to
kind
of
like
flesh
it
out
a
little
bit
more
in
the
coming
days
and
stuff.
A
So
I
feel
like
it's
a
bit
too
early,
but
if
anybody
has
any
thoughts
on
it,
I
already
shared
mine
in
the
issue.
Basically,
there's
several
things
here:
you
could
be
the
merge
request
analytics
it
could
help
us
I
think
that's
not
exactly
what
it
means.
We
have
several
performance
dashboards
of
the
code
review
Pages
through
site
speed,
including
some
quality
teams
suits
of
tests
for
the
reference
architectures
that
we're
using
and
referencing
a
lot.
And
recently,
the
team
worked
on
tracking
the
duration
of
specific
parts
of
the
business
request.
A
So
oh
and
Kai
added
the
link
to
the
dashboard
thanks.
Oh
sorry,
you
have
a
link
to
the
issue
to
create
a
dashboard
is
that
it.
D: I think that's potentially part of what Mark is talking about, and one of the things I'm sort of questioning as we talk about these things is: we have all of these, so how do we make sure we're all using the same data, all the time, to talk about the same thing? It's not helpful to me, as an outsider, to be told "go query Prometheus"; that's not a feasible thing, and it's not helpful.
D
Think
for
everyone
because,
like
those
queries
will
be
slightly
different
or
they
won't
look
at
the
same
data
or
the
date
range
or
whatever,
like
and
I,
think
he's
gonna
look
at
doing
a
grafana
dashboard
for
that,
but
it
would
be
nice
to
like
I
think
make
sure
that
we
have
pages
that
we
go
to
and
like
probably
something
in
our
handbook.
Section.
D
That's
like
here
are
all
of
the
dashboards
that
we
use
for
performance
and
like
what
they
are
with
like
details
of
of
what
they're
looking
at
and
how
to
like
interpret
them,
because
I
think
that's
one
of
the
things
that's
missing,
probably
just
in
general,
like
I,
know,
Santa's
love
did
a
ton
of
work
on
like
the
site.
Speed,
stuff
I
actually
have
no
idea
like
if
any
of
that
anyone
else
could
do,
or
if
all
of
that
was
local
or
like.
A
You
feel,
like
so
I,
see
these
two
into
a.
Are
you
expecting
to
have
one
place
to
see
the
actual
different
metrics
of
different
things
or
a
simple
handbook
page
linking
to
all
the
Rel
dashboards
would
be
enough.
D
I'm
fine
with
it
I,
don't
think
it's
reasonable
to
have
like
one
grafana
page.
That
has
everything
because
I'm
not
sure
that's
the
right,
but
it's
not
like
a
single
tool
for
us
right
like
we
don't
have
a
single
tool
to
do
it
all,
but
we
should
have
an
aggregated
place
to
like
tell
you
where
to
go
to
all
of
them
and
then
sort
of
like
what
they
all
do.
C
That's
actually
exactly
what
I
was
going
to
suggest,
but
I
opened
up
this
the
shoe,
because
that
I
mean
I
have
the
same
problem
where,
like
I,
want
to
answer
a
question.
I'm
like
I,
don't
even
know
where
I
have
to
remember
like
what's
grafana
versus
kibada,
it's
it's
a
confusing
mess.
So
just
if
we
organize
it
and
said,
here's
our
standard
tools
right
our
reference
documents,
if
you
will-
and
at
least
we
have
a
place
to
start
communally.
D
Most
people
haven't
seen
the
site
speed
work
that
that
stanislav
did,
unless
they
were
like
closely
following
that
issue
in
that
Mr
to
like
see
all
the
timings
and
everything
he
was
going
through
with
like,
like
nobody
I,
don't
think
anyone
saw
that
the
timings
that
Patrick
shipped
just
recently
I
don't
think
anyone
saw
that
like
I
was
curious
like
what
now
that
we
have
that
like,
are
we
looking
at
it
and
looking
for
like
and
no
one
like
it
said
anything
because
I
don't
think
anyone
knew
that
that
work
like
existed,
right
and
I
think
that's
sort
of
like
the
missing
piece
of
our
observability,
like
I,
think
we
actually
probably
have
more
data
than
people
think
we
just
don't
surface
it
in
a
way
that,
unless
you're
like
on
top
of
all
of
them,
you'll
never
know.
C
I
I,
don't
in
any
way
want
to
make
more
work
for
anybody,
but
it
would
be
really
cool
if
every
two
weeks
once
a
release
or
something
like
we
just
sent
out
like
hey
here's,
the
here's,
the
performance
Trends
over
the
last
period
of
time.
You
know
just
link
to
the
relevant
here's
three
or
four
interesting
graphs.
C
If
it
showed
nothing
or
is
there
something
interesting
I
know
that
I
found
about
stencil
I
saw
I
found
about
your
work
when
I
think
Andre,
you
sent
out
a
link
saying
look
at
this
great
Improvement
I'm
like
yep
graph,
goes
down
awesome.
You
know,
but
like
other
than
that,
like
I,
don't
have
any
context
for
understanding
it
so,
but
providing
that
right,
because
observability
is
all
great
or
measuring
things
is
great,
but
if
you're
not
observing
the
measurement,
then
like
are
we
observing
things.
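The periodic update the group lands on could start as something as simple as a function that shapes a digest message from a list of dashboards and hand-written highlights, which is then posted to Slack. This is a hypothetical sketch: the dashboard names and URLs are placeholders, and the actual posting step (e.g. a POST to an incoming-webhook URL) is deliberately left out.

```javascript
// Build a Slack-style message body for a periodic performance digest.
// `dashboards` is an array of { name, url, highlight } entries that a
// DRI would curate by hand each release.
function buildPerformanceDigest(release, dashboards) {
  const lines = [`Performance trends for ${release}:`];
  for (const d of dashboards) {
    // One line per dashboard: name, link, and a human-written highlight.
    lines.push(`• ${d.name} (${d.url}) — ${d.highlight}`);
  }
  return { text: lines.join('\n') };
}
```

Starting with a manually curated payload like this matches the "start with a manual process" suggestion later in the call; automation can replace the hand-written highlights once the format settles.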
A: Right, sorry, but yes, I 100% agree. I'm already dropping a comment on that issue that you suggested having periodic updates, like two weeks after a release, about the current metrics. Yeah, it sounds like a good way. We could leverage Slack, because that's where we're all hanging out, and it would be a good place to have the more relevant information shown to everybody.
A
Let's
discuss
that
in
the
issue,
because
there's
there's
definitely
the
better
way
of
doing
this
automatically
and
it
just
grabs
the
information
from
all
places.
But
maybe
we
can
start
with
a
manual,
so
yeah,
I'll
I'll,
see
let's
discuss
it
there,
because
I
I'll
be
willing
to
kind
of
like
take
it
for
a
couple
of
months
and
see
how
it
runs
so
one.
A
So
the
first
step
would
be
have
the
summary
of
all
the
dashboard
places
that
we
found
relevant
and
then
we
could
have
some
dris
for
each
one
of
the
dashboards
that
are
like
two
two
people
or
something
looking
at
all
the
dashboards
coming
up
with
highlights.
Every
two
weeks
would
be
doable
at
least
so.
I
feel,
like
that's
a
good
idea,
I'm
jumping
back
on
the
there,
so
that
my
future
self
remembers
it.
That
I
mentioned
this
on
the
call.
A: That's great, no, I 100% agree. All right, boom, thanks, that's funny. No, that's great. Any more thoughts on this topic?
A
All
right,
thank
you.
So
much
for
all
your
thoughts,
an
update,
Phil
is
already
back
and
following
the
great
work
that
Kerry
has
done,
updating
the
documentation,
he's
already
working
on
adding
some
front
and
relevant
information
to
the
docs,
all
the
technical
things
and
I'm
going
to
write
this
in
the
agenda.
A
So
what
I'm
thinking
is
once
we
have
that
in
merged
I'll,
I'll,
probably
create
either
a
synchronous
issue
or
an
actual
call
to
go
through
the
documentation
that
we
created
together,
because
the
goal
of
the
documentation,
if
you
remember,
was
to
get
a
snapshot
of
the
current
state
of
the
system.
And
now
we
can
point
at
things
in
the
mermaid
charts
the
memory
graphs,
the
hey,
which
probably
should
rethink
this,
or
maybe
we
can
join
these
two
things
together
and
or
move
this
to
the
front
and
move
this
to
the
back.
A
End
I
feel
like
that's
a
joint
effort,
so
yeah
just
a
heads
up
we're
getting
some
fun.
It
was
a
movement
after
after
Phil
came
back,
so
we
might
have
some
updates
soon,
I'll
post
it
so
Ducks
updates
who
is
working
on
front
end
after
this
is
merged.
We
have
issue
slash,
call
to
go
through
them
together
right.
A
Any
other
topics
that
you
might
want
to
discuss
at
this
point.
B
I
have
one
question
that
I've
raised
curiously,
while
working
on
changes,
Pages
changes,
page
performance
is
that
we
have
only
a
single
page
that
we
test
against
our
10K
architecture
and
the
problem
with
that
is.
We
have
a
lot
of
different
kinds
of
merge
requests
on
summer
request.
You
don't
need
a
Sideburn
on
some
Edge
requests.
You
have
one
file,
but
it's
1000
lines
and
all
this
work
quite
differently
and
can
be
affected
in
many
different
ways.
So
I
was
thinking.
A: I'll tell you what I'm thinking, and then I'll give the room to everyone to share. First of all, 100%, and I don't want to dissuade you from investigating that and finding what emerges, but I feel like currently the problem isn't assessing the metrics. I think where we need to start betting heavily is on actually changing the systems that we currently have, meaningfully, and more than just iteratively improving, which is great and welcome.
A
I
feel
like
we
need
to
start
thinking
really
hard
at.
What's
the
disruptive
rethinking
of
the
systems
that
we
need
to
do
to
make
the
quantum
leaps
of
performance
on
our
system,
and
yes,
definitely
having
different
kinds
of
Mrs
being
tracked,
will
give
us
different
perspectives
of
some
of
the
stuff.
This
sidebar
that
you
mentioned,
like
the
improvements
we're
going
to
be
doing
on
the
diff
pile
tree
thing,
is
definitely
one
of
the
examples
that,
on
the,
if
you
have
that
closed
up,
you
don't
even
have
you're
not
impacted
by
that
delay
right.
D: To add: we got a reasonable amount of pushback when we asked for this previously, based on the way the testing environments are stood up. It was hard enough to find that one MR that we do use for testing. The test data that they have pre-loaded is really limited, and they weren't, at the time at least (this might have changed), super keen on expanding the test data for us, or letting us manually create something, or grab things from different projects.
D
There
was
a
lot
of
pushback
on
it
and
even
shifting
to
the
one
we
did
was
it
took
a
lot
of
a
lot
of
effort,
so,
like
I,
don't
I
think
the
concern
from
from
your
side
is
understood
like
I
agree
like
I
have
I've
never
been
in
love
with
the
fact
that
we
test
one
Mr,
that's
sort
of
like
not
representative
of
the
rest
of
things,
because
it
makes
it
really
hard
to
understand
if
we're
like
doing
the
right
things.
D
I,
don't
know
if
you've
seen
I'll
I'll
just
link
it
in
chat.
You
can
go,
read,
there's
there's
a
backlog
thread.
I'll
just
link
you
actually
to
that
thread.
A: I was reading it. Okay, that's right: we have another dashboard for tracking live URLs. Instead of a safe laboratory environment where we run an instance, we have another Sitespeed instance that monitors production URLs, and in that case we can add URLs freely, I believe. One of the restrictions there is that the signed-in experience is a little bit restricted, but we can track the performance of live URLs there. So if we want to track certain MRs that we know are of a different kind, we can use that Sitespeed.
B: It also allows you to track production pages, so it also could be very useful to see the changes overall over time. And this goes well with the previous topic, which is observability: having all these links on a single page, to track our own homegrown 10k architecture and also the external Sitespeed monitoring, would be really nice.