From YouTube: 2023-03-14 Source Code Performance Round Table
A
And we are live. Welcome, everybody, to our very first Performance Roundtable in the Source Code group. I'm very excited to have this kickstart here as well. We have been doing this in Code Review, and now we are doing it in Source Code too. So I want to start by giving an introduction to the call: establish the goals and establish what we're trying to do here.
A
The best idea is just to use the label: open an issue with our label, which is performance roundtable::source code. If I could type, that would be amazing; I'm adding it there. So if you open an issue with that and add the workflow refinement label, we can then go to our board. In just a second I can get ready to share the screen and show you, for the recording as well.
A
So in our agenda we have a link to the board, and in this board what we're looking for is issues with performance roundtable::source code; the workflow refinement list will then show everything that is being refined, so to speak. So if you have any proposals, any crazy ideas that you want to put up for consideration, the best way to do it is to open an issue, label it with those two things, and then present it in the next call.
A
That'll be the best way. Again, the topics to be discussed could be repurposing already existing code; something you heard about that you want to validate across disciplines, between back end and front end; anything that could benefit performance; or just stale issues that we haven't picked up. Anything is relevant here. That's why this started: anybody who's interested in performance, you know, as a group. I think that's it.
B
Your approach suggests that we already know what we want to do. In the issue that I created, based on the proposal of, I think, Dennis and other people (Sean, I believe), I outlined a proposal for how we find out what the actual issues are: a process for how we would identify what we should focus on, to prevent us from potentially improving the wrong things.
B
One opportunity type is about interface performance: interfaces like web pages, or other interfaces like APIs.
B
We can just measure those and see how fast or slow they are, and we should also measure how often they are being used. We don't want to improve APIs that no one is ever calling, right? That would be a waste of time, even if they're super slow. And then the other type is workflow performance; there was a long discussion where different people chimed in and shared what's been done in Code Review.
B
So, looking at workflow performance: you have a certain goal in mind, like making a change in a certain file, and you need to do certain steps to achieve it. How likely is it that users have this task? And how long does it take to do the task, which is basically a sequence of API calls or web page loads? And then, I guess, there's also the mental load of finding the right things to click.
B
This is probably a bit harder, but it's about understanding whether it's a relevant workflow or not. Maybe it's unlikely that someone goes from the blame page to the commit list; but maybe not, right? So yeah.
B
That's basically what this issue proposes, and I outlined two tables there with ideas of what things we should track to make an informed decision about what to focus on.
C
The first table, about interface performance: from what I understand, it's about the loading performance of one individual page, which includes front end and back end. I'm not sure whether the performance of APIs is covered here; I have that point later in the agenda. But since it explicitly says "interface", I assume this is about page loading, so one single page.
C
So, on interface performance: again, I have those points later in the agenda, and I'll probably have to move them, if you don't mind, since I'm going to be mentioning them; I'll move them onto this item here. Technically we have the Grafana dashboards, where we have been measuring over the course of, I think, a couple of years.
C
We measure page loads. The metrics are technically tuned for the front end, but every page has API calls behind it, so we're effectively analyzing the main abusers. This first one, the LCP leaderboard, shows all of the pages in the product that we measure, and unfortunately the three most severe abusers in this table are under our control: Source Code routes. Those pages technically provide a more generic view of interface performance; it includes, as I said, front end and back end.
C
By drilling down into one or another API we can figure out where the performance problem actually is: whether it's front end or back end, whether it's an API, or whether it's, I don't know, JavaScript blocking the main thread; different things. This provides a comprehensive view of a particular interface.
C
Then the next thing we have there, related to workflow performance, is what's called the user journey. I mentioned this in my comment in the issue.
C
This is a measurement of a sequence of actions: opening a link, clicking that link, drilling down (in this particular case) to a file two levels deep in a folder. We can create any user journey we want and start measuring. At the moment we measure only two scenarios.
C
Technically, one scenario in two environments. The scenario we are measuring is: load a repo, click a folder, get to a file two levels deep in the folder structure, and load that file. That's the journey we are measuring, in two environments: a large repository and a regular repository. Those get measured now.
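The journey described here (load the repo, open a folder, open a file two levels deep) can be sketched as a timed sequence of steps. This is an illustrative sketch, not the actual measurement harness; the step names and the stubbed actions are made up:

```javascript
// Illustrative sketch: time a "user journey" as a sequence of named
// steps and report per-step and total durations. In a real harness each
// `run` would navigate/click and await the page; here they are stubs.
function measureJourney(steps) {
  const results = [];
  let total = 0;
  for (const { name, run } of steps) {
    const start = Date.now();
    run(); // stubbed action
    const elapsed = Date.now() - start;
    results.push({ name, elapsed });
    total += elapsed;
  }
  return { results, total };
}

// Hypothetical three-step journey mirroring the scenario above.
const journey = measureJourney([
  { name: 'load repo root', run: () => {} },
  { name: 'open folder', run: () => {} },
  { name: 'open file two levels deep', run: () => {} },
]);
```

Running the same journey against a large and a regular repository, as described, then gives two comparable totals.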
C
The problem with both of these: when it comes to single-interface measurements, like an individual view, we kind of have the metrics coming from industry standards. In this particular case, Google tells us, okay, this parameter should not exceed this number, and that parameter should not exceed that number. That's what we have at the core of our measurements of the interfaces.
C
Yeah, those are what Google calls advice, suggestions. The problem is that it's a quite passive-aggressive suggestion: websites get punished if you don't follow it. They get a worse rating, a worse placement in the search results, if your performance doesn't match the requirements from Google. So they are effectively requirements, and we do our best to follow them.
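The numbers Google suggests are published as Core Web Vitals thresholds. As a rough sketch of how a leaderboard might bucket a measurement (the threshold values come from Google's public web.dev documentation; the code itself is illustrative):

```javascript
// Google's published Core Web Vitals thresholds (check web.dev for
// current values). A measurement falls into one of three buckets.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // largest contentful paint, ms
  cls: { good: 0.1, poor: 0.25 },  // cumulative layout shift, unitless
  fid: { good: 100, poor: 300 },   // first input delay, ms
};

function classify(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}
```

For example, `classify('lcp', 3000)` lands in the "needs improvement" bucket.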
C
This is for loading performance, so the performance of loading a page: from sending the HTTP request to when the whole page is on the client, with all the JavaScript processed. There are three main parameters we measure. LCP, largest contentful paint, is the time to paint, to render, the most significant element on the first screen of the page.
C
There are a lot of nuances there, but Google, and technically every performance technique, is mostly interested in rendering the most significant content on the very first screen for the user. Users don't care what goes on below the fold, on the second or third screen of the page; but their point in getting to your page is quite specific. If we talk about getting to a blob view, what they expect is to see the blob; they don't expect to see the navigation.
C
They might not care about navigation; they might not care about search. If they get to the blob view, they want to see the blob. If they go to a repository, they want to see the list of the repository's files and trees. If they get to editing a file, they want to see the editor. They don't care about things like the navigation, or my profile avatar, being loaded right away.
C
So those things are considered significant for one route or another, and the LCP metric, largest contentful paint, is responsible for measuring that. Here comes some voodoo magic: nobody really knows what is considered to be "significant", right? However, Google strictly defines the largest contentful paint: it's the rectangular area containing either an image or text; it might even be an area without text but with a color different from the background.
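Conceptually, the browser tracks paint candidates and reports the one with the largest rendered area. A toy model of that selection (purely illustrative; the entry shapes are made up, and real LCP accounting in browsers is considerably more involved):

```javascript
// Toy model of LCP candidate selection: among painted elements that
// qualify (roughly, images and text blocks), pick the one with the
// largest visible area.
function largestContentfulCandidate(entries) {
  const qualifying = entries.filter(
    (e) => e.type === 'image' || e.type === 'text'
  );
  return qualifying.reduce(
    (largest, e) => (!largest || e.area > largest.area ? e : largest),
    null
  );
}

// Hypothetical blob-view candidates: the blob content wins, the small
// avatar loses, and the non-qualifying element is ignored entirely.
const winner = largestContentfulCandidate([
  { id: 'avatar', type: 'image', area: 32 * 32 },
  { id: 'blob-content', type: 'text', area: 800 * 600 },
  { id: 'spinner', type: 'other', area: 1000 * 1000 }, // does not qualify
]);
```

This is also why enlarging one element (like the description mentioned next) can change which element "is" the LCP.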
C
At some point the repository view was the biggest abuser, for some time, before Jacques began some performance work there. But taking that information from Google, here is what we did in order to improve the statistics. We didn't have capacity to improve the performance, but we wanted to trick the statistics, so what we did was simply increase the description on the page, which technically enlarged the contentful paint element on the page; and that description was coming from the Rails application.
B
So for us it's not relevant whether Google lists us in the search or not, because the pages we're talking about are highly dynamic pages anyway, typically for logged-in users only (some don't require you to be logged in). So we don't need to, you know, cheat on these numbers to get a better search ranking, because we're not interested in the search results for exactly these kinds of pages, right? Exactly.
C
The reason why I cheated in that particular case was, first of all, as I said, that we didn't have capacity to fix it for real; and second, we were really sad to see those really high numbers (really slow performance) on the LCP leaderboard, and a team was bugging us in the application performance sessions that these pages needed some attention. So we kind of brought the result down. But what's the point here?
C
This comes to my next question: what is the point? Do we want to get good statistics? Do we want to improve performance for the users? And in both of these cases, what particular performance do we want to improve? Because there are so many things. Again, we can employ psychological tricks to make users think that the page is fast; however, in the statistical data the page will be somewhere really, really low.
C
So there are a lot of things we can employ when it comes to performance, depending on our goal, and I think (this is my personal take) the very first thing with these performance roundtables is that we have to identify the goal. What do we want to achieve? Is it to really improve performance for the users? Is it to really improve performance for the users and also really improve the statistical data?
C
That is the most tedious and most labor-intensive way of improving performance. Or is it, maybe in some cases, to make users think that the page is faster, while at the same time not really delivering the fastest result when it comes to the absolute numbers?
A
So, Dennis, there's one thing I want to add, because you have these conversations also with the Quality team. If you're tracking a really large file, it doesn't make sense to evaluate it as if you expect it to have a hundred-millisecond response time, right? We have to accommodate the metrics (sorry, accommodate the expectations) to the examples that we're targeting.
A
So, one of the questions I have here in the agenda: one of the things we did for merge requests was looking at the sizes of the merge requests statistically, checking the percentiles of the sizes, and essentially we targeted the 99th percentile or the 90th percentile (I can't remember which), which eventually gave us a model merge request with...
A
...you know, X amount of comments, X amount of files changed, X amount of lines. I don't think we've ever done that with source code files per se, looking at the whole percentile sets of data. That's one thing. The other, like you said: the comment I want to make here, explaining the difference, is just to show how fickle and how vulnerable these metrics are.
A
How untrustworthy these metrics can be: it's the code owners box that is changing that metric's perception, where the larger file is faster on the LCP than the smaller file. Because the smaller file, as far as I can see, doesn't have... oh no, one of them has the code owners box and the other doesn't, so I'm guessing that's probably where this difference comes from.
C
It might relate to that. Actually, I haven't drilled down into the data on the difference between the small blob and the large blob, but it might be exactly that thing with the LCP. If the code owners box becomes the largest contentful paint (if we split a large blob into chunks, right, that's what we do), then the code owners box might become the largest contentful paint, and it just takes...
A
...pixel size. So this story might be useful for you: one time we had those popovers for the milestone 15.0 release thing appear on the pages, and that threw off the LCP metrics on a bunch of pages, because that became the largest content on the page. Just so you get an idea. Yeah.
B
Yeah, so I think your proposal to, you know, look at percentiles of file sizes is really good. It gives us one dimension of what is relevant, right? It's not relevant to optimize for enormously large files or super small files if those are not representative. And the other dimension is whether a page is being used or not.
B
But for the case of, you know, the repo pages, the blob page and the directory page, we know that those are heavily used pages. So that's already clear. Yeah.
C
Yeah, I think I might have missed this, but when you talked about percentiles, you meant percentiles of the users, right?
A
No, no. When I talked about percentiles: there are these different approaches to measuring performance. You get the absolute number for, like, one file, but then you can produce the metric for, for example, the 90th percentile; so 90% of the users will get this speed of loading for this particular page. That's...
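A quick sketch of the percentile idea (illustrative only; real monitoring would use the timing data from the dashboards, and percentile definitions vary slightly between tools):

```javascript
// Nearest-rank percentile: the value at or below which roughly p% of
// the samples fall. Sample load times below are made-up data in ms.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const loadTimesMs = [120, 180, 200, 250, 300, 310, 450, 500, 900, 2400];
// 90% of these sampled loads completed at or below this value.
const p90 = percentile(loadTimesMs, 90);
```

Note how the one 2400 ms outlier barely affects the p90, which is the point of reporting percentiles rather than the maximum or the mean.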
A
It could be based on page views, as was being said, but essentially: looking at all the files being viewed in the last, I don't know, 30 days, 90 days, whatever; checking the sizes of the files that were being accessed on gitlab.com; and eventually seeing what's the largest one and what's the smallest one, laying them out, and then seeing which are the biggest ones being accessed.
C
I'm a big proponent of taking at least two measurements: take your measurement on the smallest possible file and on the largest possible file, to see how the current solution scales. In this case we might get a better understanding. For example, we had one case with a really large blob, I think it was a blob, when...
C
...when I was in contact with the Quality team, the large blob took really long, about 30 seconds or something like that, because of all the parsing happening on the page. So it really helped to have that number, to figure out where the problem is. Is it the back end that returns the file very slowly because of its size, or is it the front end with that big file?
C
It was clear that the front end was messing things up, because the back end was very fast no matter whether the file was small or large. So having these two sort of polarized measurements helps a lot to get a perspective on the scalability of the current solution.
A
Sure. I'll propose moving this discussion to an issue, for the sake of time, and moving on to your next question about the facets of performance; there's some thought there from Robert. Yes.
C
This is the question I've already mentioned: what performance do we want to target? We can say we want to target any performance opportunity we see, and that's totally fine, but do we have any idea of what performance (I don't mean a particular view, but what type of performance) we should care about most?
A
Great question. I'll just share here, for the recording's sake. While we do think that most pages in our repository area need to be super fast, potentially on the first load (the second is also important, but we want the first to be faster), it is important to be mindful of performance beyond the loading of the page. Like you mentioned before, the user journeys are definitely important.
A
That's something we touch on further down the agenda as well. And then, following the example of the HTML5 spec, I have an example of what we could have: a sort of priority of constituencies defined, where we value the user's perceived performance above front-end metrics, above individual endpoint response times. Because the response times on the endpoints could be super fast...
A
...but if the front end is screwing up, the user will feel it. And if we can make the perceived performance faster for the user, by doing some mimicry or some tricks, that is more beneficial than the individual front-end metrics.
A
So I do think that the perceived performance should be at the top and the endpoints should be at the bottom. That's kind of my response to that. Yeah, all right.
D
Right. Anybody want to read out Robert's comment?
C
I can do it, since I have the mic. So Robert is writing: "Initial page load is where I often feel performance the most in applications, and what I try to aim for in my own applications."
C
"Unfortunately, it's also one of the biggest pain points of GitLab, because of a few things: large amounts of data, large amounts of markup being generated on the back end, etc. But with a combination of back-end plus front-end changes we could improve a lot over our current performance in this respect." Yes, that's very true; however, I don't think I can agree that generating a lot of markup on the back end is necessarily the problem.
B
I didn't know that. I thought that if we changed, you know, more things into Vue, rather than preparing the stuff on the back end, it would get faster. Is that not true?
C
It depends on the complexity of the markup. I'm afraid of getting way too deep, technically, into how rendering works in browsers, but: the user sends the request to the server, the server starts sending the HTML page to the client, and then different sources come into play.
C
There are some assets that need to be picked up from the server; there are some JavaScript modules that refer to other modules. All of those things, after the initial response from the server, build up to this slow performance that we might experience. So what are the main abusers of performance?
C
Large amounts of data, as Robert writes here. But the point is that once the data comes from the back end, it's not necessarily complex and time-consuming to generate the HTML on the server; the problem comes when that amount of data has to be rendered on the screen.
C
That's the problem: rendering and painting on the screen, depending on a lot of nuances, can be very, very slow. So in some instances, generating a huge amount of data on the back end and then outputting it on the front end can be very fast, because the data is, I don't know, a bunch of divs.
C
That will be more or less simple and fast. However, it might be that the data coming from the back end is pretty simple and then the front end makes it complex: wrapping it in different HTML elements, or adding dynamic events like onmouseover or onclick to every single element, will dramatically slow down the page. So it all depends on each particular case. Sometimes the front end is faster, sometimes the back end is faster; but in most cases, from my experience, the front end is the main abuser.
A
Yeah. Dennis, if I may add a few things to help: one important conclusion we reached in the Code Review performance roundtables was to distinguish static content, which is going to stay static for most of the lifetime of the page, from dynamic content. The static content is more beneficial to pre-render on the back end, where it can be cached.
A
That's the example of syntax highlighting, where in some ways we benefit, but in other ways we pay the cost of transferring that down the wire, with a much bigger payload. So it's always a trade-off, right? In some cases it benefits, in some cases it doesn't; but more than anything, like Dennis was saying, it needs to be assessed in each particular case whether it matters that you're re-rendering this for the customers.
C
I cannot really tell how Google builds it.
C
Yeah, if I could, I would probably build it as well; but the answer is that it probably uses a combination. Andrea made a very good point about static versus dynamic content. When it comes to Gmail, for example, there are elements that are just static, like the layout of Gmail; that one is the perfect candidate to just be spit out from the back end, without the front end really getting involved and building it with something.
C
It's just like the frame comes from the server. And then the dynamic data, like the order of the folders in your inbox, for example, can be generated on the server as well, because that order is static.
C
You do not reorder those things. However, the number of items in your inbox, and the event listeners (you click on this chevron, or whatever it is, to expand a section, and things like that), those are coming from the front end. So the front end has to use the content coming from the back end as the base and then build on top of that. And this is the interesting thing you mentioned: if we move everything from Rails to Vue, that doesn't necessarily mean better performance.
C
Yeah, if it's the same component it will be cached. However, as I said, what's cached will just be the shell, sort of; the HTML markup might be cached. But the JavaScript and the Vue application (that's why we use a Vue application, in order to build dynamic applications): the dynamics of the application come from JavaScript, and JavaScript needs to parse different parts, pull in data, and inject it into the page.
A
Understood; the repository pages are a great example of that. If we take the old Haml version of the repository pages: you'd first load the project overview, you'd click on a folder, and you'd have to reload the whole page to get to the second folder. That's faster now with the Vue app, right? Even if, for the first load of the page, we might take some metrics and be a bit slower in some respects, the overall goal of the user is faster.
A
That's why we refactor these things to have more dynamism on the page: so we can do more things without going back to the server and getting the whole page again. But, like Dennis was saying, we need to be careful, because a lot of the time the trade-offs are not there right away. If you compare a really small folder with three files, the performance might be worse with Vue; but if you compare one with 3,000 files, the Haml version will be much, much slower, right? Those are the trade-offs; that's why all of this depends.
C
This is a very good distinction again. Rails, or server-generated content in general, is suitable for static pages which don't really change, or which have very minimal dynamics, while Vue makes it really easy to have high velocity when developing dynamic things. For example, in-context editing is impossible with Rails, and doing the batch loading of the repository, or of a blob, as it's done now, would be nearly impossible with Rails.
C
So there are things that Vue makes much, much simpler. It's just a matter of paying attention to how we structure and architect those applications.
D
One quick question regarding the initial load time. One of the ways to improve it is to move something into the background and perform some operations asynchronously. For example, we have a dropdown; a user doesn't see the content of the dropdown until they click it, and they may not even click it on this page load. So we don't even need the content of the dropdown for this particular page load.
D
It's just a question, from a front-end perspective, of how we think about moving to doing such dropdowns. Do we still want to do it, or can we do it?
C
It's a very good example.
D
Maybe a more specific question is: do we have a threshold? For example, maybe sometimes it's not worth it yet to move, like, one SQL request into a Vue application. Maybe we have a threshold, I don't know; maybe it's in milliseconds, or in the number of HTML or SQL requests, or in the complexity of a particular page.
C
That is a very good point; however, I'm not sure why we should be talking about either/or. If we move some component into Vue, this doesn't mean that we should run everything right away. Getting back to that dropdown example: we do not have to render the content of the dropdown right away. That's not necessary, right? We can, again, do things on demand.
C
So when the user clicks, we send the SQL (or whatever request) to the server, and then present the content once we get the response. The only thing we have to take care of is how to entertain the user while they wait for that request. It will make things really faster if we do things on demand: it makes the initial page load faster.
C
So anything we can remove from the initial load which is not essential will increase performance; but that's not really related to the decision of whether we should have a threshold for moving to Vue, like a number of Vue components or something like that. I think it's totally possible to achieve good results with both approaches.
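The on-demand pattern discussed here can be sketched as a loader that defers fetching until the first open and caches the result. This is illustrative only: `fetchOptions` is a hypothetical stand-in for a real API call, and the synchronous style stands in for what would be an asynchronous fetch in practice:

```javascript
// Defer loading dropdown content until the user first opens it, then
// cache it so repeat opens are free. `fetchOptions` is a hypothetical
// stand-in for a real (async) API request.
function createLazyDropdown(fetchOptions) {
  let cache = null;
  let fetchCount = 0;
  return {
    open() {
      if (cache === null) {
        fetchCount += 1; // only the first open pays the cost
        cache = fetchOptions();
      }
      return cache;
    },
    get fetchCount() {
      return fetchCount;
    },
  };
}

// Example: branch names are fetched on the first open only.
const dropdown = createLazyDropdown(() => ['main', 'develop', 'feature-x']);
dropdown.open(); // triggers the fetch
dropdown.open(); // served from cache
```

The "entertain the user" part mentioned above would be a loading state shown between the click and the response.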
A
And my answer is: Igor, it's a great point. I don't think we're tired of it; I think it's still a strategy that works in most cases. One of the examples is the compare page, where the target is exactly that: fetch the listed branches to select and compare. It's still beneficial to render faster first and then make that call asynchronously. So yeah, it still makes sense. Cool, thank you.
A
This area is extremely costly, and yet the diffs are pretty much the product of a good hosting platform, so they appear everywhere and are used an awful lot. And then I added that we should definitely put this in an issue, to discuss this topic explicitly in an upcoming call in the context of Source Code, because there's a bunch of discussion going around diffs on the Code Review side of things. We should definitely unify knowledge and discuss the perspectives on it.
C
It's just a sort of constant reminder of what we already mentioned: we can always apply psychological tricks to performance. It won't necessarily make our numbers any better, but it might make our users happier. Before we actually find the time to come back to one page or another and invest time in reducing the actual numbers, we can apply psychological tricks as a sort of shield, while we find the time and resources to fix the performance for real.
B
Yeah, I think, you know, the psychological trick to make it appear faster is one aspect, but there's also avoiding... I don't know what the opposite of a trick is; I can't come up with the word. Maybe it's shooting ourselves in the foot.
B
So we had this situation where the section at the top that shows the code owners was, you know, first painted, and then later the information from the back end came and showed that it's actually two lines, and then it got bigger, and that made all the stuff below move down, right? I think no one cares...
B
...if this box at the top takes, I don't know, two or three seconds to render; but a lot of people care if it moves, right? Because most people will be looking at the file content anyway, not at the code owners. So we need to make sure that we're not, you know, using psychological effects against ourselves.
C
Yeah, Andrea is typing faster than I speak. There's one of those metrics from Google called cumulative layout shift, where Google actually penalizes you if you start moving elements later on. So if you output the elements, but then all of a sudden they start moving around and getting shifted, you get penalized. So this is not the thing that I'm talking about.
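CLS is, roughly, a sum of layout-shift scores for shifts that were not caused by recent user input. A simplified model (illustrative; the metric as reported by browsers additionally groups shifts into session windows, and the entry values here are made up):

```javascript
// Simplified CLS: sum the scores of unexpected layout shifts, skipping
// shifts that happened right after user input (those are expected).
// The entry shape loosely mirrors the browser's LayoutShift entries.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

const cls = cumulativeLayoutShift([
  { value: 0.02, hadRecentInput: false }, // code owners box appears
  { value: 0.15, hadRecentInput: false }, // content pushed down later
  { value: 0.3, hadRecentInput: true },   // user opened a dropdown: ignored
]);
// cls ≈ 0.17, past the "good" threshold of 0.1
```

This is exactly the code owners scenario just described: the late-arriving box contributes shift score even though the box itself rendered quickly.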
A
Thanks, Justin. So I'll move on to the next topic, then; again, we can come back to this later. I do invite everybody to create issues on topics you want to see discussed, because that way we don't forget about them.
A
So, for this next one, I wanted to ask the thoughts of the people here about picking up this issue. Oh, I should have linked it, my bad; I'm trying to find the link while I speak. Essentially, it's "Add the find file into the main repository Vue app". This is still a separate component from our repository Vue app, which we have unified with the blob rendering, and one of the benefits this could bring is exactly going back and forth between a file tree.
A
Yeah, whenever you press "t", right, it shows the fuzzy file finder over the files of a repository. That's what I mean; that particular screen. Oh yeah, it's amazingly powerful, and I would even venture to say that it's a heavily used feature. I don't have metrics to back that up.
A
I use it all the time, but I always die a little inside when I see the time it takes to go from the repository page to it. When you're looking at those things, it's like teaching somebody to identify bad kerning: it will haunt them for the rest of their lives, and it's the same thing with performance.
A
Yeah, personally. I'll stop talking, that's it.
B
Yeah, how does that align, I mean, with our new navigation strategy and (I forgot what it's called) basically what you also have in VS Code, the command palette? Do we actually still need this thing, or do we not need it anymore?
A
So I think those are different things. Eventually the command palette could invoke the find file as an option, as a command, and then it would go into that mode. But the trick with this is that very quickly, without touching anything, just using the keyboard, I can hit "t" and start typing the file name as I remember it, and it goes directly to that file. So it's a very decisive use case.
A
I think the command palette starts a layer above: like, what do you want to do, and then you have to say "find the file". But they could link up to each other; I don't think they're exclusive. They're not the same thing, and even VS Code has two different things: one is the command palette and the other is finding a file.
A
It's Command+P, I think. Yeah, okay.
B
If I do it here in my VS Code, it actually opens at the same position: this search window, this drop-down.
B
So I do it, okay, we see it: Shift+Command+P, that's the command palette, and I do Command+P and it's the file search.
B
Yeah, exactly. So I'm kind of wondering, do we need it, do we want to adopt this? I mean, with this whole navigation a lot of things become more seamless, I hope, and I could imagine that if we move this functionality here, it would be nice, right?
C
It's up to the designers to figure out whether it's discoverable enough, because if we want to target this find file functionality towards advanced users of GitLab, then yes, probably doing it in the overlay like this would make sense. But I think this functionality is very useful for anybody, and hiding it in some sort of command palette would be sort of too advanced, I think. Just placing the search field or something very visible above the file tree would make more sense.
B
Let's mention Michael here.
A
Sorry, I was typing and I was getting typos everywhere. I was going to say that I think it doesn't change the plan anyway. So we would always need to pull the find file into the repository app anyway, even if we want to use the command palette. So the way we'll do it is bring the find file as it is right now into the repository app, and then later...
A
If we do want to replace one with the other, once the command palette is, you know, mature and shipped and whatever, and has the same functionality, we will replace the find file component. It would already be inside the repository app, so we would reuse the work that we would do now, even if we want to do what you said and use the same command palette.
B
I'm now trying to understand this, the command palette part of this.
A
But right now, the command palette would need to talk to the repository app to load the current state, sorry, the right state of the blob page, right? So there has to be communication in that component between the command palette and the blob viewer.
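The coupling being discussed, a global palette needing the repository app's current state, could be sketched with a small event bus. The event names and payload shape are hypothetical, purely to illustrate the communication path between the two components.

```javascript
// Minimal event bus: components publish and subscribe without direct references.
class EventBus {
  constructor() {
    this.handlers = {};
  }

  on(event, fn) {
    if (!this.handlers[event]) this.handlers[event] = [];
    this.handlers[event].push(fn);
  }

  emit(event, payload) {
    (this.handlers[event] || []).forEach((fn) => fn(payload));
  }
}

const bus = new EventBus();

// The repository app publishes its state whenever the blob viewer changes.
let currentBlob = null;
bus.on('blob:changed', (path) => {
  currentBlob = path;
});

// The palette reads that state to decide which commands make sense here:
// blob-specific commands only appear when a blob is actually shown.
function paletteCommandsFor(blob) {
  return blob ? ['findFile', 'permalink', 'blame'] : ['findFile'];
}

bus.emit('blob:changed', 'app/models/project.rb');
const commands = paletteCommandsFor(currentBlob);
```

This also illustrates the concern raised next: outside a repository page, no `blob:changed` event ever fires, so the palette's contents differ depending on where you are.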
C
And that's the problem, because this means that...
C
If the command palette is available anywhere in the project, and we introduce something which requires the context of the repo, it will be different in different places. I'm not sure how comfortable that is for the users, because when you use a command palette in the context of an IDE or VS Code, you are always within the repository context, so it's always there. However, in our case, the product is much...
A
I'll link the Epic, so feel free to go in there and take a read, and then you can share some thoughts there. Thanks for tagging Michael, Jordan.
C
It was just a comment that I think find file functionality in the repository would be super helpful, yeah, specifically from the standpoint of performance, because this would make the journey of getting to a file, which is now two levels deep, super fast. So you just type in the name of the file and you're there, without drilling down.
C
Yeah, that's... that was the question. So I'm aware of the dashboards for the front end, for more or less front-end related performance, and I was wondering whether we have dashboards for the API endpoints, and I think there are. There are links there, right? So we do measure that. That's great.
A
Cool, so that concludes the agenda, right on time. Sorry for pushing you along, but this was great, to have everybody on the call, especially talking about front-end and back-end and product and UX all at once. Everything everywhere all at once, see? It was great to have you all in the first session of the performance round tables for source code.
A
So again, I'll wrap this up. If you have any objections to making this video public, do let me know; I'm going to be making it public soon after this. Otherwise, have a wonderful time. We'll see you again here in two weeks. So this is not a weekly event, it's every two weeks. If you feel like we have full agendas and we need to talk more, we can make it weekly, but for now we started with every two weeks. So I'll see you then, March 28th. Be there or be square.