From YouTube: 2021-07-29 Frontend Performance Discussion: Source Editor (with Monaco) in Blob viewer refactor
Description
On July 29, 2021 we had a discussion about recent developments.
We discussed the REST/GraphQL endpoints still in use and the reasons for them.
Adding sitespeed tracking at the earliest moment possible.
We discussed a spike detected in recent days and possible causes.
And we discussed a progressive-enhancement tweak we're exploring for the Blame page that could prove useful in other contexts.
A
Hello everyone, welcome to the performance meeting about the Source Editor in the Blob viewer. This week we have a few things to go over, so let's go straight into it, Dennis.
B
Yes, it was just one observation. While I was reviewing Jacques' MR, I noticed there is one instance in this new Vue refactoring application where we are actually using the REST endpoint to get the blob data. So I was wondering whether we could use GraphQL instead, to not mix things up, especially since it's a new application. Apparently we have some flexibility there, and in my opinion it's related to performance, because it's going to be faster to fetch the data with GraphQL.
C
Yeah, so we currently have that one REST endpoint that we use for viewers that are not migrated to Vue yet, so they're still in the Haml format. For those views we don't have the GraphQL fields available.
C
Yet. So I just decided to use the existing REST endpoint, with the idea that over time we'll migrate all of the viewers over to Vue anyway and then get rid of that endpoint. But I think you're right that it would be beneficial to add those fields to the GraphQL endpoint. I don't think it would be much work on the backend side, so I'll create an issue for that.
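To make the discussion concrete, here is a hedged sketch of what fetching blob data over GraphQL might look like once the fields exist. The field names (`repository.blobs`, `rawTextBlob`, `webPath`) and the project path are assumptions for illustration, not confirmed parts of the GitLab schema discussed in the call.

```javascript
// Hypothetical GraphQL query for blob data; field names are assumptions.
const BLOB_QUERY = `
  query getBlob($projectPath: ID!, $path: String!, $ref: String!) {
    project(fullPath: $projectPath) {
      repository {
        blobs(paths: [$path], ref: $ref) {
          nodes {
            name
            rawTextBlob
            webPath
          }
        }
      }
    }
  }
`;

// Minimal helper building the POST body a GraphQL endpoint expects.
function buildGraphqlRequest(query, variables) {
  return JSON.stringify({ query, variables });
}

const body = buildGraphqlRequest(BLOB_QUERY, {
  projectPath: 'group/project',
  path: 'README.md',
  ref: 'main',
});
```

The appeal mentioned in the call is that a query like this returns only the raw blob data, rather than the server-rendered HTML the Haml viewers depend on.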
C
So currently the GraphQL endpoint fetches the existing Haml fragments. Let's say we're viewing a file that plays audio: it's not yet migrated to the Vue app, so we haven't written any Vue code for it yet, and it fetches the actual Haml file — the rendered HTML content.
B
Yeah, okay, so you mean GraphQL doesn't provide us with this generated HTML. Exactly — okay, that makes sense. And by saying that we might add that data to GraphQL, you mean we might add that generated HTML content to the GraphQL endpoint, is that right?
B
After hearing that, I don't think I would suggest it. I thought there were just some data bits, but I definitely wouldn't suggest adding that monstrous thing into GraphQL — that would be a bit too much of a stretch, especially considering one important point you've mentioned.
B
Over time we are going to migrate to a Vue-only application, so the Haml part is going to be removed. Essentially this means that over time we are going to get rid of the Haml part, which means we are going to get rid of the REST call.
B
Right, okay. Then sticking to REST probably makes sense in this particular case, because adding this information to GraphQL doesn't make a lot of sense on its own, and considering that it's going to be a temporary thing, it would just be a red flag. So sticking to REST is totally fine. Thanks for the explanation, that makes sense — I'll write down the summary.
A
Well, thank you. It seems like it will go away as we migrate the new viewers. Okay, I have the next point. I just wanted to check — I don't know if it was mentioned the last time we had this meeting or not — but we have a stretch item for 14.2.
A
It's exactly merge requests, so whoever gets to it first, it'll be nice to have it. What we need is: do we need any feature developed, Jacques, to be able to turn it on via URL or something? We still need that done, right?
A
We might actually replace it, because if we don't have the URL option we can't add the sitespeed entry. So we need that feature; it might be worth creating an issue for it and replacing it as the stretch item.
A
Whoever gets to it first, the better. We had to prioritize the work of the refactor itself, and this ended up falling down the pile, but I feel like we're missing this visibility.
A
We're missing the visibility of what the state of the testing is right now, so that we can then get our SET in place to help update the tests if we need to. It would be really nice if, by the end of this milestone — so sometime in two weeks; okay, we might go for 14.3 — we could soon have some way to visualize the metrics of the new Blob viewer, because they're not going to be good out of the box.
A
It probably won't be positive, I'm expecting, because we're loading the blob, Monaco and everything — we're already expecting that. But we have to start looking at it too, because that's going to be an important part of the story. And there's a missing task here for the quality team to update the tests in a way that reflects the pre-loading; it might be that we just update the Explore page to also pre-load the Source Editor.
A
So there are a couple of things there. It's not that we're trying to turn the feature on right now — we will turn on the feature whenever we're ready — but it feels like it's an important step to have as early as possible, so I just wanted to raise it here.
B
Just one note: again, while reviewing Jacques' MR yesterday and today, it's actually quite noticeable that the new application is slower — it has worse loading performance compared to the Haml application — and it's not related to the Source Editor. It is related to the fact that we are wrapping things into the Vue application, and the Vue application is bootstrapped later in the process; it's discovered later in the process.
B
That's the main reason. When we migrate to Vue applications, we have to accept that, no matter what we do, it's going to be slower.
B
What I can tell you is that I think we have implemented these user metrics in the Blob viewer. So it might be that we reduce the server time by not requiring this huge generated HTML content, but at the same time, on the frontend, we delay things because we are moving to Vue.
B
So I'm not sure whether those things will be equal. Apparently for large files the Vue application will still be faster. Have we moved the Source Editor into the Vue application already?
B
Good, sure, yes. So for large files it might actually be faster, even with the Vue application. But the thing is that large files are sort of a niche use case; large files are not something we have to care about that much.
A
Yeah, it's a large percentile. However, we do have to be aware. One of the motivations for this is the workflow of going from the file tree to the Blob viewer, rather than rendering a new page — in that sense that flow will be, and feel, faster. That's why I think it's very crucial to this project to start working on the story of the metrics: the earlier we have it, the earlier we can work.
A
We can start working on that, preparing, and also just prioritizing work to improve it, because, like you said, there are probably many things not even related to the Source Editor that we can do. I don't know if we're already loading those requests on startup.js — there's a bunch of things we can do to start tweaking the page to make it as fast as we can — and also update the tests themselves.
A
Because we have the historical records of the previous Blob viewer, and we want to be able to compare the evolution side by side — the old Haml version against the new version. We want to use the same file, the same project, the same...
B
That's what I did for the large blob. I didn't create that within the main project; I created it as a side project.
B
We could start with that, and if this doesn't give us any reliable comparison, then we will investigate and invest more time into enabling the feature flags from the URL, if that's needed.
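Enabling a feature flag from the URL, as discussed above, could be sketched as reading an override out of the page's query string. This is a minimal illustration under stated assumptions — the parameter name `force_flag` and flag name `refactor_blob_viewer` are hypothetical, not the actual GitLab mechanism.

```javascript
// Sketch: let a sitespeed run opt into the new viewer via a URL override,
// e.g. https://…/blob/main/README.md?force_flag=refactor_blob_viewer
// Parameter and flag names are made up for illustration.
function flagForcedByUrl(urlString, flagName) {
  const params = new URL(urlString).searchParams;
  return params.get('force_flag') === flagName;
}
```

A tracking run could then hit the same page twice, with and without the parameter, to compare old and new viewers on identical content.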
A
Okay, that sounds like a good approach, actually. Cool, thanks. I'm fine either way — like you said, if we see that it's not reliable, or for some reason we want to change it, we can still change it.
B
That way you compare apples to apples: you exclude the projects' differences from the equation.
B
You measure only the blobs. So: two new projects that are absolutely equal, exactly the same file in both projects, and measure those.
A
Sweet. Right, the spike. We had a spike in the timings of the source view — the Blob viewer TBT, I think it was TBT — and I wanted to check in: do we know already what's causing it? Has it gone down? That's my wishful thinking.
B
I wouldn't pin our hopes on that. The first thing I would look into would be the backend time. As I said, I haven't checked whether we have any differences in the backend time. If we don't, and it's purely the frontend, then it would require some analysis, but from what we know, there were no changes that could affect that.
B
My merge request introduced the line between those 70 lines and the last one, showing the ellipsis, to show that there is something happening there — but that was pure CSS, and I don't believe that...
B
Monday, probably, okay. So it kind of correlates with the spike to some degree, but I don't believe it's related, because there were a lot of things happening: the upgrade of sitespeed, the upgrade of Chrome, my merge request.
A
So that's not behind the feature flag, is it? No? Okay, it's just CSS. Because we shipped something that we thought was going to make timings better, and it didn't, so we wrapped it behind a feature flag too. This one is trickier because it's CSS-based, but we should still keep an eye on it, and then later, if it comes to it, we'll probably...
B
If you want, if we have some five minutes, we can just do a quick look into that right now — but we have plenty of things to discuss.
A
No, I think I'd rather do the investigation outside of the call. But it is something for us to keep in mind that it's not solved, and it's a significant jump — more than a 50% jump — so it's not good. We'll definitely keep an eye on this and keep checking back on it once we have the next call in two weeks; I'll be off next week.
A
So if either or both of you can take a look at it and see what might be causing it — I'm not sure what it would be, so I don't know.
A
My initial assumption is: if we shipped something new and there was a spike — regardless of whether it was Chrome — we need to isolate it and check whether that was it or not. There are two ways. The easiest way is we revert the MR, ship it, and see whether the spike moves or not.
A
If it doesn't move, we revert the revert — that's the easiest way. The other way would be to wrap it behind a feature flag and then turn it on and off to see whether the spike goes up or not. We're probably going to be doing some investigations regarding the Chrome upgrade and everything, but this is the one thing we can control — it's on our side of the code — so as much as it doesn't seem to make sense, it's still something I would check.
B
The performance one, yeah — it's CSS plus JavaScript.
B
Yeah, it toggles — not the lines, but it toggles the class at the top, right.
B
The interesting thing is that there is no correlation between LCP and TBT. The LCP hasn't got any spike; it's only the TBT. So it cannot be something related to a visual thing — it has to be related to the main thread being blocked by some JavaScript computation or something like that.
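The reasoning here follows from how TBT is defined: it sums, over every long main-thread task, the portion of the task exceeding the 50 ms budget, so a TBT spike with a flat LCP points at JavaScript blocking the main thread rather than at rendering. A minimal sketch of the calculation:

```javascript
// Total Blocking Time: for each long task (> 50 ms), the time beyond the
// 50 ms budget is "blocking"; TBT is the sum of those excesses.
function totalBlockingTime(longTaskDurationsMs) {
  return longTaskDurationsMs
    .filter((d) => d > 50)
    .reduce((sum, d) => sum + (d - 50), 0);
}
```

For example, tasks of 120 ms, 30 ms and 70 ms contribute 70 + 0 + 20 = 90 ms of TBT, while none of them necessarily moves LCP at all.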
A
So I get your point — it's weird, I agree it's weird — but we haven't shipped anything recently to that page. That's the blob editor without the new blob refactor enabled, so for the Haml version I don't think anything else was shipped there.
A
What was the name of that merge request, so I can have it here? Do you know?
B
Yeah, we have to check when it was deployed. It reached canary yesterday — oh no, two days ago actually, at 3:16 p.m.
A
Right. What I'm trying to understand is why the previous run... What I'm trying to get to is the 10k — let me get to the 10k reference architecture wiki.
B
Yeah, so the 27th at 10 a.m. is the first measurement with the increased TBT.
A
Yeah, and given that that thing was merged — I really need the time of merge in the merge request widget — it was merged on the 26th at 9:00 p.m., so it is reasonable for it to have been included in the nightly of the 26th to the 27th, and it was tested on the 27th. I think it's worth having a look; we can take it whenever we have time and we won't have to rush it, but it's probably something that I would still check.
B
That's already there — the class toggle wasn't even part of the performance merge request last week; it was there forever. What I did in the performance-related merge request was moving this to requestIdleCallback. I think that's the only change.
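The requestIdleCallback pattern mentioned here can be sketched as below. This is an illustrative shape, not the actual GitLab code: the `deferWork` name is made up, and the injectable `scheduler` parameter exists only so the pattern can be exercised outside a browser.

```javascript
// Sketch: defer a non-critical DOM tweak (like a class toggle) until the
// browser is idle. The scheduler is injectable for testability; in a real
// page it would resolve to requestIdleCallback, with setTimeout as fallback.
function deferWork(work, scheduler) {
  const schedule =
    scheduler ||
    (typeof requestIdleCallback === 'function'
      ? requestIdleCallback
      : (cb) => setTimeout(cb, 0));
  schedule(work);
}
```

The design idea is that moving the toggle off the critical path keeps it from contributing to long tasks during initial load, which is exactly the TBT-relevant window.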
B
So I'm looking at the dashboard for this page, for the results, and all the things are actually very stable — some go down, some stay at the same level — and it's only the TBT that is going crazy. I'm trying to figure out where I can find the sitespeed results.
B
Yeah, and now I see why. Now I need some other day to compare this to, because it seems like the...
A
Even weirder. All right, I think we just have to keep an eye on this — we can't figure out everything on the call — so I'll just suggest we keep an eye on it and keep sharing whatever we find. Thanks for the link in the agenda, that was good. And I'll just — yeah, go, move on to your point, I think it's better.
C
Yeah, so in this milestone I'm playing around a bit with improving the TBT on the Blame page. It's not really related to the blob discussion we're having at the moment, but I think it's worth highlighting — Dennis, feel free to comment on the merge request as well. I'm playing around with content-visibility, which basically renders whatever the user is currently seeing in the viewport and doesn't render anything else.
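The content-visibility approach described above can be sketched in a few lines of CSS. This is a minimal illustration, not the actual merge request: the `.blame-entry` selector and the 32px size estimate are assumptions.

```css
/* Sketch: let the browser skip layout/paint for off-screen blame rows.
   contain-intrinsic-size reserves an estimated row height so the scrollbar
   doesn't jump as rows render in while scrolling. Selector is hypothetical. */
.blame-entry {
  content-visibility: auto;
  contain-intrinsic-size: auto 32px;
}
```

With `content-visibility: auto`, rendering work for rows outside the viewport is deferred until they approach it, which is what reduces the up-front main-thread cost.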
A
Jacques, let me just clarify one thing, though: I think you're equating the rendering time with TBT, and that's not necessarily the same metric.
C
But it looks like Firefox has an issue to add support at some stage. So, okay, it's still in a draft state — feel free to comment there if you have any comments or thoughts.
B
This is nice. The problem, though, is that this will differ from page to page in terms of content-visibility — it all depends on the layout and on the element you're putting it on, in one view or another.
B
This fixes specifically the rendering, and rendering technically does require some computing power, right? There are some things related to rendering, like the recalculation of styles, and that does increase the total blocking time. But the thing is that for the blob view we have solved this — not by using content-visibility, but by just not showing any lines that are not essential right away. We limited it to 70 lines on the first load.
B
So technically I would expect that using content-visibility for the blob view, compared to the current master, would yield exactly the same results, or very comparable ones.
A
Yeah, they are similar approaches, for sure. I'd be keen to see how these two would work in conjunction. By the way, Jacques, is this applied to the current Haml page, or the new one, or both?
B
Like 550 lines or something like that, and then it's the virtual scrolling, sure. So it won't have any effect for the Source Editor, okay.
A
So it would be nice to see the improvement this has on the current page, together with your solution, and we'll see. The downside of it not being supported by some browsers — I'm okay with that. The question we need to answer is: as you scroll down the page, does it have a negative impact on that experience? Because the rendering has to be done at some point, so the question is: do you feel anything different on the page?
C
It does feel a little bit different to me as you scroll — sure, keep trying to do this.
C
Yeah, so obviously the lines need to be rendered at some stage. That means when you scroll fast, the browser needs to try and catch up with the rendering.
A
See, like nested — yeah, search for "nesting" or something. It does go through it. This is an interesting one, okay.
A
All right, I still want to see this shipped, especially it being on the Blame page. It's another approach, and we're experimenting and learning. I feel this page is a little bit more complex because it has some rowspans happening over there, so it is more costly to render. So yeah, test it, get it through UX review, and let's ship it and see how it behaves.
A
We do have the Blame page being tracked, right? The tests on the 10k, at least — yeah, the 10k. We do; that's okay, then we can see the impact it has there on the 10k. But yeah, good stuff, thanks for sharing.
A
Cool, you can stop sharing. Is there anything else from you? All good? Nope.
B
Nope. Thanks for playing with this content-visibility — good job. It might actually bring some interesting use cases.
A
I think it will be interesting, and I like that we're exploring different avenues. We have the full-on virtual scrolling, we have your trick with the rows, and now we have this option. We're exploring many options, because I think the more we understand the mitigations for the high TBTs, the better we can apply them to each page. The other part will be to circulate this across the team, but we'll get there. All right, I guess that's it.
A
We have 17 minutes back. Thank you for your time and your contributions — it's appreciated. Have a wonderful weekend, and I'll see you on another call.