From YouTube: Application Performance Session - 2021-02-15
Description
Talking about Workflow measurements
C
Hello. Where is my tab? I lost my tab. Let me see if I can actually show you something, because then we can discuss this much more easily.
C
Around new ways of performance jumps, in reality. So what do I mean by that? The idea is quite simple: we have done tons of low-hanging fruit over the last months, and we still have way more in progress, like jQuery slim, the Vue compiler, and getting everything to bootstrap from Haml so that we can remove bootstrap.js, etc.
C
But
an
idea
was
always
to
take
a
look
at
bigger
single
page
applications
so
that
we
have
bigger
single
page
applications
that
do
more
stuff
without
doing
page
reloads,
because
they
are
quite
expensive
right
now
we
have,
I
would
say
one
in
reality,
that
is
like
a
full
nicely
done
view
single
page
application,
and
that
is
the
web
id.
Why?
C
Because
it's
using
view
router
it's
doing
stuff,
you
can
do
tons
of
actions
without
ever
leaving
the
page,
and
the
idea
is
quite
simple:
we
have
a
couple
of
areas
that
are
like
high
focus
and
high
workflow
areas
that
we
want
to
improve
in
a
way
that
makes
a
huge
jump
in
performance
and
one
of
the
things
is
really
combining
different
areas.
What
do
I
mean
combining
right
now?
C
We
have
one
view
for
the
directory,
the
ripple,
and
we
have
100
page
with
tons
of
file
viewers
in
for
each
and
every
blob,
so
every
file
in
reality.
So
this
means
that
you
jump
between
the
view
app
and
the
file
and
back
and
forth.
C
In
fact
it's
quite
slow,
but
there
it
is
the
easiest
areas
to
figure
out
something
like
that
where
we
can
combine
the
repository
view
with
the
file
viewers
into
one
view
app
to
have
something
that
is
much
faster
and
this
can
prefetch
and
do
some
more
tricks,
because
that's
the
bigger
other,
bigger
problem
is
that
our
caching
and
prefetching
is
is
quite
limited
simply
in
the
sense
that
we
have
a
lot
of
zero
caching
and
we
can't
catch
too
much.
So
the
idea
is
to
combine
those
two
and
figure
out.
Okay.
C
Does
this
really
improve
performance?
And
if
yes,
then,
we
will
push
more
way
more
on
combining
in
the
in
the
create
area,
like
the
mr
page,
with
mr
lists
and
perhaps
some
more
so
that
we
have
bigger
single
view
applications,
but
also
quite
the
same
for
issues,
issues
classic
thing,
boards
and
issues
and
lists
and
issues.
So
if
you
could
jump
between
those
two
without
like
the
target
is
really
that
you
had
click
on
a
link
of
an
issue
and
100
milliseconds
later
you
see
the
whole
thing
and
you
can
interact
with
it.
C
That's
where
we
want
to
go
and
that's
where
we
want
to
start
this
quarter
with
the
repository
view
with
combining
directory
plus
value,
to
measure
this
and
to
really
bring
us
into
a
way
into
a
situation
that
we
can
really
compare
those
two
and
say:
okay,
look!
This
is
the
current
experience
and
this
is
the
experience
we
can
do
quite
easily.
C
Then we have the user timing metrics that Dennis has done around the Web IDE, where this is already possible because we have a single-page application. But in the current state, where we have the repository view and the file viewer, this is basically switching between different pages. So we need a process to switch between those, and this is the target: to measure those two. And the idea is really that you can say "gdk measure repo browser" and it will go through the different pages, through the full interactivity.
C
The
mouse
movements
do
the
actual
clicks
on
it
and
integrate
also
with
user
timing
metrics
so
that
you
get
the
full
flow
and
that's
my
main
target.
The
moment
is
to
go.
You
click
on
a
project
you
get
into
the
project.
You
click
on
one
directory,
a
second
directory
and
finally,
on
a
file
at
the
end
and
you
scroll
to
line
number
34.
C
How
fast?
How
long
does
it
take
from
the
first
page
to
getting
to
the
line
34
in
the
total
process
and
measuring
that
the
first
measurements
are
just
by
element
clicking,
which
is
a
little
bit
unrealistic,
because
one
of
the
other
things
that
I
would
love
to
see
is
that
we
integrate
like
mouse
over
prefetching
so
that,
if
you
already
hover
with
the
mouse
over
a
link
that
we
already
start
pre-loading,
that
file,
for
example,
or
like
the
data
of
the
file.
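A minimal sketch of what such mouse-over prefetching could look like; `createHoverPrefetcher` and the injected `fetchFn` are invented names for this example, and a real page would wire the returned handler to the link's `mouseenter` event.

```javascript
// Invented helper for the hover-prefetch idea: `fetchFn` is injected
// (in the page it would be window.fetch) and a Set deduplicates, so
// hovering the same link twice only fetches once.
function createHoverPrefetcher(fetchFn) {
  const seen = new Set(); // URLs already warmed in this session
  return function onHover(url) {
    if (seen.has(url)) return false; // already prefetched
    seen.add(url);
    fetchFn(url); // fire-and-forget; response lands in the HTTP cache
    return true;
  };
}

// Wiring it up in the page would look like:
// link.addEventListener('mouseenter', () => prefetch(link.href));
```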
C
Another
thing
that
we
can
also
improve,
that
is,
is
literally
the
file
itself
right
now
we
are
doing
all
the
line
and
code
highlighting
on
the
backend.
We
even
had
some
outages
just
two
weeks
ago,
which
was
a
six
hundred
thousand
line
file,
was
bringing
down
a
couple
of
nodes
in
our
server
system,
which
means
that
you
can
have
like
a
two
kilobyte
javascript
file.
C
We
create
50
kilobytes
of
formatted
api
response
and
that's
those
are
really
a
lot
of
things
that
we
can
improve
and
and
that's
the
idea
to
find
a
way
to
measure
and,
on
the
other
hand,
like
a
poc
to
read
and
say:
okay,
look!
That's
where
we
want
to
get
dear
product.
Let's,
let's
get
going.
C
Let's
do
this
and
I
will
simply
show
you
a
little
bit
because
that's
the
nice
thing
that
you
can
really
have
those
measurements
and
can
really
click
through
the
whole
process
of
going
from
one
from
the
project
itself
to
the
directory
and
file
it
into
the
file.
And
what
I
can
then
do.
I
can
exchange
the
two
branches
between
our
current
master
and
a
new
poc
version,
which
is
then
doing
the
view
preloading
and
yeah.
It's
quite
nice
to
really
compare
and
the
first
measurements
have
shown
that
it's
already
2.5
seconds
faster.
C
The
whole
experience,
but
I
also
want
to
get
really
to
a
way
where
you
can
see
the
curves
and
you
have
quite
natural
mouse
movements
to
have
an
even
more
realistic
way
of
measuring,
because
I
think
we
can
get
even
more
out
of
this
through
the
pre-loading
and
stuff,
like
that
and
yeah
would
love
to
hear
your
comments,
feedback
thoughts,
ideas.
I
think.
A
It is user timing metrics... I'm all about user timing metrics, of course. So, is it user-timing-metrics aware? Let's say in the Web IDE we do a measure using user timing metrics: we measure the time between when the user clicks a file in the navigation tree and the moment the file gets rendered in the Web IDE. Will sitespeed catch this and output the user timing metrics?
C
Yes, it will additionally record the user timing metrics as well, and what I want to combine is having, afterwards, the video plus the user timing metrics, so that I know: okay, the rendering of the directory took that long, the rendering of the file took that long. So that's the idea: it combines those two. The problem really here is that I'm comparing classic clicking through pages to one single-page application.
C
I
can't
compare
this
without
those
site,
speed
scripts
and
what
I'm
currently
trying
is
to
have
a
way
that
we
can
simulate
also
the
mouse
movement,
but
for
the
rest,
it's
classic
selenium.
It
might
be
even
something
that
we
take
a
look
that
we
can
run
this
also
in
our
spec.
But
then
we
have
a
really
quite
nice
way
to
simulate
those,
and
then
the
next
step
is
talking
to
ux
and
the
data
team
this
week
figuring
out.
C
What
are
the
10
most
used,
workflows
in
dev
that
you
access
defined,
that
the
data
team
is
defining
by
the
results
from
statistics
and
then
model
them
into
gdk
in
through
site
speed,
so
that
we
can
measure
locally
if
we
are
doing
some
changes,
if
we
are
doing
some
really
bigger
architecture,
changes
and
that's
where
we
want
to
go?
Because
if
we
have
a
single
page
application,
we
can
do
way
more
classic
topic.
Is
that
you
that
we
do
hard?
Caching,
in
the
browser
of
your
board
and
just
the
next
time
you
hit
the
board?
C
We
simply
render
the
last
state
that
you
had
in
your
browser,
cache
and
then
div
against
the
server
and
then
move
the
stuff
around,
so
that
you
can
get
the
same
quite
the
same
to
an
issue
that
you
already
see
the
content
of
it.
But
then,
in
the
background,
it
checks
if
something
has
changed
so
that
you
that's.
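The cache-then-diff idea sketched above could look roughly like this; `showBoard`, `cache`, `fetchFn` and `render` are all invented stand-ins for the browser cache, the API call and the UI layer.

```javascript
// Render the cached board instantly, then diff against the server
// response in the background and only re-render on change.
async function showBoard(cache, fetchFn, render) {
  const cached = cache.get('board');
  if (cached) render(cached); // instant paint from the last visit

  const fresh = await fetchFn('/api/board'); // background revalidation
  if (JSON.stringify(fresh) !== JSON.stringify(cached)) {
    cache.set('board', fresh);
    render(fresh); // only touch the UI when the data actually moved
  }
}
```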
A
Yeah, I'm still toying with this idea of a service worker for the Web IDE, because we still need to fight that huge LCP-for-the-Web-IDE thing, and I have some ideas about how to implement this. But getting back to the original question about sitespeed catching the user timing metrics, not on load, but between routes or between actions: the Web IDE is already a single-page application, right?
A
So
let's
say
I
want
to
measure
the
scenario
where
I
open
one
file
close
it
open
another
file,
close
it
or
I
open
one
file.
I
open
the
second
file
and
then
switch
tab
to
the
first
file,
so
there
will
be
a
bunch
of
user
timing
metrics
and
some
of
them
will
be.
The
duplications
kind
of
like
you
know
like
opening
a
file
is
one
metric,
but
we
have
two
files.
C
You commit it into an MR, you open the MR: how long does it take from getting to the project to seeing it in an MR? Because then you're really measuring not just something in the Web IDE; you're really measuring the action that the user will experience. And I think that's where I want to get to: what are the 10 most used or most done things by our users? Because if we improve those, they will feel it, for real and instantly, and then we can model and do all the magic tricks around that, I would say.
C
Another
thing
that
I
want
to
figure
out
with
the
data
team
and
with
telemetrics
is
to
have
a
better
understanding.
My
gut
tells
me,
and
some
very
old
data
has
also
shown
most
of
the
users
are
accessing
three
to
five
projects.
That's
it
they
are
not
browsing
around
and,
like
oh
there's,
a
project,
there's
a
project,
there's
a
project,
so
these
most
used
projects
that
we
have
in
this
drop
down.
Why
can't?
C
We
also
use
that
to
already
load
data,
so
you
hit
gitlab
service
worker,
kicks
in
and
says
hey
here
I
am
user
xy
set
just
landed
on
gitlab,
let's
preload
your
most,
your
three
most
used
projects,
let's
preload
your
current
master
tree,
let's
preload
your
current
issue
list
and
stuff
like
that,
so
that,
if
you
click
on
it,
you're
not
actually
going
to
the
shower
and
doing
the
whole
la
la
but
you're
like
instantly
there
and
might
be
something
worth
to
to
investigate.
I
would
say.
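Purely illustrative: a small helper computing which endpoints a service worker could warm for a user's most-used projects. The endpoint paths are placeholders invented for this sketch, and the commented listener shows how a service worker might warm them on activation.

```javascript
// Invented endpoint paths: build the warm-up list for the user's top
// three projects (master tree plus issue list per project).
function preloadUrlsFor(topProjects) {
  return topProjects.slice(0, 3).flatMap((project) => [
    `/api/projects/${project}/tree`,   // current master tree
    `/api/projects/${project}/issues`, // current issue list
  ]);
}

// Inside the service worker, the list could be warmed on activation:
// self.addEventListener('activate', (event) => {
//   event.waitUntil(
//     caches.open('preload-v1').then((c) => c.addAll(preloadUrlsFor(top3)))
//   );
// });
```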
A
But preloading the projects in this way... so, correct me if I'm wrong, but you're thinking about those projects we have in the recent-projects dropdown, or whatever this dropdown is called, right? So we preload the data: we cache this route, and when the user clicks and then goes to the project, it gets fetched, right? So we prefetch the data; we do not process it in any way; we would not do a preload resource hint or anything like that.
C
Yeah
got
it,
it
would
be
nice.
I
don't
know
if
the
data
team
tells
us
look.
What
80
percent
of
the
users
normally
do
is
simply
they
come
to
gitlab.
They
click
the
merge
request
page.
They
click
on
one
of
those
merge
requests,
that's
what
they
do
most
of
the
time
yeah.
Then,
let's
take
a
look.
Okay,
we
as
soon
as
the
user
hits
our
domain.
Then
let's
preload
your
your
merge
request.
Let's
preload
the
merge
request
with
the
last
activity,
because
it's
most
probably
the
one
that
you
will
look
at
the
first.
C
The
problem
to
some
extent
is
that
we
have
the
issue
page
right
now.
The
discussion
thingy
takes
on
some
issues
on
submerged
requests:
five,
six
seconds
to
load
all
the
discussions
of
one
thing
which
most
probably
hadn't
changed
at
all
between
visit,
a
or
visit
b
or
when
you
reload
it
and
stuff
like
that.
C
Discussions
we're
loading
literally
everything
and
we
are
pulling
literally
everything
in
the
background
and
we
are
literally
converting
everything
we
are
doing
very
heavy
markdown,
rendering
who
says
that
we
can
do
a
mix
of
markdown
rendering
between
the
backend
and
the
front-end,
so
that
we
offload
way
more
because
that's
a
huge
and
very
pretty
and
heavy
operation
on
the
back
end,
quite
the
same
for
for
not
only
the
markets
but
the
files
itself
to
to
high
to
the
syntax,
highlighting
et
cetera.
We
have
we're
not
going
place.
C
I
think
it's
totally
worth
figure
trying
out
if
we
just
load
the
two
kilobyte
file
and
send
it
to
monaco
and
let
monaco
do
the
syntax
highlighting
rather
than
sending
it
through
tons
of
things
on
the
back
end,
and
we
have
svg
that
we
render
per
line
every
time
into
the
html
and
we
have
a
span
almost
for
each
and
every
character,
etc,
etc,
etc.
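A rough sketch of handing highlighting to Monaco on the client. It assumes the monaco-editor bundle is already loaded on the page and that a container with id "blob-viewer" exists; both are assumptions of this sketch, not existing GitLab code. `monaco.editor.colorize` resolves to highlighted HTML, so the raw file could be shipped as-is and colorized in the browser.

```javascript
// Fetchable raw source goes in, highlighted HTML comes out; the heavy
// tokenizing happens on the client instead of the backend.
async function highlightClientSide(rawSource, languageId) {
  const html = await monaco.editor.colorize(rawSource, languageId, {});
  document.getElementById('blob-viewer').innerHTML = html;
}
```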
A
Yeah, about the DOM flashing: there is a technique that we absolutely don't use in GitLab, right? Whenever we do the things, just run them in a document fragment to make things really fast; document fragments are really fast. That's another thing that I'm playing with in relation to the Web IDE: for example, fetching data in parallel, as we said, and then pre-building the things in a DocumentFragment right away, to provide some very basic, consumable DOM structure already. And when this DOM structure is requested, we dump it into the DOM with just one DOM change, instead of, you know, all this recursive-rendering thing. That would be really cool, and, depending on the complexity of the DOM structure, that might give a really significant performance improvement, especially on a merge request with a lot of comments.
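The DocumentFragment batching described above, as a small sketch; `renderComments` is an invented helper, and `doc` is injected so the batching logic can be exercised outside a browser.

```javascript
// Build all comment nodes inside a DocumentFragment and attach them with
// a single DOM insertion, instead of one insertion (and potential
// reflow) per comment.
function renderComments(doc, container, comments) {
  const fragment = doc.createDocumentFragment();
  for (const text of comments) {
    const node = doc.createElement('li');
    node.textContent = text;
    fragment.appendChild(node); // off-DOM: no layout work yet
  }
  container.appendChild(fragment); // one DOM change for the whole list
}
```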
C
We
can't
do
that
because
we
just
get
one
thing
from
the
server
and
we
just
dump
it
in
there,
which
is,
let's
say
95
of
everything
that
we
then
insert
will
never
be
visible
in
the
first
place
to
the
user,
and
that
is
something
for
sure
that
monaco
is
doing
much
much
much
better,
because
it
knows
what
the
user
sees,
etc,
and
but
that
is
exactly
what
we
could
try
out
with
exactly
this
full
site
speed
I
o
scripting,
measuring,
because
we
can
really
measure
like
the
full
workflow
between
version
a
and
version
b.
C
The
other
question
that
I
was
looking
into
is:
if
we
should
already
take
a
look,
if
we
make
some
some
more
global
methodology
for
testing
side
by
side,
because
right
now
what
we
have
is
feature
flags.
We
have
feature
flex
which
are
fantastic
and
they
make
a
huge
difference.
We
can
roll
them
out
to
users
to
groups
to
projects
to
10
to
everyone
perfect,
but
what
you
can't
do
is
use
the
same
project
with
the
same
issues
with
the
same
amount
of
data
side
by
side.
C
Not
a
b
testing,
it's
like
testing
against
the
same
data
against
the
same
project
with
the
same
stuff,
with
method,
a
and
method
b
that
we
have
implemented.
So
if
we
would
roll
out
like
a
full
view,
rainbow
browser
that
does
everything
I
want
to
test
the
the
website
project
with
the
old
method
and
the
new
method
at
the
same
time,
so
that
I
can
run
side
speed
once
like
this
and
side
speed
once
like
that
without
changing
any
feature
flags
without
just
doing
anything,
but
rather
like
a
query,
parameter.
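A tiny sketch of the query-parameter switch; the parameter name "repo_browser" is invented for illustration.

```javascript
// Pick the old or the new implementation from a query parameter, so both
// can be measured against the same project and the same data.
function pickImplementation(urlString) {
  const params = new URL(urlString, 'https://gitlab.example.com').searchParams;
  return params.get('repo_browser') === 'new' ? 'new' : 'old';
}
```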
A
This is a very, very cool idea. And remember, I think it was back in November, when I demonstrated that script that I had for running the user timing metrics? It has never been published, and I never had time to actually publish it, but I will have to do it. So, in that script I actually have the possibility to do this, technically, for my different scenarios.
A
It's
like
it
doesn't
produce
this
visual
thing
that
site
speed
does,
but
it
gives
throws
real
numbers
that
that
I
can
compare
like
which
branch
is
actually
faster
for
for
that
a
particular
for
that
or
another
scenario,
so
that
that's
that's
really
cool
thing,
and
I
would
I
think
that
implementing
that
would
be
really
really
interesting.
A
But for production: technically, I was thinking about this a lot, since we are running the tests locally, on the dev environment, right? Then we go to production, and we have a completely different build and everything. So again, what that script does: it has a flag that says, okay, compare the production build; so it runs the production build for each of the branches specified, and measures that.
B
Some team is measuring the team YAML, and we're going to remove the team YAML at some point. I think some team is using the team.yml to see the performance; it's in sitespeed, it's in some dashboards. I saw it while digging in the code, and it seems like Dennis is nodding, so he probably added it.
B
What I just wanted to say is: if we want to measure against, like, a 20k file, it probably makes sense to create a test repo, to put it in there and, yeah, to see how the performance is. Okay.
B
There's another great example we could check in the Web IDE, by the way, and that would be opening one of the images from the team images folder, because there are like 1,300 files in there, and the Web IDE takes like two, three seconds just to load the folder.