From YouTube: Web-Performance Brainstorming session. Jan 20th, 2022
A
Hello everyone, we are here at the web performance brainstorming session that, surprisingly, gathered both front-end and back-end engineers: me and Yannick, or Yannick and me, whichever way you prefer to look at it. I had a session last week where we talked about the slowdown of efforts in performance land, because — for those who are new in the company — we had a really big effort last year along the lines of performance, and in particular front-end performance.

But now we have sort of reached the point of manageable performance, should we say, where we as the front-end engineers have our LCP metrics more or less at bay. Although, as we saw in a session earlier today with Kos, LCP will happily shoot us in the foot whenever it can — but that's another thing. We have our metrics at bay, and because of this the performance efforts were sort of slowing down.

So we had this session with Yannick and we thought: okay, we're not in the land of a super cool and fast product yet, unfortunately, so what can we do to make things better? Where can we step in and help the product? That's the purpose of this session: to identify areas where we can act, preferably following the paradigm of boring solutions that we have to foster in the company.

So preferably we're going to talk about the easy and fast solutions that bring the most value — the most efficient things we can do. This session is about brainstorming; that's the whole point. Yannick, do you have anything to add? I was saying so many words.
B
No worries, thanks for the introduction. Yeah, not a lot, just a really short introduction from me. Within the session Dennis and I had last week, we both felt a little uninspired in terms of web performance, since — yeah — everybody is pretty passionate about the topic, and there was nothing we could really get our hands on currently. We're currently working on our platform, and we've been like: that can't really be the case, there has to be something that we can do...
A
Before we proceed — for those who missed that message: we have the agenda attached to this meeting, and it is the same agenda that we are using for the application performance session, the bi-weekly session that is yet another opportunity to participate in the performance efforts. This is sort of an off-schedule session, but regularly we have bi-weekly application performance sessions on Mondays.
C
Actually, I didn't want to steer this discussion in any particular direction — my goal was just to observe what gets discussed. But since it's brainstorming, I have a couple of things to add. First of all: we at the Memory team are currently working on the tooling epic.

It means that we are trying to condense the wide array of tools we have at GitLab into something more manageable and more usable for a developer of any level, because currently we have a bunch of tools built inside GitLab — our own — and a bunch of external tools. Some of them are even described in the performance session, but they're not usable per se. What I mean is: it's not exactly clear how to interpret the results, which tools to use first, and so on.

So my idea is that I would like to understand how you, as primarily front-end developers, prioritize performance issues. What is your workflow, where do you look? How do you assess each particular case, and based on which thresholds or numbers do you make the decision to take something into work? This is the first thing that interests me, because it will directly convert into how to shape this tooling as well. And the second thing I wanted to add is about boring solutions.

We had a couple of efforts related to improving the performance of particular pages which had a very heavy front end, and we noticed a particular trend in the cases that we observed: the heaviest pages were related to content rendering, of course, and it's kind of expected for them to be heavier than your typical page.

Yes, there were some complaints from inside the team, from people who were used to getting everything rendered on a single page so they could Ctrl+F and search for something, but we found that aggressive pagination could at least mitigate the problem temporarily. So that's all from me for now — yeah, I don't want to...
A
Just one thing: let's make sure we are following some guidelines. Zoom has this functionality of reactions, with raising hands — so if you have anything to add, please do use it. Yes, we are going back to school now, I'm sorry for that, but it gives some order to the call.
D
Yeah — sorry, Alexey, I wanted to ask a question. You said that pagination solved the high-load issue. Have you considered granularly updating the DOM in the UI, or did you have to just split it into hard pages?
C
It's a good question. Fortunately, this page already had pagination; we just had to put in reasonable, more graceful values. We rendered 100 diff chunks per page, and we reduced that to 20 or 10 and so on, which allowed us to prevent those DoS-related scenarios where we could just load a heavy JSON which was parsed and ate a lot of memory on the back end as well. So in this particular case we didn't change the UX — we kind of made it more paginated, which was a compromise.
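
To make the pagination change concrete, here is a minimal sketch of the kind of adjustment described, in TypeScript. All names and the chunk counts are illustrative assumptions, not GitLab's actual implementation.

```typescript
// Hypothetical sketch: render a bounded window of diff chunks per page
// instead of everything at once, so the JSON payload and the memory it
// takes to parse it stay bounded. Names and numbers are illustrative.
interface DiffChunk {
  id: string;
  html: string;
}

const CHUNKS_PER_PAGE = 20; // reduced from 100 in the case discussed

function chunksForPage(allChunks: DiffChunk[], page: number): DiffChunk[] {
  const start = (page - 1) * CHUNKS_PER_PAGE;
  return allChunks.slice(start, start + CHUNKS_PER_PAGE);
}
```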
A
Thanks for the question. Just one thing: since you were the first one asking a question, could you please write it down in the agenda, so that we have it there, along with a brief version of the reply you got? Okay, thanks.
A
So my point is, first of all, for those who are watching this recording: please, please, please do go and fill out the survey that Alexey posted a link to, because we are all in the same boat. We will all benefit from the best possible tooling and from working with tools that actually serve our needs, instead of having a bunch of tools that we never use. So this is a very important initiative.

Thanks for setting it up — I'd say that's super cool. So this is the first point.
A
The one thing that bothered me a bit in the epic that you've linked in the note is that it literally says "sitespeed dashboard in Grafana — I would drop it". Could you please elaborate on that? Or actually it was Matthias who posted this, but I guess...
C
Yeah, that was exclusively a discussion about the survey itself. The idea was that it's something that is already established and not something we are going to change, so we tried to remove from the survey the tools that are obviously here to stay. In the initial version of the survey we had Kibana logs and so on, but obviously that's something everybody checks — and even if the survey showed low interest in Kibana logs, that's not something that would go away. So it's not about deprecating the tool from our toolbelt; it's just about removing it from the survey.
A
Thank you — it kind of gave me some shivers, you know. That's the tool that I, as a front-end engineer, am using, so it's probably my main source of information when I'm analyzing performance. Thank you for elaborating on that. Yannick, you're next.
B
Yep, thanks for the question — I just wanted to elaborate on this more fully. It was basically about how we front-end engineers find out what kind of issues to solve when it comes to web performance, or how to further enhance our web performance.

For me personally, it has mostly been my own research: best practices I'm aware of, which I then checked GitLab against for compliance, or just things that bugged me while using the product. I am aware that we have many of those Grafana dashboards, and for me those are kind of a double-edged sword. I definitely see a lot of value in them — and it might be just me — but I think they are kind of hard to use.

Especially since we have all those different curves and so on, it is just hard for me to get from them down to a specific URL, a specific problem, a specific page. It's just not easy to narrow things down to a specific problem. So that would be something where I personally think we could really improve. We currently have the data, but I don't think we're doing the best possible job when it comes to communicating and observing it.

I feel a little that these dashboards are too hidden and they don't get the exposure they probably should get. So that was about it from my side.
A
Okay,
I'm
I'm
gonna,
keep
the
erasing
the
hand,
because
my
comment
is
the
first
one
there
so
in
in
in
russian.
We
have
this.
This
wording
like
or
like
you
know,
semi-joke
like
I
don't
like
kittens
well,
you
just
don't
know
how
to
cook
them.
A
A
This
is
just
from
my
experience
for
the
the
dashboards,
give
a
very
good
sort
of
starting
point
where
to
look
telling
you
where
to
look
for
for
inspiration,
so
to
speak,
because
just
surfing
the
product
with
devtools
up
and
running
is
is
is
okay,
but
sometimes
you
just
want
to
know
where
to
look
and
the
dashboards
we
have
in
grafana.
A
Let's put it this way: not the dashboards themselves, but the sitespeed reports behind those dashboards are the most valuable source of information for performance measurements, I guess. And I think — Yannick, do you think it would make sense to actually have a workshop, a one-hour workshop, where we demonstrate — me, or Tim, if Tim has time — how to use those dashboards and reports in the most efficient way? Because it takes some time to get used to these sitespeed reports.

They look kind of weird at first — for me it took some time to get used to them — but they provide a lot of very useful information, and I think it would make sense to dive into analyzing these sitespeed reports. And the information that the Grafana dashboards in particular give is very useful.
A
When, for example, you look at the LCP leaderboard, that one gives you the idea of who the worst offenders on LCP in particular are. And then — at least that's how I did it — I went to that LCP leaderboard, took the worst route, went there and analyzed it. And usually, if a route is really, really bad, there will be some easy wins there that might help.
A
But I think the problem with the Grafana dashboards is not that they are hard to use or unclear; the problem is that too few people are using them. That's the problem, because they give a lot of information.

If we had this one-hour workshop, maybe I could make them more useful for people. I don't know.
B
Don't
know
to
answer
this
really
really
quickly.
Yeah
I'd
be
a
fan
of
such
a
workshop,
like.
I
am
pretty
sure
that
there
are
there
be
plenty
of
things
to
learn
from
me
and,
as
you
said,
not
enough,
people
are
using
those
dashboards
because
they
probably
might
not
even
know
that
they
exist.
So
let's
get
them
on
board
as
well
or
at
least
try
to,
and
this
could
be
beneficial
for
a
lot
of
us.
C
Cool. And another thing, on this comment about how to increase visibility: this is a very important point, and it was also part of what we looked at when doing our tooling research, because we found that we have all different kinds of triggers for developers to start looking into performance. Starting with Slack channels — for example, we have this mechanical sympathy alerts channel, which lists the top offenders in terms of memory and so on. Those can provide some good leads, and sometimes a manager could come to you.

Sometimes it's obviously an incident, which is the boldest kind of invitation to research the performance of a particular endpoint. But what I wanted to say is: I'm curious how you prioritize them, because this whole attribution story is a very important point. I wanted to share what Scalability shared with us. They told us they run a tool which measures the performance of particular endpoints, and they also maintain group labels on each controller, on each endpoint, and that's how they attribute: they automatically open an issue on the team's board to address particular performance issues.

So I think we should strive to do something similar in terms of content and performance overall. Ideally, we need to have some criteria, some threshold; when it breaks, we automatically open an issue on the team's board, make sure that it gets the required severity labels, and make sure that the team is able to ask for help from you, for example, Dennis, or other team members who actively work on performance.
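
As a rough illustration of the automation being proposed here: a sketch that files an issue through GitLab's REST API (POST /projects/:id/issues) when a metric breaches a threshold. The threshold, labels, host, and the shape of the metric sample are assumptions for the sake of the example, not an agreed process.

```typescript
// Hypothetical sketch: auto-file a performance issue when an endpoint's
// p95 exceeds a budget. Threshold, labels and metric source are assumed.
interface MetricSample {
  endpoint: string;
  p95Ms: number;
}

const P95_BUDGET_MS = 1000; // illustrative budget, not an agreed number

async function fileIssueIfSlow(
  sample: MetricSample,
  projectId: number,
  token: string,
): Promise<void> {
  if (sample.p95Ms <= P95_BUDGET_MS) return;

  // POST /projects/:id/issues is the standard GitLab issues API.
  await fetch(`https://gitlab.example.com/api/v4/projects/${projectId}/issues`, {
    method: 'POST',
    headers: { 'PRIVATE-TOKEN': token, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      title: `p95 regression on ${sample.endpoint}: ${sample.p95Ms}ms`,
      description: `Automated report: p95 exceeded ${P95_BUDGET_MS}ms.`,
      labels: 'performance,severity::3', // severity label, as discussed
    }),
  });
}
```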
A
I think it would be super cool to automate the process — create the issues and assign them to the particular groups. And that's one more thing that we discussed with Kos earlier today: one of the points of making performance a first-class citizen in the engineering routines last year was to actually engage the groups, so that they monitor the performance of the routes they are responsible for — and in the ideal world they would notice a problem, they would fix it, and that's how things would be.

Unfortunately, that didn't happen. So we are still only a few people caring about performance on a more or less regular basis and checking those dashboards. So automating the process would be super cool. However, there is one catch.
A
What
do
we
define
as
a
criteria
for
creating
an
issue?
So
is
it
a
spike
in
in
some
parameter?
Then?
Probably
if
this
is
a
temporary
spike,
just
like
one
off
that
might
be,
the
measurements
are
off.
That
might
be
some
hardware
issue
or
something
like
this.
So
if
we
react
on
a
spike,
then
we
might
create
a
lot
of
noise.
A
If
we
like
define
some
criteria
that
would
allow
us
to
say:
okay,
yes,
the
issue
is
persistent.
That
would
be
the
most
valuable
scenario.
However,
I
have
no
idea
how
to
achieve
that,
because
then
we
have
to
to
technically
build
some
mechanism
that
says:
okay
like
there
is
one
spike.
There
is
two.
There
are
two
spikes.
There
are
three
spikes.
Okay.
A
Now
we
create
an
issue
and
notify
the
group,
because
we
had
a
lot
of
things
where,
where
we
had
just
one
spike
like
the
numbers
go
up
and
then
with
the
next
measurement,
they
go
down
again
and
things
are
fine,
so
creating
an
issue
would
be
would
be
an
overkill
because
it
would
pull
in
the
resources
from
europe,
but
the
issue
wouldn't
be
there.
So
do
you
think
there
isn't
any
way
we
could?
We
could
define
these
criteria
that
we
could
follow.
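
One possible answer to the "persistent vs. one-off spike" question is to require several breaches within a sliding window before filing anything. A minimal sketch, with the window size and breach count as illustrative assumptions:

```typescript
// Hypothetical persistence criterion: only treat a regression as
// actionable when most of the recent samples breach the threshold,
// so a single one-off spike does not open an issue.
function isPersistentRegression(
  samples: number[], // most recent measurements, e.g. LCP in ms
  thresholdMs: number,
  window = 5, // look at the last 5 samples
  minBreaches = 3, // require 3 of the 5 to be over threshold
): boolean {
  const recent = samples.slice(-window);
  if (recent.length < window) return false; // not enough data yet
  const breaches = recent.filter((ms) => ms > thresholdMs).length;
  return breaches >= minBreaches;
}
```

This trades detection latency for less noise; as Alexey argues below, one-off spikes may still deserve a cheaper signal — an alert rather than an issue.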
C
Yeah,
I
have
two
things
to
add
to
that.
First
of
all,
I
think
we
definitely
need
to
contact
scalability
because
they
have
more
or
less
established
process
process
of
assigning
euro
budgets
to
each
group,
where
each
group
is
following
these
kind
of
things,
and
it's
already
something
that
they
put
a
lot
of
effort
into.
So
they
have
some
statistics.
C
They
probably
know
how
to
how
to
track
some
trends.
So
it's
not
something.
We
probably
need
to
reinvent
from
from
our
side
here.
So
I
think
we
need
to
hook
them
in
into
this
discussion
for
the
next
meeting,
or
so
we
we
discussed
it
with
andrew,
but
he
is
very
busy
usually
these
days.
So
I
think
somebody
else
from
the
group
could
help
us
and
another
thing
about
actually
defining
when
we
should
react.
C
It's
something
that
I
think
may
be
a
key
question
to
answer
even
more
than
anything
else,
because
it
could
require
a
lot
of
it
could
include
a
lot
of
politics
because
sometimes
spikes
worth
investigating,
because
one
spike
could
be
something
that
could
signalize
about
possible
scenario
for
dos
attack.
For
example,
we
had
an
issue
where
users
were
creating
a
bunch
of
jsons
on
on
the
page,
and
it
made
our
performance
non-existent
on
this
page
so,
and
also
like,
as
you
mentioned,
spike
would
mean
that
our
measurements
is
off.
C
It's
probably
something
we
also
want
to
address
as
early
as
possible.
So
I
would
probably
would
not
underestimate
the
spikes
in.
In
our
experience,
at
least
on
backhand
and
memory
team,
we
found
that
spikes
are
usually
useful.
I
mean
there's
something
worth
checking
in,
even
though
they're
rare,
they
are
not
contribute
to
this
whole
trend
and
could
go
into
p95,
percentile
and
so
on,
but
still
that's
my
take.
C
And
yeah,
I
I
don't
have,
unfortunately,
a
radiance
for
how
to
understand
which
issue
is
which
I
think,
as
I
mentioned,
some
pages
in
my
opinion
are
allowed
to
be
slow.
For
example,
nurse
request,
rendering
it
obviously
could
never
be
as
fast
as
any
other
page
in
gitlab,
because
it's
something
where
heavy
lifting
is
is
happening.
So
we
should
rely
on
statistics.
I
guess
yeah,
not
some
arbitrary
number
like
100,
milliseconds
or
so.
A
Just
just
one
question:
was
it
scalability
group,
you
you,
you
said.
A
Well, it's just, you know, the thing that makes the Grafana dashboards not so user-friendly from time to time: every now and then the service needs a restart, and again, since only a few people look at the dashboards, we notice this way too late. As of now, some routes haven't gotten any data since December 16th, for example.

That's definitely not the way we should be monitoring the performance of the projects. The problem is that I think Tim is the boss of that machine. I talked to him yesterday, and he mentioned that we might need to reinstall Ubuntu on that machine — but that's definitely something we have to fix.

We have the tools to restart the service, but again, being notified when the server doesn't get measurements would be super helpful as well. That alone might already make things much better and much more reactive.
D
My point in this discussion is that I have found that maybe not all of about.gitlab.com is covered by the Grafana reports. So we might want to check that, and ensure that the Digital Experience team — which I think is responsible for the website — has the right tools and that they are integrated into their work.
A
As
as,
as
I
mentioned
in
our
chat
earlier
today,
about
gitlabcom
is
under
the
same
umbrella,
it
is
monitored,
it's
like
it
is
supposed
to
be
monitored,
but
it's
one
of
those
services
that
has
never
like
that
hasn't
got
gotten
any
data
for
quite
some
time,
but
definitely
we
do
have
about
gitlab.com
covered
in
the
dashboard.
A
So,
yes,
those
those
pages
should
be
monitored.
D
Can
we
add
this
point
as
a
to-do
list
on
our
next
workshop,
just
to
check
that
the
reports
there
are
present
and
are
useful.
A
These
are
two
different
things.
Unfortunately,
as
I
said
they
are
present,
are
they
useful?
No
because
there
is
no
data
at
the
moment,
so
these
two
are
different
points,
so
we
can
stretch
the
first
one.
The
data
is
like
the
they
are
monitored
like
they
are
supposed
to
be
monitored,
but
the
data
doesn't
get
into
this.
So
I
will
add
the
action
item
to
double
check
and
make
sure
that
we
start
getting
the
data
for
those
routes
again.
D
Yeah-
and
my
point
is
next-
that
I
was
asking
if
we
have
a
bundle,
analyzer
and
yannick
has
answered
that
the
mr
widget
has
the
some
report.
I
I
haven't
used
that
deeply.
I'm
not
sure
if
we
have
a
well
split
of
the
libraries
that
we
are
using
there,
the
the
percent
of
the
size
that
they
are
taking
in
in
a
particular
bundle.
D
A
Yes, the bundle analyzer that Yannick linked to does do all those things. So if you by any chance create something that increases the bundle, it will tell you which chunk increased and by what percentage, and then you can click the link, go to the bundle, and run the analyzer — the visual one — where you will see which chunk contains which library, what is displayed, and how these things work.
A
If
you
have
any
questions
about
the
bundle
analyzer,
I
believe
it's
gonna
be
the
oh.
There
were
so
many
groups
already,
so
whatever
group
ip
is
leading
now
I
think
it's
foundation,
yeah
yeah,
so
yeah,
it's
like
we
have.
We
have
we've
had
way
too
many
groups
recently.
So
so,
whatever
whatever
group
ip
is
leading
they
they
might
help
with
that.
A
And
if
you
have
any
suggestions,
probably
that's
the
right
group
to
reach
out
to
you,
but
but
just
just
wait
and
see-
or
I
can
probably
find
anymore
that
changes
the
size
of
the
bundle
and
so
that
you
could
take
a
look.
What
is
there
and
what
is
not
the
problem
with
that
with
that
widget?
Is
that
sometimes
it?
A
A
A
I
I
think
we
mentioned
people
who
to
reach
out
to
if
there
are
questions
about
this
about
this
report,
so
that
widget,
no,
it
doesn't
say
anymore,
okay
or
actually,
if
this
report
is
bad,
if
there
are
changes
in
more
than
two
percent,
I
think
then
it
will
list
the
people
who
you
can
contact
and
get
approval
on
that
or
another
merchant
was
to
make
to
be
on
the
safe
side.
A
So
the
people
those
people
listed
there,
myself
included,
will
reach
out
check
the
report
and
we'll
conclude
whether
it's
related
to
your
changes
or
not.
So
it's
it's
always
great
to
to
to
double
check
with
with
things
like
this,
but
yes
to
to
directly
answer
the
question
we
do
have
about
the
analyzer.
D
Yeah,
that's
very
useful
to
have
these
if
I
have
increased
the
bundle
in
the
mr
info.
Another
part
of
this
question
is:
do
we
have
a
central
way,
not
central,
but
a
local
hosted
way
to
analyze
the
bundle,
so
I
can
run
a
command
and
see
which
say
library
is
included
in
which
bundles,
so
that
with
the
mr,
it
is
a
reactive
work,
and
I'm
also
asking
about
this.
You
know
research
that
we
can
do
locally
to
understand.
A
Right — Ruby on Rails is the fastest tool in the world, so it will only take about half an hour to run it, nothing more. So, webpack: technically we do get that data from somewhere, right, for the widget. We do have the bundle analyzer plugin, so it's just a matter of enabling it locally, and that should work. This is just the standard webpack-bundle-analyzer that you can run locally to get that data.
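
For reference, enabling the standard webpack-bundle-analyzer locally looks roughly like this; the exact flag GitLab's own webpack config uses to toggle it may differ, so treat this as a generic sketch.

```typescript
// Generic sketch of wiring up webpack-bundle-analyzer in a webpack config.
import { BundleAnalyzerPlugin } from 'webpack-bundle-analyzer';
import type { Configuration } from 'webpack';

const config: Configuration = {
  // ...existing entry/output/loader settings...
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static', // write an HTML report instead of serving it
      reportFilename: 'bundle-report.html',
      openAnalyzer: false,
    }),
  ],
};

export default config;
```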
D
Oh, thanks.

A
You're welcome. Let's see — your point is next. No, wait a second... no, I just wrote down what Alexey said about the Scalability group, and then...
E
But if you go further — if you start working on a problem specifically — they won't be as helpful as you would like, because you won't actually be able to reproduce the same results on your local machine. You have different hardware; you're also running a browser with native rendering, while they run it in the cloud without any rendering tied to your GPU or anything else; and you also cannot repeat that process of launching a headless browser locally — maybe I just didn't find it.

At least I wasn't able to do the same thing locally — like running a script that runs Puppeteer or something else and gives me the results. So you just have to trust it: just push, apply, and wait for the Grafana dashboard to update to actually verify that the change did make an impact. So there has to be some involvement in this, and you are not completely sure that the changes you've made are actually meaningful. Maybe you even decreased performance — I had exactly that situation, where locally it worked great.

Then I looked at the Grafana dashboards two weeks later and some performance had actually degraded, because the sitespeed suite was checking another page with a different environment, and the change was actually worse for that specific page. So they are not very trustworthy, but they do indicate the largest problems in the front end, which you can peek at and go from there.
A
Those
are
very
good
points
so
again,
like
I
I'll
say,
I
will
voice
what
what
you're
writing
like.
There
was
a
confusion
and
I
was
the
probably
the
cause
of
that
confusion.
We
saw
that
the
dashboard
can
be
dropped
in
that
epic,
but
it
was
purely
about
the
survey
whether
to
ask
about
that
in
the
survey
or
not,
nobody
will
remove
the
tools.
A
So
that's,
of
course,
that's.
That's
that's
out
of
question,
so
we
nobody
removes
grafana
dashboards.
However,
you
made
an
absolutely
wonderful
point
related
to
to
how
to
like.
Okay,
you
get
the
result
of
results
in
the
side
speed.
You
do
see
those
results.
So,
first
of
all,
where
do
the
the
results
in
site
speed
and
projected
to
grifana?
Where
are
they
useful?
A
They are very helpful in giving you the idea of whether the performance of a particular view in a particular environment has degraded or improved. This is a very important entry point, as you've mentioned. So when we see a change — either a drop in performance or an improvement — this is where we start thinking: okay, what led to this problem? How can we make things better? And that's when we get back to our local development: we write code, we change things, we change the architecture, we nuke GDK yet another time, we restart everything.

And you have absolutely nailed the point: how can we tell locally — without pushing things to the server, much less waiting for it to be merged, because technically we only measure on production or staging, once the merge request gets merged —
A
How
do
we
know
whether
we
improve
the
things
or
not,
so
one
of
the
solutions
is
to
run
gdk
measure
locally
on
the
development
there
is
this
command
jdk
measure?
I
think
we
have
documentation
about
this.
That
will
run
exactly
the
same
site,
speed
reports
for
you
locally.
A
It
will
generate
everything
and
do
all
sorts
of
things
for
you
locally.
However,
again
you've
got
a
great
point.
It
will
be
related
to
your
local
jdk,
to
your
local
hardware,
draw
a
hard
drive
and
to
the
whole
environment.
You
have
on
your
machine
so
how
to
eliminate
this
thing.
There
are
a
couple
of
things.
First
of
all,
you
can
run
jdk
measure
on
master
locally
with
development
environment,
on
your
hardware
and
run
jdk
measure
when
you're
on
your
branch.
A
This
will
give
you
the
comparison
of
your
branch
to
the
master
on
your
on
exactly
the
same
environment,
but
in
the
development
mode
right.
It
will
still
give
you
some
some
basic
understanding
of
whether
you
improve
the
performance
or
the
performance
is
degraded,
but
this
is
far
from
being
optimal
right,
so
you
can
generate
the
production
build
locally.
We
have
documentation
on
that
in
the
front
and
faq
section
of
our
documentation.
So
you
generate
the
production,
build
on
master
branch.
You
run
jdk
measure
you
switch
to
your
branch.
A
Jdk
run,
generate
the
production,
build
on
your
branch,
run,
gd
game
measure
compare
to
production
results
on
exactly
the
same
environment.
This
gives
the
better
overview.
A
It's
quite
tiring,
it
it's
time-consuming,
and
but
nobody
tells
that
improving
performance
is.
Is
fun
thing?
It's
it's
always
painful.
So
that's
that's
one
thing
of
doing
things.
Another
one
is.
I
haven't
used
that
script
for
a
while,
but
I
have
a
script
where
you
can
specify
several
branches.
So
I
once
I
had
experiment
with
seven
different
branches
when
I
was
working
on
performance,
so
several
different
branches,
where
you
achieve
the
same
thing,
but
with
different
means,
you
run
the
script.
A
The
script
will
check
out
all
seven
branches,
for
you
run
all
the
measurements
for
you
and
spit
out
the
result
with
more
or
less
one
command
for
you.
So
the
problem
is
that
I
will
and
it
will
run
in
the
headless
browser,
so
I'm
I
probably
have
to
to
get
back
to
that
script
clean
it
up
a
bit
and
we
we
might
have
have
it
available
for
for
broader
usage.
But
it's
it's
very
useful.
A
The problem with that script is that it explicitly targets our custom user timing metrics, not LCP, because getting LCP — or any web vitals metric — in that environment is kind of tricky. But what we can do now is that I update the script so that, instead of you running gdk measure, it is the script that runs gdk measure for you. The catch is that gdk measure requires Docker.

Docker has to be running, and that's, you know, one more step in this whole process for the script. So I will see what I can do with that script and whether it might be useful for a broader audience. But last year I did a lot of testing with this script, and it was really helpful for getting results — to the point where it just shows red or green circles for a better branch or a worse branch.
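
The custom user timing metrics mentioned here are the standard Performance API marks and measures; a minimal sketch, with the mark names and the rendering step being illustrative:

```typescript
// Hypothetical sketch of a custom user timing metric around an
// expensive rendering step; names are illustrative.
function renderDiff(): void {
  // ...expensive rendering work...
}

performance.mark('diff-render-start');
renderDiff();
performance.mark('diff-render-end');
performance.measure('diff-render', 'diff-render-start', 'diff-render-end');

// Measures like this can be read back locally or picked up by tooling:
const [measure] = performance.getEntriesByName('diff-render');
console.log(`diff-render took ${measure.duration}ms`);
```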
D
Yep,
of
course,
yeah.
So
again,
I
think
this
is
a
good
point
for
our
workshop
to
to
investigate
this
flow.
How
actually
we
can
measure
current
branch
and
master
branch
and
compare
them
and
to
work
on
on
the
reports,
so
it
might
be
worth
it
in
there.
A
Yeah,
I
will,
I
will
include
that
it
might.
It
might
extend
the
workshop
a
bit,
but
I
I
think
it's
worth
it's
still
worth
doing
this
and
we
will
record
that
workshop
for
a
future
reference
that
would
be.
That
would
be
very
useful.
I
think
it's.
C
Yeah, I think it's also something to keep in mind, because we found that most of these issues are only visible with a high amount of data. So probably the easiest solution is to have a script to populate our local database with a lot of typical issues and so on — that may be a very first step. As a second step, we found it useful to import some heavy repository.
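
A sketch of that "populate the local database" first step, using the standard issues API against a local GDK instance; the host, counts, and payload are illustrative assumptions:

```typescript
// Hypothetical sketch: bulk-create issues against a local instance so
// that data-volume-dependent performance problems become visible.
const GDK_URL = 'http://gdk.test:3000'; // assumed local GDK address

async function seedIssues(projectId: number, token: string, count = 500): Promise<void> {
  for (let i = 0; i < count; i += 1) {
    await fetch(`${GDK_URL}/api/v4/projects/${projectId}/issues`, {
      method: 'POST',
      headers: { 'PRIVATE-TOKEN': token, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        title: `Seed issue ${i}`,
        description: 'x'.repeat(2000), // some bulk to make pages heavy
      }),
    });
  }
}
```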
C
One
of
the
good
candidates
is
gitlab
hq
which
is
used
by
our
qa
team.
It's
a
pre-made
database,
I
wanted
to
say
not
repository
yeah
database
and
another
example.
We
used
linux
repository
also
have
a
lot
of
stuff,
but
it's
most
important
about
backhand,
because
it's
it
doesn't
have
like
gitlab
entities.
C
That's
important
while
gitlab
hq
have
all
these
guitar
advantages
like
issues
and
so
on
so
yeah.
I
think
it's
also
an
important
point,
because
we
have
these
database
measurements
right.
These
database
lab
channels
where
we
could
run
a
query
on
the
on
the
replica
of
production,
which
is
amazing,
because
we
could
measure
the
performance
on
a
more
or
less
on
an
environment
which
is
very
close
to
the
real
life,
and
it
would
be
interesting
to
have
something
like
reference
architecture
so
too.
To
do
this.
C
I
see
some
efforts
in
a
qa
team
working
on
this,
but
I
didn't
follow
them
closely.
So
I
will
update
the
document
if
I
will
find
some
some
good
tools,
because
they're
working
on
a
staging
reference
architecture
staging
craft
environment,
I
will
post
it
to
the
agenda.
It's
not
something
I
haven't
researched
yet,
but
it's
something
that
may
be
relevant.
Let
me
find
yeah,
that's
it
for
me.
I
will
post
the
link.
A
Cool,
thank
you.
Thank
you,
alexey,
I'm
I'm!
I
see
that
I
have
raised
hand,
but
I
completely
forgot
what
I
was
going
to
talk
about.
So,
of
course
you
you
have
the
next
question,
but
it's
probably
related
to
the
to
the
to
the
first
example
that
alexia
mentioned
about.
A
Okay,
so
stars
again,
you've
mentioned
this
issue
with
dashboards
for
the
grafana
right,
no
way
to
launch
the
same
suit
locally
as
well
performance
criteria.
Sorry
to
find
okay.
Could
you
please
talk
about
that?
One
about
performance,
criterias.
E
...how each page moves through those stages. So they have severity rankings, and based on those they create issues. From time to time they lag a bit, so a page might actually be a little faster than it appears in the issue. So if you're working on a specific page and looking at the Grafana dashboards, and you see that the metrics have improved, you can actually ping this team, and they will create another issue to move the page to a lower severity rating.
A
Yeah, I'll make sure that we have somebody on the next call. That's super helpful — thanks for the link.
A
So
we
are
rapidly
getting
to
the
to
the
end
of
the
hour,
so
matthias
is
not
here,
but
we
talked
about
that
particular
comment.
Yeah.
C
Quite
summarized:
we
run
a
report
which
picked
up
top
offenders
in
terms
of
memory
usage
and
we
created
some
issues
and
it's
yeah.
It's
sometimes
also
front-end
related
or
at
least
rendering
related.
So
it's
it
could
be
a
an
entry
point
as
well.
That's
it.
What
do
you.
A
What
like
you,
you
identify
the
issues,
do
create
the
issue.
Like
you
identify
the
problem.
Do
you
create
the
issues
for
those?
Yes,
okay,
we.
C
Did
then
we
tried
to
solve
them
ourselves,
and
we
found
that
for
some
issues
we
actually
need
to
change
the
product
or
change
the
ux,
which
kind
of
restricting
for
our
group
to
take
action.
So
we
assigned
some
issues
to
the
teams
and
then
they
kind
of
started
to
live
their
own
life.
Some
were
closed,
most
of
them
actually,
but
some
were
addressed.
So
it's
not
something
we
run
regularly.
It
was
a
one-time
process,
it's
maybe
something
we
may
consider
on
a
regular
basis,
but
for
now
it
was
an
experiment.
A
Cool
okay,
but
is
there
any
anything
you
would
need
help
with
or
okay
cool,
not.
A
Okay
cos:
do
you
think
you
will
manage
in
one
minute?
No,
let's,
let's
move
this.
These
items
to
the
to
the
next
meeting
then
seems
like
we
will
have
more
of
this.
We
didn't
get
to
any
particular
problem
today.
A
I
know
yannick
is
disappointed
with
that,
but
man
I
I
promise
you
to
find
something
for
you
to
dive
into
so,
and
I
know
I
I
don't
promise
it's
gonna
be
any
easy
one,
so
I
think
we
will
have
to
make
this
more
often
like
the
more
regular
call,
but
also
again,
I
would
like
to
remind
you
that
we
have
this
application
performance
session.
A
It's
a
bi-weekly
meeting
this
one.
I
don't
know
how
often
we
will
have
this
one
and
whether
it
makes
sense
to
have
this
one
in
addition
to
the
application
performance
session,
but
for
now
I
will.
I
will
post
an
update
on
all
possible
channels
about
the
next
meeting
when
we
have
time
for
this
and
then
we
will
get
get
to
it
and
I'm
super
thankful
to
everybody
who
attended
today
today
and
for
all
the
ideas
we've
got
here.
A
I
think
the
the
main
idea
during
this
call
was
to
we
have
to
clean
up
in
the
tooling.
We
have
to
clean
up
in
the
monitoring
make
sure
that
we
monitor
what
we
have
to
monitor,
and
the
main
thing
is
that
we
have
to
monitor
this
on
a
right,
more
regular
basis,
meaning
we
have
to
either
automate
this
or
pull
like
make
people
somehow
look
at
the
griffon
dashboards,
and
for
that
we
will
have
the
workshop.
A
Probably
during
the
next
couple
of
weeks,
I
will
send
the
invitation
to
everybody
cool.
Thank
you
very
much
for
this
call
and
have
a
nice
continuation
of
your
day.
Thanks.