From YouTube: 2020 06 01 Memory Team Weekly
A
B
Yeah, thanks Nicole for writing the section into the doc. So, for the blame controller: we looked into the flame graphs and found which issue is the biggest offender, so I'm preparing a merge request to speed it up. The current result, on my local environment, is that it drops from 35 seconds to 13 seconds, more than twice as fast.
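The flame-graph workflow described here boils down to finding which phase of a slow request dominates wall time. A minimal sketch of that idea, with an entirely hypothetical helper and made-up phase names (this is not GitLab's actual profiling tooling, which uses real flame-graph profilers):

```ruby
require "benchmark"

# Hypothetical helper: time labelled phases of a request and pick the
# biggest offender, which is roughly what reading a flame graph
# top-down tells you. Phase names and workloads are illustrative.
def slowest_phase(phases)
  phases
    .map { |name, work| [name, Benchmark.realtime(&work)] }
    .max_by { |_name, seconds| seconds }
end

name, seconds = slowest_phase(
  "parse_blame" => -> { 200_000.times { |i| i.to_s } },
  "render_html" => -> { 2_000.times { |i| i.to_s } }
)
# `name` identifies the phase worth optimizing first
```

In practice one would reach for a sampling profiler rather than manual timing, but the triage logic (measure phases, attack the dominant one) is the same.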
B
C
A
Okay, that was the blame controller. Is there any work underway on the blob controller, or is that still to be done later?
C
B
B
E
F
Yeah, I think I only commented on Friday or over the weekend, but if we should talk about this in a separate meeting, I'm certainly open to doing so; or, if you have time at the end of this one, we can talk some more, or just do it asynchronously as well. But yeah, I think it would be good to try and move forward more quickly.
A
E
Yeah, so we enabled that once again this morning, and we didn't see any of the problems that we saw last time. So hopefully this is the last iteration before it can be on by default; so far it seems to be the last iteration.
A
Gotcha. All right, Nicola.
C
Oh, nothing to add here: the timings and measurements down to the service layer, I don't know who put that item there, but it's merged.
C
Not much, but I will check if the issue is closed; I think that we closed it last time. The only thing left here is that I'm still waiting for the data to be ingested into our Elasticsearch.
C
C
C
And regarding those transactions, there are two issues. One is adding the total number of database calls, plus read calls and cached database calls. This is almost done: a maintainer finished review, but he was not familiar with the code, so I asked Camille to do the final review when he has time. The other one, about those GitLab transaction metrics, is still in progress: the first iteration of review is done, but it will need some more modifications.
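In GitLab these per-request counters would hang off Rails instrumentation (a subscriber on the `sql.active_record` notification). The class below is a made-up, dependency-free sketch of just the counting logic being discussed (total, read, and cached database calls), not the actual merge request:

```ruby
# Sketch of per-request DB call counting: total / read / cached.
# In a Rails app this logic would live inside an
# ActiveSupport::Notifications subscriber; DbCallCounter is illustrative.
class DbCallCounter
  attr_reader :total, :reads, :cached

  def initialize
    @total = @reads = @cached = 0
  end

  def record(sql, cached: false)
    @total += 1
    if cached
      @cached += 1
    elsif sql.lstrip.match?(/\ASELECT/i)
      @reads += 1
    end
  end
end

counter = DbCallCounter.new
counter.record("SELECT * FROM projects WHERE id = 1")
counter.record("SELECT * FROM projects WHERE id = 1", cached: true)
counter.record("UPDATE projects SET name = 'x' WHERE id = 1")
# counter.total => 3, counter.reads => 1, counter.cached => 1
```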
C
F
Yeah, so I think we have to create a blog post issue now for the marketing team. Corporate marketing seems to have approved it, so now we need to create the follow-up issue for, I don't know, the social marketing team or the blog working group or whatnot, and then we can work with them and they can start doing some reviews. So I think that's the next step there, okay.
A
Do you want to pick that up, or do you want me to do it? I can pick it up. No, no, I can do it; it's pretty easy. Okay.
E
Yes; the first metric was merged, and I'm now working on the next ones. It's actually occupying a lot of my attention right now.
E
Yeah, and it's kind of become a side story that, from the other angle, seems to be quite important, because there is the need to have the feature flag training, but there is also a lot of discussion about feature flags, integrating them, and rolling that feature out.
E
E
So, just last Friday, I managed to finally create a database of all the feature flags within GitLab, for managing the GitLab application. There are 493 unique feature flag names. What I am trying to solve is making these accounted for, documented, and either actively worked on or removed from the code base. This is my primary goal; there are some adjacent goals, but those are not mine.
E
Our GitLab feature flags, with that model, making it easier for PMs, SREs or whoever to discover exactly which feature flags are enabled, where, by whom, for what reason, what the related issue is, what the current default state is, and things like that. So that's kind of the bigger-picture goal that I'm trying to solve there.
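As a rough illustration of how such an inventory can be assembled, one might scan the source tree for flag checks. The `Feature.enabled?` convention matches GitLab's public API shape, but the regex and helper below are a simplified sketch, not the actual tooling behind the 493-flag database:

```ruby
# Simplified sketch: collect unique feature flag names by scanning
# source text for Feature.enabled?/disabled? calls. The regex only
# handles symbol literals; real call sites can be more varied.
FLAG_CALL = /Feature\.(?:enabled\?|disabled\?)\(\s*:(\w+)/

def unique_flag_names(sources)
  sources.flat_map { |src| src.scan(FLAG_CALL).flatten }.uniq.sort
end

snippets = [
  "return unless Feature.enabled?(:blame_speedup)",
  "Feature.disabled?(:new_diffs) ? old_path : new_path",
  "Feature.enabled?(:blame_speedup, project)"
]
unique_flag_names(snippets) # => ["blame_speedup", "new_diffs"]
```

A real pass would read files from disk and also have to account for flags built dynamically from strings, which is exactly what makes the accounting problem hard.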
A
Yeah, there's this training issue on feature flag usage, and it spawned a bunch of discussions: different ways feature flags are being used, different understandings of what a feature flag is and how we should use it, naming, all kinds of stuff. That's where Camille jumped into the conversation and started offering solutions with his MRs. So I added all the links down here if anybody wants to follow along.
A
A
So
we've
got
quite
a
bit
of
un
assigned
work
and
let's
talk
about,
if
it
makes
sense
still
to
keep
these
in
the
milestone
we
are,
we
still
have
we're
halfway
through
the
milestone
right
now,
so
we
still
have
time
the
telemetry
work,
that's
shinyu
and
matthias.
They
are
not
here
and
a
lot
of
these
I
haven't
read
yet
so,
let's
take
a
look.
A
Camille, you created this one. Does it still make sense for this milestone?
E
A
All right, that was a while ago. Okay, we'll keep an eye on it, but it looks like it's likely to move on to the next milestone.
E
E
E
So if we at some point start focusing on trying to run GitLab on a 2-gig installation, or something like that, that would be the major requirement to look at forking, preloading and memory paging.
E
A
A
D
E
Right now we are only adding the metrics to give us some idea, so probably we've solved the detection part of it.
A
I'll follow up with Matthias on that one later. Telemetry is part of the stuff that Shinyu and Matthias are working on.
A
There were a couple of things that came in this week. One of them was Sid's comment, which Camille just mentioned: now that Puma is out and we've seen such great memory reduction, can we reduce our memory requirements from four gigs down to two? I don't have that issue handy, or know if an issue was even created.
E
From my perspective, two gigs could be pretty aggressive, because I would assume that the lowest GitLab could run on is somewhere around two gigs of memory usage, and it would require fine tuning. But I have no idea exactly how GitLab performance would behave in those two gigs.
E
So could we scale down one of our reference architectures, like, I don't know, the 1k down to 100 or something like that, and see if it's going to work in a two-gig version?
G
Yeah, yeah, that could be done; it's not too hard. Actually, the 1k environment could be good. The 1k environment does have quite a few nodes, but all the main stuff is in one node, which currently runs on an eight-gig machine.
G
The problem is, well, it's not a problem, but if I run the performance tests against it, they'll likely fail, because this is a question of: can GitLab be very performant on 2 gig? No. But can it run, can it still be functional? So I don't know how the results would look.
G
E
I'm kind of thinking that you're looking at the current state: can it run, or can it not run? Okay, if it can run, can we scale the 1k down to a lower number to make it run? And if it cannot run, it cannot run; we can maybe then optimize further, to figure out in what case it could run.
G
G
I also expect, if I start doing this, he might come back and say no, I didn't actually want you to do anything, if he was just asking out of curiosity. But for me to do a test against a 2-gig 1k is not actually that bad: it's literally a one-line config change, so I can do that and we can kind of guess from there. But yeah, I'll see what environment I can get up and running.
G
I can test it, and then I can just manually use it and see how it goes for a little bit. But yeah, off the top of my head, I'd be interested to see if we can do it, because with Puma the reduction was more like a third, not half, so maybe from four down to three, although that's not a nice memory number. So maybe that's why he asked about two.
F
Yeah, I think it's a question worth asking, and we should have some data. I'm not sure... yeah, I wouldn't say it's super important. You know, I think it's been hard to quantify how much incremental revenue we get from lowering the requirement from 4 to 2, right? But you know, this is also why I've been really concerned about Action Cable being turned on by default, because it would likely consume a good chunk of the savings we got from Puma. So...
G
It does, from the testing I've seen. But yeah, that's a different aspect.
F
F
I think we can try that 1k. Or I think we should just try a simple default Omnibus install, right, and just see what happens; that might be easier.
G
So we already have a 1k reference environment, and we test it daily. It has some tweaks in terms of config, but it is effectively a 1k environment.
G
The test I could do quite quickly is just to tune that down to two gig in terms of memory and see how it does. It'll do badly, but as long as it actually still maintains itself and is still functional, maybe not performant, then, as long as the node stays stable, I guess the answer would be yes. But we want proper tests, not trivial ones, so I'll try and get that done this week.
F
F
I guess, to be more clear, I'm not sure the 1k environment is the right one to try. If you look at the memo, it would be easier to just use our default Omnibus installation on a single node, right? So just install Omnibus and see what it consumes, as opposed to the 1k reference architecture, which might be a bit different.
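If someone does try squeezing a single-node Omnibus install down, the tuning goes through `/etc/gitlab/gitlab.rb` followed by `gitlab-ctl reconfigure`. The keys below are real Omnibus settings, but the values are illustrative guesses for experimentation, not a validated 2-gig recipe:

```ruby
# /etc/gitlab/gitlab.rb -- illustrative low-memory tuning only.
# Values are assumptions, not a supported configuration.
puma['worker_processes'] = 0            # Puma single mode: no extra workers
sidekiq['max_concurrency'] = 5          # fewer background job threads
postgresql['shared_buffers'] = "128MB"  # shrink the Postgres cache
prometheus_monitoring['enable'] = false # drop the monitoring stack
```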
F
A
Do you have different metrics for this, Grant?
G
I think so, yeah; I need to double check, but our 1k environment is definitely not running on 32 gig, it's running on eight. So I just looked at the CPU piece of this page, not the memory piece; I didn't actually realize. I think I misread it: because the eight gig is kind of highlighted, I thought that meant four thousand users, because the second entry in CPU is also 5,000 users. But there are other aspects to it, of course.
G
If there's heavy CI usage, then obviously that might increase memory usage, but it does seem a bit high now.
G
But if it's almost dysfunctional, then I think we should certainly update this page, and for the top entry there, for two gig, say you need swap; literally the same text saying you can do it, but it will be slow. And then I'll look at the rest and see what the rest should be, because the CPU requirement seems high too.
F
I'll open up an issue just to update that page; I think it's quite out of date. Yeah, it is, yeah.
G
Yeah, I think it's fine. The one little interesting rub is that we can't do swap on Google Cloud instances (or you can, but you need to go through a lot to make it work), so I'll be testing the first one, but we'll see how it goes.
A
Okay, and I'll create an issue to represent the tests for the 2-gig environment, yeah.
G
A
Okay, later today. And then the other one that came up was: Unicorn keeps running at a hundred percent after an upgrade, for a customer.
F
I think that one was solved yesterday, okay. The thing I don't understand is, if you click on the follow-up (it's been moved to Omnibus; you can click on that link there), you, like, clean up some of the symlinks or something like that. I don't really understand it, but it's running now.
G
We didn't catch this, I think, in the reference environments, because I have a switch for that stuff specifically, because it's the same config as before, when Unicorn was the default. So...
F
G
No, I jumped the gun with the headline. I should always remember to read into it more.
A
A
G
I posted my response with the North Star metrics as a general sense of where I think this should go. Just to summarize where I think it should go: to me, the task needs to be more about surfacing more information about the performance of GitLab generally, across its various guises, dot com and self-managed and any install that's out there, really, as long as the information is approved to be sent back.
G
It's been really rewarding to see the fruit borne out of that, and we're continuing to keep doing that and expand our performance testing. One of the next things would be sitespeed, which we're looking to do just like we've done with k6, so that we can centrally manage it and absorb the maintenance cost within the Quality group. That's our job.
G
So it makes sense for us to continue to do that, to expand it to sitespeed, and then to keep iterating on building that out. At the same time, sitespeed is easier for developers to run as well, so we should certainly encourage developers to run smoke tests with sitespeed in their dev cycles.
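For reference, a local sitespeed.io smoke run can be as small as one Docker command. The URL, iteration count and output location here are placeholders, not the team's actual pipeline configuration:

```shell
# One-off sitespeed.io smoke test against a locally running instance.
# Results (HAR files, timing summaries) land in the mounted directory.
docker run --rm -v "$(pwd):/sitespeed.io" \
  sitespeedio/sitespeed.io:latest \
  -n 3 \
  http://localhost:3000/explore
```

Running a handful of key pages this way during development gives an early signal long before a daily pipeline would.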
G
And then everything would be data. That's one thing we do have a little bit of, here and there, in our Elastic instance or a few other places. But it would help if we could get more data that's parsed and reported on cleanly: simple reports that say, well, last month on gitlab.com these endpoints or pages were the worst; and then a separate report, or the same report, that says on self-managed instances, from the environments out there that are sending back data, these were the worst things.
E
G
You know, we'd also be able to access the specs of their environments and what versions they're using, maybe, that kind of thing, because that would really be helpful, not just for us but for everyone across the company, to see: oh, okay, milestones is bad. Which actually is the case: milestones is bad, we know this, we just haven't got the coverage for it yet. So that would be useful for us, I think.
E
G
That's maybe where this should go, but that was just my take from reading it over again and seeing what it is. I don't see Capybara really being too helpful at the moment, because it's doing a lot, and even just measuring the time to complete a test will include other things, such as setup and various other stuff, and the tests would be running against various environments.
G
They can be doing other things as well, but being able to get information about customer journeys from actual real-life instances is, I think, quite powerful. So I think Snowplow actually sounds quite useful, over Capybara. That's what I've posted there and, as I say, I'm happy to discuss it more and try to figure out what we need to do moving forward, yeah.
F
To try and restate what I'm hoping to achieve here, and I'd welcome all your opinions: basically, I'm not sure the product management team and some of the engineering managers have done an amazing job of prioritizing performance problems in the past and really having performance be top of mind.
G
F
And so there's been, you know... Quality has been great at opening issues on hot spots and slow Rails controllers and things like that, and those are getting action, largely, probably, because of the priority and severity labels and things like that. But I would like to try to get performance to be more top of mind for the teams, and to actually treat it like a feature which they work on, and care about, and feed, and track the usage of.
F
F
That's what I'm trying to solve: the ongoing triage of performance, and making sure that PMs and EMs are aware of the overall performance of their product, of their area, and that they are making conscious decisions to either work on it or not work on it. And right now...
F
I think it's not always easy to understand what the performance of something actually is. And, you know, hey, I'm not sure many people are aware of the sitespeed tests that we have today on page loads. And B, I think page loads are also, you know, not always easy to consume, because you might get, like...
F
Oh, this page loads in three seconds or four seconds, and it's like, that's not so bad. But when you actually try to stitch together what that means for trying to complete something (for example, opening an MR, expanding the pipeline mini graph and clicking on a job), you know, that takes 15 seconds to complete, and that's, I think, more concerning than "oh, the MR takes four seconds to load". So that's...
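The point above can be sketched as a tiny "journey" metric: to a first approximation, the journey time is the sum of its step timings. The step names and numbers below are the hypothetical 15-second example from the discussion, not measured values:

```ruby
# A user journey modelled as ordered steps, each with a response
# time; the journey metric is the sum of the steps.
Step = Struct.new(:name, :seconds)

def journey_seconds(steps)
  steps.sum(&:seconds)
end

view_failed_job = [
  Step.new("load the merge request page", 4.0),
  Step.new("expand the pipeline mini graph", 6.0),
  Step.new("click through to the failed job", 5.0)
]
journey_seconds(view_failed_job) # => 15.0
```

The individual 4-second page load looks acceptable in isolation; only the summed journey makes the real user cost visible.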
D
F
And then, you know, your kind of job is done. But no: if you actually wanted to see why your MR failed, you know you're going to take 15 seconds of clicking through, and that's actually as fast as you can possibly go, because you're clicking as soon as the next page actually responds. So yeah, that's why I think the journeys, or the North Star metrics, are important, because I think not only do we need to show it...
F
We should show it in a way of, like, you know: PMs and EMs, you said that these certain activities are really important to your group; this is how long it takes to actually complete these activities, and this is the performance of these activities, right? The analogue would be e-commerce sites, oftentimes, right: they have page load performance, but they also track how long it takes to add something to your cart, click on checkout and actually purchase it, which is a lot different from the individual page load performance of each individual page.
F
F
G
That's what we've been doing... no, I completely agree; it's something that I've been trying to do as well, but in my own way, I suppose. It's been a secondary effort alongside the performance testing and the performance issues I've found, from my experience not just at GitLab but in previous roles as well: certainly it's about getting information out there and getting it in front of the right people, to make them come along on the journey with us and make them realize.
G
As
you
say,
performance
is
a
feature.
I've
I've
argued
this
in
the
past
as
well.
I
I
I
see
performances
equal
to
functionality
and
if
your
area
isn't
performing
well,
then
it
shouldn't
be
released,
but
you
need
to
bring
in
along
hearts
and
minds
with
it.
So
with
the
performance
issues,
just
that
alone,
we're
just
servicing
that
data
and
say
hey,
look,
that's
actually
tested
this
area.
It's
bad
like
blame,
didn't
realize
blame
was
that
bad.
That
was
a
blind
spot
for
us
as
well
until
our
competitor
pointed
out.
G
So then we immediately added it to the tests we run every day, we've got issues raised, and it's being put in front of the right people, and hopefully they'll keep that in mind moving forward.
G
If there are ever going to be any more changes to blame, then performance, hopefully, will be part of the mindset. I'm certainly very much up for increasing that awareness and that initiative. But yeah, I think it's just information; it's just the data. It's getting good data in front of the right people and saying: look, here's the data; this page, or this scenario, is performing badly.
G
So I think the sitespeed pipeline will be a big part of that, just like what we do with k6, where we're starting to see some good results, and then having that telemetry data on top. A PM could just click into one of our monitoring platforms, go to their area and say: hey, look, these are the actual live stats of this page over the last month, on dot com or on a self-managed instance. I think that's...
D
F
I think, as far as prioritization goes, the product team right now is working towards having a page that is reviewed every month, which looks at how much usage your features are getting, and that's going to start being used to control how much we invest in each area.
F
If
your
futures
aren't
being
utilized
we're
not
going
to,
we
might
not
invest
next
year
that
you
know
that
you
know
you
might
not
get
too
many
people
as
far
as
headcount
goes
because
you're
not
seeing
a
lot
of
usage
and-
and
I
would
and
so
we're
tracking
certain
activities
about
a
mile
which
is
active
users,
active
monthly
nikki
users,
and
I
would
love
to
have
the
performance
of
the
amount
right
next
to
like
all
these
actions,
which
we
say
are
really
key
for
driving
your
users.
F
Whatever you're using to monitor how much it's being used would be right there on that page. That way, if it's slow and you make performance changes, in theory you can see, right next to it, the impact on how many people are actually using said feature, and that way hopefully you get a good feedback cycle of, you know...
F
"We thought performance was a reason people weren't using it; we fixed it. Did it actually move the needle on usage or not?" And if it did, we're going to start getting some good use cases and proof points, and I think that will really help to drive performance forward, you know.
F
Massively. And if people make changes and they don't see any change in the active user counts, then, you know, something else is going on. So having that telemetry, I think, is really important, and having the performance of the experience you're providing be part of the overall story, not just the features, I think, will help. That's what I'm trying to achieve here, because I think there are some questions as far as the why. So, Camille, Alexi?
E
We just have slightly different ways to reach this goal. Because, for example, I opened the scaling review proposal some time ago, and I pointed to scaling engineers and memory engineers to help out with the scaling review. But one problem that I find challenging with this scaling review is for it to actually be useful to people, so that people would actually ask us...
E
Sorry, would ask us for feedback on how to improve the performance. And I think that, from my perspective, we don't have enough experience with fixing the performance and memory aspects of GitLab to actually be useful yet. That's my challenge: we are still in the phase of learning a lot of these aspects, one issue at a time. Like Alexi recently, with Nicola: they were investigating these blame controllers, so they learned flame graphs and a lot of awesome techniques.
E
My personal thinking is that we have a lot of different topics started, and we learned a lot, but it's still very targeted at very specific areas, and I still think that we don't have a very good overview of the whole application and how it works.
E
So I'm trying to figure out a way for us to extend this knowledge: basically, by trying to work on these items, to build these guidelines while we discover them, kind of like something I started doing recently with the feature flags. If I had not started working on that because of the training, I would never have figured out ahead of time what the guideline should be, because I would be so detached from the problem that I would be solving a completely different problem.
E
The problem with performance is that you can solve it along so many dimensions, and everyone has a different perspective on what is important. I think a lot is implied by the team name, which is Memory: I think we should be looking at our problems from the memory perspective. If we want to look at the problems from different perspectives, we should change the description of the team, the name of the team, to better indicate what other problem we are solving right now. Take, for example, Sid's question about two gigs.
E
It's actually an ideal problem for us to solve, because it's very well-defined, and solving it is in the main part of the core responsibilities of the team. So at least my perspective is: if we have different views about what the team should be doing, we should change the team name and the team description to better reflect that; but if we are the Memory team, I think our primary focus should be identifying and fixing the memory aspects, in whatever way is possible for us.
E
That makes us actual specialists in this area, so we can actually help others, and GitLab is so big that we're not going to solve every aspect with just our group. So, at least from my perspective, it's really hard to write guidelines on problems that I did not touch, because those guidelines end up disconnected and don't represent the real problem being solved.
E
E
F
Yeah, I'm sorry, I will check in. I also have a meeting with Sid, or maybe I'll write up an issue with Sid. I asked him directly last time (I can find the Slack link if it hasn't been deleted yet): you know, we're thinking of working on some performance challenges, not specifically memory; are you okay with this?
F
F
F
E
The follow-up items: it is actually quite interesting; maybe we could discuss that another time. But my thought about it is that these items are deeply interconnected, and if we become great at memory optimization, it's going to greatly affect the performance of GitLab as well, because memory is just one of the dimensions. We also have the Database team, and maybe the role of the Database team is to find ways to actively remove N+1 problems from the code base; and we are the Memory team, and if we look at it from the memory perspective, we also want to remove N+1s, which waste memory and also slow down the endpoints.
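As a dependency-free illustration of the N+1 shape being discussed (the class and data here are made up; in Rails the fix would typically be `includes`/`preload` rather than a hand-rolled batch):

```ruby
# Fake single-table "database" that counts queries, to show why an
# N+1 access pattern wastes work compared to one batched lookup.
class FakeDb
  attr_reader :queries

  def initialize(rows)
    @rows = rows
    @queries = 0
  end

  def find(id)        # one query per call: the N in N+1
    @queries += 1
    @rows[id]
  end

  def find_all(ids)   # one batched query for the whole set
    @queries += 1
    ids.map { |id| @rows[id] }
  end
end

db = FakeDb.new(1 => "alice", 2 => "bob", 3 => "eve")
author_ids = [1, 2, 3]

author_ids.each { |id| db.find(id) } # N+1 shape: 3 separate queries
db.find_all(author_ids)              # batched: 1 query
# db.queries => 4 in total (3 from the loop, 1 from the batch)
```

Each avoided query saves both latency on the endpoint and the memory churn of building and parsing an extra result set, which is why the same fix serves both teams' goals.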
E
E
It's going to bring a much bigger improvement as well. But I think, from the other angle, it gives you maybe a slightly better focus on the topics that you are looking for. I guess it boils down to the main question: do we care about looking at the aspects of GitLab that are underperforming on memory, or do we care about looking at the aspects of GitLab that are slow on the sitespeed side? Because it's a different dimension, even though the aspects you're going to be solving are going to be very similar.
E
It's just different: your goal is different, even though the outcome might be very similar.
F
Yeah, I think the prioritization might be different along the way too: which problems you look at. Sorry, go ahead.
B
Yeah, I wanted to say, on a side note: I understand the user journey, and the beauty of it as a long-standing metric, something that we could demonstrate to our customers and so on. But if the performance and latency, the responsiveness of the page, is our main focus...
B
Why don't we just assign these top offenders to teams? We already have the list from Grant. Why do we need to build a tool, which would take a lot of time and effort from us? Why don't we start with just assigning slow endpoints? Because, I mean, technically I believe that a slow journey is usually the sum of slow pages. There is nothing in between (well, not much in between), so we could, at least internally, treat a slow journey as a sum of slow pages.
B
So maybe we should start with this, without building this tool. I mean, at least per what Camille said, we should learn to identify and fix these issues and write guidelines and spread the knowledge. So maybe we, as the Memory team, should fix particular offenders first, instead of building this tool. That's, I think, my concern about this story: building this tool and presenting these metrics while we are kind of wasting our time not fixing particular offenders. That's the concern. Sorry.
G
Yeah, a few quick bits on that. Certainly, you know, if we do get telemetry data about user journeys, that's useful data. But certainly I wouldn't expect a bug to be raised, for example, saying that the journey to create a merge request is slow; I'd expect bugs about specific points of that journey.
G
G
"This is slow", or "this specific thing is causing this to be slow", and I think that's the case. We're not going to be building what was being suggested there, a specific new tool. I think getting that telemetry data from environments, and presenting it in a nice way, is a very powerful thing, and I think that should continue regardless. As for the sitespeed tests, we actually already have stuff in place for them.
G
We
just
need
to
expand
it
and
actually
get
to
reporting
in
a
nice
way
and
then
we'll
have
it
there.
So
I
I'm
certainly
not
keen
to
build
a
new
tool
that
would
be
costly
to
do
anything
costly
to
maintain
certainly
for
quality,
we're
looking
just
to
expand
what
we
already
have.
F
Yeah, a couple of comments there. One is: I think it is important to try and track it from the end user's browser standpoint rather than from the Rails controller, because I think one thing that's been apparent from looking at the front-end side is that the back-end Rails controllers might not be that slow overall, but the page performance can still be quite painful, whether because we're pushing a lot of computation onto the browser or because of how we're delivering things.
F
It's.
It's
interesting
that
that
quite
a
bit
of
the
slowness
to
some
of
these
pages
is
actually
on
the
on
the
front-end
side
to
some
degree
or
how
the
pages
actually
get
built
up.
So
I
I
think
it's
important
to
try
and
track
up
from
the
end
user's
perspective.
I
think
there's
about
the
volume
both
clearly
I'm
not
I.
F
I
don't
want
to
minimize
the
back-end
rails
control
performance
side,
but
you
know
I
I
think
to
some
degree
when,
when
look
when
presented
with
the
information
of
github
speed
versus
getting
speed,
it
was
really
apparent
that,
like
the
difference
from
the
end
user's
perspective-
and
it's
quite
stark
in
many
cases
like
github-
is
like
twice
as
fast.
E
So maybe one way to approach that: maybe we just start an issue, maybe we just reach out to each of the teams and ask each team to write, like, three or four QA scenarios for the most common workflows for their team, and we just start from there, and we get some data.
E
Maybe we'll understand how people build these, and we'll kind of build up the practice of how to build these user workflows next, because it's probably related to sitespeed, maybe related to some workflows that users go through. I'd actually also propose something else: maybe let's take our own workflow and focus on improving our own usage of GitLab for GitLab development. We often complain about GitLab underperforming on different pages.
E
E
E
E
I don't know; I kind of have random thoughts here. I use GitLab, and I have my own usage patterns, my unique user usage of GitLab's different controllers; I mean, there is some way to record that. Or what do we do? Maybe it's just sitespeed on one page, like an issue, or a very large merge request, or a diff, and we see how it changes over time, really.
E
Is
it
actually
like
getting
gradually
slower,
as
we
are,
adding
or
is
kind
of
becoming
stable,
it's
kind
of
stable
and
then,
if
we
start
fixing
these
individual
aspects,
maybe
we're
gonna
see
how
they
gonna
affect
these
pre-recorded
plans,
because,
like
one
of
the
interesting
aspects
of
the
qi
testing
is
like,
you
can
execute
some
complicated
workflow
of
actions
being
executed.
E
So
maybe
we
could
use
that,
maybe
maybe
like
if,
if
I'm
maintainer,
I'm
kind
of
like
going
through
comments
and
like
each
of
this
operation
takes
200
milliseconds,
it's
actually
kind
of
significantly
slows
me
down.
So
maybe
I
should
just
go
and
fix
something
that
is
slow
to
me,
like
the
workforce
that
is
slow
to
me,
hopefully
likes
making.
Everyone
else
be
more
happy.
F
G
G
No, you go ahead... I mean, what I was going to say is that I think Camille's suggestion is quite logical, as essentially what we're looking at with Quality, and maybe we should own that, because in Quality we have this kind of thing already; we also have the more general overview of the product, like we do with the k6 testing.
G
So what you're proposing is essentially the same as sitespeed: we have daily tests against the reference environments that we already have in place, and these tests run in a controlled test environment, so we know that if a performance issue is showing in them, we've already taken away pretty much all the wildcards we could think of, and then we know it literally is the application that's not performing. And what we're proposing there is that we'll be building that up over time, adding more pages and then scenarios as well into that sitespeed pipeline. So we're on the same page there with Quality.
G
We wanted the same thing; we were always looking to do this, but we wanted to focus on the server first, and then, once that was in a good spot, move back onto the browser and cover that last gap there. Then we can see whether the server is performing well while the pages perform badly, or vice versa, or a mix of the two.
F
Yeah, we're running out of time, but I do think it's interesting to run it against dot com because, at least for me, A, it's a key focus for us this year, trying to make sure dot com is a platform people will enjoy using; but B...
F
Sometimes it's fine; other times it's extremely slow. And I think having that information is interesting, and you can try and figure out what's happening during some of those cases: why is our performance swinging so wildly on gitlab.com?
G
Yeah, I mean, that's kind of the trouble of performance testing as a whole: on dot com it can vary quite widely, probably because of various issues behind the scenes with infrastructure, what's going on, and what people are doing. So what we want to do in Quality is build up from the bottom: we test in a controlled environment, and then we can see, okay, that page is just performing badly regardless; that's an actual product performance issue.
G
We need to get it fixed. But then, if you see reports in the telemetry data saying that actually this page performs badly on dot com but is fine in our tests, then that would be a dot com issue, and it should be passed to the infrastructure team, or the right place for it to be investigated, to say: well, this page works fine on local installs; why is it performing badly on dot com?
G
G
So that's why I kind of favour both. With sitespeed in controlled environments, we measure how the actual product performs in a bubble; and then we can also look at the telemetry data and say, well, actually, the data from real life is this; we've got both, and we can then use that data to make informed decisions.
F
Cool, okay, I think we're out of time. Let's keep chatting on the issue, right? I think we have a lot of great ideas, and so, if we don't align on something in the next couple of days, we can maybe have an async meeting set up, dedicated to discussing this further. But I think... thanks.
F
I really enjoyed the conversation, so thanks everyone. Let's keep going in the issue and see how far we get, and we can have another meeting if we need to from here. That's good.