From YouTube: 2020 12 14 Memory Team Weekly
B
Yep, I just started the recording. All right, welcome to this edition of the memory team meeting. It is December 14th. There are a couple of non-verbalized topics in there, so for today, since the milestone actually ends tomorrow (we have an early milestone), I figured we'd run through the board real quick. 13.7: let's start from the top. Camille, is this going to close by the end of the day tomorrow?
C
That one stays open, though not because I didn't do anything about it. You saw I focused on getting Periscope done; with the help of Tyler from the analysis group we actually have it done and it's waiting for a merge. So please move that one. I'm actually going to try to figure out how to push that to the technical writer, because it's really more about that right now. Okay.
D
For that one, we didn't remove the actual feature flag, it just got enabled by default. So I will leave it open and remove the feature flag in the next release. I think we can move it to the next one, because in this release it will be enabled by default and in the next one we can remove it. So, okay.
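A minimal sketch of the two-step rollout described here, assuming the Feature.enabled? API GitLab used around 13.x; the flag name and code paths are hypothetical:

```ruby
# Release N: the flag still exists but now defaults to on.
if Feature.enabled?(:some_memory_feature, default_enabled: true)  # hypothetical flag
  new_code_path   # hypothetical
else
  old_code_path   # hypothetical
end

# Release N+1: delete the flag check and the old branch outright.
new_code_path
```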
A
Seven and 7.51 will not land in 13.7, because we need a GPT run over it and I will need some QA help to do that, and at that point I don't think it's a good idea to merge it on the very last day on self-managed, even if it would probably be fine. So I hope for the first day of 13.8. Okay.
D
Sounds good. For that one, I'm preparing the POC, but I don't think we will end up with a mergeable MR for it by tomorrow.
F
Yeah, I'm working on this right now, but it's a bit of an open-ended thing. We created this issue without really knowing what exactly we're going to tweak here, so I'm currently working on supporting some of those decisions, and then we'll probably focus on the simplest thing we can do first. It will definitely extend into 13.8, so I'd probably prefer to move it, because I don't think it will be totally finished at the end of the week.
C
Yes, so this one I'm going to move into the backlog. I actually have the MR open, but I didn't finish it yet, so let's move it into the backlog for now; it's not essential, it seems.
A
Yeah, it's not very convenient, actually, I mean these delivery updates.
F
But I mean it makes sense to be a little conservative here, right? It's a big change, you know, freeing this from the main app server, so it's probably better to have some confidence that this in itself is an improvement, or at least not a regression, and then we can look at that separately. It makes sense to me to move it on.
E
Okay, can I make a quick comment here? I still need to record a kickoff video for memory, and I want to make sure that I say the right things. So for the Puma-related work, that's all in its own sub-epic. The work is in progress, and we can say that for the next milestone we are continuing to work on that, right, and shipping the new version of Puma and making the changes.
E
So that's happening, and then there are a few other tasks, high-impact tasks that we identified for the memory stuff, that are in flight, like the garbage collection work that Matthias is doing and the optimizations that are also going to continue. So the theme here is still very much: we've identified the most important things to reduce memory, we've started working on them, and we continue to work on them in 13.8.
B
I think everything that we have in 13.8 is largely carryover from 13.7, so I would recommend the RICE table, to be honest.
F
I think we had an item for this anyway, in case we talked about the GitLab exporter stuff. If we decide to change it, like shrinking it or making some improvements that we know we can make, there would be a bunch of work falling out of this that would be immediately actionable as well. So maybe that's something we can look at too.
B
All right, so let's take a look at 13.8.
B
So we don't have a ton there other than high-level bullet points, and I think we just discussed those. So there's the carryover from 13.7, there's going to be some Puma work, and it sounds like maybe there's some work from the GitLab exporter, dropping or shrinking it.
E
That very likely involves other teams, and we have different competing priorities, right? So this is a longer-term thing in my mind, and I can maybe help prioritize that and we can work on it. But my question is: are there things that we can do right now to already reduce the memory impact that are not dependent on other folks? That would be a question that I have, and I have a feeling from what I'm reading from you, Matthias, that that's kind of the case, but I'm not sure.
F
Yeah, so Camille had a really good suggestion, which was... and we didn't really measure this, we should probably still do that as well, but it's very likely true.
F
The application server that gitlab-exporter runs on right now is Puma, which is not a small piece of software, and this doesn't serve a ton of traffic, right? I mean, this is really just the stats endpoint. So the idea was: maybe we can run this on a lighter-weight application server for now, such as WEBrick, or... I don't know if it's a good idea to run WEBrick, I don't think it's meant to be used as a production server, but something like this, something lightweight.
F
It's a Sinatra Rack app, so we can really run it on any Rack server; it doesn't have to be Puma. So yeah, I think I posted something where I would guess it would cut the memory use at least in half. That was a finger-in-the-air estimate, I have not measured this, but it would be more memory efficient for sure, while being slower, though, right? So it comes with a performance trade-off. This is something we also need to look at. For instance, I have no idea what the traffic is for our .com gitlab-exporter endpoints. It's very hard to say, even because we run such an ancient version of it in production that I don't even know what it is nowadays.
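Since gitlab-exporter is described above as a Sinatra Rack app, the swap is essentially a config.ru plus a different server binary. A minimal sketch, with a stand-in app class because the exporter's real class name isn't given here:

```ruby
# config.ru: MetricsApp is a stand-in for the real gitlab-exporter app.
require "sinatra/base"

class MetricsApp < Sinatra::Base
  get "/metrics" do
    content_type "text/plain"
    "gitlab_exporter_up 1\n"  # placeholder metrics payload
  end
end

run MetricsApp
```

The same file then boots under either server, for example `puma config.ru` or `rackup -s webrick config.ru`, which is what makes the memory comparison cheap to try.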
C
Yes, there are two aspects. One aspect, like you mentioned, is it being slower; I don't know about that. Second, we actually use WEBrick in the web and Sidekiq exporters inside the Puma processes anyway, so we already run WEBrick in production with production traffic. Those are the aspects to look over.
F
So yeah, there's a footnote to this, which is, since you're mentioning it: this is only true for .com, right? It's not every customer, because we also have a normal Rails endpoint that serves metrics, I think.
C
Yes, but we really try, I think, to discourage everyone from using that endpoint, because it has very bad performance characteristics on Puma. This is why we use this dedicated endpoint, and I believe we use the dedicated endpoint also on the on-premise installations with the Prometheus integrated, and we use it on Sidekiq and on Puma to scrape these processes, basically.
F
Yeah, this is actually a whole separate aspect of this, because the other way that we can conserve a little bit of memory, and also just make this thing a little bit easier to work with, is to just start removing the things that we know we don't need, right? This is not true for all of these metric groups here.
F
What I was trying to say is: I spent an extraordinary amount of time even trying to understand how this works, because the way it's implemented is really, really fragmented. We have all these different endpoints; some point to the same metrics, they're served by different systems, they have different performance characteristics, it's totally unclear. Some of the stuff is not documented at all, so it was super hard to work with.
F
So if there's any chance of, you know, dropping some of that extra weight that we think we don't need, that surely doesn't hurt, right? It will make the application a little bit smaller, it will make it a little bit easier to understand, and it will certainly consume a little bit less memory. Probably not much, but it will help.
E
So I have another question here, from a product perspective: would this all essentially fall under monitoring and metrics, this entire thing here, right? This is exporting metrics, correct? So we have a group, you know, with a product manager, Sarah, who I think actually owns this problem space to some extent, as in, you know, reconciling this. And so my question is: can we get some support from them, maybe?
F
I mean, that would be great. I think it's an area that kind of belongs to everyone a little bit, right? Because certainly infrastructure is super interested in any of this, and most of the drive in the direction of removing this or shrinking it has been coming from infrastructure engineers as well, and a lot of these metrics are consumed by SREs and infrastructure.
F
Like all the process stuff. I think where the monitoring team came in was largely for how this is packaged up as a feature inside GitLab, and I don't think gitlab-exporter purely exists for that reason; it's just that the internal, embedded Prometheus that ships with Omnibus happens to scrape this as well. So I think it would probably not be entirely correct to say that they own all of this, but they certainly have a stake in it as well. Yeah.
E
I'm a little bit worried, because this feels like you've uncovered this thing, and the deeper you dig, the more complicated it gets, and the more cruft is there. So I think maybe what we should decide on, for us, is: what can we actually do right now to make the situation better, as in short term? You know, I think we can maybe deprecate this officially, right?
E
We can maybe change some things to already help, but then, unfortunately, there is maybe a wider question of how we can actually make sure that this thing disappears, or that people move it over and prioritize it, and I'm not sure who... you know? Maybe we can work a little bit with others so that we can help them, but they can also, you know.
F
I totally... I mean, I mentioned this before as well, and I forget where that discussion happened, but in some other issue, or maybe on the main epic, I pointed out that this to me is like two levels down from the main issue, which is that the whole self-monitoring feature runs by default. So we happened to stumble upon gitlab-exporter, and then it had some history, yeah. We talked about this already, but really...
F
The way we stumbled on this was that GitLab self-monitoring as a feature is very heavy; it clocks in at a couple hundred megabytes even if I run it as a single-node Omnibus. So yeah, I guess there are two angles from which we can start this. Either top-down, and there's a separate issue for this.
F
If you remember, where I suggested maybe we can review whether this whole suite needs to be enabled by default, that's more the top-down view, because there's a bunch of other things that run adjacent to gitlab-exporter that also consume memory, just maybe not as much. Or there's the bottom-up thing, where we look at this one component out of the eight or so that need to run to power GitLab self-monitoring, and just start with this one, yeah.
F
Yeah, I don't really have a preference, but I totally agree that this is definitely also in the realm of GitLab self-monitoring.
B
It seems like there are a few issues missing from this epic, right? So the deprecation issue; it looks like we need to create an issue for deprecating this, and then one for moving to a different server if we want to get away from Puma. I didn't see any issue that called that out specifically; it may be buried in a comment somewhere. That's something.
B
I will make notes in the agenda for those follow-up issues. And then, do you have that link handy for the other one, the other self-managed services that we could cut back on?
F
I don't have it handy, but I'll find it, just a second.
F
Yes. So I'm pretty sure, whoever picks up these issues... some of them are maybe 90 to 95 percent clear, but I'm fairly confident that removing process metrics from gitlab-exporter is something we can do now; moving the Sidekiq metrics is a bit more work, but it's actionable.
F
Yes, I think I created a table somewhere where I summarized what's actionable right now and how confident I am in doing it, by the way; it should be somewhere in there as well.
B
We have this scheduled for 13.8. So what would the outcome of this issue be?
F
I mean, I think it's a product decision, so we need to find out: is that even acceptable to do, or under what circumstances could we disable this? The way I look at it, and I might be wrong, is that if I run a very low-volume, small, single-node GitLab deployment, I wonder: is it really necessary to have self-monitoring enabled by default for this node?
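For what it's worth, an admin can already opt out per instance with Omnibus settings; a sketch, with the exact key names to be verified against the Omnibus docs:

```ruby
# /etc/gitlab/gitlab.rb (Omnibus config is Ruby syntax)
prometheus_monitoring['enable'] = false  # turn off the whole bundled monitoring suite
# or, narrower:
gitlab_exporter['enable'] = false        # stop just the gitlab-exporter process
```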
E
I think I'll reach out to Sarah, because she should know, or maybe she has some data on, you know, instance size and memory and these metrics being enabled or not. But I think this is a general question, like we talked about at the higher level: if we are running in a really memory-constrained environment, are there specific features, this being one of them, that we are comfortable disabling in order to significantly reduce the memory footprint? That's what I think.
B
Where are we next? Looks like 13.8 is mostly review carryover from 13.7. Is anybody not sure what they're working on for 13.8? And remember, there's some holiday time off in this one for folks, so less is probably more during the next milestone.
E
I had one additional thing that I'm not sure we need to formalize, but I would really like to have a baseline mini-benchmark for our memory stuff. As in, you know, whatever it is, in my mind as simple as possible: a box where we know that in 13.7 this is the memory consumption, and then in 13.8, when ideally we've finished a bunch of things, we can assess again and see if we actually made an impact. I'm not sure if that's captured anywhere.
A
We've had an issue for that, but yeah, it's quite tricky. What do we want to measure? Like, under load, maximum peak usage? For what, and on which instance, do you mean?
E
No, not production. I mean, let's say we have a minimum requirement right now of four gigs, where we expect stuff to behave, and I think maybe we have a set of metrics for what we expect in terms of performance for that to actually behave. And I would just say, for that instance with four gigs of RAM, for example, under a load that we agree on, whatever is reasonable for that amount: what is the amount of memory used?
E
Just a baseline, right? I'm not saying I know exactly what that baseline should be, but we're looking here at, I think, primarily trying to get towards two gigs of RAM for smaller instances. So I think we need a, you know: currently this is what our minimum does, and then this is the reduced amount of memory to follow on with.
F
I really agree with this, because it's something I need right now. Today, again, I've already spent several hours trying to reproduce that baseline using our GCP project, which is great that we have, but it's slow to set up, yeah. And I wonder, though: there's a lot of overlap with what QA engineering does, right? Yes.
F
Already, and they do collect memory metrics as well. I don't know if they collect them in a structured way, in the sense that this goes into some...
A
Prometheus. But we could, like, query it; they have Prometheus on these instances.
F
We need to put this somewhere, but I feel there's so much overlap. And am I correct that they've also completely reworked... oh great, Greg, you're on the call, right? Didn't you recently completely rework the way the GPT tool works? To be frank, I have not caught up with it yet, I still need to look at it, but it certainly sounds like there is a lot of overlap, because we would be doing the same things, right?
F
We would just maybe focus more on different metrics, it sounds like, and maybe, yeah. What's the question? Sorry. So we need some kind of setup where we can benchmark the alleged improvements that we make as part of memory optimization, so that we actually know we made a positive impact.
F
We need to do this on a recurring basis, right? Every time we make a change, we should verify that it actually does what it's supposed to do. So we need some kind of baseline reference benchmark that we can refer to whenever any of us goes in and makes changes. In the past we've usually done that by working with your team, basically involving QA and John on issues and asking you to run the pipeline for us. But I'm wondering, maybe at this point we should align our workflows so that we can also do that on our team, and probably use the same tooling, because it sounds like it would be largely the same thing; we might just be focusing on different data sets.
G
Yeah, I mean, it's difficult, as you probably well know; there's no easy capability for this here. It's a very annoying thing, because it feels like it's so reachable, so graspable, to be able to do this, but when you get into the nitty-gritty of it, it actually is quite complicated. So yeah, someone mentioned running GPT against ephemeral instances; that's difficult, because GPT needs an access token, and access tokens can currently only be created through the UI.
G
So that's the first kind of major blocker there, which is very, very annoying. I've been watching an issue for months, over a year now, waiting for that to be fixed so that you can actually create a token via the API or something; that would make life a lot easier. Our functional tool, GitLab QA, can get around that because it has Selenium and a browser, so it can actually go...
G
It goes and creates the access token, hand-cranks it through the UI, which isn't ideal, but that's what it does. So yeah, we're in a really awkward space, because it's kind of half functional, half performance. To actually do what you're asking, with constant checks, essentially every day or however frequently you want it, it probably leans more towards GitLab QA. I don't know if it's possible to measure the Docker memory usage in GitLab QA; if it isn't, I can't see how that wouldn't be doable.
G
You're just running some commands, just querying Docker for memory usage. It's going to be quite difficult, though, because, obviously, how often do you sample the usage? How do you know there couldn't be a memory spike in the middle that you just don't know about? So we can sit down and brainstorm this, maybe in the new year, and try to figure out the best way to tackle it.
G
So, as you see, it's not easy, but we should try to tackle it, we should try to solve it; it'll just take some time and some thought.
F
Yeah, no, that sounds great. I do think a native, or like an obvious, extension point, and it was already mentioned, could be Prometheus, right? Because we track things like RSS over the duration of the performance test suite; if we could just extract that data, it could just be part of the performance tool run summary.
F
If that was a CI job, we could extend it to collect that data somewhere, and then we would have an automated way of even recreating our baseline if it has changed (it might change again over time), but then also make that data available, and have future runs with changes applied compare against that baseline, wherever it might live. It could really just come out of Prometheus in the beginning, because that's easy to query, right?
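A sketch of "just come out of Prometheus": pull RSS samples for the test window from the standard /api/v1/query_range HTTP API. The host, job selector, and metric name are assumptions for illustration:

```ruby
require "net/http"
require "json"
require "uri"

# Query RSS over the last hour of a performance-test run.
uri = URI("http://prometheus.test.example:9090/api/v1/query_range")
uri.query = URI.encode_www_form(
  query: 'ruby_process_resident_memory_bytes{job="gitlab-rails"}',  # assumed metric/selector
  start: (Time.now - 3600).to_i,
  end:   Time.now.to_i,
  step:  "60s"
)

body    = JSON.parse(Net::HTTP.get(uri))
samples = body.dig("data", "result", 0, "values") || []  # [[timestamp, "value"], ...]
peak    = samples.map { |_, v| v.to_f }.max || 0
puts format("peak RSS over window: %.1f MiB", peak / (1024.0 * 1024))
```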
G
It does have an API; I believe you can query it and ask it to return some stuff. But, for example, our GPT environment is multi-node; that complicates things straight away, you'd be asking for multiple... Yes.
F
Maybe for now that would be even simpler, well, whatever it might be. And another thing I wanted to talk to you about, but we can do that at the same time, is: I wonder if there's a good workflow for applying, like, settings deltas as part of these performance pipelines, so that we can say: run this pipeline, but with the GC tuned a little bit differently, for instance. Stuff like that.
G
For the performance pipelines generally, I push back against that kind of thing, because if you start bringing in A/B testing, then the performance pipelines take like an hour to run for one run. And the difference is that we've got GPT coming in and running itself, which is outside of the environment, whereas what you're asking is for our other tool, GET, the GitLab Environment Toolkit, to come in and update the environment to have one specific setting different, and that can obviously get quite out of hand...
G
...if more and more teams came in and asked for all these things. So we always target the environments to be on the current latest, and then we say we can do one-offs; obviously that's fair enough. But in an automated way? Then probably not. But that's why.
B
I think we have the issue in the epic. I think this is the right epic to track that, Fabian, to track your original request, and I created an issue this morning to make sure that we monitor the memory growth over the 2-gig baseline, so we get those signals once we actually hit our baseline, whatever it may be: when new features or code change that baseline, we get signaled. And, again, you'd brainstorm how to do it. It sounds like you want a secondary one for the 4-gig baseline.
E
Well, I think my request is essentially: I would like to have the simplest solution to assess the status quo as it is right now in terms of memory usage, and then, when we make a change, to be able to make some kind of inference about whether that has an impact or not. Now, I understand there's an arbitrary complexity in how that could look.
A
Maybe I could just hardcode, like, the Google machine IP and just do it in a dirty way. I mean, almost the same as we were doing manually during the 2-gig week: just have a prepared Google machine, or two or three, and just... I don't know, it would be dirty and not pleasant, with a lot of hardcoded values, but maybe it would be faster. I don't know.
C
I mean, from what I hear, guys, I'm talking more about how I can validate my changes before I merge them, to provide a meaningful number, whereas it seems like your perspective is how I can see, holistically, whether the changes that we are merging are giving us a benefit or not.
E
Yeah, I think, maybe just because I don't have the deep, deep knowledge, I'm more interested holistically, right? As you're saying, I think there are absolutely two things here.
E
The one thing is just tracking memory usage against a baseline, and there may be many things going on there. But I do also think it is quite important, if we do work and we are trying to provide impact, to be able to say: I made this single change, and beforehand the memory consumption under a specific load was, let's say, four gigs, and now it's 3.8 gigs, we're happy, right? I'd like to be able to measure that.
E
If we have something with which we can track and measure the memory consumption, we should be able to do that reliably, even if it's very simple, and we can get more complex over time, against different deployment sizes and different scaling points. But at least... I'm a little bit wary that we're going to do things and then not really be able to articulate, in a couple of months, what impact that has had on memory consumption. That's my worry.
C
Now, let's maybe look at the change that Nikola was doing, the cached queries, where I would argue that this change, even though it decreases memory pressure, has no noticeable impact on RSS, because it kind of gets lost in all the memory allocations that we have and in the way memory is actually persisted by the system.
C
So I'm actually having trouble saying that a single metric is the metric that's going to holistically tell us whether something is behaving better or worse, because I think it really depends on what you are fixing.
C
For example, we were looking at the amount of cached queries, and we saw a dip in the amount of SQL queries executed by Postgres. But you cannot expect that there's going to be less memory usage from that; or rather, because it's a very small drop in the overall memory usage, it's not going to affect RSS at all.
C
Basically, what I'm saying is that a single metric is just not going to give you information about the trend. Or maybe it gives you information about the trend if you really make a lot of targeted changes that focus on RSS, things like GC and jemalloc; for basically everything at that level, it's very likely going to affect RSS directly.
C
You may see the impact on RSS in the long term, but not really when, let's say, validating your change. So what I'm coming to think is that each time we make a change, we should add the relevant metric to a catalog of metrics that we monitor over time, to see how they change. For example, we have the cached queries one.
C
So what I'm saying is: ideally, we should have RSS being reduced, but practically it's the hardest metric to really move the needle on, and where we can really see the impact is in these more detailed metrics about the aspects that we are fixing. It's about keeping on top of these metrics while we fix them: that they do not regress over time, or that they change at the rate we are anticipating. Another one would be the amount of memory allocations and the amount of live slots, which is what Matthias is looking at right now. It's also pretty important to see, long term, how it changes, whether it grows, because this metric, being pretty detailed, gives you a pretty good hint about the application size and how much pressure the application generates through memory allocations.
C
I don't have anything else, but what I'm saying is that we should start monitoring along many more dimensions and try to see the trend of these dimensions, how they change over time. That can give you a hint as to whether you are improving or regressing on the things that we are fixing, because we are going to be fixing different things, basically, and RSS...
F
That's the point that Camille was trying to make as well, right? We have this pre-merge scenario where we're working on some change and think it's going to make a difference; then you send an MR and you want to kind of prove it's going to make things better, and then maybe you run the performance tool suite to measure that. But that is not .com, right? Those are our Omnibus reference architectures that we're testing. So I think we need both. We also want to build up this metrics catalog.
F
Over time, maybe via dashboards, or maybe we even have alerts for this stuff, who knows. We probably need a combination of the two; I don't think you can squash all these things with just GPT.
C
I'm kind of thinking that, really, we need to figure out a targeted way to test our changes when we develop them, to have some understanding of what impact we are anticipating, and then to validate that anticipation, whether it actually translates to a high-throughput system, which could be dev.gitlab.org, staging, GitLab.com, or whatever other environment.
C
With GPT being one of these, but it also has this statistical difference: it's not very statistically accurate with respect to how the system actually runs in production, due to the data set and the request distribution. But I'm thinking that, whatever we do, we should understand where this change will have an impact. Like with Nikola's change: we knew to look at the amount of cached queries, so we knew where to look.
C
So I'm anticipating that the clue here is: if we identify the metric that we are interested in, we should have a quick way to run all our testing against all of these environments in the quickest possible time, and to graph this metric on our dashboards, with Prometheus being a very good tool here, because it actually gives us a history of the data that we can graph over many months.
C
Basically, we could then also look at how it changed since, say, one month ago, in a more, let's say, statistical form. I kind of like GitLab.com because of its request distribution, but it's also really hard to make a difference on GitLab.com.
E
Yeah, and I think, at least from my understanding, one of the goals here is sort of the opposite target: a small instance in a memory-constrained environment. And I think there we also want to know, you know, if we run Puma in single mode, what does that change? That's never something we're going to try out on .com. Well, maybe with Kubernetes, I don't know, but you know.
G
So yeah, there's been a lot of discussion about different approaches there, and I think the team has it completely correct: there are two different things here. There's the MR...
G
I might call it the MR kind of check, which is the check of the actual in-development code; that would be a GitLab QA kind of piece, or a different pipeline where the actual code is compiled and built into an actual instance to test against. And then there's the other aspect, GPT, which is the kind of integration, outside test that's done right towards the end of the process, against what is already an Omnibus build we're running a test on. At the moment, what we do is keep an eye on memory.
G
But to be honest, it doesn't really get checked that much, because, as Camille said, we very rarely see memory change; memory has been pretty consistent since we started running this test. It's far more intriguing for CPU and other things. We can try to maybe look at that more, but it's on a kind of ad hoc basis, and maybe we can make it more automated. But I think the team is probably much more interested in that MR state of memory changes, and one test that could work well with that...
G
...is to do a GitLab QA test against a built Docker image that is strictly locked to two gigs of memory. If I remember how Docker works correctly, Docker should not take well to that memory being overrun, or attempted to be overrun, and the instance would start failing quite quickly. So there could be some kind of test you could do there to try to keep that monitoring up, but it really is a complicated thing. It's just trying to figure out the right balance of trade-offs and approaches.
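A sketch of that capped-container check; the image name, container name, and sampling cadence are assumptions, while --memory, --memory-swap, and docker stats --format are standard Docker flags:

```ruby
image = "gitlab/gitlab-ee:latest"  # assumed image
system("docker", "run", "--detach", "--name", "gitlab-2g",
       "--memory", "2g", "--memory-swap", "2g",  # equal values leave no swap headroom
       image) or abort("container failed to start")

10.times do
  # One sample per call; the Go template picks just the memory column.
  usage = `docker stats gitlab-2g --no-stream --format "{{.MemUsage}}"`.strip
  puts "#{Time.now} #{usage}"  # e.g. "1.7GiB / 2GiB"
  sleep 60
end
```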
G
I think it'll fail. I need to go check how Docker handles memory going over; either it will fail, or it will just become incredibly bad, because it'll dive into swap and become unstable, and at least then it would fail and you'd have something that says: hey, something's going wrong here, it needs to be investigated for memory. But then, I guess, the expectation is that we run this on everybody's commits and MRs, so it could cause confusion, because memory is not easy.
G
As you know, it could just be the case one day that it took a bit more memory for some reason, you don't know why, and then it doesn't happen again. I think that's the hardest bit: sometimes it will just wax and wane, whereas other times people will commit something and there's a definitive, like, 100-meg increase, and obviously that's maybe a bit easier to track.
C
For example, I have a targeted question for you to consider: when you're going to be looking at the heap live slots, figure out how you could gather this metric over time, how you would measure it, how you could graph it to see whether it's regressing or not and how it's changing over time. Because this is one of the examples of something you're not going to notice on RSS.
C
It doesn't matter on what, but on something that relates to the aspect we are working on, how it affects what we are working on, in the metrics from the application; and GC heap live slots seems like one of these examples. So the question is how we could use that, how we could graph it and add it to our toolkit of metrics that we are looking at, and how we could look at it from the QA perspective as well, how we could gather this data, if it's possible.
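For reference, the counters mentioned here are exposed by Ruby itself, so gathering them can start as small as a periodic sample; every key below is a standard GC.stat key:

```ruby
require "json"

# One sample of the GC counters under discussion (live slots, allocations).
def gc_sample
  s = GC.stat
  {
    ts:                      Time.now.to_i,
    heap_live_slots:         s[:heap_live_slots],
    heap_free_slots:         s[:heap_free_slots],
    total_allocated_objects: s[:total_allocated_objects],
    minor_gc_count:          s[:minor_gc_count],
    major_gc_count:          s[:major_gc_count]
  }
end

puts gc_sample.to_json  # a real collector would export these as Prometheus gauges
```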
C
It should kind of correlate with the size of the data set, but it should give you some initial idea of whether we are consuming more memory over time on the pages, or whether we are kind of going down, on average, per request. So maybe this is like a 95th percentile, maybe something different, but this would be my idea for the single metric.
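That per-request average could be sketched as a small Rack middleware that diffs GC.stat around each request; this is hypothetical, not something that exists in GitLab today:

```ruby
# Hypothetical middleware: objects allocated per request.
# GC.stat(:total_allocated_objects) is a standard Ruby counter.
class AllocationsPerRequest
  def initialize(app)
    @app = app
  end

  def call(env)
    before = GC.stat(:total_allocated_objects)
    response = @app.call(env)
    allocated = GC.stat(:total_allocated_objects) - before
    # Caveat: in a threaded server, other requests' allocations bleed into
    # the delta; a real version would feed a histogram for averages or p95.
    warn "#{env['PATH_INFO']} allocated=#{allocated} objects"
    response
  end
end
```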
E
I need to drop, unfortunately. Thanks for the discussion. Also, I'm not implying that this is... essentially, I'm quite excited about metrics; I like measuring things and understanding them. I'm essentially just raising that this is important to do, and I think these are great ideas, so I'm looking forward to, you know, the issues and the direction we're going in.