From YouTube: Memory Team Issue stack ranking - 2GB edition
Description
The Memory team is stack ranking issues using RICE to understand priorities to achieve the GitLab on 2GB RAM goal.
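(For context, the RICE score used throughout this meeting is Reach × Impact × Confidence / Effort. For example, one scoring discussed later in the call — reach 600, impact 2, confidence 0.5, effort 4 — works out to (600 × 2 × 0.5) / 4 = 150.)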
A: Yeah, it's great. So this is the memory meeting we created to record our meeting, and we forgot to, so you're missing 10 minutes of content for those that are coming in. We are essentially discussing some organizational questions of where we're tracking issues and how to ensure that we're not missing any items going forward.
C: Can you just read the topic, the one that is the top bullet point? Okay, so this is "use copy-on-write flagging in MRI". Matthias, you worked on that; is it something we should follow up on?
D: No, I have not worked on this. This is something that came up out of a thread on Twitter. I thought it sounded like a super interesting idea, and I tried to poke that person for more information, but they never followed up with anything actionable. He was an Airbnb engineer, and apparently they're working on a custom Ruby VM fork that supports this. I thought I'd just bring it up prior to the week in case anything actionable fell out of it, but it never did.
D: I mean, I'd like to keep the idea around. Because here's the thing: they are working on this, so maybe we don't need to replicate the work. Let's just keep it on the radar. Maybe we don't have to work on it; maybe Airbnb will send a patch back upstream; maybe it will be in Ruby core for Ruby 3, who knows.
A: Okay, but that feels to me like something that's exciting, but in the future, and not actionable right now.
D: Yeah, maybe you're right, and maybe in sum total, with other things... okay, it's a fair point, we could do it. I think the question is that it's not clear where these allocations come from, so we don't even know. In terms of the RICE stuff, I certainly don't have 100% confidence that we'll be able to fix them all, but...
C: It kind of makes, let's say, a noticeable impact across the fleet.
D: Yeah, and there were actually two aspects that came out of this. Sorry, the empty hashes were only one thing; there were also a bunch of other hashes that were just quite large, where we would still have to follow up on what they are and whether they're something we can shrink or get rid of. There was one, and I think for that other one we do have an issue somewhere.
D: Let me check which one it was... I think Craig might have created it, actually.
D: Yeah, the "review unicode hashes that end up in memory" one. That's this big JSON file that we load into memory, which is also like two to three megabytes, I think, or one and a half, something like that. It contains unicode mappings, which I think we use in the frontend somewhere; something we could look into as well. It's just not clear whether this is something we can simply get rid of; there's probably a reason why we need to make it available, but yeah.
D: That's another story that came out of this. So this parent topic here, "investigate memory content", was really just us taking a heap dump and looking at what is in memory at any given point in time, and then the things that looked like maybe we can shrink them, okay.
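(A minimal sketch of the heap-dump inspection described above, using MRI's standard objspace API; the output path is illustrative, not from the meeting:)

```ruby
require 'objspace'

# Record allocation sites first, so dumped objects carry file/line info.
ObjectSpace.trace_object_allocations_start

# Dump every live object as one JSON document per line. The file can then
# be aggregated offline to find the large or empty hashes discussed here.
File.open('/tmp/heap.json', 'w') do |f|
  ObjectSpace.dump_all(output: f)
end
```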
D: Yeah, so the last one I just mentioned we have an issue for. Let me double check whether it is in the list of issues in the RICE.
D: Yeah, it's not in that list. So I'm adding a comment here about things that are not in that list, and then maybe we can create a new one for the empty hashes separately, because I think those need to be tackled separately.
D: I mean, I looked at it for a while, and my conclusion was that what we're seeing here is the Ruby VM allocating, or where most of the time was spent, or most...
A: Can we move on to the next epic? I'm just trying to move it along, because we have a very long list of topics. "Try to hack Spring to run production Omnibus." And, just for your context: we're trying to ensure here that everything covered in those topics is reflected in the other issue for stack ranking, just making sure we're not forgetting anything.
C: So I'm not sure if it's even feasible that we're ever going to do it. It brings the memory benefits of running from a common process, but there are still a lot of aspects I don't understand, like why so much copy-on-write happens when you fork a Puma process, basically. So, yes, it's beneficial on one hand.
C: On the other hand, it comes with high uncertainty about whether it can be done well, because things like the signal handling are not the best as well, so it needs further investigation. But the impact is also tricky, because we're not going to run that way for the big installations. We may only decide to run that way for very small GitLab installations, because it's not really a very desirable way to run GitLab, from a single parent process.
C: In the big instances, like in the cloud-native deployment, we split the processes even further. So I got to the point of thinking that it doesn't make sense to pursue that aspect at all, because its usability is limited to only very small installations; in all other cases we're not going to run that way, it just doesn't make sense.
C: Maybe this would not be a problem if you would see a very big benefit with very high reach, but I consider the reach to be very minimal in this case, because you would use that as a last resort, only in very specific circumstances; we would not use it in general. That's my issue with it.
A: Okay, can I move on? Cool. "Check the impact of swap."
C: So it's more like a documentation output that we may produce. I mean, Fabian, we may even consider this to be a bullet point in, let's say, a rehaul of the "running GitLab in a constrained environment" documentation, something we discussed on Monday, because this documentation today mentions Raspberry Pi. Maybe we want to rework this documentation to discuss how to configure GitLab for 2-gig installations, and how swap should be used could be one of the bullet points as well.
C: So maybe our issue here ought to focus on that: we should rehaul the "running GitLab on constrained environments" documentation, with this being one of the aspects we should cover.
A: Yeah, no, please. I think then we can add it to the list.
D: Yeah, Alexi worked on this; I already took a note of that. I'm not sure if we have an issue for this already, but it's definitely something we should look at. We might just have to create a new issue.
A: Is this reflected in the RICE issue list? No, I guess not.
C: So, if you're on, say, the Kubernetes board: if you run from the same container image, from the same file that is in the container, it would be shared across different processes and not use too much memory either. So that was another way of thinking about it, ensuring that these 250 megabytes of, let's say, Ruby code that we load can be effectively shared via the system buffers across different processes, in a way that also works in the Kubernetes world.
G: Yes, we have an issue, but it's kind of included in the nakayoshi fork one. I didn't open a separate GC compaction issue, which would be redundant work, because I didn't notice anything specific with the Puma update itself when I played with it. So currently updating Puma is our priority, but it will result in GC compacting.
D: Yeah, that was just the parent theme, because we're already on Ruby 2.7 but we weren't necessarily taking full advantage of it, maybe. Puma actually has a config switch you can just toggle which does this for us: compacting the heap before we fork into workers. So we decided not to do it manually, but to get it by upgrading to the new Puma version, which has other benefits.
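(A minimal sketch of what that config switch looks like, assuming it is Puma 5's nakayoshi_fork option, which matches the fork mentioned earlier in the call:)

```ruby
# config/puma.rb -- sketch, assuming Puma 5.x
workers 2

# Run several GC cycles and GC.compact in the master process before
# forking, so worker pages start out maximally copy-on-write friendly.
nakayoshi_fork
```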
D: Yeah, yes, that is ongoing. Alexis, you're working on this, right? The five point... what is it, the 5.1 bump?
C: I mean, I was able to partially investigate it, but there was nothing really interesting that I found, at least from my perspective. It would require more time to maybe look at these MRI flags that Matthias mentioned, and maybe try to attribute more of the other allocations that we talked about somewhere else.
A: Okay, so should we let that one go?
D: It sounds to me like that's just the result of the application consuming more memory over time, or mutating areas in memory. I don't know; I can't say, I didn't look into this at all. It sounds like this is probably due to a billion reasons; I don't think there will be a single thing that falls out of it.
C: I didn't really test, in the end, whether this is also affected, because by tuning the GC parameters to be pretty aggressive I actually got to pretty constant memory usage over time, instead of growing memory usage. So maybe these GC settings also help with this idle-versus-runtime memory; I just didn't investigate that.
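(A hedged sketch of what "tuning GC parameters to be pretty aggressive" can look like; the values are illustrative, but the knobs are standard MRI environment variables:)

```ruby
# Read by MRI at boot (listed here as reference; values illustrative):
#   RUBY_GC_HEAP_GROWTH_FACTOR=1.1   # grow the heap slowly
#   RUBY_GC_MALLOC_LIMIT=16000000    # GC sooner after off-heap allocation
#   RUBY_GC_OLDMALLOC_LIMIT=16000000
#
# The effect can then be checked from inside the process:
GC.start(full_mark: true, immediate_sweep: true)
p GC.stat.slice(:heap_live_slots, :heap_free_slots, :old_objects)
```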
D: Well, I looked at it from the copy-on-write angle, and at how much we can share, when I worked on the compaction stuff. What I saw was that the number of pages we share when the system is idle is quite good, but as soon as you put it under load, that deteriorates completely; that effect will, yeah, deteriorate, you're right, completely. So, yeah.
D: I don't know why that is, but it's clearly pretty visible. It's probably just the same thing you saw there; I don't know what you looked at exactly, but it's probably just because the application mutates a lot of memory as it does its thing, whatever that is.
C: I mean, at least from my perspective, it could be the outcome of the application basically still initializing. I was aiming to figure out exactly which code paths are still executed in the time between starting the server and processing the first request, to actually see what we are using those CPU cycles for. But you also mentioned the memory allocation: I found that making the GC pretty aggressive and making jemalloc pretty aggressive actually makes it allocate a smaller amount of memory, because with the default settings we would allocate a ton of memory that we would never free, and it would only keep allocating more and more and more over time.
D: Yeah, I also wonder if it could be because of metaprogramming stuff, where you go back and define methods after the fact if you hit a certain endpoint, and things like that, so that whole class hierarchies that could previously be shared are not shareable anymore.

A: My suggestion here would be: it sounds like there's still a little bit to talk about, and there is investigation potential, but it's not something we need to do right now. So maybe it's worthwhile opening an issue, to at least not forget about this, but I'm not sure what needs to happen next.
C: Fabian, my proposal would be: I'm pretty certain the GC settings issue we have already reflects this, and the other angle we might want to investigate is whether there are aspects we could optimize between forking and processing the first request, because that is something that may be affecting this. So maybe I'm just going to create an issue to measure this particular aspect.
C: I think so. It's still just a story, and it still gets a ranking, so yep, exactly, cool. It might end up in the last place of the list, basically, yeah.
A: But it's still in the list then, right? Cool. Okay: "measure the size of features"; that uses language I can appreciate a little bit more.
D: So this ended up making it into the list through a slightly different issue, but we have it covered. We only have ideas currently for how to... did you end up looking into this, Nicola?
D: Yeah, that is covered, because we have... oh, actually, let me check, okay.
D: So in the RICE list of issues we had the gitlab-exporter thing, which was only one of those components, and I don't know if Camille had any other ideas. I then expanded that specific one into all of monitoring, because there's a bunch of other systems that came up as part of the default monitoring settings that we enable. But that's as far as I went; Camille, I think you looked at this as well, right? I don't know if there was anything else.
C: I'm not sure if there is anything else that we can disable and still have GitLab continue to work; from that angle, those services are basically all the monitoring ones. Because, at least for Postgres, we might optimize the memory buffers, for Redis as well, and for the other services too; but that optimization is reflected in "optimize GC and memory settings of all other services".
D: On gitlab-exporter: I'm not talking about dropping monitoring, just the gitlab-exporter, because it's deprecated anyway. In terms of monitoring, what I was suggesting, and that's really a product decision in the end, is to not enable it by default for every customer in every environment. I'm not sure, I don't have any data to base this on, but if I were running a very small single-node GitLab deployment for a few dozen users, would I need Grafana to monitor that system?
D: Let me check if it was in the RICE list, because that's the stuff I'm working on this week, so it's quite new. No, it's not in the RICE list. So I generalized the one about dropping gitlab-exporter, which was on the RICE list, to revisit this. And honestly, this is not something the engineers can really work on, because it's a feature decision; I think it's something product managers need to look at. But there is an issue; I can mention the one that I broke out in the comment.
A: So my very high-level thought on how we can probably go about it, and that's just on my limited information, is: first, making changes that are essentially efficiency improvements for everybody. So, dropping gitlab-exporter because it's deprecated and uses 300 megabytes of RAM overall is great, especially if no functionality changes; it's just being more efficient. And I think there are quite a few things that can be done in that class. When that is more or less exhausted...
C: So we discovered that at least running in Puma single mode and disabling all these monitoring services, as Matthias mentioned, actually frees a lot of memory that we could use for GitLab. So it still doesn't degrade GitLab, I mean the product; yes, it degrades some aspects of the monitoring functions of GitLab, but maybe it's still a good trade-off in this case, in the bigger picture.
D: I think there was... I might have created this, but I might have been confused about the direction of it. I think it is disabled by default, and it's very strange: apparently Rails is slower with JIT enabled, for some reason, because of the overhead of JIT. Anyway, long story short, unless someone disagrees, it doesn't sound worth looking into, because I think Ruby core people, or Rails core people, have already investigated that and said it is not something you should be doing.
D: Yeah, I think we can put it on hold. Cool, you can skip this one as well; that was just Camille wanting to create a benchmark test-results suite that we can compare against, so there's nothing to do.
C: There is one aspect, but this is more of a quality thing: maybe the quality testing should include one of these more constrained environments, with a smaller amount of RPS, that we are constantly testing.
C: But it's not very streamlined. If you ask about it, it's not something we can just change and then, let's say, another one or two hours later get comprehensive information from some kind of automated testing suite and automated environment creation — information that is not biased by any specific quirk of the configuration that we applied.
A: Okay. I guess this is more of a collaborative effort at this point, so we should capture that we want to do it, but I'm not sure we need to stack rank it for immediate activities.
C: It's probably going to be required anyway because of the Puma single mode, because our Prometheus metrics block on the Ruby GVL, and that makes it very bad from the performance aspect. So maybe it's not "remove Prometheus metrics" but "figure out a way to make them more efficient", and maybe figure out how to move away from our fork of the Prometheus client as well. So...
C: Sure, I'm likely going to create another issue for this specific aspect, and then we can maybe link to the issue from the Monitor team. But it gives me the perspective that we can actually free maybe 20, 30, 40 megabytes per single worker on a big GitLab installation, memory that would otherwise be consumed.
C: Code, yes; I don't have an issue for that. I was thinking that maybe we could add a very simple addition to the Ruby VM that would pin each method that was ever executed as part of the stack, and then gather metrics to understand which methods were never executed during the execution of the application, and maybe see if there are parts of GitLab, or parts of the gems that we still load, that are not used and could just be dropped.
D: But Ruby is a dynamic language, right? How do you even discover that? Things might be loaded lazily if you have, say, a test suite that does not instrument a certain path.
C: Yeah, I was thinking of that in the context of getting this breakdown from production. So Coverage works, but it's actually super slow to run in production, whereas I could simply add a single byte for each iseq, and basically that is how I could track it very cheaply.
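(For reference, MRI since 2.6 ships something close in spirit to the cheap per-iseq flag proposed here: oneshot coverage, which records each line at most once. A minimal sketch; the require of the app entry point is a placeholder:)

```ruby
require 'coverage'

# Each line is recorded at most once and its instrumentation is then
# removed, so the overhead stays far below full coverage tracking.
Coverage.start(oneshot_lines: true)

require_relative 'config/environment' # placeholder: boot the application

at_exit do
  Coverage.result.each do |file, data|
    puts "#{file}: #{data[:oneshot_lines].size} lines ever executed"
  end
end
```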
D: Yeah, could we make this part of... because we already have a story for tracking memory use based on features, which would require us to hook into something like TracePoint to find out what's actually going on in terms of the code that we load. Maybe we can kill two birds with one stone there and also track symbols that are never being accessed.
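(A minimal sketch of the TracePoint hook mentioned above; it only records which methods get called, and attributing that to feature groups would be a layer on top:)

```ruby
require 'set'

# Collect every distinct method invoked while tracing is enabled.
seen = Set.new
trace = TracePoint.new(:call) do |tp|
  seen << "#{tp.defined_class}##{tp.method_id}"
end

trace.enable
# ... exercise the application here ...
trace.disable

puts seen.sort
```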
C: It's similar but different, though probably from the same category. So the question is: do we need a category saying "attribute memory usage to the groups"? Because then you have the attribution of the runtime allocation of memory, and you have the attribution of the code to the group. I'm not sure. Matthias, can you handle that and figure out how to make it nice with the issues? Yeah, sure.
A: Okay, last thing: "extend Ruby VM to provide detailed GC compaction details". I think we have that, don't we?
D: No worries. I think we should also go through the next epic, because there were a couple of stories in there too, and there's actually a whole sub-epic in it which I hadn't looked at at all, and which we also didn't talk about in the RICE, which...
D: Yeah, yes, it is the one about gitaly-ruby.
A: Yeah, so I'm not sure why it is in that epic, and maybe that's another thing: next week, if I have a little bit of time, I'll go through our epics and ask questions about why certain things are where they are. So maybe, for the sake of moving forward, can we just leave this for now?
G: Yeah, it sounds reasonable not to jump into it, since that work is going on. But I think it's in this epic because we expect some improvement in RAM, right? So maybe it could happen silently, and we'll see something in the metrics if it gets finished. Cool, but I don't think we should work on it, yeah, I mean...
D: But there were at least three that we don't have issues for yet, so I have not created those yet.
D: Yeah, so these first five we can just add to this. I just didn't want to edit the description while other people are doing so, but those, yeah.
A: Maybe just a coffee, yeah; I like that idea as well. Thank you for going through the issues one by one, though; I think that was quite insightful, at least for me, even though it makes it a little bit longer. So see you in five.
C: Fabian, do you think we should score these things, the new ones, alone, or should we try this together?
A: That would be my suggestion, just to kind of get this done now that we're all here, but I'm open to suggestions from the group.
C: Actually, I added... we have, okay, this "rehaul running GitLab on constrained environments"; I renamed that to "rework documentation for running GitLab on constrained environments".
C: I mean, for "investigate", the confidence in this case means we enable and test. So I guess from that point of view it doesn't mean ship it to the customers; it means evaluate.
D: Yeah, I had a question about this: how did you go about it when you did your RICE voting? Because there's effort in the sense that it takes a long time, but it's not actually a lot of work, because it's just monitoring something, letting it sit in production and collect data. It takes longer; it adds lead time more than it adds to the...
C: So it means that if you have 20, or let's say 10, workers, it's having four buckets of four megabytes each, and sometimes it's even more, because if you have more metrics than fit in this four-megabyte initial allocation, you're going to require that amount of memory again. So four by four by ten: 160 megabytes in this example, at least, of memory just to be able to scrape metrics from the child processes, basically.
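(For reference, the per-process metric files described here correspond to how a multiprocess metrics store works; a minimal sketch using the upstream prometheus-client gem's DirectFileStore, the migration target mentioned earlier:)

```ruby
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

# Each process appends samples to its own memory-mapped file under this
# directory; a scrape aggregates across all files, which is why the
# per-file allocation gets multiplied by the number of worker processes.
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: '/tmp/metrics')
```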
C: That would result in a reduction of the memory. It could be, for example, saying that the initial allocation is not four megabytes but 550 kilobytes; it could be that we periodically try to wipe these metrics from memory; it could be trying to move to the upstream library. I don't know what it takes, really; okay, but we could do it.
A: I think I would suggest we move a little bit quicker, because otherwise we're not going to be able to finish this. And if there is not as much understanding, I would vote for the person who has the best understanding to go ahead, and we can always iterate on it later on and get into the details. But I would really like to finish this, so we can have a result.
C: 600. Or, I don't know, it's either 600 or 1000, because it actually affects everyone, but not to the same degree. Impact, I would say, will be two; okay, confidence 0.5.
C: I said four, okay; it has a lot of uncertainty about what to take on. Maybe there are smaller iterations, but it's not super straightforward to do.
A: Thank you. And you can see, this is what I like about it: you can really see this reflected in the score already. Okay: "extend Omnibus to automatically disable non-essential services". So this is only going to affect those that actually run in a memory-constrained environment, correct?
D: Yeah. Reach I would say is again a thousand, because it's literally the whole app and every user, unless anyone disagrees. Impact I would say is smallish; I can't for the life of me remember these weights. Is "lowest" a half? I would say 0.5, okay.
D: That's "minimal", yeah. No, let's keep it a little bit more generous, because it depends on the type of hash we're looking at. Confidence: give it 0.8, because there...
A: I think the word "rehaul" is fine, but I think "rework" is maybe a little bit more common and more understandable for poor non-native speakers like myself. So, okay.
D: It's a JSON mapping of unicode, or of shortcuts to unicode emoji, or something. So reach should be pretty high, because I think it's always in memory for everyone running GitLab; yeah, let's make it 800. Impact, I guess, is quite small, because it's also not that massive; give it 0.25... 1.5.
D: It was similar; I think it was a bit less, I think it was half of it, yeah. So I think that makes sense, actually: 2.25 here. Confidence: I have no idea, actually; I have low confidence, because there's probably a good reason we keep it in memory, so give it a low confidence, yeah, sure. Effort: I can't imagine it would be much; I don't know, say 0.5, yeah.
D: Yeah, so to me this is kind of the product portion of what Camille discussed above, about "extend Omnibus to automatically disable non-essential features": I think we need to decide what is non-essential, and for whom. Okay, so...
A: Let's say the reach is relatively small, though, right? Because that is for folks running in a memory-constrained environment. I would say the impact is likely relatively high, because you're disabling specific functionality. My confidence in this is probably also relatively high, as in, we would expect that to happen. The effort, I think, is maybe now like...
G: I could comment on that. So the reach will be low, the same as before; if we put 100, then 100. Impact is high, because for them it will give room to breathe. Confidence I would say medium.
G: Medium is 0.8. And the effort is pretty high; I would even put three or four, pessimistically, because we have a lot of things, like for clustering.
D: And another question, maybe that's more for Camille: it also came up that in production, in Kubernetes, we run... oh wait, what's that for? Maybe I'm confusing it... maybe I'm confusing it with Sidekiq, no.
D: We run a really low number of processes; I don't remember if it was one or two. But because we scale with pods now, could reach actually be way higher here in the future?
A: I'm not saying that's necessarily a problem, but it is an accurate reflection of the impact it's going to have on GitLab as a whole. If you do something for a very small number of users because you think it may still be important, the reach is still low, right? There may be other reasons to override it, but I still think that's an interesting perspective here, if there are...
C: But really, the question about the reach, and it's a question to you, Fabian, is what we... I kind of have doubts around it. Because we had this 2-gig week, our focus is on actually being able to run GitLab in the constrained environment; that's our focus right now, and it kind of makes the reach here, for the constrained environments, for the Puma single mode, not 100. Because at 1000 it's going to affect every single...
A: Let's leave it like this. And I like the concern in the ranking about local versus global optimization: really, how far is it going to get us toward this specific goal, versus what is it going to affect globally? We can maybe think about that, but it's also easier to see when you have the ranking for everything and can see how it actually turned out. So let's do the last two things.
D: Yeah, I think Camille raised a really good point with this local-versus-global thing, because now I'm thinking back: we were not really focused on .com when we did this two-gigabyte effort. It was really about: hey, here's a single-node two-gigabyte user; how can we run GitLab on that? And now we've started to reduce reach a lot, down-scoring stories...
D: ...that sound really impactful, but we down-scored them in reach because they do not affect .com. Yeah, I wonder how that shows up in these scores now, because I didn't really think about that when I did my RICE scoring.
A: And I think that may be a balancing of priorities as well. If we have things that are good for both global and local optimization, then those are actually the things we should do first; and then there are some things that are maybe much more impactful for the local goal, which we still need to do, but globally they don't matter. That's interesting to flesh out as well. Okay, I think there's one last thing.
A: I mean, that's the optimization we probably want, right? So the last thing we have to do is this baseline performance bit, and then we're going to be done.
C: Actually, we had a lengthy discussion on Monday about gathering as many metrics as possible to see the trend of the performance, which may include memory and other aspects.
A: So it feels to me like a thing that we just must establish at some point, because otherwise it's a little bit pointless: we're changing stuff against an unknown entity. So I don't even think we need to rank it, necessarily, because it feels like an absolute prerequisite for measuring what we are trying to do, in my mind. Sounds good.
A: Yes, that's maybe the best way of doing it. Okay, so, you know, I'm not beholden to following these things religiously when they don't make sense. Okay, so let's recap. I can't actually easily get the summary table into this right now.
A: I think we may need another round for reach within the 2-gig one, and then we need to look at global versus local. But what we already have is, essentially, scores for everything that we intend to do, which is a lot; there's a lot of stuff going on here. So we should be a little bit careful about over-subscribing, as in, it's a long stack. But I think we can probably come to a... I mean...
C: So maybe, and it's actually an interesting point, maybe we need two kinds of reach and impact.
A: Okay, so what you're essentially saying is that, because we created all of these issues with the reach tailored towards the 2-gig initiative, there is no difference in ranking here; the difference is really in global impact. As in: will it impact globally, or is this only something that goes into the 2-gig impact? That part changes, and the rest is stable.
A: So what I can commit to is: I'll get our average scores into this table here as well, and I'll rank by that, and I'll do that today. Then everybody can actually look at the stack rank and see if they violently disagree with some of the ranking, and then we can figure out why that is, because that is also what this teases out.
A: If our perception is not consistent with this, then we may need to think about it. But I think we'll have a scoring system in place, and we also have a place where we can keep track of all of those issues, because I would like to put them into one epic, with potentially sub-epics, so that we know exactly which follow-up steps we're going to take. That's important to me, just in order to not lose track of what's going on.
D: The thing that still bugs me a bit is that this one story got a really low score.
A: No, that's perfectly true. But it's important to write that down and say: this has a low score, these are the reasons for it, and this is the reason why we decided to do it anyway. That's something to document, so that when people come and ask, we have an answer. I think that is important, and it's why having these things sorted out ahead of time is really nice anyway.
A: Thank you for doing this. I think this was maybe a little bit new for some folks; I hope it was useful. We are not done with it, but at least for me, I learned a lot. It was probably one of the longest synchronous sessions I've had at GitLab, so that's exciting.
A: It felt almost like being in a room with a whiteboard, which I actually really like. Also, if you have any feedback regarding this, maybe we can do a sort of mini live retro; I'll post in the channel, just to ask: how can we improve, what should we have done, so that we learn from this?