From YouTube: 2021 02 01 Memory Team Meeting
A: What's next — what we can wrap up for 13.10, where we can realistically wrap up for 13.10 — and I think most of it, the memory savings, is laid out in the description here, so we can run through those again. So I guess the question is: do folks want to run through the epic? Do you want to run through the build board and talk about where we are with current issues?
B: Yes, I have probably another low-hanging fruit, kind of building on the gitlab-exporter stuff: basically just copying these settings and pre-configuring them. I'm going to open an issue and I'm going to try this, but I'm kind of expecting — this is something that we never look at, but looking at the gitlab-exporter stuff I'm expecting this to be a pretty noticeable improvement, especially for the 2-gig, and basically a low-hanging fruit.
F: Yeah, I tried to implement fixes for things that got broken in single mode, mostly metrics and readiness checks, but I feel that I'm completely stuck, because I spent a lot of time just trying to figure out how Prometheus is keeping the metrics and I didn't get anything meaningful, because I got conflicting results: on GDK it works one way, elsewhere it worked a different way. Matthias gave me a hint just a couple of minutes ago.

It's a total bummer for me. I don't know — just let me answer one question: do I understand correctly that the issue we are trying to solve under this title is, for example, if we just enable single mode, what do we need to fix, or how do we need to update the documentation, for it to work for the customer, right?
G: Yeah, I mean, yeah — because it was a bit of a blind spot, right, because we didn't really know what additional work would fall out of this. We had some ideas, but we weren't sure, yeah.

I think, for now, whatever is easy to fix we should just fix, but if there are bigger things that fall out of this — let's say, just for the sake of example, if metrics were broken beyond repair and we would have to say, well, it probably takes another milestone to fix this — then I wouldn't do that now.

I would just create an issue, so at least we have it logged and at least we have a plan for how to move this forward.

It's experimental, kind of researchy, but yeah — any kind of smaller things we can fix, it probably makes sense to just do them.
F: No, I mean, I just wanted to say that this issue may sound a bit weak, because what we are trying to do here is fix whatever is under the notes section — we are trying to either document some of these things, for example if they seem perfectly normal; let's say the memory killer is no longer applicable, of course it will...
B: What did you want to ask? I wanted to suggest that we meet after this meeting and we just fix Prometheus, okay? So you're going to be left with one less problem.
G: Yeah, that's fair enough, but I think what Alex is saying — so the way I understand you, Alex — is just that the exit criteria for this issue are not super clear.

We know what the goal is: we want to ultimately be able to run Puma in single mode, because we have enough evidence that this would be very impactful in terms of the memory savings in some environments. But it might turn out to be a lot of work to fix all of it, and for some things it's maybe not even entirely clear how to fix them — whether that's just a documentation update or it requires more.
C: Yeah, so I think my understanding is — from a high level — the best way to iterate here is to say: okay, can we turn on Puma in single mode? And I think the answer is yes, right. If we do, what breaks? If there are very simple fixes to unbreak things, we can maybe do that, right.

If there are certain things that are broken where there's more work involved, I would open an issue and say: metrics are broken if you do this, here's the issue — and document it as a caveat. Because I think what we want to do is, especially for this: you may want to run this in a memory-constrained environment, and then you should be aware of the caveats. But actually, if you don't need metrics anyway, you may not care — it's fine.

"I don't mind, I'm going to deactivate metrics", right. But I think that's good enough for the first iteration. So I would keep it in that tight scope and say: can we do this? Yes. All the drawbacks go maybe into the larger epic here — the "reduce Puma's memory footprint" one — but we don't need to fix them now, and we can decide if we even want to fix them.
G: Okay, that's fair enough — then my mistake, because I misunderstood. I would consider it fairly broken if metrics are not working, but I agree, maybe in some environments that's not that important. But yeah, then we only need documentation, because even the readiness probe — that's something for Kubernetes; you don't need this, you don't need a readiness check. You could just document as well: well, this is also broken if you enable it.
B: So it's beneficial to have the issue with as many details and as much impact as possible, and we can track that and figure it out. I'm still kind of thinking that fixing metrics is essential, but maybe it's going to be 13.10. We can still run it: I think we can run Puma single by setting this magic environment variable. It's very hacky, but technically it runs, right.
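(For reference, the single-versus-cluster toggle being discussed is a standard Puma setting; the sketch below is only illustrative, since the actual environment variable is not named in the meeting.)

```ruby
# Minimal config/puma.rb sketch of single mode vs. cluster mode.
# PUMA_SINGLE_MODE and PUMA_WORKERS are placeholder names, not the "magic"
# variable referenced above.
if ENV["PUMA_SINGLE_MODE"] == "1"
  # Single mode: no workers are forked; the one process serves requests itself,
  # avoiding the per-worker memory cost discussed here.
  workers 0
else
  # Cluster mode: preload the app so forked workers share memory copy-on-write.
  workers Integer(ENV.fetch("PUMA_WORKERS", "2"))
  preload_app!
end

threads 1, Integer(ENV.fetch("PUMA_MAX_THREADS", "4"))
```

Loaded the usual way, e.g. with puma -C config/puma.rb.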
C: Anyway, really great discussion. I think it's really important to highlight these things and to point out when you're blocked, because that's obviously the least productive state to be in, so it's really good to bring it forward. Thank you.
D: Okay, I spent a lot of time lately providing a way to run those specs and untangling what is related to GraphQL and what is not. We needed to make sure that running specs for this engine — for the proposed mechanism — is very simple, and that when we continue splitting the application it will still be easy to maintain the code and run those specs. The biggest problem was making sure that we don't have any cross-dependencies between GraphQL and our services.

So I really needed to make sure that when we run the specs without GraphQL nothing is broken, and that we can run the specs separately: everything together, and everything without the GraphQL code specifically. Additional problems were where we have front-end specs, like the white-box specs that run everything from the front-end, from the browser, and those specs require GraphQL.

So it took a lot of time to make sure that everything runs correctly and that we don't have any blocking points in the proposed architecture. In the end I was able to fix almost all of the specs. I think only one spec is currently broken, because today is February, which has 28 days, and the spec is comparing against 31 days — so it's totally unrelated to GraphQL, but since today is the first of February the spec is broken, I guess, on master.

I split this into five commits, where two commits are only moving things: one moving the files to the engine, the second moving the specs to the engine. Then there are three separate commits: one related to the engine logic itself, one related to fixing the specs, and the final one only for fixing the CI job, because I added two different CI jobs to run the GraphQL specs, and we have the old one running without GraphQL. So yeah, it's a little bit complex, Nicola.
A: Simple — I'm really interested in the single metric, the simplest possible: after you boot up these components, how much USS do they use? I mean with the default settings, and with the settings that we've been playing with with Matthias recently — the most memory-conservative ones.
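(One generic way to get that single number on Linux — not the team's actual tooling, just a sketch — is to read the private pages out of /proc:)

```ruby
# Approximate a process's USS (unique set size) by summing its private pages.
# Linux-only; relies on /proc/<pid>/smaps_rollup (kernel 4.14+). Values in the
# file are reported in kB.
def uss_kb(pid = Process.pid)
  File.readlines("/proc/#{pid}/smaps_rollup")
      .grep(/\APrivate_(Clean|Dirty):/)
      .sum { |line| line.split[1].to_i }
end

puts "USS of process #{Process.pid}: #{uss_kb} kB"
```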
A: Thank you. All right, next one — Grant, is there anything left to do on this? I know there's a product question that I posed on measuring the impact of Ruby 2.7, but that's kind of forward-looking; it's not necessarily holding this one open.
E: For this particular issue of measuring impacts, that is obviously now done. We've done it, and we did find that there were some memory increases around database-heavy endpoints. I think there's probably more caching going on for some reason — or maybe not caching, but it's definitely using more memory to retrieve those from the database. So someone should just look into that and make sure it's fine; the memory increase is actually a few gigs.

So that's still notable enough to actually review, but in reference-architecture terms it's not a problem — actually it was already quite under what we give it, so we're not seeing it affect any performance test or anything else — but it's still worth investigating. So that issue we should close, a new one should be opened now, and we just investigate it. That's all aboveboard, okay.
F: All right, we'll do that later. Yeah — so, as you remember, we rolled back this setting enablement after we found the issue with GC compact in that gem; the gem was fixed and patched and we bumped the version. So currently I just reopened the merge request to bring this setting back. We decided to start with canary and staging first, and I'm just spinning them up again to enable this on our Kubernetes installation in our chart.
B: Something to check — I'm asking because, if this compact happens between forking, between worker A and worker B, if you have a GC compact during that period it's going to completely break the memory layout, because of how compaction works. Compaction in Ruby works by looking at the memory pages that have the least amount of slots allocated and shuffling those slots into the pages that have the most slots allocated.

So if we for some reason do this GC compact between the workers, we're actually going to increase the copy-on-write pages and kind of unlink many of the pages, and it's going to have a pretty bad impact on the memory. So I think, if you could validate exactly when it happens.
G: Yeah, quickly, to look at that as well: it runs once during the master boot, before it spawns workers, but it runs again for restart events. So you can signal it to restart workers and then it runs again. So I think there are some subtleties there — so yeah, let's look at this a bit more.
B: I mean, to be honest, looking at the code I think it's running before every fork, but it's something to talk about. Okay, I think this would also, to some extent, maybe explain the kind of strange data that you were seeing, where there was no big difference.
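(The copy-on-write concern above can be illustrated in Puma configuration terms; this is only a sketch of where a compact call is fork-friendly, not a statement of where the gem in question actually hooks in — that is exactly what the team wants to verify.)

```ruby
# In config/puma.rb: compacting once in the master, before any workers are
# forked, means the defragmented heap is what gets shared copy-on-write.
# Compacting inside each worker after the fork would move objects around and
# dirty the inherited pages instead, increasing per-worker memory.
before_fork do
  GC.compact if GC.respond_to?(:compact) # Ruby 2.7+; runs in the master only
end

on_worker_boot do
  # Deliberately no GC.compact here: it would un-share pages the worker
  # inherited from the master.
end
```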
G: Yeah, I just spent the last 40 minutes or so summarizing the results, and it's pretty good. I updated the issue description with them as well, so there are some juicy numbers in there for release notes. I think we can close it. The last thing, maybe: in the end we were fairly careful about how many of the settings we found to work well we actually applied by default.

I used 13.8 — I just used Omnibus for this — which didn't have any of these optimizations. Then I used the nightly, which uses the new defaults, and that was actually the best result, and it didn't even have any of the Ruby GC tuning. It was just using jemalloc with our tweaks applied, and that leads to savings of 60% in memory, which is pretty substantial.

So that's great. I then did another check with the custom GC settings applied that we thought would work well, but the memory use was actually a little bit worse — which might just be because you get a lot of these fluctuations depending on when exactly you measure — but it didn't make an improvement. So I think it's actually pretty good the way it is, so maybe we should just leave it as is and ship that with 13.9. So yeah.
G: Okay, yeah. So another thing where the GC settings helped, that I saw throughout the testing we did over the past few weeks: it definitely helps to keep the memory a bit more stable. Without them you can sometimes get these breakouts, because the Ruby GC is just not as constrained in how many object slots it can allocate. So that's also good to have, right, so that could be another reason why we would want those in place.

I'm just saying that even without those, this is already super impactful. So if we wanted to ship just that, it would already be a win. But sure, if we want to set the GC defaults as well, we can certainly do that — it's just that it will be less obvious to what extent they help. I don't think we're going to see a big difference there.
B: So I think the GC settings are important from the perspective that they try to define, let's say, a medium scenario of the expected memory usage for the average case, basically, and kind of define the boundaries such that if you go over them you can still allocate, but the GC is going to be more aggressive in freeing.
G: Yeah, I don't disagree with any of this. The reason I say this — and I raised this in the retrospective — is that there is so much work involved, and I'd need to go back and open four MRs again just to add these GC settings as well, in the same way we did it now, and there will not even be a very convincing argument to do it right now, because the return on investment will be quite small.

So I'm just wondering, at this point — if it was up to me, I would prefer to just say: this is good enough, this is our 80% solution, and then we go look at the main app again, because that should be kind of our focus, not gitlab-exporter, right. So that's my only concern.
C: So my question is: can we record this as a follow-up investigation — as in, something that we may want to look into — and is it good enough as is to just move on? I don't know what the answer to this is, but yes.
G: I think it's a legitimate concern. So one main problem we had: what Kamil describes is something that's not easily seen if you just run an ab bench as we did so far and just look at the memory use that falls out of it, because that's the immediate effect. What Kamil describes is kind of this slow, creeping thing — you know, Ruby keeps holding on to more and more memory, and then also the memory allocator cannot really return those pages — but that's only something that happens over days or so of use.

Kamil did add the Ruby probe to gitlab-exporter, though. That's basically a new feature; the work isn't completely done — we're still waiting for that to be merged into Omnibus and the charts — but if we have this, at least, then we can observe memory use and GC stats of gitlab-exporter in production, and that would be really interesting to have.
B: I'm kind of thinking about the following: if we move on to something different, we're not going to go back to gitlab-exporter, because it's going to be a done story — because then we may be looking at five megabytes or 10 megabytes, and we're going to have significantly more important topics. And I'm kind of thinking that right now we are kind of on top of these items.
B: So, if I'm looking at it from the perspective of the time investment, it's pretty much something where you, Matthias, know exactly what to do, and you are on top of it, and it's probably not a lot of effort to do it. But if you had to go back to it two months from now, it's going to be a five times bigger task because of the shifting landscape.
B: We'd not really do it at that point. That's my perspective, really: if we are on top of something, it's easier for us to finish it, close it and be done with it, and not go back to it — because if you have to go back later, it's actually significantly harder to do.
G: What I would say, though, is that the time-consuming part is not just, you know, sending all these MRs and getting them merged and reviewed and all that; so far it was also playing with all these different settings and finding, you know, the perfect combination. Sometimes the differences are so subtle, and I found that sometimes you test it on a different day and you get a different result.
G: There are these external factors that make it so. Basically, long story short: should we then just agree on the latest values that I posted in the results to move forward with, so that I can actually focus on producing those MRs, getting them reviewed and getting them in, so that we don't spend another week on fine-tuning these? As for which values for the Ruby GC settings: I think we said the initial slots we wanted to size — it's all in there in the description now.
G: I agree, I think they make total sense. What I'm saying is, though, I used different settings than those — a smaller number of init slots and different values for the min and max ratio — and I got very similar results. That's what I'm saying. But I totally agree, this makes sense; there's reasoning behind these and it sounds good to me, so we can.
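(The knobs being referred to — initial slots and the min/max free-slot ratios — map to standard MRI environment variables read at boot; the numbers below are placeholders, not the values agreed in the issue description.)

```ruby
# Standard Ruby (MRI) GC tuning variables corresponding to the settings named
# above; they are set in the process environment before boot. Values here are
# placeholders, not the ones from the issue description.
#
#   RUBY_GC_HEAP_INIT_SLOTS=600000          # "initial slots"
#   RUBY_GC_HEAP_FREE_SLOTS_MIN_RATIO=0.20  # "min ratio"
#   RUBY_GC_HEAP_FREE_SLOTS_MAX_RATIO=0.40  # "max ratio"
#
# Once the process is up, the effect can be inspected from inside it:
require "pp"
pp GC.stat.slice(:heap_available_slots, :heap_live_slots, :heap_free_slots)
```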
G: Yeah, I've kept trying to get back to this. It's basically the same story, but for the main app.

I started to work on this again on Thursday. After we made a bunch of improvements it helps to measure this a bit more easily and steadily, but I'm kind of going through the same stuff again that we did for gitlab-exporter. So this is not done yet.
B: I actually started pushing on getting this merged. I'm actually starting first with the GitLab CI testing, and maybe it's going to be merged today, which would be the very first step to have my hacky patch — my hacky monkey patch — tested more extensively. And I also opened... because I'm actually having trouble with propagating this patch. I have to propagate this patch to Omnibus, and I have to propagate it to CNG.

I have to propagate this patch to the GitLab CI testing, and that's still pretty straightforward. And also, because it uses Docker images, I propagated it to GCK as well — basically we're using the GitLab CI image, which removes a lot of dependencies in GCK, and it just works too. But I'm having trouble with GDK, which basically compiles everything from scratch using an external tool named asdf, and I'm not really convinced that I want to propagate this patch there.
B: So I'm trying to figure out exactly how to approach GDK. But my perception right now is that I'm likely just going to skip that and rather focus on pushing it and getting it merged upstream, because we would still have the coverage on CI, Omnibus, CNG and, as a last resort, GCK.

If someone would ever have to modify that — my perception would be that I would completely disconnect testing of that patch from the other specs. I mean, it would still be part of the specs, but separate from the other specs. So it would still be possible to work on it, but the application would function without that patch as well.

I think it's rather on track, to be honest. I'm just kind of figuring out how others will take my GDK dismissal, at least in this case, given the complexity of patching GDK, because it requires recompiling Ruby.
B: Okay — and we have the next one here on feature flags. Nothing really happened with that, as you saw. I asked about this in the working group; there is no one interested yet, so I'm kind of unsure how we should... maybe I'm just going to announce it myself.
A: Yeah, I think you should. Understood. All right, let's jump over to the validation board — quick question.
C: But okay, perfect, yeah. The item for the documentation is there for Kamil as well, so I think we are very good here. Cool.
C: Done here — we are almost out of time, so I'm not sure we can do this. The idea behind this board is to quickly do a prioritization of what should be done next — what should move onto the build board once we're done with most of the things that are on the build board. So these are sort of the immediate next things to consider.
C: So I don't know if we have enough time to talk through those in detail, but we can maybe at least do the two in problem validation. So my question here is for the "sidekiq-cluster should preload before forking" one: is this an issue that is still related to our 2-gig memory effort, or is that a new thing?
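(For context, a minimal, self-contained sketch of the preload-before-fork idea named in that issue title — not the actual sidekiq-cluster implementation: whatever is loaded in the parent before forking is shared copy-on-write by the children, so booting the application once is much cheaper than booting it per process.)

```ruby
# Stand-in for the booted application: allocate it once, in the parent.
preloaded = Array.new(500_000) { |i| "record-#{i}" }

pids = 2.times.map do
  fork do
    # The children only read the preloaded data, so its pages stay shared
    # with the parent instead of being duplicated in every worker.
    puts "worker #{Process.pid} sees #{preloaded.size} preloaded records"
  end
end

pids.each { |pid| Process.wait(pid) }
```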
C: Okay, are there any items in here at all that are still relevant to the 2-gig?
C: Is this something that we may want to move to the build board, just so that we can sort of flesh it out along with closing out the 2-gig issue, or is there more investigation needed around this before we can actually start?
G: Yeah, it's one of these things where it's fairly easy to show that it's possible in theory. I'm not totally clear yet on what this entails; I think there will be things that fall out of this — just to give you one super quick example, right.

So I'm a bit... I wonder how much extra work would fall out of this, and I kind of have a hard time saying whether it's worth doing, because it might add a ton of complexity in other areas, right: it will save memory, but it might add complexity in other areas. So that's why I think it's definitely not ready for development. I think there are some things you would...
C: My suggestion would be — I haven't looked at this, so if you've written this down already, please excuse me — but I would just add exactly this into a comment and say: hey, this is where we are at. Because we may...

So, cool. Given that we still have enough stuff going on on the build board, and we don't have that much time, I think we can continue to add to this, and then I need a separate session to review it, and I'll ping folks on it as well.
G: I know I keep bringing this up, but at this point I'm not sure anymore what's going on with Action Cable, because there is an issue open to look into the excess memory that we now consume. It's just not on our board, because it's not filed with our team — it's part of the team that owns Action Cable, which I guess is fine. I'm just a bit worried that this is not going to... I don't think this will be a high priority.

I think there's a lot of pressure on this working group right now to just ship something, which we did. So I have a feeling — I might be wrong — that this might just not get done. I just wanted to throw that out there, because I feel it's not really on our radar.
G: Yeah, and I can't find the issue — I think I've loaded it before, but I can find it again. So basically, what happened was there were multiple attempts at rolling it out, and the first time it didn't work was for unrelated reasons, so we were fairly...

The second time, that was last week, I think, and we ended up rolling it out and it worked. But memory consumption on Workhorse is through the roof. An ordinary Workhorse consumes somewhere between two and three hundred megabytes in prod on Kubernetes; the memory use we saw on the Action Cable fleet — that's a separate fleet of servers now that serves only Action Cable and other WebSockets traffic — was north of 2 gigabytes.
G: So that's a 10x increase, and we have done some research already and we're pretty sure it is simply because all the connections are proxied through Workhorse continuously. These users just come and they stay connected, even to Workhorse, and that was something we missed before, because we thought it would only be involved in the handshake and then Action Cable connects directly to the client — which is not true. So yeah.
G: Unlike web, we only run a small fraction of the number of Puma workers for our overall site, so that is probably fine. And another thing we haven't looked at at all, because we don't have the tools right now to do it, is how this would behave on self-managed, because we would have to look at...

We would kind of have to come up with estimates for which group of Omnibus users, or users that use our charts, what kind of traffic they would have to expect, right. And this will only get worse going forward, because this is one feature, right — this is only the issue assignees — and on SaaS I think we peaked at 30,000 connections for issue assignments, and this will of course be elevated for the frequently used features like MRs and issues.
G: But again, that's one feature, right. The whole point of kicking off this working group was to put the infrastructure in place to put more features on top of this. So I'm not sure where we're headed with this, but it feels like it's quite expensive too, right.
C: Yeah, so before we wrap up, the last thing I wanted to say — I actually want to say two things. So I went through all of our stack-ranked issues for the 2-gig memory stuff, the big list of what we wanted to do, and looked at where we are at for 13.9 and 13.10, and I think the short summary is that we've actually stuck to our prioritization.
C: We started at the top and didn't do the no-value things. None of the issues are closed, so they're all in flight, but almost all of them are actually scheduled for the 13.9 release.
C: So if we optimistically say 70% of those are going to close, I think we should be in a position where, after this, we can reevaluate.
C: You know, specifically the impact that all of those changes have made on memory consumption in a constrained environment. That's how I look at it. The lowest-hanging things here are in the backlog and we haven't done them, but ideally, let's say, rows two to ten should close or wrap up — not all are going to close.
C: Yeah, yeah, so in general, right. But then I think we should be able to say, having done all of these changes: where are we at? That's sort of what I suspect, and then we should go into the documentation, and at that point we should reevaluate whether there's more to be done here or not, or whether we are going to say: this is the best we could do, the rest is going to be very hard, what's the next thing. That's my thinking.
A: Agreed, and we can keep the epic up to date with the latest progress.
C: Yep. And then, other than that — and I've said it before, but I just wanted to say it sort of synchronously again — we made a change in 13.7 to the load balancer for caching PostgreSQL queries, and you've seen this, right. We believe, or Janis believes, that this has actually led to significant improvements in the query Apdex for Postgres. There are probably other things happening as well, but it has had a big impact, and these things really matter to the product organization overall — this is the kind of thing people get excited about.