From YouTube: 2021 01 11 Memory Team Weekly
A
Welcome to this session of the memory team weekly meeting; it is January 11th. There are a couple of non-verbalized items, and we'll jump right into this week's agenda. So we have the planning issue, and I've asked for some breakdown: we have a lot of ideas and we're still doing some research. I know there are some implementation issues, but let's continue to break down issues for the two-gig footprint goal along the lines of the three themes that we're talking about.
B
Yes, so I still have to look at that, but I think one of the aspects that is, I guess, important is to rework our documentation, because it's very specific to one very specific case. At least we had some discussion in the past that updating the documentation to reflect the current state could be a very good first step, so that wherever we figure out some additional tuning, we have a very easy place to add these additions in the future.
A
Not to disagree, no, it makes sense. I remember us talking about it at one point in time; I'm just trying to recall, and I was looking real quickly. I thought we had an issue for that. I will make sure that we do.
C
Yeah, so about Puma: I grouped up all the Puma stuff, not only 5.1 but also nakayoshi and Puma single. So 5.1 is approved by everyone, but I'm just waiting for it to be merged. It was postponed because of a security fix and some concerns, and not everyone was available at the time. So now it should not be a blocker anymore. I just pinged everyone who I could, and I hope it will be merged today or tomorrow.
C
Otherwise I will ping again and try to merge it. To be honest, next I want to concentrate fully on Puma single, because I see it as probably one of the most visible improvements. Thanks, Camille, for the notes; I didn't look at them yet, so it will probably be my main priority next week or so. If you have any concerns, please do tell, but I want to concentrate on Puma single, mostly.
C
No, it wouldn't require much effort from me. I will just add it to the config and move these three MRs through review. It's just side work, so I will do it in parallel: I will open three nakayoshi_fork MRs and we'll track them, and in parallel I will actually work on Puma single, which has some issues, as we've seen, with metrics.
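For context, the trick behind nakayoshi_fork is, roughly, to run the garbage collector before forking so that child processes share more memory copy-on-write with the parent. A minimal sketch of that idea (simplified; not the gem's actual code):

```ruby
# Simplified sketch of the nakayoshi_fork idea: promote surviving objects
# to the old generation and compact the heap before forking, so children
# share more copy-on-write pages with the parent.
def nakayoshi_fork(&block)
  4.times { GC.start(full_mark: false) } # minor GCs promote young objects
  GC.start                               # one full mark-and-sweep
  GC.compact if GC.respond_to?(:compact) # Ruby 2.7+: defragment the heap
  fork(&block)
end
```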
D
Oh, I think it's good. I mean, in my mind, and correct me if this is wrong, my understanding was that we would only actually use Puma single if we detect there is a very small amount of RAM available. So there's some switch that says: okay, you know, you are in a memory-constrained environment; we are going to default to Puma single, and that is actually going to save you memory. That is how I understood it.
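A minimal sketch of the switch being described, assuming a Linux host and a hypothetical 2 GB threshold taken from the footprint goal (the detection mechanism and names here are illustrative, not the actual implementation):

```ruby
# config/puma.rb (illustrative sketch, not GitLab's real config)
def total_memory_kb
  File.read('/proc/meminfo')[/MemTotal:\s+(\d+)/, 1].to_i
rescue Errno::ENOENT
  nil # /proc is Linux-only; treat memory as "unknown" elsewhere
end

LOW_MEMORY_KB = 2 * 1024 * 1024 # hypothetical 2 GB threshold

if total_memory_kb && total_memory_kb < LOW_MEMORY_KB
  workers 0 # Puma single mode: one process, no cluster workers
else
  workers Integer(ENV.fetch('PUMA_WORKERS', 2))
end
```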
D
If that's a more minimal iteration, I think that's a good first step, yeah.
D
No, nothing; maybe I misunderstood you, Camille, but my understanding of this documentation is to also say: if you run, like, a really small instance, whatever that means, we can define that, or have a very small user base.
B
Yes, I mean, my perception was that we would introduce this ability to use Puma single, but we would, at least for now, just document that; there would be no automation, because automation would mean, like, a breaking change of the behavior. So maybe this automation could be part of 13.0.
B
Maybe this would be the way you would approach that, but I'm kind of assuming that we would have some entry in the documentation for Puma single, with the benefits of using that and the drawbacks of using that.
D
Yeah, and I still recall, maybe in our last meeting or the meeting before, you know, time has no meaning anymore in lockdown, I think, Camille, you said you also think of the work that we do in, like, two different ways.
D
We have specific things that have very high impact for our small-memory-footprint goal, like Puma single, but actually have no impact at all for most of our users or .com, right, because we won't use it there; it is still valuable for this specific initiative. And then there are other things that actually have more global impact, like GC tuning, exactly, and I think we still do want to see how far we can move towards the two gigabytes in this specific environment.
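For reference, the GC tuning mentioned here is normally driven by environment variables that the Ruby VM reads at boot; a hedged illustration (the values below are placeholders, not recommendations):

```ruby
# Tuning happens via environment variables read at VM startup, e.g.:
#   RUBY_GC_HEAP_INIT_SLOTS=600000
#   RUBY_GC_HEAP_GROWTH_FACTOR=1.1
#   RUBY_GC_MALLOC_LIMIT=67108864
# One way to observe the effect of a setting is to compare GC.stat runs:
puts GC.stat.values_at(:heap_live_slots, :heap_free_slots, :major_gc_count).inspect
```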
D
But there are also all of these other efforts, which I think are generally really important, and they would be important even if they had nothing to do with our two-gigabyte memory footprint, right. I think that's actually a longer-term theme, right, like GitLab Exporter, where Matthias, I think, did a lengthy excavation of this, and I think there are quite some implications for the general application, and that's also important.
A
Aleksei, a question on that: I think most of the information is listed under the Puma upgrade to version 5.1. Sorry, my windows are being annoying; I'll drag the issue over.
C
Do you have a separate issue for Puma single? It's just a separate issue for 5.1.
A
Is this the best place to track all the things?
E
The problem with that was we didn't really get any good, stable baseline from which to compare the results. So we had a sync last week and we decided there are three things we can do that might improve that. One thing was the way we obtain these metrics, which we had so far emitted through in-app samplers that run at a random interval, and they have been collected by a Prometheus scraper that also runs at a random interval.
E
So we said maybe we can just add an application endpoint for each worker that you look at, or, if it's just one Puma single, you can just get them directly from the process. So that's something I put in an MR. So, Camille, I was wondering, with this MR, just to speed this up a bit, I'm wondering if we need all this extra stuff, like having this for Sidekiq as well.
E
Maybe we can just leave that out for now, just use the main metrics endpoint, add a worker label to that result, and ship that first, because that would already be an improvement, I think. Because all this stuff, with having the exporters that run in the app support this as well, it's, yeah, it's getting tricky.
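A bare-bones sketch of that direction, exposing the current process's RSS with a worker label in Prometheus text format (the metric name and the source of the worker ID are assumptions for illustration, not the actual MR):

```ruby
# Hypothetical Rack endpoint: report this process's RSS with a worker label.
class MemoryMetrics
  def call(_env)
    rss_kb = File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i
    worker = ENV.fetch('PUMA_WORKER_ID', 'single') # assumed identifier
    body = +"# TYPE process_resident_memory_bytes gauge\n"
    body << "process_resident_memory_bytes{worker=\"#{worker}\"} #{rss_kb * 1024}\n"
    [200, { 'Content-Type' => 'text/plain' }, [body]]
  end
end
# e.g. `run MemoryMetrics.new` from a config.ru
```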
B
Yes, I need to look at that MR, because I know that you marked it as a draft, but I guess we can figure out something smaller that resolves your particular case, and maybe we would just fix that with, like, an exclamation mark, sort of with the asterisk that it's only for the Puma single in development. Exactly.
E
Yeah, I think you're right. It's always a bit tricky when you do it, yeah, for it to be comparable. I think we said we want to use the Omnibus VM, and then it's always a bit fiddly to do that, but it's definitely possible; it's just that you have to go in and then mutate the source code, yeah, but it's...
E
Because that is definitely... it's not a massive...
E
This is my comment; sorry, I think I forgot to put my name in front of it. Actually, Ben suggested catching up, and I agree that at this point there are so many different and conflicting opinions on where to take this thing, it's really important that we at least agree on what should happen with GitLab Exporter, and not even on this do we have consensus.
E
And, yeah, I wonder, I think maybe it would make sense for Camille, Ben, and myself to catch up first and then take it back to the team, because I definitely think at this point that maybe we were a bit over-eager with saying we're going to remove this thing entirely. Ben would still love to see it go entirely, but there's so much complexity there, some of it because we're not even running this thing on .com, so we're already starting to make all these changes.
E
But we don't know how we will even test that on GitLab SaaS, because it's not running in its current version on GitLab SaaS. So there will be a time where we'll have to face the decision as well: do we want to update it to the latest version, or, like, never update it again, or move to its potential replacement, or whatever? It's getting very complicated.
E
So I think it would be good to untangle some of these, yeah, kind of branches that came out of it, and then kind of go from there, before we put more work into it and then find that maybe that wasn't a good idea.
E
Like I just said, I think we need to find common ground here to even decide what the next steps are, and I think it would be good to do that synchronously, because there's a lot of back and forth in issues, and there is some pressure from other teams already: Bob is looking to introduce a new metric to GitLab Exporter, and we can't...
E
How are we going to do this? Or, if not, we need to find common ground on what we do with these metrics that need to move out of it, because even for this we haven't really found a good solution, like the Ruby, the Sidekiq sampler stuff, for instance. I closed that MR because there were a bunch of scalability problems with that approach.
A
We dropped GitLab Exporter because we are reducing memory, and then, from what I recall, we couldn't, because it was super complicated, but there are things that we can drop to reduce some memory there, right? This one kind of feels like the dynamic image resizer: it seems like there are some quick wins that we can get out of it, even though they might be small, by dropping some metrics, and then the bigger...
E
GitLab Exporter, and I think this is where, as I just said, I think maybe we derailed the overall goal there a little bit. So maybe we just want to shrink it, and there are different ways to go about this as well that we need to figure out; some of it is, maybe we just keep chipping away at moving things out of it and then leave it as is.
E
One thing we're looking into is to run it on a more lightweight Rack server; that's something I have MRs open for, but I don't really have, like, super clear evidence from my local testing that this actually significantly reduced memory. But it's also difficult to compare to something that runs in production for a couple of days.
E
And it's actually not that easy to measure, which is a good point that Aleksei pointed out, because GitLab Exporter does not monitor itself; it just monitors other components, so we don't actually have memory metrics for it in production.
E
So there are some open questions there. Another point you mentioned was the ownership: we had a meeting last week with Monitor, and correct me if I'm wrong, but the overall takeaway for me was they are not going to work on this in the mid to maybe even long term, because they don't have capacity.
E
And it's also kind of a fair point, because it doesn't squarely fall into the Monitor section, because it's also an infrastructure component, right? We use it to monitor our Sidekiq fleet; we do that just to monitor, like, the health of our SaaS offerings. So it's not really just squarely for GitLab self-monitoring as a feature, you know; we also just use it generally for observability.
D
Can I, so, like, I'll just add my two cents to this, right? The way I understand this is we started looking at this because we were interested in reducing the overall memory footprint. That was also, I think, sort of our primary goal here as a team. Maybe, you know, dropping it is not the right strategy; maybe shrinking it or reducing some things is, I don't know, right, but the goal for us, I think, was to, you know, reduce the memory footprint.
D
The way I see this is, in the process, you know, of touching this, we encountered a gigantic hairball of complexity, you know, and found a component that is not well maintained and has all sorts of issues, right; you talked about many of them. And I think the decision here, or, you know, what we need to establish, is: what can we, as a team, be reasonably expected to do?
D
You know, not only selfishly, but also for sort of global optimization, and say, like, you know, what can we actually do here, right, and is that helpful for GitLab as an overall product? And what can we not do? Because maybe, you know, what actually should be done, which is, you know, to really work on it, or provide an alternative or whatnot, is actually a really big endeavor, right.
D
That requires a lot of focus, and maybe that's really not where we should spend our time going forward, given the other responsibilities that we have. But I think what we definitely should do is write down what we found, say, like, look, this is how we encountered it, right? We talked to those people; this is the current situation. And then try to find, you know, we talked to the product team in Monitor; they said it's not our priority, right?
D
You know, see if there's a better home for it, but I'm personally, I would be hesitant, you know, to say this is what we're going to take on for the next three months and we're going to completely rewrite GitLab Exporter, unless we really have a very good reason to do so, for sort of global reasons, right. But I think I would be a little bit hesitant, you know; essentially, just because we did the poking, right, and we found something, does not necessarily mean that we need to clean it up completely as well, right.
D
I think there are some things we can do if we're transparent; I think that's important, but I don't think we need to, like, sort it out completely either, right, because that means that other things will not be able to happen.
A
Yeah, and thanks for the summary; agreed. I think a lot of that's covered in the epic that I have listed here in the doc. So maybe the effort is: we identify the sub-issues in that epic on what we will own, and then, much like we did with the image resizer, we find an owner to hand off to eventually. And maybe there is no owner and it's labeled as not owned.
E
We could just say it belongs to infrastructure; maybe that would actually be a better owner. I actually see a lot of commits from, well, from four years ago, so I just had another chat, like an online exchange, with, this is Amar, I forgot, sorry, I forgot he's on the infrastructure team. So there are definitely multiple teams that contribute to the system. But, that said, the things that we thought would be most impactful, we already own.
E
We already work on them; like I said, that's the migration to a different app server. So that's almost done anyway, with the little footnote that we will only really be able to test this for self-managed, which also adds a bit more risk, right, because it means it's just another thing we change that will not apply to .com, but yeah.
D
I don't really think Monitor said this is not our issue at all; I think they said, more like, we have so many other things that are more important, and I am in no position to judge that, right? They know much better what they have to work on than I do, and I think infrastructure, or whoever is the customer, may also disagree on that, but then I think they need to talk to that team a little bit more.
D
I personally, you know, would not think that the memory team is the best team to own this; just superficially, I feel it's not, like, the right thing, if it's...
G
Thank you. On the GraphQL, oh, this is actually the part where we are trying to provide the mechanism for splitting our application into some functional parts. So I prepared the PoC that is proposing the mechanism to use Rails engines, and we had a really nice discussion last week about it; we looked at it.
G
Proposing the new architecture will be a little bit of a complex thing to push, and we decided that I will concentrate this week on moving the specs to the engine itself, which will allow us to check if everything works as expected, and it will give us insight into the complexity of the real solution, like how much effort is needed to move some component to the engine itself. Because we will not just stop with GraphQL; the idea is to move different functional parts to those engines.
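In skeleton form, the Rails-engine mechanism being prototyped looks something like the following (the names are illustrative; the actual PoC may differ):

```ruby
# engines/web_engine/lib/web_engine.rb (illustrative skeleton)
require 'rails/engine'

module WebEngine
  # An isolated engine that could host the GraphQL part of the application,
  # loaded only by the processes that actually serve it, e.g. via a
  # conditional `gem 'web_engine', path: 'engines/web_engine'` in the Gemfile.
  class Engine < ::Rails::Engine
    isolate_namespace WebEngine
  end
end
```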
G
So we need to know if this solution is simple enough, and I would also like to provide some metrics, so we are sure how much memory we have saved by moving GraphQL to the separate engine and, if it's not loaded in Sidekiq, how much faster the application boots. And I don't know how much memory we save; I'm not sure how to achieve this, because measuring this is a little bit complex, but I think that we will need all this information, because Camille created the blueprint.
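One rough way to get those numbers, comparing boots with and without the engine loaded, could be the following (an assumed approach, run from the application root, not something the team agreed on):

```ruby
# Measure boot time and resident memory of a freshly booted app process.
start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
require_relative 'config/environment' # boots the Rails application
boot_seconds = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start

rss_kb = File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i
puts format('boot: %.1f s, RSS: %d MB', boot_seconds, rss_kb / 1024)
# Run once with the engine in the Gemfile and once without, then compare.
```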
G
So we can push this conversation over to a wider audience. So I guess all that info will be helpful if we decide to move in this direction. I think this solution can also help with speeding up the boot time of the application, which is another issue that we have for 13.9, I think. And I don't know, Camille, if you have anything else, please add on.
B
The second one is the complexity behind that, and the third is what the benefits are, because if we have these three items in place, these are actually very good items to push to others so they can give their opinion, and we would also know if this is really worth the effort in the end for us. Because if we would follow that, it's going to be a pretty significant item to push, and I just want to understand whether this is actually, for you, what we should push.
B
I agree; this is why we need to have a success story for why it is needed, because then we could actually have more teams think about this problem as well, not only us. So, at least from my perspective, if we can deliver GraphQL as a success story of the split, and show that this brings this much benefit,
B
this can be, like, a very good data point to change how we actually develop GitLab, because it would actually show how complex it is to extend that, first; what the benefits are from the testing perspective, I mean running our specs; and also the impact on the velocity of our development and on quality.
B
So there is a lot of uncertainty, but I'm not really worried about, like, the next step. I'm worried about figuring out if GraphQL is really the simplest item with which to present the benefits of that model, because if it is at its most impactful, I think it could be very beneficial for GitLab in general.
A
I was going to say I agree with Camille's summary; it was a very good summary of making sure we have that example out there, and that it's well documented how to measure before and how to measure the impact after, so that we can enable other teams to do something similar, as an architectural pattern. So, agreed, and it fits with the goal of our section being the enablement section. So, nice summary, Camille. Thank you.
A
And while we were talking, I brought one of the issues into 13.9 that was related to that one, which is a good segue. So, for specific issue updates, I called out a couple here. There was the one to reduce the memory impact of GitLab monitoring; there was a question to Fabian about answering some questions outlined in the description. Is this something we should just move to 13.9, since this milestone is going to end at the end of the week?
D
I have an update, but I failed to update the issue: we spoke to Sarah, who owns this, and I think the answer is, surprisingly, "it's complicated", right? They don't really know; so many parts of the application use this monitoring stack, so they would be uncomfortable just saying, hey, if you use less than that, we're going to turn it off, and I think they have no good understanding of who is actually using it and for what, either.
D
So that's, I think, part of the complexity here. That doesn't mean we can't actually maybe try it ourselves: just turn it off and see, you know, what exactly the impact is. But that's kind of the only information I have. You asked that question, actually; is that a fair summary?
E
I don't think it directly relates to what you said, but one thing I misunderstood when we talked about this was that it was pointed out that several features across the whole GitLab product suite depend on this; she was actually referring back to a comment I had made in the issue, because we export metrics for a bunch of different components, like Git and CI runners, most of which we have to get out, though, by now, whether they're used or not, but still, yeah.
A
Okay, so it sounds like there's still some experimentation that could take place. It doesn't sound like it's going to happen this week, so I will move this to 13.9, and probably the same for the next one, "the Sidekiq cluster should preload before forking"; we'll kick that one down the road. Again, there's nobody that has been assigned to it, so move those down. And then, Camille, you have the next one.
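The "preload before forking" idea for the Sidekiq cluster is, in outline, the standard copy-on-write pattern; a hedged sketch (not the actual sidekiq-cluster code):

```ruby
# Preload the full application once in the parent, then fork workers so
# they share the preloaded memory copy-on-write instead of each booting
# the application independently.
require_relative 'config/environment'

pids = 2.times.map do
  fork do
    # child process: start the job-processing loop here (details omitted)
  end
end

pids.each { |pid| Process.wait(pid) }
```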
B
Yes, I have this very small addition to the Ruby VM about being very accurate about memory allocation, and there is an associated issue for that, but I'm actually right now trying to understand if upstream is interested in the feature. If they're interested, it would give me a very good incentive to finish that and, second, to patch that internally, and maybe even partially, like, there would be a patched VM for internal usage.
B
I was thinking that this may be the way for us to see how the runtime memory of the application changes over time, and what aspects of the application, requests, or maybe workers, allocate most of the memory in a given unit of time, which could be a second; something that could be useful for us to optimize, based on the amount of these occurrences. So, like, finding a way to figure out the parts of the application that are the biggest sources of memory allocation.
B
Parts that are frequently executed when running, that we could optimize. Today we don't have any way to measure that; we can only grab a metric sampling the whole process, but we don't have any granularity or ability to assign that to a given category. There was some idea about trace_object_allocations, but it's expensive; this approach is basically free, to basically measure and count.
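The expensive approach mentioned here is presumably objspace's allocation tracing, which records a source location for every allocation while enabled; for reference:

```ruby
require 'objspace'

# Records file/line for each allocation while the block runs; useful for
# debugging, but too costly to leave enabled in production.
ObjectSpace.trace_object_allocations do
  obj = 'hello'.dup
  puts ObjectSpace.allocation_sourcefile(obj) # file where obj was allocated
  puts ObjectSpace.allocation_sourceline(obj)
end
```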
B
So I'm now trying to understand how interested upstream is in getting something like that.
E
You mean the USDT probes I mentioned today?
E
Sorry, can you say that again? You mean expensive with regards to what? Is that a reference to the USDT probes thing I mentioned on Slack this morning? Because the Ruby VM, I think, does define a bunch of trace points you can hook into already. So I wonder if it would make sense to spend some time with that as well, because it doesn't rely on a custom fork or something; for me, it just means it needs to be compiled with the right switches enabled.
B
Currently you get global counters, maybe traced allocations, but it's still global information that you receive. I am just interested, in the given context of execution, which could be a request or a Sidekiq job, in how much memory the given thread did allocate, and I'm assuming that this accurately translates to how much pressure on the memory allocator it produced; that could be jemalloc or the GC.
B
At the same time, I'd know exactly how much memory the given request had to allocate during its execution. I mean, it's about how many GC slots it consumed during this execution, it could be, like, every object allocation, and, I'm assuming that an increment is fine, how much actual malloc memory was allocated as well.
B
So today you don't have any way to get this data in a kind of shared environment. We had this information in the unicorn world, but unicorn would execute only a single request at a given time in a given process, so you could just get GC.stat before and after. It would still not give you very accurate malloc_increase_bytes information, because it's a counter that, it's being, sorry, it's a gauge that is being incremented or decremented.
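The unicorn-era measurement described here can be sketched as follows; in a multi-threaded server these process-global counters mix all threads together, which is exactly the limitation being discussed:

```ruby
# Single-threaded approximation: diff GC.stat around one unit of work.
before = GC.stat
10_000.times { +'x' * 64 } # stand-in for handling a request
after = GC.stat

puts "objects allocated: #{after[:total_allocated_objects] - before[:total_allocated_objects]}"
# malloc_increase_bytes behaves like a gauge (it resets when the GC runs),
# so this diff is only a hint, as noted above.
puts "malloc increase:   #{after[:malloc_increase_bytes] - before[:malloc_increase_bytes]} bytes"
```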
B
But it would give you some hint. This, though, is actually a counter that is strictly incremented over time and gives you very accurate information about how much the given process, the given context of execution, did allocate during this period of time. It's very simple: it's basically adding probes in three different places to measure that, where there is object allocation; object allocation always happens on a thread that is being executed within the given context of execution.
B
So I tested that manually on some test examples, and it actually gave me the information that I was expecting to receive.
E
And then, because that had been in the backlog, I just added it to the current milestone, because it looks like we might be closing it out this milestone: we just updated Workhorse to a new version that includes our kind of PNG-chunk-correcting custom reader.
D
And, I mean, the last point here is: I consider the handover done. We had scheduled a sync meeting that was not necessary because of Matthias' excellent documentation, but I've decided to leave the issue open for a little while, just in case team members from the other team read the docs and have some questions; we should be able to close this out essentially at will.