From YouTube: 2020 11 20 Memory Team 2GB Sync
A: All right, last day. It's the finale, captain.
A: There's more we can look at. I think that's okay, I mean, I think we can, you know, then actually focus next week on trying to make some improvements.
A: I also feel, personally, that I always want to get something tangible out of it at the end of the day. If you just collect data for such a long time, it doesn't always feel super satisfying, right, because it often just raises more questions than the ones you already had. And yeah. Okay.
B: But yeah.
C: It can run on two gigs already, like... I think this is optimistic. I mean, I'm fairly certain, okay: even with these items, with Puma single mode, some other random improvements, Nakayoshi fork maybe. I mean, in Puma single we don't really need it; okay, actually, fork is still useful. GC parameters, maybe tuning some of these services. It can run on two gigs, yeah, I mean, and we can really figure out the architecture sizing for running on the two gigs.
A: I discussed this a bit with Craig in the one-on-one yesterday, and I don't know how you two feel about it, but my feeling is, because we looked at very different things, right, which is great, I love that we looked at so many different things. And some of them were more taking the angle of: is there something we can do less of, right?
A: You know, like starting fewer services or something. But then also, out of the things we do need to do: how much memory do they consume, and can we, you know, improve that? But it seems to me, it sounds like the lowest-hanging fruit.
A: We do it for the image scale as well, so we actually have all the infrastructure in place to make these decisions, and it should be fairly simple to do something like: if you're running on a node restricted to two gigs, you know, just don't start X, Y and Z by default. And maybe we can give the admin a heads-up, like: hey, we disabled these features, but if you do want to run them, you have to enable them explicitly, and it will cost you.
C: We need some parameters on the Omnibus side, basically, which is, let's say: we disable Grafana, or these other services that we may not need, but also tune these GC params, maybe for these other services. So I guess this could be the lowest-hanging fruit we can pick to get much better memory usage.
C: ...researching in these circumstances, and I'm kind of thinking that probably my biggest takeaway from that week is that we need to tune the services and the parameters of these services, because we are not doing that; we are running on the defaults, really. And we had this discussion about the worker killer, Puma Worker Killer, which says we should...
C: We should increase the memory limit, but I'm now thinking that, to address it, we should tune the GC params instead of just lifting the memory limits. So: play with different settings, check if we can actually tune the GC settings to be more aggressive but still have decent performance, and that would kind of keep this memory watermark in place. So.
C: But I think in general we need to have different settings than the defaults. The defaults just don't work in our case, which results in this constantly growing memory usage over time.
A: I also feel like that, and the GC is only one part of the equation, right, because we also have the allocator, which will have an impact on fragmentation and memory growth. That's something I have not looked at at all; this is super low-level stuff, and it would take me a while to understand it. But what we could look at as well is how we could tune jemalloc differently for differently sized deployments, because it has...
A: We've been looking at just the Puma stuff, and there are probably different things we can do about Sidekiq as well.
C: But I guess it's still connected with my general point: even looking at the heap usage and looking at the loaded features, the number of files, it shows we do a lot of bad stuff, basically, loading everything, always. 14,000 files just to serve a request; it's ridiculous.
C: I guess it's something we didn't investigate very deeply: how we can load selectively. Because I think, if we could be more selective, it could actually give us a lot of space, basically. Yeah. I mean, this GraphQL idea that we discussed yesterday, I don't think we proceeded with that any further, right? We...
A: No, but I think that's okay. I mean, I think it's okay if we just identify stuff this week, and then, starting next week, we can actually create issues for fixing that stuff, right? And GraphQL was one of these things that came out. Rouge was something I looked at yesterday, only super briefly, but it's... sorry, I think it's a Markdown or code highlighter or something like that. We use it for Markdown; we don't need to load it for the API or something, right?
A: ...prefer, because I think our code base, our boot path, is already a bit messy, in the sense that it's full of these kinds of ifdefs, you know: if this is this runtime, or if this or that feature is enabled, or whatever, we go down these separate code paths, and it's really difficult to wrap your head around.
C: I was thinking that maybe what we should do, as part of the app, is define different contexts. Let's say you have the shared context that is always loaded, but then you have a context which is, I don't know, the sidekiq folder; like, you have folders for web, app, sidekiq...
C: ...sidekiq, yes, basically. And you have the specific code for those features in this folder, and it's only loaded in this context, but not in other contexts. Because I think one aspect is identifying what to load, but the second aspect is having the pattern that makes everyone say: I want to add this class to Sidekiq, so I'm just going to add it into that folder, and it's only going to be loaded in the Sidekiq...
C: ...context, right? And have our test suite also adhere to this. So I guess the common part would really be all the models, maybe the majority of the DB layer, but there would be a lot of uncommon parts: Sidekiq workers, and probably even most of the services, would only be in the Sidekiq context already, right, and not really in the Puma context; but Puma would load the controllers that Sidekiq would not load.
A: Yeah, and I just want to add something, also for anyone who watches this recording.
A: I just recently looked at this and I got really confused. So, this is our community...
A: It's crazy, really; it's so hard to understand what's going on. Because this will not even go down different branches, like here, for instance. There are all these different dimensions, right? First we have these checks: are you a web server, so are you Puma, basically? But Puma can run in different modes, right: Puma could be ActionCable, or it could be a normal web node, or also, on .com...
A: It could even be a Puma worker that's only serving API traffic, right? So that's yet another kind of workload. Then we have, you know, the EE switches we have anyway; but then, yeah, Sidekiq might do something totally different. Then we have environments, right: are you in a production environment or not? Oh, and another one: if you are a web server, are you running in clustered mode or not?
A: Because then sometimes we delay things to run in the worker, like in Puma fork callbacks, right? Otherwise we run them immediately. And it was really hard to just look at this and understand what is going on in any of these cases. I thought this file was a good example, because it's not very long, right? It's not that long, but...
A: Yeah, I mean, this is an extreme example, and I'm not saying it's like that everywhere, but I thought it was a good example to show the complexity we're dealing with here. And it will be difficult to untangle all this stuff, because it's not just two different cases we need to distinguish; it's all these nested, multi-dimensional cases, where there are multiple cases per dimension that can be true in certain combinations.
A: So maybe we could even start by having some kind of matrix where we write down which cases are even possible. You know, for instance, Sidekiq and ActionCable cannot be true at the same time, right? So some of these cases are mutually exclusive, but not all of them are, and I can't even say I understand all of them at this point. So, for me...
A: ...those users, it will probably take a... The way I see it, how this usually unfolds is that there's a large chunk of users that upgrade straight away, but then there's this long tail of users that will maybe never upgrade, right, because those are the ones that are very slow to follow up on version upgrades. So yeah, maybe we need to define a certain threshold below which...
C: That would be, like, the moment when we remove Unicorn, a major release, right? Yeah. And because, so far, I don't think we've really had any troubles with Puma. It's running, it's running stable, so yeah.
C: So I guess it's kind of planning ahead.
A: Yeah, and then you also mentioned there's other stuff we're dragging around, right: the gitlab-exporter that runs its own Puma server. Maybe also next week we can look at these things and say what needs to happen so that we can actually toss them. Can we throw it out? Because I think, if we could remove...
C: That is also another low-hanging fruit; you'd save, yeah, three megabytes, yeah, totally, from Omnibus. And if we could remove that as well, it kind of keeps...
A: Yes, I would say just add to this list unused components, or, yeah, components that are not used much, right. It's not just Unicorn, right; it's also gitlab-exporter. And then, okay, we'll make that separate: we also maybe want to look at things that we don't want to remove, but maybe not run by default, right.
A: Yep, GraphQL, and Grape as well. Actually, I realize: should I share my screen, for people watching the recording?
A: Okay, should we quickly go through yesterday and today? You ready for that? Yeah? Why not. Do you want to kick it off, Camille?
C: So, just to answer one question: I looked at this dedup again and I got the metrics wrong. Basically, I calculated it wrong. So actually, what there is to free in this default configuration is about four or three megabytes. I created this very simple patch to the Ruby VM that I tested, and it actually freed 1.5.
C: It's pretty simple: it walks all heap pages, all slots, looks at the type of the reference, and uses GC compaction's update-references to rewrite the references to other objects, and then frees these slots. So actually, GC compaction makes this super simple to write, because the only thing I have to do is walk all pages to find strings to rewrite.
C: We create these strings using the frozen-string-literal table, which is an internal structure of the Ruby VM that holds all the deduplicated strings in a hash table, and then mark these slots as free. So, I mean, the slots are going to be freed right away, basically, but then you can run GC compact again to actually compact the memory pages.
C: So this part is pretty simple. It's not bulletproof, because I didn't yet figure out in which cases you should use the poisoning, because there is this concept of marking locations as read/write.
C: Then I also got curious why I see so many unfrozen strings, and it turned out, I looked at $LOADED_FEATURES, and when I saw that we have 14,000 loaded Ruby files, I was kind of screaming. Yeah, ridiculous. Because this table of the files, along with the file names, is two megabytes in memory.
C: Like, our app... oh, it's the Ruby VM: it adds each loaded file into $LOADED_FEATURES. It does that because a subsequent require doesn't load the file again... okay, but require only loads if it's not loaded yet, yeah. So you actually have a very good way to see exactly what was loaded, in what order, and things like that.
C: Sorry, its name is $LOADED_FEATURES, a global variable. But the one thing that I learned after analyzing these loaded features is that a number of the files don't use frozen string literals, and the frozen-string-literal comment is what makes deduplication of the strings part of the loading. So in my section where I measure these strings, I have the frozen section and the unfrozen section, and the unfrozen section pretty much comes from the lack of the frozen-string-literal comment; some of these are our gems.
C: Some of these are someone else's gems, but there are plenty of gems that just don't use it, and it seems that fixing that alone, the frozen string literals, should give us about five or six megabytes. I mean: adding frozen_string_literal to the header of the gem and fixing the very few cases where it breaks.
C: So, basically, we migrated the GitLab code base to frozen string literals, and what we actually were doing was adding the unary plus, which means duplicate the string, or rather unfreeze it: duplicate if it's frozen. And this is really the only case where the frozen string literal truly breaks things.
C: So if you have, say, a literal of the newline character, it doesn't really matter then, because it's always pointing to the same character. And I also got curious about symbol and string literals being garbage collected, and it appears that if you use a symbol or a string, each such object, if it's no longer used or referenced, is going to be garbage collected.
C: So these frozen literals can also be garbage collected if they are no longer referenced, so it's pretty efficient from a memory perspective, really, to do it that way. Of course, the frozen string literal doesn't cover all our cases, because there are still some strings that you have to duplicate, but it's really a very small portion.
C: Let me check... there is, there will be an opt parameter for that, with which you can force all files to be treated as frozen string literals. But okay, I actually tried that, and then sometimes, like http... 2.8.3, it has exactly that case. If I look at this line in this file, it's exactly this case.
C: Thanks for the presentation. So, yes, and we also talked about this: I looked at the flame graph of the Puma startup, and when we finish executing the initializers we are at around seven or eight thousand loaded files. But after that, because of the routes and controllers, we still load 7,000 more files before we start processing requests, right? It's kind of ridiculous, to be honest, and that kind of leads me to the conclusion that we need to figure out how to load selectively; there's no real way around it, because...
C: It's not easy. It's probably a major architectural change to GitLab, but I think we are slowly getting to a scale where that is inevitable, because of the monolith and how much we are adding. So, yes, it will not happen overnight. It's probably...
A: Oh, maybe we even want to have some kind of tooling... you know, I don't know if it would be a cop, but there's this Shopify project called Packwerk where they statically analyze your code, and you can create domain boundaries based on rule sets, and then it will actually refuse to even build your code if you violate these boundaries. So, you know, I don't know, maybe this is something we can look at, but it's really hard to retrofit this onto an existing big code base.
C: I don't know, but maybe, like for GraphQL: since you focus on one very specific component, you prepare the structure and you migrate just this component initially, right? Yes. And because there are going to be a lot of challenges with that, focusing on a single component could give us an objective view, so we can see how it plays with the wider architecture and how it could be executed.
A: And also, this is a bit simpler, because GraphQL is a technical concern, right? We actually know we only need it when serving API requests. But these domain-specific boundaries, that's super tricky, because that is something that is really hard to put in place after the fact. Things like: should you be allowed to, I don't know, make changes to, say, the issues domain when you're working on a controller that belongs to a different domain, or something like that.
A: Packwerk, I think, was designed for that, yeah, but that stuff is really hard to figure out. I also would say: let's start by focusing on the technical things that are a bit more obvious, you know, like the syntax highlighting and the GraphQL.
C: As well, you know, exactly. So there are these components that you know you don't really need in another context.
A: Okay, I can go next. So, basically, what I was trying to answer yesterday was: where does this big discrepancy come from, between the RSS used and what... a lot of this memory.
A: So I was hoping I could drill into this and find out by breaking down what RSS even means. So I actually spent some time, I posted these links, trying to read up on how all these different memory regions are defined, and there's this super useful tool called pmap that will give you a breakdown of the memory areas. I just threw that into this spreadsheet here, where it's a bit easier to look at. Basically, the pmap output is in this left table here, and it will basically summarize by...
A: I think that these are... I don't think these are pages. No. I think... so, "size" is basically the virtual memory, essentially the memory reservation that the process made of this particular kind. That doesn't mean it's actually using that much memory, right? You can see it's page-aligned and so on. So the RSS column will actually say what was actually used here.
A: So they are, I believe, also not shared. And if there is a name in the mapping column, it means the mapping is file-backed. So yeah, it's file-backed memory, so that's often shared libraries and such. But the kind of disappointing takeaway was that it did not answer that question, because... well, you can tell here: so this is now the sum...
A: ...still a chunk. How much is that? It comes down to maybe around 30 megabytes. So, 30 megabytes that I cannot pin down in this big chunk, this big black box here, and that's the difference. The 30 megabytes is the sum of all these file-backed memory maps, which, you know, yeah, there's stuff in here like Ruby itself, and a lot of them are really small, but some of them...
A: ...are for... I'm not really sure, for SSL or something, you know, making outbound HTTPS requests or something, I don't know. 2.3 megabytes seems chunky, though, for a shared object, right? Oh yeah, and there are really big things with gRPC here as well, which might be difficult to get rid of. So I don't...
A: ...have major takeaways from this, like, I'm...
C: ...kind of thinking that, because we compile all these libraries ourselves, maybe it's also about tuning the compiler options and disabling the aspects that we are certain we never use.
A: Yeah, yeah, I mean, we could look at that for some of them; it's just that most of them are not even worth looking at. I mean, look at this one: 48 kilobytes, you know. Okay, I mean, I don't know; if we look at every single one of them, the return on investment here seems low, right? Let's say we cut this in half, then we have a megabyte saved, right, for... well, what would we gain?
A: And by the way, here's the Ruby runtime, which is quite small; it's only three megabytes. I was kind of surprised by that; I thought it was bigger, but yeah. So that was that bit, and then after lunch we... how do I actually go back when I'm in full screen?
A: ...and Igor joined and paired a little bit on using jeprof, and yeah, Igor is amazing; he did an amazing job at figuring out how to create flame graphs. I mostly observed what he was doing, but in the background I looked a little bit as well, and basically...
A: This bit here, hang on.
Here
so
we
had
others,
yours
actually,
but
I
got
similar
results
like
these
reports
of
memory
used
by
type,
so
we
have
38
megabytes
of
hashes
in
memory,
and
I
was
just
curious.
You
know
like
let's
just
look
at
it.
You
know.
Maybe
something
stands
out,
so
I
tried
to
so
I
dumped
like.
I
flew
object
space.
I
just
dumped
all
these
hashes.
A: That was way too big if you actually dump the full hashes, so I ended up dumping just the keys: I took the key set, the size the hash consumed, and the keys, which I was hoping would kind of give away what we're dealing with here, what this might be. The summary is down here, if you want to look at it. So there's some... okay, sorry, ignore these entries here, because those come from the script itself...
A: ...because it also creates hashes. But anything outside of, say, this line: this means here we have almost 80,000 empty hashes in memory, so it has an empty key set, right. And the size in memory... sorry, this is the occurrence count, so we have 78,000 empty hashes. And then I also summed them up, because the empty hash should always be the same size, but... so this is 3.2 megabytes of empty hashes.
A: By the way, this particular one, I think, might get a lot better. My suspicion is... you can't see where these come from here, but my suspicion is these might all be default hashes, and it's the way the Ruby VM handles the defaults. You know, what is it called... the named parameters, right? If you have named defaults, right...
C: For a method? Isn't that the same problem as with a string? It seems like the same pattern: you have, in the code, an empty hash being used... It could be, yeah. And basically, this translates to this structure, which then gets copied as an object.
A: Okay, so what I was thinking about were methods like these, I believe, where you have something like that, where... well, actually, sorry, these don't have default values. I think you would have to have something like default values: this method can be called like this, right?
A: I can just say I call this method without arguments, because they have default values. And internally, the way the Ruby VM implemented this prior to 2.7 was quite inefficient, because it always created a default empty hash to represent these default arguments. That changed with 2.7 to be more efficient, but we're not making use of that, because we still often default these options to explicitly use empty hashes. There was even a warning: remember when Stan did the 2.7 upgrade, he had to silence like 15,000 warnings or so, because there was this warning saying you shouldn't be doing that anymore, because the Ruby VM takes care of it.
A: It might also be just something like, yeah, I don't know: you start out with an empty hash and no...
C: They are not duped at all. No, they are not copied. So I'm kind of thinking that maybe... can I share my screen for a second?
A: Yeah, yeah, let me just... just a few more. I'm saying, if you look at this, there are a couple of interesting things where, I think... I don't know if this is from the Ruby VM or if it's some kind of metaprogramming we're doing, because there's clearly stuff going on where we keep hash representations of something that sounds like a proc or a method. So maybe that's just the Ruby VM, but it's quite sizable as well.
A: I mean, look at this: two megabytes, 1.5 megabytes, and then something related to character code pages or something like that. I don't know what this is; maybe someone who's been around the code base more might be able to identify it. Again, 1.6 megabytes; it all adds up. You know, if you add up all these things here, you easily get to 10 megabytes of stuff that we might be able to, yeah, maybe not get rid of, but reduce in size.
A: Well, either you mutate it, and then it's still in use and it's not empty anymore, so it wouldn't show up in that report; or you make a copy, and the empty object is not being garbage collected for some reason.
A: But I'm saying this one is not going to show up in the report I pulled, because I did a major GC after that. So these 3.2 megabytes of empty hashes have inbound references, so they're not eligible for garbage collection; they must be sitting somewhere, being held onto, and what you return there is just a temporary value.
C: Okay, so actually, for this one it uses newhash, but for a case like that it uses duphash: it actually duplicates a hash. So for this one it actually has the optimization, so it's not that. But for a hash that is more complex, it just stores the base hash in memory and duplicates this hash.
C: No, yeah, I think what is happening is: if it sees something like that, it's creating a new hash; but if it sees a hash that has values, it needs to store this hash somewhere, and it's doing duphash. And the same seems to apply to the empty array: it's using newarray, but for an array with some values it's actually doing... sorry, a dup of this array that is stored somewhere as part of the compilation.
C: It's actually interesting to see, because this is efficient in this form, it appears, but it's not super efficient if you have something like that in two different methods: even though they are basically the same, they're going to be stored as two different hashes, yeah.
D: Yeah, I don't have much to show, but let me share the document. So, yesterday I paired with Alexey: we ran Puma in single mode and tried to run GPT to warm up the instance, and then took a heap snapshot again, and we tried to look at that snapshot using heapy, to maybe figure something out, because previously we were concentrating on taking snapshots during boot time. So I was hoping to see something interesting while the application is running and under heavy load.
D: But there is a lot of data there, and nothing stands out. There are some files, like the metrics instrumentation, that appear with a big number of allocations, and we already know about that; like, Camille tried it, and it doesn't save us much space. So I'm not really sure what to do with all of that data. And we also looked at the CI jobs that run derailed benchmarks on our production.
D: Unfortunately, we are not running the memory profiler there, so maybe this is something we can add in the future, because I guess a memory-profiler report can give us more information; at the moment we are just getting the summarized memory consumption of the requested files, and that's it. So maybe also running this memory-profiler benchmark would give us better insight into what's happening in production, because it contains retained and allocated memory by file, by class, and everything else.
D: I couldn't make it work. And the third one is what Matthias said: Igor joined, and we were finally able to run the actual jeprof against the jemalloc dump, because yesterday we were able to create a dump, but it didn't resolve the addresses; it was just showing the raw addresses in memory, and it didn't contain any useful info.
A: Actually, it's funny: Igor just posted a smaller version of that SVG, which uses a cutoff for the minimum sample size, I think, so it's like 30 times smaller. That sounds promising; it should be a bit easier to...
A: I mean, if I read it correctly, it looked like the largest chunk was allocated from requires, right, kernel load, which makes sense, but it doesn't tell us much, you know; it's just basically...
D: Yes, we found there are a lot of things, and we found that we lose a lot of stuff, but, as he said, nothing big stands out, which is expected: we were not hunting some big memory leak; we need to optimize smaller things. So I agree with Camille that maybe selectively loading stuff is the next big major thing that we should do.
C: ...and you'd just kind of look at these heap analyses, maybe like this by-size-of-objects one that I did, maybe also RSS, because it could give us some hint of exactly how much saving we may be looking at.