From YouTube: GitLab 13.9 Kickoff - Enablement:Memory
Description
Kickoff for the 13.9 release for the memory team.
Clarification:
Fabian did mix up Puma's single mode a bit.
Puma in single-mode will still be multithreaded, so users will still enjoy the ability for Puma to handle requests concurrently to some extent. (threads are fairly lightweight constructs.)
"Single" here refers to it running a single process, as opposed to clustered mode where there is a primary/parent process that after startup forks off several worker processes. This is what can consume a lof of excess memory.
Planning issue: https://gitlab.com/gitlab-org/memory-team/team-tasks/-/issues/85
So what we're doing right now is working on understanding how we can reduce GitLab's memory consumption overall, globally, but also specifically how we can reduce the footprint for installations that don't have as much memory available, ideally with the goal of reducing it to under two gigabytes. In GitLab 13.8 we did quite a bit of research work, identifying building blocks for memory reduction; in 13.9 we're going to actually do some more implementation work, hopefully reducing GitLab's memory consumption.
This is particularly impactful for installations that don't have as much memory available. Those also tend to be smaller, and therefore may not actually benefit from Puma running in our standard sort of multi-threaded mode, so we may be able to just use Puma in single mode, which we hope has an effect of around 250 megabytes when it's idle.
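For illustration, here is a minimal, hypothetical sketch of what a `puma.rb` configured for single mode might look like (values are made up, not GitLab's actual settings). Note, per the clarification above, that the single process is still multi-threaded:

```ruby
# Hypothetical puma.rb sketch. With no `workers` call, Puma runs in single
# mode: one process, but still multiple threads, so requests can still be
# handled concurrently to some extent.
threads 1, 4   # min/max threads for the single process
port 8080

# Clustered mode would instead fork worker processes off a parent after
# startup, which is what can consume a lot of excess memory:
# workers 2
```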
The other thing is that we're going to run GC, so garbage collection compacting, before forking into the Puma workers, via this specific fork here. This is also a specific setting that we can apply in order to reduce memory consumption, and the work is already under way, so we hope that we can ship this in 13.9.
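As a rough sketch of the idea (hypothetical config, not GitLab's actual code): compacting the heap in the parent process once, just before the workers are forked, packs live objects together so that more copy-on-write memory pages stay shared between the parent and its workers.

```ruby
# Hypothetical puma.rb excerpt for clustered mode: compact the heap in the
# parent before any worker is forked, to maximize shared copy-on-write pages.
workers 4

before_fork do
  # GC.compact is available on Ruby >= 2.7
  GC.compact if GC.respond_to?(:compact)
end
```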
We're also going to make some initial work happen with regards to loading optimizations for dependencies. We are interested in providing a mechanism to only really load GraphQL when it is needed. This is quite interesting because GraphQL may not be needed everywhere, so we're trying to figure out how we can actually only load what we really require.
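One common way to defer loading a heavy dependency in Ruby is `autoload`, which registers a constant but only requires its file on first reference. This is just a generic sketch of the technique, not GitLab's actual mechanism; the module and file names are invented for the demo:

```ruby
require 'tmpdir'

# Create a stand-in "heavy dependency" on disk so the demo is self-contained.
dir = Dir.mktmpdir
File.write(File.join(dir, 'heavy_dep.rb'), "module HeavyDep; VERSION = '1.0'; end\n")
$LOAD_PATH.unshift(dir)

# Register the constant without loading the file yet.
autoload :HeavyDep, 'heavy_dep'

before_loaded = $LOADED_FEATURES.any? { |f| f.end_with?('/heavy_dep.rb') }
HeavyDep::VERSION  # first reference to the constant triggers the require
after_loaded  = $LOADED_FEATURES.any? { |f| f.end_with?('/heavy_dep.rb') }

puts "loaded before first use: #{before_loaded}, after: #{after_loaded}"
```

The point is that startup pays no memory or load-time cost for the dependency; only code paths that actually touch it trigger the load.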
That may then in turn reduce memory consumption. And then there is a rather large piece of work here to really split the application up a little bit more, so that we only load those areas of the application that are needed. That's a little bit more researchy at the moment, because it's a broad topic.
We're also going to take some further steps in breaking down and reviewing items that are on our list for future milestones, so there is a bit of work that I and other team members have to do in order to understand what's next. And that's it from the memory team; we're looking forward to 13.9 and delivering on some of the findings that we had earlier. So thanks for listening, and see you next time.