From YouTube: GitLab 13.6 Kickoff - Enablement:Memory
Description
Kickoff for the 13.6 release for the memory team.
Planning issue: https://gitlab.com/gitlab-org/memory-team/team-tasks/-/issues/66
The major items we're working on here are continuing to wrap up some of our existing work that we've been carrying over from the previous release.
The first is that we want to be able to bring dynamic image sizing to our self-managed customers. It is running right now on gitlab.com to nice benefit, and we want to, of course, bring those benefits to our users.
We're also working through and optimizing additional cached SQL calls. And then the big item we're working on here, which is new, is that we are pushing forward with memory improvements, shifting a little away from dynamic image sizing towards memory-constrained environments. We'll talk more about that in detail as we get to it.
Let's jump into it. As mentioned, we do have dynamic image sizing running in production, and this is a pretty nice benefit to end users.
Previously, avatars could be up to 200k in size, even though in this case the displayed image is quite small. This led to really large page sizes and suboptimal loading speeds. We now deliver the right-size image to the browser for the particular use case, and we want to bring this to self-managed.
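The "right-size image" idea can be sketched as picking the smallest pre-defined width that still covers what the page needs. A minimal Ruby sketch of that selection, where the width allow-list and URL format are illustrative, not GitLab's actual values:

```ruby
# Hypothetical allow-list of widths the resizer will produce.
ALLOWED_WIDTHS = [16, 24, 32, 48, 64, 96].freeze

# Pick the smallest allowed width that still covers the requested
# display size, falling back to the largest available.
def best_avatar_width(requested)
  ALLOWED_WIDTHS.find { |w| w >= requested } || ALLOWED_WIDTHS.last
end

# Build the avatar URL the browser would request (illustrative format).
def avatar_url(base, requested)
  "#{base}?width=#{best_avatar_width(requested)}"
end

puts avatar_url("/uploads/avatar.png", 40)  # => /uploads/avatar.png?width=40 rounded up to 48
```

Restricting resizing to a fixed set of widths is what makes caching the results practical: there are only a handful of variants per source image.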
The challenge we have is that we have the benefit of a CDN in front of gitlab.com (actually two of them), and so we are working through what the performance impact would be for self-managed, because most self-managed instances do not have a CDN in front of them today. Without any other caching, effectively every single image load would be resized on demand.
So we're working to understand that, and based on that information we can decide on next steps for self-managed: whether that's building a caching layer, or better documenting how to set up a CDN with GitLab and then simply leveraging the CDN for caching. The latter could be significantly less work on our side, since that caching is effectively already built into the various CDN services.
Next up, we are also continuing to work through the cached SQL calls. We have a few of them scheduled for 13.6. These are, in general, just suboptimal SQL queries that consume additional memory and hurt performance. For example, we see some N+1s in here, and some of them are also cached N+1s, which is doubly bad because now we're also consuming additional memory as we cache those queries. So we're working through those, and we'll improve the performance and memory profile of these various controllers.
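The N+1 shape being described looks like the sketch below: one query per record instead of one batched query, made doubly expensive when each per-record result is also written to a cache. This is a simulation with a query counter rather than a real database:

```ruby
# Simulated data store that counts how many queries it receives.
class Store
  attr_reader :queries

  def initialize
    @comments = { 1 => ["a"], 2 => ["b"], 3 => ["c"] }
    @queries = 0
  end

  def comments_for(id)          # one query per call: the N+1 shape
    @queries += 1
    @comments[id]
  end

  def comments_for_all(ids)     # one batched query
    @queries += 1
    @comments.slice(*ids)
  end
end

issue_ids = [1, 2, 3]

# N+1: one query per issue (and, if each result were cached,
# one cache entry per issue too -- memory cost on top of query cost).
n_plus_one = Store.new
issue_ids.each { |id| n_plus_one.comments_for(id) }
puts n_plus_one.queries  # => 3

# Batched: a single query covers all the issues.
batched = Store.new
batched.comments_for_all(issue_ids)
puts batched.queries     # => 1
```

In Rails terms the fix is usually eager loading (`includes`/`preload`) so the association is fetched in one query instead of N.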
Next up again is our forward-looking next large project: exploring how to best get GitLab running with a reduced memory profile, or, to put it simply, reducing our memory consumption for the standard GitLab installation. The two big things we're working on here are as follows. Number one, we are working on upgrading our Ruby version to 2.7. There are a couple of reasons for doing this.
One of them is that 2.7 introduces a new compacting garbage collector, and this can save significant memory through how it manages memory, especially memory that gets cleaned up and re-utilized during the garbage collection process.
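Ruby 2.7's compacting collector is exposed as `GC.compact`, which moves live objects together so the heap ends up with fewer fragmented pages; it can be called manually, typically before forking workers. A minimal sketch, guarded because the method only exists on 2.7 and later:

```ruby
# Allocate and discard many short-lived objects to churn the heap.
100_000.times { Object.new }
GC.start

if GC.respond_to?(:compact)
  # Moves live objects into as few heap pages as possible and
  # returns statistics about what was considered and moved.
  stats = GC.compact
  puts stats.class  # => Hash
else
  puts "GC.compact is not available before Ruby 2.7"
end
```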
Previously, garbage collection would lead to a lot of dirty pages, and when that happens, if you have multiple processes running your web server, which we do in GitLab, over time that causes a reduction in the amount of memory those processes can share, and so they consume more and more memory. With the new compacting garbage collector, that should be much more efficient and lead to less fragmentation and fewer dirty pages, so we should be able to get a significant lift out of that.
In conjunction with that, we're also looking to pre-load more libraries before we start forking processes. If we've pre-loaded before we fork, the forked processes can then share that memory space, and combined with the compacting garbage collector, this should allow us to significantly reduce the amount of memory we consume for each of the Rails processes we launch. So we're looking forward to that; that is the first step.
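The preload-then-fork idea relies on copy-on-write: anything loaded in the parent before `fork` is shared with the children until a process writes to it. A small sketch of the mechanism (fork is POSIX-only; the table here stands in for preloaded libraries and app code):

```ruby
# Preload: build a large structure in the parent, before forking.
SHARED_TABLE = Array.new(100_000) { |i| i * 2 }.freeze

reader, writer = IO.pipe

pid = fork do
  reader.close
  # The child can read the preloaded data without copying it:
  # the memory pages stay shared until something writes to them.
  writer.puts SHARED_TABLE.length
  writer.close
  exit!(0)
end

writer.close
puts reader.gets.to_i  # => 100000
Process.wait(pid)
```

This is why compaction matters for forked servers: a compacted, stable heap in the parent means fewer pages get dirtied (and thus un-shared) in the children.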
In addition, there are a number of performance improvements coming in 2.7, and also leading up to 3.0, as the overall Ruby goal is to make it three times faster than Ruby 2.0. Upgrading continues to move us towards that goal as the Ruby core team looks towards releasing 3.0 towards the end of this year or early next.

So that's what we're working on here in the memory team. Thank you so much for listening.