From YouTube: GitLab 12.7 Kickoff - Memory
Description
Memory Kickoff for the 12.7 release
I'm Josh Lambert, a group product manager here at GitLab, and today I'd like to talk through what the Memory group is looking to accomplish in our 12.7 milestone. We're focused on three key epics, which make up the vast majority of the 22 issues that we have planned for the 12.7 release. You can see the full details on our issue boards for the Memory group's 12.7 milestone, but we have also linked the three epics to our kickoff page, and you can view them easily there as well.
The first and most important project that we have is to continue to make improvements to our project import/export functionality. This is really important because we've seen customers have some problems here, in particular with importing large projects. Those imports can take a significantly long time, at which point the user might assume they have failed and perhaps try again or move on to a different project. Alternatively, in the worst case, they might never succeed and continue to be retried over and over again, consuming additional resources on the GitLab server.
We also want import/export to have constant and predictable memory usage. On top of that, we're trying to drive a 2x speed increase in the import and export processes, to reduce the wait that users have to sit through before they can see the results of their import. You can see the details below, and, as you can see, we're actually getting close to wrapping up this epic: most of the issues have either been solved or are assigned to our 12.7 release, which is really exciting.
There are two main categories of these features. Some are oriented towards performance, and you can see that here with things like improving the caching or using batched inserts for the GitLab import into the database. We're also working on a couple of technical issues as well, for things like making sure we have proper unit test coverage for these functions. So, all in all, this is really important for us.
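The batched-insert idea mentioned above can be sketched in a few lines of Ruby. This is a toy illustration, not GitLab's actual importer code; the table and column names are made up. The point is simply that grouping rows into multi-row INSERT statements cuts database round-trips roughly by the batch size:

```ruby
# Toy sketch: build one multi-row INSERT per batch of rows, instead
# of one single-row INSERT per row. (Illustrative only; real code
# would use bound parameters, not string interpolation.)
def batched_insert_statements(table, columns, rows, batch_size: 100)
  rows.each_slice(batch_size).map do |batch|
    values = batch.map { |row| "(#{row.join(', ')})" }.join(", ")
    "INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{values};"
  end
end

rows  = (1..250).map { |i| [i, i * 10] }
stmts = batched_insert_statements("issues", %w[id weight], rows)
puts stmts.length  # 3 statements instead of 250 single-row inserts
```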
The second epic that we're working on in this release is continuing to drive towards enabling Puma as the web server in GitLab. For those of you who might not be aware, GitLab currently uses Unicorn by default. Unicorn has a multi-process model for handling multiple requests at a given time on a single server.
This works well, but it can end up consuming significant amounts of memory, because we have to load all the libraries and all the dependencies for every single process. Puma, by contrast, has a multi-threaded model, and so we can actually see some significant memory reductions for the same or similar performance as Unicorn. Right now, with our current configuration and recommended defaults, we're seeing around a 30% reduction in memory usage for similar performance.
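To make the process-versus-thread trade-off concrete, here is a rough sketch of a Puma configuration file. The file name and the numbers are illustrative only, not GitLab's shipped defaults (those are managed through GitLab's own packaging):

```ruby
# config/puma.rb -- illustrative values, not GitLab's actual defaults.

workers 2        # a small number of forked worker processes
threads 1, 4     # each worker serves up to 4 requests on threads

# Load the application once in the master process before forking,
# so workers share code and library memory via copy-on-write
# instead of each process loading everything separately.
preload_app!
```

Because each thread shares its worker's loaded code, a few workers with several threads each can serve the same concurrency as many single-threaded Unicorn processes while holding far less memory.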
Now, there are limits to how far we can go with threading because of the Ruby GVL, which is similar to Python's GIL.
This is a limitation where the Ruby interpreter will only execute code for one thread at a time. You can realize benefits from multi-threading with Ruby when a thread is blocked, and a thread typically only spends a short period of time actually executing code. But the higher the thread count you try to achieve, and if you have some procedures or functions that end up generating a fair amount of compute, these can actually cause a decrease in performance and responsiveness: the threads essentially won't be waiting on IO, but will be trying to finish their execution and getting interrupted so that other threads can continue. That's an outcome we're trying to avoid, and that's why we're also doing some tuning here with Puma, to make sure we have the right values for our customers.
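The GVL behavior described above can be seen with a small Ruby sketch (a toy illustration, not GitLab code). Here `sleep` stands in for a blocking IO wait: Ruby releases the GVL while a thread is blocked, so blocked threads overlap almost perfectly, which is exactly the case where threading pays off:

```ruby
require "benchmark"

# Two simulated IO waits run back to back on a single thread.
sequential = Benchmark.realtime do
  2.times { sleep 0.2 }
end

# The same two waits on separate threads: the GVL is released while
# a thread is blocked, so the waits overlap and the total comes out
# close to 0.2s rather than 0.4s.
threaded = Benchmark.realtime do
  2.times.map { Thread.new { sleep 0.2 } }.each(&:join)
end

puts format("sequential: %.2fs  threaded: %.2fs", sequential, threaded)
```

CPU-bound work would not overlap this way, since only one thread can execute Ruby code at a time, which is why raising the thread count on compute-heavy requests can hurt responsiveness.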
Along those lines, we're also working on how the Rugged patches and Puma work together as well.
We are working to make sure that Puma and Rugged can work together, and you can see that work being done in the linked issue. With that, you can see we're actually making quite strong progress towards having Puma enabled, and so we'll start to recommend that users try Puma in its current experimental state, and we'll be driving towards enabling Puma by default in our 13.0 release later in 2020.
The third epic that I want to talk about here is enabling Puma on GitLab.com. As I mentioned before, and as you can see, we're making steady progress on Puma in general, and we're very close to having Puma enabled on GitLab.com. Earlier, we were working together with the infrastructure and SRE teams on GitLab.com, and we've been running Puma on a canary node in production for some time. We saw some differences in behavior.
We were able to trace those differences to changes that were unrelated to Puma, and so we want to go ahead and continue to deploy Puma across a broader range of the fleet. But given the timelines, and given the folks that we have out on holiday, we'll probably wait until after the holidays for that; it's still part of this release to continue to further increase the rollout of Puma. And so those are the main topics that we're working on here in 12.7 for the Memory team.